WorldWideScience

Sample records for regression interval mapping

  1. Mapping geogenic radon potential by regression kriging

    Energy Technology Data Exchange (ETDEWEB)

    Pásztor, László [Institute for Soil Sciences and Agricultural Chemistry, Centre for Agricultural Research, Hungarian Academy of Sciences, Department of Environmental Informatics, Herman Ottó út 15, 1022 Budapest (Hungary); Szabó, Katalin Zsuzsanna, E-mail: sz_k_zs@yahoo.de [Department of Chemistry, Institute of Environmental Science, Szent István University, Páter Károly u. 1, Gödöllő 2100 (Hungary); Szatmári, Gábor; Laborczi, Annamária [Institute for Soil Sciences and Agricultural Chemistry, Centre for Agricultural Research, Hungarian Academy of Sciences, Department of Environmental Informatics, Herman Ottó út 15, 1022 Budapest (Hungary); Horváth, Ákos [Department of Atomic Physics, Eötvös University, Pázmány Péter sétány 1/A, 1117 Budapest (Hungary)]

    2016-02-15

    Radon (²²²Rn) gas is produced in the radioactive decay chain of uranium (²³⁸U), an element that is naturally present in soils. Radon is transported mainly by diffusion and convection through the soil, depending chiefly on the soil's physical and meteorological parameters, and can enter and accumulate in buildings. Health risks originating from indoor radon concentration that can be attributed to natural factors are characterized by the geogenic radon potential (GRP). Identification of areas with high health risk requires spatial modeling, that is, mapping of radon risk. In addition to geology and meteorology, physical soil properties play a significant role in determining GRP. In order to compile a reliable GRP map for a model area in Central Hungary, spatial auxiliary information representing the environmental factors that form GRP was taken into account to support the spatial inference of the locally measured GRP values. Since the number of measured sites was limited, efficient spatial prediction methodologies were sought to construct a reliable map for a larger area. Regression kriging (RK) was applied for the interpolation, using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. First, the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which is interpolated by kriging. The final map is the sum of the two component predictions. Overall accuracy of the map was tested by leave-one-out cross-validation. Furthermore, the spatial reliability of the resultant map is estimated by calculating the 90% prediction interval of the local prediction values. The applicability of the method, as well as that of the map, is discussed briefly. - Highlights: • A new method
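The two-component structure of regression kriging described above (a deterministic trend from a regression model, a stochastic component from kriging the residuals, and a final map that sums the two) can be sketched in a few lines. The 1-D example below is purely illustrative: the sample sites, the auxiliary covariate, and the exponential covariance with its sill and range parameters are all invented, and simple kriging of the residuals stands in for the paper's full RK workflow.

```python
import math

# sample sites: location, auxiliary covariate, measured value (hypothetical)
locs = [0.0, 1.0, 2.0, 4.0, 5.0]
aux  = [0.2, 0.4, 0.9, 1.1, 1.6]
obs  = [1.1, 1.9, 3.2, 3.9, 5.2]

def ols_fit(xs, ys):
    # simple linear regression for the deterministic trend
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
         / sum((x - mx) ** 2 for x in xs)
    return my - b1 * mx, b1

def solve(A, b):
    # Gaussian elimination with partial pivoting for the kriging system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def cov(h, sill=1.0, rng=3.0):
    # assumed exponential covariance model for the residuals
    return sill * math.exp(-abs(h) / rng)

b0, b1 = ols_fit(aux, obs)
resid = [y - (b0 + b1 * a) for a, y in zip(aux, obs)]

def predict(loc0, aux0):
    C = [[cov(li - lj) for lj in locs] for li in locs]
    c0 = [cov(li - loc0) for li in locs]
    w = solve(C, c0)                                  # simple-kriging weights
    trend = b0 + b1 * aux0                            # deterministic component
    stoch = sum(wi * ri for wi, ri in zip(w, resid))  # kriged residual
    return trend + stoch

pred = predict(3.0, 1.0)   # prediction at an unsampled site
```

Because simple kriging is an exact interpolator, the prediction at any sampled site reproduces the measured value there; away from the samples it blends the regression trend with the spatially correlated residual.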

  2. Mapping geogenic radon potential by regression kriging

    International Nuclear Information System (INIS)

    Pásztor, László; Szabó, Katalin Zsuzsanna; Szatmári, Gábor; Laborczi, Annamária; Horváth, Ákos

    2016-01-01

    Radon (²²²Rn) gas is produced in the radioactive decay chain of uranium (²³⁸U), an element that is naturally present in soils. Radon is transported mainly by diffusion and convection through the soil, depending chiefly on the soil's physical and meteorological parameters, and can enter and accumulate in buildings. Health risks originating from indoor radon concentration that can be attributed to natural factors are characterized by the geogenic radon potential (GRP). Identification of areas with high health risk requires spatial modeling, that is, mapping of radon risk. In addition to geology and meteorology, physical soil properties play a significant role in determining GRP. In order to compile a reliable GRP map for a model area in Central Hungary, spatial auxiliary information representing the environmental factors that form GRP was taken into account to support the spatial inference of the locally measured GRP values. Since the number of measured sites was limited, efficient spatial prediction methodologies were sought to construct a reliable map for a larger area. Regression kriging (RK) was applied for the interpolation, using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. First, the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which is interpolated by kriging. The final map is the sum of the two component predictions. Overall accuracy of the map was tested by leave-one-out cross-validation. Furthermore, the spatial reliability of the resultant map is estimated by calculating the 90% prediction interval of the local prediction values. The applicability of the method, as well as that of the map, is discussed briefly. - Highlights: • A new method, regression

  3. Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.

    Science.gov (United States)

    Hu, Yi-Chung

    2014-01-01

    On the basis of fuzzy regression, computational intelligence models such as neural networks can be applied to nonlinear interval regression analysis to deal with uncertain and imprecise data. When training data are not contaminated by outliers, such models perform well, including almost all of the given training data within the data interval. Since training data are often corrupted by outliers, however, robust learning algorithms that resist outliers in interval regression analysis have been an active area of research. Several computational-intelligence approaches are effective at resisting outliers, but their required parameters depend on whether or not the collected data contain outliers. Since it is difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron to construct a robust nonlinear interval regression model by means of a genetic algorithm. Outliers beyond or beneath the data interval have only a slight effect on the determination of the interval. Simulation results demonstrate that the proposed method performs well on contaminated datasets.

  4. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    Radial basis function networks and back-propagation neural networks have been integrated with multiple linear regression to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including both confidence and prediction intervals. The power of the method is demonstrated by applying it to a set of wind tunnel test data to construct a response surface and estimate its precision intervals.
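The confidence and prediction intervals mentioned above have closed forms in the plain linear regression case. The sketch below, on made-up data, computes both for simple linear regression; it uses a normal quantile as a large-sample approximation to the t critical value, so it is an illustrative sketch rather than the paper's integrated neural-network method.

```python
from statistics import NormalDist

# hypothetical calibration data
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 2.9, 3.6, 4.4, 5.1, 5.9, 6.8, 7.4]
n = len(xs)

mx = sum(xs) / n
my = sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
b0 = my - b1 * mx

resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
z = NormalDist().inv_cdf(0.975)            # normal approx. to the t quantile

def intervals(x0):
    """95% confidence interval (for the mean response) and
    prediction interval (for a new observation) at x0."""
    fit = b0 + b1 * x0
    se_mean = (s2 * (1 / n + (x0 - mx) ** 2 / sxx)) ** 0.5
    se_pred = (s2 * (1 + 1 / n + (x0 - mx) ** 2 / sxx)) ** 0.5
    return (fit,
            (fit - z * se_mean, fit + z * se_mean),
            (fit - z * se_pred, fit + z * se_pred))

fit, ci, pi = intervals(4.5)
```

The prediction interval is always wider than the confidence interval at the same point, since it adds the variance of a new observation to the variance of the fitted mean.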

  5. [Studies of marker screening efficiency and corresponding influencing factors in QTL composite interval mapping].

    Science.gov (United States)

    Gao, Yong-Ming; Wan, Ping

    2002-06-01

    Screening markers efficiently is the foundation of mapping QTLs by composite interval mapping. The main-effect and interaction markers identified can, besides providing background control for genetic variation, also be used to construct intervals for a two-way search for QTLs with epistasis, which saves considerable computation time. The efficiency of marker screening therefore affects the power and precision of QTL mapping. A doubled haploid population with 200 individuals and 5 chromosomes was simulated, with 50 markers evenly spaced at 10 cM intervals. Of a total of 6 QTLs, one was placed on chromosome I, two linked on chromosome II, and the other three linked on chromosome IV. The QTL settings included additive effects and additive x additive epistatic effects; the corresponding QTL-by-environment interaction effects were set when data were collected under multiple environments. The heritability was assumed to be 0.5 unless otherwise stated. The power of marker screening by stepwise regression, forward regression, and three methods of random effect prediction, namely best linear unbiased prediction (BLUP), linear unbiased prediction (LUP) and adjusted unbiased prediction (AUP), was studied and compared through 100 Monte Carlo simulations. The results indicated that the power of marker screening by stepwise regression at the 0.1, 0.05 and 0.01 significance levels ranged from 2% to 68%, and by forward regression from 2% to 72%. The larger the QTL effects, the higher the marker screening power. The power of marker screening by the three random effect prediction methods was very low, with a maximum of only 13%. This suggests that regression methods are much better than random effect prediction approaches for identifying efficient markers flanking QTLs, and that forward selection is simpler and more efficient. The simulation results on heritability showed that increasing both the general heritability and the interaction heritability of genotype x

  6. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    Science.gov (United States)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
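A residual-bootstrap version of such a prediction interval can be sketched with a k-nearest-neighbour smoother standing in for the nonparametric regression model. Everything below (the data, the smoother, the neighbourhood size k, and the number of resamples B) is a hypothetical illustration, not the authors' procedure.

```python
import random

random.seed(0)

def knn_predict(x0, xs, ys, k=3):
    # k-nearest-neighbour smoother: average of the k closest responses
    idx = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in idx) / k

# hypothetical noisy data
xs = [i * 0.5 for i in range(20)]
ys = [x ** 1.5 + random.gauss(0, 0.3) for x in xs]
resid = [y - knn_predict(x, xs, ys) for x, y in zip(xs, ys)]

def bootstrap_pi(x0, B=500, alpha=0.10):
    point = knn_predict(x0, xs, ys)
    sims = []
    for _ in range(B):
        # rebuild a pseudo-sample from fitted values + resampled residuals,
        # refit, then add a fresh residual draw for the new observation
        ry = [knn_predict(x, xs, ys) + random.choice(resid) for x in xs]
        sims.append(knn_predict(x0, xs, ry) + random.choice(resid))
    sims.sort()
    return point, sims[int(B * alpha / 2)], sims[int(B * (1 - alpha / 2))]

point, lo, hi = bootstrap_pi(5.0)
```

An observed output falling outside (lo, hi) at its input would then be flagged as a candidate anomaly, in the spirit of the detection scheme described in the abstract.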

  7. Zero entropy continuous interval maps and MMLS-MMA property

    Science.gov (United States)

    Jiang, Yunping

    2018-06-01

    We prove that the flow generated by any continuous interval map with zero topological entropy is minimally mean-attractable and minimally mean-L-stable. One of the consequences is that any oscillating sequence is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy. In particular, the Möbius function is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy (Sarnak’s conjecture for continuous interval maps). Another consequence is a non-trivial example of a flow having discrete spectrum. We also define a log-uniform oscillating sequence and show a result in ergodic theory for comparison. This material is based upon work supported by the National Science Foundation. It is also partially supported by a collaboration grant from the Simons Foundation (grant number 523341) and PSC-CUNY awards and a grant from NSFC (grant number 11571122).

  8. Voltage interval mappings for an elliptic bursting model

    OpenAIRE

    Wojcik, Jeremy; Shilnikov, Andrey

    2013-01-01

    We employed Poincaré return mappings for a parameter interval to an exemplary elliptic bursting model, the FitzHugh-Nagumo-Rinzel model. Using the interval mappings, we were able to examine in detail the bifurcations that underlie the complex activity transitions between: tonic spiking and bursting, bursting and mixed-mode oscillations, and finally, mixed-mode oscillations and quiescence in the FitzHugh-Nagumo-Rinzel model. We illustrate the wealth of information, qualitative and quantitati...

  9. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    Science.gov (United States)

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of the crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
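For the interaction model y = b0 + b1·x + b2·z + b3·x·z, the two simple regression lines (z = 0 and z = 1) cross where b2 + b3·x = 0, i.e. at x* = -b2/b3. The percentile bootstrap, one of the six methods compared above, can be sketched as follows on simulated data with a known crossover at x* = 2; the data and sample sizes are invented for illustration.

```python
import random

random.seed(1)

def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit(rows, ys):
    # OLS via normal equations for y = b0 + b1*x + b2*z + b3*x*z
    X = [[1.0, x, z, x * z] for x, z in rows]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(4)] for i in range(4)]
    Xty = [sum(row[i] * yv for row, yv in zip(X, ys)) for i in range(4)]
    return solve(XtX, Xty)

# simulated data with a known crossover at x* = -b2/b3 = 2.0
true_b = [1.0, 0.5, -1.0, 0.5]
rows = [(i * 0.25, i % 2) for i in range(40)]
ys = [true_b[0] + true_b[1] * x + true_b[2] * z + true_b[3] * x * z
      + random.gauss(0, 0.1) for x, z in rows]

b = fit(rows, ys)
cross = -b[2] / b[3]          # estimated crossover point

# percentile bootstrap: resample cases, refit, recompute the crossover
boots = []
for _ in range(400):
    idx = [random.randrange(len(rows)) for _ in range(len(rows))]
    bb = fit([rows[i] for i in idx], [ys[i] for i in idx])
    boots.append(-bb[2] / bb[3])
boots.sort()
ci = (boots[int(400 * 0.025)], boots[int(400 * 0.975)])
```

If the whole interval `ci` lies inside the observed range of x, the interaction is disordinal over that range; an interval entirely outside it is consistent with an ordinal interaction.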

  10. Coexistence of uniquely ergodic subsystems of interval mapping

    International Nuclear Information System (INIS)

    Ye Xiangdong.

    1991-10-01

    The purpose of this paper is to show that uniquely ergodic subsystems of interval mappings coexist in the same way as minimal sets do. To do this we give some notation in Section 2. In Section 3 we define the D-function of a uniquely ergodic system and show its basic properties. We prove the coexistence of uniquely ergodic subsystems of interval mappings in Section 4. Lastly, we give examples of uniquely ergodic systems with given D-functions in Section 5. 27 refs

  11. The Initial Regression Statistical Characteristics of Intervals Between Zeros of Random Processes

    Directory of Open Access Journals (Sweden)

    V. K. Hohlov

    2014-01-01

    Full Text Available The article substantiates the initial regression statistical characteristics of the intervals between zeros of realizations of random processes and studies the properties that allow these characteristics to be used in autonomous information systems (AIS) of near location (NL). Coefficients of initial regression (CIR) that minimize the residual sum of squares of multiple initial regression are justified on the basis of vector representations associated with the notion of a random vector of the analyzed signal parameters. It is shown that even in the absence of covariance, partial CIR make it possible to predict one random variable from another with respect to the deterministic components. The paper studies the dependence of the CIR of the interval sizes between zeros of a narrowband wide-sense stationary random process on its energy spectrum. Particular CIR for random processes with Gaussian and rectangular energy spectra are obtained. It is shown that the considered CIR do not depend on the average frequency of the spectra, are determined by the relative bandwidth of the energy spectra, and depend only weakly on the type of spectrum. These properties enable the CIR to be used as an informative parameter when implementing temporal regression methods of signal processing that are invariant to the average rate and variance of the input realizations. We consider estimates of the average energy-spectrum frequency of a stationary random process obtained by calculating the length of the time interval corresponding to a specified number of intervals between zeros. It is shown that the relative variance of this estimate, as the relative bandwidth increases, ceases to depend on the particular process realization once more than ten intervals between zeros are processed. The obtained results can be used in AIS NL to solve the tasks of detection and signal recognition when a decision is made under conditions of unknown mathematical expectations on a limited observation

  12. Complexity of a kind of interval continuous self-map of finite type

    International Nuclear Information System (INIS)

    Wang Lidong; Chu Zhenyan; Liao Gongfu

    2011-01-01

    Highlights: → We find that the Hausdorff dimension of an interval continuous self-map f of finite type on its non-wandering set is some s ∈ (0, 1). → f|Ω(f) has positive topological entropy. → f|Ω(f) is chaotic in several senses, such as Devaney chaos, Kato chaos and two-point distributional chaos. - Abstract: An interval map is called finitely typal if the restriction of the map to its non-wandering set is topologically conjugate with a subshift of finite type. In this paper, we prove that there exists an interval continuous self-map of finite type whose Hausdorff dimension is an arbitrary number in the interval (0, 1), and we discuss various chaotic properties of the map and the relations between the chaotic set and the set of recurrent points.

  13. Complexity of a kind of interval continuous self-map of finite type

    Energy Technology Data Exchange (ETDEWEB)

    Wang Lidong, E-mail: wld@dlnu.edu.cn [Institute of Mathematics, Dalian Nationalities University, Dalian 116600 (China); Institute of Mathematics, Jilin Normal University, Siping 136000 (China); Chu Zhenyan, E-mail: chuzhenyan8@163.com [Institute of Mathematics, Dalian Nationalities University, Dalian 116600 (China) and Institute of Mathematics, Jilin University, Changchun 130023 (China); Liao Gongfu, E-mail: liaogf@email.jlu.edu.cn [Institute of Mathematics, Jilin University, Changchun 130023 (China)

    2011-10-15

    Highlights: → We find that the Hausdorff dimension of an interval continuous self-map f of finite type on its non-wandering set is some s ∈ (0, 1). → f|Ω(f) has positive topological entropy. → f|Ω(f) is chaotic in several senses, such as Devaney chaos, Kato chaos and two-point distributional chaos. - Abstract: An interval map is called finitely typal if the restriction of the map to its non-wandering set is topologically conjugate with a subshift of finite type. In this paper, we prove that there exists an interval continuous self-map of finite type whose Hausdorff dimension is an arbitrary number in the interval (0, 1), and we discuss various chaotic properties of the map and the relations between the chaotic set and the set of recurrent points.

  14. Chaoticity of interval self-maps with positive entropy

    International Nuclear Information System (INIS)

    Xiong Jincheng.

    1988-12-01

    Li and Yorke originally introduced the notion of chaos for continuous self-maps of the interval I = [0,1]. In the present paper we show that an interval self-map with positive topological entropy exhibits chaoticity more complicated than chaoticity in the sense of Li and Yorke. The main result is that if f: I → I is continuous and has a periodic point with odd period > 1, then there exists a closed subset K of I, invariant with respect to f, such that the periodic points are dense in K, the periods of the periodic points in K form an infinite set, and f|K is topologically mixing. (author). 9 refs

  15. Semiparametric regression analysis of interval-censored competing risks data.

    Science.gov (United States)

    Mao, Lu; Lin, Dan-Yu; Zeng, Donglin

    2017-09-01

    Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.

  16. Iterates of piecewise monotone mappings on an interval

    CERN Document Server

    Preston, Chris

    1988-01-01

    Piecewise monotone mappings on an interval provide simple examples of discrete dynamical systems whose behaviour can be very complicated. These notes are concerned with the properties of the iterates of such mappings. The material presented can be understood by anyone who has had a basic course in (one-dimensional) real analysis. The account concentrates on the topological (as opposed to the measure theoretical) aspects of the theory of piecewise monotone mappings. As well as offering an elementary introduction to this theory, these notes also contain a more advanced treatment of the problem of classifying such mappings up to topological conjugacy.

  17. Learning Inverse Rig Mappings by Nonlinear Regression.

    Science.gov (United States)

    Holden, Daniel; Saito, Jun; Komura, Taku

    2017-03-01

    We present a framework for designing inverse rig functions: functions that map low-level representations of a character's pose, such as joint positions or surface geometry, to the representation used by animators, called the animation rig. Animators design scenes using an animation rig, a framework widely adopted in animation production that allows them to design character poses and geometry via intuitive parameters and interfaces. Yet most state-of-the-art computer animation techniques control characters through raw, low-level representations such as joint angles, joint positions, or vertex coordinates. This difference often prevents the adoption of state-of-the-art techniques in animation production. Our framework solves this issue by learning a mapping between the low-level representations of the pose and the animation rig. We use nonlinear regression techniques, learning from example animation sequences designed by the animators. When new motions are provided in the skeleton space, the learned mapping is used to estimate the rig controls that reproduce the motion. We introduce two nonlinear functions for producing such a mapping: Gaussian process regression and feedforward neural networks. The appropriate solution depends on the nature of the rig and the amount of data available for training. We show our framework applied to various examples, including articulated biped characters, quadruped characters, facial animation rigs, and deformable characters. With our system, animators have the freedom to apply any motion synthesis algorithm to arbitrary rigging and animation pipelines for immediate editing. This greatly improves the productivity of 3D animation while retaining the flexibility and creativity of artistic input.

  18. Regression models for interval censored survival data: Application to HIV infection in Danish homosexual men

    DEFF Research Database (Denmark)

    Carstensen, Bendix

    1996-01-01

    This paper shows how to fit excess and relative risk regression models to interval censored survival data, and how to implement the models in standard statistical software. The methods developed are used for the analysis of HIV infection rates in a cohort of Danish homosexual men.

  19. A new fuzzy regression model based on interval-valued fuzzy neural network and its applications to management

    Directory of Open Access Journals (Sweden)

    Somaye Yeylaghi

    2017-06-01

    Full Text Available In this paper, a novel hybrid method based on an interval-valued fuzzy neural network for approximating interval-valued fuzzy regression models is presented. The work is an extension of research on real fuzzy regression models. The interval-valued fuzzy neural network (IVFNN) can be trained with crisp and interval-valued fuzzy data. Here the neural network is considered as part of a larger field called neural computing, or soft computing. Moreover, in order to find the approximate parameters, a simple algorithm based on the cost function of the fuzzy neural network is proposed. Finally, we illustrate the approach with some numerical examples and compare the method with existing ones.

  20. Landslide Hazard Mapping in Rwanda Using Logistic Regression

    Science.gov (United States)

    Piller, A.; Anderson, E.; Ballard, H.

    2015-12-01

    Landslides in the United States cause more than $1 billion in damages and 50 deaths per year (USGS 2014). Globally, figures are much more grave, yet monitoring, mapping and forecasting of these hazards are less than adequate. Seventy-five percent of the population of Rwanda earns a living from farming, mostly subsistence. Loss of farmland, housing, or life, to landslides is a very real hazard. Landslides in Rwanda have an impact at the economic, social, and environmental level. In a developing nation that faces challenges in tracking, cataloging, and predicting the numerous landslides that occur each year, satellite imagery and spatial analysis allow for remote study. We have focused on the development of a landslide inventory and a statistical methodology for assessing landslide hazards. Using logistic regression on approximately 30 test variables (i.e. slope, soil type, land cover, etc.) and a sample of over 200 landslides, we determine which variables are statistically most relevant to landslide occurrence in Rwanda. A preliminary predictive hazard map for Rwanda has been produced, using the variables selected from the logistic regression analysis.
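A minimal version of such a logistic hazard model can be sketched with two predictors standing in for the roughly 30 variables used in the study; the training data, the coefficients, and the variable choices (a normalized slope steepness and a soil index) below are entirely hypothetical.

```python
import math
import random

random.seed(2)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# hypothetical training cells: normalized slope steepness, soil index,
# and a 0/1 landslide label; steeper, weaker-soil cells slide more often
data = []
for _ in range(200):
    slope = random.random()
    soil = random.random()
    p_true = sigmoid(6.0 * slope + 2.0 * soil - 4.0)
    data.append((slope, soil, 1 if random.random() < p_true else 0))

# fit logistic regression by full-batch gradient descent
w = [0.0, 0.0, 0.0]   # intercept, slope coefficient, soil coefficient
lr = 0.5
for _ in range(3000):
    g = [0.0, 0.0, 0.0]
    for slope, soil, y in data:
        err = sigmoid(w[0] + w[1] * slope + w[2] * soil) - y
        g[0] += err
        g[1] += err * slope
        g[2] += err * soil
    for j in range(3):
        w[j] -= lr * g[j] / len(data)

def hazard(slope, soil):
    # predicted landslide probability for a map cell
    return sigmoid(w[0] + w[1] * slope + w[2] * soil)
```

Evaluating `hazard` over a grid of cells would yield the kind of preliminary predictive hazard map described in the abstract, with each cell shaded by its predicted landslide probability.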

  1. Differential properties and attracting sets of a simplest skew product of interval maps

    International Nuclear Information System (INIS)

    Efremova, Lyudmila S

    2010-01-01

    For a skew product of interval maps with a closed set of periodic points, the dependence of the structure of its ω-limit sets on its differential properties is investigated. An example of a map in this class is constructed which has the maximal differentiability properties (within a certain subclass) with respect to the variable x, is C¹-smooth in the y-variable and has one-dimensional ω-limit sets. Theorems are proved that give necessary conditions for one-dimensional ω-limit sets to exist. One of them is formulated in terms of the divergence of the series consisting of the values of a function of x; this function is the C⁰-norm of the deviation of the restrictions of the fibre maps to some nondegenerate closed interval from the identity on the same interval. Another theorem is formulated in terms of the properties of the partial derivative with respect to x of the fibre maps. A complete description is given of the ω-limit sets of a certain class of C¹-smooth skew products satisfying some natural conditions. Bibliography: 33 titles.

  2. Isopach map of interval between top of the Pictured Cliffs Sandstone and the Huerfanito Bentonite bed of the Lewis Shale, La Plata County, Colorado, and Rio Arriba and San Juan counties, New Mexico

    Science.gov (United States)

    Sandberg, D.T.

    1986-01-01

    This thickness map of a Late Cretaceous interval in the northwestern part of the San Juan Basin is part of a study of the relationship between ancient shorelines and coal-forming swamps during the final regression of the Cretaceous epicontinental sea. The top of the thickness interval is the top of the Pictured Cliffs Sandstone. The base of the interval is a thin time marker, the Huerfanito Bentonite Bed of the Lewis Shale. The interval includes all of the Pictured Cliffs Sandstone and the upper part of the Lewis Shale. The northwest boundary of the map area is the outcrop of the Pictured Cliffs Sandstone and the Lewis Shale.

  3. Regression analysis of case K interval-censored failure time data in the presence of informative censoring.

    Science.gov (United States)

    Wang, Peijie; Zhao, Hui; Sun, Jianguo

    2016-12-01

    Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For this problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and a two-step procedure is presented for estimation. In addition, the asymptotic properties of the proposed estimators of the regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.

  4. Analyzing Big Data with the Hybrid Interval Regression Methods

    Directory of Open Access Journals (Sweden)

    Chia-Hui Huang

    2014-01-01

    Full Text Available Big data is a major trend at present, with significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to modify the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data is hard to describe and the separation margin between classes is unclear.

  5. Recurrence determinism and Li-Yorke chaos for interval maps

    OpenAIRE

    Špitalský, Vladimír

    2017-01-01

    Recurrence determinism, one of the fundamental characteristics of recurrence quantification analysis, measures the predictability of a trajectory of a dynamical system. It is tightly connected with the conditional probability that, given a recurrence, following states of the trajectory will be recurrences. In this paper we study recurrence determinism of interval dynamical systems. We show that recurrence determinism distinguishes three main types of ω-limit sets of zero entropy maps: fini...

  6. Mapping the results of local statistics: Using geographically weighted regression

    Directory of Open Access Journals (Sweden)

    Stephen A. Matthews

    2012-03-01

    Full Text Available BACKGROUND The application of geographically weighted regression (GWR) - a local spatial statistical technique used to test for spatial nonstationarity - has grown rapidly in the social, health, and demographic sciences. GWR is a useful exploratory analytical tool that generates a set of location-specific parameter estimates which can be mapped and analysed to provide information on spatial nonstationarity in the relationships between predictors and the outcome variable. OBJECTIVE A major challenge to users of GWR methods is how best to present and synthesize the large number of mappable results, specifically the local parameter estimates and local t-values, generated from local GWR models. We offer an elegant solution. METHODS This paper introduces a mapping technique to simultaneously display local parameter estimates and local t-values on one map based on the use of data selection and transparency techniques. We integrate GWR software and a GIS software package (ArcGIS) and adapt earlier work in cartography on bivariate mapping. We compare traditional mapping strategies (i.e., side-by-side comparison and isoline overlay maps) with our method using an illustration focusing on US county infant mortality data. CONCLUSIONS The resultant map design is more elegant than methods used to date. This type of map presentation can facilitate the exploration and interpretation of nonstationarity, focusing map reader attention on the areas of primary interest.
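
    The local fitting that produces GWR's mappable results can be sketched directly: at each observation location a weighted least-squares regression is solved, with weights decaying with distance under a kernel. The sketch below uses a Gaussian kernel with a fixed bandwidth (both choices are ours, not the paper's) and returns the local estimates together with rough local t-values of the kind the mapping technique displays.

```python
import numpy as np

def gwr(coords, X, y, bandwidth):
    """Geographically weighted regression: one weighted least-squares fit
    per observation location, Gaussian kernel weights on distance.
    Returns local coefficients (incl. intercept) and approximate t-values."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])
    betas = np.empty((n, p + 1))
    tvals = np.empty((n, p + 1))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        XtW = Xd.T * w                          # equals Xd.T @ diag(w)
        cov = np.linalg.inv(XtW @ Xd)
        beta = cov @ XtW @ y
        resid = y - Xd @ beta
        # rough local error variance; proper GWR uses the hat-matrix trace
        sigma2 = np.sum(w * resid ** 2) / (w.sum() - (p + 1))
        se = np.sqrt(np.clip(np.diag(cov) * sigma2, 1e-12, None))
        betas[i], tvals[i] = beta, beta / se
    return betas, tvals
```

    Each row of `betas`/`tvals` belongs to one location, which is exactly what makes the output mappable.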

  7. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
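
    The basic structure of an interval-censored likelihood can be illustrated with a one-parameter exponential model under independent censoring: a subject known to fail in (L_i, R_i] contributes S(L_i) − S(R_i) to the likelihood. The toy maximum-likelihood fit below is our illustration of that structure, not one of the paper's two estimation procedures (which avoid estimating the baseline cumulative hazard under the proportional hazards model).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def interval_censored_mle(left, right):
    """MLE of the rate of an exponential failure-time model from
    interval-censored data: failure known to lie in (left_i, right_i],
    with right_i = np.inf allowed for right censoring."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)

    def negloglik(rate):
        s_left = np.exp(-rate * left)
        s_right = np.where(np.isinf(right), 0.0, np.exp(-rate * right))
        return -np.sum(np.log(s_left - s_right + 1e-300))

    res = minimize_scalar(negloglik, bounds=(1e-6, 50.0), method="bounded")
    return res.x
```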

  8. GIS-based rare events logistic regression for mineral prospectivity mapping

    Science.gov (United States)

    Xiong, Yihui; Zuo, Renguang

    2018-02-01

    Mineralization is a special type of singularity event, and can be considered a rare event, because within a specific study area the number of prospective locations (1s) is considerably smaller than the number of non-prospective locations (0s). In this study, GIS-based rare events logistic regression (RELR) was used to map the mineral prospectivity in the southwestern Fujian Province, China. An odds ratio was used to measure the relative importance of the evidence variables with respect to mineralization. The results suggest that formations, granites, and skarn alterations, followed by faults and aeromagnetic anomaly, are the most important indicators for the formation of Fe-related mineralization in the study area. The prediction rate and the area under the curve (AUC) values show that areas with higher probability have a strong spatial relationship with the known mineral deposits. Comparing the results with original logistic regression (OLR) demonstrates that the GIS-based RELR performs better than OLR. The prospectivity map obtained in this study benefits the search for skarn Fe-related mineralization in the study area.
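
    A standard device in rare events logistic regression is prior correction of the intercept when the training sample has been balanced by under-sampling the abundant non-prospective cells. The sketch below follows King and Zeng's prior-correction formula on synthetic evidence layers; all variable names, coefficients, and prevalences are illustrative, not the Fujian data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def corrected_intercept(model, tau, ybar):
    """Prior-corrected intercept for a logistic model fitted on a sample
    with event fraction ybar when the true prevalence is tau:
    b0 - log[((1 - tau)/tau) * (ybar/(1 - ybar))]."""
    return model.intercept_[0] - np.log((1 - tau) / tau * ybar / (1 - ybar))

# Illustrative fit on a sample far richer in events than the population.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))                       # synthetic evidence layers
p = 1 / (1 + np.exp(-(X @ np.array([1.5, -1.0, 0.5]) - 1.0)))
y = (rng.uniform(size=400) < p).astype(int)
clf = LogisticRegression().fit(X, y)
b0 = corrected_intercept(clf, tau=0.01, ybar=y.mean())
```

    With a true prevalence of 1% and a much larger sample event fraction, the corrected intercept is pulled down, deflating the otherwise inflated probabilities.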

  9. Monopole and dipole estimation for multi-frequency sky maps by linear regression

    Science.gov (United States)

    Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.

    2017-01-01

    We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10-15μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
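
    The computational core here, a T-T plot, is just a straight-line fit between the same patch of sky observed at two frequencies: the slope captures how the dominant foreground scales between the frequencies, and the intercept captures their relative zero level. A minimal sketch with synthetic "maps" (all amplitudes illustrative):

```python
import numpy as np

def tt_offset(map_a, map_b):
    """Regress map_b on map_a over one sky patch (a T-T plot).
    Returns (slope, intercept); the intercept estimates the relative
    monopole offset of map_b with respect to the scaled map_a."""
    slope, intercept = np.polyfit(map_a, map_b, 1)
    return slope, intercept

rng = np.random.default_rng(0)
foreground = rng.exponential(scale=10.0, size=500)   # common sky signal
map_a = foreground
map_b = 0.5 * foreground + 3.0 + rng.normal(scale=0.1, size=500)
slope, offset = tt_offset(map_a, map_b)
```

    Performing this regression patch-by-patch, as the paper does, is what separates spatially varying foreground behaviour from the global offsets.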

  10. Comparing Kriging and Regression Approaches for Mapping Soil Clay Content in a diverse Danish Landscape

    DEFF Research Database (Denmark)

    Adhikari, Kabindra; Bou Kheir, Rania; Greve, Mette Balslev

    2013-01-01

    Information on the spatial variability of soil texture, including soil clay content, in a landscape is very important for agricultural and environmental use. Different prediction techniques are available to assess and map the spatial variability of soil properties, but selecting the most suitable technique at a given site has always been a major issue in all soil mapping applications. We studied the prediction performance of ordinary kriging (OK), stratified OK (OKst), regression trees (RT), and rule-based regression kriging (RKrr) for digital mapping of soil clay content at 30.4-m grid size using 6… Stratification improved the prediction in OKst compared with that in OK, whereas RT showed the lowest performance of all (R2 = 0.52; RMSE = 0.52; and RPD = 1.17). We found RKrr to be an effective prediction method and recommend this method for any future soil mapping activities in Denmark.
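
    Regression kriging, the method recommended here (and in the geogenic radon record above), splits the prediction into two parts: a regression on covariates for the deterministic trend, and kriging of the residuals for the spatially dependent component. The sketch below uses simple kriging under an assumed exponential covariance with guessed range/sill/nugget values; in practice these are fitted to the residual variogram, so treat every number as illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def regression_kriging(coords, X, y, coords_new, X_new,
                       corr_range=30.0, sill=1.0, nugget=0.1):
    """Two-part regression kriging sketch: trend regression plus simple
    kriging of residuals; the final prediction is the sum of the two."""
    trend = LinearRegression().fit(X, y)
    resid = y - trend.predict(X)

    cov = lambda h: sill * np.exp(-h / corr_range)   # assumed covariance
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = cov(d) + nugget * np.eye(len(y))
    d0 = np.linalg.norm(coords_new[:, None, :] - coords[None, :, :], axis=-1)
    weights = np.linalg.solve(K, cov(d0).T)          # (n_obs, n_new)
    return trend.predict(X_new) + weights.T @ resid
```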

  11. Landslide susceptibility mapping using logistic statistical regression in Babaheydar Watershed, Chaharmahal Va Bakhtiari Province, Iran

    Directory of Open Access Journals (Sweden)

    Ebrahim Karimi Sangchini

    2015-01-01

    Full Text Available Landslides are amongst the most damaging natural hazards in mountainous regions. Every year, hundreds of people all over the world lose their lives in landslides; furthermore, these events have large impacts on the local and global economy. In this study, landslide hazard zonation in the Babaheydar watershed using logistic regression was conducted to determine landslide hazard areas. First, the landslide inventory map was prepared using aerial photograph interpretation and field surveys. In the next step, ten landslide conditioning factors, including altitude, slope percentage, slope aspect, lithology, distance from faults, rivers, settlements and roads, land use, and precipitation, were chosen as factors affecting landsliding in the study area. Subsequently, the landslide susceptibility map was constructed using the logistic regression model in a Geographic Information System (GIS). The ROC and Pseudo-R2 indexes were used for model assessment. Results showed that the logistic regression model provided high prediction accuracy of landslide susceptibility maps in the Babaheydar Watershed, with an ROC value of 0.876. Furthermore, the results revealed that about 44% of the watershed areas were located in high and very high hazard classes. The resultant landslide susceptibility maps can be useful in appropriate watershed management practices and for sustainable development in the region.
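
    The modelling core of such a study, a logistic regression of landslide occurrence on conditioning factors evaluated by ROC, is easy to sketch. The synthetic factors, effect sizes, and threshold below are invented stand-ins for the ten watershed factors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# standardized stand-ins for conditioning factors
slope_pct, dist_fault, precip = rng.normal(size=(3, n))
logit = 1.2 * slope_pct - 0.8 * dist_fault + 0.6 * precip - 0.5
landslide = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

factors = np.column_stack([slope_pct, dist_fault, precip])
model = LogisticRegression().fit(factors, landslide)
susceptibility = model.predict_proba(factors)[:, 1]   # mappable probabilities
auc = roc_auc_score(landslide, susceptibility)
```

    Mapping `susceptibility` over a grid of cells is what yields the susceptibility classes; the AUC plays the role of the 0.876 ROC figure in the record.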

  12. Microbiome Data Accurately Predicts the Postmortem Interval Using Random Forest Regression Models

    Directory of Open Access Journals (Sweden)

    Aeriel Belk

    2018-02-01

    Full Text Available Death investigations often include an effort to establish the postmortem interval (PMI) in cases in which the time of death is uncertain. The postmortem interval can lead to the identification of the deceased and the validation of witness statements and suspect alibis. Recent research has demonstrated that microbes provide an accurate clock that starts at death and relies on ecological change in the microbial communities that normally inhabit a body and its surrounding environment. Here, we explore how to build the most robust Random Forest regression models for prediction of PMI by testing models built on different sample types (gravesoil, skin of the torso, skin of the head), gene markers (16S ribosomal RNA (rRNA), 18S rRNA, internal transcribed spacer regions (ITS)), and taxonomic levels (sequence variants, species, genus, etc.). We also tested whether particular suites of indicator microbes were informative across different datasets. Generally, results indicate that the most accurate models for predicting PMI were built using gravesoil and skin data using the 16S rRNA genetic marker at the taxonomic level of phyla. Additionally, several phyla consistently contributed highly to model accuracy and may be candidate indicators of PMI.
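
    The regression setup described here can be sketched with scikit-learn's random forest: rows are samples, columns are taxon abundances, and the target is the known PMI. The data below are synthetic (two taxa given an artificial monotone trend with PMI), purely to show the model-building and evaluation steps:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_taxa = 300, 20
pmi_days = rng.uniform(0, 50, n)
X = rng.normal(scale=0.05, size=(n, n_taxa))     # background taxa noise
X[:, 0] += pmi_days / 50.0                       # taxon increasing with PMI
X[:, 1] += 1.0 - pmi_days / 50.0                 # taxon decreasing with PMI

X_tr, X_te, y_tr, y_te = train_test_split(X, pmi_days, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, rf.predict(X_te))
```

    Comparing such held-out scores across sample types, gene markers, and taxonomic levels is exactly the model-selection exercise the paper performs.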

  13. Flexible regression models for estimating postmortem interval (PMI) in forensic medicine.

    Science.gov (United States)

    Muñoz Barús, José Ignacio; Febrero-Bande, Manuel; Cadarso-Suárez, Carmen

    2008-10-30

    Correct determination of time of death is an important goal in forensic medicine. Numerous methods have been described for estimating postmortem interval (PMI), but most are imprecise, poorly reproducible and/or have not been validated with real data. In recent years, however, some progress in PMI estimation has been made, notably through the use of new biochemical methods for quantifying relevant indicator compounds in the vitreous humour. The best, but unverified, results have been obtained with [K+] and hypoxanthine [Hx], using simple linear regression (LR) models. The main aim of this paper is to offer more flexible alternatives to LR, such as generalized additive models (GAMs) and support vector machines (SVMs) in order to obtain improved PMI estimates. The present study, based on detailed analysis of [K+] and [Hx] in more than 200 vitreous humour samples from subjects with known PMI, compared classical LR methodology with GAM and SVM methodologies. Both proved better than LR for estimation of PMI. SVM showed somewhat greater precision than GAM, but GAM offers a readily interpretable graphical output, facilitating understanding of findings by legal professionals; there are thus arguments for using both types of models. R code for these methods is available from the authors, permitting accurate prediction of PMI from vitreous humour [K+], [Hx] and [U], with confidence intervals and graphical output provided. Copyright 2008 John Wiley & Sons, Ltd.
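
    One of the flexible alternatives compared in this record, support vector regression, can be sketched in a few lines. The [K+] and [Hx] values below are synthetic stand-ins with an invented linear relationship to PMI; only the fitting pattern, not the numbers, reflects the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
k = rng.uniform(5, 25, 200)        # hypothetical vitreous [K+] values
hx = rng.uniform(0, 500, 200)      # hypothetical vitreous [Hx] values
pmi_hours = 2.5 * k + 0.01 * hx + rng.normal(scale=2.0, size=200)

# scale features, then fit an RBF support vector regressor
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
svm.fit(np.column_stack([k, hx]), pmi_hours)
score = svm.score(np.column_stack([k, hx]), pmi_hours)   # R^2 of the fit
```

    A GAM would replace the SVR with smooth additive terms in [K+] and [Hx], trading a little precision for the interpretable graphical output the authors highlight.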

  14. Subpixel Snow Cover Mapping from MODIS Data by Nonparametric Regression Splines

    Science.gov (United States)

    Akyurek, Z.; Kuter, S.; Weber, G. W.

    2016-12-01

    Spatial extent of snow cover is often considered as one of the key parameters in climatological, hydrological and ecological modeling due to its energy storage, high reflectance in the visible and NIR regions of the electromagnetic spectrum, significant heat capacity and insulating properties. A significant challenge in snow mapping by remote sensing (RS) is the trade-off between the temporal and spatial resolution of satellite imageries. In order to tackle this issue, machine learning-based subpixel snow mapping methods, like Artificial Neural Networks (ANNs), from low or moderate resolution images have been proposed. Multivariate Adaptive Regression Splines (MARS) is a nonparametric regression tool that can build flexible models for high dimensional and complex nonlinear data. Although MARS is not often employed in RS, it has various successful implementations such as estimation of vertical total electron content in ionosphere, atmospheric correction and classification of satellite images. This study is the first attempt in RS to evaluate the applicability of MARS for subpixel snow cover mapping from MODIS data. Total 16 MODIS-Landsat ETM+ image pairs taken over European Alps between March 2000 and April 2003 were used in the study. MODIS top-of-atmospheric reflectance, NDSI, NDVI and land cover classes were used as predictor variables. Cloud-covered, cloud shadow, water and bad-quality pixels were excluded from further analysis by a spatial mask. MARS models were trained and validated by using reference fractional snow cover (FSC) maps generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also developed. The mutual comparison of obtained MARS and ANN models was accomplished on independent test areas. 
The MARS model performed better than the ANN model with an average RMSE of 0.1288 over the independent test areas; whereas the average RMSE of the ANN model …

  15. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    Science.gov (United States)

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced, as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
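
    The agreement interval at the heart of the Bland-Altman approach is simple to compute: the mean difference between the two methods plus or minus z times the standard deviation of the differences. This is the classical interval that the paper's tolerance and predictive intervals refine; the sketch below is that classical computation only.

```python
import numpy as np

def bland_altman_limits(x, y, z=1.96):
    """Classical Bland-Altman agreement interval: mean difference
    +/- z * SD of the differences (plotted against pairwise means
    in the usual Bland-Altman plot)."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    mean_diff = diff.mean()
    sd = diff.std(ddof=1)
    return mean_diff - z * sd, mean_diff + z * sd
```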

  16. Evaluation of Logistic Regression and Multivariate Adaptive Regression Spline Models for Groundwater Potential Mapping Using R and GIS

    Directory of Open Access Journals (Sweden)

    Soyoung Park

    2017-07-01

    Full Text Available This study mapped and analyzed groundwater potential using two different models, logistic regression (LR) and multivariate adaptive regression splines (MARS), and compared the results. A spatial database was constructed for groundwater well data and groundwater influence factors. Groundwater well data with a high potential yield of ≥70 m3/d were extracted, and 859 locations (70%) were used for model training, whereas the other 365 locations (30%) were used for model validation. We analyzed 16 groundwater influence factors including altitude, slope degree, slope aspect, plan curvature, profile curvature, topographic wetness index, stream power index, sediment transport index, distance from drainage, drainage density, lithology, distance from fault, fault density, distance from lineament, lineament density, and land cover. Groundwater potential maps (GPMs) were constructed using LR and MARS models and tested using a receiver operating characteristic curve. Based on this analysis, the area under the curve (AUC) for the success rate curve of GPMs created using the MARS and LR models was 0.867 and 0.838, and the AUC for the prediction rate curve was 0.836 and 0.801, respectively. This implies that the MARS model is useful and effective for groundwater potential analysis in the study area.
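
    What gives MARS its edge over plain LR in studies like this is its basis of hinge functions max(0, x − t) and max(0, t − x), whose knots are chosen adaptively in a forward/backward pass. The stripped-down sketch below fixes the knots by hand, purely to show why such a basis captures a nonlinear response that a plain linear fit misses:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def hinge_basis(x, knots):
    """MARS-style basis: a pair of hinge functions at each knot.
    Real MARS selects the knots adaptively; they are fixed here."""
    cols = [np.maximum(0.0, x - t) for t in knots] + \
           [np.maximum(0.0, t - x) for t in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 300)
y = np.abs(x - 0.5)                    # piecewise-linear response
knots = [0.25, 0.5, 0.75]
linear_r2 = LinearRegression().fit(x[:, None], y).score(x[:, None], y)
hinge_r2 = LinearRegression().fit(hinge_basis(x, knots), y) \
    .score(hinge_basis(x, knots), y)
```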

  17. Dynamical zeta functions for piecewise monotone maps of the interval

    CERN Document Server

    Ruelle, David

    2004-01-01

    Consider a space M, a map f:M\\to M, and a function g:M \\to {\\mathbb C}. The formal power series \\zeta (z) = \\exp \\sum ^\\infty _{m=1} \\frac {z^m}{m} \\sum _{x \\in \\mathrm {Fix}\\,f^m} \\prod ^{m-1}_{k=0} g (f^kx) yields an example of a dynamical zeta function. Such functions have unexpected analytic properties and interesting relations to the theory of dynamical systems, statistical mechanics, and the spectral theory of certain operators (transfer operators). The first part of this monograph presents a general introduction to this subject. The second part is a detailed study of the zeta functions associated with piecewise monotone maps of the interval [0,1]. In particular, Ruelle gives a proof of a generalized form of the Baladi-Keller theorem relating the poles of \\zeta (z) and the eigenvalues of the transfer operator. He also proves a theorem expressing the largest eigenvalue of the transfer operator in terms of the ergodic properties of (M,f,g).
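
    The formal power series is easy to evaluate numerically for a concrete interval-related example. For the angle-doubling map x ↦ 2x (mod 1) with weight g ≡ 1, f^m has 2^m − 1 fixed points, and the series sums in closed form to ζ(z) = (1 − z)/(1 − 2z) inside the disc of convergence |z| < 1/2; the truncated sum below matches it:

```python
import numpy as np

def zeta_truncated(z, n_fixed, M=60):
    """Truncated dynamical zeta function with weight g == 1:
    exp( sum_{m=1}^{M} z^m / m * N_m ), where N_m = #Fix(f^m)."""
    total = sum(z ** m / m * n_fixed(m) for m in range(1, M + 1))
    return np.exp(total)

doubling_fixed = lambda m: 2 ** m - 1    # fixed points of f^m, angle doubling
val = zeta_truncated(0.1, doubling_fixed)
closed_form = (1 - 0.1) / (1 - 2 * 0.1)  # zeta(z) = (1 - z)/(1 - 2z)
```

    The pole of ζ at z = 1/2 reflects the topological entropy log 2 of the map, the kind of connection between ζ(z) and transfer-operator spectra that the monograph develops.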

  18. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.

    Science.gov (United States)

    Choi, Jae-Seok; Kim, Munchurl

    2017-03-01

    Super-resolution (SR) has become more vital, because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to be implemented for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between Peak-Signal-to-Noise Ratio (PSNR) performances and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: Previously, linear-mapping-based conventional SR methods, including SI only used one simple yet coarse linear mapping to each patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experiment results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower

  19. Mapping urban environmental noise: a land use regression method.

    Science.gov (United States)

    Xie, Dan; Liu, Yi; Chen, Jining

    2011-09-01

    Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, however, have limited capacity to explain the association between urban growth and corresponding noise change. Therefore, these conventional methods can hardly forecast urban noise at a given outlook of development layout. This paper, for the first time, introduces a land use regression method, which has been applied for simulating urban air quality for a decade, to construct an urban noise model (LUNOS) in Dalian Municipality, Northwest China. The LUNOS model describes noise as a dependent variable of surrounding various land areas via a regressive function. The results suggest that a linear model performs better in fitting monitoring data, and there is no significant difference of the LUNOS's outputs when applied to different spatial scales. As the LUNOS facilitates a better understanding of the association between land use and urban environmental noise in comparison to conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and aid smart decision-making.
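
    The regressive function at the core of a land use regression model is ordinary linear regression of the monitored quantity on surrounding land-use areas. The sketch below is a generic LUR fit with invented land-use classes and coefficients, not the LUNOS model itself:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# hypothetical land-use area fractions around 120 monitoring sites
road, commercial, green = rng.uniform(0, 1, (3, 120))
noise_db = 50 + 12 * road + 5 * commercial - 6 * green \
    + rng.normal(scale=1.0, size=120)

lur = LinearRegression().fit(np.column_stack([road, commercial, green]),
                             noise_db)
```

    Once fitted, the model predicts noise at unmonitored locations from their land-use composition, which is what lets LUR forecast noise under a planned development layout.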

  20. Two-part zero-inflated negative binomial regression model for quantitative trait loci mapping with count trait.

    Science.gov (United States)

    Moghimbeigi, Abbas

    2015-05-07

    Poisson regression models provide a standard framework for quantitative trait locus (QTL) mapping of count traits. In practice, however, count traits are often over-dispersed relative to the Poisson distribution. In these situations, zero-inflated Poisson (ZIP), zero-inflated generalized Poisson (ZIGP) and zero-inflated negative binomial (ZINB) regression may be useful for QTL mapping of count traits. Adding genetic variables to the negative binomial part of the model may also affect the extra zeros. In this study, to overcome these challenges, I apply a two-part ZINB model. An EM algorithm with a Newton-Raphson method in the M-step is used to estimate the parameters. An application of the two-part ZINB model for QTL mapping is considered to detect associations between the formation of gallstones and the genotypes of markers. Copyright © 2015 Elsevier Ltd. All rights reserved.
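
    The zero-inflation mechanism these models share is easiest to see in the ZIP likelihood, the simplest member of the family (ZINB replaces the Poisson kernel with a negative binomial). The hand-rolled sketch below fits it by direct numerical maximum likelihood rather than the paper's EM/Newton-Raphson scheme:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def zip_negloglik(params, y):
    """Negative log-likelihood of the zero-inflated Poisson model:
    P(0) = pi + (1-pi) e^{-lam};  P(k) = (1-pi) Pois(k; lam) for k > 0.
    params = (logit(pi), log(lam)) for unconstrained optimization."""
    pi = expit(params[0])
    lam = np.exp(params[1])
    zero = (y == 0)
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1.0)
    return -(zero.sum() * ll_zero + ll_pos[~zero].sum())

def zip_fit(y):
    y = np.asarray(y, dtype=float)
    res = minimize(zip_negloglik, x0=np.zeros(2), args=(y,))
    return expit(res.x[0]), np.exp(res.x[1])
```

    In a QTL setting, both the inflation probability and the count mean would be regressed on genotype; here they are scalars to keep the likelihood structure visible.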

  1. Tridimensional Regression for Comparing and Mapping 3D Anatomical Structures

    Directory of Open Access Journals (Sweden)

    Kendra K. Schmid

    2012-01-01

    Full Text Available Shape analysis is useful for a wide variety of disciplines and has many applications. There are many approaches to shape analysis, one of which focuses on the analysis of shapes that are represented by the coordinates of predefined landmarks on the object. This paper discusses Tridimensional Regression, a technique that can be used for comparing and mapping 3D anatomical structures represented by sets of three-dimensional landmark coordinates. The degree of similarity between shapes can be quantified using the tridimensional coefficient of determination (R2). An experiment was conducted to evaluate the effectiveness of this technique in correctly matching the image of a face with another image of the same face. These results were compared to the R2 values obtained when only two dimensions are used, and show that using three dimensions increases the ability to correctly match and discriminate between faces.
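
    For landmark data of this kind, the degree-of-match computation can be sketched with a Procrustes superimposition, which, like tridimensional regression, aligns configurations by translation, rotation, and scaling; we use 1 − disparity as a hedged stand-in for the paper's R² coefficient:

```python
import numpy as np
from scipy.spatial import procrustes

def shape_similarity(landmarks_a, landmarks_b):
    """Similarity of two landmark configurations (n_points x 3 arrays).
    Procrustes superimposition removes translation/rotation/scale;
    1 - disparity behaves like a coefficient of determination
    (1 = identical shapes). A stand-in for tridimensional regression R^2."""
    _, _, disparity = procrustes(landmarks_a, landmarks_b)
    return 1.0 - disparity
```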

  2. Empirical evaluation of selective DNA pooling to map QTL in dairy cattle using a half-sib design by comparison to individual genotyping and interval mapping

    Directory of Open Access Journals (Sweden)

    Robinson Nicholas

    2007-04-01

    Full Text Available Abstract This study represents the first attempt at an empirical evaluation of the DNA pooling methodology by comparing it to individual genotyping and interval mapping to detect QTL in a dairy half-sib design. The findings indicated that the use of peak heights from the pool electropherograms without correction for stutter (shadow product) and preferential amplification performed as well as corrected estimates of frequencies. However, errors were found to decrease the power of the experiment at every stage of the pooling and analysis. The main sources of errors include technical errors from DNA quantification, pool construction, inconsistent differential amplification, and the prevalence of sire alleles in the dams. Additionally, interval mapping using individual genotyping gains information from phenotypic differences between individuals in the same pool and from neighbouring markers, which is lost in a DNA pooling design. These errors cause some differences between the markers detected as significant by pooling and those found significant by interval mapping based on individual selective genotyping. Therefore, it is recommended that pooled genotyping only be used as part of an initial screen, with significant results to be confirmed by individual genotyping.

  3. Mapping of the DLQI scores to EQ-5D utility values using ordinal logistic regression.

    Science.gov (United States)

    Ali, Faraz Mahmood; Kay, Richard; Finlay, Andrew Y; Piguet, Vincent; Kupfer, Joerg; Dalgard, Florence; Salek, M Sam

    2017-11-01

    The Dermatology Life Quality Index (DLQI) and the European Quality of Life-5 Dimension (EQ-5D) are separate measures that may be used to gather health-related quality of life (HRQoL) information from patients. The EQ-5D is a generic measure from which health utility estimates can be derived, whereas the DLQI is a specialty-specific measure to assess HRQoL. To reduce the burden of multiple measures being administered and to enable a more disease-specific calculation of health utility estimates, we explored an established mathematical technique known as ordinal logistic regression (OLR) to develop an appropriate model to map DLQI data to EQ-5D-based health utility estimates. Retrospective data from 4010 patients were randomly divided five times into two groups for the derivation and testing of the mapping model. Split-half cross-validation was utilized resulting in a total of ten ordinal logistic regression models for each of the five EQ-5D dimensions against age, sex, and all ten items of the DLQI. Using Monte Carlo simulation, predicted health utility estimates were derived and compared against those observed. This method was repeated for both OLR and a previously tested mapping methodology based on linear regression. The model was shown to be highly predictive and its repeated fitting demonstrated a stable model using OLR as well as linear regression. The mean differences between OLR-predicted health utility estimates and observed health utility estimates ranged from 0.0024 to 0.0239 across the ten modeling exercises, with an average overall difference of 0.0120 (a 1.6% underestimate, not of clinical importance). This modeling framework developed in this study will enable researchers to calculate EQ-5D health utility estimates from a specialty-specific study population, reducing patient and economic burden.

  4. Improved predictive mapping of indoor radon concentrations using ensemble regression trees based on automatic clustering of geological units

    International Nuclear Information System (INIS)

    Kropat, Georg; Bochud, Francois; Jaboyedoff, Michel; Laedermann, Jean-Pascal; Murith, Christophe; Palacios, Martha; Baechler, Sébastien

    2015-01-01

    Purpose: According to estimates, around 230 people die each year as a result of radon exposure in Switzerland. This public health concern makes reliable indoor radon prediction and mapping methods necessary in order to improve risk communication to the public. The aim of this study was to develop an automated method to classify lithological units according to their radon characteristics and to develop mapping and predictive tools in order to improve local radon prediction. Method: About 240 000 indoor radon concentration (IRC) measurements in about 150 000 buildings were available for our analysis. The automated classification of lithological units was based on k-medoids clustering via pair-wise Kolmogorov distances between IRC distributions of lithological units. For IRC mapping and prediction we used random forests and Bayesian additive regression trees (BART). Results: The automated classification groups lithological units well in terms of their IRC characteristics. Especially the IRC differences in metamorphic rocks like gneiss are well revealed by this method. The maps produced by random forests soundly represent the regional differences of IRCs in Switzerland and improve the spatial detail compared to existing approaches. We could explain 33% of the variation in IRC data with random forests. Additionally, the variable importances evaluated by random forests show that building characteristics are less important predictors for IRCs than spatial/geological influences. BART could explain 29% of IRC variability and produced maps that indicate the prediction uncertainty. Conclusion: Ensemble regression trees are a powerful tool to model and understand the multidimensional influences on IRCs. Automatic clustering of lithological units complements this method by facilitating the interpretation of radon properties of rock types. This study provides an important element for radon risk communication. Future approaches should consider taking into account further variables.
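
    The kind of predictor-importance conclusion reported here (building characteristics mattering less than spatial/geological drivers) comes straight out of a random forest's variable importances. A toy sketch with two synthetic drivers, whose names and effect sizes are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
geology = rng.normal(size=n)        # strong spatial/geological driver
building = rng.normal(size=n)       # weak building-characteristic driver
log_radon = 2.0 * geology + 0.3 * building + rng.normal(scale=0.5, size=n)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(np.column_stack([geology, building]), log_radon)
importances = rf.feature_importances_   # normalized, sums to 1
```

    Ranking `importances` across all candidate predictors is how such a study supports statements about which influences dominate IRC.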

  5. Semiparametric Allelic Tests for Mapping Multiple Phenotypes: Binomial Regression and Mahalanobis Distance.

    Science.gov (United States)

    Majumdar, Arunabha; Witte, John S; Ghosh, Saurabh

    2015-12-01

    Binary phenotypes commonly arise due to multiple underlying quantitative precursors, and genetic variants may impact multiple traits in a pleiotropic manner. Hence, simultaneously analyzing such correlated traits may be more powerful than analyzing individual traits. Various genotype-level methods, e.g., MultiPhen (O'Reilly et al.), have been developed to identify genetic factors underlying a multivariate phenotype. For univariate phenotypes, the usefulness and applicability of allele-level tests have been investigated. The test of allele frequency difference among cases and controls is commonly used for mapping case-control association. However, allelic methods for multivariate association mapping have not been studied much. In this article, we explore two allelic tests of multivariate association: one using a Binomial regression model based on inverted regression of genotype on phenotype (Binomial regression-based Association of Multivariate Phenotypes [BAMP]), and the other employing the Mahalanobis distance between two sample means of the multivariate phenotype vector for two alleles at a single-nucleotide polymorphism (Distance-based Association of Multivariate Phenotypes [DAMP]). These methods can incorporate both discrete and continuous phenotypes. Some theoretical properties of BAMP are studied. Using simulations, the power of the methods for detecting multivariate association is compared with that of the genotype-level test MultiPhen. The allelic tests yield marginally higher power than MultiPhen for multivariate phenotypes. For one or two binary traits under a recessive mode of inheritance, allelic tests are found to be substantially more powerful. All three tests are applied to two real datasets, and the results offer some support for the simulation study.
We propose a hybrid approach for testing multivariate association that implements MultiPhen when Hardy-Weinberg Equilibrium (HWE) is violated and BAMP otherwise, because the allelic approaches assume HWE
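The distance statistic behind DAMP can be illustrated on synthetic numbers: the Mahalanobis distance between the mean multivariate phenotype vectors of the two allele groups. All data, group sizes and the pooled-covariance estimator below are illustrative assumptions, not details from the article.

```python
import numpy as np

# Mahalanobis distance between the sample means of a bivariate phenotype
# for two allele groups (synthetic data; the pooled covariance is a
# simplifying assumption, not necessarily the estimator used in DAMP).
rng = np.random.default_rng(5)
pheno_a = rng.normal([0.0, 0.0], 1.0, (200, 2))  # phenotypes observed with allele A
pheno_b = rng.normal([0.3, 0.1], 1.0, (150, 2))  # phenotypes observed with allele B

diff = pheno_a.mean(axis=0) - pheno_b.mean(axis=0)
centered = np.vstack([pheno_a - pheno_a.mean(axis=0),
                      pheno_b - pheno_b.mean(axis=0)])
pooled_cov = np.cov(centered.T)                  # 2x2 pooled covariance
d2 = float(diff @ np.linalg.solve(pooled_cov, diff))  # squared Mahalanobis distance
print(f"squared Mahalanobis distance between allele means: {d2:.3f}")
```

A large distance relative to its null distribution would indicate multivariate association at the locus.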

  6. The best of both worlds: Phylogenetic eigenvector regression and mapping

    Directory of Open Access Journals (Sweden)

    José Alexandre Felizola Diniz Filho

    2015-09-01

    Eigenfunction analyses have been widely used to model patterns of autocorrelation in time, space and phylogeny. In a phylogenetic context, Diniz-Filho et al. (1998) proposed what they called Phylogenetic Eigenvector Regression (PVR), in which pairwise phylogenetic distances among species are submitted to a Principal Coordinate Analysis, and the eigenvectors are then used as explanatory variables in regressions, correlations or ANOVAs. More recently, a new approach called Phylogenetic Eigenvector Mapping (PEM) was proposed, with the main advantage of explicitly incorporating a model-based warping of phylogenetic distance, in which an Ornstein-Uhlenbeck (O-U) process is fitted to the data before eigenvector extraction. Here we compared PVR and PEM with respect to estimated phylogenetic signal, correlated evolution under alternative evolutionary models, and phylogenetic imputation, using simulated data. Despite the similarity between the two approaches, PEM has a slightly higher prediction ability and is more general than the original PVR. Even so, in a conceptual sense, PEM may provide a technique in the best of both worlds, combining the flexibility of data-driven, empirical eigenfunction analyses with the sound insights provided by evolutionary models well known in comparative analyses.

  7. QTL mapping for yield components and agronomic traits in a Brazilian soybean population

    Directory of Open Access Journals (Sweden)

    Josiane Isabela da Silva Rodrigues

    2016-11-01

    The objective of this work was to map QTL for agronomic traits in a Brazilian soybean population. For this, 207 F2:3 progenies from the cross CS3035PTA276-1-5-2 × UFVS2012 were genotyped and cultivated in Viçosa-MG, using a randomized block design with three replications. QTL detection was carried out by linear regression and composite interval mapping. Thirty molecular markers linked to QTL were detected by linear regression for a total of nine agronomic traits. QTL for SWP (seed weight per plant), W100S (weight of 100 seeds), NPP (number of pods per plant), and NSP (number of seeds per plant) were detected by composite interval mapping. Four QTL with additive effects are promising for marker-assisted selection (MAS). In particular, the markers Satt155 and Satt300 could be useful in simultaneous selection for greater SWP, NPP, and NSP.

  8. The use of regression analysis in determining reference intervals for low hematocrit and thrombocyte count in multiple electrode aggregometry and platelet function analyzer 100 testing of platelet function.

    Science.gov (United States)

    Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D

    2017-11-01

    Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for these low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and on PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with the originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis; however, the coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
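The regression-based reference-interval idea can be sketched on synthetic numbers. Assuming normally distributed residuals, a 95% reference interval around the regression line is the prediction ± 1.96 residual standard deviations; the variable names, units and coefficients below are hypothetical, not values from the study.

```python
import numpy as np

# Illustrative sketch (synthetic data): a 95% reference interval for a
# platelet-function readout modeled as a linear function of platelet count.
rng = np.random.default_rng(0)
platelet_count = rng.uniform(10, 150, 200)                  # x10^9/L (low range)
aggregation = 0.6 * platelet_count + rng.normal(0, 8, 200)  # arbitrary units

# Ordinary least-squares fit: aggregation = a + b * platelet_count
X = np.column_stack([np.ones_like(platelet_count), platelet_count])
coef, *_ = np.linalg.lstsq(X, aggregation, rcond=None)
resid = aggregation - X @ coef
sd = resid.std(ddof=2)                   # residual standard deviation

def reference_interval(count, z=1.96):
    """Approximate 95% reference interval at a given platelet count."""
    center = coef[0] + coef[1] * count
    return center - z * sd, center + z * sd

lo, hi = reference_interval(50.0)
print(f"95% reference interval at 50 x10^9/L: ({lo:.1f}, {hi:.1f})")
```

Extending this to two covariates (platelet count and hematocrit) gives the multiple-regression intervals discussed in the abstract.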

  9. Computing confidence and prediction intervals of industrial equipment degradation by bootstrapped support vector regression

    International Nuclear Information System (INIS)

    Lins, Isis Didier; Droguett, Enrique López; Moura, Márcio das Chagas; Zio, Enrico; Jacinto, Carlos Magno

    2015-01-01

    Data-driven learning methods for predicting the evolution of the degradation processes affecting equipment are becoming increasingly attractive in reliability and prognostics applications. Among these, we consider here Support Vector Regression (SVR), which has provided promising results in various applications. Nevertheless, the predictions provided by SVR are point estimates, whereas in order to take better informed decisions, an uncertainty assessment should also be carried out. For this, we apply the bootstrap to SVR so as to obtain confidence and prediction intervals, without having to make any assumption about probability distributions and with good performance even when only a small data set is available. The bootstrapped SVR is first verified on Monte Carlo experiments and then applied to a real case study concerning the prediction of degradation of a component from the offshore oil industry. The results obtained indicate that the bootstrapped SVR is a promising tool for providing reliable point and interval estimates, which can inform maintenance-related decisions on degrading components. - Highlights: • Bootstrap (pairs/residuals) and SVR are used as an uncertainty analysis framework. • Numerical experiments are performed to assess accuracy and coverage properties. • More bootstrap replications do not significantly improve performance. • Degradation of equipment of offshore oil wells is estimated by bootstrapped SVR. • Estimates of the scale growth rate can support maintenance-related decisions.
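The pairs-bootstrap interval construction described above can be sketched with synthetic degradation data. To keep the sketch dependency-free, a plain least-squares quadratic stands in for the SVR model (any point-estimate regressor can be resampled the same way), so this illustrates the interval mechanics, not SVR itself.

```python
import numpy as np

# Pairs bootstrap: refit the model on resampled (t, y) pairs and take
# percentiles of the resulting predictions as a confidence interval.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 60)                      # e.g. operating time
y = 0.5 * t**2 + rng.normal(0, 2, t.size)       # synthetic degradation signal

def fit_predict(ts, ys, t_new):
    """Quadratic least-squares stand-in for the point-estimate regressor."""
    return np.polyval(np.polyfit(ts, ys, 2), t_new)

B = 500                                          # bootstrap replications
t_new = 12.0                                     # future time of interest
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, t.size, t.size)        # resample pairs with replacement
    boot[b] = fit_predict(t[idx], y[idx], t_new)

lo, hi = np.percentile(boot, [2.5, 97.5])        # 95% confidence interval
print(f"point ≈ {fit_predict(t, y, t_new):.1f}, 95% CI ≈ ({lo:.1f}, {hi:.1f})")
```

A prediction interval would additionally widen the band by the estimated noise variance, as the paper's residuals variant does.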

  10. Interval ridge regression (iRR) as a fast and robust method for quantitative prediction and variable selection applied to edible oil adulteration.

    Science.gov (United States)

    Jović, Ozren; Smrečki, Neven; Popović, Zora

    2016-04-01

    A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is applied to six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on the optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS); iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in 1 out of 9 cases). Adulteration of hempseed (H) oil, a well-known health-beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that, using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (in all 8 cases, low RMSEP and R² > 0.99). Copyright © 2015 Elsevier B.V. All rights reserved.
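The interval-selection idea behind iRR can be sketched as follows: split the variable (wavelength) axis into contiguous windows, fit ridge regression on each window, and keep the window with the lowest validation error. The synthetic "spectra", window size, regularization strength and selection rule below are assumptions for illustration, not the published algorithm.

```python
import numpy as np

# Synthetic data: 100 "wavelength" columns, of which only 40..49 carry signal.
rng = np.random.default_rng(2)
n, p = 80, 100
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[40:50] = 1.0
y = X @ true_w + rng.normal(0, 0.1, n)

def ridge_fit(Xtr, ytr, lam=1.0):
    """Closed-form ridge solution (X'X + lam*I)^-1 X'y."""
    k = Xtr.shape[1]
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(k), Xtr.T @ ytr)

train, val = slice(0, 60), slice(60, 80)
windows = [slice(i, i + 10) for i in range(0, p, 10)]

def window_rmse(s):
    w = ridge_fit(X[train, s], y[train])
    return float(np.sqrt(np.mean((X[val, s] @ w - y[val]) ** 2)))

best = min(windows, key=window_rmse)             # interval with lowest val. RMSE
print(f"selected interval: columns {best.start}..{best.stop - 1}, "
      f"validation RMSE = {window_rmse(best):.3f}")
```

The selected window recovers the informative spectral region, which is the variable-selection behavior the abstract attributes to iRR.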

  11. On Bayesian shared component disease mapping and ecological regression with errors in covariates.

    Science.gov (United States)

    MacNab, Ying C

    2010-05-20

    Recent literature on Bayesian disease mapping presents shared component models (SCMs) for joint spatial modeling of two or more diseases with common risk factors. In this study, Bayesian hierarchical formulations of shared component disease mapping and ecological models are explored and developed in the context of ecological regression, taking into consideration errors in covariates. A review of multivariate disease mapping models (MultiVMs) such as the multivariate conditional autoregressive models that are also part of the more recent Bayesian disease mapping literature is presented. Some insights into the connections and distinctions between the SCM and MultiVM procedures are communicated. Important issues surrounding (appropriate) formulation of shared- and disease-specific components, consideration/choice of spatial or non-spatial random effects priors, and identification of model parameters in SCMs are explored and discussed in the context of spatial and ecological analysis of small area multivariate disease or health outcome rates and associated ecological risk factors. The methods are illustrated through an in-depth analysis of four-variate road traffic accident injury (RTAI) data: gender-specific fatal and non-fatal RTAI rates in 84 local health areas in British Columbia (Canada). Fully Bayesian inference via Markov chain Monte Carlo simulations is presented. Copyright 2010 John Wiley & Sons, Ltd.

  12. Collapse susceptibility mapping in karstified gypsum terrain (Sivas basin - Turkey) by conditional probability, logistic regression, artificial neural network models

    Science.gov (United States)

    Yilmaz, Isik; Keskin, Inan; Marschalko, Marian; Bednarik, Martin

    2010-05-01

    This study compares GIS-based collapse susceptibility mapping methods, namely conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN), applied to gypsum rock masses in the Sivas basin (Turkey). A Digital Elevation Model (DEM) was first constructed using GIS software. Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) as a measure of vegetation cover, and distance from roads and settlements, were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the CP, LR and ANN models, and they were then compared by means of their validations. Area Under Curve (AUC) values obtained from all three methodologies showed that the map obtained from the ANN model appears more accurate than the other models, and the results also showed that artificial neural networks are a useful tool in the preparation of collapse susceptibility maps and are highly compatible with GIS operating features. Key words: Collapse; doline; susceptibility map; gypsum; GIS; conditional probability; logistic regression; artificial neural networks.

  13. Desertification Susceptibility Mapping Using Logistic Regression Analysis in the Djelfa Area, Algeria

    Directory of Open Access Journals (Sweden)

    Farid Djeddaoui

    2017-10-01

    The main goal of this work was to identify the areas that are most susceptible to desertification in a part of the Algerian steppe, and to quantitatively assess the key factors that contribute to this desertification. In total, 139 desertified zones were mapped using field surveys and photo-interpretation. We selected 16 spectral and geomorphic predictive factors, which a priori play a significant role in desertification. They were mainly derived from Landsat 8 imagery and the Shuttle Radar Topographic Mission digital elevation model (SRTM DEM). Some factors, such as the topographic position index (TPI) and curvature, were used for the first time in this kind of study. For this purpose, we adapted the logistic regression algorithm for desertification susceptibility mapping, which has been widely used for landslide susceptibility mapping. The logistic model was evaluated using the area under the receiver operating characteristic (ROC) curve. The model accuracy was 87.8%. We estimated the model uncertainties using a bootstrap method. Our analysis suggests that the predictive model is robust and stable. Our results indicate that land cover factors, including the normalized difference vegetation index (NDVI) and rangeland classes, play a major role in determining desertification occurrence, while geomorphological factors have a limited impact. The predictive map shows that 44.57% of the area is classified as highly to very highly susceptible to desertification. The developed approach can be used to assess desertification in areas with similar characteristics and to guide possible actions to combat desertification.
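The core of such a susceptibility model, a logistic regression scored by ROC AUC, can be sketched with synthetic predictors. The factor names, coefficients and the toy gradient-ascent fit below are illustrative assumptions, not details from the study.

```python
import numpy as np

# Logistic-regression susceptibility scoring, validated with ROC AUC.
# A plain gradient ascent on the log-likelihood stands in for a real solver.
rng = np.random.default_rng(3)
n = 400
ndvi = rng.normal(size=n)                      # hypothetical predictive factors
slope = rng.normal(size=n)
X = np.column_stack([np.ones(n), ndvi, slope])
true_logit = -0.5 + 2.0 * ndvi - 1.0 * slope   # "true" susceptibility process
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

w = np.zeros(3)
for _ in range(2000):                          # gradient ascent on log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.01 * X.T @ (y - p) / n

score = X @ w
# ROC AUC: probability that a random positive outranks a random negative.
pos, neg = score[y == 1], score[y == 0]
auc = float((pos[:, None] > neg[None, :]).mean())
print(f"in-sample AUC = {auc:.2f}")
```

In practice the AUC would be computed on held-out points, and bootstrap refits of `w` would give the uncertainty estimates the abstract mentions.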

  14. Bootstrapped neural nets versus regression kriging in the digital mapping of pedological attributes: the automatic and time-consuming perspectives

    Science.gov (United States)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; Manna, Piero; Terribile, Fabio

    2013-04-01

    Digital soil mapping procedures are widely used to build two-dimensional continuous maps of several pedological attributes. Our work addressed a regression kriging (RK) technique and a bootstrapped artificial neural network approach in order to evaluate and compare (i) the accuracy of prediction, (ii) the suitability for inclusion in automatic engines (e.g. to constitute web processing services), and (iii) the time cost needed for calibrating models and for making predictions. Regression kriging is perhaps the most widely used geostatistical technique in the digital soil mapping literature. Here we applied EBLUP regression kriging, as it is deemed to be the most statistically sound RK flavor by pedometricians. An unusual multi-parametric and nonlinear machine learning approach was also applied, called BAGAP (Bootstrap aggregating Artificial neural networks with Genetic Algorithms and Principal component regression). BAGAP combines a selected set of weighted neural nets having specified characteristics to yield an ensemble response. The purpose of applying these two particular models is to ascertain whether and how much a more cumbersome machine learning method can be more promising in making accurate/precise predictions. Being aware of the difficulty of handling objects based on EBLUP-RK as well as BAGAP when they are embedded in environmental applications, we explore their suitability for being wrapped within Web Processing Services. Two further aspects are considered for an exhaustive evaluation and comparison: automaticity and computation time with/without high performance computing leverage.

  15. Relative performances of artificial neural network and regression mapping tools in evaluation of spinal loads and muscle forces during static lifting.

    Science.gov (United States)

    Arjmand, N; Ekrami, O; Shirazi-Adl, A; Plamondon, A; Parnianpour, M

    2013-05-31

    Two artificial neural networks (ANNs) are constructed, trained, and tested to map inputs of a complex trunk finite element (FE) model to its outputs for spinal loads and muscle forces. Five input variables (thorax flexion angle, load magnitude, its anterior and lateral positions, and load handling technique, i.e., one- or two-handed static lifting) and four model outputs (L4-L5 and L5-S1 disc compression and anterior-posterior shear forces) for spinal loads and 76 model outputs (forces in individual trunk muscles) are considered. Moreover, full quadratic regression equations mapping input-outputs of the model, developed here for muscle forces and previously for spine loads, are used to compare the relative accuracy of these two mapping tools (ANN and regression equations). Results indicate that the ANNs are more accurate in mapping input-output relationships of the FE model (RMSE = 20.7 N for spinal loads and RMSE = 4.7 N for muscle forces) as compared to regression equations (RMSE = 120.4 N for spinal loads and RMSE = 43.2 N for muscle forces). Quadratic regression equations map only up to second-order variations of outputs with inputs, while ANNs capture higher-order variations too. Despite satisfactory achievement in estimating overall muscle forces by the ANN, some inadequacies are noted, including assigning force to antagonistic muscles with no activity in the optimization algorithm of the FE model or predicting slightly different forces in bilateral pair muscles in symmetric lifting activities. Using these user-friendly tools, spine loads and trunk muscle forces during symmetric and asymmetric static lifts can be easily estimated. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Remote-sensing data processing with the multivariate regression analysis method for iron mineral resource potential mapping: a case study in the Sarvian area, central Iran

    Science.gov (United States)

    Mansouri, Edris; Feizi, Faranak; Jafari Rad, Alireza; Arian, Mehran

    2018-03-01

    This paper uses multivariate regression to create a mathematical model for iron skarn exploration in the Sarvian area, central Iran, applying multivariate regression for mineral prospectivity mapping (MPM). The main target of this paper is to apply multivariate regression analysis (as an MPM method) to map iron outcrops in the northeastern part of the study area in order to discover new iron deposits in other parts of the study area. Two types of multivariate regression model, using two linear equations, were employed to discover new mineral deposits. This method is one of the reliable methods for processing satellite images. ASTER satellite images (14 bands) were used as unique independent variables (UIVs), and iron outcrops were mapped as the dependent variable for MPM. According to the results of the probability value (p value), coefficient of determination (R²) and adjusted coefficient of determination (R²adj), the second regression model (which consisted of multiple UIVs) fit better than the other models. The accuracy of the model was confirmed by the iron outcrop map and geological observation. Based on field observation, iron mineralization occurs at the contact of limestone and intrusive rocks (skarn type).

  17. Landslide susceptibility mapping using frequency ratio, logistic regression, artificial neural networks and their comparison: A case study from Kat landslides (Tokat—Turkey)

    Science.gov (United States)

    Yilmaz, Işık

    2009-06-01

    The purpose of this study is to compare the landslide susceptibility mapping methods of frequency ratio (FR), logistic regression and artificial neural networks (ANN) applied in Kat County (Tokat—Turkey). A digital elevation model (DEM) was first constructed using GIS software. Landslide-related factors such as geology, faults, drainage system, topographical elevation, slope angle, slope aspect, topographic wetness index (TWI) and stream power index (SPI) were used in the landslide susceptibility analyses. Landslide susceptibility maps were produced from the frequency ratio, logistic regression and neural network models, and they were then compared by means of their validations. The accuracies of the susceptibility maps for all three models were obtained from the comparison of the landslide susceptibility maps with the known landslide locations. Respective area under curve (AUC) values of 0.826, 0.842 and 0.852 for frequency ratio, logistic regression and artificial neural networks showed that the map obtained from the ANN model is more accurate than the other models, although the accuracies of all three models can be considered relatively similar. The results obtained in this study also showed that the frequency ratio model can be used as a simple tool in the assessment of landslide susceptibility when sufficient data are available. Input, calculation and output processes are very simple and readily understood in the frequency ratio model, whereas logistic regression and neural networks require the conversion of data to ASCII or other formats. Moreover, it is also very hard to process large amounts of data in the statistical package.

  18. Landslide susceptibility mapping on a global scale using the method of logistic regression

    Directory of Open Access Journals (Sweden)

    L. Lin

    2017-08-01

    This paper proposes a statistical model for mapping global landslide susceptibility based on logistic regression. After investigating explanatory factors for landslides in the existing literature, five factors were selected for modeling landslide susceptibility: relative relief, extreme precipitation, lithology, ground motion and soil moisture. When building the model, 70 % of landslide and non-landslide points were randomly selected for the logistic regression, and the others were used for model validation. To evaluate the accuracy of the predictive models, this paper adopts several criteria, including the receiver operating characteristic (ROC) curve method. The logistic regression experiments found all five factors to be significant in explaining landslide occurrence on a global scale. During the modeling process, the percentage correct in the confusion matrix of landslide classification was approximately 80 % and the area under the curve (AUC) was nearly 0.87. During the validation process, the above statistics were about 81 % and 0.88, respectively. Such a result indicates that the model has strong robustness and stable performance. This model found that, at a global scale, soil moisture can be dominant in the occurrence of landslides, while topographic factors may be secondary.

  19. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, the article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in the original, principal component, and diffusion spaces are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
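The diffusion-map step described above can be sketched in a few lines: build a Gaussian kernel on the data, row-normalize it into a Markov transition matrix, and take the leading non-trivial eigenvectors as intrinsic coordinates. The noisy-circle data and the bandwidth eps = 0.5 are illustrative assumptions; the subsequent Gaussian-process regression step is omitted.

```python
import numpy as np

# Diffusion-map embedding of a noisy circle into two intrinsic coordinates.
rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 150)
data = np.column_stack([np.cos(theta), np.sin(theta)]) + rng.normal(0, 0.05, (150, 2))

d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
K = np.exp(-d2 / 0.5)                                      # Gaussian kernel, eps = 0.5
P = K / K.sum(axis=1, keepdims=True)                       # Markov normalization

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
# The first eigenvector is trivial (constant, eigenvalue 1); the next two,
# scaled by their eigenvalues, give the diffusion coordinates.
coords = vecs.real[:, order[1:3]] * vals.real[order[1:3]]
print("embedding shape:", coords.shape)
```

Distances between rows of `coords` approximate diffusion distances, which is what the paper feeds into the Gaussian-process correlation structure.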

  20. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

    Science.gov (United States)

    Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

    2015-09-01

    This paper aims to estimate the temperatures equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained, in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Landslide susceptibility mapping for a part of North Anatolian Fault Zone (Northeast Turkey) using logistic regression model

    Science.gov (United States)

    Demir, Gökhan; Aytekin, Mustafa; Banu Ikizler, Sabriye; Angın, Zekai

    2013-04-01

    The North Anatolian Fault is known as one of the most active and destructive fault zones, having produced many earthquakes of high magnitude. Along this fault zone, the morphology and the lithological features are prone to landsliding. Many earthquake-induced landslides have been recorded by several studies along this fault zone, and these landslides have caused both injuries and loss of life. Therefore, a detailed landslide susceptibility assessment for this area is indispensable. In this context, a landslide susceptibility assessment for a 1445 km² area in the Kelkit River valley, a part of the North Anatolian Fault zone (Eastern Black Sea region of Turkey), was undertaken in this study, and its results are summarized here. For this purpose, a geographical information system (GIS) and a bivariate statistical model were used. Initially, landslide inventory maps were prepared using landslide data determined by field surveys and landslide data taken from the General Directorate of Mineral Research and Exploration. The landslide conditioning factors considered were lithology, slope gradient, slope aspect, topographical elevation, distance to streams, distance to roads, distance to faults, drainage density and fault density. The ArcGIS package was used to manipulate and analyze all the collected data. The logistic regression method was then applied to create a landslide susceptibility map. The landslide susceptibility map was divided into five susceptibility regions: very low, low, moderate, high and very high. The result of the analysis was verified using the inventoried landslide locations and compared with the produced probability model. For this purpose, the Area Under Curve (AUC) approach was applied, and an AUC value was obtained. Based on this AUC value, the obtained landslide susceptibility map was judged satisfactory. Keywords: North Anatolian Fault Zone, Landslide susceptibility map, Geographical Information Systems, Logistic Regression Analysis.

  2. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  3. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  4. Thorough statistical comparison of machine learning regression models and their ensembles for sub-pixel imperviousness and imperviousness change mapping

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2017-12-01

    We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R². These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results proved that the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models, or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detection of subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels if the relevant change threshold was set at one or three percent. Also, for a five percent change threshold, most of the algorithms did not ensure that the accuracy of the change map was higher than the accuracy of a random classifier. For a relevant change threshold of ten percent, all approaches performed satisfactorily.

  6. Interpreting parameters in the logistic regression model with random effects

    DEFF Research Database (Denmark)

    Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben

    2000-01-01

    interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects
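    The record's keywords point to the median odds ratio (MOR), a summary of cluster-level heterogeneity in a logistic model with a normally distributed random intercept. A sketch of the standard formula, MOR = exp(sqrt(2*sigma2) * Phi^-1(0.75)), where sigma2 is the random-effect variance:

```python
# Median odds ratio for a random-intercept logistic regression model.
from math import exp, sqrt
from scipy.stats import norm

def median_odds_ratio(sigma2: float) -> float:
    """MOR = exp(sqrt(2 * sigma2) * Phi^{-1}(0.75)); MOR = 1 means no clustering."""
    return exp(sqrt(2.0 * sigma2) * norm.ppf(0.75))

print(round(median_odds_ratio(1.0), 3))   # ≈ 2.596
```

A MOR of 2.6 means that, for two randomly drawn clusters, the median odds ratio between the higher- and lower-risk cluster is about 2.6.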

  7. Chaos on the interval

    CERN Document Server

    Ruette, Sylvie

    2017-01-01

    The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...
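    A concrete illustration (not from the book) of how simple continuous interval maps exhibit chaos: the logistic map x -> r x (1 - x) on [0, 1] with r = 4 has Lyapunov exponent ln 2, which a direct orbit average over log|f'(x)| recovers numerically.

```python
# Numerical Lyapunov exponent of the logistic map, a chaotic interval map.
from math import log

def lyapunov_logistic(r: float, x0: float = 0.2, n: int = 100_000, burn: int = 1000) -> float:
    x = x0
    for _ in range(burn):                      # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        acc += log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)| along the orbit
    return acc / n

print(round(lyapunov_logistic(4.0), 3))        # ≈ 0.693 = ln 2
```

A positive Lyapunov exponent is the hallmark of sensitive dependence on initial conditions, one of the notions of chaos the book surveys.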

  8. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. As a by-product we obtain fast access to the baseline hazards (compared to survival::basehaz()) and predictions of survival probabilities, their confidence intervals and confidence bands. Confidence intervals and confidence bands are based on point-wise asymptotic expansions of the corresponding statistical functionals.

  9. Using a binary logistic regression method and GIS for evaluating and mapping the groundwater spring potential in the Sultan Mountains (Aksehir, Turkey)

    Science.gov (United States)

    Ozdemir, Adnan

    2011-07-01

    Summary: The purpose of this study is to produce a groundwater spring potential map of the Sultan Mountains in central Turkey, based on a logistic regression method within a Geographic Information System (GIS) environment. Using field surveys, the locations of the springs (440 springs) were determined in the study area. In this study, 17 spring-related factors were used in the analysis: geology, relative permeability, land use/land cover, precipitation, elevation, slope, aspect, total curvature, plan curvature, profile curvature, wetness index, stream power index, sediment transport capacity index, distance to drainage, distance to fault, drainage density, and fault density map. The coefficients of the predictor variables were estimated using binary logistic regression analysis and were used to calculate the groundwater spring potential for the entire study area. The accuracy of the final spring potential map was evaluated based on the observed springs, and the accuracy of the model was evaluated by calculating the relative operating characteristic. The area value under the relative operating characteristic curve was found to be 0.82. These results indicate that the model is a good estimator of the spring potential in the study area. The spring potential map shows that the areas of very low, low, moderate and high groundwater spring potential classes are 105.586 km² (28.99%), 74.271 km² (19.906%), 101.203 km² (27.14%), and 90.05 km² (24.671%), respectively. The interpretation of the potential map showed that stream power index, relative permeability of lithologies, geology, elevation, aspect, wetness index, plan curvature, and drainage density play major roles in spring occurrence and distribution in the Sultan Mountains. The logistic regression approach had not previously been used to delineate groundwater potential zones; in this study, it was used to locate potential zones for groundwater springs in the Sultan Mountains. The evolved model ...
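    The workflow in this record — fitting a binary logistic regression to presence/absence data on raster-derived predictors and validating with the area under the ROC curve — can be sketched as below. The three predictors here are synthetic stand-ins, not the study's 17 factors.

```python
# Hedged sketch: logistic regression for spring potential, scored by ROC AUC
# (the study reports AUC = 0.82 on its real data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))                   # e.g. wetness index, slope, drainage density
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)  # spring / no spring

model = LogisticRegression().fit(X, y)
potential = model.predict_proba(X)[:, 1]      # groundwater spring potential score
print(round(roc_auc_score(y, potential), 2))
```

In a real application each row would be a raster cell, and the fitted probabilities would be mapped back onto the grid to produce the potential map.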

  10. A comparative study of frequency ratio, weights of evidence and logistic regression methods for landslide susceptibility mapping: Sultan Mountains, SW Turkey

    Science.gov (United States)

    Ozdemir, Adnan; Altural, Tolga

    2013-03-01

    This study evaluated and compared landslide susceptibility maps produced with three different methods, frequency ratio, weights of evidence, and logistic regression, by using validation datasets. The field surveys performed as part of this investigation mapped the locations of 90 landslides that had been identified in the Sultan Mountains of south-western Turkey. The landslide influence parameters used for this study are geology, relative permeability, land use/land cover, precipitation, elevation, slope, aspect, total curvature, plan curvature, profile curvature, wetness index, stream power index, sediment transportation capacity index, distance to drainage, distance to fault, drainage density, fault density, and spring density maps. The relationships between landslide distributions and these parameters were analysed using the three methods, and the results of these methods were then used to calculate the landslide susceptibility of the entire study area. The accuracy of the final landslide susceptibility maps was evaluated based on the landslides observed during the fieldwork, and the accuracy of the models was evaluated by calculating each model's relative operating characteristic curve. The predictive capability of each model was determined from the area under the relative operating characteristic curve; the areas under the curves obtained using the frequency ratio, logistic regression, and weights of evidence methods are 0.976, 0.952, and 0.937, respectively. These results indicate that the frequency ratio and weights of evidence models are relatively good estimators of landslide susceptibility in the study area. Specifically, the results of the correlation analysis show a high correlation between the frequency ratio and weights of evidence results, and the frequency ratio and logistic regression methods exhibit correlation coefficients of 0.771 and 0.727, respectively. The frequency ratio model is simple, and its input, calculation and output processes are ...
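    The frequency ratio method the abstract praises for its simplicity amounts to one division per class: FR = (% of landslide pixels in a class) / (% of total area in that class). A minimal sketch with toy pixel counts (assumed layout, not the paper's data):

```python
# Frequency ratio per factor class; FR > 1 marks classes over-represented
# among landslides. Summing FR over all factor layers gives the
# susceptibility index for each cell.
import numpy as np

area_pixels = np.array([5000, 3000, 1500, 500])   # pixels per slope class
slide_pixels = np.array([50, 90, 100, 60])        # landslide pixels per class

fr = (slide_pixels / slide_pixels.sum()) / (area_pixels / area_pixels.sum())
print(np.round(fr, 2))
```

Here the steepest class is strongly over-represented (FR = 4), the flattest strongly under-represented, matching the intuitive reading of such tables.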

  11. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK and geographically weighted regression Kriging (GWRK methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM, normalized difference vegetation index (NDVI, solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.
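    The two-part split that regression kriging performs — a regression trend on auxiliary covariates plus kriging of the residuals — can be shown in a toy 1-D sketch. Assumptions made for brevity: a known exponential covariance and simple kriging; real RK would fit a variogram to the residuals and usually use ordinary kriging.

```python
# Toy regression-kriging: prediction = regression trend + kriged residual.
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 25))                  # site coordinates
elev = np.sin(x)                                     # auxiliary covariate
z = 2.0 + 1.5 * elev + rng.normal(0, 0.3, x.size)    # observed target

# 1) deterministic part: linear regression on the covariate
A = np.column_stack([np.ones_like(x), elev])
beta, *_ = np.linalg.lstsq(A, z, rcond=None)
resid = z - A @ beta

# 2) stochastic part: simple kriging of residuals, C(h) = s2 * exp(-h / a)
s2, a = resid.var(), 2.0
C = s2 * np.exp(-np.abs(x[:, None] - x[None, :]) / a)

def predict(x0, elev0):
    c0 = s2 * np.exp(-np.abs(x - x0) / a)
    w = np.linalg.solve(C, c0)                       # kriging weights
    return beta[0] + beta[1] * elev0 + w @ resid     # trend + residual

print(round(abs(predict(x[0], elev[0]) - z[0]), 6))  # exact at a data site
```

With a zero nugget the predictor honors the data exactly at observed sites, which is why the printed difference is (numerically) zero.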

  12. Spatial vulnerability assessments by regression kriging

    Science.gov (United States)

    Pásztor, László; Laborczi, Annamária; Takács, Katalin; Szatmári, Gábor

    2016-04-01

    information representing IEW or GRP forming environmental factors were taken into account to support the spatial inference of the locally experienced IEW frequency and measured GRP values, respectively. An efficient spatial prediction methodology was applied to construct reliable maps, namely regression kriging (RK) using spatially exhaustive auxiliary data on soil, geology, topography, land use and climate. RK divides the spatial inference into two parts. Firstly, the deterministic component of the target variable is determined by a regression model. The residuals of the multiple linear regression analysis represent the spatially varying but dependent stochastic component, which are interpolated by kriging. The final map is the sum of the two component predictions. Application of RK also provides the possibility of inherent accuracy assessment. The resulting maps are characterized by global and local measures of their accuracy. Additionally, the method enables interval estimation for the spatial extension of areas of predefined risk categories. All of these outputs provide a useful contribution to spatial planning, action planning and decision making. Acknowledgement: Our work was partly supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).

  13. A comparison between univariate probabilistic and multivariate (logistic regression) methods for landslide susceptibility analysis: the example of the Febbraro valley (Northern Alps, Italy)

    Science.gov (United States)

    Rossi, M.; Apuani, T.; Felletti, F.

    2009-04-01

    The aim of this paper is to compare the results of two statistical methods for landslide susceptibility analysis: 1) a univariate probabilistic method based on a landslide susceptibility index, and 2) a multivariate method (logistic regression). The study area is the Febbraro valley, located in the central Italian Alps, where different types of metamorphic rocks crop out. On the eastern part of the studied basin a quaternary cover, represented by colluvial and, secondarily, glacial deposits, is dominant. In this study 110 earth flows, mainly located in the NE portion of the catchment, were analyzed. They involve only the colluvial deposits and their extension mainly ranges from 36 to 3173 m². Both statistical methods require a spatial database, constructed using a Geographical Information System (GIS), in which each landslide is described by several parameters corresponding to the values at the central point of its main scarp. Based on a bibliographic review, a total of 15 predisposing factors were utilized. The width of the intervals into which the maps of the predisposing factors were reclassified has been defined assuming constant intervals for: elevation (100 m), slope (5°), solar radiation (0.1 MJ/cm²/year), profile curvature (1.2 1/m), tangential curvature (2.2 1/m), drainage density (0.5), lineament density (0.00126). For the other parameters, the results of the probability-probability plot analysis and the statistical indexes of the landslide sites were used. In particular slope length (0 ÷ 2, 2 ÷ 5, 5 ÷ 10, 10 ÷ 20, 20 ÷ 35, 35 ÷ 260), accumulation flow (0 ÷ 1, 1 ÷ 2, 2 ÷ 5, 5 ÷ 12, 12 ÷ 60, 60 ÷ 27265), Topographic Wetness Index (0 ÷ 0.74, 0.74 ÷ 1.94, 1.94 ÷ 2.62, 2.62 ÷ 3.48, 3.48 ÷ 6.00, 6.00 ÷ 9.44), Stream Power Index (0 ÷ 0.64, 0.64 ÷ 1.28, 1.28 ÷ 1.81, 1.81 ÷ 4.20, 4.20 ÷ 9
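    The constant-interval reclassification step described above (e.g. slope in 5° classes) is a one-liner with np.digitize. Toy values, not the paper's raster:

```python
# Reclassify a continuous predisposing-factor map into constant intervals.
import numpy as np

slope = np.array([1.2, 7.9, 14.5, 22.0, 33.3, 41.0])   # degrees, toy cells
edges = np.arange(5, 45, 5)                            # 5, 10, ..., 40
classes = np.digitize(slope, edges)                    # 0 = [0, 5), 1 = [5, 10), ...
print(classes)
```

Each cell now carries a class index that can be cross-tabulated against landslide locations for either of the two statistical methods.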

  14. Statistical properties of interval mapping methods on quantitative trait loci location: impact on QTL/eQTL analyses

    Directory of Open Access Journals (Sweden)

    Wang Xiaoqiang

    2012-04-01

    Full Text Available Abstract Background Quantitative trait loci (QTL) detection on a huge number of phenotypes, as in eQTL detection on transcriptomic data, can be dramatically impaired by the statistical properties of interval mapping methods. One of these major outcomes is the high number of QTL detected at marker locations. The present study aims at identifying and specifying the sources of this bias, in particular in the case of analysis of data issued from outbred populations. Analytical developments were carried out in a backcross situation in order to specify the bias and to propose an algorithm to control it. The outbred population context was studied through simulated data sets in a wide range of situations. The likelihood ratio test was firstly analyzed under the "one QTL" hypothesis in a backcross population. Designs of sib families were then simulated and analyzed using the QTL Map software. On the basis of the theoretical results in backcross, parameters such as the population size, the density of the genetic map, the QTL effect and the true location of the QTL were taken into account under the "no QTL" and the "one QTL" hypotheses. A combination of two non-parametric tests - the Kolmogorov-Smirnov test and the Mann-Whitney-Wilcoxon test - was used in order to identify the parameters that affected the bias and to specify how much they influenced the estimation of QTL location. Results A theoretical expression of the bias of the estimated QTL location was obtained for a backcross type population. We demonstrated a common source of bias under the "no QTL" and the "one QTL" hypotheses and qualified the possible influence of several parameters. Simulation studies confirmed that the bias exists in outbred populations under both the hypotheses of "no QTL" and "one QTL" on a linkage group. The QTL location was systematically closer to marker locations than expected, particularly in the case of low QTL effect, small population size or low density of markers ...

  15. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
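    For contrast with the boosting approach the paper proposes, the classical maximum-likelihood beta regression it starts from can be written in a few lines: a logit link for the mean and a constant precision phi, with y ~ Beta(mu*phi, (1-mu)*phi). Synthetic data, direct optimization; not the authors' code.

```python
# Minimal ML beta regression with logit mean link and constant precision.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(-1, 1, n)
mu = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))              # true mean on (0, 1)
phi_true = 20.0
y = rng.beta(mu * phi_true, (1 - mu) * phi_true)     # bounded response

def negloglik(theta):
    b0, b1, log_phi = theta
    m = 1 / (1 + np.exp(-(b0 + b1 * x)))
    phi = np.exp(log_phi)
    return -beta_dist.logpdf(y, m * phi, (1 - m) * phi).sum()

fit = minimize(negloglik, x0=[0.0, 0.0, np.log(10.0)],
               method="Nelder-Mead", options={"maxiter": 5000})
b0_hat, b1_hat, log_phi_hat = fit.x
print(round(b0_hat, 2), round(b1_hat, 2), round(np.exp(log_phi_hat), 1))
```

The recovered coefficients sit close to the generating values (0.5, 1.0, 20); boosting replaces this joint optimization with stepwise component-wise updates that also select variables.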

  16. Regression modeling and mapping of coniferous forest basal area and tree density from discrete-return lidar and multispectral data

    Science.gov (United States)

    Andrew T. Hudak; Nicholas L. Crookston; Jeffrey S. Evans; Michael K. Falkowski; Alistair M. S. Smith; Paul E. Gessler; Penelope Morgan

    2006-01-01

    We compared the utility of discrete-return light detection and ranging (lidar) data and multispectral satellite imagery, and their integration, for modeling and mapping basal area and tree density across two diverse coniferous forest landscapes in north-central Idaho. We applied multiple linear regression models subset from a suite of 26 predictor variables derived...

  17. riskRegression

    DEFF Research Database (Denmark)

    Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

    2017-01-01

    In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface...... for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by...... functionals. The software presented here is implemented in the riskRegression package....

  18. Semiparametric regression analysis of failure time data with dependent interval censoring.

    Science.gov (United States)

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-09-20

    Interval-censored failure-time data arise when subjects are examined or observed periodically such that the failure time of interest is not examined exactly but only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment usually tend to visit physicians more often after treatment than those with better health or immune system. In this situation, the visiting rate is positively correlated with the risk of failure due to the health status, which results in dependent interval-censored data. While some measurable factors affecting health status such as age, gender, and physical symptom can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving unobserved latent variables, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association of the failure time and visiting process. A shared gamma frailty is incorporated into the Cox model and proportional intensity model for the failure time and visiting process, respectively, in a multiplicative way. We propose a semiparametric maximum likelihood approach for estimating model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Transmission of linear regression patterns between time series: from relationship in time series to complex networks.

    Science.gov (United States)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period using a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
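    The core of the algorithm — label each sliding window by its regression parameters, then count transitions between adjacent labels as weighted directed edges — can be schematized as below. The sign-of-slope labeling is a crude stand-in for the paper's parameter-interval patterns.

```python
# Schematic pattern-transmission network from sliding-window regressions.
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
xs = np.arange(100, dtype=float)
ys = np.sin(xs / 8) + rng.normal(0, 0.1, xs.size)

window = 10
labels = []
for start in range(xs.size - window + 1):
    xw, yw = xs[start:start + window], ys[start:start + window]
    slope = np.polyfit(xw, yw, 1)[0]           # window-level regression slope
    labels.append("up" if slope >= 0 else "down")   # pattern node

edges = Counter(zip(labels[:-1], labels[1:]))  # directed edges with weights
print(dict(edges))
```

Replacing the two labels with a finer grid of slope/intercept intervals plus a significance flag yields the node set the paper analyzes with out-degree and betweenness centrality.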

  20. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
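    The exact confidence intervals for multiple-regression coefficients mentioned above follow directly from the normal equations: beta_hat = (X'X)^-1 X'y, with standard errors from the diagonal of s^2 (X'X)^-1 and t-quantiles on n - p degrees of freedom. A small worked example on synthetic data:

```python
# Multiple linear regression with exact 95% confidence intervals.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(5)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(0, 0.4, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = n - X.shape[1]
s2 = resid @ resid / df                          # residual variance estimate
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
half = t.ppf(0.975, df) * se                     # 95% CI half-widths

for b, h in zip(beta, half):
    print(f"{b:6.3f} ± {h:.3f}")
```

The intervals here are exact under the usual normal-error assumptions, which is the "well-developed mathematical framework" the abstract refers to.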

  1. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

    The subject of this paper is a new approach to Symbolic Regression.Other publications on Symbolic Regression use Genetic Programming.This paper describes an alternative method based on Pareto Simulated Annealing.Our method is based on linear regression for the estimation of constants.Interval

  2. A gentle introduction to quantile regression for ecologists

    Science.gov (United States)

    Cade, B.S.; Noon, B.R.

    2003-01-01

    Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model that provides a more complete view of possible causal relationships between variables in ecological processes. Typically, all the factors that affect ecological processes are not measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
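    The estimator behind quantile regression is minimization of the pinball (check) loss, which weights positive and negative residuals asymmetrically by tau and tau - 1. A bare-bones sketch for tau = 0.9 (dedicated packages exist; this is only to show the mechanics):

```python
# Quantile regression by direct minimization of the pinball loss.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 500
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(0, 1.0, n)   # homogeneous error for simplicity

def pinball(theta, tau):
    a, b = theta
    u = y - (a + b * x)
    return np.where(u >= 0, tau * u, (tau - 1) * u).sum()

fit = minimize(pinball, x0=[0.0, 0.0], args=(0.9,),
               method="Nelder-Mead", options={"maxiter": 5000})
a_hat, b_hat = fit.x
print(round(a_hat, 2), round(b_hat, 2))
```

With homogeneous normal errors the 0.9-quantile line is parallel to the mean line, just shifted up by about 1.28 error standard deviations; with heterogeneous errors (the ecologically interesting case) the quantile slopes would differ from the mean slope.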

  3. New machine learning tools for predictive vegetation mapping after climate change: Bagging and Random Forest perform better than Regression Tree Analysis

    Science.gov (United States)

    L.R. Iverson; A.M. Prasad; A. Liaw

    2004-01-01

    More and better machine learning tools are becoming available for landscape ecologists to aid in understanding species-environment relationships and to map probable species occurrence now and potentially into the future. To that end, we evaluated three statistical models: Regression Tree Analysis (RTA), Bagging Trees (BT) and Random Forest (RF) for their utility in...

  4. Hypotensive Response Magnitude and Duration in Hypertensives: Continuous and Interval Exercise

    Directory of Open Access Journals (Sweden)

    Raphael Santos Teodoro de Carvalho

    2015-03-01

    Full Text Available Background: Although exercise training is known to promote post-exercise hypotension, there is currently no consistent argument about the effects of manipulating its various components (intensity, duration, rest periods, types of exercise, training methods) on the magnitude and duration of hypotensive response. Objective: To compare the effect of continuous and interval exercises on hypotensive response magnitude and duration in hypertensive patients by using ambulatory blood pressure monitoring (ABPM). Methods: The sample consisted of 20 elderly hypertensives. Each participant underwent three ABPM sessions: one control ABPM, without exercise; one ABPM after continuous exercise; and one ABPM after interval exercise. Systolic blood pressure (SBP), diastolic blood pressure (DBP), mean arterial pressure (MAP), heart rate (HR) and double product (DP) were monitored to check post-exercise hypotension and for comparison between each ABPM. Results: ABPM after continuous exercise and after interval exercise showed post-exercise hypotension and a significant reduction (p < 0.05) in SBP, DBP, MAP and DP for 20 hours as compared with control ABPM. Comparing ABPM after continuous and ABPM after interval exercise, a significant reduction (p < 0.05) in SBP, DBP, MAP and DP was observed in the latter. Conclusion: Continuous and interval exercise trainings promote post-exercise hypotension with reduction in SBP, DBP, MAP and DP in the 20 hours following exercise. Interval exercise training causes greater post-exercise hypotension and lower cardiovascular overload as compared with continuous exercise.

  5. The number of subjects per variable required in linear regression analyses.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R(2) of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R(2), although adjusted R(2) estimates behaved well. The bias in estimating the model R(2) statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
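    A quick replication-in-spirit of the subjects-per-variable experiment: even with only 2 SPV (here 10 subjects, 5 predictors), exact t-intervals retain roughly their nominal 95% coverage under normal errors, consistent with the paper's conclusion. Synthetic setup, not the authors' simulation design.

```python
# Monte Carlo check of confidence-interval coverage at 2 subjects per variable.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(7)
k, spv = 5, 2
n = k * spv                                    # 10 subjects for 5 predictors
beta_true = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 1.0])  # intercept + 5 slopes

covered, nsim = 0, 2000
for _ in range(nsim):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
    y = X @ beta_true + rng.normal(0, 1, n)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    df = n - k - 1                             # only 4 residual df
    s2 = resid @ resid / df
    se1 = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    half = t.ppf(0.975, df) * se1
    covered += abs(beta[1] - beta_true[1]) <= half

print(round(covered / nsim, 3))                # ≈ 0.95
```

The instability the paper finds concerns the R(2) estimate, not the coefficient intervals, which remain exact here.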

  6. Towards molecular design using 2D-molecular contour maps obtained from PLS regression coefficients

    Science.gov (United States)

    Borges, Cleber N.; Barigye, Stephen J.; Freitas, Matheus P.

    2017-12-01

    The multivariate image analysis (MIA) descriptors used in quantitative structure-activity relationships are direct representations of chemical structures, as they are simply numerical decodifications of the pixels forming the 2D chemical images. These descriptors have found great utility in the modeling of diverse properties of organic molecules. Given the multicollinearity and high dimensionality of the data matrices generated with the MIA-QSAR approach, modeling techniques that involve the projection of the data space onto orthogonal components, e.g. Partial Least Squares (PLS), have been generally used. However, the chemical interpretation of the PLS-based MIA-QSAR models, in terms of the structural moieties affecting the modeled bioactivity, has not been straightforward. This work describes 2D-contour maps based on the PLS regression coefficients, as a means of assessing the relevance of single MIA predictors to the response variable, and thus allowing for the structural, electronic and physicochemical interpretation of the MIA-QSAR models. A sample study to demonstrate the utility of the 2D-contour maps to design novel drug-like molecules is performed using a dataset of some anti-HIV-1 2-amino-6-arylsulfonylbenzonitriles and derivatives, and the inferences obtained are consistent with other reports in the literature. In addition, the different schemes for encoding atomic properties in molecules are discussed and evaluated.

  7. From Rasch scores to regression

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables, and latent regression models based on the distribution of the score.

  8. Is Posidonia oceanica regression a general feature in the Mediterranean Sea?

    Directory of Open Access Journals (Sweden)

    M. BONACORSI

    2013-03-01

    Full Text Available Over the last few years, a widespread regression of Posidonia oceanica meadows has been noticed in the Mediterranean Sea. However, the magnitude of this decline is still debated. The objectives of this study are (i) to assess the spatio-temporal evolution of Posidonia oceanica around Cap Corse (Corsica) over time, comparing available ancient maps (from 1960) with a new (2011) detailed map realized by combining different techniques (aerial photographs, SSS, ROV, scuba diving); (ii) to evaluate the reliability of the ancient maps; (iii) to discuss the observed regression of the meadows in relation to human pressure along the 110 km of coast. Thus, the comparison with previous data shows that, apart from sites clearly identified with actual evolution, there is a relative stability of the surfaces occupied by the seagrass Posidonia oceanica. The recorded differences seem more related to changes in mapping techniques. These results confirm that in areas characterized by a moderate anthropogenic impact, the Posidonia oceanica meadow shows no significant regression and that the changes due to the evolution of mapping techniques are not negligible. However, other facts should be taken into account before extrapolating to the Mediterranean Sea (e.g. actually mapped surfaces and assessing the amplitude of the actual regression).

  9. A Comparison of Advanced Regression Algorithms for Quantifying Urban Land Cover

    Directory of Open Access Journals (Sweden)

    Akpona Okujeni

    2014-07-01

    Full Text Available Quantitative methods for mapping sub-pixel land cover fractions are gaining increasing attention, particularly with regard to upcoming hyperspectral satellite missions. We evaluated five advanced regression algorithms combined with synthetically mixed training data for quantifying urban land cover from HyMap data at 3.6 and 9 m spatial resolution. Methods included support vector regression (SVR), kernel ridge regression (KRR), artificial neural networks (NN), random forest regression (RFR) and partial least squares regression (PLSR). Our experiments demonstrate that both kernel methods SVR and KRR yield high accuracies for mapping complex urban surface types, i.e., rooftops, pavements, grass- and tree-covered areas. SVR and KRR models proved to be stable with regard to the spatial and spectral differences between both images and effectively utilized the higher complexity of the synthetic training mixtures for improving estimates for coarser resolution data. Observed deficiencies mainly relate to known problems arising from spectral similarities or shadowing. The remaining regressors either revealed erratic (NN) or limited (RFR and PLSR) performances when comprehensively mapping urban land cover. Our findings suggest that the combination of kernel-based regression methods, such as SVR and KRR, with synthetically mixed training data is well suited for quantifying urban land cover from imaging spectrometer data at multiple scales.
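    The synthetically-mixed-training-data idea can be sketched as follows: generate random linear mixtures of a few pure "spectra", then regress a class fraction on the mixed spectra with the two kernel methods the paper favors. Everything here (endmembers, noise levels, hyperparameters) is a synthetic assumption, not the HyMap setup.

```python
# Sketch: SVR and kernel ridge regression on synthetically mixed spectra.
import numpy as np
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import r2_score

rng = np.random.default_rng(8)
bands = 30
pure = rng.uniform(size=(3, bands))             # 3 endmember "spectra"

def mix(n):
    f = rng.dirichlet(np.ones(3), size=n)       # random fractions summing to 1
    spectra = f @ pure + rng.normal(0, 0.01, (n, bands))
    return spectra, f[:, 0]                     # target: fraction of class 0

X_tr, y_tr = mix(300)
X_te, y_te = mix(100)

scores = {}
for model in (SVR(C=10.0, epsilon=0.01),
              KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[type(model).__name__] = r2_score(y_te, pred)
print({k: round(v, 3) for k, v in scores.items()})
```

Because arbitrarily many labeled mixtures can be generated from a small spectral library, the regressors can be trained without per-pixel reference fractions, which is the practical appeal of the approach.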

  10. Hypotensive response magnitude and duration in hypertensives: continuous and interval exercise.

    Science.gov (United States)

    Carvalho, Raphael Santos Teodoro de; Pires, Cássio Mascarenhas Robert; Junqueira, Gustavo Cardoso; Freitas, Dayana; Marchi-Alves, Leila Maria

    2015-03-01

    Although exercise training is known to promote post-exercise hypotension, there is currently no consistent evidence about the effects of manipulating its various components (intensity, duration, rest periods, types of exercise, training methods) on the magnitude and duration of the hypotensive response. To compare the effect of continuous and interval exercise on hypotensive response magnitude and duration in hypertensive patients by using ambulatory blood pressure monitoring (ABPM). The sample consisted of 20 elderly hypertensives. Each participant underwent three ABPM sessions: one control ABPM, without exercise; one ABPM after continuous exercise; and one ABPM after interval exercise. Systolic blood pressure (SBP), diastolic blood pressure (DBP), mean arterial pressure (MAP), heart rate (HR) and double product (DP) were monitored to check post-exercise hypotension and for comparison between each ABPM. ABPM after continuous exercise and after interval exercise showed post-exercise hypotension, with a significant reduction (p < 0.05) compared with the control ABPM. Comparing ABPM after continuous and after interval exercise, a significant reduction (p < 0.05) in SBP, DBP, MAP and DP was observed in the latter. Continuous and interval exercise training promote post-exercise hypotension with reduction in SBP, DBP, MAP and DP in the 20 hours following exercise. Interval exercise training causes greater post-exercise hypotension and lower cardiovascular overload as compared with continuous exercise.

  11. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    Directory of Open Access Journals (Sweden)

    Ionut Bebu

    2016-06-01

    Full Text Available For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.
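A fiducial quantity for a binomial proportion can be sampled directly, which turns interval estimation into taking percentiles, with no likelihood maximization anywhere. The sketch below illustrates the idea for a prevalence ratio; the Jeffreys-style Beta(x + 0.5, n − x + 0.5) fiducial form and the 2x2 counts are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical study data: events / sample size in exposed and unexposed groups.
x1, n1 = 30, 100   # exposed
x0, n0 = 15, 100   # unexposed

# Fiducial quantities for each proportion: Beta(x + 0.5, n - x + 0.5) draws.
draws = 100_000
p1 = rng.beta(x1 + 0.5, n1 - x1 + 0.5, draws)
p0 = rng.beta(x0 + 0.5, n0 - x0 + 0.5, draws)

# The prevalence ratio inherits a fiducial distribution; its empirical
# percentiles give an interval estimate directly.
pr = p1 / p0
lo, hi = np.percentile(pr, [2.5, 97.5])
```

The same recipe extends to other functions of the proportions (odds ratios, NNT) simply by transforming the sampled quantities before taking percentiles.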

  12. SPLINE LINEAR REGRESSION USED FOR EVALUATING FINANCIAL ASSETS 1

    Directory of Open Access Journals (Sweden)

    Liviu GEAMBAŞU

    2010-12-01

    Full Text Available One of the most important preoccupations of financial market participants was and still is the problem of determining more precisely the trend of financial asset prices. Many scientific papers have been written, and many mathematical and statistical models developed, to better determine this trend. If until recently simple linear models were widely used because they are easy to apply, the financial crisis that hit the world economy starting in 2008 highlighted the need to adapt mathematical models to the variation of the economy. A model that is simple to use but adapted to the realities of economic life is spline linear regression. This type of regression keeps the regression function continuous but splits the studied data into intervals with homogeneous characteristics. The characteristics of each interval are highlighted, as well as the evolution of the market over all the intervals, resulting in reduced standard errors. The first objective of the article is the theoretical presentation of spline linear regression, with reference to national and international scientific papers on this subject. The second objective is applying the theoretical model to data from the Bucharest Stock Exchange
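A continuous linear spline can be fitted by ordinary least squares once a truncated-linear basis function is added for each knot: the slope changes at each knot, but the fitted function stays continuous. The sketch below uses simulated data; the knot locations and regime slopes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "price" series with two regime changes at t = 3 and t = 7.
t = np.linspace(0, 10, 200)
true = np.piecewise(t, [t < 3, (t >= 3) & (t < 7), t >= 7],
                    [lambda s: 1.0 + 0.5 * s,
                     lambda s: 2.5 - 0.3 * (s - 3),
                     lambda s: 1.3 + 0.8 * (s - 7)])
y = true + rng.normal(0, 0.05, t.size)

# Continuous linear spline basis: 1, t, and (t - k)_+ for each knot k.
knots = [3.0, 7.0]
B = np.column_stack([np.ones_like(t), t] +
                    [np.clip(t - k, 0, None) for k in knots])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef     # coef[2:] are the slope *changes* at the knots
```

Here `coef[1]` estimates the first-regime slope (0.5) and `coef[2]`, `coef[3]` estimate the slope changes at the knots (-0.8 and +1.1); continuity at the knots is guaranteed by the basis construction rather than imposed as a constraint.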

  13. QT interval in healthy dogs: which method of correcting the QT interval in dogs is appropriate for use in small animal clinics?

    Directory of Open Access Journals (Sweden)

    Maira S. Oliveira

    2014-05-01

    Full Text Available The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of ventricular repolarisation, which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs across different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination, and QT intervals were determined in triplicate from the bipolar limb lead II and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the various formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 − RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, QTcV was considered the most appropriate for the correction of the QT interval in dogs.
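The selected linear correction, QTcV = QT + 0.087(1 − RR), is simple enough to apply directly in the clinic. A minimal sketch (the helper name and the example values are illustrative, not from the study):

```python
# Linear QT correction used in the study, with QT and RR in seconds:
#   QTcV = QT + 0.087 * (1 - RR)
def qtc_linear(qt_s: float, hr_bpm: float) -> float:
    rr_s = 60.0 / hr_bpm              # RR interval derived from heart rate
    return qt_s + 0.087 * (1.0 - rr_s)

# Hypothetical dog: QT = 0.21 s at 120 bpm, so RR = 0.5 s.
qtc = qtc_linear(0.21, 120.0)         # 0.21 + 0.087 * 0.5 = 0.2535 s
```

At HR = 60 bpm (RR = 1 s) the correction vanishes, which is the usual convention for rate-corrected QT formulas.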

  14. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to that of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the

  15. Algorithms and Complexity Results for Genome Mapping Problems.

    Science.gov (United States)

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  16. A comparison of regression algorithms for wind speed forecasting at Alexander Bay

    CSIR Research Space (South Africa)

    Botha, Nicolene

    2016-12-01

    Full Text Available to forecast 1 to 24 hours ahead, in hourly intervals. Predictions are performed on a wind speed time series with three machine learning regression algorithms, namely support vector regression, ordinary least squares and Bayesian ridge regression. The resulting...

  17. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan

    This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...

  18. Use of multiple linear regression and logistic regression models to investigate changes in birthweight for term singleton infants in Scotland.

    Science.gov (United States)

    Bonellie, Sandra R

    2012-10-01

    To illustrate the use of regression and logistic regression models to investigate changes over time in the size of babies, particularly in relation to social deprivation, age of the mother and smoking. Mean birthweight has been found to be increasing in many countries in recent years, but there is still a group of babies who are born with low birthweights. Population-based retrospective cohort study. Multiple linear regression and logistic regression models are used to analyse data on term 'singleton births' from Scottish hospitals between 1994-2003. Mothers who smoke are shown to give birth to lighter babies on average, a difference of approximately 0.57 standard deviations lower (95% confidence interval 0.55-0.58) when adjusted for sex and parity. These mothers are also more likely to have babies that are low birthweight (odds ratio 3.46, 95% confidence interval 3.30-3.63) compared with non-smokers. Low birthweight is 30% more likely where the mother lives in the most deprived areas compared with the least deprived (odds ratio 1.30, 95% confidence interval 1.21-1.40). Smoking during pregnancy is shown to have a detrimental effect on the size of infants at birth. This effect explains some, though not all, of the observed socioeconomic differences in birthweight. It also explains much of the observed birthweight differences by the age of the mother. Identifying mothers at greater risk of having a low birthweight baby has important implications for the care and advice this group receives. © 2012 Blackwell Publishing Ltd.
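Odds ratios and their confidence intervals of the kind reported above follow from exponentiating a normal-theory interval on the log-odds scale. A sketch with illustrative numbers only (the standard error below is a back-of-envelope assumption, not a value from the paper):

```python
import math

# Turn a logistic-regression coefficient b and its standard error se
# into an odds ratio with a 95% confidence interval.
def or_ci(b: float, se: float, z: float = 1.96):
    return math.exp(b), (math.exp(b - z * se), math.exp(b + z * se))

# Illustration: b = ln(3.46) with a hypothetical se of 0.024.
odds_ratio, (lo, hi) = or_ci(math.log(3.46), 0.024)
```

Because the interval is symmetric on the log scale, it is slightly asymmetric around the odds ratio itself, which is why published OR intervals like 3.30-3.63 are not centred on the point estimate.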

  19. Interval-Censored Time-to-Event Data Methods and Applications

    CERN Document Server

    Chen, Ding-Geng

    2012-01-01

    Interval-Censored Time-to-Event Data: Methods and Applications collects the most recent techniques, models, and computational tools for interval-censored time-to-event data. Top biostatisticians from academia, biopharmaceutical industries, and government agencies discuss how these advances are impacting clinical trials and biomedical research. Divided into three parts, the book begins with an overview of interval-censored data modeling, including nonparametric estimation, survival functions, regression analysis, multivariate data analysis, competing risks analysis, and other models for interva

  20. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    International Nuclear Information System (INIS)

    Althuwaynee, Omar F; Pradhan, Biswajeet; Ahmad, Noordin

    2014-01-01

    This article uses methodology based on chi-squared automatic interaction detection (CHAID), as a multivariate method that has an automatic classification capacity to analyse large numbers of landslide conditioning factors. This new algorithm was developed to overcome the subjectivity of the manual categorization of scale data of landslide conditioning factors, and to predict rainfall-induced susceptibility map in Kuala Lumpur city and surrounding areas using geographic information system (GIS). The main objective of this article is to use CHi-squared automatic interaction detection (CHAID) method to perform the best classification fit for each conditioning factor, then, combining it with logistic regression (LR). LR model was used to find the corresponding coefficients of best fitting function that assess the optimal terminal nodes. A cluster pattern of landslide locations was extracted in previous study using nearest neighbor index (NNI), which were then used to identify the clustered landslide locations range. Clustered locations were used as model training data with 14 landslide conditioning factors such as; topographic derived parameters, lithology, NDVI, land use and land cover maps. Pearson chi-squared value was used to find the best classification fit between the dependent variable and conditioning factors. Finally the relationship between conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. An area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations respectively. This study proved the efficiency and reliability of decision tree (DT) model in landslide susceptibility mapping. Also it provided a valuable scientific basis for spatial decision making in planning and urban management studies

  1. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    Science.gov (United States)

    Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin

    2014-06-01

    This article uses methodology based on chi-squared automatic interaction detection (CHAID), as a multivariate method that has an automatic classification capacity to analyse large numbers of landslide conditioning factors. This new algorithm was developed to overcome the subjectivity of the manual categorization of scale data of landslide conditioning factors, and to predict rainfall-induced susceptibility map in Kuala Lumpur city and surrounding areas using geographic information system (GIS). The main objective of this article is to use CHi-squared automatic interaction detection (CHAID) method to perform the best classification fit for each conditioning factor, then, combining it with logistic regression (LR). LR model was used to find the corresponding coefficients of best fitting function that assess the optimal terminal nodes. A cluster pattern of landslide locations was extracted in previous study using nearest neighbor index (NNI), which were then used to identify the clustered landslide locations range. Clustered locations were used as model training data with 14 landslide conditioning factors such as; topographic derived parameters, lithology, NDVI, land use and land cover maps. Pearson chi-squared value was used to find the best classification fit between the dependent variable and conditioning factors. Finally the relationship between conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. An area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations respectively. This study proved the efficiency and reliability of decision tree (DT) model in landslide susceptibility mapping. Also it provided a valuable scientific basis for spatial decision making in planning and urban management studies.

  2. Research on Driver Behavior in Yellow Interval at Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available Vehicles are often caught in the dilemma zone when they approach signalized intersections during the yellow interval. The existence of the dilemma zone, which is significantly influenced by driver behavior, seriously affects the efficiency and safety of intersections. This paper proposes driver behavior models for the yellow interval built by logistic regression and fuzzy decision tree modeling, respectively, based on camera image data. The vehicle's speed and distance to the stop line are considered in the logistic regression model, which also brings in a dummy variable to describe the installation of a countdown timer display. The fuzzy decision tree model is generated by the FID3 algorithm, whose heuristic information is fuzzy information entropy based on membership functions. This paper concludes that the fuzzy decision tree describes driver behavior at signalized intersections more accurately than the logistic regression model.

  3. Application of Fuzzy Logic Inference System, Interval Numbers and Mapping Operator for Determination of Risk Level

    Directory of Open Access Journals (Sweden)

    Mohsen Omidvar

    2015-12-01

    Full Text Available Background & objective: Due to features such as an intuitive graphical appearance, ease of perception and straightforward applicability, the risk matrix has become one of the most widely used risk assessment tools. On the other hand, features such as the lack of precision in the classification of the risk index, as well as a subjective computational process, have limited its use. In order to solve this problem, in the current study we used fuzzy logic inference systems and mathematical operators (interval numbers and a mapping operator). Methods: In this study, first, 10 risk scenarios in the excavation and piping process were selected; then the outcome of the risk assessment was studied using four types of matrix: traditional (ORM), displaced cells (RCM), extended (ERM) and fuzzy (FRM) risk matrices. Results: The results showed that use of the FRM and ERM matrices is preferable, due to the high level of "Risk Tie Density" (RTD) and "Risk Level Density" (RLD) in the ORM and RCM matrices, and to the more accurate results of FRM and ERM in risk assessment. The FRM matrix provides the most reliable results due to the application of fuzzy membership functions. Conclusion: Using newer mathematical tools such as fuzzy sets, interval arithmetic and mapping operators for risk assessment can improve the accuracy of the risk matrix and increase the reliability of the risk assessment results when accurate data are not available, or when data are available only within a limited range.

  4. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary, and worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage; i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
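The coverage criterion described above is easy to replicate: repeatedly simulate data whose errors violate normality, fit OLS, and count how often the normal-theory 95% interval captures the true slope. A minimal sketch (the sample size, error distribution and all other values are assumptions, not the commentary's exact setup):

```python
import numpy as np

rng = np.random.default_rng(7)

def slope_ci(x, y):
    """OLS slope with its normal-theory 95% confidence interval."""
    n = x.size
    xc = x - x.mean()
    b = (xc * y).sum() / (xc ** 2).sum()
    a = y.mean() - b * x.mean()
    resid = y - a - b * x
    se = np.sqrt((resid ** 2).sum() / (n - 2) / (xc ** 2).sum())
    return b, (b - 1.96 * se, b + 1.96 * se)

# Heavily skewed (shifted exponential) errors violate normality;
# count how often the interval still covers the true slope.
true_slope, n, sims, hits = 2.0, 500, 2000, 0
x = rng.uniform(0, 1, n)
for _ in range(sims):
    y = 1.0 + true_slope * x + rng.exponential(1.0, n) - 1.0  # mean-zero, skewed
    _, (lo, hi) = slope_ci(x, y)
    hits += lo <= true_slope <= hi
coverage = hits / sims
```

With a few hundred observations per fit, coverage sits close to the nominal 95% despite the skewed errors, which is exactly the commentary's point about large samples.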

  5. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest have increased for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
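One ad hoc way to approximate prediction uncertainty for a random forest is to examine the spread of the individual trees' predictions. The sketch below illustrates that idea on toy 1-D data using the scikit-learn API; it is a rough proxy, not the approximation method proposed in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Toy 1-D surface; in a mapping application X would hold spectral or
# terrain covariates for each pixel.
X = rng.uniform(0, 2 * np.pi, (400, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 400)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Per-tree predictions: their mean is the forest prediction, and their
# standard deviation is a rough indication of where the trees disagree.
X_new = np.linspace(0, 2 * np.pi, 50)[:, None]
per_tree = np.stack([t.predict(X_new) for t in rf.estimators_])
mean_pred = per_tree.mean(axis=0)      # equals rf.predict(X_new)
spread = per_tree.std(axis=0)          # larger where trees disagree
```

Mapping `spread` alongside `mean_pred` gives a per-pixel disagreement layer, though it understates true prediction error because the trees share training data.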

  6. Statistical methods in regression and calibration analysis of chromosome aberration data

    International Nuclear Information System (INIS)

    Merkle, W.

    1983-01-01

    The method of iteratively reweighted least squares for the regression analysis of Poisson distributed chromosome aberration data is reviewed in the context of other fit procedures used in the cytogenetic literature. As an application of the resulting regression curves methods for calculating confidence intervals on dose from aberration yield are described and compared, and, for the linear quadratic model a confidence interval is given. Emphasis is placed on the rational interpretation and the limitations of various methods from a statistical point of view. (orig./MG)
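Iteratively reweighted least squares for Poisson-distributed counts can be written in a few lines. The sketch below fits a log-link Poisson regression to simulated dose-response data; the log-linear predictor, the parameter values, and the simulated design are all assumptions for illustration (the reviewed work also covers other fit procedures and linear-quadratic dose models, which would simply add a squared-dose column).

```python
import numpy as np

rng = np.random.default_rng(5)

def poisson_irls(X, y, iters=25):
    """Fit a log-link Poisson regression by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 1e-9)          # sensible starting intercept
    for _ in range(iters):
        mu = np.exp(X @ beta)                  # current fitted aberration yield
        W = mu                                 # Poisson working weights
        z = X @ beta + (y - mu) / mu           # working response
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)  # weighted normal equations
    return beta

# Simulated dose-response: log(yield) = 0.2 + 0.5 * dose.
dose = rng.uniform(0.1, 4.0, 300)
X = np.column_stack([np.ones_like(dose), dose])
y = rng.poisson(np.exp(0.2 + 0.5 * dose))
beta = poisson_irls(X, y)
```

Each IRLS step is just a weighted least-squares solve, so the same machinery also yields the covariance matrix `(X'WX)^-1` from which dose confidence intervals can be propagated.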

  7. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP)—each with multiple Total Maximum Daily Load (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met at the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million—marginally higher, but approximately equal to that of the NIMS solution. The results highlight the utility of NIMS for decision making in large-scale watershed simulation-optimization formulations.

  8. Hierarchical tone mapping for high dynamic range image visualization

    Science.gov (United States)

    Qiu, Guoping; Duan, Jiang

    2005-07-01

    In this paper, we present a computationally efficient, practically easy-to-use tone mapping technique for the visualization of high dynamic range (HDR) images on low dynamic range (LDR) reproduction devices. The new method, termed the hierarchical nonlinear linear (HNL) tone-mapping operator, maps the pixels in two hierarchical steps. The first step allocates appropriate numbers of LDR display levels to different HDR intensity intervals according to the pixel densities of the intervals. The second step linearly maps the HDR intensity intervals to their allocated LDR display levels. In the developed HNL scheme, the assignment of LDR display levels to HDR intensity intervals is controlled by a very simple and flexible formula with a single adjustable parameter. We also show that the new operator can be used for the effective enhancement of ordinary images.
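The two HNL steps can be prototyped with a histogram: allocate LDR levels to intensity intervals in proportion to pixel counts, then map linearly within each interval. The sketch below is one plausible reading of the operator, not the authors' implementation; the log-domain equal-width intervals, the allocation rule and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def hnl_tone_map(hdr, n_intervals=64, out_levels=256):
    """Two-step tone map: density-based level allocation, then linear mapping."""
    log_i = np.log(hdr + 1e-6)
    edges = np.linspace(log_i.min(), log_i.max(), n_intervals + 1)
    counts, _ = np.histogram(log_i, edges)
    # Step 1: allocate LDR display levels to intervals by pixel density
    # (at least one level per interval so no interval collapses).
    alloc = np.maximum(1, np.round(counts / counts.sum()
                                   * (out_levels - n_intervals)))
    upper = np.cumsum(alloc)          # top LDR level of each interval
    lower = upper - alloc             # bottom LDR level of each interval
    # Step 2: linear map within each interval.
    idx = np.clip(np.digitize(log_i, edges) - 1, 0, n_intervals - 1)
    w = (log_i - edges[idx]) / (edges[idx + 1] - edges[idx])
    ldr = lower[idx] + w * alloc[idx]
    return np.clip(ldr, 0, out_levels - 1).astype(np.uint8)

hdr = rng.lognormal(0.0, 2.0, (64, 64))   # synthetic HDR image
ldr = hnl_tone_map(hdr)
```

Because levels are allocated by density, densely populated intensity ranges receive more output codes, while the per-interval mapping stays linear and monotone, so pixel ordering is preserved.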

  9. Tropical Forest Fire Susceptibility Mapping at the Cat Ba National Park Area, Hai Phong City, Vietnam, Using GIS-Based Kernel Logistic Regression

    Directory of Open Access Journals (Sweden)

    Dieu Tien Bui

    2016-04-01

    Full Text Available The Cat Ba National Park area (Vietnam), with its tropical forest, is recognized as being part of world biodiversity conservation by the United Nations Educational, Scientific and Cultural Organization (UNESCO) and is a well-known destination for tourists, with around 500,000 travelers per year. This area has been the site for many research projects; however, no project has been carried out for forest fire susceptibility assessment. Thus, protection of the forest, including fire prevention, is one of the main concerns of the local authorities. This work aims to produce a tropical forest fire susceptibility map for the Cat Ba National Park area, which may be helpful for the local authorities in forest fire protection management. To obtain this purpose, first, historical forest fires and related factors were collected from various sources to construct a GIS database. Then, a forest fire susceptibility model was developed using kernel logistic regression. The quality of the model was assessed using the Receiver Operating Characteristic (ROC) curve, the area under the ROC curve (AUC), and five statistical evaluation measures. The usability of the resulting model is further compared with a benchmark model, the support vector machine (SVM). The results show that the kernel logistic regression model has a high level of performance in both the training and validation datasets, with a prediction capability of 92.2%. Since the kernel logistic regression model outperforms the benchmark model, we conclude that the proposed model is a promising alternative tool that should also be considered for forest fire susceptibility mapping in other areas. The results of this study are useful for the local authorities in forest planning and management.

  10. Using interval maxima regression (IMR) to determine environmental optima controlling Microcystis spp. growth in Lake Taihu.

    Science.gov (United States)

    Li, Ming; Peng, Qiang; Xiao, Man

    2016-01-01

    Fortnightly investigations at 12 sampling sites in Meiliang Bay and Gonghu Bay of Lake Taihu (China) were carried out from June to early November 2010. The relationship between abiotic factors and cell density of different Microcystis species was analyzed using the interval maxima regression (IMR) to determine the optimum temperature and nutrient concentrations for growth of different Microcystis species. Our results showed that cell density of all the Microcystis species increased along with the increase of water temperature, but Microcystis aeruginosa adapted to a wide range of temperatures. The optimum total dissolved nitrogen concentrations for M. aeruginosa, Microcystis wesenbergii, Microcystis ichthyoblabe, and unidentified Microcystis were 3.7, 2.0, 2.4, and 1.9 mg L(-1), respectively. The optimum total dissolved phosphorus concentrations for different species were M. wesenbergii (0.27 mg L(-1)) > M. aeruginosa (0.1 mg L(-1)) > M. ichthyoblabe (0.06 mg L(-1)) ≈ unidentified Microcystis, and the iron (Fe(3+)) concentrations were M. wesenbergii (0.73 mg L(-1)) > M. aeruginosa (0.42 mg L(-1)) > M. ichthyoblabe (0.35 mg L(-1)) > unidentified Microcystis (0.09 mg L(-1)). The above results suggest that if phosphorus concentration was reduced to 0.06 mg L(-1) or/and iron concentration was reduced to 0.35 mg L(-1) in Lake Taihu, the large colonial M. wesenbergii and M. aeruginosa would be replaced by small colonial M. ichthyoblabe and unidentified Microcystis. Thereafter, the intensity and frequency of the occurrence of Microcystis blooms would be reduced by changing Microcystis species composition.
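The idea behind interval maxima regression can be illustrated by binning the predictor, keeping only the maximum response in each bin (the upper envelope, since most field samples sit below the environmental ceiling), and regressing on those maxima to locate the optimum. The following is a hedged toy sketch with synthetic data; the bin count, the quadratic envelope and the parameter values are assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic survey: growth ceiling peaks at a nutrient level of 2.0 mg/L;
# most samples fall below the ceiling because other factors limit growth,
# so an ordinary mean regression would underestimate the optimum response.
nutrient = rng.uniform(0.0, 4.0, 500)
ceiling = 8.0 - 2.0 * (nutrient - 2.0) ** 2          # true upper envelope
cells = ceiling * rng.uniform(0.1, 1.0, 500)         # observed cell density

# Interval maxima regression (sketch): maximum per interval, then fit a
# quadratic to those maxima and read off the optimum.
edges = np.linspace(0.0, 4.0, 21)
idx = np.clip(np.digitize(nutrient, edges) - 1, 0, 19)
mx, md = [], []
for k in range(20):
    sel = idx == k
    if sel.any():
        mx.append(cells[sel].max())
        md.append(0.5 * (edges[k] + edges[k + 1]))   # interval midpoint
c2, c1, c0 = np.polyfit(md, mx, 2)
optimum = -c1 / (2.0 * c2)     # argmax of the fitted parabola
```

Because only the envelope points enter the fit, the recovered optimum tracks the ceiling's peak near 2.0 even though the bulk of the observations lie well below it.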

  11. Socioeconomic position and the primary care interval

    DEFF Research Database (Denmark)

    Vedsted, Anders

    2018-01-01

    Introduction. Diagnostic delays affect cancer survival negatively. Thus, the time interval from symptomatic presentation to a GP until referral to secondary care (i.e. the primary care interval (PCI)) should be as short as possible. Lower socioeconomic position seems associated with poorer cancer … to the easiness of interpreting the symptoms of the underlying cancer. Methods. We conducted a population-based cohort study using survey data on time intervals linked at an individual level to routinely collected data on demographics from Danish registries. Using logistic regression we estimated the odds … Patients younger than 45 years of age and older than 54 years of age had a longer primary care interval than patients aged '45-54' years. No other associations for SEP characteristics were observed. The findings may imply that GPs are referring patients regardless of SEP, although some room for improvement prevails …

  12. Algorithms for necklace maps

    NARCIS (Netherlands)

    Speckmann, B.; Verbeek, K.A.B.

    2015-01-01

    Necklace maps visualize quantitative data associated with regions by placing scaled symbols, usually disks, without overlap on a closed curve (the necklace) surrounding the map regions. Each region is projected onto an interval on the necklace that contains its symbol. In this paper we address the

  13. Mass movement susceptibility mapping - A comparison of logistic regression and Weight of evidence methods in Taounate-Ain Aicha region (Central Rif, Morocco

    Directory of Open Access Journals (Sweden)

    JEMMAH A I

    2018-01-01

    Full Text Available The Taounate region is known for a high density of mass movements, which cause considerable human and economic losses. The goal of this paper is to assess the landslide susceptibility of Taounate using the Weight of Evidence method (WofE) and the Logistic Regression method (LR). Seven conditioning factors were used in this study: lithology, fault, drainage, slope, elevation, exposure and land use. Over the years, this site and its surroundings have experienced repeated landslides. For this reason, landslide susceptibility mapping is mandatory for risk prevention and land-use management. In this study, we focused on recent large-scale mass movements. Finally, ROC curves were established to evaluate the degree of fit of the models and to choose the best landslide susceptibility zonation. The inventoried mass movement locations were split: 50% were randomly selected as input data for the modelling process using the Spatial Data Model (SDM), and the remaining locations were used for validation purposes. The obtained WofE landslide susceptibility map shows that the high to very high susceptibility zones contain 62% of the inventoried landslides, while the same zones contain only 47% of landslides in the map obtained by the LR method. The obtained landslide susceptibility map is a major contribution to various urban and regional development plans under the Taounate Region National Development Program.
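
    The core weights-of-evidence calculation behind such maps can be sketched in a few lines (the counts below are invented for illustration): for a binary evidence layer B and landslide occurrence D, the positive and negative weights compare the conditional probabilities of the evidence given landslide presence versus absence.

```python
import math

# Weights-of-evidence sketch with hypothetical counts:
# W+ = ln[P(B|D) / P(B|not D)],  W- = ln[P(not B|D) / P(not B|not D)].
# The contrast C = W+ - W- measures how strongly evidence B discriminates
# landslide cells from landslide-free cells.

def wofe_weights(n_b_d, n_b_nd, n_nb_d, n_nb_nd):
    """Counts of cells with/without evidence B, with/without landslide D."""
    n_d = n_b_d + n_nb_d        # all landslide cells
    n_nd = n_b_nd + n_nb_nd     # all landslide-free cells
    w_plus = math.log((n_b_d / n_d) / (n_b_nd / n_nd))
    w_minus = math.log((n_nb_d / n_d) / (n_nb_nd / n_nd))
    return w_plus, w_minus, w_plus - w_minus

# Hypothetical 10,000-cell study area: 80 of 100 landslide cells fall on
# steep slopes, which also cover 3,000 of the 9,900 landslide-free cells.
w_plus, w_minus, contrast = wofe_weights(80, 3000, 20, 6900)
```

    Summing the weights of all evidence layers at each cell (under the usual conditional-independence assumption) gives the susceptibility score that is then classed into the zones described above.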

  14. Ergodicity of polygonal slap maps

    International Nuclear Information System (INIS)

    Del Magno, Gianluigi; Pedro Gaivão, José; Lopes Dias, João; Duarte, Pedro

    2014-01-01

    Polygonal slap maps are piecewise affine expanding maps of the interval obtained by projecting the sides of a polygon along their normals onto the perimeter of the polygon. These maps arise in the study of polygonal billiards with non-specular reflection laws. We study the absolutely continuous invariant probabilities (acips) of the slap maps for several polygons, including regular polygons and triangles. We also present a general method for constructing polygons with slap maps with more than one ergodic acip. (paper)

  15. Comparison of stochastic and regression based methods for quantification of predictive uncertainty of model-simulated wellhead protection zones in heterogeneous aquifers

    DEFF Research Database (Denmark)

    Christensen, Steen; Moore, C.; Doherty, J.

    2006-01-01

    For a synthetic case we computed three types of individual prediction intervals for the location of the aquifer entry point of a particle that moves through a heterogeneous aquifer and ends up in a pumping well. (a) The nonlinear regression-based interval (Cooley, 2004) was found to be nearly accurate and required a few hundred model calls to be computed. (b) The linearized regression-based interval (Cooley, 2004) required just over a hundred model calls and also appeared to be nearly correct. (c) The calibration-constrained Monte-Carlo interval (Doherty, 2003) was found to be narrower than the regression-based intervals but required about half a million model calls. It is unclear whether or not this type of prediction interval is accurate.

  16. Regression algorithm for emotion detection

    OpenAIRE

    Berthelon , Franck; Sander , Peter

    2013-01-01

    International audience; We present here two components of a computational system for emotion detection. PEMs (Personalized Emotion Maps) store links between bodily expressions and emotion values, and are individually calibrated to capture each person's emotion profile. They are an implementation based on aspects of Scherer's theoretical complex system model of emotion~\\cite{scherer00, scherer09}. We also present a regression algorithm that determines a person's emotional feeling from sensor m...

  17. Integrated genetic linkage map of cultivated peanut by three RIL populations

    Institute of Scientific and Technical Information of China (English)

    Yanbin Song; Huifang Jiang; Huaiyong Luo; Li Huang; Yuning Chen; Weigang Chen; Nian Liu; Xiaoping Ren; Bolun Yu; Jianbin Guo

    2017-01-01

    High-density and precise genetic linkage maps are fundamental to detecting quantitative trait loci (QTL) of agronomic and quality related traits in cultivated peanut (Arachis hypogaea L.). In this study, three linkage maps from three RIL (recombinant inbred line) populations were used to construct an integrated map. A total of 2,069 SSR and transposon markers were anchored on the high-density integrated map, which covered 2,231.53 cM with 20 linkage groups. In total, 92 QTLs correlating with pod length (PL), pod width (PW), hundred pods weight (HPW) and plant height (PH) from the above RIL populations were mapped on it. Seven intervals were found to harbor QTLs controlling the same traits in different populations, including one for PL, three for PW, two for HPW, and one for PH. Besides, QTLs controlling different traits in different populations were found to overlap in four intervals. An interval on A05 contains 17 QTLs for different traits from two RIL populations. New markers were added to these intervals to detect QTLs with narrow confidence intervals. Results obtained in this study may facilitate future genomic research such as QTL study, fine mapping, positional cloning and marker-assisted selection (MAS) in peanut.

  18. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a

  19. Constraint-based Attribute and Interval Planning

    Science.gov (United States)

    Jonsson, Ari; Frank, Jeremy

    2013-01-01

    In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.

  20. Modeling Relationships Between Flight Crew Demographics and Perceptions of Interval Management

    Science.gov (United States)

    Remy, Benjamin; Wilson, Sara R.

    2016-01-01

    The Interval Management Alternative Clearances (IMAC) human-in-the-loop simulation experiment was conducted to assess interval management system performance and participants' acceptability and workload while performing three interval management clearance types. Twenty-four subject pilots and eight subject controllers flew ten high-density arrival scenarios into Denver International Airport during two weeks of data collection. This analysis examined the possible relationships between subject pilot demographics on reported perceptions of interval management in IMAC. Multiple linear regression models were created with a new software tool to predict subject pilot questionnaire item responses from demographic information. General patterns were noted across models that may indicate flight crew demographics influence perceptions of interval management.

  1. Identifying Generalizable Image Segmentation Parameters for Urban Land Cover Mapping through Meta-Analysis and Regression Tree Modeling

    Directory of Open Access Journals (Sweden)

    Brian A. Johnson

    2018-01-01

    Full Text Available The advent of very high resolution (VHR satellite imagery and the development of Geographic Object-Based Image Analysis (GEOBIA have led to many new opportunities for fine-scale land cover mapping, especially in urban areas. Image segmentation is an important step in the GEOBIA framework, so great time/effort is often spent to ensure that computer-generated image segments closely match real-world objects of interest. In the remote sensing community, segmentation is frequently performed using the multiresolution segmentation (MRS algorithm, which is tuned through three user-defined parameters (the scale, shape/color, and compactness/smoothness parameters. The scale parameter (SP is the most important parameter and governs the average size of generated image segments. Existing automatic methods to determine suitable SPs for segmentation are scene-specific and often computationally intensive, so an approach to estimating appropriate SPs that is generalizable (i.e., not scene-specific could speed up the GEOBIA workflow considerably. In this study, we attempted to identify generalizable SPs for five common urban land cover types (buildings, vegetation, roads, bare soil, and water through meta-analysis and nonlinear regression tree (RT modeling. First, we performed a literature search of recent studies that employed GEOBIA for urban land cover mapping and extracted the MRS parameters used, the image properties (i.e., spatial and radiometric resolutions, and the land cover classes mapped. Using this data extracted from the literature, we constructed RT models for each land cover class to predict suitable SP values based on the: image spatial resolution, image radiometric resolution, shape/color parameter, and compactness/smoothness parameter. Based on a visual and quantitative analysis of results, we found that for all land cover classes except water, relatively accurate SPs could be identified using our RT modeling results. 
The main advantage of our

  2. Mapping earthworm communities in Europe

    DEFF Research Database (Denmark)

    Rutgers, Michiel; Orgiazzi, Alberto; Gardi, Ciro

    Existing data sets on earthworm communities in Europe were collected, harmonized, modelled and depicted on a soil biodiversity map of Europe. Digital Soil Mapping was applied using multiple regressions relating relatively low density earthworm community data to soil characteristics, land use...

  3. Mapping earthworm communities in Europe

    NARCIS (Netherlands)

    Rutgers, M.; Orgiazzi, A.; Gardi, C.; Römbke, J.; Jansch, S.; Keith, A.; Neilson, R.; Boag, B.; Schmidt, O.; Murchie, A.K.; Blackshaw, R.P.; Pérès, G.; Cluzeau, D.; Guernion, M.; Briones, M.J.I.; Rodeiro, J.; Pineiro, R.; Diaz Cosin, D.J.; Sousa, J.P.; Suhadolc, M.; Kos, I.; Krogh, P.H.; Faber, J.H.; Mulder, C.; Bogte, J.J.; Wijnen, van H.J.; Schouten, A.J.; Zwart, de D.

    2016-01-01

    Existing data sets on earthworm communities in Europe were collected, harmonized, collated, modelled and depicted on a soil biodiversity map. Digital Soil Mapping was applied using multiple regressions relating relatively low density earthworm community data to soil characteristics, land use,

  4. SPSS macros to compare any two fitted values from a regression model.

    Science.gov (United States)

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
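
    The matrix-algebra method the macros implement can be sketched without SPSS (the model and data below are made up): for an OLS fit y = Xb, the difference between fitted values at covariate rows x1 and x2 is d'b with d = x1 - x2, and its variance is s² · d'(X'X)⁻¹d, where s² is the residual mean square.

```python
import math

# Stdlib-only sketch of comparing two fitted values from an OLS model.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fitted_diff_se(X, y, x1, x2):
    n, k = len(X), len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    resid = [yi - sum(b * xi for b, xi in zip(beta, r)) for r, yi in zip(X, y)]
    s2 = sum(e * e for e in resid) / (n - k)          # residual mean square
    d = [a - b for a, b in zip(x1, x2)]               # contrast vector
    z = solve(XtX, d)                                 # z = (X'X)^{-1} d
    diff = sum(b * di for b, di in zip(beta, d))
    return diff, math.sqrt(s2 * sum(di * zi for di, zi in zip(d, z)))

# Quadratic model y = b0 + b1*x + b2*x^2: compare fitted values at x=2 and
# x=4, a comparison no single coefficient represents.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [1.1, 2.0, 4.2, 7.9, 13.1, 20.2, 28.8, 39.9]
X = [[1.0, x, x * x] for x in xs]
diff, se = fitted_diff_se(X, ys, [1.0, 2.0, 4.0], [1.0, 4.0, 16.0])
ci = (diff - 1.96 * se, diff + 1.96 * se)
```

    Note the 1.96 multiplier is a normal approximation used here for brevity; the macros described above use the t distribution with n - k degrees of freedom.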

  5. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is however affected by an unidentifiable parameter, hence any inferential procedure beside point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for a proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We extend it also in the case of discrete responses, where there is no 1-to-1 relationship between quantiles and the distribution's parameter, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.
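
    The connection between the Asymmetric Laplace working likelihood and quantiles rests on the pinball (check) loss: the τ-quantile of a sample minimizes L(m) = Σᵢ ρ_τ(yᵢ - m), where ρ_τ(u) = u·(τ - 1{u<0}). A small stdlib-only demonstration (illustrative toy data, not the paper's R-INLA model):

```python
# Minimizing pinball loss over a grid recovers the empirical tau-quantile.

def pinball(ys, m, tau):
    """Check-loss objective: sum of rho_tau(y - m) over the sample."""
    return sum((y - m) * (tau - (1 if y < m else 0)) for y in ys)

ys = [1.0, 2.0, 2.5, 3.0, 4.0, 7.0, 9.0, 12.0]
tau = 0.75
grid = [i * 0.01 for i in range(0, 1500)]
best = min(grid, key=lambda m: pinball(ys, m, tau))
```

    With 8 observations and τ = 0.75 the loss is flat on [7, 9] (exactly 6 of 8 points lie below), so any minimizer there is a valid 75% quantile, which is the identifiability subtlety the abstract alludes to.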

  6. Premature Ventricular Contraction Coupling Interval Variability Destabilizes Cardiac Neuronal and Electrophysiological Control: Insights From Simultaneous Cardioneural Mapping.

    Science.gov (United States)

    Hamon, David; Rajendran, Pradeep S; Chui, Ray W; Ajijola, Olujimi A; Irie, Tadanobu; Talebi, Ramin; Salavatian, Siamak; Vaseghi, Marmar; Bradfield, Jason S; Armour, J Andrew; Ardell, Jeffrey L; Shivkumar, Kalyanam

    2017-04-01

    Variability in premature ventricular contraction (PVC) coupling interval (CI) increases the risk of cardiomyopathy and sudden death. The autonomic nervous system regulates cardiac electrical and mechanical indices, and its dysregulation plays an important role in cardiac disease pathogenesis. The impact of PVCs on the intrinsic cardiac nervous system, a neural network on the heart, remains unknown. The objective was to determine the effect of PVCs and CI on intrinsic cardiac nervous system function in generating cardiac neuronal and electric instability using a novel cardioneural mapping approach. In a porcine model (n=8), neuronal activity was recorded from a ventricular ganglion using a microelectrode array, and cardiac electrophysiological mapping was performed. Neurons were functionally classified based on their response to afferent and efferent cardiovascular stimuli, with neurons that responded to both defined as convergent (local reflex processors). Dynamic changes in neuronal activity were then evaluated in response to right ventricular outflow tract PVCs with fixed short, fixed long, and variable CI. PVC delivery elicited a greater neuronal response than all other stimuli ( P <0.001). Compared with fixed short and long CI, PVCs with variable CI had a greater impact on neuronal response ( P <0.05 versus short CI), particularly on convergent neurons ( P <0.05), as well as neurons receiving sympathetic ( P <0.05) and parasympathetic input ( P <0.05). The greatest cardiac electric instability was also observed after variable (short) CI PVCs. Variable CI PVCs affect critical populations of intrinsic cardiac nervous system neurons and alter cardiac repolarization. These changes may be critical for arrhythmogenesis and remodeling, leading to cardiomyopathy. © 2017 American Heart Association, Inc.

  7. High Resolution Mapping of Soil Properties Using Remote Sensing Variables in South-Western Burkina Faso: A Comparison of Machine Learning and Multiple Linear Regression Models.

    Science.gov (United States)

    Forkuor, Gerald; Hounkpatin, Ozias K L; Welp, Gerhard; Thiel, Michael

    2017-01-01

    Accurate and detailed spatial soil information is essential for environmental modelling, risk assessment and decision making. The use of Remote Sensing data as secondary sources of information in digital soil mapping has been found to be cost effective and less time consuming compared to traditional soil mapping approaches. But the potentials of Remote Sensing data in improving knowledge of local scale soil information in West Africa have not been fully explored. This study investigated the use of high spatial resolution satellite data (RapidEye and Landsat), terrain/climatic data and laboratory analysed soil samples to map the spatial distribution of six soil properties-sand, silt, clay, cation exchange capacity (CEC), soil organic carbon (SOC) and nitrogen-in a 580 km2 agricultural watershed in south-western Burkina Faso. Four statistical prediction models-multiple linear regression (MLR), random forest regression (RFR), support vector machine (SVM), stochastic gradient boosting (SGB)-were tested and compared. Internal validation was conducted by cross validation while the predictions were validated against an independent set of soil samples considering the modelling area and an extrapolation area. Model performance statistics revealed that the machine learning techniques performed marginally better than the MLR, with the RFR providing in most cases the highest accuracy. The inability of MLR to handle non-linear relationships between dependent and independent variables was found to be a limitation in accurately predicting soil properties at unsampled locations. Satellite data acquired during ploughing or early crop development stages (e.g. May, June) were found to be the most important spectral predictors while elevation, temperature and precipitation came up as prominent terrain/climatic variables in predicting soil properties. The results further showed that shortwave infrared and near infrared channels of Landsat8 as well as soil specific indices of redness

  8. Strange distributionally chaotic triangular maps

    International Nuclear Information System (INIS)

    Paganoni, L.; Smital, J.

    2005-01-01

    The notion of distributional chaos was introduced by Schweizer and Smital [Measures of chaos and a spectral decomposition of dynamical systems on the interval. Trans. Amer. Math. Soc. 344;1994:737-854] for continuous maps of the interval. For continuous maps of a compact metric space three mutually nonequivalent versions of distributional chaos, DC1-DC3, can be considered. In this paper we study distributional chaos in the class T_m of triangular maps of the square which are monotone on the fibres; such maps must have zero topological entropy. The main results: (i) There is an F ∈ T_m such that F ∈ DC2 and F|Rec(F) ∈ DC3. (ii) If no ω-limit set of an F ∈ T_m contains two minimal subsets then F ∉ DC1. This completes recent results obtained by Forti et al. [Dynamics of homeomorphisms on minimal sets generated by triangular mappings. Bull Austral Math Soc 59;1999:1-20], Smital and Stefankova [Distributional chaos for triangular maps. Chaos, Solitons and Fractals 21;2004:1125-8], and Balibrea et al. [The three versions of distributional chaos. Chaos, Solitons and Fractals 23;2005:1581-3]. The paper contributes to the solution of a long-standing open problem by Sharkovsky concerning the classification of triangular maps.

  9. Multivariate linear regression of high-dimensional fMRI data with multiple target variables.

    Science.gov (United States)

    Valente, Giancarlo; Castellanos, Agustin Lage; Vanacore, Gianluca; Formisano, Elia

    2014-05-01

    Multivariate regression is increasingly used to study the relation between fMRI spatial activation patterns and experimental stimuli or behavioral ratings. With linear models, informative brain locations are identified by mapping the model coefficients. This is a central aspect in neuroimaging, as it provides the sought-after link between the activity of neuronal populations and subject's perception, cognition or behavior. Here, we show that mapping of informative brain locations using multivariate linear regression (MLR) may lead to incorrect conclusions and interpretations. MLR algorithms for high dimensional data are designed to deal with targets (stimuli or behavioral ratings, in fMRI) separately, and the predictive map of a model integrates information deriving from both neural activity patterns and experimental design. Not accounting explicitly for the presence of other targets whose associated activity spatially overlaps with the one of interest may lead to predictive maps of troublesome interpretation. We propose a new model that can correctly identify the spatial patterns associated with a target while achieving good generalization. For each target, the training is based on an augmented dataset, which includes all remaining targets. The estimation on such datasets produces both maps and interaction coefficients, which are then used to generalize. The proposed formulation is independent of the regression algorithm employed. We validate this model on simulated fMRI data and on a publicly available dataset. Results indicate that our method achieves high spatial sensitivity and good generalization and that it helps disentangle specific neural effects from interaction with predictive maps associated with other targets. Copyright © 2013 Wiley Periodicals, Inc.

  10. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    Science.gov (United States)

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  11. Neural net generated seismic facies map and attribute facies map

    International Nuclear Information System (INIS)

    Addy, S.K.; Neri, P.

    1998-01-01

    The usefulness of 'seismic facies maps' in the analysis of an Upper Wilcox channel system in a 3-D survey shot by CGG in 1995 in Lavaca county in south Texas was discussed. A neural net-generated seismic facies map is a quick hydrocarbon exploration tool that can be applied regionally as well as on a prospect scale. The new technology is used to classify a constant interval parallel to a horizon in a 3-D seismic volume based on the shape of the wiggle traces using a neural network technology. The tool makes it possible to interpret sedimentary features of a petroleum deposit. The same technology can be used in regional mapping by making 'attribute facies maps' in which various forms of amplitude attributes, phase attributes or frequency attributes can be used

  12. Symplectic maps for accelerator lattices

    International Nuclear Information System (INIS)

    Warnock, R.L.; Ruth, R.; Gabella, W.

    1988-05-01

    We describe a method for numerical construction of a symplectic map for particle propagation in a general accelerator lattice. The generating function of the map is obtained by integrating the Hamilton-Jacobi equation as an initial-value problem on a finite time interval. Given the generating function, the map is put in explicit form by means of a Fourier inversion technique. We give an example which suggests that the method has promise. 9 refs., 9 figs

  13. Use of regression-based models to map sensitivity of aquatic resources to atmospheric deposition in Yosemite National Park, USA

    Science.gov (United States)

    Clow, D. W.; Nanus, L.; Huggett, B. W.

    2010-12-01

    An abundance of exposed bedrock, sparse soil and vegetation, and fast hydrologic flushing rates make aquatic ecosystems in Yosemite National Park susceptible to nutrient enrichment and episodic acidification due to atmospheric deposition of nitrogen (N) and sulfur (S). In this study, multiple-linear regression (MLR) models were created to estimate fall-season nitrate and acid neutralizing capacity (ANC) in surface water in Yosemite wilderness. Input data included estimated winter N deposition, fall-season surface-water chemistry measurements at 52 sites, and basin characteristics derived from geographic information system layers of topography, geology, and vegetation. The MLR models accounted for 84% and 70% of the variance in surface-water nitrate and ANC, respectively. Explanatory variables (and the sign of their coefficients) for nitrate included elevation (positive) and the abundance of neoglacial and talus deposits (positive), unvegetated terrain (positive), alluvium (negative), and riparian (negative) areas in the basins. Explanatory variables for ANC included basin area (positive) and the abundance of metamorphic rocks (positive), unvegetated terrain (negative), water (negative), and winter N deposition (negative) in the basins. The MLR equations were applied to 1407 stream reaches delineated in the National Hydrography Dataset for Yosemite, and maps of predicted surface-water nitrate and ANC concentrations were created. Predicted surface-water nitrate concentrations were highest in small, high-elevation cirques, and concentrations declined downstream. Predicted ANC concentrations showed the opposite pattern, except in high-elevation areas underlain by metamorphic rocks along the Sierran Crest, which had relatively high predicted ANC (>200 µeq L-1). Maps were created to show where basin characteristics predispose aquatic resources to nutrient enrichment and acidification effects from N and S deposition. The maps can be used to help guide development of

  14. Econometric analysis of realized covariation: high frequency based covariance, regression, and correlation in financial economics

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2004-01-01

    This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities.
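
    The basic realized quantities are simple sums over intraday returns; a sketch with invented five-minute returns (the asymptotic distribution theory of the paper is not reproduced here):

```python
import math

# Realized covariation over one day from intraday return series x and y:
#   RV_x = sum x_i^2, RCov_xy = sum x_i * y_i,
#   realized beta = RCov_xy / RV_x,
#   realized correlation = RCov_xy / sqrt(RV_x * RV_y).

def realized_stats(x, y):
    rv_x = sum(xi * xi for xi in x)
    rv_y = sum(yi * yi for yi in y)
    rcov = sum(xi * yi for xi, yi in zip(x, y))
    return rcov, rcov / rv_x, rcov / math.sqrt(rv_x * rv_y)

# Made-up five-minute returns (in percent) for two assets.
x = [0.10, -0.05, 0.02, 0.08, -0.03, 0.01]
y = [0.12, -0.04, 0.01, 0.10, -0.05, 0.00]
rcov, beta, corr = realized_stats(x, y)
```

    Computing these day by day gives the time series of high frequency covariances, regressions, and correlations the abstract refers to.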

  15. Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.

    Science.gov (United States)

    Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling

    2016-03-01

    A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as a part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established by use of the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas others can be used as assay system-specific reference intervals in China.
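
    The parametric derivation can be sketched in miniature (this uses the λ = 0 Box-Cox case, i.e. a plain log transform, and invented analyte values; the study's modified Box-Cox fit estimates λ from the data): transform skewed values toward normality, take mean ± 1.96 SD on the transformed scale, and back-transform to obtain the central 95% interval.

```python
import math
import statistics

# Parametric reference interval via a log (Box-Cox, lambda = 0) transform.

def parametric_reference_interval(values):
    logs = [math.log(v) for v in values]
    mu = statistics.mean(logs)
    sd = statistics.stdev(logs)
    return math.exp(mu - 1.96 * sd), math.exp(mu + 1.96 * sd)

# Hypothetical right-skewed analyte results (e.g. a serum enzyme, U/L).
values = [12, 14, 15, 16, 17, 18, 19, 20, 22, 25, 27, 30, 35, 41, 55]
low, high = parametric_reference_interval(values)
```

    The back-transformed interval is asymmetric around the median, as expected for a right-skewed analyte; a nonparametric interval would instead take the empirical 2.5th and 97.5th percentiles.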

  16. Construction of a genetic linkage map in Lilium using a RIL mapping population based on SRAP marker

    Directory of Open Access Journals (Sweden)

    Chen Li-Jing

    2015-01-01

    Full Text Available A genetic linkage map of lily was constructed using a RIL (recombinant inbred lines) population of 180 individuals. This mapping population was developed by crossing the Raizan No.1 (Formolongo) and Gelria (Longiflorum) cultivars through single-seed descent (SSD). SRAPs were generated using the restriction enzyme EcoRI in combination with MseI. The resulting products were separated by electrophoresis on 6% denaturing polyacrylamide gel and visualized by silver staining. Segregation of each marker and linkage analysis were done using the program Mapmaker 3.0. With 50 primer pairs, a total of 189 parental polymorphic bands were detected and 78 were used for mapping. The total map length was 2,135.5 cM, consisting of 16 linkage groups. The number of markers in the linkage groups varied from 1 to 12. The lengths of the linkage groups ranged from 11.2 cM to 425.9 cM, and the mean marker interval distance within individual groups ranged from 9.4 cM to 345.4 cM. The overall mean interval distance between markers was 27.4 cM. The map developed in the present study is the first sequence-related amplified polymorphism marker map of lily constructed with recombinant inbred lines, and it could be used for genetic mapping, molecular marker assisted breeding and quantitative trait locus mapping of Lilium.

  17. Poisson regression for modeling count and frequency outcomes in trauma research.

    Science.gov (United States)

    Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T

    2008-10-01

    The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.
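
    As a concrete illustration of the approach, a Poisson model for simulated count outcomes can be fitted by Newton's method in plain numpy; the predictors and coefficients below are invented for the sketch and do not come from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
treatment = rng.integers(0, 2, n).astype(float)   # invented group indicator
severity = rng.normal(0.0, 1.0, n)                # invented baseline score
X = np.column_stack([np.ones(n), treatment, severity])
true_beta = np.array([0.5, -0.8, 0.3])
counts = rng.poisson(np.exp(X @ true_beta))       # simulated incident counts

# Poisson regression, log E[y] = X @ beta, fitted by Newton's method
# (equivalently iteratively reweighted least squares).
beta = np.zeros(3)
for _ in range(50):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (counts - mu))

print(np.round(beta, 2))      # close to true_beta
print(np.exp(beta[1]))        # rate ratio for the treatment group
```

    Exponentiated coefficients are rate ratios, the natural effect measure for count outcomes that ordinary linear regression does not provide.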

  18. Estimating and mapping forest biomass using regression models and Spot-6 images (case study: Hyrcanian forests of north of Iran).

    Science.gov (United States)

    Motlagh, Mohadeseh Ghanbari; Kafaky, Sasan Babaie; Mataji, Asadollah; Akhavan, Reza

    2018-05-21

    The Hyrcanian forests of northern Iran are of great importance in various economic and environmental respects. In this study, Spot-6 satellite images and regression models were applied to estimate above-ground biomass in these forests. The research was carried out in six compartments spanning three climatic types (semi-arid to humid) and two altitude classes. In the first step, ground sampling methods at the compartment level were used to estimate aboveground biomass (Mg/ha). Then, by reviewing the results of other studies, the most appropriate vegetation indices were selected; three indices (NDVI, RVI, and TVI) were calculated. We investigated the relationship between the vegetation indices and aboveground biomass measured at the sample-plot level. Based on the results, the relationship between aboveground biomass and the vegetation indices was linear, with the highest level of significance for NDVI in all compartments. Since the correlation coefficient between NDVI and aboveground biomass was the highest at the compartment level, NDVI was used for mapping aboveground biomass. Biomass values differed strongly across the climatic and altitudinal classes, with the highest biomass observed in the humid climate and high-altitude class.
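
    The index-to-biomass step can be sketched as follows, with invented plot-level reflectances standing in for the Spot-6 data (NDVI = (NIR - Red)/(NIR + Red), followed by an ordinary least-squares fit):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical per-plot reflectances (red and near-infrared bands) and
# field-measured aboveground biomass (Mg/ha) for 40 sample plots
red = rng.uniform(0.03, 0.10, 40)
nir = rng.uniform(0.25, 0.45, 40)
ndvi = (nir - red) / (nir + red)
biomass = 50.0 + 400.0 * ndvi + rng.normal(0.0, 15.0, 40)

# Ordinary least-squares fit of biomass on NDVI
slope, intercept = np.polyfit(ndvi, biomass, 1)
fitted = intercept + slope * ndvi
r2 = 1.0 - np.sum((biomass - fitted) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
print(f"biomass = {intercept:.0f} + {slope:.0f} * NDVI, r2 = {r2:.2f}")
```

    Applying the fitted equation to the NDVI raster then yields the biomass map.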

  19. Separation phenomena in logistic regression

    Directory of Open Access Journals (Sweden)

    Ikaro Daniel de Carvalho Barreto

    2014-03-01

    Full Text Available This paper applies concepts from maximum likelihood estimation of the binomial logistic regression model to the separation phenomenon. Separation generates bias in the estimation, leads to different interpretations of the estimates under the different statistical tests (Wald, likelihood ratio and score), and produces different estimates under the different iterative methods (Newton-Raphson and Fisher scoring). We also present an example that demonstrates the direct implications for the validation of the model and of its variables, and for the estimates of odds ratios and confidence intervals generated from the Wald statistics. Furthermore, we briefly present the Firth correction, which circumvents the separation phenomenon.
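
    The divergence caused by separation is easy to reproduce numerically. In the sketch below (invented, perfectly separated data; plain gradient ascent on the log-likelihood rather than Newton-Raphson or Fisher scoring), the slope estimate grows without bound as iterations continue, which is why the resulting Wald statistics become unreliable:

```python
import numpy as np

# Perfectly separated data: x < 0 always gives y = 0, x > 0 always gives y = 1.
x = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

def mle_slope(n_iter, lr=0.5):
    """Gradient ascent on the logistic log-likelihood (no intercept)."""
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-b * x))
        b += lr * np.sum((y - p) * x)   # score function of the slope
    return b

# Under separation the MLE does not exist: the slope keeps growing with
# the number of iterations instead of converging.
print(mle_slope(100), mle_slope(1000))
```

    The Firth correction penalizes the likelihood so that a finite maximum exists even under separation.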

  20. Interval selection with machine-dependent intervals

    OpenAIRE

    Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

    2013-01-01

    We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...
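
    Since the decision problem is NP-complete, exhaustive search over machine assignments is a natural baseline for small instances. A brute-force feasibility check in Python (the data layout, one `(start, end)` interval per job per machine, is an assumption made for this sketch):

```python
from itertools import product

def feasible(jobs):
    """jobs[j][m] = (start, end) interval of job j on machine m.
    Try every assignment of jobs to machines and check, per machine,
    that the selected intervals do not intersect."""
    n_machines = len(jobs[0])
    for choice in product(range(n_machines), repeat=len(jobs)):
        ok = True
        for m in range(n_machines):
            chosen = sorted(jobs[j][m] for j, c in enumerate(choice) if c == m)
            if any(a[1] > b[0] for a, b in zip(chosen, chosen[1:])):
                ok = False
                break
        if ok:
            return True
    return False

# Two machines, three jobs: the first instance is schedulable,
# the second (three identical overlapping intervals) is not.
jobs = [
    [(0, 2), (0, 3)],
    [(1, 3), (3, 5)],
    [(2, 4), (0, 2)],
]
jobs_bad = [[(0, 2), (0, 2)]] * 3
print(feasible(jobs), feasible(jobs_bad))
```

    The assignment loop is exponential in the number of jobs, which is consistent with the hardness result above.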

  1. Molecular mapping of chromosomes 17 and X

    Energy Technology Data Exchange (ETDEWEB)

    Barker, D.F.

    1991-01-15

    Progress toward the construction of high density genetic maps of chromosomes 17 and X has been made by isolating and characterizing a relatively large set of polymorphic probes for each chromosome and using these probes to construct genetic maps. We have mapped the same polymorphic probes against a series of chromosome breakpoints on X and 17. The probes could be assigned to over 30 physical intervals on the X chromosome and 7 intervals on 17. In many cases, this process resulted in improved characterization of the relative locations of the breakpoints with respect to each other and the definition of new physical intervals. The strategy for isolation of the polymorphic clones utilized chromosome specific libraries of 1-15 kb segments from each of the two chromosomes. From these libraries, clones were screened for those detecting restriction fragment length polymorphisms. The markers were further characterized, the chromosomal assignments confirmed and in most cases segments of the original probes were subcloned into plasmids to produce probes with improved signal-to-noise ratios for use in the genetic marker studies. The linkage studies utilize the CEPH reference families and other well-characterized families in our collection which have been used for genetic disease linkage work. Preliminary maps and maps of portions of specific regions of 17 and X are provided. We have nearly completed a map of the 1 megabase Mycoplasma arthritidis genome by applying these techniques to a lambda phage library of its genome. We have found bit mapping to be an efficient means to organize a contiguous set of overlapping clones from a larger genome.

  2. A regression-based method for mapping traffic-related air pollution. Application and testing in four contrasting urban environments

    International Nuclear Information System (INIS)

    Briggs, D.J.; De Hoogh, C.; Elliot, P.; Gulliver, J.; Wills, J.; Kingham, S.; Smallbone, K.

    2000-01-01

    Accurate, high-resolution maps of traffic-related air pollution are needed both as a basis for assessing exposures as part of epidemiological studies, and to inform urban air-quality policy and traffic management. This paper assesses the use of a GIS-based, regression mapping technique to model spatial patterns of traffic-related air pollution. The model - developed using data from 80 passive sampler sites in Huddersfield, as part of the SAVIAH (Small Area Variations in Air Quality and Health) project - uses data on traffic flows and land cover in the 300-m buffer zone around each site, and altitude of the site, as predictors of NO2 concentrations. It was tested here by application in four urban areas in the UK: Huddersfield (for the year following that used for initial model development), Sheffield, Northampton, and part of London. In each case, a GIS was built in ArcInfo, integrating relevant data on road traffic, urban land use and topography. Monitoring of NO2 was undertaken using replicate passive samplers (in London, data were obtained from surveys carried out as part of the London network). In Huddersfield, Sheffield and Northampton, the model was first calibrated by comparing modelled results with monitored NO2 concentrations at 10 randomly selected sites; the calibrated model was then validated against data from a further 10-28 sites. In London, where data for only 11 sites were available, validation was not undertaken. Results showed that the model performed well in all cases. After local calibration, the model gave estimates of mean annual NO2 concentrations within a factor of 1.5 of the actual mean approximately 70-90% of the time, and within a factor of 2 between 70 and 100% of the time. r² values between modelled and observed concentrations are in the range of 0.58-0.76. These results are comparable to those achieved by more sophisticated dispersion models. The model also has several advantages over dispersion modelling. It is able, for example, to

  3. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    Science.gov (United States)

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's error (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was compared by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.

  4. Inter-Pregnancy Intervals and Maternal Morbidity: New Evidence from Rwanda

    NARCIS (Netherlands)

    Habimana-Kabano, Ignace; Broekhuis, E.J.A.; Hooimeijer, P.

    The effects of short and long pregnancy intervals on maternal morbidity have hardly been investigated. This research analyses these effects using logistic regression in two steps. First, data from the Rwanda Demographic and Health Survey 2010 are used to study delivery referrals to District

  5. Premature Ventricular Contraction Coupling Interval Variability Destabilizes Cardiac Neuronal and Electrophysiological Control: Insights from Simultaneous Cardio-Neural Mapping

    Science.gov (United States)

    Hamon, David; Rajendran, Pradeep S.; Chui, Ray W.; Ajijola, Olujimi A.; Irie, Tadanobu; Talebi, Ramin; Salavatian, Siamak; Vaseghi, Marmar; Bradfield, Jason S.; Armour, J. Andrew; Ardell, Jeffrey L.; Shivkumar, Kalyanam

    2017-01-01

    Background Variability in premature ventricular contraction (PVC) coupling interval (CI) increases the risk of cardiomyopathy and sudden death. The autonomic nervous system regulates cardiac electrical and mechanical indices, and its dysregulation plays an important role in cardiac disease pathogenesis. The impact of PVCs on the intrinsic cardiac nervous system (ICNS), a neural network on the heart, remains unknown. The objective was to determine the effect of PVCs and CI on ICNS function in generating cardiac neuronal and electrical instability using a novel cardio-neural mapping approach. Methods and Results In a porcine model (n=8) neuronal activity was recorded from a ventricular ganglion using a microelectrode array, and cardiac electrophysiological mapping was performed. Neurons were functionally classified based on their response to afferent and efferent cardiovascular stimuli, with neurons that responded to both defined as convergent (local reflex processors). Dynamic changes in neuronal activity were then evaluated in response to right ventricular outflow tract PVCs with fixed short, fixed long, and variable CI. PVC delivery elicited a greater neuronal response than all other stimuli (P<0.001). Compared to fixed short and long CI, PVCs with variable CI had a greater impact on neuronal response (P<0.05 versus short CI), particularly on convergent neurons (P<0.05), as well as neurons receiving sympathetic (P<0.05) and parasympathetic input (P<0.05). The greatest cardiac electrical instability was also observed following variable (short) CI PVCs. Conclusions Variable CI PVCs affect critical populations of ICNS neurons and alter cardiac repolarization. These changes may be critical for arrhythmogenesis and remodeling leading to cardiomyopathy. PMID:28408652

  6. A Practical pedestrian approach to parsimonious regression with inaccurate inputs

    Directory of Open Access Journals (Sweden)

    Seppo Karrila

    2014-04-01

    Full Text Available A measurement result often dictates an interval containing the correct value. Interval data are also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored in model fitting. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only a few of the most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference from total least squares.
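
    One way to cast such an interval regression as a linear program is to minimize the L1 norm of the coefficients subject to every prediction lying inside its output interval; parsimony then emerges from the L1 objective. A sketch with scipy's linprog on simulated data (a simplified variant of the paper's formulation, assuming intervals on the outputs only):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, p = 30, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2]        # only two of six features matter
lo, hi = y - 0.5, y + 0.5                # interval uncertainty on each output

# Minimize ||w||_1 subject to lo <= X @ w <= hi.
# Split w = u - v with u, v >= 0 so the objective becomes linear.
c = np.ones(2 * p)
A_ub = np.vstack([np.hstack([X, -X]), np.hstack([-X, X])])
b_ub = np.concatenate([hi, -lo])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p))
w = res.x[:p] - res.x[p:]
print(np.round(w, 2))                    # large weights only on features 0 and 2
```

    Widening the intervals shrinks more coefficients to exactly zero, which is the parsimony effect described in the abstract.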

  7. High Resolution Mapping of Soil Properties Using Remote Sensing Variables in South-Western Burkina Faso: A Comparison of Machine Learning and Multiple Linear Regression Models.

    Directory of Open Access Journals (Sweden)

    Gerald Forkuor

    Full Text Available Accurate and detailed spatial soil information is essential for environmental modelling, risk assessment and decision making. The use of remote sensing data as secondary sources of information in digital soil mapping has been found to be cost effective and less time consuming compared to traditional soil mapping approaches. But the potential of remote sensing data in improving knowledge of local scale soil information in West Africa has not been fully explored. This study investigated the use of high spatial resolution satellite data (RapidEye and Landsat), terrain/climatic data and laboratory analysed soil samples to map the spatial distribution of six soil properties (sand, silt, clay, cation exchange capacity (CEC), soil organic carbon (SOC) and nitrogen) in a 580 km2 agricultural watershed in south-western Burkina Faso. Four statistical prediction models (multiple linear regression (MLR), random forest regression (RFR), support vector machine (SVM), and stochastic gradient boosting (SGB)) were tested and compared. Internal validation was conducted by cross validation while the predictions were validated against an independent set of soil samples considering the modelling area and an extrapolation area. Model performance statistics revealed that the machine learning techniques performed marginally better than the MLR, with the RFR providing in most cases the highest accuracy. The inability of MLR to handle non-linear relationships between dependent and independent variables was found to be a limitation in accurately predicting soil properties at unsampled locations. Satellite data acquired during ploughing or early crop development stages (e.g. May, June) were found to be the most important spectral predictors, while elevation, temperature and precipitation came up as prominent terrain/climatic variables in predicting soil properties. The results further showed that shortwave infrared and near infrared channels of Landsat8 as well as soil specific indices
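
    The model comparison can be sketched with scikit-learn on simulated data: a deliberately non-linear response lets random forest regression outperform multiple linear regression under cross-validation, mirroring the pattern reported above (the covariate names and response function are invented for the illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 300
covariates = rng.uniform(0, 1, (n, 3))   # e.g. a spectral band, elevation, rainfall
# Non-linear response, as is common for soil properties such as SOC
soc = (2.0 + np.sin(3.0 * covariates[:, 0])
       + covariates[:, 1] * covariates[:, 2]
       + rng.normal(0.0, 0.1, n))

scores = {}
for name, model in [("MLR", LinearRegression()),
                    ("RFR", RandomForestRegressor(n_estimators=200, random_state=0))]:
    scores[name] = cross_val_score(model, covariates, soc, cv=5, scoring="r2").mean()
print(scores)   # RFR attains the higher cross-validated R2 here
```

    With a truly linear response the two models would score similarly, which is why model choice should be validated per property, as the study does.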

  8. Potentiometric-surface map, 1993, Yucca Mountain and vicinity, Nevada

    International Nuclear Information System (INIS)

    Tucci, P.; Burkhardt, D.J.

    1995-01-01

    The revised potentiometric-surface map presented here, based mainly on 1993 average water levels, updates earlier maps of this area. Water levels are contoured with 20-m intervals, with additional 0.5-m contours in the small-gradient area SE of Yucca Mountain. Water levels range from 728 m above sea level SE of Yucca Mountain to 1,034 m above sea level north of Yucca Mountain. Potentiometric levels in the deeper parts of the volcanic rock aquifer range from 730 to 785 m above sea level. The potentiometric surface can be divided into 3 regions: a small-gradient area E and SE of Yucca Mountain, a moderate-gradient area on the west side of Yucca Mountain, and a large-gradient area to the N-NE of Yucca Mountain. Water levels from wells at Yucca Mountain were examined for yearly trends (1986-93) using linear least-squares regression. Of the 22 wells, three had significant positive trends. The trend in well UE-25 WT-3 may be influenced by monitoring equipment problems. Trends in USW WT-7 and USW WTS-10 are similar; both wells are located near a fault west of Yucca Mountain; however, another well near that fault exhibited no significant trend.

  9. Genome-wide interval mapping using SNPs identifies new QTL for growth, body composition and several physiological variables in an F2 intercross between fat and lean chicken lines.

    Science.gov (United States)

    Demeure, Olivier; Duclos, Michel J; Bacciu, Nicola; Le Mignon, Guillaume; Filangi, Olivier; Pitel, Frédérique; Boland, Anne; Lagarrigue, Sandrine; Cogburn, Larry A; Simon, Jean; Le Roy, Pascale; Le Bihan-Duval, Elisabeth

    2013-09-30

    For decades, genetic improvement based on measuring growth and body composition traits has been successfully applied in the production of meat-type chickens. However, this conventional approach is hindered by antagonistic genetic correlations between some traits and the high cost of measuring body composition traits. Marker-assisted selection should overcome these problems by selecting loci that have effects on either one trait only or on more than one trait but with a favorable genetic correlation. In the present study, identification of such loci was carried out in an F2 intercross between fat and lean lines divergently selected for abdominal fatness, genotyped with a medium-density genetic map (120 microsatellites and 1302 single nucleotide polymorphisms). Genome scan linkage analyses were performed for growth (body weight at 1, 3, 5, and 7 weeks, and shank length and diameter at 9 weeks), body composition at 9 weeks (abdominal fat weight and percentage, breast muscle weight and percentage, and thigh weight and percentage), and for several physiological measurements at 7 weeks in the fasting state, i.e. body temperature and plasma levels of IGF-I, NEFA and glucose. Interval mapping analyses were performed with the QTLMap software, including single-trait analyses with single and multiple QTL on the same chromosome. Sixty-seven QTL were detected, most of which had never been described before. Of these 67 QTL, 47 were detected by single-QTL analyses and 20 by multiple-QTL analyses, which underlines the importance of using different statistical models. Close analysis of the genes located in the defined intervals identified several relevant functional candidates, such as ACACA for abdominal fatness, GHSR and GAS1 for breast muscle weight, DCRX and ASPSCR1 for plasma glucose content, and ChEBP for shank diameter. The medium-density genetic map enabled us to genotype new regions of the chicken genome (including micro-chromosomes) that influenced the traits
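
    A regression-based single-QTL genome scan, in the spirit of interval mapping but using single-marker regression on simulated genotypes for simplicity (QTLMap itself, and genotype probabilities at positions between markers, are not reproduced here), can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_markers = 400, 50
# Intercross genotypes coded 0/1/2; markers drawn independently for simplicity
geno = rng.integers(0, 3, size=(n, n_markers)).astype(float)
# A simulated QTL at marker 20 with an additive effect of 0.8
pheno = 0.8 * geno[:, 20] + rng.normal(0.0, 1.0, n)

def lod(marker, y):
    """LOD score from regressing the phenotype on one marker genotype."""
    X = np.column_stack([np.ones_like(y), marker])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = np.sum((y - X @ beta) ** 2)
    rss0 = np.sum((y - y.mean()) ** 2)
    return (len(y) / 2.0) * np.log10(rss0 / rss1)

scores = np.array([lod(geno[:, m], pheno) for m in range(n_markers)])
print("peak LOD", round(float(scores.max()), 1), "at marker", int(scores.argmax()))
```

    In a real scan the significance threshold would come from permutation testing rather than a fixed LOD cutoff.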

  10. Dynamics of Stability of Orientation Maps Recorded with Optical Imaging.

    Science.gov (United States)

    Shumikhina, S I; Bondar, I V; Svinov, M M

    2018-03-15

    Orientation selectivity is an important feature of visual cortical neurons. Optical imaging of the visual cortex allows for the generation of maps of orientation selectivity that reflect the activity of large populations of neurons. To estimate the statistical significance of effects of experimental manipulations, evaluation of the stability of cortical maps over time is required. Here, we performed optical imaging recordings of the visual cortex of anesthetized adult cats. Monocular stimulation with moving clockwise square-wave gratings that continuously changed orientation and direction was used as the mapping stimulus. Recordings were repeated at various time intervals, from 15 min to 16 h. Quantification of map stability was performed on a pixel-by-pixel basis using several techniques. Map reproducibility showed clear dynamics over time. The highest degree of stability was seen in maps recorded 15-45 min apart. Averaging across all time intervals and all stimulus orientations revealed a mean shift of 2.2 ± 0.1°. There was a significant tendency for larger shifts to occur at longer time intervals. Shifts between 2.8° (mean ± 2SD) and 5° were observed more frequently at oblique orientations, while shifts greater than 5° appeared more frequently at cardinal orientations. Shifts greater than 5° occurred rarely overall (5.4% of cases) and never exceeded 11°. Shifts of 10-10.6° (0.7%) were seen occasionally at time intervals of more than 4 h. Our findings should be considered when evaluating the potential effect of experimental manipulations on orientation selectivity mapping studies. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
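
    Pixel-by-pixel comparison of two orientation maps requires circular statistics, since orientation is defined modulo 180°. A minimal sketch with simulated maps, jittered by roughly the mean shift reported above (the map size and noise model are invented for the illustration):

```python
import numpy as np

def mean_orientation_shift(map_a, map_b):
    """Mean absolute pixelwise shift between two orientation maps, in degrees
    on the 0-180 circle (e.g. 179 deg vs 1 deg is a 2 deg shift)."""
    d = np.abs(map_a - map_b) % 180.0
    return float(np.minimum(d, 180.0 - d).mean())

rng = np.random.default_rng(6)
map1 = rng.uniform(0.0, 180.0, (64, 64))
map2 = (map1 + rng.normal(0.0, 2.2, (64, 64))) % 180.0   # small jitter
print(mean_orientation_shift(map1, map2))
```

    Without the wraparound in the distance, pixels near 0°/180° would report spuriously large shifts.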

  11. Assessing risk factors for periodontitis using regression

    Science.gov (United States)

    Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa

    2013-10-01

    Multivariate statistical analysis is indispensable to assess the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using linear and logistic regression models, we assess the relevance, as risk factors for periodontitis, of the following independent variables (IVs): age, gender, diabetic status, education, smoking status and plaque index. A multiple linear regression model was built to evaluate the influence of the IVs on mean attachment loss (AL); the regression coefficients are obtained along with the p-values from the corresponding significance tests. The case (individual) classification adopted in the logistic model was the extent of destruction of periodontal tissues, defined as an attachment loss greater than or equal to 4 mm in at least 25% (AL≥4mm/≥25%) of sites surveyed. The association measures include the odds ratios together with the corresponding 95% confidence intervals.
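
    The logistic part of such an analysis can be sketched in plain numpy: fit by Newton's method, then derive odds ratios and Wald 95% confidence intervals from the inverse Hessian (the data and effect sizes below are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 800
smoking = rng.integers(0, 2, n).astype(float)       # hypothetical risk factor
age_z = rng.normal(0.0, 1.0, n)                     # standardized age
X = np.column_stack([np.ones(n), smoking, age_z])
true_beta = np.array([-1.5, 1.0, 0.6])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)

# Fit logistic regression by Newton's method
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    H = X.T @ (X * (mu * (1.0 - mu))[:, None])      # Fisher information
    beta += np.linalg.solve(H, X.T @ (y - mu))

# Odds ratios and Wald 95% confidence intervals from the inverse Hessian
mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
se = np.sqrt(np.diag(np.linalg.inv(X.T @ (X * (mu * (1.0 - mu))[:, None]))))
or_smoking = float(np.exp(beta[1]))
ci = (float(np.exp(beta[1] - 1.96 * se[1])), float(np.exp(beta[1] + 1.96 * se[1])))
print(f"OR(smoking) = {or_smoking:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

    An odds ratio whose confidence interval excludes 1 corresponds to a significant Wald test for that risk factor.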

  12. Background stratified Poisson regression analysis of cohort data.

    Science.gov (United States)

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.

  13. Magnetic Resonance Fingerprinting with short relaxation intervals.

    Science.gov (United States)

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

    The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: the largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially

  14. Spontaneous regression of cerebral arteriovenous malformations: clinical and angiographic analysis with review of the literature

    International Nuclear Information System (INIS)

    Lee, S.K.; Vilela, P.; Willinsky, R.; TerBrugge, K.G.

    2002-01-01

    Spontaneous regression of cerebral arteriovenous malformation (AVM) is rare and poorly understood. We reviewed the clinical and angiographic findings in patients who had spontaneous regression of cerebral AVMs to determine whether common features were present. The clinical and angiographic findings of four cases from our series and 29 cases from the literature were retrospectively reviewed. The clinical and angiographic features analyzed were: age at diagnosis, initial presentation, venous drainage pattern, number of draining veins, location of the AVM, number of arterial feeders, clinical events during the interval period to thrombosis, and interval period to spontaneous thrombosis. Common clinical and angiographic features of spontaneous regression of cerebral AVMs are: intracranial hemorrhage as an initial presentation, small AVMs, and a single draining vein. Spontaneous regression of cerebral AVMs cannot be predicted by clinical or angiographic features; therefore, it should not be considered as an option in cerebral AVM management, despite its proven occurrence. (orig.)

  15. A molecular recombination map of Antirrhinum majus

    Directory of Open Access Journals (Sweden)

    Hudson Andrew

    2010-12-01

    Full Text Available Abstract Background Genetic recombination maps provide important frameworks for comparative genomics, identifying gene functions, assembling genome sequences and for breeding. The molecular recombination map currently available for the model eudicot Antirrhinum majus is the result of a cross with Antirrhinum molle, limiting its usefulness within A. majus. Results We created a molecular linkage map of A. majus based on segregation of markers in the F2 population of two inbred lab strains of A. majus. The resulting map consisted of over 300 markers in eight linkage groups, which could be aligned with a classical recombination map and the A. majus karyotype. The distribution of recombination frequencies and distorted transmission of parental alleles differed from those of a previous inter-species hybrid. The differences varied in magnitude and direction between chromosomes, suggesting that they had multiple causes. The map, which covered an estimated 95% of the genome with an average interval of 2 cM, was used to analyze the distribution of a newly discovered family of MITE transposons and was tested for its utility in positioning seven mutations that affect aspects of plant size. Conclusions The current map has an estimated interval of 1.28 Mb between markers. It shows a lower level of transmission ratio distortion and a longer length than the previous inter-species map, making it potentially more useful. The molecular recombination map further indicates that the IDLE MITE transposons are distributed throughout the genome and are relatively stable. The map proved effective in mapping classical morphological mutations of A. majus.

  16. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    Science.gov (United States)

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
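
    Outside Excel, the same Monte Carlo procedure is straightforward with scipy: fit once, simulate many synthetic data sets from the fitted curve plus fresh noise, refit each, and take percentile confidence intervals (the growth function and noise level here are invented stand-ins for the biological models in the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def growth(t, a, b, c):
    """Hypothetical logistic growth curve (stand-in for the fitted models)."""
    return a / (1.0 + np.exp(-b * (t - c)))

rng = np.random.default_rng(8)
t = np.linspace(0.0, 10.0, 30)
y = growth(t, 5.0, 1.2, 5.0) + rng.normal(0.0, 0.15, t.size)

popt, _ = curve_fit(growth, t, y, p0=[4.0, 1.0, 4.0])
resid_sd = np.std(y - growth(t, *popt), ddof=3)

# Monte Carlo: refit to synthetic ("virtual") data sets built from the
# fitted curve plus resampled noise, then take percentile intervals.
sims = np.array([curve_fit(growth, t,
                           growth(t, *popt) + rng.normal(0.0, resid_sd, t.size),
                           p0=popt)[0]
                 for _ in range(200)])
lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)
for name, est, l, h in zip("abc", popt, lo, hi):
    print(f"{name} = {est:.2f}, 95% CI [{l:.2f}, {h:.2f}]")
```

    Correlations between parameters can be read off the simulated estimates as well, e.g. via `np.corrcoef(sims.T)`.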

  17. Using the multiple regression analysis with respect to ANOVA and 3D mapping to model the actual performance of PEM (proton exchange membrane) fuel cell at various operating conditions

    International Nuclear Information System (INIS)

    Al-Hadeethi, Farqad; Al-Nimr, Moh'd; Al-Safadi, Mohammad

    2015-01-01

    The performance of a PEM (proton exchange membrane) fuel cell was experimentally investigated at three temperatures (30, 50 and 70 °C), four flow rates (5, 10, 15 and 20 ml/min) and two flow patterns (co-current and counter-current) in order to generate two correlations using multiple regression analysis with respect to ANOVA. Results revealed that increasing the temperature for co-current and counter-current flow patterns increases both hydrogen and oxygen diffusivities and improves water management and membrane conductivity. The derived mathematical correlations and three-dimensional mapping (i.e. surface response) for the co-current and counter-current flow patterns showed that there is a clear interaction among the various variables (temperatures and flow rates). - Highlights: • Generating mathematical correlations using multiple regression analysis with respect to ANOVA for the performance of the PEM fuel cell. • Using the 3D mapping to diagnose the optimum performance of the PEM fuel cell at the given operating conditions. • Results revealed that increasing the flow rate had direct influence on the consumption of oxygen. • Results assured that increasing the temperature in co-current and counter-current flow patterns increases the performance of PEM fuel cell.
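
The paper's correlations are not given in this excerpt, but the general shape of such a model (a multiple linear regression over temperature and flow rate with an interaction term, fitted by normal equations) can be sketched as below. The coefficients and the response values are invented toy numbers, not the paper's data.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_multiple(rows, y):
    """Least squares via the normal equations X'X beta = X'y."""
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    return solve(xtx, xty)

temps = [30, 50, 70]            # deg C, as in the experiment
flows = [5, 10, 15, 20]         # ml/min
X, y = [], []
for t in temps:
    for q in flows:
        X.append([1.0, t, q, t * q])                 # intercept, T, Q, T*Q
        y.append(0.5 + 0.02 * t + 0.1 * q + 0.004 * t * q)  # assumed toy response
beta = fit_multiple(X, y)       # recovers the assumed coefficients
```

A non-negligible coefficient on the `t * q` column is what the abstract means by "a clear interaction among the various variables".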

  18. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin

    2017-01-01

    the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine the transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most...... tested scenarios these new methods perform similarly to or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models....
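
The Poisson approach referred to above reduces, in the simplest case, to a closed-form MLE: with new cases C_t ~ Poisson(beta * S_t * I_t / N) per sampling interval, beta-hat = (total cases) / (total S*I/N exposure). A minimal sketch, using an invented discrete-time SIR-type simulation (beta, gamma, population size and step count are made up, and the paper's two novel estimators are not reproduced):

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_sir(beta, gamma, n, i0, steps, seed=7):
    """Discrete-time stochastic SIR; returns total cases and total exposure."""
    rng = random.Random(seed)
    s, i = n - i0, i0
    cases, exposure = 0, 0.0
    for _ in range(steps):
        lam = beta * s * i / n
        c = min(sample_poisson(rng, lam), s)          # new infections
        r = min(sample_poisson(rng, gamma * i), i)    # recoveries
        cases += c
        exposure += s * i / n
        s -= c
        i += c - r
    return cases, exposure

cases, exposure = simulate_sir(beta=0.3, gamma=0.1, n=1000, i0=10, steps=60)
beta_hat = cases / exposure   # closed-form Poisson MLE of the transmission rate
```

Lengthening the sampling interval makes S and I stale within each interval, which is one source of the bias the authors report.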

  19. On the Use of Second-Order Descriptors To Predict Queueing Behavior of MAPs

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    2002-01-01

    The contributions of this paper are the following: We derive a formula for the IDI (Index of Dispersion for Intervals) for the Markovian Arrival Process (MAP). We show that two-state MAPs with identical fundamental rate, IDI and IDC (Index of Dispersion for Counts), define interval stationary poi...

  20. Strange distributionally chaotic triangular maps II

    International Nuclear Information System (INIS)

    Paganoni, L.; Smital, J.

    2006-01-01

    The notion of distributional chaos was introduced by Schweizer and Smital [Measures of chaos and a spectral decomposition of dynamical systems on the interval, Trans Am Math Soc 1994;344:737-754] for continuous maps of the interval. For continuous maps of a compact metric space three mutually non-equivalent versions of distributional chaos, DC1-DC3, can be considered. In this paper we study distributional chaos in the class Tm of triangular maps of the square which are monotone on the fibres. The main results: (i) If F ∈ Tm has positive topological entropy then F is DC1, and hence, DC2 and DC3. This result is interesting since a similar statement is not true for general triangular maps of the square [Smital and Stefankova, Distributional chaos for triangular maps, Chaos, Solitons and Fractals 2004;21:1125-8]. (ii) There are F1, F2 ∈ Tm which are not DC3, and such that not every recurrent point of F1 is uniformly recurrent, while F2 is Li and Yorke chaotic on the set of uniformly recurrent points. This, along with recent results by Forti et al. [Dynamics of homeomorphisms on minimal sets generated by triangular mappings, Bull Austral Math Soc 1999;59:1-20], among others, makes it possible to compile a complete list of the implications between dynamical properties of maps in Tm, solving a long-standing open problem by Sharkovsky.

  1. Mapping of imprinted quantitative trait loci using immortalized F2 populations.

    Directory of Open Access Journals (Sweden)

    Yongxian Wen

    Full Text Available Mapping of imprinted quantitative trait loci (iQTLs) is helpful for understanding the effects of genomic imprinting on complex traits in animals and plants. At present, the experimental designs and corresponding statistical methods proposed for iQTL mapping are all based on temporary populations, including F2 and BC1, which can be used only once and suffer from other shortcomings. In this paper, we propose a framework for iQTL mapping, including methods of interval mapping (IM) and composite interval mapping (CIM) based on conventional low-density genetic maps and point mapping (PM) and composite point mapping (CPM) based on ultrahigh-density genetic maps, using an immortalized F2 (imF2) population generated by random crosses between recombinant inbred lines or doubled haploid lines. We demonstrate by simulations that imF2 populations are very desirable and the proposed statistical methods (especially CIM and CPM) are very powerful for iQTL mapping, with which the imprinting effects as well as the additive and dominance effects of iQTLs can be unbiasedly estimated.
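
Full interval mapping evaluates putative QTL positions between flanking markers via mixture likelihoods; its simplest regression-based approximation is a single-marker scan, which can be sketched as below. An F2 population is simulated with made-up effect sizes, markers are treated as unlinked for brevity (real maps have linked markers), and imprinting effects are omitted — only additive and dominance codes are fitted.

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def qtl_scan(genos, pheno):
    """At each marker, regress phenotype on additive (g-1) and dominance
    (g==1) codes; return per-marker (R^2, [mu, a, d]) estimates."""
    n = len(pheno)
    ybar = sum(pheno) / n
    sst = sum((y - ybar) ** 2 for y in pheno)
    out = []
    for m in range(len(genos[0])):
        rows = [[1.0, g[m] - 1.0, 1.0 if g[m] == 1 else 0.0] for g in genos]
        xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
        xty = [sum(r[i] * y for r, y in zip(rows, pheno)) for i in range(3)]
        beta = solve(xtx, xty)
        sse = sum((y - sum(b * x for b, x in zip(beta, r))) ** 2
                  for r, y in zip(rows, pheno))
        out.append((1 - sse / sst, beta))
    return out

rng = random.Random(3)
# F2 genotypes at 5 markers, segregating 1:2:1 (0 = aa, 1 = Aa, 2 = AA)
genos = [[rng.choice([0, 1, 1, 2]) for _ in range(5)] for _ in range(400)]
# true QTL at marker index 2: additive effect 2.0, dominance effect 1.0
pheno = [10 + 2.0 * (g[2] - 1) + 1.0 * (g[2] == 1) + rng.gauss(0, 1)
         for g in genos]
scan = qtl_scan(genos, pheno)
best = max(range(5), key=lambda m: scan[m][0])
```

IM/CIM extend this by testing positions between markers (using conditional genotype probabilities) and, for CIM, adding background marker cofactors to the regression.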

  2. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.

  3. Progress towards GlobalSoilMap.net soil database of Denmark

    DEFF Research Database (Denmark)

    Adhikari, Kabindra; Bou Kheir, Rania; Greve, Mogens Humlekrog

    2012-01-01

    Denmark is an agriculture-based country where intensive mechanized cultivation has been practiced continuously for years, leading to serious threats to the soils. Proper use and management of Danish soil resources, modeling and soil research activities need very detailed soil information. This study...... presents recent advancements in Digital Soil Mapping (DSM) activities in Denmark with an example of soil clay mapping using regression-based DSM techniques. Several environmental covariates were used to build regression rules and national scale soil prediction was made at 30 m resolution. Spatial...... content mapping, the plans for future soil mapping activities in support of GlobalSoilMap.net project initiatives are also included in this paper. Our study sought to enrich and update the Danish soil database and soil information system with new fine-resolution soil property maps....

  4. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  5. Fast metabolite identification with Input Output Kernel Regression

    Science.gov (United States)

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is identifying metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output space and can handle structured output space such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated to the output kernel. The second phase is a preimage problem, consisting of mapping back the predicted output feature vectors to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
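
The two phases described above can be sketched on a toy problem: kernel ridge regression maps inputs into the output feature space, and the preimage step ranks a finite candidate set by feature-space distance. Linear kernels and tiny invented "spectra"/"fingerprint" vectors are used here purely for illustration; the real method uses spectrum kernels and large molecular databases.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def iokr_rank(train_x, train_y, test_x, candidates, lam=0.01):
    """Phase 1: kernel ridge regression into the output feature space.
    Phase 2: score each candidate y by ||psi(y) - f(test_x)||^2 up to a
    constant (linear kernels throughout); smaller score = better match."""
    n = len(train_x)
    K = [[dot(train_x[i], train_x[j]) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, [dot(x, test_x) for x in train_x])
    scores = []
    for y in candidates:
        cross = sum(a * dot(y, fy) for a, fy in zip(alpha, train_y))
        scores.append(dot(y, y) - 2.0 * cross)
    return scores

# toy data: input features coincide with the fingerprints themselves
fps = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]
scores = iokr_rank(fps, fps, fps[1], fps)
best = min(range(3), key=lambda i: scores[i])   # should recover fingerprint 1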

  6. Confidence bands for inverse regression models

    International Nuclear Information System (INIS)

    Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

    2010-01-01

    We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract

  7. Determination of fat content in chicken hamburgers using NIR spectroscopy and the Successive Projections Algorithm for interval selection in PLS regression (iSPA-PLS)

    Science.gov (United States)

    Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia

    2018-01-01

    Determining fat content in hamburgers is very important to minimize or control the negative effects of fat on human health, effects such as cardiovascular diseases and obesity, which are caused by the high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared Spectroscopy (NIR) and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg-1 were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w-1). NIR spectra were then recorded and then preprocessed by applying different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained for the first derivative Savitzky-Golay smoothing with a second-order polynomial and window size of 19 points, achieving a coefficient of correlation of 0.94, RMSEP of 1.59 mg kg-1, REP of 7.69% and RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method, since waste generation is avoided, yet without the use of either chemical reagents or solvents, which follows the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with the reference values at a 95% confidence level, making it very attractive for routine analysis.

  8. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence with respect to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in OpenBUGS software. The results showed that caregivers' recognition of infants' risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point estimation and interval estimation of the PR and the convergence of three models (model 1: not adjusting for the covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child month-age, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. The results showed that all three Bayesian log-binomial regression models converged and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged and their PRs were 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression model, but they had good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with fewer convergence problems and has advantages in application compared with the conventional log-binomial regression model.
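
The Bayesian OpenBUGS fits cannot be reproduced here, but for a single binary exposure the quantity being modeled — a prevalence ratio with an interval estimate — has a simple frequentist analogue from a 2x2 table, with a Wald interval on the log scale (delta method). The table counts below are invented to give a PR near the abstract's 1.13.

```python
import math

def prevalence_ratio(a, b, c, d, z=1.96):
    """PR for a 2x2 table: exposed group (a cases out of a+b) vs unexposed
    (c cases out of c+d), with a delta-method CI on the log scale."""
    p1, p0 = a / (a + b), c / (c + d)
    pr = p1 / p0
    se_log = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo, hi = pr * math.exp(-z * se_log), pr * math.exp(z * se_log)
    return pr, lo, hi
```

The log-binomial regression generalizes this to multiple covariates; the Bayesian version in the abstract additionally avoids the boundary problems (fitted probabilities above 1) that make the conventional fit fail to converge.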

  9. Pseudospectral methods on a semi-infinite interval with application to the hydrogen atom: a comparison of the mapped Fourier-sine method with Laguerre series and rational Chebyshev expansions

    International Nuclear Information System (INIS)

    Boyd, John P.; Rangan, C.; Bucksbaum, P.H.

    2003-01-01

    The Fourier-sine-with-mapping pseudospectral algorithm of Fattal et al. [Phys. Rev. E 53 (1996) 1217] has been applied in several quantum physics problems. Here, we compare it with pseudospectral methods using Laguerre functions and rational Chebyshev functions. We show that Laguerre and Chebyshev expansions are better suited for solving problems on the semi-infinite interval r ∈ [0, ∞) (for example, the Coulomb-Schroedinger equation) than the Fourier-sine-mapping scheme. All three methods give similar accuracy for the hydrogen atom when the scaling parameter L is optimum, but the Laguerre and Chebyshev methods are less sensitive to variations in L. We introduce a new variant of rational Chebyshev functions which has a more uniform spacing of grid points for large r, and gives somewhat better results than the rational Chebyshev functions of Boyd [J. Comp. Phys. 70 (1987) 63].

  10. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
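
Some of the descriptive statistics discussed in the report have simple tight bounds for interval data, because they are monotone in each data point; a minimal sketch (the interval data are invented, and harder statistics such as the variance, whose bounds are NP-hard to compute in general for interval data, are deliberately omitted):

```python
def interval_mean(data):
    """Tight bounds on the sample mean of interval-valued data [(lo, hi), ...]:
    the mean is monotone in every data point, so the extremes occur at the
    all-lower and all-upper endpoint configurations."""
    n = len(data)
    return (sum(lo for lo, _ in data) / n, sum(hi for _, hi in data) / n)

def interval_order_stat(data, k):
    """Bounds on the k-th order statistic (0-based), e.g. median/percentiles.
    Order statistics are also monotone in every data point, so the same
    endpoint argument applies."""
    los = sorted(lo for lo, _ in data)
    his = sorted(hi for _, hi in data)
    return (los[k], his[k])
```

For statistics that are not monotone in the data (variance, correlation), the report's point about computability matters: naive endpoint enumeration is exponential, and specialized algorithms or outer bounds are needed.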

  11. Energy compensation after sprint- and high-intensity interval training.

    Directory of Open Access Journals (Sweden)

    Matthew M Schubert

    Full Text Available Many individuals lose less weight than expected in response to exercise interventions when considering the increased energy expenditure of exercise (ExEE). This is due to energy compensation in response to ExEE, which may include increases in energy intake (EI) and decreases in non-exercise physical activity (NEPA). We examined the degree of energy compensation in healthy young men and women in response to interval training. Data were examined from a prior study in which 24 participants (mean age, BMI, and VO2max = 28 yrs, 27.7 kg·m-2, and 32 mL·kg-1·min-1) completed either 4 weeks of sprint-interval training or high-intensity interval training. Energy compensation was calculated from changes in body composition (air displacement plethysmography) and exercise energy expenditure was calculated from mean heart rate based on the heart rate-VO2 relationship. Differences between high (≥100%) and low (<100%) levels of energy compensation were assessed. Linear regressions were utilized to determine associations between energy compensation and ΔVO2max, ΔEI, ΔNEPA, and Δresting metabolic rate. Very large individual differences in energy compensation were noted. In comparison to individuals with low levels of compensation, individuals with high levels of energy compensation gained fat mass, lost fat-free mass, and had lower change scores for VO2max and NEPA. Linear regression results indicated that lower levels of energy compensation were associated with increases in ΔVO2max (p < 0.001) and ΔNEPA (p < 0.001). Considerable variation exists in response to short-term, low-dose interval training. In agreement with prior work, increases in ΔVO2max and ΔNEPA were associated with lower energy compensation. Future studies should focus on identifying if a dose-response relationship for energy compensation exists in response to interval training, and what underlying mechanisms and participant traits contribute to the large variation between individuals.
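
The "energy compensation from changes in body composition" calculation can be sketched as a few lines of arithmetic. The exact formula and energy densities used by the study are not given in the abstract; the values below (~9500 kcal/kg for fat mass, ~1100 kcal/kg for fat-free mass) are commonly cited literature assumptions, not the paper's.

```python
def energy_compensation(exee_kcal, delta_fm_kg, delta_ffm_kg,
                        ed_fm=9500.0, ed_ffm=1100.0):
    """Percent energy compensation: 0% means energy stores dropped by exactly
    the exercise energy expenditure; 100% means no net change in stores;
    >100% means stores increased despite the ExEE.
    ed_fm / ed_ffm are assumed energy densities (kcal per kg), not the
    study's own constants."""
    delta_stores = delta_fm_kg * ed_fm + delta_ffm_kg * ed_ffm  # kcal
    return 100.0 * (exee_kcal + delta_stores) / exee_kcal
```

So a participant who expends 20,000 kcal in training with no change in body composition scores 100% (complete compensation), while one whose fat-mass energy loss matches the ExEE scores 0%.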

  12. Background stratified Poisson regression analysis of cohort data

    International Nuclear Information System (INIS)

    Richardson, David B.; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)
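
The profiling idea described above has a convenient closed form for log-linear models: for a fixed radiation coefficient beta, the optimal stratum intercept is log(stratum deaths / stratum expected person-time), so the stratum parameters can be eliminated and only beta maximized. A minimal sketch with invented cell data (stratum, exposure indicator, person-years), constructed so the expected counts are used as the observed counts and the MLE sits at the true beta:

```python
import math

TRUE_BETA = 0.5
BASELINES = {"A": -2.0, "B": -1.0}          # assumed stratum log-rates
LAYOUT = [("A", 0, 1000.0), ("A", 1, 800.0),
          ("B", 0, 1200.0), ("B", 1, 900.0)]
# use expected counts as data so the MLE should recover TRUE_BETA exactly
cells = [(s, x, py, py * math.exp(BASELINES[s] + TRUE_BETA * x))
         for s, x, py in LAYOUT]

def profile_loglik(beta, cells):
    """Poisson log-likelihood with stratum intercepts profiled out:
    alpha_s(beta) = log(D_s / sum_i PY_i exp(beta*x_i)) in closed form."""
    ll = 0.0
    strata = {}
    for s, x, py, d in cells:
        strata.setdefault(s, [0.0, 0.0])
        strata[s][0] += d                      # stratum deaths D_s
        strata[s][1] += py * math.exp(beta * x)
        ll += d * beta * x
    for d_s, denom in strata.values():
        ll += d_s * math.log(d_s / denom) - d_s
    return ll

def maximize(f, lo, hi, iters=200):
    """Ternary search; valid here because the profile log-likelihood of a
    Poisson GLM with log link is concave."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

beta_hat = maximize(lambda b: profile_loglik(b, cells), -2.0, 2.0)
```

This mirrors the abstract's point: the profiled ("conditional") fit gives the same estimate as a full fit with one indicator term per stratum, without ever estimating those intercepts.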

  13. Multitask Quantile Regression under the Transnormal Model.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2016-01-01

    We consider estimating multi-task quantile regression under the transnormal model, with focus on the high-dimensional setting. We derive a surprisingly simple closed-form solution through rank-based covariance regularization. In particular, we propose the rank-based ℓ1 penalization with positive definite constraints for estimating sparse covariance matrices, and the rank-based banded Cholesky decomposition regularization for estimating banded precision matrices. By taking advantage of the alternating direction method of multipliers, a nearest correlation matrix projection is introduced that inherits the sampling properties of the unprojected one. Our work combines strengths of quantile regression and rank-based covariance regularization to simultaneously deal with nonlinearity and nonnormality for high-dimensional regression. Furthermore, the proposed method strikes a good balance between robustness and efficiency, achieves the "oracle"-like convergence rate, and provides the provable prediction interval under the high-dimensional setting. The finite-sample performance of the proposed method is also examined. The performance of our proposed rank-based method is demonstrated in a real application to analyze the protein mass spectroscopy data.

  14. Resting-state functional magnetic resonance imaging: the impact of regression analysis.

    Science.gov (United States)

    Yeh, Chia-Jung; Tseng, Yu-Sheng; Lin, Yi-Ru; Tsai, Shang-Yueh; Huang, Teng-Yi

    2015-01-01

    To investigate the impact of regression methods on resting-state functional magnetic resonance imaging (rsfMRI). During rsfMRI preprocessing, regression analysis is considered effective for reducing the interference of physiological noise on the signal time course. However, it is unclear whether the regression method benefits rsfMRI analysis. Twenty volunteers (10 men and 10 women; aged 23.4 ± 1.5 years) participated in the experiments. We used node analysis and functional connectivity mapping to assess the brain default mode network by using five combinations of regression methods. The results show that regressing the global mean plays a major role in the preprocessing steps. When a global regression method is applied, the values of functional connectivity are significantly lower (P ≤ .01) than those calculated without a global regression. This step increases inter-subject variation and produces anticorrelated brain areas. rsfMRI data processed using regression should be interpreted carefully. The significance of the anticorrelated brain areas produced by global signal removal is unclear. Copyright © 2014 by the American Society of Neuroimaging.

  15. Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

    Science.gov (United States)

    Aagten-Murphy, David; Cappagli, Giulia; Burr, David

    2014-03-01

    Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between all durations of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to

  16. Seafloor mapping of large areas using multibeam system - Indian experience

    Digital Repository Service at National Institute of Oceanography (India)

    Kodagali, V.N.; KameshRaju, K.A; Ramprasad, T.

    averaged and merged to produce large-area maps. Maps were generated at scales of 1:1 million and 1:1.5 million, covering an area of about 2 million sq. km in a single map. Depth contour intervals were also generated. A computer program was developed to convert the depth data...

  17. Translation of Bernstein Coefficients Under an Affine Mapping of the Unit Interval

    Science.gov (United States)

    Alford, John A., II

    2012-01-01

    We derive an expression connecting the coefficients of a polynomial expanded in the Bernstein basis to the coefficients of an equivalent expansion of the polynomial under an affine mapping of the domain. The expression may be useful in the calculation of bounds for multi-variate polynomials.
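
The paper's explicit coefficient expression is not given in this excerpt, but for the common case of an affine map onto a subinterval [u, v] of [0, 1] the transformed Bernstein coefficients can be computed by two de Casteljau subdivisions; a sketch (the derivation in the paper may use a direct formula instead):

```python
def de_casteljau(coeffs, t):
    """Run the de Casteljau triangle at t. Returns (value, left, right),
    where left/right are the Bernstein coefficients of the same polynomial
    reparameterized onto [0, t] and [t, 1] respectively."""
    rows = [list(coeffs)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([(1 - t) * a + t * b for a, b in zip(prev, prev[1:])])
    left = [r[0] for r in rows]
    right = [r[-1] for r in reversed(rows)]
    return rows[-1][0], left, right

def bernstein_on_subinterval(coeffs, u, v):
    """Bernstein coefficients of q(t) = p(u + (v-u)*t) for 0 <= u < v <= 1
    (requires u < 1, since the second split rescales the tail [u, 1])."""
    _, _, right = de_casteljau(coeffs, u)        # restrict to [u, 1]
    s = (v - u) / (1 - u)
    _, left, _ = de_casteljau(right, s)          # then cut down to [u, v]
    return left
```

For example, x^2 has degree-2 Bernstein coefficients [0, 0, 1]; on [0.2, 0.6] the reparameterized polynomial (0.2 + 0.4t)^2 has coefficients [0.04, 0.12, 0.36], which the routine reproduces.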

  18. Technical note: Instantaneous sampling intervals validated from continuous video observation for behavioral recording of feedlot lambs.

    Science.gov (United States)

    Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L

    2017-11-01

    When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set whereas instantaneous sampling can provide similar results and also increase the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined if a time interval accurately estimated behaviors: 1) R² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
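
The three regression criteria can be checked with ordinary least squares and t-tests on the slope and intercept; a sketch with invented continuous-vs-scan durations (the critical t-value is passed in rather than computed, since the standard library has no t-distribution):

```python
import math

def validate_interval(cont, scan, t_crit=2.228):
    """Check the study's three criteria for a scan-sampling interval:
    R^2 >= 0.90, slope CI covers 1, intercept CI covers 0.
    t_crit should be t(0.975, n-2); the default 2.228 matches n = 12."""
    n = len(cont)
    mx, my = sum(cont) / n, sum(scan) / n
    sxx = sum((x - mx) ** 2 for x in cont)
    sxy = sum((x - mx) * (y - my) for x, y in zip(cont, scan))
    slope = sxy / sxx
    intercept = my - slope * mx
    sse = sum((y - intercept - slope * x) ** 2 for x, y in zip(cont, scan))
    sst = sum((y - my) ** 2 for y in scan)
    r2 = 1 - sse / sst
    s2 = sse / (n - 2)                       # residual variance
    se_slope = math.sqrt(s2 / sxx)
    se_int = math.sqrt(s2 * (1 / n + mx * mx / sxx))
    ok = (r2 >= 0.90
          and abs(slope - 1) <= t_crit * se_slope
          and abs(intercept) <= t_crit * se_int)
    return ok, r2, slope, intercept
```

A scan interval whose totals track the continuous record closely passes all three checks; one that is essentially unrelated to the continuous record fails on both R^2 and slope.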

  19. Estimating transmitted waves of floating breakwater using support vector regression model

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Hegde, A.V.; Kumar, V.; Patil, S.G.

    is first mapped onto an m-dimensional feature space using some fixed (nonlinear) mapping, and then a linear model is constructed in this feature space (Ivanciuc, 2007). Using mathematical notation, the linear model in the feature space f(x, w... support vector machines, Ocean Engineering, Vol. 36, pp. 339-347, 2009. 3. Ivanciuc, O., Applications of support vector machines in chemistry, Reviews in Computational Chemistry, Eds. K. B. Lipkowitz and T. R. Cundari, Vol. 23...

  20. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% of regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China.

    Science.gov (United States)

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-05-11

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides.
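
    A hedged sketch of the PSO step described above (the toy objective and all parameter values are invented for illustration; in the study, PSO tunes SVM hyperparameters within each prediction region):

```python
import random

# Minimal particle swarm optimization (PSO) sketch: a swarm of candidate
# hyperparameter values moves toward the best positions found so far.
# The quadratic objective below is a stand-in for a cross-validation
# error curve; it is not taken from the study.
def pso_minimize(f, lo, hi, n_particles=20, iters=60, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # best position seen by each particle
    gbest = min(xs, key=f)              # best position seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i]
                     + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # keep within bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Toy objective with its minimum at 3.0:
best = pso_minimize(lambda c: (c - 3.0) ** 2, 0.0, 10.0)
```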

  2. Visualizing the Logistic Map with a Microcontroller

    Science.gov (United States)

    Serna, Juan D.; Joshi, Amitabh

    2012-01-01

    The logistic map is one of the simplest nonlinear dynamical systems that clearly exhibits the route to chaos. In this paper, we explore the evolution of the logistic map using an open-source microcontroller connected to an array of light-emitting diodes (LEDs). We divide the one-dimensional domain interval [0,1] into ten equal parts, and associate…
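
    The iteration described above can be sketched in a few lines: iterate x → r·x·(1 − x), split [0, 1] into ten equal bins, and report which bin (a hypothetical LED index 0–9) each iterate falls into. The parameter values are illustrative, not from the paper.

```python
def logistic_orbit(r, x0, n):
    """Return the first n iterates of the logistic map x -> r*x*(1-x)."""
    orbit = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

def led_index(x, bins=10):
    """Map x in [0, 1] to one of `bins` equal subintervals (LED number)."""
    return min(int(x * bins), bins - 1)

# r = 3.9 lies in the chaotic regime; the orbit visits many bins.
orbit = logistic_orbit(r=3.9, x0=0.2, n=5)
leds = [led_index(x) for x in orbit]
```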

  3. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    Science.gov (United States)

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
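
    A minimal sketch of the zoib density discussed above, assuming point masses p0 and p1 at zero and one and a Beta(a, b) density on the interior; all parameter values are illustrative.

```python
import math

def beta_pdf(y, a, b):
    """Beta(a, b) density on (0, 1), via log-gamma for stability."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(y) + (b - 1) * math.log(1 - y))

def zoib_density(y, p0, p1, a, b):
    """Zero-or-one-inflated beta: point masses at 0 and 1, beta inside."""
    if y == 0.0:
        return p0                       # point mass at zero
    if y == 1.0:
        return p1                       # point mass at one
    return (1.0 - p0 - p1) * beta_pdf(y, a, b)

# The continuous part plus the two point masses should total ~1:
n = 100000
grid = [(i + 0.5) / n for i in range(n)]
cont = sum(zoib_density(y, 0.1, 0.05, 2.0, 3.0) for y in grid) / n
total = cont + 0.1 + 0.05
```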

  4. Molecular mapping of chromosomes 17 and X. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Barker, D.F.

    1991-01-15

    Progress toward the construction of high density genetic maps of chromosomes 17 and X has been made by isolating and characterizing a relatively large set of polymorphic probes for each chromosome and using these probes to construct genetic maps. We have mapped the same polymorphic probes against a series of chromosome breakpoints on X and 17. The probes could be assigned to over 30 physical intervals on the X chromosome and 7 intervals on 17. In many cases, this process resulted in improved characterization of the relative locations of the breakpoints with respect to each other and the definition of new physical intervals. The strategy for isolation of the polymorphic clones utilized chromosome specific libraries of 1–15 kb segments from each of the two chromosomes. From these libraries, clones were screened for those detecting restriction fragment length polymorphisms. The markers were further characterized, the chromosomal assignments confirmed and in most cases segments of the original probes were subcloned into plasmids to produce probes with improved signal to noise ratios for use in the genetic marker studies. The linkage studies utilize the CEPH reference families and other well-characterized families in our collection which have been used for genetic disease linkage work. Preliminary maps and maps of portions of specific regions of 17 and X are provided. We have nearly completed a map of the 1 megabase Mycoplasma arthritidis genome by applying these techniques to a lambda phage library of its genome. We have found bit mapping to be an efficient means to organize a contiguous set of overlapping clones from a larger genome.

  5. Use of multiple logistic regressions for digital soil mapping in the Planalto Médio of RS (Multiple logistic regression applied to soil survey in Rio Grande do Sul State, Brazil)

    Directory of Open Access Journals (Sweden)

    Samuel Ribeiro Figueiredo

    2008-12-01

    hydrographic variables (distance to rivers, flow length, topographical wetness index, and stream power index). Multiple logistic regressions were established between the soil classes mapped on the basis of a traditional survey at a scale of 1:80.000 and the land variables calculated using the DEM. The regressions were used to calculate the probability of occurrence of each soil class. The final estimated soil map was drawn by assigning to each cell the soil class with the highest probability of occurrence. The general accuracy was evaluated at 58 % and the Kappa coefficient at 38 % in a comparison of the original soil map with the map estimated at the original scale. A legend simplification had little effect on the general accuracy of the map (general accuracy of 61 % and Kappa coefficient of 39 %). It was concluded that multiple logistic regressions have predictive potential as a tool for supervised soil mapping.
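
    The final mapping step described above (per-cell class probabilities, then argmax) can be sketched as follows; the soil class names, coefficients, and covariates below are invented stand-ins, not values from the study.

```python
import math

# Hypothetical class -> (intercept, b_elevation, b_wetness) coefficients.
COEFS = {
    "Latossolo": (0.2, 1.1, -0.5),
    "Argissolo": (-0.1, -0.4, 0.9),
    "Gleissolo": (-0.6, -1.0, 1.5),
}

def class_probabilities(elevation, wetness):
    """Multinomial-logistic probabilities via a stabilised softmax."""
    scores = {c: b0 + b1 * elevation + b2 * wetness
              for c, (b0, b1, b2) in COEFS.items()}
    m = max(scores.values())
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

def most_probable_class(elevation, wetness):
    """Assign the cell the class with the highest probability."""
    probs = class_probabilities(elevation, wetness)
    return max(probs, key=probs.get)
```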

  6. Mapping the HISS Dipole

    International Nuclear Information System (INIS)

    McParland, C.; Bieser, F.

    1984-01-01

    The principal component of the Bevalac HISS facility is a large super-conducting 3 Tesla dipole. The facility's need for a large magnetic volume spectrometer resulted in a large gap geometry - a 2 meter pole tip diameter and a 1 meter pole gap. Obviously, the field required detailed mapping for effective use as a spectrometer. The mapping device was designed with several major features in mind. The device would measure field values on a grid which described a closed rectangular solid. The grid would be regular, with the exact measurement intervals adjustable by software. The device would function unattended over the long period of time required to complete a field map. During this time, the progress of the map could be monitored by anyone with access to the HISS VAX computer. Details of the mechanical, electrical, and control design follow

  7. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    Directory of Open Access Journals (Sweden)

    Omholt Stig W

    2011-06-01

    Full Text Available Abstract Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops.
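
    A toy sketch of the cluster-then-regress-locally idea, substituting 1-D k-means for fuzzy C-means and ordinary least squares for PLSR so the example stays dependency-free:

```python
def kmeans_1d(xs, k=2, iters=20):
    """Crude 1-D k-means: centers spread over the data range, then refined."""
    centers = [min(xs) + j * (max(xs) - min(xs)) / (k - 1) for j in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def ols_line(xs, ys):
    """Simple least-squares line; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def fit_local(xs, ys, k=2):
    """Cluster the inputs, then fit one linear model per cluster."""
    centers = kmeans_1d(xs, k)
    models = []
    for c in centers:
        idx = [i for i, x in enumerate(xs)
               if min(centers, key=lambda cc: abs(x - cc)) == c]
        models.append(ols_line([xs[i] for i in idx], [ys[i] for i in idx]))
    return centers, models

def predict_local(x, centers, models):
    """Predict with the model of the nearest cluster."""
    j = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
    a, b = models[j]
    return a + b * x
```

    On piecewise-linear data the local models recover each regime exactly, which a single global line cannot do.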

  8. Hierarchical cluster-based partial least squares regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models.

    Science.gov (United States)

    Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald

    2011-06-01

    Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. 
HC-PLSR is a promising approach for

  9. DIGITAL GEOECOLOGICAL MAPS AND SEVERAL METHODS OF ITS CONSTRUCTING IN A GIS ARCGIS

    Directory of Open Access Journals (Sweden)

    S. V. Lebedev

    2015-01-01

    Full Text Available This article focuses on methods of constructing digital geoecological maps using GIS ArcGIS technologies. The ArcGIS mapping technique is illustrated by examples of GIS maps of radioactive and chemical pollution in the snow cover on the territory of St. Petersburg (Russia). Geostatistical and deterministic approaches were applied for the interpolation of the input data, which were given as the coordinates of points located on the territory according to the measurement scheme. The optimal number of classification intervals for describing natural processes and phenomena with continuous spatial distribution on geoecological GIS maps is 3 to 5 intervals of the parameter under consideration. The boundaries of the interval classes are set depending on the existing pollution standards for the different components of the environment and on the empirical distribution of the studied parameter over the territory under consideration.

  10. Comparing the performance of various digital soil mapping approaches to map physical soil properties

    Science.gov (United States)

    Laborczi, Annamária; Takács, Katalin; Pásztor, László

    2015-04-01

    Spatial information on physical soil properties is intensely expected, in order to support environmental related and land use management decisions. One of the most widely used properties to characterize soils physically is particle size distribution (PSD), which determines soil water management and cultivability. According to their size, different particles can be categorized as clay, silt, or sand. The size intervals are defined by national or international textural classification systems. The relative percentage of sand, silt, and clay in the soil constitutes textural classes, which are also specified miscellaneously in various national and/or specialty systems. The most commonly used is the classification system of the United States Department of Agriculture (USDA). Soil texture information is essential input data in meteorological, hydrological and agricultural prediction modelling. Although Hungary has a great deal of legacy soil maps and other relevant soil information, it often occurs, that maps do not exist on a certain characteristic with the required thematic and/or spatial representation. The recent developments in digital soil mapping (DSM), however, provide wide opportunities for the elaboration of object specific soil maps (OSSM) with predefined parameters (resolution, accuracy, reliability etc.). Due to the simultaneous richness of available Hungarian legacy soil data, spatial inference methods and auxiliary environmental information, there is a high versatility of possible approaches for the compilation of a given soil map. This suggests the opportunity of optimization. For the creation of an OSSM one might intend to identify the optimum set of soil data, method and auxiliary co-variables optimized for the resources (data costs, computation requirements etc.). We started comprehensive analysis of the effects of the various DSM components on the accuracy of the output maps on pilot areas. The aim of this study is to compare and evaluate different

  11. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
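
    The two interval types contrasted above can be sketched for simple linear regression: the confidence interval describes uncertainty in the fitted mean, while the prediction interval adds the residual variation of individuals, so it is always wider. A fixed normal quantile (1.96) stands in for the t quantile to keep the example self-contained.

```python
import math

def interval_halfwidths(xs, ys, x0, z=1.96):
    """Half-widths of the ~95% confidence and prediction intervals at x0."""
    n = len(xs)
    mx = sum(xs) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - sum(ys) / n) for x, y in zip(xs, ys)) / sxx
    a = sum(ys) / n - b * mx
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (n - 2)       # residual variance
    mean_var = s2 * (1.0 / n + (x0 - mx) ** 2 / sxx)
    ci = z * math.sqrt(mean_var)                   # uncertainty in the mean
    pi = z * math.sqrt(s2 + mean_var)              # adds individual variation
    return ci, pi
```

    Both intervals widen away from the mean of x, but the prediction interval never collapses below the residual spread, which is the effect the abstract describes for plots with many trees.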

  12. Differential preparation intervals modulate repetition processes in task switching: an ERP study

    Directory of Open Access Journals (Sweden)

    Min eWang

    2016-02-01

    Full Text Available In task-switching paradigms, reaction times (RTs), switch cost (SC) and the neural correlates underlying the SC are affected by different preparation intervals. However, little is known about the effect of the preparation interval on the repetition processes in task-switching. To examine this effect we utilized a cued task-switching paradigm with long sequences of repeated trials. Response-stimulus intervals (RSI) and cue-stimulus intervals (CSI) were manipulated in short and long conditions. Electroencephalography (EEG) and behavioral data were recorded. We found that with increasing repetitions, RTs were faster in the short CSI conditions, while P3 amplitudes decreased in the LS (long RSI and short CSI) condition. Positive correlations between RT benefit and P3 activation decrease (repeat 1 minus repeat 5), and between the slopes of the RT and P3 regression lines, were observed only in the LS condition. Our findings suggest that differential preparation intervals modulate repetition processes in task switching.

  13. Spectral density regression for bivariate extremes

    KAUST Repository

    Castro Camilo, Daniela

    2016-05-11

    We introduce a density regression model for the spectral density of a bivariate extreme value distribution, that allows us to assess how extremal dependence can change over a covariate. Inference is performed through a double kernel estimator, which can be seen as an extension of the Nadaraya–Watson estimator where the usual scalar responses are replaced by mean constrained densities on the unit interval. Numerical experiments with the methods illustrate their resilience in a variety of contexts of practical interest. An extreme temperature dataset is used to illustrate our methods. © 2016 Springer-Verlag Berlin Heidelberg
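
    For reference, the classical Nadaraya–Watson estimator that the double kernel estimator above extends is a kernel-weighted average of the responses; the Gaussian kernel and bandwidth below are illustrative choices, not the paper's.

```python
import math

def nadaraya_watson(x0, xs, ys, h=0.5):
    """Kernel regression estimate at x0: weighted average of the ys,
    with Gaussian weights centred at x0 and bandwidth h."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```

    In the paper the scalar responses ys are replaced by mean-constrained densities on the unit interval, but the weighting scheme is the same idea.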

  14. High Resolution of Quantitative Traits Into Multiple Loci via Interval Mapping

    OpenAIRE

    Jansen, Ritsert C.; Stam, Piet

    1994-01-01

    A very general method is described for multiple linear regression of a quantitative phenotype on genotype [putative quantitative trait loci (QTLs) and markers] in segregating generations obtained from line crosses. The method exploits two features, (a) the use of additional parental and F1 data, which fixes the joint QTL effects and the environmental error, and (b) the use of markers as cofactors, which reduces the genetic background noise. As a result, a significant increase of QTL detection...
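
    A minimal sketch of the regression at the core of the method: the phenotype is regressed on a putative QTL genotype together with a marker genotype used as a cofactor to absorb genetic background noise. The 0/1 genotype coding (as in a backcross) and the data below are invented for illustration.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least squares via the normal equations X'X beta = X'y."""
    p = len(X[0])
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    return solve(xtx, xty)

# Columns: intercept, putative QTL genotype, cofactor marker genotype.
X = [[1.0, q, m] for q, m in [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (1, 1)]]
y = [10.0, 11.0, 13.0, 14.0, 10.0, 14.0]
beta = ols(X, y)   # [intercept, QTL effect, cofactor effect]
```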

  15. A simple proof of the exactness of expanding maps of the interval with an indifferent fixed point

    International Nuclear Information System (INIS)

    Lenci, Marco

    2016-01-01

    Expanding maps with indifferent fixed points, a.k.a. intermittent maps, are popular models in nonlinear dynamics and infinite ergodic theory. We present a simple proof of the exactness of a wide class of expanding maps of [0, 1], with countably many surjective branches and a strongly neutral fixed point at 0.
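
    A standard member of this class is the Pomeau-Manneville map T(x) = x + x^(1+alpha) mod 1, which is expanding away from the origin but has a neutral (indifferent) fixed point at 0, where T(0) = 0 and T'(0) = 1. A quick sketch:

```python
def pomeau_manneville(x, alpha=0.5):
    """One step of the intermittent map x -> x + x^(1+alpha) (mod 1)."""
    return (x + x ** (1.0 + alpha)) % 1.0

# Near the neutral fixed point the orbit escapes only slowly
# (the hallmark of intermittency):
orbit = [1e-3]
for _ in range(100):
    orbit.append(pomeau_manneville(orbit[-1]))
```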

  16. Preliminary genetic linkage map of the abalone Haliotis diversicolor Reeve

    Science.gov (United States)

    Shi, Yaohua; Guo, Ximing; Gu, Zhifeng; Wang, Aimin; Wang, Yan

    2010-05-01

    Haliotis diversicolor Reeve is one of the most important mollusks cultured in South China. Preliminary genetic linkage maps were constructed with amplified fragment length polymorphism (AFLP) markers. A total of 2 596 AFLP markers were obtained from 28 primer combinations in two parents and 78 offspring. Among them, 412 markers (15.9%) were polymorphic and segregated in the mapping family. Chi-square tests showed that 151 (84.4%) markers segregated according to the expected 1:1 Mendelian ratio (P<0.05) in the female parent, and 200 (85.8%) in the male parent. For the female map, 179 markers were used for linkage analysis and 90 markers were assigned to 17 linkage groups with an average interval length of 25.7 cM. For the male map, 233 markers were used and 94 were mapped into 18 linkage groups, with an average interval of 25.0 cM. The estimated genome length was 2 773.0 cM for the female and 2 817.1 cM for the male map. The observed length of the linkage map was 1 875.2 cM and 1 896.5 cM for the female and male maps, respectively. When doublets were considered, the map length increased to 2 152.8 cM for the female and 2 032.7 cM for the male map, corresponding to genome coverage of 77.6% and 72.2%, respectively.

  17. Identification of milling and baking quality QTL in multiple soft wheat mapping populations.

    Science.gov (United States)

    Cabrera, Antonio; Guttieri, Mary; Smith, Nathan; Souza, Edward; Sturbaum, Anne; Hua, Duc; Griffey, Carl; Barnett, Marla; Murphy, Paul; Ohm, Herb; Uphaus, Jim; Sorrells, Mark; Heffner, Elliot; Brown-Guedira, Gina; Van Sanford, David; Sneller, Clay

    2015-11-01

    Two mapping approaches were used to identify and validate milling and baking quality QTL in soft wheat. Two linkage groups were consistently found to be important for multiple traits, and we recommend marker-assisted selection on the specific markers reported here. Wheat-derived food products require a range of characteristics. Identification and understanding of the genetic components controlling end-use quality of wheat is important for crop improvement. We assessed the underlying genetics controlling specific milling and baking quality parameters of soft wheat, including flour yield, softness equivalent, flour protein, and sucrose, sodium carbonate, water absorption, and lactic acid solvent retention capacities, in a diversity panel and five bi-parental mapping populations. The populations were genotyped with SSR and DArT markers, with markers specific for the 1BL.1RS translocation and sucrose synthase gene. Association analysis and composite interval mapping were performed to identify quantitative trait loci (QTL). High heritability was observed for each of the traits evaluated, trait correlations were consistent over populations, and transgressive segregants were common in all bi-parental populations. A total of 26 regions were identified as potential QTL in the diversity panel and 74 QTL were identified across all five bi-parental mapping populations. Collinearity of QTL from chromosomes 1B and 2B was observed across mapping populations and was consistent with results from the association analysis in the diversity panel. Multiple regression analysis showed the importance of the two 1B and 2B regions, and marker-assisted selection for the favorable alleles at these regions should improve quality.

  18. Toward Customer-Centric Organizational Science: A Common Language Effect Size Indicator for Multiple Linear Regressions and Regressions With Higher-Order Terms.

    Science.gov (United States)

    Krasikova, Dina V; Le, Huy; Bachura, Eric

    2018-01-22

    To address a long-standing concern regarding a gap between organizational science and practice, scholars called for more intuitive and meaningful ways of communicating research results to users of academic research. In this article, we develop a common language effect size index (CLβ) that can help translate research results to practice. We demonstrate how CLβ can be computed and used to interpret the effects of continuous and categorical predictors in multiple linear regression models. We also elaborate on how the proposed CLβ index is computed and used to interpret interactions and nonlinear effects in regression models. In addition, we test the robustness of the proposed index to violations of normality and provide means for computing standard errors and constructing confidence intervals around its estimates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Change in Breast Cancer Screening Intervals Since the 2009 USPSTF Guideline.

    Science.gov (United States)

    Wernli, Karen J; Arao, Robert F; Hubbard, Rebecca A; Sprague, Brian L; Alford-Teaster, Jennifer; Haas, Jennifer S; Henderson, Louise; Hill, Deidre; Lee, Christoph I; Tosteson, Anna N A; Onega, Tracy

    2017-08-01

    In 2009, the U.S. Preventive Services Task Force (USPSTF) recommended biennial mammography for women aged 50-74 years and shared decision-making for women aged 40-49 years for breast cancer screening. We evaluated changes in mammography screening interval after the 2009 recommendations. We conducted a prospective cohort study of women aged 40-74 years who received 821,052 screening mammograms between 2006 and 2012 using data from the Breast Cancer Surveillance Consortium. We compared changes in screening intervals and stratified intervals based on whether the mammogram at the end of the interval occurred before or after the 2009 recommendation. Differences in mean interval length by woman-level characteristics were compared using linear regression. The mean interval (in months) minimally decreased after the 2009 USPSTF recommendations. Among women aged 40-49 years, the mean interval decreased from 17.2 months to 17.1 months (difference -0.16%, 95% confidence interval [CI] -0.30 to -0.01). Similar small reductions were seen for most age groups. The largest change in interval length in the post-USPSTF period was declines among women with a first-degree family history of breast cancer (difference -0.68%, 95% CI -0.82 to -0.54) or a 5-year breast cancer risk ≥2.5% (difference -0.58%, 95% CI -0.73 to -0.44). The 2009 USPSTF recommendation did not lengthen the average mammography interval among women routinely participating in mammography screening. Future studies should evaluate whether breast cancer screening intervals lengthen toward biennial intervals following new national 2016 breast cancer screening recommendations, particularly among women less than 50 years of age.

  20. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China

    Directory of Open Access Journals (Sweden)

    Xianyu Yu

    2016-05-01

    Full Text Available In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides.

  1. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    Science.gov (United States)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.

  2. Monophasic action potentials and activation recovery intervals as measures of ventricular action potential duration: experimental evidence to resolve some controversies

    NARCIS (Netherlands)

    Coronel, Ruben; de Bakker, Jacques M. T.; Wilms-Schopman, Francien J. G.; Opthof, Tobias; Linnenbank, André C.; Belterman, Charly N.; Janse, Michiel J.

    2006-01-01

    BACKGROUND: Activation recovery intervals (ARIs) and monophasic action potential (MAP) duration are used as measures of action potential duration in beating hearts. However, controversies exist concerning the correct way to record MAPs or calculate ARIs. We have addressed these issues

  3. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work, nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on the obtained results the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. Heterogeneous model ensembles performed for individual time point assessments at least as well as the best individual models. In the case of imperviousness change assessment the ensembles always outperformed single model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
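
    The ensembling idea above can be sketched with a simple prediction average; the numbers are invented to show that models whose errors are anti-correlated combine into a more accurate prediction than either model alone.

```python
def ensemble_mean(predictions):
    """Average per-sample predictions from several models."""
    n_models = len(predictions)
    return [sum(p[i] for p in predictions) / n_models
            for i in range(len(predictions[0]))]

def mse(pred, truth):
    """Mean squared error of a prediction vector."""
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

truth = [0.2, 0.5, 0.8, 0.3]
model_a = [0.3, 0.4, 0.9, 0.2]   # errors +0.1, -0.1, +0.1, -0.1
model_b = [0.1, 0.6, 0.7, 0.4]   # errors -0.1, +0.1, -0.1, +0.1
combined = ensemble_mean([model_a, model_b])
```

    With correlated errors, by contrast, averaging brings little benefit, which is the trade-off the abstract notes for Random Forest's change predictions.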

  4. Reproducibility of somatosensory spatial perceptual maps.

    Science.gov (United States)

    Steenbergen, Peter; Buitenweg, Jan R; Trojan, Jörg; Veltink, Peter H

    2013-02-01

    Various studies have shown subjects to mislocalize cutaneous stimuli in an idiosyncratic manner. Spatial properties of individual localization behavior can be represented in the form of perceptual maps. Individual differences in these maps may reflect properties of internal body representations, and perceptual maps may therefore be a useful method for studying these representations. For this to be the case, individual perceptual maps need to be reproducible, which has not yet been demonstrated. We assessed the reproducibility of localizations measured twice on subsequent days. Ten subjects participated in the experiments. Non-painful electrocutaneous stimuli were applied at seven sites on the lower arm. Subjects localized the stimuli on a photograph of their own arm, which was presented on a tablet screen overlaying the real arm. Reproducibility was assessed by calculating intraclass correlation coefficients (ICC) for the mean localizations of each electrode site and the slope and offset of regression models of the localizations, which represent scaling and displacement of perceptual maps relative to the stimulated sites. The ICCs of the mean localizations ranged from 0.68 to 0.93; the ICCs of the regression parameters were 0.88 for the intercept and 0.92 for the slope. These results indicate a high degree of reproducibility. We conclude that localization patterns of non-painful electrocutaneous stimuli on the arm are reproducible on subsequent days. Reproducibility is a necessary property of perceptual maps for these to reflect properties of a subject's internal body representations. Perceptual maps are therefore a promising method for studying body representations.
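
    The reproducibility measure used above, the intraclass correlation coefficient, can be computed directly. Below is a minimal one-way random-effects ICC for two sessions per subject, on hypothetical localization coordinates (the numbers are invented for illustration, not taken from the study).

```python
from statistics import mean

# Hypothetical localization coordinates (mm) for ten subjects on two days.
day1 = [12.1, 15.3, 9.8, 14.2, 11.0, 16.5, 13.3, 10.4, 15.9, 12.8]
day2 = [12.4, 15.0, 10.1, 13.8, 11.3, 16.9, 13.0, 10.0, 16.2, 13.1]

def icc_oneway(a, b):
    """One-way random-effects ICC(1,1) for two sessions per subject."""
    n, k = len(a), 2
    grand = mean(a + b)
    subj_means = [(x + y) / 2 for x, y in zip(a, b)]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2 + (y - m) ** 2
              for x, y, m in zip(a, b, subj_means)) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

icc = icc_oneway(day1, day2)
```

    An ICC near 1 means between-subject differences dwarf day-to-day variation within subjects, which is the sense in which the perceptual maps above are "reproducible".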

  5. Stochastic development regression on non-linear manifolds

    DEFF Research Database (Denmark)

    Kühnel, Line; Sommer, Stefan Horst

    2017-01-01

    We introduce a regression model for data on non-linear manifolds. The model describes the relation between a set of manifold valued observations, such as shapes of anatomical objects, and Euclidean explanatory variables. The approach is based on stochastic development of Euclidean diffusion...... processes to the manifold. Defining the data distribution as the transition distribution of the mapped stochastic process, parameters of the model, the non-linear analogue of design matrix and intercept, are found via maximum likelihood. The model is intrinsically related to the geometry encoded...

  6. Prediction of hearing outcomes by multiple regression analysis in patients with idiopathic sudden sensorineural hearing loss.

    Science.gov (United States)

    Suzuki, Hideaki; Tabata, Takahisa; Koizumi, Hiroki; Hohchi, Nobusuke; Takeuchi, Shoko; Kitamura, Takuro; Fujino, Yoshihisa; Ohbuchi, Toyoaki

    2014-12-01

    This study aimed to create a multiple regression model for predicting hearing outcomes of idiopathic sudden sensorineural hearing loss (ISSNHL). The participants were 205 consecutive patients (205 ears) with ISSNHL (hearing level ≥ 40 dB, interval between onset and treatment ≤ 30 days). They received systemic steroid administration combined with intratympanic steroid injection. Data were examined by simple and multiple regression analyses. Three hearing indices (percentage hearing improvement, hearing gain, and posttreatment hearing level [HLpost]) and 7 prognostic factors (age, days from onset to treatment, initial hearing level, initial hearing level at low frequencies, initial hearing level at high frequencies, presence of vertigo, and contralateral hearing level) were included in the multiple regression analysis as dependent and explanatory variables, respectively. In the simple regression analysis, the percentage hearing improvement, hearing gain, and HLpost showed significant correlation with 2, 5, and 6 of the 7 prognostic factors, respectively. The multiple correlation coefficients were 0.396, 0.503, and 0.714 for the percentage hearing improvement, hearing gain, and HLpost, respectively. Predicted values of HLpost calculated by the multiple regression equation were reliable with 70% probability with a 40-dB-width prediction interval. Prediction of HLpost by the multiple regression model may be useful to estimate the hearing prognosis of ISSNHL. © The Author(s) 2014.
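
    A multiple regression model of the kind described can be fitted by solving the normal equations. The sketch below uses two hypothetical predictors (loosely, age and initial hearing level; all values are invented) and plain Gaussian elimination, so it is self-contained.

```python
# Hypothetical design matrix: intercept plus two made-up predictors
# (e.g. age in years and initial hearing level in dB).
X = [[1, 35, 60], [1, 42, 75], [1, 58, 80], [1, 63, 95],
     [1, 47, 70], [1, 29, 55], [1, 71, 100], [1, 52, 85]]
y = [30, 42, 50, 66, 40, 25, 78, 55]  # hypothetical posttreatment levels

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Normal equations: (X'X) beta = X'y.
p = len(X[0])
XtX = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
beta = solve(XtX, Xty)
pred = [sum(b * xi for b, xi in zip(beta, row)) for row in X]
```

    With an intercept in the model, the fitted values always explain at least as much variation as the mean alone; the study's 40-dB prediction interval then quantifies the residual spread around such fitted values.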

  7. Assessment of deforestation using regression; Hodnotenie odlesnenia s vyuzitim regresie

    Energy Technology Data Exchange (ETDEWEB)

    Juristova, J. [Univerzita Komenskeho, Prirodovedecka fakulta, Katedra kartografie, geoinformatiky a DPZ, 84215 Bratislava (Slovakia)

    2013-04-16

    This work is devoted to the evaluation of deforestation using regression methods in the Idrisi Taiga software. Deforestation is evaluated by the method of logistic regression. The dependent variable has discrete values '0' and '1', indicating whether deforestation occurred or not. The independent variables have continuous values, expressing distances from the edge of the deforested forest areas, from urban areas, from the river, and from the road network. The results were also used to predict the probability of deforestation in subsequent periods. The result is a map showing the predicted probability of deforestation for the periods 1990/2000 and 2000/2006 in accordance with predetermined coefficients (values of the independent variables). (authors)
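
    The logistic regression setup described above, a binary cleared/not-cleared outcome against a continuous distance covariate, can be sketched with a hand-rolled maximum-likelihood fit. The data below are invented for illustration (deforestation made likelier near roads), not drawn from the study.

```python
import math

# Hypothetical training data: distance to the road network (km) and whether
# the forest cell was cleared (1) or not (0).
dist = [0.2, 0.5, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0]
cleared = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit P(cleared) = sigmoid(b0 + b1 * dist) by gradient ascent on the
# log-likelihood.
b0, b1, lr = 0.0, 0.0, 0.05
for _ in range(10000):
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(dist, cleared))
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(dist, cleared))
    b0 += lr * g0
    b1 += lr * g1

p_near = sigmoid(b0 + b1 * 0.5)   # clearing probability near a road
p_far = sigmoid(b0 + b1 * 5.0)    # clearing probability far from a road
```

    Mapping these fitted probabilities over every cell of a distance raster yields exactly the kind of probability-of-deforestation map the abstract describes.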

  8. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

  9. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study considers indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often have some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and root mean square error (RMSE).

  10. Remote sensing and GIS-based landslide hazard analysis and cross-validation using multivariate logistic regression model on three test areas in Malaysia

    Science.gov (United States)

    Pradhan, Biswajeet

    2010-05-01

    This paper presents the results of the cross-validation of a multivariate logistic regression model using remote sensing data and GIS for landslide hazard analysis on the Penang, Cameron, and Selangor areas in Malaysia. Landslide locations in the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. SPOT 5 and Landsat TM satellite imagery were used to map landcover and vegetation index, respectively. Maps of topography, soil type, lineaments and land cover were constructed from the spatial datasets. Ten factors which influence landslide occurrence, i.e., slope, aspect, curvature, distance from drainage, lithology, distance from lineaments, soil type, landcover, rainfall precipitation, and normalized difference vegetation index (NDVI), were extracted from the spatial database and the logistic regression coefficient of each factor was computed. Then the landslide hazard was analysed using the multivariate logistic regression coefficients derived not only from the data for the respective area but also using the logistic regression coefficients calculated from each of the other two areas (nine hazard maps in all) as a cross-validation of the model. For verification of the model, the results of the analyses were then compared with the field-verified landslide locations. Among the three cases of the application of logistic regression coefficients in the same study area, the case of Selangor based on the Selangor logistic regression coefficients showed the highest accuracy (94%), whereas Penang based on the Penang coefficients showed the lowest accuracy (86%). Similarly, among the six cases from the cross-application of logistic regression coefficients in the other two areas, the case of Selangor based on the logistic regression coefficients of Cameron showed the highest prediction accuracy (90%), whereas the case of Penang based on the Selangor logistic regression coefficients showed the lowest accuracy (79%). Qualitatively, the cross

  11. Using the classical linear regression model in analysis of the dependences of conveyor belt life

    Directory of Open Access Journals (Sweden)

    Miriam Andrejiová

    2013-12-01

    The paper deals with the classical linear regression model of the dependence of conveyor belt life on some selected parameters: thickness of paint layer, width and length of the belt, conveyor speed and quantity of transported material. The first part of the article is about regression model design, point and interval estimation of parameters, verification of statistical significance of the model, and about the parameters of the proposed regression model. The second part of the article deals with identification of influential and extreme values that can have an impact on estimation of regression model parameters. The third part focuses on assumptions of the classical regression model, i.e. on verification of independence assumptions, normality and homoscedasticity of residuals.
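
    The point and interval estimation step mentioned above can be illustrated for a single predictor: fit the slope by least squares and form a t-based 95% confidence interval. The belt-life data below are invented for the sketch.

```python
import math
from statistics import mean

# Hypothetical data: paint-layer thickness (mm) vs conveyor belt life (months).
x = [1.0, 1.2, 1.5, 1.8, 2.0, 2.3, 2.5, 2.8, 3.0, 3.3]
y = [14, 16, 19, 22, 23, 27, 28, 32, 33, 36]

n = len(x)
mx, my = mean(x), mean(y)
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

# Point estimates.
slope = sxy / sxx
intercept = my - slope * mx

# Residual variance and the slope's standard error.
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in resid) / (n - 2)
se_slope = math.sqrt(s2 / sxx)

# 95% confidence interval for the slope (two-sided t quantile, n-2 = 8 df).
t_crit = 2.306
ci = (slope - t_crit * se_slope, slope + t_crit * se_slope)
```

    The residuals computed here are also the raw material for the paper's third part: plotting them against fitted values checks homoscedasticity, and a normality test on them checks the remaining classical assumptions.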

  12. Analyzing hospitalization data: potential limitations of Poisson regression.

    Science.gov (United States)

    Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R

    2015-08-01

    Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
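
    The two Poisson violations named above, excess zeros and overdispersion, are easy to check before model selection. The sketch below uses invented hospital-day counts (not the study's data) and compares the observed zero fraction and dispersion against what a Poisson model with the same mean would imply.

```python
import math
from statistics import mean, pvariance

# Hypothetical hospital-day counts: many patients never admitted (excess
# zeros) and a few with long stays (overdispersion).
days = [0] * 18 + [1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 15, 20]

m = mean(days)
v = pvariance(days)
dispersion = v / m          # ~1 under Poisson; >1 signals overdispersion

# Expected share of zeros under a Poisson model with the same mean.
p_zero_poisson = math.exp(-m)
p_zero_observed = days.count(0) / len(days)
```

    When the observed zero share far exceeds exp(-mean) and the dispersion ratio is well above 1, as here, the abstract's conclusion applies: negative binomial or zero-inflated negative binomial models are the appropriate extensions.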

  13. Quantitative Trait Loci Mapping Problem: An Extinction-Based Multi-Objective Evolutionary Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Nicholas S. Flann

    2013-09-01

    The Quantitative Trait Loci (QTL) mapping problem aims to identify regions in the genome that are linked to phenotypic features of the developed organism that vary in degree. It is a principal step in determining targets for further genetic analysis and is key in decoding the role of specific genes that control quantitative traits within species. Applications include identifying genetic causes of disease, optimization of cross-breeding for desired traits and understanding trait diversity in populations. In this paper a new multi-objective evolutionary algorithm (MOEA) method is introduced and is shown to increase the accuracy of QTL mapping identification for both independent and epistatic loci interactions. The MOEA method optimizes over the space of possible partial least squares (PLS) regression QTL models and considers the conflicting objectives of model simplicity versus model accuracy. By optimizing for minimal model complexity, MOEA has the advantage of solving the over-fitting problem of conventional PLS models. The effectiveness of the method is confirmed by comparing the new method with Bayesian Interval Mapping approaches over a series of test cases where the optimal solutions are known. This approach can be applied to many problems that arise in analysis of genomic data sets where the number of features far exceeds the number of observations and where features can be highly correlated.
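
    The multi-objective trade-off at the heart of such a MOEA, model simplicity versus model accuracy, reduces to extracting the Pareto front of candidate models. A minimal sketch, with made-up (loci-count, error) pairs standing in for evaluated PLS models:

```python
# Candidate QTL models as (complexity, error) pairs: number of loci in the
# model versus cross-validated prediction error (values are invented).
models = [(1, 0.90), (2, 0.55), (3, 0.40), (3, 0.62),
          (4, 0.38), (5, 0.37), (6, 0.45), (7, 0.36)]

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and differs."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    """Keep every model no other model dominates."""
    return sorted(p for p in points
                  if not any(dominates(q, p) for q in points))

front = pareto_front(models)
```

    An evolutionary algorithm then breeds new candidates and keeps updating this front; reading the front from left to right shows how much accuracy each extra locus buys, which is how minimal-complexity models counter over-fitting.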

  14. Econometric analysis of realised covariation: high frequency covariance, regression and correlation in financial economics

    OpenAIRE

    Ole E. Barndorff-Nielsen; Neil Shephard

    2002-01-01

    This paper analyses multivariate high frequency financial data using realised covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis and covariance. It will be based on a fixed interval of time (e.g. a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions and covariances change through time. In particular w...

  15. Multifractal distribution of spike intervals for two oscillators coupled by unreliable pulses

    International Nuclear Information System (INIS)

    Kestler, Johannes; Kinzel, Wolfgang

    2006-01-01

    Two neurons coupled by unreliable synapses are modelled by leaky integrate-and-fire neurons and stochastic on-off synapses. The dynamics is mapped to an iterated function system. Numerical calculations yield a multifractal distribution of interspike intervals. The covering, information and correlation dimensions are calculated as a function of synaptic strength and transmission probability. (letter to the editor)

  16. SPE dose prediction using locally weighted regression

    International Nuclear Information System (INIS)

    Hines, J. W.; Townsend, L. W.; Nichols, T. F.

    2005-01-01

    When astronauts are outside earth's protective magnetosphere, they are subject to large radiation doses resulting from solar particle events (SPEs). The total dose received from a major SPE in deep space could cause severe radiation poisoning. The dose is usually received over a 20-40 h time interval but the event's effects may be mitigated with an early warning system. This paper presents a method to predict the total dose early in the event. It uses a locally weighted regression model, which is easier to train and provides predictions as accurate as neural network models previously used. (authors)
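
    Locally weighted regression of the kind used here fits a small weighted linear model around each query point instead of one global curve. A minimal sketch with a Gaussian kernel, on invented hour/dose numbers shaped like a rising-then-flattening SPE dose profile (not the paper's data):

```python
import math

def loess_point(x0, xs, ys, bandwidth=1.0):
    """Predict at x0 with a locally weighted linear fit (Gaussian kernel)."""
    w = [math.exp(-((xi - x0) / bandwidth) ** 2) for xi in xs]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return my + slope * (x0 - mx)

# Hypothetical early-event observations (hours after onset, cumulative cGy):
# the curve rises steeply and then flattens, which a single global line
# would fit poorly.
hours = [1, 2, 3, 4, 5, 6, 8, 10, 12, 16, 20]
dose = [2, 6, 12, 18, 23, 27, 32, 35, 37, 39, 40]

pred_at_7 = loess_point(7, hours, dose, bandwidth=2.0)
```

    Because each prediction only solves a tiny weighted least-squares problem, "training" is essentially just storing the observations, which is the ease-of-training advantage the abstract claims over neural network models.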

  17. SPE dose prediction using locally weighted regression

    International Nuclear Information System (INIS)

    Hines, J. W.; Townsend, L. W.; Nichols, T. F.

    2005-01-01

    When astronauts are outside Earth's protective magnetosphere, they are subject to large radiation doses resulting from solar particle events. The total dose received from a major solar particle event in deep space could cause severe radiation poisoning. The dose is usually received over a 20-40 h time interval but the event's effects may be reduced with an early warning system. This paper presents a method to predict the total dose early in the event. It uses a locally weighted regression model, which is easier to train, and provides predictions as accurate as the neural network models that were used previously. (authors)

  18. Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression

    Science.gov (United States)

    Yang, Aiyuan; Yan, Chunxia; Zhu, Feng; Zhao, Zhongmeng; Cao, Zhi

    2013-01-01

    Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariant association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents are conducted in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. A swarm algorithm improves the accuracy and the efficiency by speeding up the convergence and preventing it from dropping into local optimums. We apply our approach on a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, being able to identify more preset causal sites, and performing at faster speeds. PMID:23984382

  19. Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression

    Directory of Open Access Journals (Sweden)

    Xuanping Zhang

    2013-01-01

    Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariant association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents are conducted in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. A swarm algorithm improves the accuracy and the efficiency by speeding up the convergence and preventing it from dropping into local optimums. We apply our approach on a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, being able to identify more preset causal sites, and performing at faster speeds.

  20. A Simple Linear Regression Method for Quantitative Trait Loci Linkage Analysis With Censored Observations

    OpenAIRE

    Anderson, Carl A.; McRae, Allan F.; Visscher, Peter M.

    2006-01-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using...

  1. A baseline-free procedure for transformation models under interval censorship.

    Science.gov (United States)

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures so far available involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov Chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean-zero martingale is provided.

  2. Mapping Hurricane Rita inland storm tide

    Science.gov (United States)

    Berenbrock, Charles; Mason, Jr., Robert R.; Blanchard, Stephen F.; Simonovic, Slobodan P.

    2009-01-01

    Flood-inundation data are most useful for decision makers when presented in the context of maps of affected communities and (or) areas. But because the data are scarce and rarely cover the full extent of the flooding, interpolation and extrapolation of the information are needed. Many geographic information systems (GIS) provide various interpolation tools, but these tools often ignore the effects of the topographic and hydraulic features that influence flooding. A barrier mapping method was developed to improve maps of storm tide produced by Hurricane Rita. Maps were developed for the maximum storm tide and at 3-hour intervals from midnight (0000 hour) through noon (1200 hour) on September 24, 2005. The improved maps depict storm-tide elevations and the extent of flooding. The extent of storm-tide inundation from the improved maximum storm-tide map was compared to the extent of flood-inundation from a map prepared by the Federal Emergency Management Agency (FEMA). The boundaries from these two maps generally compared quite well, especially along the Calcasieu River. Also, a cross-section profile that parallels the Louisiana coast was developed from the maximum storm-tide map and included FEMA high-water marks.

  3. QT interval prolongation in users of selective serotonin reuptake inhibitors in an elderly surgical population

    DEFF Research Database (Denmark)

    van Haelst, Ingrid M M; van Klei, Wilton A; Doodeman, Hieronymus J

    2014-01-01

    OBJECTIVE: To investigate the association between the use of a selective serotonin reuptake inhibitor (SSRI) and the occurrence of QT interval prolongation in an elderly surgical population. METHOD: A cross-sectional study was conducted among patients (> 60 years) scheduled for outpatient...... preanesthesia evaluation in the period 2007 until 2012. The index group included elderly users of an SSRI. The reference group of nonusers of antidepressants was matched to the index group on sex and year of scheduled surgery (ratio, 1:1). The primary outcome was the occurrence of QT interval prolongation shown...... on electrocardiogram. The QT interval was corrected for heart rate (QTc interval). The secondary outcome was the duration of the QTc interval. The outcomes were adjusted for confounding by using regression techniques. RESULTS: The index and reference groups included 397 users of an SSRI and 397 nonusers, respectively...

  4. Continuous age- and sex-adjusted reference intervals of urinary markers for cerebral creatine deficiency syndromes: a novel approach to the definition of reference intervals.

    Science.gov (United States)

    Mørkrid, Lars; Rowe, Alexander D; Elgstoen, Katja B P; Olesen, Jess H; Ruijter, George; Hall, Patricia L; Tortorelli, Silvia; Schulze, Andreas; Kyriakopoulou, Lianna; Wamelink, Mirjam M C; van de Kamp, Jiddeke M; Salomons, Gajja S; Rinaldo, Piero

    2015-05-01

    Urinary concentrations of creatine and guanidinoacetic acid divided by creatinine are informative markers for cerebral creatine deficiency syndromes (CDSs). The renal excretion of these substances varies substantially with age and sex, challenging the sensitivity and specificity of postanalytical interpretation. Results from 155 patients with CDS and 12 507 reference individuals were contributed by 5 diagnostic laboratories. They were binned into 104 adjacent age intervals and renormalized with Box-Cox transforms (Ξ). Estimates for central tendency (μ) and dispersion (σ) of Ξ were obtained for each bin. Polynomial regression analysis was used to establish the age dependence of both μ[log(age)] and σ[log(age)]. The regression residuals were then calculated as z-scores = {Ξ - μ[log(age)]}/σ[log(age)]. The process was iterated until all z-scores outside Tukey fences ±3.372 were identified and removed. Continuous percentile charts were then calculated and plotted by retransformation. Statistically significant and biologically relevant subgroups of z-scores were identified. Significantly higher marker values were seen in females than males, necessitating separate reference intervals in both adolescents and adults. Comparison between our reconstructed reference percentiles and current standard age-matched reference intervals highlights an underlying risk of false-positive and false-negative events at certain ages. Disease markers depending strongly on covariates such as age and sex require large numbers of reference individuals to establish peripheral percentiles with sufficient precision. This is feasible only through collaborative data sharing and the use of appropriate statistical methods. Broad application of this approach can be implemented through freely available Web-based software. © 2015 American Association for Clinical Chemistry.
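
    The z-score machinery described above, transform the marker with Box-Cox, then standardize against the reference distribution, can be sketched for a single age bin. The reference values and the lambda = 0 (log) transform below are invented for illustration; the actual study fits lambda, mu, and sigma per bin and smooths them over log(age) by polynomial regression.

```python
import math
from statistics import mean, stdev

def boxcox(x, lam):
    """Box-Cox power transform (lam = 0 gives the log transform)."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

# Hypothetical right-skewed marker values (e.g. creatine/creatinine ratio)
# from reference individuals in one age/sex bin.
ref = [0.12, 0.15, 0.18, 0.20, 0.22, 0.25, 0.28, 0.33, 0.40, 0.55]
lam = 0.0  # assumed transform parameter for this sketch

t = [boxcox(v, lam) for v in ref]
mu, sigma = mean(t), stdev(t)

def zscore(value):
    """Standardized residual of a patient value against the reference bin."""
    return (boxcox(value, lam) - mu) / sigma

z_typical = zscore(0.22)
z_high = zscore(1.5)   # a clearly elevated patient value
```

    Z-scores beyond the Tukey fences (±3.372 in the study) flag outliers during reference-interval construction; the same standardization, with age- and sex-dependent mu and sigma, is what makes a single cutoff interpretable across all ages.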

  5. Second-generation speed limit map updating applications

    DEFF Research Database (Denmark)

    Tradisauskas, Nerius; Agerholm, Niels; Juhl, Jens

    2011-01-01

    Intelligent Speed Adaptation is an Intelligent Transport System developed to significantly improve road safety in helping car drivers maintain appropriate driving behaviour. The system works in connection with the speed limits on the road network. It is thus essential to keep the speed limit map...... used in the Intelligent Speed Adaptation scheme updated. The traditional method of updating speed limit maps on the basis of long time interval observations needed to be replaced by a more efficient speed limit updating tool. In a Danish Intelligent Speed Adaptation trial a web-based tool was therefore...... for map updating should preferably be made on the basis of a commercial map provider, such as Google Maps and that the real challenge is to oblige road authorities to carry out updates....

  6. An extended anchored linkage map and virtual mapping for the american mink genome based on homology to human and dog

    DEFF Research Database (Denmark)

    Anistoroaei, Razvan Marian; Ansari, S.; Farid, A.

    2009-01-01

    hybridization (FISH) and/or by means of human/dog/mink comparative homology. The average interval between markers is 8.5 cM and the linkage groups collectively span 1340 cM. In addition, 217 and 275 mink microsatellites have been placed on human and dog genomes, respectively. In conjunction with the existing...... comparative human/dog/mink data, these assignments represent useful virtual maps for the American mink genome. Comparison of the current human/dog assembled sequential map with the existing Zoo-FISH-based human/dog/mink maps helped to refine the human/dog/mink comparative map. Furthermore, comparison...... of the human and dog genome assemblies revealed a number of large synteny blocks, some of which are corroborated by data from the mink linkage map....

  7. Two SPSS programs for interpreting multiple regression results.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J; Chico, Eliseo

    2010-02-01

    When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.
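
    The bootstrap confidence intervals computed by the second program can be sketched without SPSS: resample cases with replacement, refit the statistic, and read off empirical percentiles. The predictor/criterion pairs below are invented, and the statistic is a simple regression slope rather than the programs' relative-importance indices.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical predictor/criterion pairs.
data = [(2, 5), (3, 7), (5, 9), (6, 12), (7, 13),
        (8, 15), (9, 16), (11, 20), (12, 21), (14, 26)]

def slope(pairs):
    mx = mean(x for x, _ in pairs)
    my = mean(y for _, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    return sxy / sxx

# Percentile bootstrap: resample cases with replacement, refit, then take
# the empirical 2.5th and 97.5th percentiles of the statistic.
boot = sorted(slope([random.choice(data) for _ in data]) for _ in range(2000))
ci_low, ci_high = boot[49], boot[1949]
```

    Case resampling makes no normality assumption about the statistic, which is why it suits relative-importance measures whose sampling distributions are awkward to derive analytically.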

  8. Modelling and mapping the topsoil organic carbon content for Tanzania

    Science.gov (United States)

    Kempen, Bas; Kaaya, Abel; Ngonyani Mhaiki, Consolatha; Kiluvia, Shani; Ruiperez-Gonzalez, Maria; Batjes, Niels; Dalsgaard, Soren

    2014-05-01

    Soil organic carbon (SOC), held in soil organic matter, is a key indicator of soil health and plays an important role in the global carbon cycle. The soil can act as a net source or sink of carbon depending on land use and management. Deforestation and forest degradation lead to the release of vast amounts of carbon from the soil in the form of greenhouse gasses, especially in tropical countries. Tanzania has a high deforestation rate: it is estimated that the country loses 1.1% of its total forested area annually. During 2010-2013 Tanzania was a pilot country under the UN-REDD programme. This programme has supported Tanzania in its initial efforts towards reducing greenhouse gas emission from forest degradation and deforestation and towards preserving soil carbon stocks. Formulation and implementation of the national REDD strategy requires detailed information on the five carbon pools, among them the SOC pool. The spatial distribution of SOC contents and stocks was not available for Tanzania. The initial aim of this research was therefore to develop high-resolution maps of the SOC content for the country. The mapping exercise was carried out in a collaborative effort with four Tanzanian institutes and data from the Africa Soil Information Service initiative (AfSIS). The mapping exercise was provided with over 3200 field observations on SOC from four sources; this is the most comprehensive soil dataset collected in Tanzania so far. The main source of soil samples was the National Forest Monitoring and Assessment (NAFORMA). The carbon maps were generated by means of digital soil mapping using regression-kriging. Maps at 250 m spatial resolution were developed for four depth layers: 0-10 cm, 10-20 cm, 20-30 cm, and 0-30 cm. A total of 37 environmental GIS data layers were prepared for use as covariates in the regression model. These included vegetation indices, terrain parameters, surface temperature, spectral reflectances, a land cover map and a small

  9. Sodium Velocity Maps on Mercury

    Science.gov (United States)

    Potter, A. E.; Killen, R. M.

    2011-01-01

    The objective of the current work was to measure two-dimensional maps of sodium velocities on the Mercury surface and examine the maps for evidence of sources or sinks of sodium on the surface. The McMath-Pierce Solar Telescope and the Stellar Spectrograph were used to measure Mercury spectra that were sampled at 7 milliAngstrom intervals. Observations were made each day during the period October 5-9, 2010. The dawn terminator was in view during that time. The velocity shift of the centroid of the Mercury emission line was measured relative to the solar sodium Fraunhofer line corrected for radial velocity of the Earth. The difference between the observed and calculated velocity shift was taken to be the velocity vector of the sodium relative to Earth. For each position of the spectrograph slit, a line of velocities across the planet was measured. Then, the spectrograph slit was stepped over the surface of Mercury at 1 arc second intervals. The position of Mercury was stabilized by an adaptive optics system. The collection of lines was assembled into images of surface reflection, sodium emission intensities, and Earthward velocities over the surface of Mercury. The velocity map shows patches of higher velocity in the southern hemisphere, suggesting the existence of sodium sources there. The peak earthward velocity occurs in the equatorial region, and extends to the terminator. Since this was a dawn terminator, this might be an indication of dawn evaporation of sodium. Leblanc et al. (2008) have published a velocity map that is similar.
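    The conversion from a measured line shift to an Earthward velocity follows the classical non-relativistic Doppler relation v = c·Δλ/λ. A minimal sketch, assuming the sodium D2 rest wavelength (the record does not state which D line was used):

```python
C_KM_S = 2.99792458e5      # speed of light, km/s
NA_D2_ANGSTROM = 5889.95   # sodium D2 rest wavelength (assumed line)

def doppler_velocity(shift_angstrom, rest_wavelength=NA_D2_ANGSTROM):
    """Line-of-sight velocity (km/s) from a Doppler wavelength shift,
    v = c * dlambda / lambda (non-relativistic approximation)."""
    return C_KM_S * shift_angstrom / rest_wavelength

# One 7-milliAngstrom sampling step corresponds to roughly 0.36 km/s
step_velocity = doppler_velocity(0.007)
```

This gives a sense of the velocity resolution implied by the 7 mÅ spectral sampling.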

  10. Identifying changes in dissolved organic matter content and characteristics by fluorescence spectroscopy coupled with self-organizing map and classification and regression tree analysis during wastewater treatment.

    Science.gov (United States)

    Yu, Huibin; Song, Yonghui; Liu, Ruixia; Pan, Hongwei; Xiang, Liancheng; Qian, Feng

    2014-10-01

    The stabilization of latent tracers of dissolved organic matter (DOM) in wastewater was analyzed by three-dimensional excitation-emission matrix (EEM) fluorescence spectroscopy coupled with self-organizing map and classification and regression tree (CART) analysis to assess wastewater treatment performance. DOM in water samples collected from the primary sedimentation, anaerobic, anoxic, oxic and secondary sedimentation tanks of a large-scale wastewater treatment plant contained four fluorescence components, extracted by the self-organizing map: tryptophan-like (C1), tyrosine-like (C2), microbial humic-like (C3) and fulvic-like (C4) materials. These components showed good positive linear correlations with the dissolved organic carbon of DOM. C1 and C2 were the representative components in the wastewater, and they were removed to a greater extent than C3 and C4 in the treatment process. C2 was the latent parameter determined by CART to differentiate water samples of the oxic and secondary sedimentation tanks from the other treatment units, indirectly indicating that most of the tyrosine-like material was degraded by anaerobic microorganisms. C1 was an accurate parameter to comprehensively separate the samples of the five treatment units from each other, indirectly indicating that tryptophan-like material was decomposed by both anaerobic and aerobic bacteria. EEM fluorescence spectroscopy in combination with self-organizing map and CART analysis can be a nondestructive, effective method for characterizing the structural components of DOM fractions and monitoring organic matter removal in the wastewater treatment process. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

  12. Energy compensation after sprint- and high-intensity interval training.

    Science.gov (United States)

    Schubert, Matthew M; Palumbo, Elyse; Seay, Rebekah F; Spain, Katie K; Clarke, Holly E

    2017-01-01

    Many individuals lose less weight than expected in response to exercise interventions when considering the increased energy expenditure of exercise (ExEE). This is due to energy compensation in response to ExEE, which may include increases in energy intake (EI) and decreases in non-exercise physical activity (NEPA). We examined the degree of energy compensation in healthy young men and women in response to interval training. Data were examined from a prior study in which 24 participants (mean age, BMI, & VO2max = 28 yrs, 27.7 kg·m-2, and 32 mL·kg-1·min-1) completed either 4 weeks of sprint-interval training or high-intensity interval training. Energy compensation was calculated from changes in body composition (air displacement plethysmography), and exercise energy expenditure was calculated from mean heart rate based on the heart rate-VO2 relationship. Differences between high (≥ 100%) and low (< 100%) compensators were examined. Participants with high levels of energy compensation gained fat mass, lost fat-free mass, and had lower change scores for VO2max and NEPA. Linear regression results indicated that lower levels of energy compensation were associated with increases in ΔVO2max in response to interval training. In agreement with prior work, increases in ΔVO2max and ΔNEPA were associated with lower energy compensation. Future studies should focus on identifying if a dose-response relationship for energy compensation exists in response to interval training, and what underlying mechanisms and participant traits contribute to the large variation between individuals.
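    The degree of energy compensation can be computed from the measured change in body energy stores and the exercise energy expenditure. A minimal sketch of one common formulation (this exact formula is an assumption, not quoted from the paper):

```python
def energy_compensation_pct(delta_body_energy_kcal, exercise_ee_kcal):
    """Percent energy compensation. 0% means body energy stores fell by the
    full exercise energy expenditure (no compensation); 100% means no net
    change in stores despite the ExEE (full compensation).
    Assumed formulation: comp% = (delta_ES + ExEE) / ExEE * 100."""
    return (delta_body_energy_kcal + exercise_ee_kcal) / exercise_ee_kcal * 100.0
```

Under this convention, the paper's "high (≥ 100%)" compensators are those whose body energy stores did not fall at all, or even rose, over the intervention.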

  13. Nonparametric functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yang, Jie; Wu, Rongling; Casella, George

    2009-03-01

    Functional mapping is a useful tool for mapping quantitative trait loci (QTL) that control dynamic traits. It incorporates mathematical aspects of biological processes into the mixture model-based likelihood setting for QTL mapping, thus increasing the power of QTL detection and the precision of parameter estimation. However, in many situations there is no obvious functional form and, in such cases, this strategy will not be optimal. Here we propose to use nonparametric function estimation, typically implemented with B-splines, to estimate the underlying functional form of phenotypic trajectories, and then construct a nonparametric test to find evidence of existing QTL. Using the representation of a nonparametric regression as a mixed model, the final test statistic is a likelihood ratio test. We consider two types of genetic maps: dense maps and general maps, and the power of nonparametric functional mapping is investigated through simulation studies and demonstrated by examples.
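    The idea of replacing a parametric growth curve with a flexible spline fit of the phenotypic trajectory can be sketched as follows; for brevity this uses a truncated-power cubic spline basis rather than the B-splines named in the abstract, and the data are synthetic:

```python
import numpy as np

def spline_design(x, knots):
    """Cubic truncated-power spline basis: 1, x, x^2, x^3, plus one
    (x - k)_+^3 term per interior knot (a simple stand-in for B-splines)."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def fit_trajectory(x, y, knots):
    """Least-squares spline fit of a phenotypic trajectory."""
    X = spline_design(x, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta
```

In the paper's setting, the fit under the null (no QTL) and under the alternative (genotype-specific trajectories) would be compared through a likelihood ratio statistic; the fitting step itself is the part sketched here.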

  14. Digital soil mapping using multiple logistic regression on terrain parameters in southern Brazil Mapeamento digital de solos utilizando regressões logísticas múltiplas e parâmetros do terreno no sul do Brasil

    Directory of Open Access Journals (Sweden)

    Elvio Giasson

    2006-06-01

    Full Text Available Soil surveys are necessary sources of information for land use planning, but they are not always available. This study proposes the use of multiple logistic regressions for predicting the occurrence of soil types based on reference areas. From a digitalized soil map and terrain parameters derived from the digital elevation model in an ArcView environment, several sets of multiple logistic regressions were defined using the statistical software Minitab, establishing relationships between explanatory terrain variables and soil types, using either the original legend or a simplified legend, and with or without stratification of the study area by drainage classes. Terrain parameters, such as elevation, distance to stream, flow accumulation, and topographic wetness index, were the variables that best explained soil distribution. Stratification by drainage classes did not have a significant effect. Simplification of the original legend increased the accuracy of the method in predicting soil distribution.

  15. A stepwise regression tree for nonlinear approximation: applications to estimating subpixel land cover

    Science.gov (United States)

    Huang, C.; Townshend, J.R.G.

    2003-01-01

    A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm represents an improvement over SLR in that it can approximate nonlinear relationships, and over BRT in that it gives more realistic predictions. The applicability of this method to estimating subpixel forest cover was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed better than or at least as well as BRT at all 10 equal forest proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.
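    The core step that both BRT and SRT build on is an exhaustive search, at each node, for the split threshold minimizing the summed squared error of the two child-node fits. A minimal single-node sketch with constant child models and hypothetical data:

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on x that minimizes the summed squared error of
    the two child-node means (the core step of regression tree growing)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_sse, best_thresh = np.inf, None
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue  # cannot split between identical x values
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_thresh = sse, (xs[i - 1] + xs[i]) / 2.0
    return best_thresh
```

SRT differs from BRT in fitting stepwise linear regressions, rather than constants, in the leaves; the split search itself has the same shape.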

  16. Determinants of LSIL Regression in Women from a Colombian Cohort

    International Nuclear Information System (INIS)

    Molano, Monica; Gonzalez, Mauricio; Gamboa, Oscar; Ortiz, Natasha; Luna, Joaquin; Hernandez, Gustavo; Posso, Hector; Murillo, Raul; Munoz, Nubia

    2010-01-01

    Objective: To analyze the role of Human Papillomavirus (HPV) and other risk factors in the regression of cervical lesions in women from the Bogota Cohort. Methods: 200 HPV positive women with abnormal cytology were included for regression analysis. The time of lesion regression was modeled using methods for interval censored survival time data. Median duration of total follow-up was 9 years. Results: 80 (40%) women were diagnosed with Atypical Squamous Cells of Undetermined Significance (ASCUS) or Atypical Glandular Cells of Undetermined Significance (AGUS) while 120 (60%) were diagnosed with Low Grade Squamous Intra-epithelial Lesions (LSIL). Globally, 40% of the lesions were still present at the first year of follow-up, while 1.5% were still present at the 5-year check-up. The multivariate model showed similar regression rates for lesions in women with ASCUS/AGUS and women with LSIL (HR=0.82, 95% CI 0.59-1.12). Women infected with HR HPV types and those with mixed infections had lower regression rates for lesions than did women infected with LR types (HR=0.526, 95% CI 0.33-0.84, for HR types and HR=0.378, 95% CI 0.20-0.69, for mixed infections). Furthermore, women over 30 years had a higher lesion regression rate than did women under 30 years (HR=1.53, 95% CI 1.03-2.27). The study showed that the median time for lesion regression was 9 months while the median time for HPV clearance was 12 months. Conclusions: In the studied population, the type of infection and the age of the women are critical factors for the regression of cervical lesions.

  17. Segmentation and profiling consumers in a multi-channel environment using a combination of self-organizing maps (SOM method, and logistic regression

    Directory of Open Access Journals (Sweden)

    Seyed Ali Akbar Afjeh

    2014-05-01

    Full Text Available Market segmentation plays an essential role in understanding the behavior of people's interests in purchasing various products and services through various channels. This paper presents an empirical investigation to shed light on consumers' purchasing attitudes as well as information gathering in a multi-channel environment. The proposed study designed a questionnaire and distributed it among 800 people who were at least 18 years of age and had some experience purchasing goods and services through the internet, catalogs or regular shopping centers. Self-organizing map (SOM) clustering was performed based on consumers' interest in gathering information as well as purchasing products through the internet, catalogs and shopping centers, and determined four segments. There were two types of questions in the proposed study. The first group considered participants' personal characteristics such as age, gender, income, etc. The second group of questions was associated with participants' psychographic characteristics including price consciousness, quality consciousness, time pressure, etc. Using the multinomial logistic regression technique, the study determines consumers' behaviors in each of the four segments.
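    A self-organizing map clusters respondents by pulling each unit's weight vector toward the samples it wins, with a neighborhood influence that shrinks over training. A minimal one-dimensional SOM sketch on synthetic data (not the software used in the study):

```python
import numpy as np

def train_som_1d(data, n_units=4, epochs=50, lr=0.5, seed=0):
    """Minimal 1-D self-organizing map. Each sample updates its best
    matching unit (BMU) and, more weakly, the BMU's neighbors on the
    1-D map; neighborhood width and learning rate decay over epochs."""
    rng = np.random.default_rng(seed)
    w = data[rng.integers(0, len(data), n_units)].astype(float)
    for t in range(epochs):
        sigma = max(n_units / 2.0 * (1 - t / epochs), 0.5)
        rate = lr * (1 - t / epochs) + 0.01
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            w += rate * h[:, None] * (x - w)
    return w
```

After training, each respondent would be assigned to the segment of its nearest unit, and those segment labels fed into the multinomial logistic regression.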

  18. Convex Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for

  19. Multiple regression approach to predict turbine-generator output for Chinshan nuclear power plant

    International Nuclear Information System (INIS)

    Chan, Yea-Kuang; Tsai, Yu-Ching

    2017-01-01

    The objective of this study is to develop a turbine cycle model using the multiple regression approach to estimate the turbine-generator output for the Chinshan Nuclear Power Plant (NPP). The plant operating data were verified using a linear regression model with a corresponding 95% confidence interval. In this study, the key parameters were selected as inputs for the multiple regression based turbine cycle model. The proposed model was used to estimate the turbine-generator output. The effectiveness of the proposed turbine cycle model was demonstrated using plant operating data obtained from Chinshan NPP Unit 2. The results show that this multiple regression based turbine cycle model can be used to accurately estimate the turbine-generator output. In addition, this study provides a simple and straightforward alternative approach for evaluating the thermal performance of nuclear power plants.
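    The data-screening step, fitting a linear regression and checking operating points against a 95% confidence interval for the mean response, can be sketched as follows; this toy version uses a normal critical value as a large-sample approximation, and all data are hypothetical:

```python
import statistics
import numpy as np

def ols_mean_ci(X, y, x_new, alpha=0.05):
    """OLS fit and a (1 - alpha) confidence interval for the mean response
    at x_new. Uses a normal critical value (large-sample approximation
    instead of the exact t quantile)."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    n, p = A.shape
    s2 = resid @ resid / (n - p)              # residual variance estimate
    cov = s2 * np.linalg.inv(A.T @ A)         # covariance of coefficients
    a = np.concatenate([[1.0], np.atleast_1d(x_new)])
    pred = float(a @ beta)
    half = statistics.NormalDist().inv_cdf(1 - alpha / 2) * float(np.sqrt(a @ cov @ a))
    return pred, (pred - half, pred + half)
```

Operating points falling outside the interval would be flagged before the multiple-regression turbine cycle model is fitted.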

  20. Multiple regression approach to predict turbine-generator output for Chinshan nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Yea-Kuang; Tsai, Yu-Ching [Institute of Nuclear Energy Research, Taoyuan City, Taiwan (China). Nuclear Engineering Division

    2017-03-15

    The objective of this study is to develop a turbine cycle model using the multiple regression approach to estimate the turbine-generator output for the Chinshan Nuclear Power Plant (NPP). The plant operating data were verified using a linear regression model with a corresponding 95% confidence interval. In this study, the key parameters were selected as inputs for the multiple regression based turbine cycle model. The proposed model was used to estimate the turbine-generator output. The effectiveness of the proposed turbine cycle model was demonstrated using plant operating data obtained from Chinshan NPP Unit 2. The results show that this multiple regression based turbine cycle model can be used to accurately estimate the turbine-generator output. In addition, this study provides a simple and straightforward alternative approach for evaluating the thermal performance of nuclear power plants.

  1. Mapping lichen color-groups in western Arctic Alaska using seasonal Landsat composites

    Science.gov (United States)

    Nelson, P.; Macander, M. J.; Swingley, C. S.

    2016-12-01

    Mapping lichens at a landscape scale has received increased recent interest due to fears that terricolous lichen mats, primary winter caribou forage, may be decreasing across the arctic and boreal zones. However, previous efforts have produced taxonomically coarse, total lichen cover maps or have covered relatively small spatial extents. Here we attempt to map lichens of differing colors as species proxies across northwestern Alaska, to produce the finest taxonomic- and spatial-grained lichen maps covering the largest spatial extent to date. Lichen community sampling in five western Alaskan National Parks and Preserves from 2007-2012 generated 328 FIA-style 34.7 m radius plots on which species-level macrolichen community structure and abundance were estimated. Species were coded by color, and lichen cover was aggregated by plot as the sum of the cover of each species in a color group. Ten different lichen color groupings were used for modeling to deduce which colors were most detectable. Reflectance signatures of each plot were extracted from a series of Landsat composites (circa 2000-2010) partitioned into two-week intervals from June 1 to Sept. 15. Median reflectance values for each band in each pixel were selected based on filtering criteria to reduce the likelihood of snow cover. Lichen color-group cover was regressed against plot reflectance plus additional abiotic predictors in two different data mining algorithms. Brown and grey lichens had the best models, explaining approximately 40% of lichen cover in those color groups. Both data mining techniques produced similarly good-fitting models. Spatial patterns of lichen color-group cover show distinctly different ecological patterns of these color-group species proxies.

  2. Restricted Interval Valued Neutrosophic Sets and Restricted Interval Valued Neutrosophic Topological Spaces

    Directory of Open Access Journals (Sweden)

    Anjan Mukherjee

    2016-08-01

    Full Text Available In this paper we introduce the concept of restricted interval valued neutrosophic sets (RIVNS in short). Some basic operations and properties of RIVNS are discussed. The concept of restricted interval valued neutrosophic topology is also introduced, together with restricted interval valued neutrosophic finer and coarser topologies. We also define the restricted interval valued neutrosophic interior and closure of a restricted interval valued neutrosophic set. Some theorems and examples are cited. Restricted interval valued neutrosophic subspace topology is also studied.

  3. Examination of Parameters Affecting the House Prices by Multiple Regression Analysis and its Contributions to Earthquake-Based Urban Transformation

    Science.gov (United States)

    Denli, H. H.; Durmus, B.

    2016-12-01

    The purpose of this study is to examine the factors which may affect apartment prices using multiple linear regression analysis models, and to visualize the results with value maps. The study is focused on a county of Istanbul, Turkey. In total, 390 apartments around the Umraniye county are evaluated according to their physical and locational conditions. The identification of factors affecting the price of apartments in the county, which has a population of approximately 600k, is expected to provide a significant contribution to the apartment market. Physical factors are selected as the age, number of rooms, size, number of floors of the building, and the floor on which the apartment is positioned. Positional factors are selected as the distances to the nearest hospital, school, park and police station. In total, ten physical and locational parameters are examined by regression analysis. After the regression analysis was performed, value maps were composed for the parameters age, price, and price per square meter. The most significant of the composed maps is the price-per-square-meter map. Results show that the location of the apartment has the most influence on its price per square meter. A further application was developed from the composed maps by examining the use of the price-per-square-meter map in urban transformation practices. By marking the buildings older than 15 years on the price-per-square-meter map, a new interpretation was made to determine which buildings should be given priority during an urban transformation in the county. This county is very close to the North Anatolian Fault zone and is under the threat of earthquakes. By marking the apartments older than 15 years on the price-per-square-meter map, a list of apartments that are both older and have expensive square-meter prices can be gathered. With the help of this list, priority could be given to the selected higher-valued old apartments to support the economy of the country

  4. Evaluation of using digital gravity field models for zoning map creation

    Science.gov (United States)

    Loginov, Dmitry

    2018-05-01

    At the present time, digital cartographic models of geophysical fields are taking on special significance in geophysical mapping. One important direction for their application is the creation of zoning maps, which take the morphology of the geophysical field into account in the automated choice of contour intervals. The purpose of this work is the comparative evaluation of various digital models in the creation of an integrated gravity field zoning map. For comparison, two models were chosen: the digital model of the gravity field of Russia, created from the analog map at a scale of 1 : 2 500 000, and the open global model of the gravity field of the Earth, WGM2012. As a result of the experimental work, four integrated gravity field zoning maps were obtained using raw and processed data from each gravity field model. The study demonstrates that open data can be used to create integrated zoning maps, provided the noise component of the model is eliminated by processing in specialized software systems. In this case, for the problem of automated contour-interval selection, the open digital models are not inferior to regional gravity field models created for individual countries. This supports the universality and independence of integrated zoning map creation, regardless of the detail of the digital cartographic model of geophysical fields.

  5. CIMP status of interval colon cancers: another piece to the puzzle.

    Science.gov (United States)

    Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma

    2010-05-01

    Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002), and to have MSI (29% vs. 11%, P=0.004). In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1
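    Odds ratios with Wald confidence intervals of the kind reported above can be computed directly from a 2x2 table. A minimal sketch (the counts in the test are hypothetical, not taken from the study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

A multivariable logistic regression, as used in the study, adjusts such ratios for the other covariates rather than computing them from the raw table.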

  6. A Forecasting Approach Combining Self-Organizing Map with Support Vector Regression for Reservoir Inflow during Typhoon Periods

    Directory of Open Access Journals (Sweden)

    Gwo-Fong Lin

    2016-01-01

    Full Text Available This study describes the development of a reservoir inflow forecasting model for typhoon events to improve short lead-time flood forecasting performance. To strengthen the forecasting ability of the original support vector machines (SVMs) model, the self-organizing map (SOM) is adopted to group inputs into different clusters in advance in the proposed SOM-SVM model. Two different input methods are proposed for the SVM-based forecasting method, namely SOM-SVM1 and SOM-SVM2. The methods are applied to an actual reservoir watershed to determine the 1 to 3 h ahead inflow forecasts. For 1, 2, and 3 h ahead forecasts, improvements in mean coefficient of efficiency (MCE) due to the clusters obtained from SOM-SVM1 are 21.5%, 18.5%, and 23.0%, respectively, while the improvements in MCE for SOM-SVM2 are 20.9%, 21.2%, and 35.4%, respectively. Compared with SOM-SVM1, the SOM-SVM2 model further improves the 1, 2, and 3 h ahead forecasts by 0.33%, 2.25%, and 10.08%, respectively. These results show that the performance of the proposed model can provide improved forecasts of hourly inflow, especially in the case of the SOM-SVM2 model. In conclusion, the proposed model, which considers a limited set of more highly related inputs instead of all inputs, can generate better forecasts within the clusters generated by the SOM process. The SOM-SVM2 model is recommended as an alternative to the original support vector regression (SVR) model because of its accuracy and robustness.
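    The MCE criterion is presumably the mean, over typhoon events, of the Nash-Sutcliffe coefficient of efficiency; a minimal sketch of the per-event coefficient follows (this definition is an assumption, not stated in the record):

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    """Nash-Sutcliffe coefficient of efficiency, CE = 1 - SSE/SST.
    CE = 1 for a perfect forecast; CE = 0 for a forecast no better than
    the mean of the observations."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Averaging this value across events would give a mean coefficient of efficiency comparable in spirit to the MCE percentages quoted above.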

  7. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Science.gov (United States)

    Saro, Lee; Woo, Jeon Seong; Kwan-Young, Oh; Moung-Jin, Lee

    2016-02-01

    The aim of this study is to predict landslide susceptibility through spatial analysis, applying a statistical methodology based on GIS. Logistic regression models along with an artificial neural network were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study were identified based on interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database considering forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility maps by comparing them with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility map using the artificial neural network and logistic regression models, and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, 'slope' yielded the highest weight value (1.330), and 'aspect' yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.

  8. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Directory of Open Access Journals (Sweden)

    Saro Lee

    2016-02-01

    Full Text Available The aim of this study is to predict landslide susceptibility through spatial analysis, applying a statistical methodology based on GIS. Logistic regression models along with an artificial neural network were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study were identified based on interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database considering forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility maps by comparing them with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the landslide susceptibility map using the artificial neural network and logistic regression models, and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, ‘slope’ yielded the highest weight value (1.330), and ‘aspect’ yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.
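    The logistic-regression half of the comparison above can be sketched from scratch with plain gradient descent on synthetic data (this is not the GIS tooling used in the study):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Gradient-descent logistic regression: minimize the mean negative
    log-likelihood of labels y in {0, 1} given features X."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-A @ w))       # predicted probabilities
        w -= lr * A.T @ (p - y) / len(y)       # gradient step
    return w

def predict_prob(X, w):
    A = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-A @ w))
```

In the susceptibility-mapping setting, X would hold the per-cell forest, geophysical, soil and topographic factors, and the fitted probabilities would be rendered as the susceptibility map.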

  9. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    Science.gov (United States)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allowing temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95% confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
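    A percentile bootstrap for a load estimate can be sketched as follows; this simplified version resamples paired concentration-discharge observations directly, whereas the paper resamples via a fitted mixed model that preserves temporal correlation:

```python
import numpy as np

def bootstrap_load_ci(conc, discharge, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for a load computed as the sum of paired
    concentration x discharge samples over the record (simplified sketch)."""
    rng = np.random.default_rng(seed)
    n = len(conc)
    loads = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)            # resample pairs with replacement
        loads[b] = np.sum(conc[idx] * discharge[idx])
    lo, hi = np.quantile(loads, [alpha / 2, 1 - alpha / 2])
    return np.sum(conc * discharge), (lo, hi)
```

With skewed concentration data, the resulting interval is typically asymmetric around the point estimate, matching the behaviour reported in the abstract.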

  10. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  11. The relationship between quality of sleep and night shift rotation interval.

    Science.gov (United States)

    Kim, Jae Youn; Chae, Chang Ho; Kim, Young Ouk; Son, Jun Seok; Kim, Ja Hyun; Kim, Chan Woo; Park, Hyoung Ouk; Lee, Jun Ho; Kwon, Soon Il; Kwon, Sun Il

    2015-01-01

    Shift work is closely related with workers' health. In particular, sleep is thought to be affected by shift work. In addition, shift work has been reported to be associated with the type or direction of shift rotation, number of consecutive night shifts, and number of off-duty days. We aimed to analyze the association between the night shift rotation interval and the quality of sleep reported by Korean female shift workers. In total, 2,818 female shift workers from the manufacturing industry who received an employee physical examination at a single university hospital from January to August in 2014 were included. Subjects were classified into three groups (A, B, and C) by their night shift rotation interval. The quality of sleep was measured using the Korean version of the Pittsburgh Sleep Quality Index (PSQI). Descriptive analysis, univariate logistic regression, and multivariate logistic regression were performed. With group A as the reference, the odds ratio (OR) for having a seriously low quality of sleep was 1.456 (95% CI 1.171-1.811) and 2.348 (95% CI 1.852-2.977) for groups B and C, respectively. Thus, group C with the shortest night shift rotation interval was most likely to have a low quality of sleep. After adjustment for age, obesity, smoking status, alcohol consumption, exercise, being allowed to sleep during night shifts, work experience, and shift work experience, groups B and C had ORs of 1.419 (95% CI 1.134-1.777) and 2.238 (95% CI 1.737-2.882), respectively, compared to group A. Our data suggest that a shorter night shift rotation interval does not provide enough recovery time to adjust the circadian rhythm, resulting in a low quality of sleep. Because shift work is influenced by many different factors, future studies should aim to determine the most optimal shift work model and collect accurate, prospective data.
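The odds ratios above come from logistic regression; for a single binary exposure, the univariate OR and its 95% Wald confidence interval can be computed directly from a 2×2 table. The counts below are made up for illustration only:

```python
import math

# Hypothetical 2x2 table: exposure = short night shift rotation interval,
# outcome = seriously low quality of sleep (counts are illustrative)
a, b = 240, 360   # exposed: low quality / acceptable quality
c, d = 150, 520   # reference group: low quality / acceptable quality

or_hat = (a * d) / (b * c)
# Wald standard error of log(OR) from the four cell counts
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_hat) - 1.96 * se_log)
hi = math.exp(math.log(or_hat) + 1.96 * se_log)
print(f"OR={or_hat:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The multivariate ORs in the abstract additionally adjust for covariates (age, obesity, smoking, and so on), which requires fitting the full logistic model rather than a table calculation.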

  12. Trans-ethnic meta-regression of genome-wide association studies accounting for ancestry increases power for discovery and improves fine-mapping resolution.

    Science.gov (United States)

    Mägi, Reedik; Horikoshi, Momoko; Sofer, Tamar; Mahajan, Anubha; Kitajima, Hidetoshi; Franceschini, Nora; McCarthy, Mark I; Morris, Andrew P

    2017-09-15

    Trans-ethnic meta-analysis of genome-wide association studies (GWAS) across diverse populations can increase power to detect complex trait loci when the underlying causal variants are shared between ancestry groups. However, heterogeneity in allelic effects between GWAS at these loci can occur that is correlated with ancestry. Here, a novel approach is presented to detect SNP association and quantify the extent of heterogeneity in allelic effects that is correlated with ancestry. We employ trans-ethnic meta-regression to model allelic effects as a function of axes of genetic variation, derived from a matrix of mean pairwise allele frequency differences between GWAS, and implemented in the MR-MEGA software. Through detailed simulations, we demonstrate increased power to detect association for MR-MEGA over fixed- and random-effects meta-analysis across a range of scenarios of heterogeneity in allelic effects between ethnic groups. We also demonstrate improved fine-mapping resolution, in loci containing a single causal variant, compared to these meta-analysis approaches and PAINTOR, and equivalent performance to MANTRA at reduced computational cost. Application of MR-MEGA to trans-ethnic GWAS of kidney function in 71,461 individuals indicates stronger signals of association than fixed-effects meta-analysis when heterogeneity in allelic effects is correlated with ancestry. Application of MR-MEGA to fine-mapping four type 2 diabetes susceptibility loci in 22,086 cases and 42,539 controls highlights: (i) strong evidence for heterogeneity in allelic effects that is correlated with ancestry only at the index SNP for the association signal at the CDKAL1 locus; and (ii) 99% credible sets with six or fewer variants for five distinct association signals. © The Author 2017. Published by Oxford University Press.

  13. Spectral decomposition of tent maps using symmetry considerations

    International Nuclear Information System (INIS)

    Ordonez, G.E.; Driebe, D.J.

    1996-01-01

    The spectral decomposition of the Frobenius-Perron operator of maps composed of many tents is determined from symmetry considerations. The eigenstates involve Euler as well as Bernoulli polynomials. The authors have introduced new techniques, based on symmetry considerations, enabling the construction of spectral decompositions in a much simpler way than previous construction algorithms. Here we utilize these techniques to construct the spectral decomposition for one-dimensional maps of the unit interval composed of many tents. The construction uses knowledge of the spectral decomposition of the r-adic map, which involves Bernoulli polynomials and their duals. It will be seen that the spectral decomposition of the tent maps involves both Bernoulli polynomials and Euler polynomials, along with the appropriate dual states

  14. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

    Science.gov (United States)

    Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

    2010-01-30

    In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor using two different regression models: Additive Models (AM) and Support Vector Machine (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.

  15. Cryptanalysis of a family of 1D unimodal maps

    Science.gov (United States)

    Md Said, Mohamad Rushdan; Hina, Aliyu Danladi; Banerjee, Santo

    2017-07-01

    In this paper, we propose a topologically conjugate map equivalent to the well-known logistic map. The constructed map is defined on the integer domain [0, 2^n) with a view to being used as a random number generator (RNG) on an integer domain, as is required in classical cryptography. The maps were found to have a one-to-one correspondence between points in their respective defining intervals with n-bit precision. The dynamics of the proposed map are similar to those of the logistic map in terms of the variation of the Lyapunov exponent with the control parameter. This similarity between the curves indicates topological conjugacy between the maps. With a view to application in cryptography as a pseudo-random number generator (PRNG), the complexity of the constructed map as a source of randomness is determined using both the permutation entropy (PE) and the Lempel-Ziv (LZ-76) complexity measures, and the results are compared with numerical simulations.
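The Lyapunov-exponent comparison described above can be reproduced for the ordinary logistic map itself; at the fully chaotic parameter r = 4 the exponent is known to equal ln 2. A minimal numerical sketch (this illustrates the exponent estimate only, not the authors' integer-domain construction):

```python
import math

def lyapunov_logistic(r, x0=0.1, n=100_000, burn=1000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging
    log|f'(x)| = log|r*(1 - 2x)| along a single orbit."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)) + 1e-300)  # guard x = 0.5
        x = r * x * (1 - x)
    return acc / n

lam = lyapunov_logistic(4.0)
print(lam)  # theory: ln 2 ~ 0.6931 at r = 4
```

A topologically conjugate map shares this exponent, which is why the curves of Lyapunov exponent versus control parameter coincide for conjugate maps.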

  16. The impact of change in albumin assay on reference intervals, prevalence of 'hypoalbuminaemia' and albumin prescriptions.

    Science.gov (United States)

    Coley-Grant, Deon; Herbert, Mike; Cornes, Michael P; Barlow, Ian M; Ford, Clare; Gama, Rousseau

    2016-01-01

    We studied the impact on reference intervals, classification of patients with hypoalbuminaemia and albumin infusion prescriptions of changing from a bromocresol green (BCG) to a bromocresol purple (BCP) serum albumin assay. Passing-Bablok regression analysis and a Bland-Altman plot were used to compare the Abbott BCP and Roche BCG methods. Linear regression analysis was used to compare in-house and external laboratory Abbott BCP serum albumin results. Reference intervals for Abbott BCP serum albumin were derived in two different laboratories using pathology data from adult patients in primary care. Prescriptions for 20% albumin infusions were compared one year before and one year after changing the albumin method. The Abbott BCP assay had a negative bias of approximately 6 g/L compared with the Roche BCG method. There was good agreement (y = 1.04x - 1.03; R² = 0.9933) between in-house and external laboratory Abbott BCP results. Reference intervals for the serum albumin Abbott BCP assay were 31-45 g/L, different from those recommended by Pathology Harmony and the manufacturers (35-50 g/L). Following the change in method there was a large increase in the number of patients classified as hypoalbuminaemic using Pathology Harmony reference intervals (32%), but not when retrospectively compared to locally derived reference intervals (16%), compared with the previous year (12%). The method change was associated with a 44.6% increase in albumin prescriptions. This equated to an annual increase in expenditure of £35,234. We suggest that serum albumin reference intervals be method specific to prevent misclassification of albumin status in patients. A change in albumin methodology may have a significant impact on hospital resources. © The Author(s) 2015.

  17. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
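The method of least squares described here reduces, for a single predictor, to two closed-form expressions: the slope is the covariance of x and y divided by the variance of x, and the intercept passes the line through the means. A small self-contained sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: outcome generated as y = 2 + 0.5*x + noise
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, size=200)

# Method of least squares: slope = cov(x, y) / var(x)
slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
intercept = y.mean() - slope * x.mean()

# R^2: proportion of the variance in y explained by the fitted line
resid = y - (intercept + slope * x)
r2 = 1 - resid.var() / y.var()
print(slope, intercept, r2)
```

The fitted slope and intercept recover the generating values (0.5 and 2) up to sampling noise, which is the sense in which least squares "derives the regression line" from data.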

  18. A High-Density Genetic Linkage Map and QTL Fine Mapping for Body Weight in Crucian Carp (Carassius auratus) Using 2b-RAD Sequencing

    Directory of Open Access Journals (Sweden)

    Haiyang Liu

    2017-08-01

    Full Text Available A high-resolution genetic linkage map is essential for a wide range of genetics and genomics studies, such as comparative genomics analysis and QTL fine mapping. Crucian carp (Carassius auratus) is widely distributed in Eurasia and is an important aquaculture fish worldwide. In this study, a high-density genetic linkage map was constructed for crucian carp using 2b-RAD technology. The consensus map contains 8487 SNP markers, assigned to 50 linkage groups (LGs) and spanning 3762.88 cM, with an average marker interval of 0.44 cM and genome coverage of 98.8%. The female map had 4410 SNPs and spanned 3500.42 cM (0.79 cM/marker), while the male map had 4625 SNPs and spanned 3346.33 cM (0.72 cM/marker). The average female-to-male recombination ratio was 2.13:1, and significant male-biased recombination suppression was observed in LG47 and LG49. Comparative genomics analysis revealed a clear 2:1 syntenic relationship between crucian carp LGs and the chromosomes of zebrafish and grass carp, and a 1:1 correspondence, but with extensive chromosomal rearrangement, between crucian carp and common carp, providing evidence that crucian carp has experienced a fourth round of whole genome duplication (4R-WGD). Eight chromosome-wide QTL for body weight at 2 months after hatching were detected on five LGs, explaining 10.1–13.2% of the phenotypic variation. Potential candidate growth-related genes, such as an EGF-like domain gene and TGF-β, were identified within the QTL intervals. This high-density genetic map and the QTL analysis provide a basis for genome evolution studies in cyprinid fishes, genome assembly, and QTL fine mapping for complex traits in crucian carp.

  19. Intermediate and advanced topics in multilevel logistic regression analysis.

    Science.gov (United States)

    Austin, Peter C; Merlo, Juan

    2017-09-10

    Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
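Two of the summary measures named here, the variance partition coefficient and the median odds ratio, are simple functions of the cluster-level variance on the logit scale. The sketch below uses the standard random-intercept logistic formulation, in which the level-1 residual variance is fixed at π²/3; the between-cluster variance value is hypothetical:

```python
import math

def vpc(sigma2_u):
    """Variance partition coefficient for a random-intercept logistic
    model: cluster variance over total latent-scale variance, with the
    level-1 (subject) residual variance fixed at pi^2 / 3."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3)

def median_odds_ratio(sigma2_u):
    """Median odds ratio between two randomly chosen clusters:
    exp(sqrt(2 * sigma2_u) * Phi^{-1}(0.75))."""
    z75 = 0.6744897501960817   # 75th percentile of the standard normal
    return math.exp(math.sqrt(2 * sigma2_u) * z75)

s2 = 0.5   # hypothetical between-hospital variance on the logit scale
print(vpc(s2), median_odds_ratio(s2))
```

With σ²ᵤ = 0.5, about 13% of the latent-scale variation sits between clusters, and the median odds ratio of roughly 2 quantifies the general contextual effect in odds-ratio units.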

  20. White Oak Creek Watershed topographic map and related materials

    International Nuclear Information System (INIS)

    Farrow, N.D.

    1981-04-01

    On March 22, 1978 a contract was let to Accu-Air Surveys, Inc., of Seymour, Indiana, to produce a topographic map of the White Oak Creek Watershed. Working from photography and ground control surveys, Accu-Air produced a map to ORNL's specifications. The map is in four sections (N.W., N.E., S.W., S.E.) at a scale of 1:2400. Contour intervals are 5 ft (1.5 m) with accented delineations every 25 ft (7.6 m). The scribe method was used for the finished map. Planimetric features, roads, major fence lines, drainage features, and tree lines are included. The ORNL grid is the primary coordinate system, which is superimposed on the state plane coordinates

  1. Investigating fluvial pattern and delta-planform geometry based on varying intervals of flood and interflood

    Science.gov (United States)

    Rambo, J. E.; Kim, W.; Miller, K.

    2017-12-01

    Physical modeling of a delta's evolution can show how changing the intervals of flood and interflood alters the delta's fluvial pattern and geometry. Here we present a set of six experimental runs in which sediment and water were discharged at constant rates within each experiment. During the "flood" period, sediment and water were discharged at rates of 0.25 cm3/s and 15 ml/s respectively, and during the "interflood" period, only water was discharged, at 7.5 ml/s. The flood periods were only run for a total of 30 minutes to keep the total volume of sediment constant. Run 0 did not have an interflood period and therefore ran with constant sediment and water discharge for the duration of the experiment. The other five runs had either 5, 10, or 15-min intervals of flood with 5, 10, or 15-min intervals of interflood. The experimental results show that Run 0 had the smallest topset area, owing to the lack of surface reworking that takes place during interflood periods. Run 1 had 15-minute intervals of flood and 15-minute intervals of interflood, and it had the largest topset area. Additionally, the experiments with longer intervals of interflood than flood produced more elongated delta geometries. Wetted-fraction color maps were also created to plot channel locations during each run. The maps show that the runs with longer interflood durations had channels occurring predominantly down the middle with stronger incision; these runs produced deltas with more elongated geometries. When the interflood duration was even longer, however, strong channels started to occur at multiple locations: the increased interflood period allowed the entire delta surface to be reworked, reducing the downstream slope and allowing channels to be more mobile laterally. Physical modeling of a delta allows us to predict a delta's resulting geometry given a set of conditions. This insight is needed especially with deltas being home to many populations of people and

  2. Phase Space Prediction of Chaotic Time Series with Nu-Support Vector Machine Regression

    International Nuclear Information System (INIS)

    Ye Meiying; Wang Xiaodong

    2005-01-01

    A new class of support vector machine, the nu-support vector machine, is discussed, which can handle both classification and regression. We focus on nu-support vector machine regression and use it for phase space prediction of chaotic time series. The effectiveness of the method is demonstrated by applying it to the Henon map. This study also compares the nu-support vector machine with back-propagation (BP) networks in order to better evaluate the performance of the proposed method. The experimental results show that nu-support vector machine regression obtains a lower root mean squared error than the BP networks and provides accurate chaotic time series prediction. These results can be attributed to the fact that the nu-support vector machine implements the structural risk minimization principle, and this leads to better generalization than the BP networks.
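Phase space prediction of this kind rests on delay embedding: past values of the series become the regression features. The sketch below applies the idea to the Henon map, with ordinary least squares on embedded (and squared) coordinates standing in for the nu-SVR; this is an illustration of the embedding step under that substitution, not of the paper's SVM machinery. It works here because the Henon map is exactly linear in these features:

```python
import numpy as np

# Henon map (a=1.4, b=0.3) in delay form: x_{n+1} = 1 - a*x_n^2 + b*x_{n-1}
a, b = 1.4, 0.3
N = 3000
x = np.empty(N)
x[0] = x[1] = 0.1
for n in range(1, N - 1):
    x[n + 1] = 1 - a * x[n] ** 2 + b * x[n - 1]

# Delay embedding: features (x_n, x_n^2, x_{n-1}, 1) -> target x_{n+1}
X = np.column_stack([x[1:-1], x[1:-1] ** 2, x[:-2], np.ones(N - 2)])
y = x[2:]
Xtr, ytr = X[:2000], y[:2000]      # train on the first part of the orbit
Xte, yte = X[2000:], y[2000:]      # predict the remainder

# Least squares in the embedded space (stand-in for the nu-SVR)
coef, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
rmse = np.sqrt(np.mean((Xte @ coef - yte) ** 2))
print(rmse)
```

With a noisy series, or features in which the dynamics are not exactly linear, a kernel regressor such as the nu-SVR is the more appropriate tool, which is the setting the abstract addresses.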

  3. Precise localization of multiple epiphyseal dysplasia and pseudoachondroplasia mutations by genetic and physical mapping of chromosome 19

    Energy Technology Data Exchange (ETDEWEB)

    Knowlton, R.G.; Cekleniak, J.A. [Jefferson Medical College, Philadelphia, PA (United States); Cohn, D.H. [Cedars-Sinai Medical Center, Los Angeles, CA (United States)] [and others

    1994-09-01

    Multiple epiphyseal dysplasia (EDM1), a dominantly inherited chondrodysplasia resulting in peripheral joint deformities and premature osteoarthritis, and pseudoachondroplasia (PSACH), a more severe disorder associated with short-limbed dwarfism, have recently been mapped to the pericentromeric region of chromosome 19. Chondrocytes from some PSACH patients accumulate lamellar deposits in the endoplasmic reticulum that are immunologically cross-reactive with aggrecan. However, neither aggrecan nor any known candidate gene maps to the EDM1/PSACH region of chromosome 19. Genetic linkage mapping in two large families had placed the disease locus between D19S215 (19p12) and D19S212 (19p13.1), an interval of about 3.5 Mb. With at least five potentially informative cross-overs within this interval, recombination mapping at greater resolution was undertaken. From cosmids assigned to the region by fluorescence in situ hybridization and contig assembly, dinucleotide repeat tracts were identified for use as polymorphic genetic markers. Linkage data from three new dinucleotide repeat markers from cosmids mapped between D19S212 and D19S215 limit the EDM1/PSACH locus to an interval spanning approximately 2 Mb.

  4. An integrated genetic, physical, and transcriptional map of chromosome 13

    Energy Technology Data Exchange (ETDEWEB)

    Scheffer, H.; Kooy, R.F.; Wijngaard, A. [Univ. of Groningen (Netherlands)] [and others

    1994-09-01

    In this study a genetic map containing 20 markers and typed in 40 CEPH families is presented. It includes 7 thus-far untyped microsatellite markers, 7 that have previously been mapped on a subset of 8 CEPH families, one reference marker, D13S71, and three telomeric VNTR markers. Also, 4 intragenic RB1 markers were typed. The markers have an average heterozygosity of 73% (80% excluding the three RFLPs). The total sex-averaged length of the map is 140 cM. The mean female-to-male ratio is 1.54. For the non-telomeric part of the chromosome between the markers D13S221 in 13q12 and D13S173 in 13q33-q34, this ratio is 1.99. The ratio is reversed in the telomeric part of the chromosome between D13S173 and D13S234 in distal 13q34, where it is 0.47. A high new-mutation frequency of 1% was detected in the (CTTT(T))n repeat in intron 20 of the RB1 gene. The map has been integrated with 7 microsatellite markers and 2 RFLP markers from CEPH database version 7.0, resulting in a map with 32 markers (28 loci) on chromosome 13q. In addition, a deletion hybrid breakpoint map ordering 50 markers in 18 intervals was constructed. It includes 32 microsatellite markers, 4 genes, 5 STSs, and 9 ESTs. Each of the 18 intervals contains at least one microsatellite marker included in the extended genetic map. These data allow a correlation between the genetic and physical maps of chromosome 13. New ESTs are currently being identified and localized on this integrated map.

  5. Retrieval and Mapping of Heavy Metal Concentration in Soil Using Time Series Landsat 8 Imagery

    Science.gov (United States)

    Fang, Y.; Xu, L.; Peng, J.; Wang, H.; Wong, A.; Clausi, D. A.

    2018-04-01

    Heavy metal pollution is a critical global environmental problem and a long-standing concern. The traditional approach to obtaining heavy metal concentrations, relying on field sampling and lab testing, is expensive and time consuming. Although many related studies use spectrometer data to build relational models between heavy metal concentration and spectral information, and then use these models for prediction from hyperspectral imagery, this approach can hardly map the soil metal concentration of an area quickly and accurately because of discrepancies between spectrometer data and remote sensing imagery. Taking advantage of the easy accessibility of Landsat 8 data, this study utilizes Landsat 8 imagery to retrieve soil Cu concentration and map its distribution in the study area. To enlarge the spectral information available for more accurate retrieval and mapping, 11 single-date Landsat 8 images from 2013-2017 were selected to form a time series. Three regression methods, partial least squares regression (PLSR), artificial neural network (ANN) and support vector regression (SVR), were used for model construction. By comparing these models without bias, the best model was selected for mapping the Cu concentration distribution. The produced distribution map shows good spatial autocorrelation and consistency with the mining area locations.

  6. Predicting restoration of kidney function during CRRT-free intervals

    Directory of Open Access Journals (Sweden)

    Heise Daniel

    2012-01-01

    Full Text Available Abstract Background: Renal failure is common in critically ill patients and frequently requires continuous renal replacement therapy (CRRT). CRRT is discontinued at regular intervals for routine changes of the disposable equipment or for replacing clogged filter membrane assemblies. The present study was conducted to determine whether the necessity to continue CRRT could be predicted during the CRRT-free period. Materials and methods: In the period from 2003 to 2006, 605 patients were treated with CRRT in our ICU. A total of 222 patients with 448 CRRT-free intervals had complete data sets and were used for analysis. Of the total CRRT-free periods, 225 served as an evaluation group. Twenty-nine parameters with an assumed influence on kidney function were analyzed with regard to their potential to predict the restoration of kidney function during the CRRT-free interval. Using univariate analysis and logistic regression, a prospective index was developed and validated in the remaining 223 CRRT-free periods to establish its prognostic strength. Results: Only three parameters showed an independent influence on the restoration of kidney function during CRRT-free intervals: the number of previous CRRT cycles (medians in the two outcome groups: 1 vs. 2), the Sequential Organ Failure Assessment (SOFA) score (means in the two outcome groups: 8.3 vs. 9.2) and urinary output after the cessation of CRRT (medians in the two outcome groups: 66 ml/h vs. 10 ml/h). The prognostic index, calculated from these three variables, showed satisfactory potential to predict kidney function during the CRRT-free intervals; receiver operating characteristic (ROC) analysis revealed an area under the curve of 0.798. Conclusion: Restoration of kidney function during CRRT-free periods can be predicted with an index calculated from three variables. Prospective trials in other hospitals must clarify whether our results are generally transferable to other patient populations.
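A prognostic index of this kind is essentially the linear predictor of a fitted logistic regression, and its discrimination can be summarized by the ROC area. The sketch below uses synthetic data loosely mimicking the three predictors; all distributions and effect sizes are invented, and the fitting is plain gradient descent rather than the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented predictors loosely mirroring the paper's three variables
n = 1000
cycles = rng.poisson(1.5, n)            # number of previous CRRT cycles
sofa = rng.normal(8.5, 2.0, n)          # SOFA score
urine = rng.gamma(2.0, 30.0, n)         # urinary output after cessation (ml/h)

# Synthetic outcome: restoration of kidney function (coefficients invented)
logit = -1.0 - 0.6 * cycles - 0.15 * (sofa - 8.5) + 0.03 * urine
yobs = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit logistic regression by gradient descent on standardized features
X = np.column_stack([np.ones(n), cycles, sofa, urine])
X[:, 1:] = (X[:, 1:] - X[:, 1:].mean(0)) / X[:, 1:].std(0)
w = np.zeros(4)
for _ in range(5000):
    w -= 0.5 * X.T @ (1 / (1 + np.exp(-X @ w)) - yobs) / n

# The prognostic index is the linear predictor; AUC via the Mann-Whitney form
score = X @ w
pos, neg = score[yobs == 1], score[yobs == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(f"AUC = {auc:.3f}")
```

An AUC near 0.8, like the 0.798 reported, indicates that a randomly chosen patient who recovered scores higher than a randomly chosen patient who did not about 80% of the time.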

  7. Common y-intercept and single compound regressions of gas-particle partitioning data vs 1/T

    Science.gov (United States)

    Pankow, James F.

    Confidence intervals are placed around the log Kp vs 1/T correlation equations obtained using simple linear regressions (SLR) with the gas-particle partitioning data set of Yamasaki et al. [(1982) Env. Sci. Technol. 16, 189-194]. The compounds and groups of compounds studied include the polycyclic aromatic hydrocarbons phenanthrene + anthracene, me-phenanthrene + me-anthracene, fluoranthene, pyrene, benzo[a]fluorene + benzo[b]fluorene, chrysene + benz[a]anthracene + triphenylene, benzo[b]fluoranthene + benzo[k]fluoranthene, and benzo[a]pyrene + benzo[e]pyrene (note: me = methyl). For any given compound, at equilibrium, the partition coefficient Kp equals (F/TSP)/A where F is the particulate-matter associated concentration (ng m-3), A is the gas-phase concentration (ng m-3), and TSP is the concentration of particulate matter (μg m-3). At temperatures more than 10°C from the mean sampling temperature of 17°C, the confidence intervals are quite wide. Since theory predicts that similar compounds sorbing on the same particulate matter should possess very similar y-intercepts, the data set was also fitted using a special common y-intercept regression (CYIR). For most of the compounds, the CYIR equations fell inside the SLR 95% confidence intervals. The CYIR y-intercept value is -18.48, and is reasonably close to the type of value that can be predicted for PAH compounds. The set of CYIR regression equations is probably more reliable than the set of SLR equations. For example, the CYIR-derived desorption enthalpies are much more highly correlated with vaporization enthalpies than are the SLR-derived desorption enthalpies. It is recommended that the CYIR approach be considered whenever analysing temperature-dependent gas-particle partitioning data.
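A common y-intercept regression can be fitted in one least-squares pass by giving all compounds a shared intercept column and each compound its own 1/T column in the design matrix. The sketch below uses synthetic data with an assumed common intercept of -18.48 (the value quoted above); the compound count, slopes and noise level are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic log Kp vs 1/T data for three hypothetical compounds that share
# a common y-intercept (the CYIR assumption) but have different slopes
a_true = -18.48
slopes_true = np.array([4600.0, 4900.0, 5200.0])
T = rng.uniform(273.0, 303.0, size=(3, 40))     # sampling temperatures (K)
invT = 1.0 / T
logKp = a_true + slopes_true[:, None] * invT + rng.normal(0, 0.1, invT.shape)

# Design matrix: one shared intercept column plus one 1/T column per compound
n_comp, n_obs = invT.shape
D = np.zeros((n_comp * n_obs, 1 + n_comp))
D[:, 0] = 1.0
for j in range(n_comp):
    D[j * n_obs:(j + 1) * n_obs, 1 + j] = invT[j]

coef, *_ = np.linalg.lstsq(D, logKp.ravel(), rcond=None)
print("common intercept:", coef[0])
print("per-compound slopes:", coef[1:])
```

Because the intercept is an extrapolation to 1/T = 0, far outside the sampled range, pooling compounds under the shared-intercept constraint tightens its estimate relative to fitting each compound separately, which is the practical advantage of CYIR over SLR.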

  8. Regression of left ventricular hypertrophy and microalbuminuria changes during antihypertensive treatment.

    Science.gov (United States)

    Rodilla, Enrique; Pascual, Jose Maria; Costa, Jose Antonio; Martin, Joaquin; Gonzalez, Carmen; Redon, Josep

    2013-08-01

    The objective of the present study was to assess the regression of left ventricular hypertrophy (LVH) during antihypertensive treatment, and its relationship with changes in microalbuminuria. One hundred and sixty-eight previously untreated patients with echocardiographic LVH, 46 (27%) with microalbuminuria, were followed for a median period of 13 months (range 6-23 months) and treated with lifestyle changes and antihypertensive drugs. Twenty-four-hour ambulatory blood pressure monitoring, echocardiography and urinary albumin excretion were assessed at the beginning and at the end of the study period. Left ventricular mass index (LVMI) was reduced from 137 [interquartile interval (IQI), 129-154] to 121 (IQI, 104-137) g/m² […]. Patients with regression of microalbuminuria (> 50%) had the same odds of achieving regression of LVH as patients with normoalbuminuria (OR 1.1; 95% CI 0.38-3.25; P = 0.85). However, those with microalbuminuria at baseline who did not regress had less probability of achieving LVH regression than the normoalbuminuric patients (OR 0.26; 95% CI 0.07-0.90; P = 0.03), even when adjusted for age, sex, initial LVMI, GFR, blood pressure and angiotensin-converting enzyme inhibitor (ACE-I) or angiotensin receptor blocker (ARB) treatment during the follow-up. Patients who do not have a significant reduction in microalbuminuria have less chance of achieving LVH regression, independent of blood pressure reduction.

  9. Programming with Intervals

    Science.gov (United States)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  10. Use of ocean color scanner data in water quality mapping

    Science.gov (United States)

    Khorram, S.

    1981-01-01

    Remotely sensed data, in combination with in situ data, are used in assessing water quality parameters within the San Francisco Bay-Delta. The parameters include suspended solids, chlorophyll, and turbidity. Regression models are developed between each of the water quality parameter measurements and the Ocean Color Scanner (OCS) data. The models are then extended to the entire study area for mapping water quality parameters. The results include a series of color-coded maps, each pertaining to one of the water quality parameters, and the statistical analysis of the OCS data and regression models. It is found that concurrently collected OCS data and surface truth measurements are highly useful in mapping the selected water quality parameters and locating areas having relatively high biological activity. In addition, it is found to be virtually impossible, at least within this test site, to locate such areas on U-2 color and color-infrared photography.

  11. A new perspective in the estimation of postmortem interval (PMI) based on vitreous.

    Science.gov (United States)

    Muñoz, J I; Suárez-Peñaranda, J M; Otero, X L; Rodríguez-Calvo, M S; Costas, E; Miguéns, X; Concheiro, L

    2001-03-01

    The relation between the potassium concentration in the vitreous humor, [K+], and the postmortem interval has been studied by several authors. Many formulae are available and they are based on a correlation test and linear regression using the PMI as the independent variable and [K+] as the dependent variable. The estimation of the confidence interval is based on this formulation. However, in forensic work, it is necessary to use [K+] as the independent variable to estimate the PMI. Although all authors have obtained the PMI by direct use of these formulae, it is, nevertheless, an inexact approach, which leads to false estimations. What is required is to change the variables, obtaining a new equation in which [K+] is considered as the independent variable and the PMI as the dependent. The regression line obtained from our data is [K+] = 5.35 + 0.22 PMI, by changing the variables we get PMI = 2.58[K+] - 9.30. When only nonhospital deaths are considered, the results are considerably improved. In this case, we get [K+] = 5.60 + 0.17 PMI and, consequently, PMI = 3.92[K+] - 19.04.
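
    The two fitted lines reported above can be written directly; units are assumed to be hours for PMI and mmol/L for [K+]. The comment restates the abstract's point that the re-fitted line is a separate regression, not the algebraic inverse of the first.

```python
# The two regression lines reported in the abstract (all deaths).
def k_from_pmi(pmi_hours: float) -> float:
    return 5.35 + 0.22 * pmi_hours          # [K+] = 5.35 + 0.22*PMI

def pmi_from_k(k_mmol_per_l: float) -> float:
    return 2.58 * k_mmol_per_l - 9.30       # PMI = 2.58*[K+] - 9.30

# Note: pmi_from_k comes from re-fitting with the roles of the variables
# swapped, NOT from inverting k_from_pmi algebraically -- inverting the
# first line would give PMI = ([K+] - 5.35)/0.22 ~ 4.55*[K+] - 24.3,
# which is the inexact approach the abstract warns against.
print(pmi_from_k(10.0))  # estimated PMI for [K+] = 10 mmol/L
```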

  12. Electrocardiographic Abnormalities and QTc Interval in Patients Undergoing Hemodialysis.

    Directory of Open Access Journals (Sweden)

    Yuxin Nie

    abnormalities were found in this group. In multiple regression analyses, serum Ca2+ concentration before HD and LAD were independent variables of QTc interval prolongation. UA, ferritin, and interventricular septum were independent variables of ΔQTc. Prolonged QT interval is very common in HD patients and is associated with several risk factors. An appropriate concentration of dialysate electrolytes should be chosen depending on patients' clinical conditions.

  13. Fine-mapping and initial characterization of QT interval loci in African Americans.

    Directory of Open Access Journals (Sweden)

    Christy L Avery

    The QT interval (QT) is heritable and its prolongation is a risk factor for ventricular tachyarrhythmias and sudden death. Most genetic studies of QT have examined European ancestral populations; however, the increased genetic diversity in African Americans provides opportunities to narrow association signals and identify population-specific variants. We therefore evaluated 6,670 SNPs spanning eleven previously identified QT loci in 8,644 African American participants from two Population Architecture using Genomics and Epidemiology (PAGE) studies: the Atherosclerosis Risk in Communities study and the Women's Health Initiative Clinical Trial. Of the fifteen known independent QT variants at the eleven previously identified loci, six were significantly associated with QT in African American populations (P ≤ 1.20×10^(-4)): ATP1B1, PLN1, KCNQ1, NDRG4, and two NOS1AP independent signals. We also identified three population-specific signals significantly associated with QT in African Americans (P ≤ 1.37×10^(-5)): one at NOS1AP and two at ATP1B1. Linkage disequilibrium (LD) patterns in African Americans assisted in narrowing the region likely to contain the functional variants for several loci. For example, African American LD patterns showed that 0 SNPs were in LD with NOS1AP signal rs12143842, compared with European LD patterns that indicated 87 SNPs, spanning 114.2 kb, were in LD with rs12143842. Finally, bioinformatic-based characterization of the nine African American signals pointed to functional candidates located exclusively within non-coding regions, including predicted binding sites for transcription factors such as TBX5, which has been implicated in cardiac structure and conductance. In this detailed evaluation of QT loci, we identified several African American SNPs that better define the association with QT and successfully narrowed intervals surrounding established loci. These results demonstrate that the same loci influence variation in QT

  14. Semitone frequency mapping to improve music representation for nucleus cochlear implants

    Directory of Open Access Journals (Sweden)

    Omran Sherif

    2011-01-01

    The frequency-to-channel mapping for cochlear implant (CI) signal processors was originally designed to optimize speech perception and generally does not preserve the harmonic structure of music sounds. An algorithm aimed at restoring the harmonic relationship of frequency components based on semitone mapping is presented in this article. Two semitone (Smt) based mappings in different frequency ranges were investigated. The first, Smt-LF, covers a range from 130 to 1502 Hz, which encompasses the fundamental frequency of most musical instruments. The second, Smt-MF, covers a range from 440 to 5040 Hz, allocating frequency bands of sounds close to their characteristic tonotopical sites according to Greenwood's function. Smt-LF, in contrast, transposes the input frequencies onto locations with higher characteristic frequencies. A sequence of 36 synthetic complex tones (C3 to B5), each consisting of a fundamental and 4 harmonic overtones, was processed using the standard (Std), Smt-LF and Smt-MF mappings. The analysis of output signals showed that the harmonic structure between overtones of all complex tones was preserved using Smt mapping. Semitone mapping preserves the harmonic structure and may in turn improve music representation for Nucleus cochlear implants. The proposed semitone mappings incorporate the use of virtual channels to allow frequencies spanning three and a half octaves to be mapped to 43 stimulation channels. A pitch difference limen test was done with normal-hearing subjects discriminating pairs of pure tones with different semitone intervals, which were processed by a vocoder-type simulator of CI sound processing. The results showed better performance with wider semitone intervals. However, no significant difference was found between 22- and 43-channel maps.
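
    As a check on the stated ranges: dividing the Smt-LF range (130-1502 Hz) into one-semitone bands, each spanning a frequency ratio of 2^(1/12), yields 43 channels, consistent with the 43 stimulation channels mentioned. The partitioning below is an illustrative sketch, not the authors' exact filter-bank design.

```python
# Semitone band edges for the Smt-LF range quoted in the abstract
# (130-1502 Hz): each channel spans one semitone, i.e. a ratio of 2**(1/12).
def semitone_edges(f_low: float, f_high: float) -> list:
    ratio = 2 ** (1 / 12)
    edges = [f_low]
    while edges[-1] * ratio < f_high:   # add full-semitone edges
        edges.append(edges[-1] * ratio)
    edges.append(f_high)                # close the last (partial) band
    return edges

edges = semitone_edges(130.0, 1502.0)
print(len(edges) - 1)  # -> 43 channels, spanning ~3.5 octaves
```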

  15. A consistent framework for Horton regression statistics that leads to a modified Hack's law

    Science.gov (United States)

    Furey, P.R.; Troutman, B.M.

    2008-01-01

    A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for the Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.
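
    Hack's law in its classical form, L = c·A^h, is estimated as a straight-line fit in log-log space. The sketch below fits the exponent on synthetic basins; the true exponent 0.6 and the noise level are illustrative choices, not the paper's data.

```python
import numpy as np

# Log-log regression for Hack's law, L = c * A**h, on synthetic basins.
rng = np.random.default_rng(0)
A = rng.uniform(1.0, 1000.0, size=200)                  # drainage areas
L = 1.4 * A ** 0.6 * np.exp(rng.normal(0, 0.05, 200))   # lengths with noise

# Fitting ln L = ln c + h * ln A recovers the Hack exponent as the slope.
h, ln_c = np.polyfit(np.log(A), np.log(L), 1)
print(h)   # close to the true exponent 0.6
```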

  16. Carbon emissions risk map from deforestation in the tropical Amazon

    Science.gov (United States)

    Ometto, J.; Soler, L. S.; Assis, T. D.; Oliveira, P. V.; Aguiar, A. P.

    2011-12-01

    This work aims to estimate the carbon emissions from tropical deforestation in the Brazilian Amazon associated with the risk assessment of future land use change. The emissions are estimated by incorporating temporal deforestation dynamics, accounting for the biophysical and socioeconomic heterogeneity in the region, as well as secondary forest growth dynamics in abandoned areas. The land cover change model that supported the risk assessment of deforestation was run based on linear regressions. This method takes into account the spatial heterogeneity of deforestation, as the spatial variables adopted to fit the final regression model comprise: environmental aspects, economic attractiveness, accessibility and land tenure structure. After fitting suitable regression models for each land cover category, the potential of each cell to be deforested (at 25x25 km and 5x5 km resolution) in the near future was used to calculate the risk assessment of land cover change. The carbon emissions model combines high-resolution new forest clear-cut mapping and four alternative sources of spatial information on biomass distribution for different vegetation types. The risk assessment map of CO2 emissions was obtained by crossing the simulation results of the historical land cover changes with a map of aboveground biomass contained in the remaining forest. This final map represents the risk of CO2 emissions at 25x25 km and 5x5 km resolution until 2020, under a scenario of carbon emission reduction targets.

  17. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method, which assumes that errors are confined to the dependent variable, has seen a fair share of applications in aerosol science. The ordinary least squares approach, however, can be problematic because atmospheric data often do not lend themselves to calling one variable independent and the other dependent. Errors often exist in both measurements. In this work, we examine two regression approaches available to accommodate this situation: orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO changes with age.
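
    The three estimators can be compared numerically on data with errors in both variables. The sketch below uses synthetic data (true slope 2.0, noise levels chosen for illustration): the OLS slope is attenuated by the error in x, the geometric mean regression slope is sign(r)·sd(y)/sd(x), and the orthogonal slope comes from total least squares via the SVD.

```python
import numpy as np

# Synthetic data: both x and y measure a latent variable t with error.
rng = np.random.default_rng(1)
t = rng.uniform(0, 10, 500)
x = t + rng.normal(0, 0.5, 500)
y = 2.0 * t + rng.normal(0, 1.0, 500)

# Ordinary least squares slope (assumes error only in y; attenuated here)
b_ols = np.polyfit(x, y, 1)[0]

# Geometric mean regression slope: sign(r) * sd(y)/sd(x)
r = np.corrcoef(x, y)[0, 1]
b_gmr = np.sign(r) * np.std(y) / np.std(x)

# Orthogonal (total least squares) slope from the smallest right singular
# vector of the centered data matrix
M = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(M)
b_tls = -vt[-1, 0] / vt[-1, 1]

print(b_ols, b_gmr, b_tls)  # OLS is pulled below the other two
```

    Since b_ols = r·sd(y)/sd(x), the OLS slope is always smaller in magnitude than the geometric mean slope whenever |r| < 1, which is the attenuation the abstract alludes to.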

  18. Associations between dairy cow inter-service interval and probability of conception.

    Science.gov (United States)

    Remnant, J G; Green, M J; Huxley, J N; Hudson, C D

    2018-07-01

    Recent research has indicated that the interval between inseminations in modern dairy cattle is often longer than the commonly accepted cycle length of 18-24 days. This study analysed 257,396 inseminations in 75,745 cows from 312 herds in England and Wales. The intervals between subsequent inseminations in the same cow in the same lactation (inter-service interval, ISI) were calculated and inseminations categorised as successful or unsuccessful depending on whether there was a corresponding calving event. Conception risk was calculated for each individual ISI between 16 and 28 days. A random effects logistic regression model was fitted to the data with pregnancy as the outcome variable and ISI (in days) included in the model as a categorical variable. The modal ISI was 22 days and the peak conception risk was 44% for ISIs of 21 days, rising from 27% at 16 days. The logistic regression model revealed significant associations of conception risk with ISI as well as 305 day milk yield, insemination number, parity and days in milk. Predicted conception risk was lower for ISIs of 16, 17 and 18 days and higher for ISIs of 20, 21 and 22 days compared to 25 day ISIs. A mixture model was specified to identify clusters in insemination frequency and conception risk for ISIs between 3 and 50 days. A "high conception risk, high insemination frequency" cluster was identified between 19 and 26 days, which indicated that this time period was the true latent distribution for ISI with optimal reproductive outcome. These findings suggest that the period of increased numbers of inseminations around 22 days identified in existing work coincides with the period of increased probability of conception and therefore likely represents true return-to-estrus events. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of the polynomial function and deduces its parameters with the ordinary least squares estimate. Then a significance test method for the polynomial regression function is derived, considering the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
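
    The procedure described above, a polynomial least-squares fit followed by an F-test of the regression function, can be sketched as follows. The decay-heat data here are synthetic stand-ins, not the paper's isotope measurements.

```python
import numpy as np
from scipy import stats

# Polynomial regression with an F-test for the significance of the fitted
# regression function (synthetic decay-heat-like data).
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 40)                              # time, arbitrary units
power = 5.0 - 0.8 * t + 0.05 * t**2 + rng.normal(0, 0.1, t.size)

deg = 2
coeffs = np.polyfit(t, power, deg)                      # least-squares fit
fitted = np.polyval(coeffs, t)

# F statistic: (regression SS / deg) / (residual SS / (n - deg - 1))
n = t.size
ss_reg = np.sum((fitted - power.mean()) ** 2)
ss_res = np.sum((power - fitted) ** 2)
F = (ss_reg / deg) / (ss_res / (n - deg - 1))
p_value = stats.f.sf(F, deg, n - deg - 1)               # upper-tail p-value
print(F, p_value)
```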

  20. Comparing spatial regression to random forests for large ...

    Science.gov (United States)

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and poor).

  1. Impact of Risk Factors on Different Interval Cancer Subtypes in a Population-Based Breast Cancer Screening Programme

    Science.gov (United States)

    Blanch, Jordi; Sala, Maria; Ibáñez, Josefa; Domingo, Laia; Fernandez, Belén; Otegi, Arantza; Barata, Teresa; Zubizarreta, Raquel; Ferrer, Joana; Castells, Xavier; Rué, Montserrat; Salas, Dolores

    2014-01-01

    Background: Interval cancers are primary breast cancers diagnosed in women after a negative screening test and before the next screening invitation. Our aim was to evaluate risk factors for interval cancer and their subtypes and to compare the risk factors identified with those associated with incident screen-detected cancers. Methods: We analyzed data from 645,764 women participating in the Spanish breast cancer screening program from 2000–2006 and followed-up until 2009. A total of 5,309 screen-detected and 1,653 interval cancers were diagnosed. Among the latter, 1,012 could be classified on the basis of findings in screening and diagnostic mammograms, consisting of 489 true interval cancers (48.2%), 235 false-negatives (23.2%), 172 minimal-signs (17.2%) and 114 occult tumors (11.3%). Information on the screening protocol and women's characteristics was obtained from the screening program registry. Cause-specific Cox regression models were used to estimate the hazard ratios (HR) of risk factors for interval cancer and incident screen-detected cancer. A multinomial regression model, using screen-detected tumors as a reference group, was used to assess the effect of breast density and other factors on the occurrence of interval cancer subtypes. Results: A previous false-positive was the main risk factor for interval cancer (HR = 2.71, 95%CI: 2.28–3.23); this risk was higher for false-negatives (HR = 8.79, 95%CI: 6.24–12.40) than for true interval cancers (HR = 2.26, 95%CI: 1.59–3.21). A family history of breast cancer was associated with true intervals (HR = 2.11, 95%CI: 1.60–2.78), and a previous benign biopsy with false-negatives (HR = 1.83, 95%CI: 1.23–2.71). High breast density was mainly associated with occult tumors (RRR = 4.92, 95%CI: 2.58–9.38), followed by true intervals (RRR = 1.67, 95%CI: 1.18–2.36) and false-negatives (RRR = 1.58, 95%CI: 1.00–2.49). Conclusion: The role of women's characteristics differs among

  2. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...

  3. Regression Analyses on the Butterfly Ballot Effect: A Statistical Perspective of the US 2000 Election

    Science.gov (United States)

    Wu, Dane W.

    2002-01-01

    The year 2000 US presidential election between Al Gore and George Bush has been the most intriguing and controversial one in American history. The state of Florida was the trigger for the controversy, mainly due to the use of the misleading "butterfly ballot". Using prediction (or confidence) intervals for least squares regression lines…

  4. Randomness control of vehicular motion through a sequence of traffic signals at irregular intervals

    International Nuclear Information System (INIS)

    Nagatani, Takashi

    2010-01-01

    We study the regularization of the irregular motion of a vehicle moving through a sequence of traffic signals with a disordered configuration. Each traffic signal is controlled by both cycle time and phase shift. The cycle time is the same for all signals, while the phase shift varies from signal to signal by synchronizing with the intervals between a signal and the next signal. The nonlinear dynamics of the vehicular motion are described by a stochastic nonlinear map. The vehicle exhibits very complex behavior as both the cycle time and the strength of the interval irregularity are varied. The irregular motion induced by the disordered configuration is regularized by adjusting the phase shift within the regularization regions.

  5. Contractive maps on normed linear spaces and their applications to nonlinear matrix equations.

    NARCIS (Netherlands)

    Reurings, M.C.B.

    2017-01-01

    In this paper the author gives necessary and sufficient conditions under which a map is a contraction on a certain subset of a normed linear space. These conditions are already well known for maps on intervals in R. Using the conditions and Banach's fixed point theorem a fixed point theorem can be

  6. Application of Logistic Regression Tree Model in Determining Habitat Distribution of Astragalus verus

    Directory of Open Access Journals (Sweden)

    M. Saki

    2013-03-01

    The relationship between plant species and environmental factors has always been a central issue in plant ecology. With the rising power of statistical techniques, geostatistics and geographic information systems (GIS), the development of predictive habitat distribution models of organisms has rapidly increased in ecology. This study aimed to evaluate the ability of the logistic regression tree (LRT) model to create a potential habitat map of Astragalus verus. This species produces tragacanth and has economic value. A stratified random sampling was applied to 100 sites (50 presence, 50 absence) of the given species, and environmental and edaphic factor maps were produced using kriging and inverse distance weighting methods in the ArcGIS software for the whole study area. Relationships between species occurrence and environmental factors were determined by the logistic regression tree model and extended to the whole study area. The results indicated species occurrence has a strong correlation with environmental factors such as mean daily temperature and the clay, EC and organic carbon content of the soil. Species occurrence showed a direct relationship with mean daily temperature, clay and organic carbon, and an inverse relationship with EC. Model accuracy was evaluated both by Cohen's kappa statistic (κ) and by the area under the receiver operating characteristic (ROC) curve based on an independent test data set. Their values (kappa = 0.9, AUC = 0.96) indicated the high power of LRT to create potential habitat maps on local scales. This model, therefore, can be applied to recognize potential sites for rangeland reclamation projects.

  7. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
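
    The asymmetry at issue is easy to see by simulation: draw the two coefficient estimates, form their product, and compare the resulting percentile interval with the symmetric normal-theory (Sobel) interval. The coefficient values and standard errors below are illustrative, not from the article's data example.

```python
import numpy as np

# Monte Carlo illustration: the product of two regression coefficient
# estimates (the indirect effect a*b) has an asymmetric distribution.
rng = np.random.default_rng(3)
a_hat, se_a = 0.30, 0.10     # path X -> M (illustrative values)
b_hat, se_b = 0.40, 0.10     # path M -> Y, controlling X

draws = rng.normal(a_hat, se_a, 100_000) * rng.normal(b_hat, se_b, 100_000)

# Asymmetric interval from the simulated distribution of the product
lo, hi = np.percentile(draws, [2.5, 97.5])

# Symmetric normal-theory interval using the first-order (Sobel) SE
se_ab = np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
lo_norm = a_hat * b_hat - 1.96 * se_ab
hi_norm = a_hat * b_hat + 1.96 * se_ab

print((lo, hi), (lo_norm, hi_norm))  # simulated interval has a longer upper tail
```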

  8. Accurate estimation of short read mapping quality for next-generation genome sequencing

    Science.gov (United States)

    Ruffalo, Matthew; Koyutürk, Mehmet; Ray, Soumya; LaFramboise, Thomas

    2012-01-01

    Motivation: Several software tools specialize in the alignment of short next-generation sequencing reads to a reference sequence. Some of these tools report a mapping quality score for each alignment—in principle, this quality score tells researchers the likelihood that the alignment is correct. However, the reported mapping quality often correlates weakly with actual accuracy and the qualities of many mappings are underestimated, encouraging the researchers to discard correct mappings. Further, these low-quality mappings tend to correlate with variations in the genome (both single nucleotide and structural), and such mappings are important in accurately identifying genomic variants. Approach: We develop a machine learning tool, LoQuM (LOgistic regression tool for calibrating the Quality of short read mappings), to assign reliable mapping quality scores to mappings of Illumina reads returned by any alignment tool. LoQuM uses statistics on the read (base quality scores reported by the sequencer) and the alignment (number of matches, mismatches and deletions, mapping quality score returned by the alignment tool, if available, and number of mappings) as features for classification and uses simulated reads to learn a logistic regression model that relates these features to actual mapping quality. Results: We test the predictions of LoQuM on an independent dataset generated by the ART short read simulation software and observe that LoQuM can ‘resurrect’ many mappings that are assigned zero quality scores by the alignment tools and are therefore likely to be discarded by researchers. We also observe that the recalibration of mapping quality scores greatly enhances the precision of called single nucleotide polymorphisms. Availability: LoQuM is available as open source at http://compbio.case.edu/loqum/. Contact: matthew.ruffalo@case.edu. PMID:22962451
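
    The recalibration idea, learning P(mapping correct) from alignment features on simulated reads and reporting it on the Phred scale, can be sketched with a toy logistic regression. The two features, the coefficients, and the data below are all synthetic illustrations in the spirit of LoQuM, not its implementation.

```python
import numpy as np

# Toy logistic-regression recalibration of mapping quality.
rng = np.random.default_rng(6)
n = 5000
X = np.column_stack([
    rng.normal(30, 5, n),      # feature 1: mean base quality
    rng.poisson(2, n),         # feature 2: number of mismatches
])
# Synthetic "truth": correctness depends on the features (assumed model)
logit = 0.3 * (X[:, 0] - 30) - 1.2 * (X[:, 1] - 2) + 2.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit logistic regression by plain gradient descent on standardized features
Z = (X - X.mean(0)) / X.std(0)
Zb = np.column_stack([np.ones(n), Z])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Zb @ w))
    w -= 0.1 * Zb.T @ (p - y) / n

def mapq(base_qual, mismatches):
    """Phred-scaled recalibrated mapping quality, -10*log10(P(incorrect))."""
    z = (np.array([base_qual, mismatches]) - X.mean(0)) / X.std(0)
    p_correct = 1 / (1 + np.exp(-(w[0] + z @ w[1:])))
    return float(-10 * np.log10(max(1 - p_correct, 1e-10)))

print(mapq(35, 0), mapq(20, 6))  # clean read scores higher than noisy read
```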

  9. Spatial Quantile Regression In Analysis Of Healthy Life Years In The European Union Countries

    Directory of Open Access Journals (Sweden)

    Trzpiot Grażyna

    2016-12-01

    The paper investigates the impact of the selected factors on the healthy life years of men and women in the EU countries. Multiple quantile spatial autoregression models are used in order to account for substantial differences in healthy life years and life quality across the EU members. Quantile regression allows studying dependencies between variables in different quantiles of the response distribution. Moreover, this statistical tool is robust against violations of the classical regression assumption about the distribution of the error term. Parameters of the models were estimated using the instrumental variable method (Kim, Muller 2004), whereas the confidence intervals and p-values were bootstrapped.

  10. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression, which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based…

  11. Chaos caused by a topologically mixing map

    International Nuclear Information System (INIS)

    Xiong Jincheng; Yang Zhongguo

    1991-01-01

    In the present paper we show that for a topologically mixing map there exists a subset consisting of considerably many points in its domain, called a chaotic subset, for which the orbits of all points display time dependence greatly more erratic than for a scrambled subset, i.e., if a continuous map f : X → X is topologically mixing, where X is a separable locally compact metric space containing at least two points, then for any increasing sequence {p_i} of positive integers there exists a c-dense subset C of X satisfying the condition: for any continuous map F : A → X, where A is a subset of C, there is a subsequence {q_i} of the sequence {p_i} such that lim_{i→∞} f^{q_i}(x) = F(x) for every x ∈ A. As an application we show that the set of interval maps having a chaotic (respectively, scrambled) subset with full Lebesgue measure is dense in the space of all topologically mixing (respectively, transitive) maps. (author). 11 refs

  12. Do country-specific preference weights matter in the choice of mapping algorithms? The case of mapping the Diabetes-39 onto eight country-specific EQ-5D-5L value sets.

    Science.gov (United States)

    Lamu, Admassu N; Chen, Gang; Gamst-Klaussen, Thor; Olsen, Jan Abel

    2018-03-22

    To develop mapping algorithms that transform Diabetes-39 (D-39) scores onto EQ-5D-5L utility values for each of eight recently published country-specific EQ-5D-5L value sets, and to compare mapping functions across the EQ-5D-5L value sets. Data include 924 individuals with self-reported diabetes from six countries. The D-39 dimensions, age and gender were used as potential predictors for EQ-5D-5L utilities, which were scored using value sets from eight countries (England, the Netherlands, Spain, Canada, Uruguay, China, Japan and Korea). Ordinary least squares, generalised linear model, beta binomial regression, fractional regression, MM estimation and censored least absolute deviation were used to estimate the mapping algorithms. The optimal algorithm for each country-specific value set was primarily selected based on normalised root mean square error (NRMSE), normalised mean absolute error (NMAE) and adjusted R². Cross-validation with a fivefold approach was conducted to test the generalizability of each model. The fractional regression model with loglog as a link function consistently performed best in all country-specific value sets. For instance, the NRMSE (0.1282) and NMAE (0.0914) were the lowest, while adjusted R² was the highest (52.5%) when the English value set was considered. Among D-39 dimensions, energy and mobility was the only one that was consistently significant for all models. The D-39 can be mapped onto the EQ-5D-5L utilities with good predictive accuracy. The fractional regression model, which is appropriate for handling bounded outcomes, outperformed other candidate methods in all country-specific value sets. However, the regression coefficients differed, reflecting preference heterogeneity across countries.
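
    The selection criteria used above can be written as small functions. Here both errors are normalised by the observed range, one common convention; the record does not state the paper's exact normalisation, so treat that choice as an assumption.

```python
import numpy as np

# Normalised RMSE and MAE, normalised by the range of observed values.
def nrmse(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.mean((obs - pred) ** 2)) / (obs.max() - obs.min())

def nmae(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.mean(np.abs(obs - pred)) / (obs.max() - obs.min())

# Tiny worked example: every prediction is off by 0.05, range is 0.8,
# so both metrics equal 0.05 / 0.8 = 0.0625.
obs = [0.2, 0.5, 0.9, 1.0]
pred = [0.25, 0.45, 0.85, 0.95]
print(nrmse(obs, pred), nmae(obs, pred))  # -> 0.0625 0.0625
```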

  13. Estimating the postmortem interval (PMI) using accumulated degree-days (ADD) in a temperate region of South Africa.

    Science.gov (United States)

    Myburgh, Jolandie; L'Abbé, Ericka N; Steyn, Maryna; Becker, Piet J

    2013-06-10

    The validity of the method in which total body score (TBS) and accumulated degree-days (ADD) are used to estimate the postmortem interval (PMI) is examined. TBS and ADD were recorded for 232 days in northern South Africa, which has temperatures between 17 and 28 °C in summer and 6 and 20 °C in winter. Winter temperatures rarely go below 0 °C. Thirty pig carcasses, which weighed between 38 and 91 kg, were used. TBS was scored using the modified method of Megyesi et al. [1]. Temperature was acquired from an on-site data logger and the weather station bureau; differences between these two sources were not statistically significant. Using loglinear random-effects maximum likelihood regression, an r² value for ADD (0.6227) was produced, and linear regression formulae to estimate PMI from ADD with a 95% prediction interval were developed. The data of 16 additional pigs that were placed a year later were then used to validate the accuracy of this method. The actual PMI and ADD were compared to the estimated PMI and ADD produced by the developed formulae, as well as the estimated PMIs within the 95% prediction interval. The validation produced poor results, as only one pig of 16 fell within the 95% interval when using the formulae, showing that ADD has limited use in the prediction of PMI in a South African setting. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
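
    The 95% prediction interval used in the study follows the standard simple-linear-regression construction; the sketch below applies it to synthetic (ADD, PMI) pairs, which are stand-ins for the pig-carcass data, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Linear regression of PMI on ADD with a 95% prediction interval for a
# new observation (synthetic data; slope 0.1 and noise are illustrative).
rng = np.random.default_rng(4)
add = np.sort(rng.uniform(50, 2000, 30))          # accumulated degree-days
pmi = 0.1 * add + rng.normal(0, 10, add.size)     # PMI in days, synthetic

n = add.size
b1, b0 = np.polyfit(add, pmi, 1)
resid = pmi - (b0 + b1 * add)
s = np.sqrt(np.sum(resid**2) / (n - 2))           # residual standard error

x_new = 800.0
y_hat = b0 + b1 * x_new
# Prediction SE includes the extra "+1" for a single new observation
se_pred = s * np.sqrt(1 + 1/n + (x_new - add.mean())**2
                      / np.sum((add - add.mean())**2))
t_crit = stats.t.ppf(0.975, n - 2)
lo_pi, hi_pi = y_hat - t_crit * se_pred, y_hat + t_crit * se_pred
print(lo_pi, hi_pi)
```

    The validation step in the abstract then amounts to checking whether each new animal's actual PMI falls inside its interval [lo_pi, hi_pi].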

  14. Buffer Overflow Period in a MAP Queue

    Directory of Open Access Journals (Sweden)

    Andrzej Chydzinski

    2007-01-01

    The buffer overflow period in a queue with Markovian arrival process (MAP) and general service time distribution is investigated. The results include the distribution of the overflow period in transient and stationary regimes and the distribution of the number of cells lost during the overflow interval. All theorems are illustrated via numerical calculations.

  15. Complete Bouguer gravity anomaly map of the state of Colorado

    Science.gov (United States)

    Abrams, Gerda A.

    1993-01-01

The Bouguer gravity anomaly map is part of a folio of maps of Colorado cosponsored by the National Mineral Resources Assessment Program (NAMRAP) and the National Geologic Mapping Program (COGEOMAP) and was produced to assist in studies of the mineral resource potential and tectonic setting of the State. This map updates previous compilations of about 12,000 gravity stations by Behrendt and Bajwa (1974a,b). The data were reduced using a density of 2.67 g/cm³ and the grid was contoured at 3 mGal intervals. This map will aid in the mineral resource assessment by indicating buried intrusive complexes, volcanic fields, major faults and shear zones, and sedimentary basins; helping to identify concealed geologic units; and identifying localities that might be hydrothermally altered or mineralized.

  16. Gaussian process regression for tool wear prediction

    Science.gov (United States)

    Kong, Dongdong; Chen, Yongjie; Li, Ning

    2018-05-01

To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for the first time for feature fusion. The GPR model provides both the tool wear predictive value and the corresponding confidence interval. GPR also outperforms artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively within the GPR model. However, the presence of noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, making the confidence interval much narrower and smoother, which is conducive to monitoring the tool wear accurately. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than in the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests were conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately by the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.

  17. Linear models for joint association and linkage QTL mapping

    Directory of Open Access Journals (Sweden)

    Fernando Rohan L

    2009-09-01

Full Text Available Abstract Background Populational linkage disequilibrium and within-family linkage are commonly used for QTL mapping and marker assisted selection. The combination of both results in more robust and accurate locations of the QTL, but models proposed so far have been either single marker, complex in practice, or tailored to a particular family structure. Results We herein present linear model theory to derive additive effects of the QTL alleles in any member of a general pedigree, conditional on observed markers and pedigree, accounting for possible linkage disequilibrium among QTLs and markers. The model is based on association analysis in the founders; further, the additive effect of the QTLs transmitted to the descendants is a weighted (by the probabilities of transmission) average of the substitution effects of founders' haplotypes. The model allows for non-complete linkage disequilibrium between QTLs and markers in the founders. Two submodels are presented: a simple and easy-to-implement Haley-Knott type regression for half-sib families, and a general mixed (variance component) model for general pedigrees. The model can use information from all markers. The performance of the regression method is compared by simulation with a more complex IBD method by Meuwissen and Goddard. Numerical examples are provided. Conclusion The linear model theory provides a useful framework for QTL mapping with dense marker maps. Results show similar accuracies but a bias of the IBD method towards the center of the region. Computations for the linear regression model are extremely simple, in contrast with IBD methods. Extensions of the model to genomic selection and multi-QTL mapping are straightforward.
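
The core Haley-Knott idea — regressing the phenotype on the expected (probability-weighted) QTL dosage given the markers, rather than on the unobserved QTL genotype itself — can be sketched as follows. Transmission probabilities, effect size, and sample size are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
p_q = rng.uniform(0, 1, n)              # P(QTL allele Q transmitted | markers)
dosage = rng.random(n) < p_q            # true transmitted allele (unobserved)
y = 2.0 * dosage + rng.normal(0, 1, n)  # phenotype with additive effect 2.0

# Haley-Knott type regression: phenotype on the expected dosage p_q.
X = np.column_stack([np.ones(n), p_q])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
qtl_effect = beta[1]                    # estimate of the additive effect
```

Because E[y | p_q] is linear in p_q, the slope recovers the additive QTL effect without ever observing the QTL genotype, which is why these computations are "extremely simple" compared with full IBD methods.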

  18. Clinical value of regression of electrocardiographic left ventricular hypertrophy after aortic valve replacement.

    Science.gov (United States)

    Yamabe, Sayuri; Dohi, Yoshihiro; Higashi, Akifumi; Kinoshita, Hiroki; Sada, Yoshiharu; Hidaka, Takayuki; Kurisu, Satoshi; Shiode, Nobuo; Kihara, Yasuki

    2016-09-01

Electrocardiographic left ventricular hypertrophy (ECG-LVH) gradually regresses after aortic valve replacement (AVR) in patients with severe aortic stenosis. Sokolow-Lyon voltage (SV1 + RV5/6) is possibly the most widely used criterion for ECG-LVH. The aim of this study was to determine whether decrease in Sokolow-Lyon voltage reflects left ventricular reverse remodeling detected by echocardiography after AVR. Of 129 consecutive patients who underwent AVR for severe aortic stenosis, 38 patients with preoperative ECG-LVH, defined by SV1 + RV5/6 of ≥3.5 mV, were enrolled in this study. Electrocardiography and echocardiography were performed preoperatively and 1 year postoperatively. The patients were divided into an ECG-LVH regression group (n = 19) and a non-regression group (n = 19) according to the median value of the absolute regression in SV1 + RV5/6. Multivariate logistic regression analysis was performed to assess determinants of ECG-LVH regression among echocardiographic indices. The ECG-LVH regression group showed a significantly greater decrease in left ventricular mass index and left ventricular dimensions than the non-regression group. ECG-LVH regression was independently determined by decreases in the left ventricular mass index [odds ratio (OR) 1.28, 95% confidence interval (CI) 1.03-1.69, p = 0.048], left ventricular end-diastolic dimension (OR 1.18, 95% CI 1.03-1.41, p = 0.014), and left ventricular end-systolic dimension (OR 1.24, 95% CI 1.06-1.52, p = 0.0047). ECG-LVH regression could be a marker of the effect of AVR on reducing both the left ventricular mass index and left ventricular dimensions. The effect of AVR on reverse remodeling can be estimated, at least in part, by regression of ECG-LVH.
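
Odds ratios and confidence intervals of the kind reported here come directly from the fitted logistic coefficients. A minimal sketch of that conversion, where the coefficient is back-derived from the reported OR 1.28 and the standard error is an assumed placeholder, not a value from the paper:

```python
import numpy as np

beta = np.log(1.28)   # logistic coefficient implied by the reported OR 1.28
se = 0.12             # assumed (illustrative) standard error of beta

# Point estimate and 95% CI on the odds-ratio scale.
or_point = np.exp(beta)
ci_low = np.exp(beta - 1.96 * se)
ci_high = np.exp(beta + 1.96 * se)
```

The exponentiation is why logistic-regression CIs (such as 1.03-1.69 above) are asymmetric around the odds ratio.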

  19. Spectral signature selection for mapping unvegetated soils

    Science.gov (United States)

    May, G. A.; Petersen, G. W.

    1975-01-01

Airborne multispectral scanner data covering the wavelength interval 0.40-2.60 microns were collected at an altitude of 1000 m above the terrain in southeastern Pennsylvania. Uniform training areas were selected within three sites from this flightline. Soil samples were collected from each site and a procedure was developed to allow assignment of scan line and element number from the multispectral scanner data to each sampling location. These soil samples were analyzed on a spectrophotometer and laboratory spectral signatures were derived. After correcting for solar radiation and atmospheric attenuation, the laboratory signatures were compared to the spectral signatures derived from these same soils using multispectral scanner data. Both signatures were used in supervised and unsupervised classification routines. Computer-generated maps based on the laboratory-derived and scanner-derived signatures were similar to maps resulting from field surveys. Approximately 90% agreement was obtained between classification maps produced using multispectral scanner derived signatures and laboratory derived signatures.

  20. Predicting Ambulance Time of Arrival to the Emergency Department Using Global Positioning System and Google Maps

    Science.gov (United States)

    Fleischman, Ross J.; Lundquist, Mark; Jui, Jonathan; Newgard, Craig D.; Warden, Craig

    2014-01-01

Objective To derive and validate a model that accurately predicts ambulance arrival time and could be implemented as a Google Maps web application. Methods This was a retrospective study of all scene transports in Multnomah County, Oregon, from January 1 through December 31, 2008. Scene and destination hospital addresses were converted to coordinates. ArcGIS Network Analyst was used to estimate transport times based on street network speed limits. We then created a linear regression model to improve the accuracy of these street network estimates using weather, patient characteristics, use of lights and sirens, daylight, and rush-hour intervals. The model was derived from a 50% sample and validated on the remainder. Estimates were compared with actual transport times recorded by computer-aided dispatch. We then built a Google Maps-based web application to demonstrate application in real-world EMS operations. Results There were 48,308 included transports. Street network estimates of transport time were accurate within 5 minutes of actual transport time less than 16% of the time. Actual transport times were longer during daylight and rush-hour intervals and shorter with use of lights and sirens. Age under 18 years, gender, wet weather, and trauma system entry were not significant predictors of transport time. Our model predicted arrival time within 5 minutes 73% of the time. For lights and sirens transports, accuracy was within 5 minutes 77% of the time. Accuracy was identical in the validation dataset. Lights and sirens saved an average of 3.1 minutes for transports under 8.8 minutes, and 5.3 minutes for longer transports. Conclusions An estimate of transport time based only on a street network significantly underestimated transport times. A simple model incorporating few variables can predict ambulance time of arrival to the emergency department with good accuracy. This model could be linked to global positioning system data and an automated Google Maps web application.
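
The correction model can be sketched as the actual transport time regressed on the street-network estimate plus indicator covariates (here lights-and-sirens and rush hour). All data below are synthetic placeholders for the dispatch records:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
network_est = rng.uniform(3, 25, n)      # street-network estimate (minutes)
lights = rng.integers(0, 2, n)           # lights-and-sirens indicator
rush = rng.integers(0, 2, n)             # rush-hour indicator
# Synthetic "actual" times: network estimate is biased low, sirens save time.
actual = 1.2 * network_est - 3.0 * lights + 2.0 * rush + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), network_est, lights, rush])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)
pred = X @ beta
within5 = np.mean(np.abs(pred - actual) <= 5.0)   # share within 5 minutes
```

A slope on `network_est` above 1 reproduces the abstract's finding that raw street-network estimates systematically underestimate transport time.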

  1. Effect of Abdominal Visceral Fat Change on Regression of Erosive Esophagitis: Prospective Cohort Study.

    Science.gov (United States)

    Nam, Su Youn; Kim, Young Woo; Park, Bum Joon; Ryu, Kum Hei; Kim, Hyun Boem

    2018-05-04

Although abdominal visceral fat has been associated with erosive esophagitis in cross-sectional studies, there are few data on the longitudinal effect. We evaluated the effects of abdominal visceral fat change on the regression of erosive esophagitis in a prospective cohort study. A total of 163 participants with erosive esophagitis at baseline were followed up at 34 months and underwent esophagogastroduodenoscopy and computed tomography at both baseline and follow-up. The longitudinal effects of abdominal visceral fat on the regression of erosive esophagitis were evaluated using relative risks (RR) and 95% confidence intervals (CI). Regression was observed in approximately 49% of participants (n = 80). The 3rd (RR, 0.13; 95% CI, 0.02 to 0.71) and 4th quartiles (RR, 0.07; 95% CI, 0.01 to 0.38) of visceral fat at follow-up were associated with decreased regression of erosive esophagitis. The highest quartile of visceral fat change reduced the probability of the regression of erosive esophagitis compared to the lowest quartile (RR, 0.10; 95% CI, 0.03 to 0.28), and each trend showed a dose-dependent pattern. In summary, greater visceral fat at follow-up and a greater increase in visceral fat reduced the regression of erosive esophagitis in a dose-dependent manner.

  2. Accuracy Assessment of Timber Volume Maps Using Forest Inventory Data and LiDAR Canopy Height Models

    Directory of Open Access Journals (Sweden)

    Andreas Hill

    2014-09-01

Full Text Available Maps of standing timber volume provide valuable decision support for forest managers and have therefore been the subject of recent studies. For map production, field observations are commonly combined with area-wide remote sensing data in order to formulate prediction models, which are then applied over the entire inventory area. The accuracy of such maps has frequently been described by parameters such as the root mean square error of the prediction model. The aim of this study was to additionally address the accuracy of timber volume classes, which are used to better represent the map predictions. However, the use of constant class intervals neglects the possibility that the precision of the underlying prediction model may not be constant across the entire volume range, resulting in pronounced gradients between class accuracies. This study proposes an optimization technique that automatically identifies a classification scheme which accounts for the properties of the underlying model and the implied properties of the remote sensing support information. We demonstrate the approach in a mountainous study site in Eastern Switzerland covering a forest area of 2000 hectares using a multiple linear regression model approach. A LiDAR-based canopy height model (CHM) provided the auxiliary information; timber volume observations from the latest forest inventory were used for model calibration and map validation. The coefficient of determination (R2 = 0.64) and the cross-validated root mean square error (RMSECV = 123.79 m3 ha−1) were only slightly smaller than those of studies in less steep and heterogeneous landscapes. For a large set of pre-defined numbers of classes, the optimization model successfully identified those classification schemes that achieved the highest possible accuracies for each class.
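
A cross-validated RMSE for a linear model of this kind can be computed exactly without refitting, using the hat-matrix shortcut for leave-one-out residuals. The canopy-height and volume data below are synthetic stand-ins for the inventory plots:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120
chm = rng.uniform(5, 35, n)                     # canopy height (m), synthetic
volume = 12.0 * chm + rng.normal(0, 40, n)      # timber volume (m3/ha)

X = np.column_stack([np.ones(n), chm])
H = X @ np.linalg.inv(X.T @ X) @ X.T            # hat matrix
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
resid = volume - X @ beta
loo = resid / (1.0 - np.diag(H))                # exact leave-one-out residuals
rmse_cv = np.sqrt(np.mean(loo ** 2))            # RMSE_CV, as reported above
```

The identity `e_loo = e / (1 - h_ii)` holds for any linear smoother, so one fit yields the same RMSECV that n refits would.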

  3. Application of logistic regression for landslide susceptibility zoning of Cekmece Area, Istanbul, Turkey

    Science.gov (United States)

    Duman, T. Y.; Can, T.; Gokceoglu, C.; Nefeslioglu, H. A.; Sonmez, H.

    2006-11-01

As a result of industrialization, throughout the world, cities have been growing rapidly for the last century. One typical example of these growing cities is Istanbul, the population of which is over 10 million. Due to rapid urbanization, new areas suitable for settlement and engineering structures are necessary. The Cekmece area located west of the Istanbul metropolitan area is studied, because the landslide activity is extensive in this area. The purpose of this study is to develop a model that can be used to characterize landslide susceptibility in map form using logistic regression analysis of an extensive landslide database. A database of landslide activity was constructed using both aerial photography and field studies. About 19.2% of the selected study area is covered by deep-seated landslides. The landslides that occur in the area are primarily located in sandstones with interbedded permeable and impermeable layers such as claystone, siltstone and mudstone. About 31.95% of the total landslide area is located in this unit. To apply logistic regression analyses, a data matrix including 37 variables was constructed. The variables used in the forward stepwise analyses are different measures of slope, aspect, elevation, stream power index (SPI), plan curvature, profile curvature, geology, geomorphology and relative permeability of lithological units. A total of 25 variables were identified as exerting strong influence on landslide occurrence, and were included in the logistic regression equation. Wald statistics values indicate that lithology, SPI and slope are more important than the other parameters in the equation. Beta coefficients of the 25 variables included in the logistic regression equation provide a model for landslide susceptibility in the Cekmece area. This model is used to generate a landslide susceptibility map that correctly classified 83.8% of the landslide-prone areas.

  4. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) using the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; the presented linear interval systems therefore contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they have no method errors. All calculations were performed in floating-point interval arithmetic.
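
A toy illustration of a direct interval method: interval Gaussian elimination on a 2x2 system whose coefficients are (lo, hi) intervals, so that data uncertainty is enclosed in the result. This is a simplified sketch in plain Python, not the paper's band-matrix algorithm or a true outward-rounding arithmetic:

```python
# Intervals are (lo, hi) tuples; division assumes the divisor excludes zero.
def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def idiv(a, b):
    assert b[0] > 0 or b[1] < 0, "divisor interval must exclude zero"
    return imul(a, (1.0 / b[1], 1.0 / b[0]))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

# System [[4, 1], [1, 3]] x = [1, 2], with +/-0.01 uncertainty on every entry.
A = [[(3.99, 4.01), (0.99, 1.01)], [(0.99, 1.01), (2.99, 3.01)]]
b = [(0.99, 1.01), (1.99, 2.01)]

# Interval Gaussian elimination, then back-substitution.
m = idiv(A[1][0], A[0][0])                  # elimination multiplier
a22 = isub(A[1][1], imul(m, A[0][1]))
b2 = isub(b[1], imul(m, b[0]))
x2 = idiv(b2, a22)
x1 = idiv(isub(b[0], imul(A[0][1], x2)), A[0][0])
```

The resulting intervals `x1` and `x2` enclose the exact solution of every point system within the coefficient intervals, including the nominal one (x1 = 1/11, x2 = 7/11).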

  5. Regression Phalanxes

    OpenAIRE

    Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

    2017-01-01

    Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

  6. An introduction to using Bayesian linear regression with clinical data.

    Science.gov (United States)

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
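
With a Gaussian prior on the coefficients and known noise variance, the Bayesian linear-regression posterior is available in closed form, which makes the interval estimates mentioned above easy to illustrate. This conjugate NumPy sketch uses synthetic ERN/anxiety-style data; the article itself provides its own data and R code:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
ern = rng.normal(0, 1, n)                      # standardized ERN (synthetic)
anxiety = 0.4 * ern + rng.normal(0, 1, n)      # trait anxiety (synthetic)

X = np.column_stack([np.ones(n), ern])
sigma2, tau2 = 1.0, 10.0                       # noise variance, prior variance

# Posterior for beta ~ N(0, tau2 * I) prior and N(X beta, sigma2 * I) likelihood.
S_inv = X.T @ X / sigma2 + np.eye(2) / tau2    # posterior precision
S = np.linalg.inv(S_inv)                       # posterior covariance
m = S @ X.T @ anxiety / sigma2                 # posterior mean

slope, slope_sd = m[1], np.sqrt(S[1, 1])
cred = (slope - 1.96 * slope_sd, slope + 1.96 * slope_sd)  # 95% credible interval
```

The credible interval is read directly off the posterior, in contrast to the frequentist confidence interval's repeated-sampling interpretation.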

  7. Mapping of the genomic regions controlling seed storability in soybean

    Indian Academy of Sciences (India)

Composite interval mapping identified a total of three QTLs on linkage ... Soybean seeds decline in quality faster than seeds of other crops (Fabrizius et al. 1999). ... harvest and postharvest management practices (Lewis et al. 1998). Cho and ...

  8. Comparative investigation of two different self-organizing map ...

    African Journals Online (AJOL)

    Purpose: To demonstrate the ability and investigate the performance of two different wavelength selection approaches based on self-organizing map (SOM) technique in partial least-squares (PLS) regression for analysis of pharmaceutical binary mixtures with strongly overlapping spectra. Methods: Two different variable ...

  9. Mapping Soil Transmitted Helminths and Schistosomiasis under Uncertainty: A Systematic Review and Critical Appraisal of Evidence.

    Directory of Open Access Journals (Sweden)

    Andrea L Araujo Navas

    2016-12-01

Full Text Available Spatial modelling of STH and schistosomiasis epidemiology is now commonplace. Spatial epidemiological studies help inform decisions regarding the number of people at risk as well as the geographic areas that need to be targeted with mass drug administration; however, limited attention has been given to propagated uncertainties, their interpretation, and consequences for the mapped values. Using currently published literature on the spatial epidemiology of helminth infections we identified: (1) the main uncertainty sources, their definition and quantification, and (2) how uncertainty is informative for STH programme managers and scientists working in this domain. We performed a systematic literature search using the Preferred Reporting Items for Systematic reviews and Meta-Analysis (PRISMA) protocol. We searched Web of Knowledge and PubMed using a combination of uncertainty, geographic and disease terms. A total of 73 papers fulfilled the inclusion criteria for the systematic review. Only 9% of the studies did not address any element of uncertainty, while 91% of studies quantified uncertainty in the predicted morbidity indicators and 23% of studies mapped it. In addition, 57% of the studies quantified uncertainty in the regression coefficients but only 7% incorporated it in the regression response variable (morbidity indicator). Fifty percent of the studies discussed uncertainty in the covariates but did not quantify it. Uncertainty was mostly defined as precision, and quantified using credible intervals by means of Bayesian approaches. None of the studies considered adequately all sources of uncertainties. We highlighted the need for uncertainty in the morbidity indicator and predictor variable to be incorporated into the modelling framework. Study design and spatial support require further attention and uncertainty associated with Earth observation data should be quantified. Finally, more attention should be given to mapping and interpreting

  10. Sequence-Based Introgression Mapping Identifies Candidate White Mold Tolerance Genes in Common Bean

    Directory of Open Access Journals (Sweden)

    Sujan Mamidi

    2016-07-01

Full Text Available White mold, caused by the necrotrophic fungus Sclerotinia sclerotiorum (Lib.) de Bary, is a major disease of common bean (Phaseolus vulgaris L.). WM7.1 and WM8.3 are two quantitative trait loci (QTL) with major effects on tolerance to the pathogen. Advanced backcross populations segregating individually for either of the two QTL, and a recombinant inbred (RI) population segregating for both QTL, were used to fine map and confirm the genetic location of the QTL. The QTL intervals were physically mapped using the reference common bean genome sequence, and the physical intervals for each QTL were further confirmed by sequence-based introgression mapping. Using whole-genome sequence data from susceptible and tolerant DNA pools, introgressed regions were identified as those with significantly higher numbers of single-nucleotide polymorphisms (SNPs) relative to the whole genome. By combining the QTL and SNP data, WM7.1 was located to a 660-kb region containing 41 gene models on the proximal end of chromosome Pv07, while the WM8.3 introgression was narrowed to a 1.36-Mb region containing 70 gene models. The most polymorphic candidate gene in the WM7.1 region encodes a BEACH-domain protein associated with apoptosis. Within the WM8.3 interval, a receptor-like protein with the potential to recognize pathogen effectors was the most polymorphic gene. The use of gene- and sequence-based mapping identified two candidate genes whose putative functions are consistent with the current model of pathogenicity.

  11. Combining hyperspectral imagery and legacy measured soil profiles to map subsurface soil properties in a Mediterranean area (Cap-Bon, Tunisia)

    Science.gov (United States)

    Lagacherie, Philippe; Sneep, Anne-Ruth; Gomez, Cécile

    2013-04-01

Previous studies have demonstrated that Visible Near InfraRed (Vis-NIR) hyperspectral imagery is a cost-efficient way of mapping soil properties at fine resolutions (~5 m) over large areas. However, such mapping is only feasible for the soil surface, since the effective penetration depths of optical sensors do not exceed several millimetres. This study aimed to extend the use of Vis-NIR hyperspectral imagery to the mapping of subsurface properties at three intervals of depth (15-30 cm, 30-60 cm and 60-100 cm), as specified by the GlobalSoilMap project. To avoid additional data collection, our basic idea was to develop an original Digital Soil Mapping approach that combines the digital maps of surface soil properties obtained from Vis-NIR hyperspectral imagery with legacy soil profiles of the region and with easily available images of DEM-derived parameters. The study was conducted in a pedologically contrasted 300 km² cultivated area located in the Cap Bon region (Northern Tunisia). AISA-Dual Vis-NIR hyperspectral airborne data were acquired over the studied area with a fine spatial resolution (5 m) and fine spectral resolution (260 spectral bands from 450 to 2500 nm). Vegetated surfaces were masked to retain only bare soil surfaces, which represented around 50% of the study area. Three soil surface properties (clay and sand contents, Cation Exchange Capacity) were successfully mapped over the bare soils from these data using Partial Least Square Regression models (R2 > 0.7). As additional data we used a set of images of landscape covariates derived from a 30-meter DEM and a local database of 152 legacy soil profiles, from which soil property values at the required intervals of depth were computed using an equal-area-spline algorithm. Our Digital Soil Mapping approach followed two steps: i) the development of surface-subsurface functions - linear models and random forests - that estimate subsurface property values from surface ones and landscape covariates and that

  12. Retrieval interval mapping, a tool to optimize the spectral retrieval range in differential optical absorption spectroscopy

    Science.gov (United States)

    Vogel, L.; Sihler, H.; Lampel, J.; Wagner, T.; Platt, U.

    2012-06-01

Remote sensing via differential optical absorption spectroscopy (DOAS) has become a standard technique to identify and quantify trace gases in the atmosphere. The technique is applied in a variety of configurations, commonly classified into active and passive instruments using artificial and natural light sources, respectively. Platforms range from ground-based to satellite instruments, and trace gases are studied in all kinds of different environments. Due to the wide range of measurement conditions, atmospheric compositions and instruments used, a specific challenge of a DOAS retrieval is to optimize the parameters for each specific case and particular trace gas of interest. This becomes especially important when measuring close to the detection limit. A well-chosen evaluation wavelength range is crucial to the DOAS technique. It should encompass strong absorption bands of the trace gas of interest in order to maximize the sensitivity of the retrieval, while at the same time minimizing absorption structures of other trace gases and thus potential interferences. Also, instrumental limitations and wavelength-dependent sources of errors (e.g. insufficient corrections for the Ring effect and cross correlations between trace gas cross sections) need to be taken into account. Most often, not all of these requirements can be fulfilled simultaneously and a compromise must be found depending on the conditions at hand. Although for many trace gases the overall dependence of a common DOAS retrieval on the evaluation wavelength interval is known, a systematic approach to finding the optimal retrieval wavelength range and a qualitative assessment are missing. Here we present a novel tool to determine the optimal evaluation wavelength range. It is based on mapping retrieved values in the retrieval wavelength space and thus visualizing the consequences of different choices of retrieval spectral ranges, e.g. caused by slightly erroneous absorption cross sections, cross correlations and
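
The mapping idea can be sketched by evaluating a retrieval-quality measure for every admissible (lower, upper) wavelength pair and locating the optimum on that 2-D map. Here the quality values are random placeholders standing in for an actual DOAS fit statistic:

```python
import numpy as np

wl = np.linspace(300.0, 340.0, 41)        # wavelength grid (nm), 1-nm spacing
rng = np.random.default_rng(5)
# Placeholder retrieval quality (e.g. fit residual) for each (lower, upper) pair.
quality = rng.normal(1.0, 0.1, (wl.size, wl.size))

lo_idx, hi_idx = np.meshgrid(np.arange(wl.size), np.arange(wl.size),
                             indexing="ij")
valid = hi_idx - lo_idx >= 10             # require at least a 10-nm span
score = np.where(valid, quality, np.inf)  # mask inadmissible ranges

# The "retrieval interval map" is `score`; its minimum marks the best range.
best = np.unravel_index(np.argmin(score), score.shape)
best_range = (wl[best[0]], wl[best[1]])
```

Plotting `score` over (lower, upper) wavelength axes gives exactly the kind of map the abstract describes, with systematic structures revealing interferences or erroneous cross sections.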

  13. Constraints on eQTL Fine Mapping in the Presence of Multisite Local Regulation of Gene Expression

    Directory of Open Access Journals (Sweden)

    Biao Zeng

    2017-08-01

Full Text Available Expression quantitative trait locus (eQTL) detection has emerged as an important tool for unraveling the relationship between genetic risk factors and disease or clinical phenotypes. Most studies use single-marker linear regression to discover primary signals, followed by sequential conditional modeling to detect secondary genetic variants affecting gene expression. However, this approach assumes that functional variants are sparsely distributed and that close linkage between them has little impact on estimation of their precise location and the magnitude of effects. We describe a series of simulation studies designed to evaluate the impact of linkage disequilibrium (LD) on the fine mapping of causal variants with typical eQTL effect sizes. In the presence of multisite regulation, even though between 80 and 90% of modeled eSNPs associate with normally distributed traits, up to 10% of all secondary signals could be statistical artifacts, and at least 5% but up to one-quarter of credible intervals of SNPs within r2 > 0.8 of the peak may not even include a causal site. The Bayesian methods eCAVIAR and DAP (Deterministic Approximation of Posteriors) provide only modest improvement in resolution. Given the strong empirical evidence that gene expression is commonly regulated by more than one variant, we conclude that the fine mapping of causal variants needs to be adjusted for multisite influences, as conditional estimates can be highly biased by interference among linked sites, but ultimately experimental verification of individual effects is needed. Presumably similar conclusions apply not just to eQTL mapping, but to multisite influences on fine mapping of most types of quantitative trait.
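
The single-marker scan plus sequential conditional modeling described above can be sketched on synthetic genotypes with built-in LD; everything below (sample size, allele frequency, LD structure, effect size) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 400, 6
g = rng.binomial(2, 0.3, (n, m)).astype(float)     # SNP dosages 0/1/2
g[:, 3] = np.where(rng.random(n) < 0.9, g[:, 2], g[:, 3])  # LD: SNP3 ~ SNP2
expr = 0.5 * g[:, 2] + rng.normal(0, 1, n)          # SNP2 is the causal eQTL

def marginal_t(x, y):
    """t-statistic of the slope in a single-marker linear regression."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    s2 = r @ r / (len(x) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

# Primary scan: strongest marginal association (may land on the LD proxy).
t_stats = [marginal_t(g[:, j], expr) for j in range(m)]
primary = int(np.argmax(np.abs(t_stats)))

# Conditional scan: regress out the primary SNP, then re-test the rest.
Xp = np.column_stack([np.ones(n), g[:, primary]])
resid = expr - Xp @ np.linalg.lstsq(Xp, expr, rcond=None)[0]
cond_t = [marginal_t(g[:, j], resid) for j in range(m) if j != primary]
```

Because SNP3 is a near-copy of the causal SNP2, the primary hit can land on either, which is precisely the interference among linked sites the abstract warns about.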

  14. Probabilistic change mapping from airborne LiDAR for post-disaster damage assessment

    Science.gov (United States)

    Jalobeanu, A.; Runyon, S. C.; Kruse, F. A.

    2013-12-01

When both pre- and post-event LiDAR point clouds are available, change detection can be performed to identify areas that were most affected by a disaster event, and to obtain a map of quantitative changes in terms of height differences. In the case of earthquakes in built-up areas, for instance, first responders can use a LiDAR change map to help prioritize search and recovery efforts. The main challenge consists of producing reliable change maps, robust to collection conditions, free of processing artifacts (due for instance to triangulation or gridding), and taking into account the various sources of uncertainty. Indeed, datasets acquired a few years apart often differ in point density (sometimes by an order of magnitude for recent data) and acquisition geometry, and very likely suffer from georeferencing errors and geometric discrepancies. All these differences might not be important for producing maps from each dataset separately, but they are crucial when performing change detection. We have developed a novel technique for the estimation of uncertainty maps from the LiDAR point clouds, using Bayesian inference and treating all variables as random. The main principle is to grid all points on a common grid before attempting any comparison, as working directly with point clouds is cumbersome and time consuming. A non-parametric approach based on local linear regression was implemented, assuming a locally linear model for the surface. This enabled us to derive error bars on gridded elevations, and then on elevation differences. In this way, a map of statistically significant changes could be computed, whereas a deterministic approach would not allow testing of the significance of differences between the two datasets. This approach allowed us to take into account not only the observation noise (due to ranging, position and attitude errors) but also the intrinsic roughness of the observed surfaces occurring when scanning vegetation.
As only
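The gridding-with-error-bars idea can be sketched with an ordinary least-squares local plane fit; this is a simplified stand-in for the study's fuller Bayesian treatment, and the function names and the two-epoch significance rule are illustrative assumptions:

```python
import numpy as np

def local_linear_elevation(points, x0, y0, radius):
    """Gridded elevation and its standard error at node (x0, y0) from a
    local linear fit z ~ a + b*dx + c*dy to nearby LiDAR returns.
    points is an (N, 3) array of (x, y, z)."""
    px, py, pz = points[:, 0], points[:, 1], points[:, 2]
    mask = (px - x0) ** 2 + (py - y0) ** 2 <= radius ** 2
    X = np.column_stack([np.ones(mask.sum()), px[mask] - x0, py[mask] - y0])
    z = pz[mask]
    beta, res, _, _ = np.linalg.lstsq(X, z, rcond=None)
    dof = len(z) - X.shape[1]
    sigma2 = float(res[0]) / dof if res.size and dof > 0 else 0.0
    cov = sigma2 * np.linalg.inv(X.T @ X)   # coefficient covariance
    return beta[0], np.sqrt(cov[0, 0])      # intercept = elevation at the node

def significant_change(za, sa, zb, sb, k=1.96):
    """Flag a statistically significant elevation difference between two
    epochs, given per-epoch error bars (independent-errors assumption)."""
    return abs(zb - za) > k * np.hypot(sa, sb)
```

A change map then keeps only the grid nodes where `significant_change` is true, which is exactly what a deterministic differencing of the two gridded surfaces cannot provide.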

  15. 3D-Digital soil property mapping by geoadditive models

    Science.gov (United States)

    Papritz, Andreas

    2016-04-01

    In many digital soil mapping (DSM) applications, soil properties must be predicted not only for a single but for multiple soil depth intervals. In the GlobalSoilMap project, as an example, predictions are computed for the 0-5 cm, 5-15 cm, 15-30 cm, 30-60 cm, 60-100 cm and 100-200 cm depth intervals (Arrouays et al., 2014). Legacy soil data are often used for DSM. It is common for such datasets that soil properties were measured for soil horizons or for layers at varying soil depth and with non-constant thickness (support). This poses problems for DSM: one strategy is to harmonize the soil data to common depths prior to the analyses (e.g. Bishop et al., 1999) and conduct the statistical analyses for each depth interval independently. The disadvantage of this approach is that the predictions for different depths are computed independently from each other, so that the predicted depth profiles may be unrealistic. Furthermore, the error induced by the harmonization to common depths is ignored in this approach (Orton et al., 2016). A better strategy is therefore to process all soil data jointly, without prior harmonization, by a 3D analysis that takes soil depth and geographical position explicitly into account. Usually, the non-constant support of the data is then ignored, but Orton et al. (2016) recently presented a geostatistical approach that accounts for the non-constant support of soil data and relies on restricted maximum likelihood (REML) estimation of a linear geostatistical model with a separable, heteroscedastic, zonal anisotropic auto-covariance function and area-to-point kriging (Kyriakidis, 2004). Although this model is theoretically coherent and elegant, estimating its many parameters by REML and selecting covariates for the spatial mean function is a formidable task. A simpler approach might be to use geoadditive models (geoAM; Kammann and Wand, 2003; Wand, 2003) for 3D analyses of soil data. GeoAM extend the scope of the linear model with spatially correlated errors to
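The harmonization-to-common-depths step criticized above can be illustrated with the simplest possible variant, a depth-weighted average of horizon values over a target interval; the cited approaches (e.g. equal-area splines in Bishop et al., 1999) are more refined, and the tuple layout here is an assumption for illustration:

```python
def harmonize_to_interval(horizons, top, bottom):
    """Depth-weighted average of horizon values over a target depth
    interval (e.g. the GlobalSoilMap 5-15 cm layer). horizons is a list
    of (upper_cm, lower_cm, value) tuples describing measured layers."""
    weight_sum, value_sum = 0.0, 0.0
    for upper, lower, value in horizons:
        # thickness of the overlap between this horizon and the target
        overlap = max(0.0, min(lower, bottom) - max(upper, top))
        weight_sum += overlap
        value_sum += overlap * value
    if weight_sum == 0.0:
        raise ValueError("no horizon overlaps the target interval")
    return value_sum / weight_sum
```

For example, horizons sampled at 0-10 cm and 10-30 cm harmonized to the 5-15 cm interval mix the two values equally. The error this averaging introduces is exactly the harmonization error that the joint 3D analysis avoids.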

  16. Rapid genotyping with DNA micro-arrays for high-density linkage mapping and QTL mapping in common buckwheat (Fagopyrum esculentum Moench)

    Science.gov (United States)

    Yabe, Shiori; Hara, Takashi; Ueno, Mariko; Enoki, Hiroyuki; Kimura, Tatsuro; Nishimura, Satoru; Yasui, Yasuo; Ohsawa, Ryo; Iwata, Hiroyoshi

    2014-01-01

    For genetic studies and genomics-assisted breeding, particularly of minor crops, a genotyping system that does not require a priori genomic information is preferable. Here, we demonstrated the potential of a novel array-based genotyping system for the rapid construction of a high-density linkage map and quantitative trait loci (QTL) mapping. Using the system, we successfully constructed an accurate, high-density linkage map for common buckwheat (Fagopyrum esculentum Moench); the map was composed of 756 loci and included 8,884 markers. The number of linkage groups converged to eight, which is the basic number of chromosomes in common buckwheat. The sizes of the linkage groups of the P1 and P2 maps were 773.8 and 800.4 cM, respectively. The average interval between adjacent loci was 2.13 cM. The linkage map constructed here will be useful for the analysis of other common buckwheat populations. We also performed QTL mapping for main stem length and detected four QTL. It took 37 days to process 178 samples from DNA extraction to genotyping, indicating that the system enables genotyping of genome-wide markers for a few hundred buckwheat plants before the plants mature. The novel system will be useful for genomics-assisted breeding in minor crops without a priori genomic information. PMID:25914583

  17. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Our experiment implies that scholars using regression...... models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results...... more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity...

  18. A regression-based Kansei engineering system based on form feature lines for product form design

    Directory of Open Access Journals (Sweden)

    Yan Xiong

    2016-06-01

    Full Text Available When developing new products, it is important for a designer to understand users’ perceptions and develop the product form accordingly. In order to establish the mapping between users’ perceptions and product design features effectively, in this study we presented a regression-based Kansei engineering system based on form feature lines for product form design. First, according to the characteristics of design concept representation, product form features (product form feature lines) were defined. Second, Kansei words were chosen to describe image perceptions toward product samples. Then, multiple linear regression and support vector regression were used to construct models that predict users’ image perceptions. Using mobile phones as experimental samples, Kansei prediction models were established based on the front-view form feature lines of the samples. The experimental results showed that both prediction models had good adaptability, but the predictive performance of the support vector regression model was better, making support vector regression more suitable for form regression prediction. The results of the case study showed that the proposed method provides an effective means for designers to manipulate product features as a whole, and that it can optimize the Kansei model and improve its practical value.
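The multiple-linear-regression half of such a system can be sketched as a plain least-squares fit from form-feature-line parameters to a Kansei rating; the feature encoding and names below are invented for illustration, and the support vector regression counterpart would be fitted analogously with a kernel method:

```python
import numpy as np

def fit_linear_kansei_model(feature_rows, scores):
    """Ordinary least-squares fit (with intercept) of a Kansei score,
    e.g. a semantic-differential rating, on numeric form-feature-line
    parameters. feature_rows is an (n_samples, n_features) array."""
    X = np.column_stack([np.ones(len(feature_rows)), feature_rows])
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return coef

def predict_kansei(coef, feature_rows):
    """Predicted Kansei scores for new feature-line parameter rows."""
    X = np.column_stack([np.ones(len(feature_rows)), feature_rows])
    return X @ coef
```

The fitted coefficients also tell the designer which feature-line parameter pushes the perception in which direction, which is what makes the linear model easy to interpret even when the kernel model predicts better.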

  19. Translating QT interval prolongation from conscious dogs to humans.

    Science.gov (United States)

    Dubois, Vincent F S; Smania, Giovanni; Yu, Huixin; Graf, Ramona; Chain, Anne S Y; Danhof, Meindert; Della Pasqua, Oscar

    2017-02-01

    In spite of screening procedures in early drug development, uncertainty remains about the propensity of new chemical entities (NCEs) to prolong the QT/QTc interval. The evaluation of proarrhythmic activity using a comprehensive in vitro proarrhythmia assay does not fully account for pharmacokinetic-pharmacodynamic (PKPD) differences in vivo. In the present study, we evaluated the correlation between drug-specific parameters describing QT interval prolongation in dogs and in humans. Using estimates of the drug-specific parameter, data on the slopes of the PKPD relationships of nine compounds with varying QT-prolonging effects (cisapride, sotalol, moxifloxacin, carabersat, GSK945237, SB237376 and GSK618334, and two anonymized NCEs) were analysed. Mean slope estimates varied between -0.98 ms μM⁻¹ and 6.1 ms μM⁻¹ in dogs and -10 ms μM⁻¹ and 90 ms μM⁻¹ in humans, indicating a wide range of effects on the QT interval. Linear regression techniques were then applied to characterize the correlation between the parameter estimates across species. For compounds without a mixed ion channel block, a correlation was observed between the drug-specific parameter in dogs and humans (y = -1.709 + 11.6x; R² = 0.989). These results show that, per unit concentration, the drug effect on the QT interval in humans is 11.6-fold larger than in dogs. Together with information about the expected therapeutic exposure, the evidence of a correlation between the compound-specific parameter in dogs and in humans represents an opportunity for translating preclinical safety data before progression into the clinic. Whereas further investigation is required to establish the generalizability of our findings, this approach can be used with clinical trial simulations to predict the probability of QT prolongation in humans. © 2016 The British Pharmacological Society.
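The reported interspecies regression can be applied directly to a new compound's preclinical slope; the helper names below are ours, and extrapolating QT prolongation as slope times concentration is the linear-slope assumption stated in the abstract:

```python
def dog_to_human_qt_slope(dog_slope):
    """Scale a dog PKPD slope (ms per uM) to the human scale using the
    reported interspecies regression y = -1.709 + 11.6 x."""
    return -1.709 + 11.6 * dog_slope

def predicted_qt_prolongation(dog_slope, human_conc_uM):
    """First-pass prediction of human QT prolongation (ms) at a given
    therapeutic concentration, assuming the linear slope model holds."""
    return dog_to_human_qt_slope(dog_slope) * human_conc_uM
```

A compound with a dog slope of 1 ms μM⁻¹ would thus be predicted to prolong the human QT interval by roughly 9.9 ms per μM of exposure, which is the kind of pre-clinic screen the abstract proposes.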

  20. Age-matched normal values and topographic maps for regional cerebral blood flow measurements by Xe-133 inhalation

    International Nuclear Information System (INIS)

    Matsuda, H.; Maeda, T.; Yamada, M.; Gui, L.X.; Tonami, N.; Hisada, K.

    1984-01-01

    The relationship between normal aging and regional cerebral blood flow (rCBF), computed as the initial slope index (ISI) by the Fourier method, was investigated in 105 right-handed healthy volunteers (132 measurements) by the Xe-133 inhalation method, and age-matched normal values were calculated. Mean brain ISI values showed a significant negative correlation with advancing age (r = 0.70, p < 0.001), and the regression line and its 95% confidence interval was Y = -0.32(X - 19) + 63.5 ± 11.2 (19 ≤ X ≤ 80). Regional ISI values also showed significant negative correlations for the entire brain (p < 0.001). The regional reductions of ISI values with advancing age were significantly greater in the regional distribution of the middle cerebral arteries bilaterally, compared with regions in the distribution of the other arteries (p < 0.05). Therefore, measured rCBF values for patients must be compared to age-matched normal values for the mean hemispheric value and for each region examined. Two kinds of topographic maps, a brain map showing rCBF compared to age-matched normal values and one showing hemispheric differences, were made by dividing the patient's values by the 95% confidence limits for age-matched normal values and by displaying a laterality index, respectively (formula; see text). These maps were useful for evaluating significantly decreased or increased regions and regional hemispheric differences
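The reported regression line with its 95% confidence band translates directly into an age-matched normal range; the function names are ours, and the laterality-index formula elided in the abstract is not reproduced:

```python
def age_matched_isi_range(age):
    """95% normal range of mean brain ISI at a given age, from the
    reported regression Y = -0.32(X - 19) + 63.5 +/- 11.2,
    valid for ages 19-80."""
    if not 19 <= age <= 80:
        raise ValueError("regression valid only for ages 19-80")
    mean = -0.32 * (age - 19) + 63.5
    return mean - 11.2, mean + 11.2

def isi_is_abnormal(age, measured_isi):
    """True when a patient's measured ISI falls outside the
    age-matched 95% normal range."""
    low, high = age_matched_isi_range(age)
    return measured_isi < low or measured_isi > high
```

This is the comparison the abstract calls for: a patient's value is judged against the normal range for their age, not against a single population-wide cutoff.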

  1. Stochastic development regression on non-linear manifolds

    DEFF Research Database (Denmark)

    Kühnel, Line; Sommer, Stefan Horst

    2017-01-01

    We introduce a regression model for data on non-linear manifolds. The model describes the relation between a set of manifold valued observations, such as shapes of anatomical objects, and Euclidean explanatory variables. The approach is based on stochastic development of Euclidean diffusion...... processes to the manifold. Defining the data distribution as the transition distribution of the mapped stochastic process, parameters of the model, the non-linear analogue of design matrix and intercept, are found via maximum likelihood. The model is intrinsically related to the geometry encoded...... in the connection of the manifold. We propose an estimation procedure which applies the Laplace approximation of the likelihood function. A simulation study of the performance of the model is performed and the model is applied to a real dataset of Corpus Callosum shapes....

  2. On interval and cyclic interval edge colorings of (3,5)-biregular graphs

    DEFF Research Database (Denmark)

    Casselgren, Carl Johan; Petrosyan, Petros; Toft, Bjarne

    2017-01-01

    A proper edge coloring f of a graph G with colors 1,2,3,…,t is called an interval coloring if the colors on the edges incident to every vertex of G form an interval of integers. The coloring f is cyclic interval if for every vertex v of G, the colors on the edges incident to v either form an inte...

  3. Terrain Mapping and Obstacle Detection Using Gaussian Processes

    DEFF Research Database (Denmark)

    Kjærgaard, Morten; Massaro, Alessandro Salvatore; Bayramoglu, Enis

    2011-01-01

    In this paper we consider a probabilistic method for extracting terrain maps from a scene and use the information to detect potential navigation obstacles within it. The method uses Gaussian process regression (GPR) to predict an estimate function and its relative uncertainty. To test the new...... show that the estimated maps follow the terrain shape, while protrusions are identified and may be isolated as potential obstacles. Representing the data with a covariance function allows a dramatic reduction of the amount of data to process, while maintaining the statistical properties of the measured...... and interpolated features....
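The core of such a pipeline, a Gaussian process posterior giving both a height estimate and its uncertainty, can be sketched in one dimension; the RBF kernel and the hyperparameter values below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def gpr_terrain(x_train, z_train, x_query, length=1.0, sf=1.0, sn=0.1):
    """Minimal 1D Gaussian process regression with an RBF kernel:
    posterior mean and standard deviation of terrain height at the
    query points. sf is the signal scale, sn the noise level."""
    def rbf(a, b):
        return sf ** 2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = rbf(x_train, x_train) + sn ** 2 * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    mean = Ks @ np.linalg.solve(K, z_train)
    var = sf ** 2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))
```

An obstacle can then be flagged wherever a measured height leaves the mean ± 2σ band, and far from the data the predicted uncertainty reverts to the prior, signalling that nothing can be said about that region.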

  4. Towards New Mappings between Emotion Representation Models

    Directory of Open Access Journals (Sweden)

    Agnieszka Landowska

    2018-02-01

    Full Text Available There are several models for representing emotions in affect-aware applications, and available emotion recognition solutions provide results using diverse emotion models. As multimodal fusion is beneficial in terms of both accuracy and reliability of emotion recognition, one of the challenges is mapping between the models of affect representation. This paper addresses this issue by proposing a procedure to elaborate new mappings, recommending a set of metrics for evaluating mapping accuracy, and delivering new mapping matrices for estimating the dimensions of a Pleasure-Arousal-Dominance model from Ekman’s six basic emotions. The results are based on an analysis of three datasets that were constructed from affect-annotated lexicons. The new mappings were obtained with linear regression learning methods. The proposed mappings showed better results on the datasets in comparison with the state-of-the-art matrix. The procedure and the proposed metrics might be used not only to evaluate mappings between representation models, but also to compare emotion recognition and annotation results. Moreover, the datasets are published along with the paper, so new mappings might be created and evaluated using the proposed methods. The study results might be interesting for both researchers and developers who aim to extend their software solutions with affect recognition techniques.
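Learning such a mapping matrix by linear regression can be sketched as a least-squares fit from six-emotion vectors to the three PAD dimensions; the training data in the test are synthetic, not the paper's lexicon-derived datasets or its published matrices:

```python
import numpy as np

def learn_pad_mapping(ekman_train, pad_train):
    """Least-squares mapping matrix (intercept row plus six weight rows)
    from Ekman six-emotion intensity vectors to Pleasure-Arousal-Dominance
    scores. ekman_train is (n, 6); pad_train is (n, 3)."""
    X = np.column_stack([np.ones(len(ekman_train)), ekman_train])
    M, *_ = np.linalg.lstsq(X, pad_train, rcond=None)
    return M                          # shape (7, 3)

def to_pad(M, ekman_vec):
    """Map one six-emotion vector onto the three PAD dimensions."""
    return np.concatenate(([1.0], ekman_vec)) @ M
```

Evaluating such a matrix against a held-out lexicon with error metrics of the kind the paper recommends is then a one-line comparison of `to_pad` outputs with the annotated PAD values.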

  5. Mapping the EORTC QLQ-C30 onto the EQ-5D-3L: assessing the external validity of existing mapping algorithms.

    Science.gov (United States)

    Doble, Brett; Lorgelly, Paula

    2016-04-01

    To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using the mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms, estimated via response mapping and ordinary least-squares regression using dummy variables, performed well on a number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across the ten tumour type-specific samples highlighted that the algorithms are relatively insensitive to grouping by tumour type and are affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.
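The headline accuracy metrics of such a validation are straightforward; this sketch computes plain MAE and RMSE between observed and mapped utilities (the paper's RMSE is standardized, which would add a normalization step):

```python
import numpy as np

def mapping_accuracy(observed_utility, predicted_utility):
    """MAE and (unstandardized) RMSE between observed EQ-5D-3L utilities
    and utilities predicted by a QLQ-C30 mapping algorithm."""
    err = np.asarray(observed_utility) - np.asarray(predicted_utility)
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return mae, rmse
```

Because RMSE squares the errors, it penalizes the large misses at the best and worst health states more heavily than MAE does, which is why the validation reports both.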

  6. GIS-based groundwater potential analysis using novel ensemble weights-of-evidence with logistic regression and functional tree models.

    Science.gov (United States)

    Chen, Wei; Li, Hui; Hou, Enke; Wang, Shengquan; Wang, Guirong; Panahi, Mahdi; Li, Tao; Peng, Tao; Guo, Chen; Niu, Chao; Xiao, Lele; Wang, Jiale; Xie, Xiaoshen; Ahmad, Baharin Bin

    2018-09-01

    The aim of the current study was to produce groundwater spring potential maps using novel ensembles of weights-of-evidence (WoE) with logistic regression (LR) and functional tree (FT) models. First, a total of 66 springs were identified by field surveys; 70% of the spring locations were used for training the models and 30% were employed for the validation process. Second, a total of 14 affecting factors, including aspect, altitude, slope, plan curvature, profile curvature, stream power index (SPI), topographic wetness index (TWI), sediment transport index (STI), lithology, normalized difference vegetation index (NDVI), land use, soil, distance to roads, and distance to streams, were used to analyze the spatial relationship between these factors and spring occurrences. Multicollinearity analysis and feature selection by the correlation attribute evaluation (CAE) method were employed to optimize the affecting factors. Subsequently, the novel ensembles of the WoE, LR, and FT models were constructed using the training dataset. Finally, receiver operating characteristic (ROC) curves, the standard error, the confidence interval (CI) at 95%, and the significance level P were employed to validate and compare the performance of the three models. Overall, all three models performed well for groundwater spring potential evaluation. The prediction capability of the FT model, with the highest AUC values, the smallest standard errors, the narrowest CIs, and the smallest P values for the training and validation datasets, is better than that of the other models. The groundwater spring potential maps can be adopted for the management of water resources and land use by planners and engineers. Copyright © 2018 Elsevier B.V. All rights reserved.
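The AUC used to rank the three models has a simple rank-statistic form that is easy to compute without a plotting step; this sketch is a generic illustration, not the study's validation code:

```python
def roc_auc(spring_scores, non_spring_scores):
    """ROC AUC via its rank (Mann-Whitney) form: the probability that a
    model scores a true spring location above a non-spring location,
    counting ties as half."""
    wins, ties = 0, 0
    for s in spring_scores:
        for t in non_spring_scores:
            if s > t:
                wins += 1
            elif s == t:
                ties += 1
    return (wins + 0.5 * ties) / (len(spring_scores) * len(non_spring_scores))
```

An AUC of 1.0 means every spring cell outranks every non-spring cell; 0.5 is chance level, which is why the comparison also reports standard errors and confidence intervals around each model's AUC.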

  7. Circadian profile of QT interval and QT interval variability in 172 healthy volunteers

    DEFF Research Database (Denmark)

    Bonnemeier, Hendrik; Wiegand, Uwe K H; Braasch, Wiebke

    2003-01-01

    of sleep. QT and R-R intervals revealed a characteristic day-night-pattern. Diurnal profiles of QT interval variability exhibited a significant increase in the morning hours (6-9 AM; P ... lower at day- and nighttime. Aging was associated with an increase of QT interval mainly at daytime and a significant shift of the T wave apex towards the end of the T wave. The circadian profile of ventricular repolarization is strongly related to the mean R-R interval, however, there are significant...

  8. GIS-aided low flow mapping

    Science.gov (United States)

    Saghafian, B.; Mohammadi, A.

    2003-04-01

    Most studies involving water resources allocation, water quality, hydropower generation, and allowable water withdrawal and transfer require estimation of low flows. Normally, frequency analysis on at-station D-day low flow data is performed to derive various T-yr return period values. However, this analysis is restricted to the location of hydrometric stations where the flow discharge is measured. Regional analysis is therefore conducted to relate the at-station low flow quantiles to watershed characteristics. This enables the transposition of low flow quantiles to ungauged sites. Nevertheless, a procedure to map the regional regression relations for the entire stream network, within the bounds of the relations, is particularly helpful when one studies and weighs alternative sites for certain water resources project. In this study, we used a GIS-aided procedure for low flow mapping in Gilan province, part of northern region in Iran. Gilan enjoys a humid climate with an average of 1100 mm annual precipitation. Although rich in water resources, the highly populated area is quite dependent on minimum amount of water to sustain the vast rice farming and to maintain required flow discharge for quality purposes. To carry out the low flow analysis, a total of 36 hydrometric stations with sufficient and reliable discharge data were identified in the region. The average area of the watersheds was 250 sq. km. Log Pearson type 3 was found the best distribution for flow durations over 60 days, while log normal fitted well the shorter duration series. Low flows with return periods of 2, 5, 10, 25, 50, and 100 year were then computed. Cluster analysis identified two homogeneous areas. Although various watershed parameters were examined in factor analysis, the results showed watershed area, length of the main stream, and annual precipitation were the most effective low flow parameters. 
The regression equations were then mapped with the aid of GIS based on flow accumulation maps
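The at-station frequency-analysis step can be sketched for the log-normal case, which the abstract reports fitting the shorter-duration series well; the Log Pearson type 3 fit used for longer durations would add a skew term. Function and variable names are illustrative:

```python
from math import exp, log
from statistics import NormalDist, fmean, stdev

def lognormal_low_flow(annual_minima, return_period_years):
    """T-year low-flow quantile under a log-normal fit. For low flows
    the design value is the lower tail, i.e. the flow with
    non-exceedance probability 1/T."""
    logs = [log(q) for q in annual_minima]
    dist = NormalDist(fmean(logs), stdev(logs))
    return exp(dist.inv_cdf(1.0 / return_period_years))
```

The 2-year value sits at the median of the fitted distribution, and rarer return periods (25, 50, 100 years) move further into the dry tail, giving ever lower design flows; mapping then attaches the regional regression estimate of these quantiles to every cell of the flow accumulation grid.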

  9. Forest Biomass Mapping From Lidar and Radar Synergies

    Science.gov (United States)

    Sun, Guoqing; Ranson, K. Jon; Guo, Z.; Zhang, Z.; Montesano, P.; Kimes, D.

    2011-01-01

    The use of lidar and radar instruments to measure forest structure attributes such as height and biomass at global scales is being considered for a future Earth Observation satellite mission, DESDynI (Deformation, Ecosystem Structure, and Dynamics of Ice). Large-footprint lidar makes a direct measurement of the heights of scatterers in the illuminated footprint and can yield accurate information about the vertical profile of the canopy within lidar footprint samples. Synthetic Aperture Radar (SAR) is known to sense the canopy volume, especially at longer wavelengths, and provides image data. Methods for biomass mapping by a combination of lidar sampling and radar mapping need to be developed. In this study, several issues in this respect were investigated using airborne lidar and SAR data in Howland, Maine, USA. Stepwise regression selected the height indices rh50 and rh75 of the Laser Vegetation Imaging Sensor (LVIS) data for predicting field-measured biomass, with an R² of 0.71 and RMSE of 31.33 Mg/ha. The above-ground biomass map generated from this regression model was considered to represent the true biomass of the area and was used as a reference map, since no better biomass map exists for the area. Random samples were taken from the biomass map and the correlation between the sampled biomass and the co-located SAR signature was studied. The best models were used to extend the biomass from lidar samples to all forested areas in the study area, which mimics a procedure that could be used for the future DESDynI mission. It was found that, depending on the data types used (quad-pol or dual-pol), the SAR data can predict the lidar biomass samples with an R² of 0.63-0.71 and RMSE of 32.0-28.2 Mg/ha up to biomass levels of 200-250 Mg/ha. The mean biomass of the study area calculated from the biomass maps generated by lidar-SAR synergy was within 10% of the reference biomass map derived from LVIS data.
The results from this study are preliminary, but do show the

  10. Cigarette smoke chemistry market maps under Massachusetts Department of Public Health smoking conditions.

    Science.gov (United States)

    Morton, Michael J; Laffoon, Susan W

    2008-06-01

    This study extends the market mapping concept introduced by Counts et al. (Counts, M.E., Hsu, F.S., Tewes, F.J., 2006. Development of a commercial cigarette "market map" comparison methodology for evaluating new or non-conventional cigarettes. Regul. Toxicol. Pharmacol. 46, 225-242) to include both temporal cigarette and testing variation and also machine smoking with more intense puffing parameters, as defined by the Massachusetts Department of Public Health (MDPH). The study was conducted over a two year period and involved a total of 23 different commercial cigarette brands from the U.S. marketplace. Market mapping prediction intervals were developed for 40 mainstream cigarette smoke constituents and the potential utility of the market map as a comparison tool for new brands was demonstrated. The over-time character of the data allowed for the variance structure of the smoke constituents to be more completely characterized than is possible with one-time sample data. The variance was partitioned among brand-to-brand differences, temporal differences, and the remaining residual variation using a mixed random and fixed effects model. It was shown that a conventional weighted least squares model typically gave similar prediction intervals to those of the more complicated mixed model. For most constituents there was less difference in the prediction intervals calculated from over-time samples and those calculated from one-time samples than had been anticipated. One-time sample maps may be adequate for many purposes if the user is aware of their limitations. Cigarette tobacco fillers were analyzed for nitrate, nicotine, tobacco-specific nitrosamines, ammonia, chlorogenic acid, and reducing sugars. The filler information was used to improve predicting relationships for several of the smoke constituents, and it was concluded that the effects of filler chemistry on smoke chemistry were partial explanations of the observed brand-to-brand variation.

  11. Bias and Uncertainty in Regression-Calibrated Models of Groundwater Flow in Heterogeneous Media

    DEFF Research Database (Denmark)

    Cooley, R.L.; Christensen, Steen

    2006-01-01

    small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate θ* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear...... are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis....

  12. R-R interval variations influence the degree of mitral regurgitation in dogs with myxomatous mitral valve disease

    DEFF Research Database (Denmark)

    Reimann, M. J.; Moller, J. E.; Haggstrom, J.

    2014-01-01

    of congestive heart failure due to MMVD. The severity of MR was evaluated in apical four-chamber view using colour Doppler flow mapping (maximum % of the left atrium area) and colour Doppler M-mode (duration in ms). The influence of the ratio between present and preceding R-R interval on MR severity......Mitral regurgitation (MR) due to myxomatous mitral valve disease (MMVD) is a frequent finding in Cavalier King Charles Spaniels (CKCSs). Sinus arrhythmia and atrial premature complexes leading to R-R interval variations occur in dogs. The aim of the study was to evaluate whether the duration...... of the RR interval immediately influences the degree of MR assessed by echocardiography in dogs. Clinical examination including echocardiography was performed in 103 privately-owned dogs: 16 control Beagles, 70 CKCSs with different degree of MR and 17 dogs of different breeds with clinical signs...

  13. A Microsatellite Genetic Map of the Turbot (Scophthalmus maximus)

    Science.gov (United States)

    Bouza, Carmen; Hermida, Miguel; Pardo, Belén G.; Fernández, Carlos; Fortes, Gloria G.; Castro, Jaime; Sánchez, Laura; Presa, Pablo; Pérez, Montse; Sanjuán, Andrés; de Carlos, Alejandro; Álvarez-Dios, José Antonio; Ezcurra, Susana; Cal, Rosa M.; Piferrer, Francesc; Martínez, Paulino

    2007-01-01

    A consensus microsatellite-based linkage map of the turbot (Scophthalmus maximus) was constructed from two unrelated families. The mapping panel was derived from a gynogenetic family of 96 haploid embryos and a biparental diploid family of 85 full-sib progeny with known linkage phase. A total of 242 microsatellites were mapped in 26 linkage groups, six markers remaining unlinked. The consensus map length was 1343.2 cM, with an average distance between markers of 6.5 ± 0.5 cM. Similar length of female and male maps was evidenced. However, the mean recombination at common intervals throughout the genome revealed significant differences between sexes, ∼1.6 times higher in the female than in the male. The comparison of turbot microsatellite flanking sequences against the Tetraodon nigroviridis genome revealed 55 significant matches, with a mean length of 102 bp and high sequence similarity (81–100%). The comparative mapping revealed significant syntenic regions among fish species. This study represents the first linkage map in the turbot, one of the most important flatfish in European aquaculture. This map will be suitable for QTL identification of productive traits in this species and for further evolutionary studies in fish and vertebrate species. PMID:18073440

  14. A simple method for combining genetic mapping data from multiple crosses and experimental designs.

    Directory of Open Access Journals (Sweden)

    Jeremy L Peirce

    Full Text Available BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results for multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved 51% for Hipp1a and 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
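The Fisher's combination step at the core of the method can be sketched directly. With k studies the statistic is chi-square distributed with 2k degrees of freedom under the null, and because the degrees of freedom are even, the tail probability has a closed form, so no special-function library is needed (the paper additionally permutes maps to get genome-wide adjusted P-values, which is not reproduced here):

```python
from math import exp, factorial, log

def fisher_combined_p(p_values):
    """Fisher's combination test: X = -2 * sum(ln p_i) is chi-square
    with 2k degrees of freedom under the null. The survival function
    for even df is exp(-X/2) * sum_{i<k} (X/2)^i / i!."""
    x_half = -sum(log(p) for p in p_values)          # X/2
    k = len(p_values)
    return exp(-x_half) * sum(x_half ** i / factorial(i) for i in range(k))
```

Applied position by position along the interpolated physical map, this turns the locus-wise P-values from the individual crosses into a single combined significance profile.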

  15. A comparison of multiple regression and neural network techniques for mapping in situ pCO2 data

    International Nuclear Information System (INIS)

    Lefevre, Nathalie; Watson, Andrew J.; Watson, Adam R.

    2005-01-01

    Using about 138,000 measurements of surface pCO2 in the Atlantic subpolar gyre (50-70 deg N, 60-10 deg W) during 1995-1997, we compare two methods of interpolation in space and time: a monthly distribution of surface pCO2 constructed using multiple linear regressions on position and temperature, and a self-organizing neural network approach. Both methods confirm characteristics of the region found in previous work, i.e. the subpolar gyre is a sink for atmospheric CO2 throughout the year, and exhibits a strong seasonal variability with the highest undersaturations occurring in spring and summer due to biological activity. As an annual average the surface pCO2 is higher than estimates based on available syntheses of surface pCO2. This supports earlier suggestions that the sink of CO2 in the Atlantic subpolar gyre has decreased over the last decade instead of increasing as previously assumed. The neural network is able to capture a more complex distribution than can be well represented by linear regressions, but both techniques agree relatively well on the average values of pCO2 and derived fluxes. However, when both techniques are used with a subset of the data, the neural network predicts the remaining data to a much better accuracy than the regressions, with a residual standard deviation ranging from 3 to 11 μatm. The subpolar gyre is a net sink of CO2 of 0.13 Gt-C/yr using the multiple linear regressions and 0.15 Gt-C/yr using the neural network, on average between 1995 and 1997. Both calculations were made with the NCEP monthly wind speeds converted to 10 m height and averaged between 1995 and 1997, and using the gas exchange coefficient of Wanninkhof

  16. Pseudo-Random Sequences Generated by a Class of One-Dimensional Smooth Map

    Science.gov (United States)

    Wang, Xing-Yuan; Qin, Xue; Xie, Yi-Xin

    2011-08-01

    We extend a class of one-dimensional smooth maps. We show that, for each desired interval of the parameter, the map's Lyapunov exponent is positive. We then propose a novel parameter perturbation method based on this property of the extended one-dimensional smooth map. We perturb the parameter r in each iteration by the real number x_i generated by that iteration. Finally, the auto-correlation function and the NIST statistical test suite are used to demonstrate the randomness of the method. We provide an application of this method in image encryption. Experiments show that the pseudo-random sequences are suitable for this application.
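
The perturbation scheme can be sketched as follows; the logistic map is used here as a stand-in for the paper's extended smooth map, and the parameter band and perturbation size are illustrative assumptions:

```python
def perturbed_map_sequence(x0, r0, n, eps=1e-3):
    """Pseudo-random sequence from a 1-D map with parameter perturbation.

    Sketch of the scheme described above, using the logistic map
    x -> r*x*(1-x) as a stand-in for the extended smooth map: after
    each iteration the parameter r is nudged by the iterate itself,
    which helps break short periodic cycles.
    """
    x, r = x0, r0
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
        # Perturb r by the freshly generated iterate, keeping it
        # inside a band where the dynamics remain chaotic.
        r = 3.99 + eps * x
    return out

seq = perturbed_map_sequence(x0=0.37, r0=3.99, n=1000)
```

A real implementation would follow the perturbed sequence with the statistical checks the abstract mentions (auto-correlation, NIST suite).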

  17. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables and the problems of approximating the binomial distribution to normality (the limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in terms of contingency table cells, based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information from the confidence interval computed for a specified method (the confidence interval boundaries, the percentage of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved by implementing original algorithms in the PHP programming language. Expressions that contain two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other, posed a real problem because most software uses interpolation in graphical representation, so the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to draw such triangular surface plots. All the implementations described above were used to compute confidence intervals and to assess their performance across binomial distribution sample sizes and variables.
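
As a point of comparison with the normal approximation the abstract discusses, one standard binomial interval is the Wilson score interval; this is a Python sketch of that textbook formula, not the authors' PHP implementation:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion k/n.

    Unlike the plain normal approximation, it behaves sensibly even
    for small n or proportions near 0 or 1.
    """
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Eight successes in ten trials: a small-sample case where the
# normal approximation is known to be poor.
lo, hi = wilson_interval(8, 10)
```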

  18. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

    Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on the quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a categorical scale.

  19. Inflation, Forecast Intervals and Long Memory Regression Models

    NARCIS (Netherlands)

    C.S. Bos (Charles); Ph.H.B.F. Franses (Philip Hans); M. Ooms (Marius)

    2001-01-01

    textabstractWe examine recursive out-of-sample forecasting of monthly postwar U.S. core inflation and log price levels. We use the autoregressive fractionally integrated moving average model with explanatory variables (ARFIMAX). Our analysis suggests a significant explanatory power of leading indicators.

  20. Inflation, Forecast Intervals and Long Memory Regression Models

    NARCIS (Netherlands)

    Ooms, M.; Bos, C.S.; Franses, P.H.

    2003-01-01

    We examine recursive out-of-sample forecasting of monthly postwar US core inflation and log price levels. We use the autoregressive fractionally integrated moving average model with explanatory variables (ARFIMAX). Our analysis suggests a significant explanatory power of leading indicators

  1. SPSS and SAS programs for comparing Pearson correlations and OLS regression coefficients.

    Science.gov (United States)

    Weaver, Bruce; Wuensch, Karl L

    2013-09-01

    Several procedures that use summary data to test hypotheses about Pearson correlations and ordinary least squares regression coefficients have been described in various books and articles. To our knowledge, however, no single resource describes all of the most common tests. Furthermore, many of these tests have not yet been implemented in popular statistical software packages such as SPSS and SAS. In this article, we describe all of the most common tests and provide SPSS and SAS programs to perform them. When they are applicable, our code also computes 100 × (1 - α)% confidence intervals corresponding to the tests. For testing hypotheses about independent regression coefficients, we demonstrate one method that uses summary data and another that uses raw data (i.e., Potthoff analysis). When the raw data are available, the latter method is preferred, because use of summary data entails some loss of precision due to rounding.
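
One of the summary-data tests the article covers is the comparison of two independent Pearson correlations via the Fisher r-to-z transform; a minimal sketch with hypothetical values:

```python
import math
from statistics import NormalDist

def compare_independent_r(r1, n1, r2, n2):
    """Test equality of two independent Pearson correlations.

    Uses the Fisher r-to-z transform: z_i = atanh(r_i), with
    variance 1/(n_i - 3) for each sample.
    """
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical correlations from two independent samples
z, p = compare_independent_r(0.60, 103, 0.30, 103)
```

Only r and n are needed, which is exactly the summary-data setting the article addresses.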

  2. A high-resolution genetic linkage map and QTL fine mapping for growth-related traits and sex in the Yangtze River common carp (Cyprinus carpio haematopterus).

    Science.gov (United States)

    Feng, Xiu; Yu, Xiaomu; Fu, Beide; Wang, Xinhua; Liu, Haiyang; Pang, Meixia; Tong, Jingou

    2018-04-02

    A high-density genetic linkage map is essential for QTL fine mapping, comparative genome analysis, identification of candidate genes and marker-assisted selection for economic traits in aquaculture species. The Yangtze River common carp (Cyprinus carpio haematopterus) is one of the most important aquacultured strains in China. However, only limited genetic and genomic resources have been developed for genetic improvement of economic traits in this strain. A high-resolution genetic linkage map was constructed by using 7820 2b-RAD (2b-restriction site-associated DNA) and 295 microsatellite markers in an F2 family of the Yangtze River common carp (C. c. haematopterus). The length of the map was 4586.56 cM with an average marker interval of 0.57 cM. Comparative genome mapping revealed that a high proportion (70%) of markers with disagreed chromosome location was observed between C. c. haematopterus and another common carp strain (subspecies) C. c. carpio. A clear 2:1 relationship was observed between C. c. haematopterus linkage groups (LGs) and zebrafish (Danio rerio) chromosomes. Based on the genetic map, 21 QTLs for growth-related traits were detected on 12 LGs, with phenotypic variance explained (PVE) ranging from 16.3 to 38.6% and LOD scores ranging from 4.02 to 11.13. A genome-wide significant QTL (LOD = 10.83) and three chromosome-wide significant QTLs (mean LOD = 4.84) for sex were mapped on LG50 and LG24, respectively. A 1.4 cM confidence interval of QTL for all growth-related traits showed conserved synteny with a 2.06 M segment on chromosome 14 of D. rerio. Five potential candidate genes were identified by BLAST search in this genomic region, including a well-studied multi-functional growth related gene, Apelin. We mapped a set of suggestive and significant QTLs for growth-related traits and sex based on a high-density genetic linkage map using SNP and microsatellite markers for Yangtze River common carp. Several

  3. A primer for biomedical scientists on how to execute model II linear regression analysis.

    Science.gov (United States)

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
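
The OLP point estimates the article says are easy to obtain by calculator can be sketched directly; the data below are made up, and (as the article notes) the matching confidence intervals need bootstrapping, which this sketch omits:

```python
import math

def olp_regression(x, y):
    """Ordinary least products (geometric mean, Model II) regression.

    The OLP slope is the geometric mean of the y-on-x and x-on-y
    least-squares slopes, i.e. sign(Sxy) * sd(y)/sd(x), and the line
    passes through the centroid (mean x, mean y).
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = math.copysign(math.sqrt(syy / sxx), sxy)
    intercept = my - slope * mx
    return slope, intercept

# Made-up paired measurements in which both variables carry error
slope, intercept = olp_regression([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 8.0, 9.8])
```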

  4. Exact treatment of mode locking for a piecewise linear map

    International Nuclear Information System (INIS)

    Ding, E.J.; Hemmer, P.C.

    1987-01-01

    A piecewise linear map with one discontinuity is studied by analytic means in the two-dimensional parameter space. When the slope of the map is less than unity, periodic orbits are present, and a precise symbolic-dynamics classification of them is given. The localization of the periodic domains in parameter space is given by closed-form expressions. The winding number forms a devil's terrace, a two-dimensional function whose cross sections are complete devil's staircases. In such a cross section the complementary set to the periodic intervals is a Cantor set with dimension D = 0

  5. Estimating cavity tree and snag abundance using negative binomial regression models and nearest neighbor imputation methods

    Science.gov (United States)

    Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett

    2009-01-01

    Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....

  6. Islands of biogeodiversity in arid lands on a polygons map study: Detecting scale invariance patterns from natural resources maps.

    Science.gov (United States)

    Ibáñez, J J; Pérez-Gómez, R; Brevik, Eric C; Cerdà, A

    2016-12-15

    Many maps (geology, hydrology, soil, vegetation, etc.) are created to inventory natural resources. Each of these resources is mapped using a unique set of criteria, including scales and taxonomies. Past research indicates that comparing results of related maps (e.g., soil and geology maps) may aid in identifying mapping deficiencies. Therefore, this study was undertaken in Almeria Province, Spain to (i) compare the underlying map structures of soil and vegetation maps and (ii) investigate if a vegetation map can provide useful soil information that was not shown on a soil map. Soil and vegetation maps were imported into ArcGIS 10.1 for spatial analysis, and results then exported to Microsoft Excel worksheets for statistical analyses to evaluate fits to linear and power law regression models. Vegetative units were grouped according to the driving forces that determined their presence or absence: (i) climatophilous; (ii) lithologic-climate; and (iii) edaphophylous. The rank abundance plots for both the soil and vegetation maps conformed to Willis or Hollow Curves, meaning the underlying structures of both maps were the same. Edaphophylous map units, which represent 58.5% of the vegetation units in the study area, did not show a good correlation with the soil map. Further investigation revealed that 87% of the edaphohygrophilous units were found in ramblas, ephemeral riverbeds that are not typically classified and mapped as soils in modern systems, even though they meet the definition of soil given by the most commonly used and most modern soil taxonomic systems. Furthermore, these edaphophylous map units tend to be islands of biodiversity that are threatened by anthropogenic activity in the region. Therefore, this study revealed areas that need to be revisited and studied pedologically. The vegetation mapped in these areas and the soils that support it are key components of the earth's critical zone that must be studied, understood, and preserved.

  7. Electrical PR Interval Variation Predicts New Occurrence of Atrial Fibrillation in Patients With Frequent Premature Atrial Contractions.

    Science.gov (United States)

    Chun, Kwang Jin; Hwang, Jin Kyung; Park, Seung-Jung; On, Young Keun; Kim, June Soo; Park, Kyoung-Min

    2016-04-01

    Atrial fibrillation (AF) is associated with the autonomic nervous system (ANS), and fluctuation of autonomic tone is more prominent in patients with AF. As autonomic tone affects the heart rate (HR), and there is an inverse relationship between HR and PR interval, PR interval variation could be greater in patients with AF than in those without AF. The purpose of this study was to investigate the correlation between PR interval variation and new-onset AF in patients with frequent PACs. We retrospectively enrolled 207 patients with frequent PACs who underwent electrocardiograms at least 4 times during the follow-up period. The PR variation was calculated by subtracting the minimum PR interval from the maximum PR interval. The outcomes were new occurrence of AF and all-cause mortality during the follow-up period. During a median follow-up of 8.3 years, 24 patients (11.6%) developed new-onset AF. Univariate analysis showed that prolonged PR interval (PR interval > 200 ms, P = 0.021), long PR variation (PR variation > 36.5 ms, P = 0.018), and PR variation as a continuous variable (P = 0.004) were associated with an increased risk of AF. Cox regression analysis showed that prolonged PR interval (hazard ratio = 3.321, 95% CI 1.064-10.362, P = 0.039) and PR variation (hazard ratio = 1.013, 95% CI 1.002-1.024, P = 0.022) were independent predictors for new-onset AF. However, PR variation and prolonged PR interval were not associated with all-cause mortality (P = 0.465 and 0.774, respectively). PR interval variation and prolonged PR interval are independent risk factors for new-onset AF in patients with frequent PACs. However, we were unable to determine a cut-off value of PR interval variation for new-onset AF.

  8. Power law behavior of RR-interval variability in healthy middle-aged persons, patients with recent acute myocardial infarction, and patients with heart transplants

    Science.gov (United States)

    Bigger, J. T. Jr; Steinman, R. C.; Rolnitzky, L. M.; Fleiss, J. L.; Albrecht, P.; Cohen, R. J.

    1996-01-01

    BACKGROUND. The purposes of the present study were (1) to establish normal values for the regression of log(power) on log(frequency) for RR-interval fluctuations in healthy middle-aged persons, (2) to determine the effects of myocardial infarction on the regression of log(power) on log(frequency), (3) to determine the effect of cardiac denervation on the regression of log(power) on log(frequency), and (4) to assess the ability of power law regression parameters to predict death after myocardial infarction. METHODS AND RESULTS. We studied three groups: (1) 715 patients with recent myocardial infarction; (2) 274 healthy persons age and sex matched to the infarct sample; and (3) 19 patients with heart transplants. Twenty-four-hour RR-interval power spectra were computed using fast Fourier transforms and log(power) was regressed on log(frequency) between 10(-4) and 10(-2) Hz. There was a power law relation between log(power) and log(frequency). That is, the function described a descending straight line that had a slope of approximately -1 in healthy subjects. For the myocardial infarction group, the regression line for log(power) on log(frequency) was shifted downward and had a steeper negative slope (-1.15). The transplant (denervated) group showed a larger downward shift in the regression line and a much steeper negative slope (-2.08). The correlation between traditional power spectral bands and slope was weak, and that with log(power) at 10(-4) Hz was only moderate. Slope and log(power) at 10(-4) Hz were used to predict mortality and were compared with the predictive value of traditional power spectral bands. Slope and log(power) at 10(-4) Hz were excellent predictors of all-cause mortality or arrhythmic death. To optimize the prediction of death, we calculated a log(power) intercept that was uncorrelated with the slope of the power law regression line. We found that the combination of slope and zero-correlation log(power) was an outstanding predictor, with a
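
The core calculation is a straight-line fit of log(power) on log(frequency) over 10^-4 to 10^-2 Hz; a sketch on a synthetic 1/f spectrum standing in for a 24-hour RR-interval spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic spectrum with a 1/f power law over 1e-4 to 1e-2 Hz,
# plus multiplicative noise (stand-in for real spectral estimates).
freqs = np.logspace(-4, -2, 200)
power = freqs ** -1.0 * np.exp(rng.normal(0, 0.2, freqs.size))

# Regress log10(power) on log10(frequency); the fitted slope is the
# power-law exponent (about -1 in healthy subjects per the study).
slope, intercept = np.polyfit(np.log10(freqs), np.log10(power), 1)
```

The study's predictors are exactly this slope and the fitted log(power) evaluated at 10^-4 Hz.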

  9. Evaluation of Tp-E Interval and Tp-E/QT Ratio in Patients with Aortic Stenosis.

    Science.gov (United States)

    Yayla, Çağrı; Bilgin, Murat; Akboğa, Mehmet Kadri; Gayretli Yayla, Kadriye; Canpolat, Uğur; Dinç Asarcikli, Lale; Doğan, Mehmet; Turak, Osman; Çay, Serkan; Özeke, Özcan; Akyel, Ahmet; Yeter, Ekrem; Aydoğdu, Sinan

    2016-05-01

    The risk of syncope and sudden cardiac death due to ventricular arrhythmias is increased in patients with aortic stenosis (AS). Recently, it was shown that the Tp-e interval and the Tp-e/QT and Tp-e/QTc ratios can be novel indicators for prediction of ventricular arrhythmias and mortality. We aimed to investigate the association between AS and ventricular repolarization using the Tp-e interval and Tp-e/QT ratio. In total, 105 patients with AS and 60 control subjects were enrolled in this study. The severity of AS was defined by transthoracic echocardiographic examination. Tp-e interval, Tp-e/QT, and Tp-e/QTc ratios were measured from the 12-lead electrocardiogram. Tp-e interval, Tp-e/QT, and Tp-e/QTc ratios were significantly increased in parallel with the severity of AS. Tp-e/QTc ratio had a significant positive correlation with mean aortic gradient (r = 0.192, P = 0.049). In multivariate logistic regression analysis, Tp-e/QTc ratio and left ventricular mass were found to be independent predictors of severe AS (P = 0.03 and P = 0.04, respectively). Our study showed that Tp-e interval, Tp-e/QT, and Tp-e/QTc ratios were increased in patients with severe AS. Tp-e/QTc ratio and left ventricular mass were found as independent predictors of severe AS. © 2015 Wiley Periodicals, Inc.

  10. Mapping and Modelling the Geographical Distribution and Environmental Limits of Podoconiosis in Ethiopia.

    Science.gov (United States)

    Deribe, Kebede; Cano, Jorge; Newport, Melanie J; Golding, Nick; Pullan, Rachel L; Sime, Heven; Gebretsadik, Abeba; Assefa, Ashenafi; Kebede, Amha; Hailu, Asrat; Rebollo, Maria P; Shafi, Oumer; Bockarie, Moses J; Aseffa, Abraham; Hay, Simon I; Reithinger, Richard; Enquselassie, Fikre; Davey, Gail; Brooker, Simon J

    2015-01-01

    Ethiopia is assumed to have the highest burden of podoconiosis globally, but the geographical distribution and environmental limits and correlates are yet to be fully investigated. In this paper we use data from a nationwide survey to address these issues. Our analyses are based on data arising from the integrated mapping of podoconiosis and lymphatic filariasis (LF) conducted in 2013, supplemented by data from an earlier mapping of LF in western Ethiopia in 2008-2010. The integrated mapping used woreda (district) health offices' reports of podoconiosis and LF to guide selection of survey sites. A suite of environmental and climatic data and boosted regression tree (BRT) modelling was used to investigate environmental limits and predict the probability of podoconiosis occurrence. Data were available for 141,238 individuals from 1,442 communities in 775 districts from all nine regional states and two city administrations of Ethiopia. In 41.9% of surveyed districts no cases of podoconiosis were identified, with all districts in Affar, Dire Dawa, Somali and Gambella regional states lacking the disease. The disease was most common, with lymphoedema positivity rate exceeding 5%, in the central highlands of Ethiopia, in Amhara, Oromia and Southern Nations, Nationalities and Peoples regional states. BRT modelling indicated that the probability of podoconiosis occurrence increased with increasing altitude, precipitation and silt fraction of soil and decreased with population density and clay content. Based on the BRT model, we estimate that in 2010, 34.9 (95% confidence interval [CI]: 20.2-51.7) million people (i.e. 43.8%; 95% CI: 25.3-64.8% of Ethiopia's national population) lived in areas environmentally suitable for the occurrence of podoconiosis. Podoconiosis is more widespread in Ethiopia than previously estimated, but occurs in distinct geographical regions that are tied to identifiable environmental factors. The resultant maps can be used to guide programme planning.

  11. Mapping and Modelling the Geographical Distribution and Environmental Limits of Podoconiosis in Ethiopia.

    Directory of Open Access Journals (Sweden)

    Kebede Deribe

    Full Text Available Ethiopia is assumed to have the highest burden of podoconiosis globally, but the geographical distribution and environmental limits and correlates are yet to be fully investigated. In this paper we use data from a nationwide survey to address these issues. Our analyses are based on data arising from the integrated mapping of podoconiosis and lymphatic filariasis (LF) conducted in 2013, supplemented by data from an earlier mapping of LF in western Ethiopia in 2008-2010. The integrated mapping used woreda (district) health offices' reports of podoconiosis and LF to guide selection of survey sites. A suite of environmental and climatic data and boosted regression tree (BRT) modelling was used to investigate environmental limits and predict the probability of podoconiosis occurrence. Data were available for 141,238 individuals from 1,442 communities in 775 districts from all nine regional states and two city administrations of Ethiopia. In 41.9% of surveyed districts no cases of podoconiosis were identified, with all districts in Affar, Dire Dawa, Somali and Gambella regional states lacking the disease. The disease was most common, with lymphoedema positivity rate exceeding 5%, in the central highlands of Ethiopia, in Amhara, Oromia and Southern Nations, Nationalities and Peoples regional states. BRT modelling indicated that the probability of podoconiosis occurrence increased with increasing altitude, precipitation and silt fraction of soil and decreased with population density and clay content. Based on the BRT model, we estimate that in 2010, 34.9 (95% confidence interval [CI]: 20.2-51.7) million people (i.e. 43.8%; 95% CI: 25.3-64.8% of Ethiopia's national population) lived in areas environmentally suitable for the occurrence of podoconiosis. Podoconiosis is more widespread in Ethiopia than previously estimated, but occurs in distinct geographical regions that are tied to identifiable environmental factors. The resultant maps can be used to guide programme planning.

  12. Dispersion of repolarization in canine ventricle and the electrocardiographic T wave: Tp-e interval does not reflect transmural dispersion

    NARCIS (Netherlands)

    Opthof, Tobias; Coronel, Ruben; Wilms-Schopman, Francien J. G.; Plotnikov, Alexei N.; Shlapakova, Iryna N.; Danilo, Peter; Rosen, Michael R.; Janse, Michiel J.

    2007-01-01

    BACKGROUND: The concept that the interval between the peak (T(peak)) and the end (T(end)) of the T wave (T(p-e)) is a measure of transmural dispersion of repolarization time is widely accepted but has not been tested rigorously by transmural mapping of the intact heart. OBJECTIVES: The purpose of

  13. The efficacy of the 'mind map' study technique.

    Science.gov (United States)

    Farrand, Paul; Hussain, Fearzana; Hennessy, Enid

    2002-05-01

    To examine the effectiveness of using the 'mind map' study technique to improve factual recall from written information. To obtain baseline data, subjects completed a short test based on a 600-word passage of text prior to being randomly allocated to form two groups: 'self-selected study technique' and 'mind map'. After a 30-minute interval the self-selected study technique group were exposed to the same passage of text previously seen and told to apply existing study techniques. Subjects in the mind map group were trained in the mind map technique and told to apply it to the passage of text. Recall was measured after an interfering task and a week later. Measures of motivation were taken. Barts and the London School of Medicine and Dentistry, University of London. 50 second- and third-year medical students. Recall of factual material improved for both the mind map and self-selected study technique groups at immediate test compared with baseline. However this improvement was only robust after a week for those in the mind map group. At 1 week, the factual knowledge in the mind map group was greater by 10% (adjusting for baseline) (95% CI -1% to 22%). However motivation for the technique used was lower in the mind map group; if motivation could have been made equal in the groups, the improvement with mind mapping would have been 15% (95% CI 3% to 27%). Mind maps provide an effective study technique when applied to written material. However before mind maps are generally adopted as a study technique, consideration has to be given towards ways of improving motivation amongst users.

  14. Obtaining appropriate interval estimates for age when multiple indicators are used

    DEFF Research Database (Denmark)

    Fieuws, Steffen; Willems, Guy; Larsen, Sara Tangmose

    2016-01-01

    When an estimate of age is needed, typically multiple indicators are present as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is ... the need for interval estimation. To illustrate and evaluate the method, Köhler et al. (1995) third molar scores are used to estimate the age in a dataset of 3200 male subjects in the juvenile age range.
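
Combining indicators with Bayes' rule on a discrete age grid can be sketched as follows; the grid, likelihood shapes and indicator values are all hypothetical illustrations, not the molar-score model:

```python
import numpy as np

# Hypothetical discrete age grid for juveniles
ages = np.arange(14.0, 25.0, 0.1)

def gaussian_lik(center, spread):
    """Toy likelihood P(indicator | age), peaked at `center`."""
    return np.exp(-0.5 * ((ages - center) / spread) ** 2)

# Two indicators pointing at slightly different ages
lik1 = gaussian_lik(17.5, 1.5)
lik2 = gaussian_lik(19.0, 2.0)

# Bayes' rule with a flat prior: multiply likelihoods, renormalize
post = lik1 * lik2
post /= post.sum()

# Point estimate and an interval from the posterior CDF
point = ages[np.argmax(post)]
cdf = np.cumsum(post)
lo = ages[np.searchsorted(cdf, 0.025)]
hi = ages[np.searchsorted(cdf, 0.975)]
```

The interval (lo, hi) is the posterior-based answer to the "range of possible values" question that separate per-indicator intervals cannot give coherently.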

  15. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    Science.gov (United States)

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
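
Firth's correction can be sketched as a modified Newton iteration; this is a minimal numpy implementation of the standard penalized-score recipe, shown on a tiny completely separated data set where ordinary maximum likelihood would diverge:

```python
import numpy as np

def firth_logistic(X, y, n_iter=100):
    """Firth's bias-corrected logistic regression (sketch).

    Under separation, ML coefficients diverge; Firth's penalty keeps
    them finite. The modified score adds h_i * (1/2 - p_i) to each
    residual, where h_i are the hat-matrix diagonals.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        info = (X * W[:, None]).T @ X                  # Fisher information
        h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(info), X) * W
        score = X.T @ (y - p + h * (0.5 - p))          # modified score
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-8:
            break
    return beta

# Completely separated toy data: x = 0 -> y = 0, x = 1 -> y = 1.
X = np.column_stack([np.ones(4), [0.0, 0.0, 1.0, 1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
beta = firth_logistic(X, y)
```

For this saturated 2x2 case the Firth fit coincides with adding 1/2 to each cell, so the estimates stay finite where the ML estimates would run off to infinity.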

  16. VIS/NIR mapping of TOC and extent of organic soils in the Nørre Å valley

    Science.gov (United States)

    Knadel, M.; Greve, M. H.; Thomsen, A.

    2009-04-01

    function. Spectra obtained near a sampled location were averaged. The collected spectra were correlated to TOC of the 15 representative samples using multivariate regression techniques (Unscrambler 9.7; Camo ASA, Oslo, Norway). Two types of calibrations were performed: using only spectra, and using spectra together with the auxiliary data (EC-SH and EC-DP). These calibration equations were computed using PLS regression with a segmented cross-validation method on centred data (using the raw spectral data, log 1/R). Six different spectral pre-treatments were tested: (1) spectra only, (2) Savitzky-Golay smoothing over 11 wavelength points, and transformation to a (3) 1st and (4) 2nd Savitzky-Golay derivative with a derivative interval of 21 wavelength points, (5) with or (6) without smoothing. The best treatment was considered to be the one with the lowest Root Mean Square Error of Prediction (RMSEP), the highest r2 between the VIS/NIR-predicted and measured values in the calibration model, and the lowest mean deviation of predicted TOC values. The best calibration model was obtained with the mathematical pre-treatments including smoothing, calculation of the 2nd derivative and outlier removal. The two TOC maps were compared after interpolation using kriging. They showed a similar pattern in the TOC distribution. Despite the unfavourable field conditions the VIS/NIR system performed well in both low and high TOC areas. Water content, in places exceeding field capacity in the lower parts of the investigated field, did not seriously degrade measurements. The present study represents the first attempt to apply the mobile Veris VIS/NIR system to the mapping of TOC of peat soils in Denmark. The results from this study show that a mobile VIS/NIR system can be applied to cost-effective TOC mapping of mineral and organic soils with highly varying water content. Key words: VIS/NIR spectroscopy, organic soils, TOC
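
The Savitzky-Golay smoothing and derivative pre-treatments can be sketched with scipy; the synthetic "spectrum" and window/polynomial choices below are illustrative stand-ins for the log(1/R) data:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic "spectrum": a smooth function of wavelength, standing in
# for the VIS/NIR log(1/R) data (illustration only).
wavelengths = np.linspace(400.0, 2200.0, 901)   # 2 nm spacing
spectrum = 0.2 + 1e-7 * (wavelengths - 1300.0) ** 2

# Smoothing over an 11-point window, and a 2nd derivative over a
# 21-point window, mirroring pre-treatments (2) and (4) above.
smoothed = savgol_filter(spectrum, window_length=11, polyorder=2)
second_deriv = savgol_filter(spectrum, window_length=21, polyorder=2,
                             deriv=2, delta=2.0)
```

Each pre-treated spectrum would then feed the PLS calibration, with RMSEP from segmented cross-validation deciding which pre-treatment wins.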

  17. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power......An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method...... production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered....

  18. Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.

    Science.gov (United States)

    Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H

    2016-01-01

    Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%. For the propensity score model, this was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates became less than the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage became less than nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite better performance of disease risk score methods than logistic regression and propensity score models in small events per coefficient settings, bias, and coverage still deviated from nominal.

  19. Internal representations of temporal statistics and feedback calibrate motor-sensory interval timing.

    Directory of Open Access Journals (Sweden)

    Luigi Acerbi

    Full Text Available Humans have been shown to adapt to the temporal statistics of timing tasks so as to optimize the accuracy of their responses, in agreement with the predictions of Bayesian integration. This suggests that they build an internal representation of both the experimentally imposed distribution of time intervals (the prior and of the error (the loss function. The responses of a Bayesian ideal observer depend crucially on these internal representations, which have only been previously studied for simple distributions. To study the nature of these representations we asked subjects to reproduce time intervals drawn from underlying temporal distributions of varying complexity, from uniform to highly skewed or bimodal while also varying the error mapping that determined the performance feedback. Interval reproduction times were affected by both the distribution and feedback, in good agreement with a performance-optimizing Bayesian observer and actor model. Bayesian model comparison highlighted that subjects were integrating the provided feedback and represented the experimental distribution with a smoothed approximation. A nonparametric reconstruction of the subjective priors from the data shows that they are generally in agreement with the true distributions up to third-order moments, but with systematically heavier tails. In particular, higher-order statistical features (kurtosis, multimodality seem much harder to acquire. Our findings suggest that humans have only minor constraints on learning lower-order statistical properties of unimodal (including peaked and skewed distributions of time intervals under the guidance of corrective feedback, and that their behavior is well explained by Bayesian decision theory.

  20. Interval stability for complex systems

    Science.gov (United States)

    Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.

    2018-04-01

    Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable to disrupt the stable regime for a given interval of time. The suggested measures provide important information about the system susceptibility to external perturbations which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.

  1. Estimation of evapotranspiration across the conterminous United States using a regression with climate and land-cover data

    Science.gov (United States)

    Sanford, Ward E.; Selnick, David L.

    2013-01-01

    Evapotranspiration (ET) is an important quantity for water resource managers to know because it often represents the largest sink for precipitation (P) arriving at the land surface. In order to estimate actual ET across the conterminous United States (U.S.) in this study, a water-balance method was combined with a climate and land-cover regression equation. Precipitation and streamflow records were compiled for 838 watersheds for 1971-2000 across the U.S. to obtain long-term estimates of actual ET. A regression equation was developed that related the ratio ET/P to climate and land-cover variables within those watersheds. Precipitation and temperatures were used from the PRISM climate dataset, and land-cover data were used from the USGS National Land Cover Dataset. Results indicate that ET can be predicted relatively well at a watershed or county scale with readily available climate variables alone, and that land-cover data can also improve those predictions. Using the climate and land-cover data at an 800-m scale and then averaging to the county scale, maps were produced showing estimates of ET and ET/P for the entire conterminous U.S. Using the regression equation, such maps could also be made for more detailed state coverages, or for other areas of the world where climate and land-cover data are plentiful.

  2. Hospital process intervals, not EMS time intervals, are the most important predictors of rapid reperfusion in EMS Patients with ST-segment elevation myocardial infarction.

    Science.gov (United States)

    Clark, Carol Lynn; Berman, Aaron D; McHugh, Ann; Roe, Edward Jedd; Boura, Judith; Swor, Robert A

    2012-01-01

    To assess the relationship of emergency medical services (EMS) intervals and internal hospital intervals to the rapid reperfusion of patients with ST-segment elevation myocardial infarction (STEMI). We performed a secondary analysis of a prospectively collected database of STEMI patients transported to a large academic community hospital between January 1, 2004, and December 31, 2009. EMS and hospital data intervals included EMS scene time, transport time, hospital arrival to myocardial infarction (MI) team activation (D2Page), page to catheterization laboratory arrival (P2Lab), and catheterization laboratory arrival to reperfusion (L2B). We used two outcomes: EMS scene arrival to reperfusion (S2B) ≤90 minutes and hospital arrival to reperfusion (D2B) ≤90 minutes. Means and proportions are reported. Pearson chi-square and multivariate regression were used for analysis. During the study period, we included 313 EMS-transported STEMI patients with 298 (95.2%) MI team activations. Of these STEMI patients, 295 (94.2%) were taken to the cardiac catheterization laboratory and 244 (78.0%) underwent percutaneous coronary intervention (PCI). For the patients who underwent PCI, 127 (52.5%) had prehospital EMS activation, 202 (82.8%) had D2B ≤90 minutes, and 72 (39%) had S2B ≤90 minutes. In a multivariate analysis, hospital processes EMS activation (OR 7.1, 95% CI 2.7, 18.4], Page to Lab [6.7, 95% CI 2.3, 19.2] and Lab arrival to Reperfusion [18.5, 95% CI 6.1, 55.6]) were the most important predictors of Scene to Balloon ≤ 90 minutes. EMS scene and transport intervals also had a modest association with rapid reperfusion (OR 0.85, 95% CI 0.78, 0.93 and OR 0.89, 95% CI 0.83, 0.95, respectively). In a secondary analysis, Hospital processes (Door to Page [OR 44.8, 95% CI 8.6, 234.4], Page 2 Lab [OR 5.4, 95% CI 1.9, 15.3], and Lab arrival to Reperfusion [OR 14.6 95% CI 2.5, 84.3]), but not EMS scene and transport intervals were the most important predictors D2B ≤90

  3. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    Science.gov (United States)

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes were analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.

  4. Pseudo-Random Sequences Generated by a Class of One-Dimensional Smooth Map

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Qin Xue; Xie Yi-Xin

    2011-01-01

    We extend a class of a one-dimensional smooth map. We make sure that for each desired interval of the parameter the map's Lyapunov exponent is positive. Then we propose a novel parameter perturbation method based on the good property of the extended one-dimensional smooth map. We perturb the parameter r in each iteration by the real number x i generated by the iteration. The auto-correlation function and NIST statistical test suite are taken to illustrate the method's randomness finally. We provide an application of this method in image encryption. Experiments show that the pseudo-random sequences are suitable for this application. (general)

  5. Genetic mapping and QTL analysis for body weight in Jian carp ( Cyprinus carpio var. Jian) compared with mirror carp ( Cyprinus carpio L.)

    Science.gov (United States)

    Gu, Ying; Lu, Cuiyun; Zhang, Xiaofeng; Li, Chao; Yu, Juhua; Sun, Xiaowen

    2015-05-01

    We report the genetic linkage map of Jian carp ( Cyprinus carpio var. Jian). An F1 population comprising 94 Jian carp individuals was mapped using 254 microsatellite markers. The genetic map spanned 1 381.592 cM and comprised 44 linkage groups, with an average marker distance of 6.58 cM. We identified eight quantitative trait loci (QTLs) for body weight (BW) in seven linkage groups, explaining 12.6% to 17.3% of the phenotypic variance. Comparative mapping was performed between Jian carp and mirror carp ( Cyprinus carpio L.), which both have 50 chromosomes. One hundred and ninety-eight Jian carp marker loci were found in common with the mirror carp map, with 186 (93.94%) showing synteny. All 44 Jian carp linkage groups could be one-to-one aligned to the 44 mirror carp linkage groups, mostly sharing two or more common loci. Three QTLs for BW in Jian carp were conserved in mirror carp. QTL comparison suggested that the QTL confidence interval in mirror carp was more precise than the homologous interval in Jian carp, which was contained within the QTL interval in Jian carp. The syntenic relationship and consensus QTLs between the two varieties provide a foundation for genomic research and genetic breeding in common carp.

  6. Introduction to statistical modelling 2: categorical variables and interactions in linear regression.

    Science.gov (United States)

    Lunt, Mark

    2015-07-01

    In the first article in this series we explored the use of linear regression to predict an outcome variable from a number of predictive factors. It assumed that the predictive factors were measured on an interval scale. However, this article shows how categorical variables can also be included in a linear regression model, enabling predictions to be made separately for different groups and allowing for testing the hypothesis that the outcome differs between groups. The use of interaction terms to measure whether the effect of a particular predictor variable differs between groups is also explained. An alternative approach to testing the difference between groups of the effect of a given predictor, which consists of measuring the effect in each group separately and seeing whether the statistical significance differs between the groups, is shown to be misleading. © The Author 2013. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Revised potentiometric-surface map, Yucca Mountain and vicinity, Nevada

    International Nuclear Information System (INIS)

    Ervin, E.M.; Luckey, R.R.; Burkhardt, D.J.

    1994-01-01

    The revised potentiometric-surface map presented in this report updates earlier maps of the Yucca Mountain area using mainly 1988 average water levels. Because of refinements in the corrections to the water-level measurements, these water levels have increased accuracy and precision over older values. The small-gradient area to the southeast of Yucca Mountain is contoured with a 0.25-meter interval and ranges in water-level altitude from 728.5 to 73 1.0 meters. Other areas with different water levels, to the north and west of Yucca Mountain, are illustrated with shaded patterns. The potentiometric surface can be divided into three regions: (1) A small-gradient area to the southeast of Yucca Mountain, which may be explained by flow through high-transmissivity rocks or low ground-water flux through the area; (2) A moderate-gradient area, on the western side of Yucca Mountain, where the water-level altitude ranges from 775 to 780 meters, and appears to be impeded by the Solitario Canyon Fault and a splay of that fault; and (3) A large-gradient area, to the north-northeast of Yucca Mountain, where water level altitude ranges from 738 to 1,035 meters, possibly as a result of a semi-perched groundwater system. Water levels from wells at Yucca Mountain were examined for yearly trends using linear least-squares regression. Data from five wells exhibited trends which were statistically significant, but some of those may be a result of slow equilibration of the water level from drilling in less permeable rocks. Adjustments for temperature and density changes in the deep wells with long fluid columns were attempted, but some of the adjusted data did not fit the surrounding data and, thus, were not used

  8. Statistical intervals a guide for practitioners

    CERN Document Server

    Hahn, Gerald J

    2011-01-01

    Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction with numerous examples and simple math on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on probability of meeting specified threshold value, and prediction intervals to include observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati

  9. Statistics of return intervals between long heartbeat intervals and their usability for online prediction of disorders

    International Nuclear Information System (INIS)

    Bogachev, Mikhail I; Bunde, Armin; Kireenkov, Igor S; Nifontov, Eugene M

    2009-01-01

    We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power-laws in the probability density function P Q (r) of the return intervals. As a consequence, the probability W Q (t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, with an increasing elapsed number of intervals t after the last large heartbeat interval, follows a power-law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus to predict it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.

  10. [Optimization of processing technology for semen cuscuta by uniform and regression analysis].

    Science.gov (United States)

    Li, Chun-yu; Luo, Hui-yu; Wang, Shu; Zhai, Ya-nan; Tian, Shu-hui; Zhang, Dan-shen

    2011-02-01

    To optimize the best preparation technology for the contains of total flavornoids, polysaccharides, the percentage of water and alcohol-soluble components in Semen Cuscuta herb processing. UV-spectrophotometry was applied to determine the contains of total flavornoids and polysaccharides, which were extracted from Semen Cuscuta. And the processing was optimized by the way of uniform design and contour map. The best preparation technology was satisfied with some conditions as follows: baking temperature 150 degrees C, baking time 140 seconds. The regression models are notable and reasonable, which can forecast results precisely.

  11. Dynamic detection-rate-based bit allocation with genuine interval concealment for binary biometric representation.

    Science.gov (United States)

    Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann

    2013-06-01

    Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulted binary output of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we learn that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA based on a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features and 2) a genuine interval concealment technique to alleviate crucial information leakage resulted from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.

  12. [Bibliometrics and visualization analysis of land use regression models in ambient air pollution research].

    Science.gov (United States)

    Zhang, Y J; Zhou, D H; Bai, Z P; Xue, F X

    2018-02-10

    Objective: To quantitatively analyze the current status and development trends regarding the land use regression (LUR) models on ambient air pollution studies. Methods: Relevant literature from the PubMed database before June 30, 2017 was analyzed, using the Bibliographic Items Co-occurrence Matrix Builder (BICOMB 2.0). Keywords co-occurrence networks, cluster mapping and timeline mapping were generated, using the CiteSpace 5.1.R5 software. Relevant literature identified in three Chinese databases was also reviewed. Results: Four hundred sixty four relevant papers were retrieved from the PubMed database. The number of papers published showed an annual increase, in line with the growing trend of the index. Most papers were published in the journal of Environmental Health Perspectives . Results from the Co-word cluster analysis identified five clusters: cluster#0 consisted of birth cohort studies related to the health effects of prenatal exposure to air pollution; cluster#1 referred to land use regression modeling and exposure assessment; cluster#2 was related to the epidemiology on traffic exposure; cluster#3 dealt with the exposure to ultrafine particles and related health effects; cluster#4 described the exposure to black carbon and related health effects. Data from Timeline mapping indicated that cluster#0 and#1 were the main research areas while cluster#3 and#4 were the up-coming hot areas of research. Ninety four relevant papers were retrieved from the Chinese databases with most of them related to studies on modeling. Conclusion: In order to better assess the health-related risks of ambient air pollution, and to best inform preventative public health intervention policies, application of LUR models to environmental epidemiology studies in China should be encouraged.

  13. Post-General Anesthesia Ultrasound-Guided Venous Mapping Increases Autogenous Access Placement Rates.

    Science.gov (United States)

    Png, C Y Maximilian; Korayem, Adam; Finlay, David J

    2018-04-18

    This study investigates the impact of introducing a post-general anesthesia ultrasound mapping (PAUS) on the type of vascular access chosen for hemodialysis in patients without previous accesses. 203 of 297 consecutive patients met inclusion criteria and were reviewed. Within-subjects analysis was performed on patients with both an outpatient ultrasound-guided vein mapping and a PAUS using sign tests and Wilcoxon signed ranked tests. Further, a between-subjects analysis added patients with only the outpatient vein mapping; demographic and comorbidity data were analyzed using t-tests and chi-squared tests. An ordinal logit regression was run for the type of access placed, while a bivariate logit regression was used to compare rates of autogenous access maturation. 165 (81%) patients received both a standard outpatient vein mapping and a PAUS. At the outpatient vein mapping, 130 (79%) patients had suitable veins for an autogenous access while 35 (21%) patients did not have suitable veins for an autogenous access and were planned for a prosthetic access. During PAUS, all 165 (100%) patients were found to have suitable veins for autogenous access formation (P<0.001). When comparing specific autogenous access configurations, Wilcoxon signed rank testing showed significantly more preferable access configurations in the PAUS group compared to the outpatient mapping (P<0.001); Outpatient mapping resulted in 81 (47%) radiocephalic accesses, 10 (6%) radiobasilic accesses, 20 (12%) brachiocephalic accesses, 19 (12%) brachiobasilic accesses and 35 (21%) prosthetic accesses planned, in contrast to 149 (90%) radiocephalic accesses, 3 (2%) radiobasilic accesses, 10 (6%) brachiocephalic accesses, 3 (2%) brachiobasilic accesses and 0 prosthetic accesses when the same patients were analyzed using PAUS. With the analysis expanded to include the 38 (19%) patients with only the outpatient vein mapping (without-PAUS), the Wilcoxon-Mann-Whitney test showed no significant differences

  14. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: ""This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable."" -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  15. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many bio-statistical applications which, in general, cannot be assumed to be independent. Generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both the bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both these methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

    The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term ''flowing interval spacing'' as opposed to fractured spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but does not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled, ''Probability Distribution for Flowing Interval Spacing'', (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) ''Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses'' (CRWMS M and O 1999a) and (2) ''Incorporation of Heterogeneity in SZ Flow and Transport Analyses'', (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. 
Because each flowing interval probably has more than one fracture contributing to a flowing interval, the true flowing interval spacing could be

  17. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  18. A high-density SNP map for accurate mapping of seed fibre QTL in Brassica napus L.

    Directory of Open Access Journals (Sweden)

    Liezhao Liu

    Full Text Available A high density genetic linkage map for the complex allotetraploid crop species Brassica napus (oilseed rape was constructed in a late-generation recombinant inbred line (RIL population, using genome-wide single nucleotide polymorphism (SNP markers assayed by the Brassica 60 K Infinium BeadChip Array. The linkage map contains 9164 SNP markers covering 1832.9 cM. 1232 bins account for 7648 of the markers. A subset of 2795 SNP markers, with an average distance of 0.66 cM between adjacent markers, was applied for QTL mapping of seed colour and the cell wall fiber components acid detergent lignin (ADL, cellulose and hemicellulose. After phenotypic analyses across four different environments a total of 11 QTL were detected for seed colour and fiber traits. The high-density map considerably improved QTL resolution compared to the previous low-density maps. A previously identified major QTL with very high effects on seed colour and ADL was pinpointed to a narrow genome interval on chromosome A09, while a minor QTL explaining 8.1% to 14.1% of variation for ADL was detected on chromosome C05. Five and three QTL accounting for 4.7% to 21.9% and 7.3% to 16.9% of the phenotypic variation for cellulose and hemicellulose, respectively, were also detected. To our knowledge this is the first description of QTL for seed cellulose and hemicellulose in B. napus, representing interesting new targets for improving oil content. The high density SNP genetic map enables navigation from interesting B. napus QTL to Brassica genome sequences, giving useful new information for understanding the genetics of key seed quality traits in rapeseed.

  19. Two-sorted Point-Interval Temporal Logics

    DEFF Research Database (Denmark)

    Balbiani, Philippe; Goranko, Valentin; Sciavicco, Guido

    2011-01-01

There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as particular, duration-less intervals. Here we develop an explicitly two-sorted point-interval temporal logical framework...... whereby time instants (points) and time periods (intervals) are considered on a par, and the perspective can shift between them within the formal discourse. We focus on fragments involving only modal operators that correspond to the inter-sort relations between points and intervals. We analyze...

  20. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
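
The contrast drawn above can be sketched in a few lines: with two nearly collinear predictors, the OLS normal equations are ill-conditioned and produce large opposite-signed coefficients, while adding a ridge penalty lambda*I stabilizes the solution. The data and penalty value below are invented for illustration.

```python
# Sketch: ridge regression vs. OLS under collinearity (illustrative data,
# not from the paper). Ridge adds lam*I to the normal equations.

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def ridge_coefs(x1, x2, y, lam):
    """Coefficients of y ~ x1 + x2 (no intercept; inputs assumed centered)."""
    s11 = sum(v * v for v in x1)
    s22 = sum(v * v for v in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    b1 = sum(a * b for a, b in zip(x1, y))
    b2 = sum(a * b for a, b in zip(x2, y))
    return solve_2x2(s11 + lam, s12, s12, s22 + lam, b1, b2)

# Nearly collinear design: x2 is x1 plus a tiny perturbation.
x1 = [-2.0, -1.0, 0.0, 1.0, 2.0]
x2 = [-2.01, -0.99, 0.02, 1.01, 1.97]
y = [-4.1, -2.0, 0.1, 2.0, 3.9]

ols = ridge_coefs(x1, x2, y, lam=0.0)    # unstable: large opposite-signed coefs
ridge = ridge_coefs(x1, x2, y, lam=1.0)  # stabilized, shrunken coefs
print(ols, ridge)
```

Note the stochastic element the abstract criticizes: in practice the penalty lam is itself chosen from the data.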

  1. High-intensity interval training: Modulating interval duration in overweight/obese men.

    Science.gov (United States)

    Smith-Ryan, Abbie E; Melvin, Malia N; Wingfield, Hailee L

    2015-05-01

High-intensity interval training (HIIT) is a time-efficient strategy shown to induce various cardiovascular and metabolic adaptations. Little is known about the optimal tolerable combination of intensity and volume necessary for adaptations, especially in clinical populations. In a randomized controlled pilot design, we evaluated the effects of two types of interval training protocols, varying in intensity and interval duration, on clinical outcomes in overweight/obese men. Twenty-five men [body mass index (BMI) > 25 kg · m(-2)] completed baseline body composition measures: fat mass (FM), lean mass (LM) and percent body fat (%BF) and fasting blood glucose, lipids and insulin (IN). A graded exercise cycling test was completed for peak oxygen consumption (VO2peak) and power output (PO). Participants were randomly assigned to high-intensity short interval (1MIN-HIIT), high-intensity interval (2MIN-HIIT) or control (CON) groups. 1MIN-HIIT and 2MIN-HIIT completed 3 weeks of cycling interval training, 3 days/week, consisting of either 10 × 1 min bouts at 90% PO with 1 min rests (1MIN-HIIT) or 5 × 2 min bouts with 1 min rests at undulating intensities (80%-100%) (2MIN-HIIT). There were no significant training effects on FM (Δ1.06 ± 1.25 kg) or %BF (Δ1.13% ± 1.88%), compared to CON. Increases in LM were not significant but increased by 1.7 kg and 2.1 kg for 1MIN and 2MIN-HIIT groups, respectively. Increases in VO2peak were also not significant for 1MIN (3.4 ml·kg(-1) · min(-1)) or 2MIN groups (2.7 ml · kg(-1) · min(-1)). IN sensitivity (HOMA-IR) improved for both training groups (Δ-2.78 ± 3.48 units; p < 0.05) compared to CON. HIIT may be an effective short-term strategy to improve cardiorespiratory fitness and IN sensitivity in overweight males.
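
The HOMA-IR index reported above can be computed from the fasting measures the study collected. The abstract does not give the exact computation, so this sketch uses the conventional formula (fasting glucose in mg/dL times fasting insulin in µU/mL, divided by 405).

```python
# Sketch: the standard HOMA-IR insulin-resistance index (an assumption about
# the study's exact computation; this is the conventional mg/dL formula).

def homa_ir(glucose_mg_dl, insulin_uu_ml):
    return glucose_mg_dl * insulin_uu_ml / 405.0

print(round(homa_ir(90, 10), 2))  # -> 2.22
```

Lower values indicate better insulin sensitivity, which is why the reported decrease of 2.78 units is an improvement.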

  2. Procedural Documentation and Accuracy Assessment of Bathymetric Maps and Area/Capacity Tables for Small Reservoirs

    Science.gov (United States)

    Wilson, Gary L.; Richards, Joseph M.

    2006-01-01

Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracies of the collected data, bathymetric surface model, and contour map product were 0.67 foot, 0.91 foot, and 1.51 feet at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as a 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
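
The 95-percent-confidence accuracies quoted above are in the style of NSSDA vertical accuracy statistics; assuming that convention (an assumption, since the report's exact standard is not named here), accuracy is 1.96 times the RMSE of surveyed-minus-model differences at quality-assurance check points. The differences below are hypothetical.

```python
import math

# Sketch: NSSDA-style 95% vertical accuracy, accuracy_z = 1.96 * RMSE_z.
# The check-point differences are made up for illustration.

def vertical_accuracy_95(differences):
    rmse = math.sqrt(sum(d * d for d in differences) / len(differences))
    return 1.96 * rmse

diffs_ft = [0.2, -0.3, 0.4, -0.1, 0.3]  # hypothetical surveyed-minus-model depths
print(round(vertical_accuracy_95(diffs_ft), 2))  # -> 0.55
```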

  3. A landslide susceptibility map of Africa

    Science.gov (United States)

    Broeckx, Jente; Vanmaercke, Matthias; Duchateau, Rica; Poesen, Jean

    2017-04-01

Studies on landslide risks and fatalities indicate that landslides are a global threat to humans, infrastructure and the environment, certainly in Africa. Nonetheless our understanding of the spatial patterns of landslides and rockfalls on this continent is very limited. Also in global landslide susceptibility maps, Africa is mostly underrepresented in the inventories used to construct these maps. As a result, predicted landslide susceptibilities remain subject to very large uncertainties. This research aims to produce a first continent-wide landslide susceptibility map for Africa, calibrated with a well-distributed landslide dataset. As a first step, we compiled all available landslide inventories for Africa. These data were supplemented by additional landslide mapping with Google Earth in underrepresented regions. This way, we compiled 60 landslide inventories from the literature (ca. 11000 landslides) and an additional 6500 landslides through mapping in Google Earth (including 1500 rockfalls). Various environmental variables, such as slope, lithology, soil characteristics, land use, precipitation and seismic activity, were investigated for their significance in explaining the observed spatial patterns of landslides. To account for potential mapping biases in our dataset, we used Monte Carlo simulations that selected different subsets of mapped landslides, tested the significance of the considered environmental variables and evaluated the performance of the fitted multiple logistic regression model against another subset of mapped landslides. Based on these analyses, we constructed two landslide susceptibility maps for Africa: one for all landslide types and one excluding rockfalls. In both maps, topography, lithology and seismic activity were the most significant variables. The latter factor may be surprising, given the overall limited degree of seismicity in Africa. However, its significance indicates that frequent seismic events may serve as an important
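
The calibrate-and-validate loop described above (fit a logistic model on one random subset of mapped points, score it on a held-out subset) can be sketched as follows; the single synthetic "slope" predictor and all data are invented, and the fitter is a plain gradient-ascent logistic regression rather than the authors' full model.

```python
import math, random

# Sketch: fit logistic regression on one random subset, validate on the rest.
# One hypothetical predictor; landslides (y=1) tend to sit on steeper slopes.

def fit_logistic(X, y, steps=1500, lr=0.1):
    """Plain gradient ascent on the log-likelihood; returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    n = len(y)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xv, t in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xv)))
            g0 += t - p
            g1 += (t - p) * xv
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

random.seed(1)
X = [random.gauss(10, 5) for _ in range(200)] + [random.gauss(25, 5) for _ in range(200)]
y = [0] * 200 + [1] * 200
idx = list(range(400))
random.shuffle(idx)
train_idx, test_idx = idx[:280], idx[280:]

b0, b1 = fit_logistic([X[i] for i in train_idx], [y[i] for i in train_idx])
correct = sum((1 / (1 + math.exp(-(b0 + b1 * X[i]))) > 0.5) == (y[i] == 1)
              for i in test_idx)
print(b1 > 0, correct / len(test_idx))
```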

  4. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
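
The core idea, representing 2-D vectors as complex numbers and solving for a complex (vector) coefficient by least squares, can be illustrated for the single-regressor model w = c·z + e, whose LS estimate is ĉ = Σ conj(z)·w / Σ conj(z)·z. The data below are synthetic, not the paper's typhoon example.

```python
# Sketch: complex-valued least squares, standing in for the paper's
# vector regression. The coefficient c rotates and scales each input vector.

def complex_ls(z, w):
    num = sum(zi.conjugate() * wi for zi, wi in zip(z, w))
    den = sum(zi.conjugate() * zi for zi in z)  # sum of |z|^2
    return num / den

c_true = 0.8 + 0.6j
z = [1 + 0j, 0 + 1j, 2 - 1j, -1 + 2j, 3 + 3j]
noise = [0.01, -0.02j, 0.01 + 0.01j, 0, -0.01]
w = [c_true * zi + eps for zi, eps in zip(z, noise)]

c_hat = complex_ls(z, w)
print(c_hat)  # close to 0.8+0.6j
```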

  5. Integrated consensus genetic and physical maps of flax (Linum usitatissimum L.).

    Science.gov (United States)

    Cloutier, Sylvie; Ragupathy, Raja; Miranda, Evelyn; Radovanovic, Natasa; Reimer, Elsa; Walichnowski, Andrzej; Ward, Kerry; Rowland, Gordon; Duguid, Scott; Banik, Mitali

    2012-12-01

Three linkage maps of flax (Linum usitatissimum L.) were constructed from populations CDC Bethune/Macbeth, E1747/Viking and SP2047/UGG5-5 containing between 385 and 469 mapped markers each. The first consensus map of flax was constructed incorporating 770 markers based on 371 shared markers, including 114 that were shared by all three populations and 257 shared between any two populations. The map's 15 linkage groups correspond to the haploid chromosome number of this species. The marker order of the consensus map was largely collinear in all three individual maps, but a few local inversions and marker rearrangements spanning short intervals were observed. Segregation distortion was present in all linkage groups, which contained 1-52 markers displaying non-Mendelian segregation. The total length of the consensus genetic map is 1,551 cM with a mean marker density of 2.0 cM. A total of 670 markers were anchored to 204 of the 416 fingerprinted contigs of the physical map, corresponding to ~274 Mb or 74% of the estimated flax genome size of 370 Mb. This high resolution consensus map will be a resource for comparative genomics, genome organization, evolution studies and anchoring of the whole genome shotgun sequence.
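
The headline figures above are consistent with each other, as a quick arithmetic check shows (mean marker density as map length over marker count, and anchored fraction of the genome):

```python
# Arithmetic check of the consensus-map figures quoted above.

total_length_cM = 1551
n_markers = 770
density = total_length_cM / n_markers
print(round(density, 1))  # -> 2.0 cM per marker

anchored_Mb, genome_Mb = 274, 370
print(round(100 * anchored_Mb / genome_Mb))  # -> 74 percent
```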

  6. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

Praise for the Third Edition "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  7. Detailed geologic modeling of a turbidite reservoir interval at the Mars discovery

    Energy Technology Data Exchange (ETDEWEB)

    Mahaffie, M.J.; Chapin, M.A. [Shell Exploration and Production Technology Co. (United States); Henry, W.A. [Shell Offshore, Inc. (United States)

    1995-12-31

Detailed reservoir architecture studies using high resolution seismic data coupled with geologic and seismic inversion modeling have been used to evaluate a major hydrocarbon bearing turbidite reservoir found within Prospect Mars. Early interpretations of this interval, based on lower frequency (40 Hz) seismic data, indicated the presence of a single, laterally continuous event covering an area nearly 3 miles square (~5200 acres). Correlations from well control supported the notion that this seismic event comprised a series of continuous sheet sands exhibiting a high degree of lateral continuity and connectivity. However, pressure data taken during fluid sampling of the reservoir suggested the possibility of discontinuities not observed within the resolution of the seismic data. Seismic reprocessing enhancements to increase frequency content revealed the presence of multiple stratigraphic features not previously recognized. Detailed seismic mapping using loop-level seismic attributes and seismic inversion studies constrained by geologic models provide a more realistic depiction of the environment of deposition and improve reservoir simulation modeling for this stratigraphic interval. (author). 3 figs

  8. Estimating the magnitude of annual peak discharges with recurrence intervals between 1.1 and 3.0 years for rural, unregulated streams in West Virginia

    Science.gov (United States)

    Wiley, Jeffrey B.; Atkins, John T.; Newell, Dawn A.

    2002-01-01

Multiple and simple least-squares regression models for the log10-transformed 1.5- and 2-year recurrence intervals of peak discharges, with independent variables describing the basin characteristics (log10-transformed and untransformed) for 236 streamflow-gaging stations, were evaluated, and the regression residuals were plotted as areal distributions that defined three regions in West Virginia designated as East, North, and South. Regional equations for the 1.1-, 1.2-, 1.3-, 1.4-, 1.5-, 1.6-, 1.7-, 1.8-, 1.9-, 2.0-, 2.5-, and 3-year recurrence intervals of peak discharges were determined by generalized least-squares regression. Log10-transformed drainage area was the most significant independent variable for all regions. Equations developed in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia. The accuracies of the estimating equations are quantified by measuring the average prediction error (from 27.4 to 52.4 percent) and equivalent years of record (from 1.1 to 3.4 years).
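
The regional-equation form described above, log10(peak discharge) regressed on log10(drainage area), is equivalent to the power law Q = a·A^b. A sketch with synthetic stations (ordinary rather than generalized least squares, and not the West Virginia data):

```python
import math

# Sketch: fit Q = a * A^b by ordinary least squares on log10-transformed data.
# Synthetic stations follow Q = 50 * A^0.8 exactly, so the fit recovers a and b.

def fit_loglog(area, q):
    x = [math.log10(v) for v in area]
    y = [math.log10(v) for v in q]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = 10 ** (my - b * mx)
    return a, b  # Q ~= a * A^b

areas = [1, 5, 10, 50, 100]
flows = [50 * v ** 0.8 for v in areas]
a, b = fit_loglog(areas, flows)
print(round(a, 1), round(b, 2))  # -> 50.0 0.8
```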

  9. Chaotic particle swarm optimization algorithm in a support vector regression electric load forecasting model

    International Nuclear Information System (INIS)

    Hong, W.-C.

    2009-01-01

Accurate forecasting of electric load has always been one of the most important issues in the electricity industry, particularly for developing countries. Owing to various influences, electric load forecasting exhibits highly nonlinear characteristics. Recently, support vector regression (SVR), with its nonlinear mapping capability, has been successfully employed to solve nonlinear regression and time series problems. However, there is still a lack of systematic approaches for determining an appropriate parameter combination for an SVR model. This investigation elucidates the feasibility of applying a chaotic particle swarm optimization (CPSO) algorithm to choose a suitable parameter combination for an SVR model. The empirical results reveal that the proposed model outperforms two comparison models based on other algorithms, the genetic algorithm (GA) and the simulated annealing algorithm (SA). Finally, it also provides a theoretical exploration of the electric load forecasting support system (ELFSS)
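
A plain (non-chaotic) particle swarm optimizer, the class of search the paper applies to SVR parameter selection, can be sketched as below. The quadratic objective stands in for the SVR validation error, and the inertia/acceleration constants are conventional textbook choices, not the paper's.

```python
import random

# Sketch: plain PSO minimizing a 2-D stand-in objective. In the paper's
# setting, f would be the SVR validation error over (C, epsilon, gamma).

def pso(f, bounds, n_particles=20, iters=60, seed=7):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                                  # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Stand-in objective with minimum 0 at (3, -2).
best, val = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
                [(-10, 10), (-10, 10)])
print([round(c, 2) for c in best], round(val, 4))
```

The "chaotic" variant replaces the uniform random draws with a chaotic map sequence to improve exploration; the swarm mechanics are otherwise the same.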

  10. Quantitive DNA Fiber Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Chun-Mei; Wang, Mei; Greulich-Bode, Karin M.; Weier, Jingly F.; Weier, Heinz-Ulli G.

    2008-01-28

Several hybridization-based methods used to delineate single copy or repeated DNA sequences in larger genomic intervals take advantage of the increased resolution and sensitivity of free chromatin, i.e., chromatin released from interphase cell nuclei. Quantitative DNA fiber mapping (QDFM) differs from the majority of these methods in that it applies FISH to purified, clonal DNA molecules which have been bound with at least one end to a solid substrate. The DNA molecules are then stretched by the action of a receding meniscus at the water-air interface resulting in DNA molecules stretched homogeneously to about 2.3 kb/µm. When non-isotopically, multicolor-labeled probes are hybridized to these stretched DNA fibers, their respective binding sites are visualized in the fluorescence microscope, their relative distance can be measured and converted into kilobase pairs (kb). The QDFM technique has found useful applications ranging from the detection and delineation of deletions or overlap between linked clones to the construction of high-resolution physical maps to studies of stalled DNA replication and transcription.
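
The stretching factor of about 2.3 kb/µm quoted above gives a direct conversion from measured fiber distances to kilobase pairs:

```python
# Sketch: converting measured fiber distances to kilobase pairs using the
# ~2.3 kb/um stretching factor quoted in the abstract.

KB_PER_UM = 2.3

def distance_kb(microns):
    return microns * KB_PER_UM

print(round(distance_kb(10.0), 2))  # -> 23.0
```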

  11. A genetic linkage map of the chromosome 4 short arm

    Energy Technology Data Exchange (ETDEWEB)

    Locke, P.A.; MacDonald, M.E.; Srinidhi, J.; Tanzi, R.E.; Haines, J.L. (Massachusetts General Hospital, Boston (United States)); Gilliam, T.C. (Columbia Univ., New York, NY (United States)); Conneally, P.M. (Indiana Univ. Medical Center, Indianapolis (United States)); Wexler, N.S. (Columbia Univ., New York, NY (United States) Hereditary Disease Foundation, Santa Monica, CA (United States)); Gusella, J.F. (Massachusetts General Hospital, Boston (United States) Harvard Univ., Boston, MA (United States))

    1993-01-01

The authors have generated an 18-interval contiguous genetic linkage map of human chromosome 4 spanning the entire short arm and proximal long arm. Fifty-seven polymorphisms, representing 42 loci, were analyzed in the Venezuelan reference pedigree. The markers included seven genes (ADRA2C, ALB, GABRB1, GC, HOX7, IDUA, QDPR), one pseudogene (RAF1P1), and 34 anonymous DNA loci. Four loci were represented by microsatellite polymorphisms and one (GC) was expressed as a protein polymorphism. The remainder were genotyped based on restriction fragment length polymorphism. The sex-averaged map covered 123 cM. Significant differences in sex-specific rates of recombination were observed only in the pericentromeric and proximal long arm regions, but these contributed to different overall map lengths of 115 cM in males and 138 cM in females. This map provides 19 reference points along chromosome 4 that will be particularly useful in anchoring and seeding physical mapping studies and in aiding disease studies. 26 refs., 1 fig., 1 tab.

  12. Some Characterizations of Convex Interval Games

    NARCIS (Netherlands)

    Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.

    2008-01-01

    This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.
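
A sketch of the notions involved, with definitions assumed from the interval-game literature (the abstract does not spell them out): coalition worths are closed intervals, interval addition is componentwise, [a, b] weakly dominates [c, d] when a >= c and b >= d, and superadditivity asks the worth of a union of disjoint coalitions to weakly dominate the interval sum of their worths.

```python
# Sketch (definitions assumed, not stated in this abstract): intervals as
# (lower, upper) tuples; check superadditivity for a tiny 2-player game.

def iadd(x, y):
    """Componentwise interval addition."""
    return (x[0] + y[0], x[1] + y[1])

def dominates(x, y):
    """Weak interval dominance: x >= y in both endpoints."""
    return x[0] >= y[0] and x[1] >= y[1]

# A 2-player interval game: singleton worths and the grand coalition.
v = {frozenset({1}): (1.0, 2.0),
     frozenset({2}): (2.0, 3.0),
     frozenset({1, 2}): (4.0, 6.0)}

s, t = frozenset({1}), frozenset({2})
print(dominates(v[s | t], iadd(v[s], v[t])))  # -> True: superadditive here
```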

  13. Remote Sensing Analysis of Vegetation Recovery following Short-Interval Fires in Southern California Shrublands

    Science.gov (United States)

    Meng, Ran; Dennison, Philip E.; D’Antonio, Carla M.; Moritz, Max A.

    2014-01-01

Increased fire frequency has been shown to promote alien plant invasions in the western United States, resulting in persistent vegetation type change. Short interval fires are widely considered to be detrimental to reestablishment of shrub species in southern California chaparral, facilitating the invasion of exotic annuals and producing “type conversion”. However, supporting evidence for type conversion has largely been at local, site scales and over short post-fire time scales. Type conversion has not been shown to be persistent or widespread in chaparral, and past range improvement studies present evidence that chaparral type conversion may be difficult and a relatively rare phenomenon across the landscape. With the aid of remote sensing data covering coastal southern California and a historical wildfire dataset, the effects of short interval fires and of other explanatory variables (fire history, climate and elevation) on vegetation recovery were analyzed by linear regression. Reduced vegetation cover was found in some lower elevation areas that were burned twice in short interval fires, where non-sprouting species are more common. However, extensive type conversion of chaparral to grassland was not evident in this study. Most variables, with the exception of elevation, were moderately or poorly correlated with differences in vegetation recovery. PMID:25337785

  14. Motor Unit Interpulse Intervals During High Force Contractions.

    Science.gov (United States)

    Stock, Matt S; Thompson, Brennan J

    2016-01-01

    We examined the means, medians, and variability for motor-unit interpulse intervals (IPIs) during voluntary, high force contractions. Eight men (mean age = 22 years) attempted to perform isometric contractions at 90% of their maximal voluntary contraction force while bipolar surface electromyographic (EMG) signals were detected from the vastus lateralis and vastus medialis muscles. Surface EMG signal decomposition was used to determine the recruitment thresholds and IPIs of motor units that demonstrated accuracy levels ≥ 96.0%. Motor units with high recruitment thresholds demonstrated longer mean IPIs, but the coefficients of variation were similar across all recruitment thresholds. Polynomial regression analyses indicated that for both muscles, the relationship between the means and standard deviations of the IPIs was linear. The majority of IPI histograms were positively skewed. Although low-threshold motor units were associated with shorter IPIs, the variability among motor units with differing recruitment thresholds was comparable.
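
The descriptive statistics examined above (mean and median IPI, coefficient of variation, and the mean-greater-than-median pattern consistent with positive skew) can be computed directly; the interpulse-interval train below is hypothetical, not the study's data.

```python
import statistics

# Sketch: descriptive IPI statistics for a hypothetical, right-skewed
# interpulse-interval train (milliseconds).

ipis_ms = [55, 60, 58, 62, 59, 90, 57, 61, 56, 85]

mean = statistics.mean(ipis_ms)
median = statistics.median(ipis_ms)
cv = statistics.stdev(ipis_ms) / mean  # coefficient of variation

# Positive skew pulls the mean above the median.
print(round(mean, 1), median, round(cv, 2), mean > median)
```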

  15. Site investigation SFR. Boremap mapping of percussion drilled borehole HFR106

    Energy Technology Data Exchange (ETDEWEB)

    Winell, Sofia (Geosigma AB (Sweden))

    2010-06-15

This report presents the result from the Boremap mapping of the percussion drilled borehole HFR106, which is drilled from an islet located ca 220 m southeast of the pier above SFR. The purpose of the location and orientation of the borehole is to investigate the possible occurrence of gently dipping, water-bearing structures in the area. HFR106 has a length of 190.4 m and is oriented 269.4 deg/-60.9 deg. The mapping is based on the borehole image (BIPS), investigation of drill cuttings and generalized, as well as more detailed, geophysical logs. The dominating rock type, which occupies 68% of HFR106, is fine- to medium-grained, pinkish grey metagranite-granodiorite (rock code 101057) mapped as foliated with a medium to strong intensity. Pegmatite to pegmatitic granite (rock code 101061) occupies 29% of the borehole. Subordinate rock types are felsic to intermediate metavolcanic rock (rock code 103076) and fine- to medium-grained granite (rock code 111058). Rock occurrences (rock types < 1 m in length) occupy about 16% of the mapped interval, of which half is veins, dykes and unspecified occurrences of pegmatite and pegmatitic granite. Only 5.5% of HFR106 is inferred to be altered, mainly oxidation in two intervals with an increased fracture frequency. A total number of 845 fractures are registered in HFR106. Of these, 64 are interpreted as open with a certain aperture, 230 as open with a possible aperture, and 551 as sealed. This results in the following fracture frequencies: 1.6 open fractures/m and 3.0 sealed fractures/m. Three fracture sets of open and sealed fractures with the orientations 290 deg/70 deg, 150 deg/85 deg and 200 deg/85 deg can be distinguished in HFR106. The fracture frequency is generally higher in the second half of the borehole, and particularly in the interval 176-187.4 m.

  16. Site investigation SFR. Boremap mapping of percussion drilled borehole HFR106

    International Nuclear Information System (INIS)

    Winell, Sofia

    2010-06-01

This report presents the result from the Boremap mapping of the percussion drilled borehole HFR106, which is drilled from an islet located ca 220 m southeast of the pier above SFR. The purpose of the location and orientation of the borehole is to investigate the possible occurrence of gently dipping, water-bearing structures in the area. HFR106 has a length of 190.4 m and is oriented 269.4 deg/-60.9 deg. The mapping is based on the borehole image (BIPS), investigation of drill cuttings and generalized, as well as more detailed, geophysical logs. The dominating rock type, which occupies 68% of HFR106, is fine- to medium-grained, pinkish grey metagranite-granodiorite (rock code 101057) mapped as foliated with a medium to strong intensity. Pegmatite to pegmatitic granite (rock code 101061) occupies 29% of the borehole. Subordinate rock types are felsic to intermediate metavolcanic rock (rock code 103076) and fine- to medium-grained granite (rock code 111058). Rock occurrences (rock types < 1 m in length) occupy about 16% of the mapped interval, of which half is veins, dykes and unspecified occurrences of pegmatite and pegmatitic granite. Only 5.5% of HFR106 is inferred to be altered, mainly oxidation in two intervals with an increased fracture frequency. A total number of 845 fractures are registered in HFR106. Of these, 64 are interpreted as open with a certain aperture, 230 as open with a possible aperture, and 551 as sealed. This results in the following fracture frequencies: 1.6 open fractures/m and 3.0 sealed fractures/m. Three fracture sets of open and sealed fractures with the orientations 290 deg/70 deg, 150 deg/85 deg and 200 deg/85 deg can be distinguished in HFR106. The fracture frequency is generally higher in the second half of the borehole, and particularly in the interval 176-187.4 m.

  17. Ensemble of ground subsidence hazard maps using fuzzy logic

    Science.gov (United States)

    Park, Inhye; Lee, Jiyeong; Saro, Lee

    2014-06-01

Hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok, Korea, were constructed using fuzzy ensemble techniques and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, groundwater, and ground subsidence maps. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 70/30 for training and validation of the models. The relationships between the detected ground-subsidence area and the factors were identified and quantified by frequency ratio (FR), logistic regression (LR) and artificial neural network (ANN) models. The relationships were used as factor ratings in the overlay analysis to create ground-subsidence hazard indexes and maps. The three GSH maps were then used as new input factors and integrated using fuzzy-ensemble methods to make better hazard maps. All of the hazard maps were validated by comparison with known subsidence areas that were not used directly in the analysis. As a result, the ensemble model was found to be more effective in terms of prediction accuracy than the individual models.
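
One common fuzzy-ensemble combination, the gamma operator over the fuzzy algebraic sum and product, can be sketched for the three hazard indexes (FR, LR, ANN) described above. Whether this is the exact operator the paper uses is an assumption, and the gamma value and cell indexes below are made up.

```python
# Sketch: fuzzy gamma operator combining per-cell hazard indexes scaled
# to [0, 1]. gamma blends the optimistic fuzzy sum and pessimistic product.

def fuzzy_gamma(memberships, gamma=0.8):
    prod = 1.0
    comp = 1.0
    for m in memberships:
        prod *= m            # fuzzy algebraic product
        comp *= (1.0 - m)
    fsum = 1.0 - comp        # fuzzy algebraic sum
    return (fsum ** gamma) * (prod ** (1.0 - gamma))

# Hypothetical FR, LR, ANN hazard indexes for one grid cell.
cell = [0.7, 0.6, 0.8]
print(round(fuzzy_gamma(cell), 3))
```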

  18. Multivariate interval-censored survival data

    DEFF Research Database (Denmark)

    Hougaard, Philip

    2014-01-01

    Interval censoring means that an event time is only known to lie in an interval (L,R], with L the last examination time before the event, and R the first after. In the univariate case, parametric models are easily fitted, whereas for non-parametric models, the mass is placed on some intervals, de...

  19. Adaptive adjustment of interval predictive control based on combined model and application in shell brand petroleum distillation tower

    Science.gov (United States)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

Constraints of the optimization objective often cannot be met when predictive control is applied to an industrial production process, in which case the online predictive controller will not find a feasible solution or a globally optimal solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, a nonlinear programming method is used to discuss the feasibility of constrained predictive control; a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case in which the optimization objective is not feasible. On this basis, for the interval control requirements of the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatic adjustment of the infeasible interval range, expanding the scope of the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.

  20. Understanding Poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
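
A Poisson regression with a log link, the model family discussed above, can be fitted by Newton's method on the log-likelihood. This sketch uses one covariate and synthetic counts, not the ENSPIRE data.

```python
import math

# Sketch: single-covariate Poisson regression, y ~ Poisson(exp(b0 + b1*x)),
# fitted by Newton's method with the Fisher information as Hessian.

def poisson_fit(x, y, iters=25):
    # Start from the intercept-only fit; plain Newton can overshoot from (0, 0).
    b0, b1 = math.log(sum(y) / len(y)), 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Gradient of the log-likelihood.
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # Fisher information (X' W X with W = diag(mu)), inverted as a 2x2.
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

x = [0, 1, 2, 3, 4, 5]
y = [2, 2, 4, 5, 8, 12]  # synthetic counts growing roughly exponentially
b0, b1 = poisson_fit(x, y)
print(round(b0, 2), round(b1, 2))
```

At the maximum-likelihood solution with an intercept, the fitted means sum to the observed counts, which is a convenient convergence check.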

  1. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest. Nonlinear Regression Analysis and its Applications Douglas M. Bates and Donald G. Watts ".an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models.highly recommend[ed].for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  2. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis

    Directory of Open Access Journals (Sweden)

    Maarten van Smeden

    2016-11-01

Full Text Available Abstract Background Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. Methods The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth’s correction, are compared. Results The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect (‘separation’). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth’s correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. Conclusions The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
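
The EPV criterion itself is simple arithmetic, and complete separation on a single covariate (the condition under which the maximum-likelihood estimate does not exist) can be detected with a threshold check. Both are sketched below on made-up data.

```python
# Sketch: events-per-variable count and a crude single-covariate check for
# complete separation. Data are invented for illustration.

def events_per_variable(y, n_params):
    events = min(sum(y), len(y) - sum(y))  # the rarer outcome counts as "events"
    return events / n_params

def completely_separated(x, y):
    """True if some threshold on x perfectly splits y=0 from y=1."""
    x0 = [xi for xi, yi in zip(x, y) if yi == 0]
    x1 = [xi for xi, yi in zip(x, y) if yi == 1]
    return max(x0) < min(x1) or max(x1) < min(x0)

y = [0, 0, 0, 0, 0, 0, 1, 1, 1]
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]

print(events_per_variable(y, n_params=2))  # 3 events over 2 candidate variables
print(completely_separated(x, y))          # -> True: the MLE does not exist
```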

  3. Mapping of the locus for autosomal dominant amelogenesis imperfecta (AIH2) to a 4-Mb YAC contig on chromosome 4q11-q21

    Energy Technology Data Exchange (ETDEWEB)

    Kaerrman, C.; Holmgren, G.; Forsman, K. [Univ. Hospital, Umea (Sweden); Univ. of Umea (Sweden)] [and others]

    1997-01-15

    Amelogenesis imperfecta (AI) is a clinically and genetically heterogeneous group of inherited enamel defects. We recently mapped a locus for autosomal dominant local hypoplastic amelogenesis imperfecta (AIH2) to the long arm of chromosome 4. The disease gene was localized to a 17.6-cM region between the markers D4S392 and D4S395. The albumin gene (ALB), located in the same interval, was a candidate gene for autosomal dominant AI (ADAI) since albumin has a potential role in enamel maturation. Here we describe refined mapping of the AIH2 locus and the construction of marker maps by radiation hybrid mapping and yeast artificial chromosome (YAC)-based sequence tagged site-content mapping. A radiation hybrid map consisting of 11 microsatellite markers in the 5-cM interval between D4S409 and D4S1558 was constructed. Recombinant haplotypes in six Swedish ADAI families suggest that the disease gene is located in the interval between D4S2421 and ALB. ALB is therefore not likely to be the disease-causing gene. Affected members in all six families share the same allele haplotypes, indicating a common ancestral mutation in all families. The AIH2 critical region is less than 4 cM and spans a physical distance of approximately 4 Mb as judged from radiation hybrid maps. A YAC contig over the AIH2 critical region including several potential candidate genes was constructed. 35 refs., 4 figs., 1 tab.

  4. Use of multispectral satellite imagery and hyperspectral endmember libraries for urban land cover mapping at the metropolitan scale

    Science.gov (United States)

    Priem, Frederik; Okujeni, Akpona; van der Linden, Sebastian; Canters, Frank

    2016-10-01

    The value of characteristic reflectance features for mapping urban materials has been demonstrated in many experiments with airborne imaging spectrometry. Analysis of larger areas requires satellite-based multispectral imagery, which typically lacks the spatial and spectral detail of airborne data. Consequently the need arises to develop mapping methods that exploit the complementary strengths of both data sources. In this paper a workflow for sub-pixel quantification of Vegetation-Impervious-Soil urban land cover is presented, using medium resolution multispectral satellite imagery, hyperspectral endmember libraries and Support Vector Regression. A Landsat 8 Operational Land Imager surface reflectance image covering the greater metropolitan area of Brussels is selected for mapping. Two spectral libraries developed for the cities of Brussels and Berlin based on airborne hyperspectral APEX and HyMap data are used. First the combined endmember library is resampled to match the spectral response of the Landsat sensor. The library is then optimized to avoid spectral redundancy and confusion. Subsequently the spectra of the endmember library are synthetically mixed to produce training data for unmixing. Mapping is carried out using Support Vector Regression models trained with spectra selected through stratified sampling of the mixed library. Validation on building block level (mean size = 46.8 Landsat pixels) yields an overall good fit between reference data and estimation with Mean Absolute Errors of 0.06, 0.06 and 0.08 for vegetation, impervious and soil respectively. Findings of this work may contribute to the use of universal spectral libraries for regional scale land cover fraction mapping using regression approaches.
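
    The synthetic-mixing step of such a workflow can be sketched as follows. This is an illustrative sketch only: the six-band spectra, class names and SVR settings below are invented stand-ins for the resampled endmember library, not values from the study.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)

# Hypothetical 6-band "library" spectra for three cover classes
# (stand-ins for hyperspectral endmembers resampled to a multispectral sensor).
endmembers = {
    "vegetation": np.array([0.05, 0.08, 0.04, 0.45, 0.30, 0.20]),
    "impervious": np.array([0.20, 0.22, 0.25, 0.28, 0.30, 0.32]),
    "soil":       np.array([0.10, 0.15, 0.20, 0.25, 0.35, 0.40]),
}
names = list(endmembers)
E = np.stack([endmembers[n] for n in names])  # shape (3, 6)

def synthetic_mixtures(n):
    """Random fractions on the simplex, linearly mixed plus small noise."""
    f = rng.dirichlet(np.ones(len(names)), size=n)          # (n, 3) fractions
    X = f @ E + rng.normal(0, 0.005, size=(n, E.shape[1]))  # mixed spectra
    return X, f

X_train, f_train = synthetic_mixtures(600)
X_test, f_test = synthetic_mixtures(200)

# One SVR per class: each model regresses one cover fraction on the spectrum
models = [SVR(kernel="rbf", C=10, epsilon=0.01).fit(X_train, f_train[:, i])
          for i in range(len(names))]
pred = np.column_stack([m.predict(X_test) for m in models])
mae = np.abs(pred - f_test).mean(axis=0)
print(dict(zip(names, mae.round(3))))
```

    Training on synthetic mixtures of library spectra is what lets a model trained without any image-specific ground truth unmix real pixels.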

  5. Predicting fecal coliform using the interval-to-interval approach and SWAT in the Miyun watershed, China.

    Science.gov (United States)

    Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu; Qiu, Jiali; Li, Yangyang

    2017-06-01

    Pathogens in manure can cause waterborne-disease outbreaks, serious illness, and even death in humans. Therefore, information about the transformation and transport of bacteria is crucial for determining their source. In this study, the Soil and Water Assessment Tool (SWAT) was applied to simulate fecal coliform bacteria load in the Miyun Reservoir watershed, China. The data for the fecal coliform were obtained at three sampling sites, Chenying (CY), Gubeikou (GBK), and Xiahui (XH). The calibration processes of the fecal coliform were conducted using the CY and GBK sites, and validation was conducted at the XH site. An interval-to-interval approach was designed and incorporated into the processes of fecal coliform calibration and validation. The 95% confidence interval of the predicted values and the 95% confidence interval of measured values were considered during calibration and validation in the interval-to-interval approach. Compared with the traditional point-to-point comparison, this method can improve simulation accuracy. The results indicated that the simulation of fecal coliform using the interval-to-interval approach was reasonable for the watershed. This method could provide a new research direction for future model calibration and validation studies.
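
    A minimal illustration of comparing intervals rather than points, assuming normal-approximation 95% confidence intervals for the mean of replicate measured and predicted values (the paper's exact interval construction may differ):

```python
import numpy as np

def ci95(x):
    """Normal-approximation 95% confidence interval for the mean of x."""
    x = np.asarray(x, dtype=float)
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
    return m - 1.96 * se, m + 1.96 * se

def intervals_overlap(a, b):
    """True if intervals a = (lo, hi) and b = (lo, hi) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# Illustrative replicate measurements vs. model predictions at one site
measured  = [2400., 2600., 2500., 2550., 2450.]   # e.g. cfu/100 mL
predicted = [2300., 2700., 2650., 2500., 2600.]

ci_meas, ci_pred = ci95(measured), ci95(predicted)
print(ci_meas, ci_pred, intervals_overlap(ci_meas, ci_pred))
```

    Under a point-to-point comparison the means (2500 vs 2550) simply differ; under the interval-to-interval view the two 95% intervals overlap, so the simulation is judged consistent with the measurement uncertainty.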

  6. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster data into churner and nonchurner groups and also filter out unrepresentative data or outliers. Then, the clustered data as the outputs are used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Types I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model significantly performs better than the two other hierarchical models.

  7. Regression Association Analysis of Yield-Related Traits with RAPD Molecular Markers in Pistachio (Pistacia vera L.)

    Directory of Open Access Journals (Sweden)

    Saeid Mirzaei

    2017-10-01

    Full Text Available Introduction: The pistachio (Pistacia vera), a member of the cashew family, is a small tree originating from Central Asia and the Middle East. The tree produces seeds that are widely consumed as food. Pistacia vera is often confused with other species in the genus Pistacia that are also known as pistachio. These other species can be distinguished by their geographic distributions and their seeds, which are much smaller and have a soft shell. Continual advances in crop improvement through plant breeding are driven by the available genetic diversity. Therefore, the recognition and measurement of such diversity is crucial to breeding programs. In the past 20 years, the major effort in plant breeding has changed from quantitative to molecular genetics, with emphasis on quantitative trait loci (QTL) identification and marker-assisted selection (MAS). Germplasm-regression-combined association studies not only allow mapping of genes/QTLs with a higher level of confidence, but also allow detection of genes/QTLs which would otherwise escape detection in linkage-based QTL studies built on planned populations. The development of marker-based technology offers a fast, reliable, and easy way to perform multiple regression analysis and comprises an alternative approach to breeding in diverse species of plants. The availability of many markers and morphological traits makes it possible to carry out regression analysis between these markers and morphological traits. Materials and Methods: In this study, 20 genotypes of pistachio were studied and yield-related traits were measured. Young well-expanded leaves were collected for DNA extraction and total genomic DNA was extracted. Genotyping was performed using 15 RAPD primers and PCR amplification products were visualized by gel electrophoresis. The reproducible RAPD fragments were scored as present (1) or absent (0) bands and a binary matrix was constructed for each molecular marker. Association analysis between
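
    The marker-trait regression idea can be sketched as a per-marker scan over a present/absent band matrix. All data below are simulated, and the `marker_scan` helper and the planted effect at marker 3 are illustrative inventions, not results from the pistachio genotypes.

```python
import numpy as np

rng = np.random.default_rng(7)
n_geno, n_marker = 20, 15
M = rng.integers(0, 2, size=(n_geno, n_marker)).astype(float)  # 0/1 RAPD band matrix
effects = np.zeros(n_marker)
effects[3] = 5.0                          # pretend marker 3 is linked to yield
trait = 20.0 + M @ effects + rng.normal(0.0, 1.0, n_geno)

def marker_scan(M, y):
    """Per-marker simple linear regression; returns (index, slope, t-statistic)."""
    stats = []
    for j in range(M.shape[1]):
        X = np.column_stack([np.ones(len(y)), M[:, j]])
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ b
        s2 = resid @ resid / (len(y) - 2)          # residual variance
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        stats.append((j, b[1], b[1] / se))
    return stats

top = max(marker_scan(M, trait), key=lambda s: abs(s[2]))
print("most associated marker:", top[0], "slope:", round(top[1], 2))
```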

  8. Improving model predictions for RNA interference activities that use support vector machine regression by combining and filtering features

    Directory of Open Access Journals (Sweden)

    Peek Andrew S

    2007-06-01

    Full Text Available Abstract Background RNA interference (RNAi) is a naturally occurring phenomenon that results in the suppression of a target RNA sequence utilizing a variety of possible methods and pathways. To dissect the factors that result in effective siRNA sequences, a regression kernel Support Vector Machine (SVM) approach was used to quantitatively model RNA interference activities. Results Eight overall feature mapping methods were compared in their abilities to build SVM regression models that predict published siRNA activities. The primary factors in predictive SVM models are position-specific nucleotide compositions. The secondary factors are position-independent sequence motifs (N-grams) and guide strand to passenger strand sequence thermodynamics. Finally, the factors that are least contributory but are still predictive of efficacy are measures of intramolecular guide strand secondary structure and target strand secondary structure. Of these, the site of the 5'-most base of the guide strand is the most informative. Conclusion The capacity of specific feature mapping methods and their ability to build predictive models of RNAi activity suggests a relative biological importance of these features. Some feature mapping methods are more informative in building predictive models, and overall t-test filtering provides a method to remove some noisy features or make comparisons among datasets. Together, these features can yield predictive SVM regression models with increased predictive accuracy between predicted and observed activities, both within datasets by cross validation and between independently collected RNAi activity datasets. Feature filtering to remove features should be approached carefully, in that it is possible to reduce the feature set size without substantially weakening predictive models, but the features retained in the candidate models become increasingly distinct.
Software to perform feature prediction and SVM training and testing on nucleic acid
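
    The t-test filtering step can be sketched as: split samples into high- and low-activity groups, rank features by |t|, and compare SVM regression accuracy before and after filtering. The data, group split and model settings below are invented for illustration and are not the paper's feature sets.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p_inf, p_noise = 200, 5, 45
X_inf = rng.normal(size=(n, p_inf))          # informative features
X_noise = rng.normal(size=(n, p_noise))      # pure noise features
y = X_inf.sum(axis=1) + rng.normal(0, 0.5, n)  # simulated activity score
X = np.hstack([X_inf, X_noise])

# t-test filter: compare each feature between high- and low-activity groups
hi, lo = y > np.median(y), y <= np.median(y)
tstat = np.array([ttest_ind(X[hi, j], X[lo, j]).statistic
                  for j in range(X.shape[1])])
keep = np.argsort(-np.abs(tstat))[:p_inf]    # retain the strongest features

score_all = cross_val_score(SVR(), X, y, cv=5, scoring="r2").mean()
score_kept = cross_val_score(SVR(), X[:, keep], y, cv=5, scoring="r2").mean()
print(sorted(keep.tolist()), round(score_all, 2), round(score_kept, 2))
```

    On this toy data the filter recovers the informative columns and the reduced model cross-validates better, mirroring the abstract's point that filtering can shrink the feature set without hurting (and often helping) predictive accuracy.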

  9. Mapping specific soil functions based on digital soil property maps

    Science.gov (United States)

    Pásztor, László; Fodor, Nándor; Farkas-Iványi, Kinga; Szabó, József; Bakacsi, Zsófia; Koós, Sándor

    2016-04-01

    Quantification of soil functions and services is a great challenge in itself, even if the spatial relevance is supposed to be identified and regionalized. Proxies and indicators are widely used in ecosystem service mapping. Soil services could also be approximated by elementary soil features. One solution is the association of soil types with services as a basic principle. Soil property maps, however, provide quantified spatial information, which could be utilized more versatilely for the spatial inference of soil functions and services. In the frame of the activities referred to as "Digital, Optimized, Soil Related Maps and Information in Hungary" (DOSoReMI.hu), numerous soil property maps have been compiled so far with proper DSM techniques, partly according to GSM.net specifications, partly by slightly or more strictly changing some of its predefined parameters (depth intervals, pixel size, property etc.). The elaborated maps have been further utilized, since DOSoReMI.hu was also intended to take steps toward the regionalization of higher-level soil information (secondary properties, functions, services). In the meantime the recently started AGRAGIS project requested spatial soil related information in order to estimate agri-environmental impacts of climate change and support the associated vulnerability assessment. One of the most vulnerable services of soils in the context of climate change is their provisioning service. In our work it was approximated by productivity, which was estimated by sequential scenario-based crop modelling. It took into consideration long term (50 years) time series of both measured and predicted climatic parameters as well as accounting for the potential differences in agricultural practice and crop production. The flexible parametrization and multiple results of modelling were then applied for the spatial assessment of sensitivity, vulnerability, exposure and adaptive capacity of soils in the context of the forecasted changes in

  10. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, written in the Xlisp-Stat language and called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  11. [A study on the relationship between postmortem interval and the changes of DNA content in the kidney cellule of rat].

    Science.gov (United States)

    Liu, L; Peng, D B; Liu, Y; Deng, W N; Liu, Y L; Li, J J

    2001-05-01

    To study the changes of DNA content in the kidney cells of rats and their relationship with the postmortem interval, seven nuclear parameters, including area and integral optical density, were selected, and the changes of DNA content in the kidney cells of 15 rats at different intervals between 0 and 48 h postmortem were determined with an automatic TV-image analysis system. The degradation rate of nuclear DNA shows a definite relationship to the early PMI (within 48 h) in rats, yielding a binomial regression equation. Determining the quantity of DNA in the nucleus should be an objective and accurate way to estimate the PMI.
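
    A sketch of the approach with made-up numbers: fit a second-degree ("binomial") polynomial of DNA content against postmortem interval, then invert the fitted curve to estimate PMI from an observed value. The data and the `estimate_pmi` helper are illustrative only, not the study's measurements.

```python
import numpy as np

# Illustrative (made-up) data: integral optical density of nuclear DNA
# declining over the early postmortem interval (0-48 h)
pmi_h = np.array([0, 6, 12, 18, 24, 30, 36, 42, 48], dtype=float)
iod   = np.array([100, 92, 82, 70, 57, 45, 34, 25, 18], dtype=float)

# Second-degree polynomial regression of DNA content on PMI
coeffs = np.polyfit(pmi_h, iod, deg=2)
fit = np.poly1d(coeffs)

def estimate_pmi(iod_obs):
    """Invert the fitted curve: return the PMI (0-48 h) matching an observed IOD."""
    roots = (fit - iod_obs).roots
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= 48]
    return min(real) if real else None

print(round(float(fit(24)), 1), round(estimate_pmi(57.0), 1))
```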

  12. MAP kinase genes and colon and rectal cancer

    Science.gov (United States)

    Slattery, Martha L.

    2012-01-01

    Mitogen-activated protein kinase (MAPK) pathways regulate many cellular functions including cell proliferation, differentiation, migration and apoptosis. We evaluate genetic variation in the c-Jun-N-terminal kinases, p38, and extracellular regulated kinases 1/2 MAPK-signaling pathways and colon and rectal cancer risk using data from population-based case-control studies (colon: n = 1555 cases, 1956 controls; rectal: n = 754 cases, 959 controls). We assess 19 genes (DUSP1, DUSP2, DUSP4, DUSP6, DUSP7, MAP2K1, MAP3K1, MAP3K2, MAP3K3, MAP3K7, MAP3K9, MAP3K10, MAP3K11, MAPK1, MAPK3, MAPK8, MAPK12, MAPK14 and RAF1). MAP2K1 rs8039880 [odds ratio (OR) = 0.57, 95% confidence interval (CI) = 0.38, 0.83; GG versus AA genotype] and MAP3K9 rs11625206 (OR = 1.41, 95% CI = 1.14, 1.76; recessive model) were associated with colon cancer (P adj value rectal cancer (P adj cancer risk. Genetic variants had unique associations with KRAS, TP53 and CIMP+ tumors. DUSP2 rs1724120 [hazard rate ratio (HRR) = 0.72, 95%CI = 0.54, 0.96; AA versus GG/GA), MAP3K10 rs112956 (HRR = 1.40, 95% CI = 1.10, 1.76; CT/TT versus CC) and MAP3K11 (HRR = 1.76, 95% CI 1.18, 2.62 TT versus GG/GT) influenced survival after diagnosis with colon cancer; MAP2K1 rs8039880 (HRR = 2.53, 95% CI 1.34, 4.79 GG versus AG/GG) and Raf1 rs11923427 (HRR = 0.59 95% CI = 0.40, 0.86; AA versus TT/TA) were associated with rectal cancer survival. These data suggest that genetic variation in the MAPK-signaling pathway influences colorectal cancer risk and survival after diagnosis. Associations may be modified by lifestyle factors that influence inflammation and oxidative stress. PMID:23027623
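
    The odds ratios and 95% confidence intervals quoted above come from regression models; the basic calculation for a single 2x2 genotype-by-status table (Woolf's log-OR method) looks like this. The counts are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical genotype counts for one variant
print(odds_ratio_ci(40, 160, 80, 120))
```

    A CI entirely below 1 (as here) indicates a protective association, analogous to the MAP2K1 rs8039880 result above; in practice such estimates are adjusted for covariates within a logistic model rather than computed from raw counts.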

  13. The development of flood map in Malaysia

    Science.gov (United States)

    Zakaria, Siti Fairus; Zin, Rosli Mohamad; Mohamad, Ismail; Balubaid, Saeed; Mydin, Shaik Hussein; MDR, E. M. Roodienyanto

    2017-11-01

    In Malaysia, flash floods are common occurrences throughout the year in flood prone areas. In terms of flood extent, flash floods affect smaller areas but, because of their tendency to occur in densely urbanized areas, the value of damaged property is high and disruption to traffic flow and businesses is substantial. However, in river floods, especially the river floods of Kelantan and Pahang, the flood extent is widespread and can extend over 1,000 square kilometers. Although the value of property and the density of the affected population are lower, the damage inflicted by these floods can also be high because the area affected is large. In order to combat these floods, various flood mitigation measures have been carried out. Structural flood mitigation alone can only provide protection levels from 10 to 100 years Average Recurrence Interval (ARI). One of the economically effective non-structural approaches in flood mitigation and flood management is geospatial technology, which involves flood forecasting and warning services for the flood prone areas. This approach, which involves the use of a Geographical Information Flood Forecasting system, also includes the generation of a series of flood maps. There are three types of flood maps, namely the Flood Hazard Map, Flood Risk Map and Flood Evacuation Map. A Flood Hazard Map is used to determine areas susceptible to flooding when discharge from a stream exceeds the bank-full stage. Early warnings of incoming flood events will enable flood victims to prepare themselves before flooding occurs. Property and lives can be saved by keeping movable property above the flood levels and, if necessary, evacuating the area early. With respect to flood fighting, an early warning with reference to a series of flood maps, including the flood hazard map, flood risk map and flood evacuation map of the approaching flood, should be able to alert the organization in charge of the flood fighting actions and the authority to

  14. HIV intertest interval among MSM in King County, Washington.

    Science.gov (United States)

    Katz, David A; Dombrowski, Julia C; Swanson, Fred; Buskin, Susan E; Golden, Matthew R; Stekler, Joanne D

    2013-02-01

    The authors examined temporal trends and correlates of HIV testing frequency among men who have sex with men (MSM) in King County, Washington. The authors evaluated data from MSM testing for HIV at the Public Health-Seattle & King County (PHSKC) STD Clinic and Gay City Health Project (GCHP) and testing history data from MSM in PHSKC HIV surveillance. The intertest interval (ITI) was defined as the number of days between the last negative HIV test and the current testing visit or first positive test. Correlates of the log(10)-transformed ITI were determined using generalised estimating equations linear regression. Between 2003 and 2010, the median ITI among MSM seeking HIV testing at the STD Clinic and GCHP were 215 (IQR: 124-409) and 257 (IQR: 148-503) days, respectively. In multivariate analyses, younger age, having only male partners and reporting ≥10 male sex partners in the last year were associated with shorter ITIs at both testing sites (pGCHP attendees, having a regular healthcare provider, seeking a test as part of a regular schedule and inhaled nitrite use in the last year were also associated with shorter ITIs (pGCHP (median 359 vs 255 days, p=0.02). Although MSM in King County appear to be testing at frequent intervals, further efforts are needed to reduce the time that HIV-infected persons are unaware of their status.

  15. Maps and plans reliability in tourism activities

    Directory of Open Access Journals (Sweden)

    Олександр Донцов

    2016-10-01

    Full Text Available The paper is devoted to the creation of an effective system of mapping at all levels of tourist-excursion activity that will boost the promotion of the tourist product in domestic and foreign tourist markets. The State Scientific-Production Enterprise «Kartographia» actively participates in cartographic provision for tourism by producing travel pieces, survey maps, large-scale maps, route maps, atlases, travel guides and city plans. It produces maps of varied content covering the territory of Ukraine, its individual regions and cities of interest for tourist excursions. The list and scope of cartographic products prepared for publication and released over the last five years is presented. The development of new types of tourism encourages publishers to create various cartographic products for the needs of tourists, guaranteeing high accuracy, reliability of information and ease of use. The variety of scientific and practical problems in tourism and excursion activities that are solved using maps and plans makes it difficult to determine criteria for assessing their reliability. The author proposes to introduce the concept of «relevance», understood as a map's suitability for solving specific problems. Peer review is then based on the suitability of maps for obtaining objective results according to the following criteria: appropriateness of the map to the target tasks (area, theme, destination); accuracy of the given parameters (projection, scale, height interval); year of survey or of mapping; selection methods and the algorithm for processing measurement results; availability of assistive devices (instrumentation, computer technology, simulation devices). These criteria make the reliability and accuracy of the result as acceptable to consumers as possible. The author proposes a set of measures aimed at improving the content, quality and reliability of cartographic production.

  16. Dependency of magnetocardiographically determined fetal cardiac time intervals on gestational age, gender and postnatal biometrics in healthy pregnancies

    Directory of Open Access Journals (Sweden)

    Geue Daniel

    2004-04-01

    Full Text Available Abstract Background Magnetocardiography enables the precise determination of fetal cardiac time intervals (CTI) as early as the second trimester of pregnancy. It has been shown that fetal CTI change in the course of gestation. The aim of this work was to investigate the dependency of fetal CTI on gestational age, gender and postnatal biometric data in a substantial sample of subjects during normal pregnancy. Methods A total of 230 fetal magnetocardiograms were obtained in 47 healthy fetuses between the 15th and 42nd week of gestation. In each recording, after subtraction of the maternal cardiac artifact and the identification of fetal beats, fetal PQRST courses were signal averaged. On the basis of the wave onsets and ends detected therein, the following CTI were determined: P wave, PR interval, PQ interval, QRS complex, ST segment, T wave, QT and QTc interval. Using regression analysis, the dependency of the CTI was examined with respect to gestational age, gender and postnatal biometric data. Results Atrioventricular conduction and ventricular depolarization times could be determined dependably whereas the T wave was often difficult to detect. Linear and nonlinear regression analysis established a strong dependency on age for the P wave and QRS complex (r2 = 0.67 and r2 = 0.66) and a weaker dependency for other intervals (r2 = 0.21, r2 = 0.13), with a gender difference in QRS complex duration emerging in later weeks of gestation. Conclusion We conclude that (1) from approximately the 18th week to term, fetal CTI which quantify depolarization times can be reliably determined using magnetocardiography, (2) the P wave and QRS complex duration show a high dependency on age which to a large part reflects fetal growth and (3) fetal gender plays a role in QRS complex duration in the third trimester. Fetal development is thus in part reflected in the CTI and may be useful in the identification of intrauterine growth retardation.

  17. Relative Importance for Linear Regression in R: The Package relaimpo

    Directory of Open Access Journals (Sweden)

    Ulrike Gromping

    2006-09-01

    Full Text Available Relative importance is a topic that has seen a lot of interest in recent years, particularly in applied work. The R package relaimpo implements six different metrics for assessing relative importance of regressors in the linear model, two of which are recommended - averaging over orderings of regressors and a newly proposed metric (Feldman 2005) called pmvd. Apart from delivering the metrics themselves, relaimpo also provides (exploratory) bootstrap confidence intervals. This paper offers a brief tutorial introduction to the package. The methods and relaimpo’s functionality are illustrated using the data set swiss that is generally available in R. The paper targets readers who have a basic understanding of multiple linear regression. For the background of more advanced aspects, references are provided.
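
    relaimpo itself is an R package, but the recommended averaging-over-orderings metric (known as LMG) is straightforward to sketch in another language: average each regressor's marginal R^2 contribution over all orderings of entry into the model. The function names and simulated data below are illustrative, not relaimpo's API.

```python
import numpy as np
from itertools import permutations

def r2(X, y, cols):
    """R^2 of OLS regression of y on the given columns (with intercept)."""
    Z = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    b = np.linalg.lstsq(Z, y, rcond=None)[0]
    resid = y - Z @ b
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def lmg(X, y):
    """Average each regressor's marginal R^2 contribution over all orderings."""
    p = X.shape[1]
    contrib = np.zeros(p)
    orders = list(permutations(range(p)))
    for order in orders:
        used, prev = [], 0.0
        for j in order:
            used.append(j)
            cur = r2(X, y, used)
            contrib[j] += cur - prev   # marginal gain when j enters the model
            prev = cur
    return contrib / len(orders)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + 1 * X[:, 1] + rng.normal(size=200)   # X[:, 2] is irrelevant
shares = lmg(X, y)
print(shares.round(3), round(float(shares.sum()), 3))
```

    A useful property of this decomposition, which the test below checks, is that the shares sum exactly to the full-model R^2.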

  18. Confidence intervals for modeling anthocyanin retention in grape pomace during nonisothermal heating.

    Science.gov (United States)

    Mishra, D K; Dolan, K D; Yang, L

    2008-01-01

    Degradation of nutraceuticals in low- and intermediate-moisture foods heated at high temperature (>100 degrees C) is difficult to model because of the nonisothermal condition. Isothermal experiments above 100 degrees C are difficult to design because they require high pressure and small sample size in sealed containers. Therefore, a nonisothermal method was developed to estimate the thermal degradation kinetic parameters of nutraceuticals and determine the confidence intervals for the parameters and the predicted Y (concentration). Grape pomace at 42% moisture content (wb) was heated in sealed 202 x 214 steel cans in a steam retort at 126.7 degrees C for > 30 min. Can center temperature was measured by thermocouple and predicted using Comsol software. Thermal conductivity (k) and specific heat (C(p)) were estimated as quadratic functions of temperature using Comsol and nonlinear regression. The k and C(p) functions were then used to predict temperature inside the grape pomace during retorting. Similar heating experiments were run at different time-temperature treatments from 8 to 25 min for kinetic parameter estimation. Anthocyanin concentration in the grape pomace was measured using HPLC. The degradation rate constant (k(110 degrees C)) and activation energy (E(a)) were estimated using nonlinear regression. The thermophysical property estimates at 100 degrees C were k = 0.501 W/m degrees C, Cp = 3600 J/kg, and the kinetic parameters were k(110 degrees C) = 0.0607/min and E(a) = 65.32 kJ/mol. The 95% confidence intervals for the parameters and the confidence bands and prediction bands for anthocyanin retention were plotted. These methods are useful for thermal processing design for nutraceutical products.
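
    The parameter-estimation idea can be sketched as: predict first-order retention under a nonisothermal temperature history via an Arrhenius rate referenced to 110 degrees C, fit k_ref and Ea by nonlinear regression, and read approximate 95% confidence intervals off the covariance matrix. The temperature profile, sampling times and noise level below are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314e-3  # gas constant, kJ/(mol K)

def temp_profile(t):
    """Simplified can-center temperature (deg C) during retorting, t in min (assumed)."""
    return 25.0 + (126.7 - 25.0) * (1.0 - np.exp(-t / 8.0))

def retention(t_end, k_ref, Ea, T_ref=110.0, n_steps=400):
    """First-order retention C/C0 after nonisothermal heating to time t_end,
    with an Arrhenius rate k(T) referenced to T_ref (deg C)."""
    out = []
    for te in np.atleast_1d(t_end):
        t = np.linspace(0.0, te, n_steps)
        TK = temp_profile(t) + 273.15
        k = k_ref * np.exp(-(Ea / R) * (1.0 / TK - 1.0 / (T_ref + 273.15)))
        integral = np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(t))  # trapezoid rule
        out.append(np.exp(-integral))
    return np.array(out)

# Synthetic "measurements" generated from known parameters plus noise
rng = np.random.default_rng(3)
times = np.array([8.0, 10.0, 12.0, 15.0, 18.0, 21.0, 25.0])
y_obs = retention(times, 0.06, 65.0) + rng.normal(0.0, 0.01, len(times))

popt, pcov = curve_fit(retention, times, y_obs, p0=(0.1, 50.0),
                       bounds=([1e-3, 10.0], [1.0, 200.0]))
se = np.sqrt(np.diag(pcov))
ci = [(p - 1.96 * s, p + 1.96 * s) for p, s in zip(popt, se)]
print(popt.round(3), ci)
```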

  19. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
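
    The evaporative-demand step can be sketched directly from the standard Hargreaves form, ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin). The numbers below are illustrative, and Ra is assumed to be already expressed in mm/day equivalent.

```python
import math

def hargreaves_et0(tmax, tmin, ra):
    """Hargreaves reference evapotranspiration (mm/day).
    tmax/tmin in deg C; ra = extraterrestrial radiation in mm/day equivalent."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

def monthly_balance(precip_mm, et0_mm_day, days):
    """Monthly water balance: precipitation minus atmospheric evaporative demand."""
    return precip_mm - et0_mm_day * days

et0 = hargreaves_et0(tmax=24.0, tmin=10.0, ra=12.0)
print(round(et0, 2), round(monthly_balance(80.0, et0, 30), 1))
```

    A negative balance for a cell and month indicates that evaporative demand exceeds precipitation, which is the quantity mapped per 1 km cell in the study.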

  20. The effect of postoperative medical treatment on left ventricular mass regression after aortic valve replacement.

    Science.gov (United States)

    Helder, Meghana R K; Ugur, Murat; Bavaria, Joseph E; Kshettry, Vibhu R; Groh, Mark A; Petracek, Michael R; Jones, Kent W; Suri, Rakesh M; Schaff, Hartzell V

    2015-03-01

    The study objective was to analyze factors associated with left ventricular mass regression in patients undergoing aortic valve replacement with a newer bioprosthesis, the Trifecta valve pericardial bioprosthesis (St Jude Medical Inc, St Paul, Minn). A total of 444 patients underwent aortic valve replacement with the Trifecta bioprosthesis from 2007 to 2009 at 6 US institutions. The clinical and echocardiographic data of 200 of these patients who had left ventricular hypertrophy and follow-up studies 1 year postoperatively were reviewed and compared to analyze factors affecting left ventricular mass regression. Mean (standard deviation) age of the 200 study patients was 73 (9) years, 66% were men, and 92% had pure or predominant aortic valve stenosis. Complete left ventricular mass regression was observed in 102 patients (51%) by 1 year postoperatively. In univariate analysis, male sex, implantation of larger valves, larger left ventricular end-diastolic volume, and beta-blocker or calcium-channel blocker treatment at dismissal were significantly associated with complete mass regression. In the multivariate model, odds ratios (95% confidence intervals) indicated that male sex (3.38 [1.39-8.26]) and beta-blocker or calcium-channel blocker treatment at dismissal (3.41 [1.40-8.34]) were associated with increased probability of complete left ventricular mass regression. Patients with higher preoperative systolic blood pressure were less likely to have complete left ventricular mass regression (0.98 [0.97-0.99]). Among patients with left ventricular hypertrophy, postoperative treatment with beta-blockers or calcium-channel blockers may enhance mass regression. This highlights the need for close medical follow-up after operation. Labeled valve size was not predictive of left ventricular mass regression. Copyright © 2015 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  1. Stabilizing embedology: Geometry-preserving delay-coordinate maps

    Science.gov (United States)

    Eftekhari, Armin; Yap, Han Lun; Wakin, Michael B.; Rozell, Christopher J.

    2018-02-01

    Delay-coordinate mapping is an effective and widely used technique for reconstructing and analyzing the dynamics of a nonlinear system based on time-series outputs. The efficacy of delay-coordinate mapping has long been supported by Takens' embedding theorem, which guarantees that delay-coordinate maps use the time-series output to provide a reconstruction of the hidden state space that is a one-to-one embedding of the system's attractor. While this topological guarantee ensures that distinct points in the reconstruction correspond to distinct points in the original state space, it does not characterize the quality of this embedding or illuminate how the specific parameters affect the reconstruction. In this paper, we extend Takens' result by establishing conditions under which delay-coordinate mapping is guaranteed to provide a stable embedding of a system's attractor. Beyond only preserving the attractor topology, a stable embedding preserves the attractor geometry by ensuring that distances between points in the state space are approximately preserved. In particular, we find that delay-coordinate mapping stably embeds an attractor of a dynamical system if the stable rank of the system is large enough to be proportional to the dimension of the attractor. The stable rank reflects the relation between the sampling interval and the number of delays in delay-coordinate mapping. Our theoretical findings give guidance to choosing system parameters, echoing the tradeoff between irrelevancy and redundancy that has been heuristically investigated in the literature. Our initial result is stated for attractors that are smooth submanifolds of Euclidean space, with extensions provided for the case of strange attractors.
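
    The delay-coordinate map itself is simple to write down. A sketch for a scalar series follows, with illustrative choices of the number of delays and the delay tau (the sampling-interval/number-of-delays tradeoff discussed above):

```python
import numpy as np

def delay_embed(x, n_delays, tau):
    """Map a scalar time series into delay coordinates:
    row i -> (x[i], x[i + tau], ..., x[i + (n_delays - 1) * tau])."""
    m = len(x) - (n_delays - 1) * tau
    return np.column_stack([x[j * tau : j * tau + m] for j in range(n_delays)])

# Observe one coordinate of a simple harmonic system
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)

Y = delay_embed(x, n_delays=2, tau=50)   # tau*dt is close to a quarter period
radii = np.sqrt((Y ** 2).sum(axis=1))
print(Y.shape, float(radii.min()), float(radii.max()))
```

    Because tau*dt here is close to a quarter period, the two delay coordinates approximate (sin t, cos t) and the reconstruction is nearly a circle of radius 1, i.e. distances on the original attractor are approximately preserved; a poor tau would stretch the reconstruction toward a degenerate diagonal.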

  2. An Interval-Valued Approach to Business Process Simulation Based on Genetic Algorithms and the BPMN

    Directory of Open Access Journals (Sweden)

    Mario G.C.A. Cimino

    2014-05-01

    Full Text Available Simulating organizational processes characterized by interacting human activities, resources, business rules and constraints, is a challenging task, because of the inherent uncertainty, inaccuracy, variability and dynamicity. With regard to this problem, currently available business process simulation (BPS) methods and tools are unable to efficiently capture the process behavior along its lifecycle. In this paper, a novel approach of BPS is presented. To build and manage simulation models according to the proposed approach, a simulation system is designed, developed and tested on pilot scenarios, as well as on real-world processes. The proposed approach exploits interval-valued data to represent model parameters, in place of conventional single-valued or probability-valued parameters. Indeed, an interval-valued parameter is comprehensive; it is the easiest to understand and express and the simplest to process, among multi-valued representations. In order to compute the interval-valued output of the system, a genetic algorithm is used. The resulting process model allows forming mappings at different levels of detail and, therefore, at different model resolutions. The system has been developed as an extension of a publicly available simulation engine, based on the Business Process Model and Notation (BPMN) standard.

  3. Security scheme in IMDD-OFDM-PON system with the chaotic pilot interval and scrambling

    Science.gov (United States)

    Chen, Qianghua; Bi, Meihua; Fu, Xiaosong; Lu, Yang; Zeng, Ran; Yang, Guowei; Yang, Xuelin; Xiao, Shilin

    2018-01-01

    In this paper, a random chaotic pilot interval and permutations scheme without any requirement of redundant sideband information is first proposed for the physical layer security-enhanced intensity modulation direct detection orthogonal frequency division multiplexing passive optical network (IMDD-OFDM-PON) system. With the help of the position feature of inserting the pilot, a simple logistic chaos map is used to generate the random pilot interval and scramble the chaotic subcarrier allocation of each column of pilot data for improving the physical layer confidentiality. Due to the dynamic chaotic permutations of pilot data, an enhanced key space of ∼10^3303 is achieved in OFDM-PON. Moreover, the transmission experiment of 10-Gb/s 16-QAM encrypted OFDM data is successfully demonstrated over 20-km single-mode fiber, which indicates that the proposed scheme not only improves the system security, but also can achieve the same performance as the common IMDD-OFDM-PON system without an encryption scheme.
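    The pilot-position scrambling described above can be sketched as follows. The logistic map is the one named in the abstract, but the map parameters, the interval range and the subcarrier count here are illustrative assumptions, not the paper's actual key settings.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = r * x[k] * (1 - x[k])
    return x

# Hypothetical illustration: derive pilot spacings in {4, ..., 11} and a
# scrambling permutation for 32 subcarriers from one chaotic sequence.
# (x0, r) would act as the shared secret key between OLT and ONU.
seq = logistic_sequence(x0=0.3571, r=3.99, n=40)
intervals = (np.floor(seq[:8] * 8) + 4).astype(int)   # random pilot intervals
perm = np.argsort(seq[8:])                            # subcarrier permutation
```

    Because the receiver can regenerate `seq` from the key, no pilot-position side information needs to be transmitted, which is the redundancy saving the abstract emphasizes.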

  4. An integrated genetic map based on four mapping populations and quantitative trait loci associated with economically important traits in watermelon (Citrullus lanatus)

    Science.gov (United States)

    2014-01-01

    Background Modern watermelon (Citrullus lanatus L.) cultivars share a narrow genetic base due to many years of selection for desirable horticultural qualities. Wild subspecies within C. lanatus are important potential sources of novel alleles for watermelon breeding, but successful trait introgression into elite cultivars has had limited success. The application of marker assisted selection (MAS) in watermelon is yet to be realized, mainly due to the past lack of high quality genetic maps. Recently, a number of useful maps have become available, however these maps have few common markers, and were constructed using different marker sets, thus, making integration and comparative analysis among maps difficult. The objective of this research was to use single-nucleotide polymorphism (SNP) anchor markers to construct an integrated genetic map for C. lanatus. Results Under the framework of the high density genetic map, an integrated genetic map was constructed by merging data from four independent mapping experiments using a genetically diverse array of parental lines, which included three subspecies of watermelon. The 698 simple sequence repeat (SSR), 219 insertion-deletion (InDel), 36 structure variation (SV) and 386 SNP markers from the four maps were used to construct an integrated map. This integrated map contained 1339 markers, spanning 798 cM with an average marker interval of 0.6 cM. Fifty-eight previously reported quantitative trait loci (QTL) for 12 traits in these populations were also integrated into the map. In addition, new QTL identified for brix, fructose, glucose and sucrose were added. Some QTL associated with economically important traits detected in different genetic backgrounds mapped to similar genomic regions of the integrated map, suggesting that such QTL are responsible for the phenotypic variability observed in a broad array of watermelon germplasm. Conclusions The integrated map described herein enhances the utility of genomic tools over

  5. An integrated genetic map based on four mapping populations and quantitative trait loci associated with economically important traits in watermelon (Citrullus lanatus).

    Science.gov (United States)

    Ren, Yi; McGregor, Cecilia; Zhang, Yan; Gong, Guoyi; Zhang, Haiying; Guo, Shaogui; Sun, Honghe; Cai, Wantao; Zhang, Jie; Xu, Yong

    2014-01-20

    Modern watermelon (Citrullus lanatus L.) cultivars share a narrow genetic base due to many years of selection for desirable horticultural qualities. Wild subspecies within C. lanatus are important potential sources of novel alleles for watermelon breeding, but successful trait introgression into elite cultivars has had limited success. The application of marker assisted selection (MAS) in watermelon is yet to be realized, mainly due to the past lack of high quality genetic maps. Recently, a number of useful maps have become available, however these maps have few common markers, and were constructed using different marker sets, thus, making integration and comparative analysis among maps difficult. The objective of this research was to use single-nucleotide polymorphism (SNP) anchor markers to construct an integrated genetic map for C. lanatus. Under the framework of the high density genetic map, an integrated genetic map was constructed by merging data from four independent mapping experiments using a genetically diverse array of parental lines, which included three subspecies of watermelon. The 698 simple sequence repeat (SSR), 219 insertion-deletion (InDel), 36 structure variation (SV) and 386 SNP markers from the four maps were used to construct an integrated map. This integrated map contained 1339 markers, spanning 798 cM with an average marker interval of 0.6 cM. Fifty-eight previously reported quantitative trait loci (QTL) for 12 traits in these populations were also integrated into the map. In addition, new QTL identified for brix, fructose, glucose and sucrose were added. Some QTL associated with economically important traits detected in different genetic backgrounds mapped to similar genomic regions of the integrated map, suggesting that such QTL are responsible for the phenotypic variability observed in a broad array of watermelon germplasm. 
The integrated map described herein enhances the utility of genomic tools over previous watermelon genetic maps. A

  6. Mapping out Map Libraries

    Directory of Open Access Journals (Sweden)

    Ferjan Ormeling

    2008-09-01

    Full Text Available Discussing the requirements for map data quality, map users and their library/archives environment, the paper focuses on the metadata the user would need for a correct and efficient interpretation of the map data. For such a correct interpretation, knowledge of the rules and guidelines according to which the topographers/cartographers work (such as the kind of data categories to be collected), and the degree to which these rules and guidelines were indeed followed, are essential. This is not only valid for the old maps stored in our libraries and archives, but perhaps even more so for the new digital files, as these are the format in which we now have to access our geospatial data. As this would be too much to ask from map librarians/curators, some sort of web 2.0 environment is sought where comments about data quality, completeness and up-to-dateness from knowledgeable map users regarding the specific maps or map series studied can be collected and tagged to scanned versions of these maps on the web. In order not to be subject to the same disadvantages as Wikipedia, where the ‘communis opinio’, rather than scholarship, seems to be decisive, some checking by map curators of this tagged map-use information would still be needed. Cooperation between map curators and the International Cartographic Association (ICA) Map and Spatial Data Use Commission to this end is suggested.

  7. Construction of a high-density genetic map using specific length amplified fragment markers and identification of a quantitative trait locus for anthracnose resistance in walnut (Juglans regia L.).

    Science.gov (United States)

    Zhu, Yufeng; Yin, Yanfei; Yang, Keqiang; Li, Jihong; Sang, Yalin; Huang, Long; Fan, Shu

    2015-08-18

    Walnut (Juglans regia, 2n = 32, approximately 606 Mb per 1C genome) is an economically important tree crop. Resistance to anthracnose, caused by Colletotrichum gloeosporioides, is a major objective of walnut genetic improvement in China. The recently developed specific length amplified fragment sequencing (SLAF-seq) is an efficient strategy that can obtain large numbers of markers with sufficient sequence information to construct high-density genetic maps and permits detection of quantitative trait loci (QTLs) for molecular breeding. SLAF-seq generated 161.64 M paired-end reads. 153,820 SLAF markers were obtained, of which 49,174 were polymorphic. 13,635 polymorphic markers were sorted into five segregation types and 2,577 markers of them were used to construct genetic linkage maps: 2,395 of these fell into 16 linkage groups (LGs) for the female map, 448 markers for the male map, and 2,577 markers for the integrated map. Taking into account the size of all LGs, the marker coverage was 2,664.36 cM for the female map, 1,305.58 cM for the male map, and 2,457.82 cM for the integrated map. The average intervals between two adjacent mapped markers were 1.11 cM, 2.91 cM and 0.95 cM for three maps, respectively. 'SNP_only' markers accounted for 89.25% of the markers on the integrated map. Mapping markers contained 5,043 single nucleotide polymorphisms (SNPs) loci, which corresponded to two SNP loci per SLAF marker. According to the integrated map, we used interval mapping (Logarithm of odds, LOD > 3.0) to detect our quantitative trait. One QTL was detected for anthracnose resistance. The interval of this QTL ranged from 165.51 cM to 176.33 cM on LG14, and ten markers in this interval that were above the threshold value were considered to be linked markers to the anthracnose resistance trait. The phenotypic variance explained by each marker ranged from 16.2 to 19.9%, and their LOD scores varied from 3.22 to 4.04. High-density genetic maps for walnut containing 16
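    The QTL detection step above uses regression-style interval mapping with an LOD threshold of 3.0. As a hedged sketch of how a single-position LOD score is computed in the regression (Haley-Knott-style) formulation, on simulated data rather than the walnut population:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
g = rng.integers(0, 2, n)              # marker genotype, backcross-style coding
y = 0.8 * g + rng.normal(0, 1, n)      # phenotype with a QTL effect at the marker

# Null model: phenotype mean only
rss0 = np.sum((y - y.mean()) ** 2)

# Full model: ordinary least-squares regression of phenotype on genotype
X = np.column_stack([np.ones(n), g])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss1 = np.sum((y - X @ beta) ** 2)

# Regression-based LOD score at this position
lod = (n / 2) * np.log10(rss0 / rss1)
```

    Scanning such scores along each linkage group and keeping intervals where LOD exceeds the threshold (here, LOD > 3.0) yields candidate QTL regions like the 165.51-176.33 cM interval reported on LG14.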

  8. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
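    To make the distinction concrete, here is a minimal sketch of a model-based (parametric) bootstrap for logistic regression in the error-free case: new responses are simulated from the fitted model with the design held fixed, instead of resampling (x, y) pairs. The paper's actual methods additionally resample from the measurement-error model using replicate measures, which this sketch omits; all data here are simulated.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson maximum-likelihood fit (X includes an intercept column)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])          # observed information
        b += np.linalg.solve(H, X.T @ (y - p))
    return b

rng = np.random.default_rng(7)
n = 400
x = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), x])
p_true = 1 / (1 + np.exp(-(0.2 + 0.9 * x)))
y = (rng.random(n) < p_true).astype(float)

b_hat = fit_logistic(X, y)

# Model-based bootstrap: simulate y* from the FITTED model, refit each time
B = 200
p_hat = 1 / (1 + np.exp(-X @ b_hat))
boots = np.empty((B, 2))
for i in range(B):
    y_star = (rng.random(n) < p_hat).astype(float)
    boots[i] = fit_logistic(X, y_star)

se = boots.std(axis=0)                      # bootstrap standard errors
bias = boots.mean(axis=0) - b_hat           # bootstrap bias estimate
```

    The bias estimate is the feature the simple (pair-resampling) bootstrap cannot provide in non-linear models, which is the gap the article addresses.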

  9. An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.

    Science.gov (United States)

    Liu, Zhiyuan; Wang, Changhui

    2015-10-23

    In this paper, a new method for mass air flow (MAF) sensor error compensation and an online updating error map (or lookup table) due to installation and aging in a diesel engine is developed. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against the simulation data from engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method.
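    The key representational step, writing a bilinearly interpolated 2D lookup table as a dot product between a membership (regression) vector and the flattened parameter vector, can be sketched as below. Grid values and units are illustrative, not the engine's actual calibration.

```python
import numpy as np

def membership_vector(q, w, q_grid, w_grid):
    """Bilinear-interpolation weights over a 2D grid, flattened so that the
    map output is phi(q, w) . theta, with theta the flattened table."""
    i = np.clip(np.searchsorted(q_grid, q) - 1, 0, len(q_grid) - 2)
    j = np.clip(np.searchsorted(w_grid, w) - 1, 0, len(w_grid) - 2)
    a = (q - q_grid[i]) / (q_grid[i + 1] - q_grid[i])
    b = (w - w_grid[j]) / (w_grid[j + 1] - w_grid[j])
    phi = np.zeros((len(q_grid), len(w_grid)))
    phi[i, j], phi[i + 1, j] = (1 - a) * (1 - b), a * (1 - b)
    phi[i, j + 1], phi[i + 1, j + 1] = (1 - a) * b, a * b
    return phi.ravel()

q_grid = np.array([0.0, 10.0, 20.0, 30.0])    # fuel mass (illustrative units)
w_grid = np.array([800.0, 1600.0, 2400.0])    # engine speed (illustrative units)
theta = np.arange(12, dtype=float)            # flattened 2D error map
phi = membership_vector(15.0, 1200.0, q_grid, w_grid)
value = phi @ theta                           # interpolated sensor-error value
```

    Because the map output is linear in `theta`, the table entries can sit in the parameter vector of an adaptive observer and be updated online, which is what makes the joint state/parameter estimation in the paper tractable.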

  10. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis.

    Science.gov (United States)

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Unwanted pregnancy not intended by at least one of the parents has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by the stratified and cluster sampling; relevant variables were measured and for prediction of unwanted pregnancy, logistic regression, discriminant analysis, and probit regression models and SPSS software version 21 were used. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended.
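    The models above are compared by the area under the ROC curve (0.735, 0.733 and 0.680). As a reminder of how that metric is computed from any classifier's scores, here is a minimal sketch via the Mann-Whitney rank statistic; this simplified version does not average tied ranks.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # ranks 1..n, ascending score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # U statistic for the positive class, normalized to [0, 1]
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

    An AUC of 0.5 is chance level, so the gap between 0.735 (logistic regression) and 0.680 (discriminant analysis) reported above is modest but real.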

  11. Wind Power Ramp Events Prediction with Hybrid Machine Learning Regression Techniques and Reanalysis Data

    Directory of Open Access Journals (Sweden)

    Laura Cornejo-Bueno

    2017-11-01

    Full Text Available Wind Power Ramp Events (WPREs) are large fluctuations of wind power in a short time interval, which lead to strong, undesirable variations in the electric power produced by a wind farm. Their accurate prediction is important in the effort of efficiently integrating wind energy in the electric system, without considerably affecting its stability, robustness and resilience. In this paper, we tackle the problem of predicting WPREs by applying Machine Learning (ML) regression techniques. Our approach consists of using variables from atmospheric reanalysis data as predictive inputs for the learning machine, which opens the possibility of hybridizing numerical-physical weather models with ML techniques for WPREs prediction in real systems. Specifically, we have explored the feasibility of a number of state-of-the-art ML regression techniques, such as support vector regression, artificial neural networks (multi-layer perceptrons and extreme learning machines) and Gaussian processes to solve the problem. Furthermore, the ERA-Interim reanalysis from the European Center for Medium-Range Weather Forecasts is the one used in this paper because of its accuracy and high resolution (in both spatial and temporal domains). Aiming at validating the feasibility of our predicting approach, we have carried out an extensive experimental work using real data from three wind farms in Spain, discussing the performance of the different ML regression techniques tested in this wind power ramp event prediction problem.

  12. New algorithm improves fine structure of the barley consensus SNP map

    Directory of Open Access Journals (Sweden)

    Endelman Jeffrey B

    2011-08-01

    Full Text Available Abstract Background The need to integrate information from multiple linkage maps is a long-standing problem in genetics. One way to visualize the complex ordinal relationships is with a directed graph, where each vertex in the graph is a bin of markers. When there are no ordering conflicts between the linkage maps, the result is a directed acyclic graph, or DAG, which can then be linearized to produce a consensus map. Results New algorithms for the simplification and linearization of consensus graphs have been implemented as a package for the R computing environment called DAGGER. The simplified consensus graphs produced by DAGGER exactly capture the ordinal relationships present in a series of linkage maps. Using either linear or quadratic programming, DAGGER generates a consensus map with minimum error relative to the linkage maps while remaining ordinally consistent with them. Both linearization methods produce consensus maps that are compressed relative to the mean of the linkage maps. After rescaling, however, the consensus maps had higher accuracy (and higher marker density) than the individual linkage maps in genetic simulations. When applied to four barley linkage maps genotyped at nearly 3000 SNP markers, DAGGER produced a consensus map with improved fine structure compared to the existing barley consensus SNP map. The root-mean-squared error between the linkage maps and the DAGGER map was 0.82 cM per marker interval compared to 2.28 cM for the existing consensus map. Examination of the barley hardness locus at the 5HS telomere, for which there is a physical map, confirmed that the DAGGER output was more accurate for fine structure analysis. Conclusions The R package DAGGER is an effective, freely available resource for integrating the information from a set of consistent linkage maps.
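    When the merged order graph is conflict-free, the ordering step reduces to a topological sort of the DAG of marker bins. A minimal sketch follows; DAGGER itself is an R package and additionally assigns cM positions by linear or quadratic programming, which this sketch omits.

```python
from collections import defaultdict, deque

def linearize(edges):
    """Kahn's topological sort: order marker bins consistently with every
    linkage map, provided the merged order graph is acyclic (a DAG)."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:            # edge u -> v means "bin u precedes bin v"
        succ[u].append(v)
        indeg[v] += 1
        nodes |= {u, v}
    queue = deque(sorted(n for n in nodes if indeg[n] == 0))
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):  # leftover nodes => an ordering conflict
        raise ValueError("ordering conflict: graph has a cycle")
    return order

# Two toy linkage maps over shared bins: A<B<C<D and A<C<E<D
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("C", "E"), ("E", "D")]
consensus_order = linearize(edges)
```

    A cycle in the merged graph is exactly an ordering conflict between the input maps, which is why the simplification step precedes linearization.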

  13. Development of a Watershed-Scale Long-Term Hydrologic Impact Assessment Model with the Asymptotic Curve Number Regression Equation

    Directory of Open Access Journals (Sweden)

    Jichul Ryu

    2016-04-01

    Full Text Available In this study, 52 asymptotic Curve Number (CN) regression equations were developed for combinations of representative land covers and hydrologic soil groups. In addition, to overcome the limitations of the original Long-term Hydrologic Impact Assessment (L-THIA) model when it is applied to larger watersheds, a watershed-scale L-THIA Asymptotic CN (ACN) regression equation model (watershed-scale L-THIA ACN model) was developed by integrating the asymptotic CN regressions and various modules for direct runoff/baseflow/channel routing. The watershed-scale L-THIA ACN model was applied to four watersheds in South Korea to evaluate the accuracy of its streamflow prediction. The coefficient of determination (R2) and Nash–Sutcliffe Efficiency (NSE) values for observed versus simulated streamflows over intervals of eight days were greater than 0.6 for all four of the watersheds. The watershed-scale L-THIA ACN model, including the asymptotic CN regression equation method, can simulate long-term streamflow sufficiently well with the ten parameters that have been added for the characterization of streamflow.
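    The NSE criterion used above to judge the streamflow simulations is a simple skill score relative to the observed mean. A minimal sketch (the observed/simulated values in the demo are illustrative, not the Korean watershed data):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of simulation error to the
    variance of the observations about their mean. 1 is perfect; 0 means the
    simulation is no better than predicting the observed mean."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

perfect = nse([1, 2, 3, 4], [1, 2, 3, 4])          # 1.0
mean_only = nse([1, 2, 3, 4], [2.5, 2.5, 2.5, 2.5])  # 0.0
```

    An NSE above 0.6, as reported for all four watersheds, therefore indicates substantially better skill than the climatological mean.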

  14. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    International Nuclear Information System (INIS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-01-01

    A new LIBS quantitative analysis method based on analytical line adaptive selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines which will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentration analysis results are given in the form of a confidence interval of the probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experiment results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness compared with the methods based on partial least squares regression, artificial neural network and standard support vector machine. - Highlights: • Both training and testing samples are considered for analytical line selection. • The analytical lines are auto-selected based on the built-in characteristics of spectral lines. • The new method can achieve better prediction accuracy and modeling robustness. • Model predictions are given with a confidence interval of the probabilistic distribution

  15. Direct and accelerated parameter mapping using the unscented Kalman filter.

    Science.gov (United States)

    Zhao, Li; Feng, Xue; Meyer, Craig H

    2016-05-01

    To accelerate parameter mapping using a new paradigm that combines image reconstruction and model regression as a parameter state-tracking problem. In T2 mapping, the T2 map is first encoded in parameter space by multi-TE measurements and then encoded by Fourier transformation with readout/phase encoding gradients. Using a state transition function and a measurement function, the unscented Kalman filter can describe T2 mapping as a dynamic system and directly estimate the T2 map from the k-space data. The proposed method was validated with a numerical brain phantom and volunteer experiments with a multiple-contrast spin echo sequence. Its performance was compared with a conjugate-gradient nonlinear inversion method at undersampling factors of 2 to 8. An accelerated pulse sequence was developed based on this method to achieve prospective undersampling. Compared with the nonlinear inversion reconstruction, the proposed method had higher precision, improved structural similarity and reduced normalized root mean squared error, with acceleration factors up to 8 in numerical phantom and volunteer studies. This work describes a new perspective on parameter mapping by state tracking. The unscented Kalman filter provides a highly accelerated and efficient paradigm for T2 mapping. © 2015 Wiley Periodicals, Inc.
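    For contrast with the state-tracking approach, the conventional baseline first reconstructs an image at each TE and then fits the decay model per voxel. Below is a hedged single-voxel sketch of that log-linear T2 fit; echo times and parameter values are illustrative, and noise is omitted for clarity.

```python
import numpy as np

# Multi-TE signal model for one voxel: S(TE) = S0 * exp(-TE / T2)
te = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # echo times in ms (illustrative)
s0_true, t2_true = 100.0, 60.0
signal = s0_true * np.exp(-te / t2_true)

# Log-linearize: log S = log S0 - TE / T2, then ordinary least squares
A = np.column_stack([np.ones_like(te), -te])
coef, *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
s0_est = np.exp(coef[0])
t2_est = 1 / coef[1]
```

    The unscented Kalman filter approach in the paper bypasses this two-step pipeline by treating the T2 map as a state estimated directly from undersampled k-space data.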

  16. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model based estimation for finite population totals assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend to two stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model, in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model assisted local polynomial regression model.
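    As a toy illustration of bootstrapping a finite-population total over first-stage units, here is a simple percentile bootstrap across sampled clusters. All numbers are simulated, and the paper's model-based prediction of the unobserved part of the population is not reproduced here; this shows only the generic cluster-resampling idea.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sample: 12 sampled clusters with known sizes and observed sample means
sizes = rng.integers(20, 60, 12).astype(float)
means = rng.normal(50, 8, 12)
total_hat = np.sum(sizes * means)        # plug-in estimate of the sampled total

# Percentile bootstrap: resample whole clusters (first-stage units)
B = 2000
stats = np.empty(B)
for b in range(B):
    idx = rng.integers(0, 12, 12)        # clusters drawn with replacement
    stats[b] = np.sum(sizes[idx] * means[idx])
lo, hi = np.percentile(stats, [2.5, 97.5])
```

    Resampling clusters rather than individual units respects the two-stage design: within-cluster correlation stays intact inside each resampled cluster.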

  17. Reviewing interval cancers: Time well spent?

    International Nuclear Information System (INIS)

    Gower-Thomas, Kate; Fielder, Hilary M.P.; Branston, Lucy; Greening, Sarah; Beer, Helen; Rogers, Cerilan

    2002-01-01

    OBJECTIVES: To categorize interval cancers, and thus identify false-negatives, following prevalent and incident screens in the Welsh breast screening programme. SETTING: Breast Test Wales (BTW) Llandudno, Cardiff and Swansea breast screening units. METHODS: Five hundred and sixty interval breast cancers identified following negative mammographic screening between 1989 and 1997 were reviewed by eight screening radiologists. The blind review was achieved by mixing the screening films of women who subsequently developed an interval cancer with screen negative films of women who did not develop cancer, in a ratio of 4:1. Another radiologist used patients' symptomatic films to record a reference against which the reviewers' reports of the screening films were compared. Interval cancers were categorized as 'true', 'occult', 'false-negative' or 'unclassified' interval cancers or interval cancers with minimal signs, based on the National Health Service breast screening programme (NHSBSP) guidelines. RESULTS: Of the classifiable interval films, 32% were false-negatives, 55% were true intervals and 12% occult. The proportion of false-negatives following incident screens was half that following prevalent screens (P = 0.004). Forty percent of the seed films were recalled by the panel. CONCLUSIONS: Low false-negative interval cancer rates following incident screens (18%) versus prevalent screens (36%) suggest that lower cancer detection rates at incident screens may have resulted from fewer cancers than expected being present, rather than from a failure to detect tumours. The panel method for categorizing interval cancers has significant flaws, as the results vary markedly with different protocols, and it is no more accurate than other, quicker and more timely methods. Gower-Thomas, K. et al. (2002)

  18. Characterization of Cardiac Time Intervals in Healthy Bonnet Macaques (Macaca radiata) by Using an Electronic Stethoscope

    Science.gov (United States)

    Kamran, Haroon; Salciccioli, Louis; Pushilin, Sergei; Kumar, Paraag; Carter, John; Kuo, John; Novotney, Carol; Lazar, Jason M

    2011-01-01

    Nonhuman primates are used frequently in cardiovascular research. Cardiac time intervals derived by phonocardiography have long been used to assess left ventricular function. Electronic stethoscopes are simple low-cost systems that display heart sound signals. We assessed the use of an electronic stethoscope to measure cardiac time intervals in 48 healthy bonnet macaques (age, 8 ± 5 y) based on recorded heart sounds. Technically adequate recordings were obtained from all animals and required 1.5 ± 1.3 min. The following cardiac time intervals were determined by simultaneously recording acoustic and single-lead electrocardiographic data: electromechanical activation time (QS1), electromechanical systole (QS2), the time interval between the first and second heart sounds (S1S2), and the time interval between the second and first sounds (S2S1). QS2 was correlated with heart rate, mean arterial pressure, diastolic blood pressure, and left ventricular ejection time determined by using echocardiography. S1S2 correlated with heart rate, mean arterial pressure, diastolic blood pressure, left ventricular ejection time, and age. S2S1 correlated with heart rate, mean arterial pressure, diastolic blood pressure, systolic blood pressure, and left ventricular ejection time. QS1 did not correlate with any anthropometric or echocardiographic parameter. The relation S1S2/S2S1 correlated with systolic blood pressure. On multivariate analyses, heart rate was the only independent predictor of QS2, S1S2, and S2S1. In conclusion, determination of cardiac time intervals is feasible and reproducible by using an electrical stethoscope in nonhuman primates. Heart rate is a major determinant of QS2, S1S2, and S2S1 but not QS1; regression equations for reference values for cardiac time intervals in bonnet macaques are provided. PMID:21439218

  19. Logistic Regression for Seismically Induced Landslide Predictions: Using Uniform Hazard and Geophysical Layers as Predictor Variables

    Science.gov (United States)

    Nowicki, M. A.; Hearne, M.; Thompson, E.; Wald, D. J.

    2012-12-01

    Seismically induced landslides present costly and often fatal threats in many mountainous regions. Substantial effort has been invested to understand where seismically induced landslides may occur in the future. Both slope-stability methods and, more recently, statistical approaches to the problem are described throughout the literature. Though some regional efforts have succeeded, no uniformly agreed-upon method is available for predicting the likelihood and spatial extent of seismically induced landslides. For use in the U. S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system, we would like to routinely make such estimates, in near-real time, around the globe. Here we use the recently produced USGS ShakeMap Atlas of historic earthquakes to develop an empirical landslide probability model. We focus on recent events, yet include any digitally-mapped landslide inventories for which well-constrained ShakeMaps are also available. We combine these uniform estimates of the input shaking (e.g., peak acceleration and velocity) with broadly available susceptibility proxies, such as topographic slope and surface geology. The resulting database is used to build a predictive model of the probability of landslide occurrence with logistic regression. The landslide database includes observations from the Northridge, California (1994); Wenchuan, China (2008); ChiChi, Taiwan (1999); and Chuetsu, Japan (2004) earthquakes; we also provide ShakeMaps for moderate-sized events without landslides for proper model testing and training. The performance of the regression model is assessed with both statistical goodness-of-fit metrics and a qualitative review of whether or not the model is able to capture the spatial extent of landslides for each event. Part of our goal is to determine which variables can be employed based on globally-available data or proxies, and whether or not modeling results from one region are transferrable to

  20. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were applied to five-year data from 1998-2002 on average humidity, rainfall, and maximum and minimum temperatures. Relationships for regression analysis time series (RATS) were developed to determine the overall trend of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination as a measure of goodness of fit for our polynomial regression analysis time series (PRATS). The correlations for multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) were also developed for deciphering the interdependence of weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, which yielded homoscedasticity. We also employed Bartlett's test for homogeneity of variances on the five-year data of rainfall and humidity, which showed that the variances in the rainfall data were not homogeneous, while those in the humidity data were. Our results on regression and regression analysis time series show the best fit to prediction modeling on climatic data of Quetta, Pakistan. (author)

  1. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
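    The OLS bisector singled out above can be sketched directly: compute the OLS(Y|X) slope, the OLS(X|Y) slope re-expressed in the (X, Y) plane, and the slope of the line bisecting the angle between them, following the closed form given by Isobe et al. (1990). A sketch for error-free data; the uncertainty formulas are omitted:

```python
import math

def ols_slopes(x, y):
    # OLS(Y|X) slope and OLS(X|Y) slope, both expressed in the (X, Y) plane.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / sxx, syy / sxy

def ols_bisector(x, y):
    # Slope of the line bisecting the angle between the two OLS lines.
    b1, b2 = ols_slopes(x, y)
    return (b1 * b2 - 1 + math.sqrt((1 + b1 ** 2) * (1 + b2 ** 2))) / (b1 + b2)

xs, ys = [1, 2, 3, 4, 5], [1.2, 1.9, 3.2, 3.8, 5.1]
b1, b2 = ols_slopes(xs, ys)
b_bis = ols_bisector(xs, ys)
```

    For perfectly correlated data the two OLS lines coincide and the bisector equals them; otherwise it lies between the two slopes, which is why it treats the variables symmetrically.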

  2. Logic regression and its extensions.

    Science.gov (United States)

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
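    The building block of logic regression is the Boolean expression tree over binary predictors. The sketch below shows only the evaluation step for one such tree (the adaptive search that fits trees, typically simulated annealing, is omitted; the predictor names are illustrative):

```python
from itertools import product

# Evaluate a Boolean (logic) expression tree over binary predictors.
def evaluate(tree, x):
    if isinstance(tree, str):              # leaf: a binary predictor
        return bool(x[tree])
    op, *children = tree
    vals = [evaluate(c, x) for c in children]
    if op == "and":
        return all(vals)
    if op == "or":
        return any(vals)
    if op == "not":
        return not vals[0]
    raise ValueError(op)

# X1 or (X2 and not X3)
expr = ("or", "X1", ("and", "X2", ("not", "X3")))
truth = [evaluate(expr, dict(zip(("X1", "X2", "X3"), bits)))
         for bits in product((0, 1), repeat=3)]
```

    Fitting amounts to scoring many candidate trees like `expr` against the outcome within a generalized linear model and keeping the best.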

  3. Predicting subject-driven actions and sensory experience in a virtual world with relevance vector machine regression of fMRI data.

    Science.gov (United States)

    Valente, Giancarlo; De Martino, Federico; Esposito, Fabrizio; Goebel, Rainer; Formisano, Elia

    2011-05-15

    In this work we illustrate the approach of the Maastricht Brain Imaging Center to the PBAIC 2007 competition, where participants had to predict, based on fMRI measurements of brain activity, subject-driven actions and sensory experience in a virtual world. After standard pre-processing (slice scan time correction, motion correction), we generated rating predictions based on linear Relevance Vector Machine (RVM) learning from all brain voxels. Spatial and temporal filtering of the time series was optimized rating by rating. For some of the ratings (e.g. Instructions, Hits, Faces, Velocity), linear RVM regression was accurate and very consistent within and between subjects. For other ratings (e.g. Arousal, Valence) results were less satisfactory. Our approach ranked second overall. To investigate the role of different brain regions in ratings prediction we generated predictive maps, i.e. maps of the weighted contribution of each voxel to the predicted rating. These maps generally included (but were not limited to) "specialized" regions which are consistent with results from conventional neuroimaging studies and known functional neuroanatomy. In conclusion, Sparse Bayesian Learning models, such as RVM, appear to be a valuable approach to the multivariate regression of fMRI time series. The implementation of the Automatic Relevance Determination criterion is particularly suitable and provides good generalization, despite the limited number of samples which is typically available in fMRI. Predictive maps allow disclosing multi-voxel patterns of brain activity that predict perceptual and behavioral subjective experience. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. COVAR: Computer Program for Multifactor Relative Risks and Tests of Hypotheses Using a Variance-Covariance Matrix from Linear and Log-Linear Regression

    Directory of Open Access Journals (Sweden)

    Leif E. Peterson

    1997-11-01

    Full Text Available A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.
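    The conversion such a program performs can be sketched for a single factor: a coefficient and its variance from a multiplicative (log-linear) model yield a relative risk with confidence limits exp(b ± z·sqrt(v)). A hedged sketch with invented inputs, not output reproduced from COVAR:

```python
import math

# Relative risk and 95% confidence limits from a fitted log-linear
# coefficient b and its variance v: RR = exp(b), limits = exp(b +/- z*se).
def relative_risk(b, v, z=1.959964):
    se = math.sqrt(v)
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

rr, lo, hi = relative_risk(b=0.4055, v=0.04)   # illustrative numbers
```

    For joint effects of several factors, the same exponentiation is applied to a sum of coefficients, with the variance of the sum taken from the variance-covariance matrix.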

  5. Tumor regression patterns in retinoblastoma

    International Nuclear Information System (INIS)

    Zafar, S.N.; Siddique, S.N.; Zaheer, N.

    2016-01-01

    To observe the types of tumor regression after treatment, and identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record the response to treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Out of 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 tumors belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) of tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

  6. Confirmation and Fine Mapping of a Major QTL for Aflatoxin Resistance in Maize Using a Combination of Linkage and Association Mapping

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2016-09-01

    Full Text Available Maize grain contamination with aflatoxin from Aspergillus flavus (A. flavus) is a serious health hazard to animals and humans. To map the quantitative trait loci (QTLs) associated with resistance to A. flavus, we employed a powerful approach that differs from previous methods in one important way: it combines the advantages of genome-wide association analysis (GWAS) and traditional linkage mapping analysis. Linkage mapping was performed using 228 recombinant inbred lines (RILs), and a highly significant QTL that affected aflatoxin accumulation, qAA8, was mapped. This QTL spanned approximately 7 centimorgans (cM) on chromosome 8. The confidence interval was too large for positional cloning of the causal gene. To refine this QTL, GWAS was performed with 558,629 single nucleotide polymorphisms (SNPs) in an association population comprising 437 maize inbred lines. Twenty-five significantly associated SNPs were identified, most of which co-localised with qAA8 and explained 6.7% to 26.8% of the phenotypic variation observed. Based on the rapid linkage disequilibrium (LD) and the high density of SNPs in the association population, qAA8 was further localised to a smaller genomic region of approximately 1500 bp. A high-resolution map of the qAA8 region will be useful for marker-assisted selection (MAS) for A. flavus resistance and for characterisation of the causal gene.

  7. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
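    The core idea of bounding the weights inside the regression can be sketched with projected gradient descent: take least-squares steps and clip the weights to a box after each one. A toy-sized pure-Python sketch, not the paper's algorithm; the function name, learning rate and bounds are illustrative, and a real implementation would regress over SCM principal components:

```python
# Least-squares weights constrained to the box [lo, hi] via projected gradient.
def bounded_lstsq(A, y, lo=0.0, hi=1.0, lr=0.01, steps=5000):
    m, n = len(A), len(A[0])
    w = [0.5 * (lo + hi)] * n
    for _ in range(steps):
        r = [sum(A[i][j] * w[j] for j in range(n)) - y[i] for i in range(m)]
        for j in range(n):
            g = sum(A[i][j] * r[i] for i in range(m))   # gradient of 0.5*||Aw - y||^2
            w[j] = min(hi, max(lo, w[j] - lr * g))      # step, then project onto the box
    return w

# Here the unconstrained optimum (0.2, 0.9) already lies inside [0, 1].
w = bounded_lstsq([[1, 0], [0, 1], [1, 1]], [0.2, 0.9, 1.1])
```

    When the unconstrained solution violates a bound, the projection pins that weight at the boundary, which is exactly the diversification-enforcing behavior described above.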

  8. Mapping human health risks from exposure to trace metal contamination of drinking water sources in Pakistan

    International Nuclear Information System (INIS)

    Bhowmik, Avit Kumar; Alamdar, Ambreen; Katsoyiannis, Ioannis; Shen, Heqing; Ali, Nadeem; Ali, Syeda Maria; Bokhari, Habib; Schäfer, Ralf B.; Eqani, Syed Ali Musstjab Akber Shah

    2015-01-01

    The consumption of contaminated drinking water is one of the major causes of mortality and many severe diseases in developing countries. The principal drinking water sources in Pakistan, i.e. ground and surface water, are subject to geogenic and anthropogenic trace metal contamination. However, water quality monitoring activities have been limited to a few administrative areas and a nationwide human health risk assessment from trace metal exposure is lacking. Using geographically weighted regression (GWR) and eight relevant spatial predictors, we calculated nationwide human health risk maps by predicting the concentration of 10 trace metals in the drinking water sources of Pakistan and comparing them to guideline values. GWR incorporated local variations of trace metal concentrations into prediction models and hence mitigated effects of large distances between sampled districts due to data scarcity. Predicted concentrations mostly exhibited high accuracy and low uncertainty, and were in good agreement with observed concentrations. Concentrations for Central Pakistan were predicted with higher accuracy than for the North and South. A maximum 150–200 fold exceedance of guideline values was observed for predicted cadmium concentrations in ground water and arsenic concentrations in surface water. In more than 53% (4 and 100% for the lower and upper boundaries of the 95% confidence interval (CI)) of the total area of Pakistan, the drinking water was predicted to be at risk of contamination from arsenic, chromium, iron, nickel and lead. The area with elevated risks is inhabited by more than 74 million (8 and 172 million for the lower and upper boundaries of the 95% CI) people. Although these predictions require further validation by field monitoring, the results can inform disease mitigation and water resources management regarding potential hot spots. - Highlights: • Predictions of trace metal concentration use geographically weighted regression • Human health risk
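    GWR fits a separate weighted least-squares model at each prediction location, down-weighting samples by their distance through a kernel. A single-location, single-predictor sketch with a Gaussian kernel; all names and data are illustrative, not the paper's eight-predictor model:

```python
import math

# Weighted least squares at one target location: samples are down-weighted
# by distance to the prediction point (Gaussian kernel, bandwidth h).
def gwr_predict(points, x, y, target, x_target, h=1.0):
    w = [math.exp(-((px - target[0]) ** 2 + (py - target[1]) ** 2) / (2 * h ** 2))
         for px, py in points]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b1 = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    b0 = (swy - b1 * swx) / sw
    return b0 + b1 * x_target

# Near the target the local relation is y = 10x; the distant samples follow
# a different relation but receive negligible kernel weight.
pred = gwr_predict(points=[(0, 0), (0, 1), (6, 6), (6, 7)],
                   x=[1, 2, 1, 2], y=[10, 20, 100, 200],
                   target=(0, 0), x_target=1.5, h=1.0)
```

    This locality is what lets GWR absorb the spatial variation in trace metal concentrations between distant districts.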

  10. High Density Linkage Map Construction and Mapping of Yield Trait QTLs in Maize (Zea mays) Using the Genotyping-by-Sequencing (GBS) Technology

    Science.gov (United States)

    Su, Chengfu; Wang, Wei; Gong, Shunliang; Zuo, Jinghui; Li, Shujiang; Xu, Shizhong

    2017-01-01

    Increasing grain yield is the ultimate goal for maize breeding. High resolution quantitative trait loci (QTL) mapping can help us understand the molecular basis of phenotypic variation of yield and thus facilitate marker assisted breeding. The aim of this study is to use genotyping-by-sequencing (GBS) for large-scale SNP discovery and simultaneous genotyping of all F2 individuals from a cross between two varieties of maize that are in clear contrast in yield and related traits. A set of 199 F2 progeny derived from the cross of varieties SG-5 and SG-7 was generated and genotyped by GBS. A total of 1,046,524,604 reads with an average of 5,258,918 reads per F2 individual were generated. This number of reads represents an approximately 0.36-fold coverage of the maize reference genome Zea_mays.AGPv3.29 for each F2 individual. A total of 68,882 raw SNPs were discovered in the F2 population, which, after stringent filtering, led to a total of 29,927 high quality SNPs. Comparative analysis using these physically mapped marker loci revealed a high degree of synteny with the reference genome. The SNP genotype data were utilized to construct an intra-specific genetic linkage map of maize consisting of 3,305 bins on 10 linkage groups spanning 2,236.66 cM at an average distance of 0.68 cM between consecutive markers. From this map, we identified 28 QTLs associated with yield traits (100-kernel weight, ear length, ear diameter, cob diameter, kernel row number, corn grains per row, ear weight, and grain weight per plant) using the composite interval mapping (CIM) method and 29 QTLs using the least absolute shrinkage and selection operator (LASSO) method. QTLs identified by the CIM method account for 6.4% to 19.7% of the phenotypic variation.
    Small intervals of three QTLs (qCGR-1, qKW-2, and qGWP-4) contain several genes, including one gene (GRMZM2G139872) encoding the F-box protein, three genes (GRMZM2G180811, GRMZM5G828139, and GRMZM5G873194) encoding the WD40-repeat protein, and
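    The LASSO step used above for multi-QTL detection can be sketched with generic coordinate descent and soft-thresholding, which shrinks most marker effects to exactly zero. This is a textbook version, not the authors' implementation, and the toy data assume standardized marker columns:

```python
# Coordinate descent for the LASSO objective (1/(2m))*||y - Xw||^2 + lam*sum|w|.
def soft_threshold(z, t):
    return z - t if z > t else z + t if z < -t else 0.0

def lasso(X, y, lam, sweeps=50):
    m, n = len(X), len(X[0])
    w = [0.0] * n
    for _ in range(sweeps):
        for j in range(n):
            # partial residual with feature j left out
            r = [y[i] - sum(X[i][k] * w[k] for k in range(n) if k != j)
                 for i in range(m)]
            rho = sum(X[i][j] * r[i] for i in range(m)) / m
            norm = sum(X[i][j] ** 2 for i in range(m)) / m
            w[j] = soft_threshold(rho, lam) / norm
    return w

# Only the first of three orthogonal markers carries signal (y = 2 * x1), so
# its effect survives shrinkage while the other two stay exactly zero.
w = lasso([[1, 1, 1], [-1, 1, -1], [1, -1, -1], [-1, -1, 1]],
          [2, -2, 2, -2], lam=0.1)
```

    The exact zeros are the point of the method: surviving coefficients correspond to the markers reported as QTLs.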

  11. Development of Multivariate Regression Models for the Quantification of Benzoyl Metronidazole in the Presence of its Degradation Products by Near-Infrared Spectroscopy

    Directory of Open Access Journals (Sweden)

    Willian Ricardo da Rosa de Almeida

    2015-12-01

    Full Text Available Benzoyl metronidazole (BMZ) is a drug with antiparasitic and antibacterial activity, available in the form of pediatric suspensions. The main degradation products of BMZ are metronidazole and benzoic acid, and there are no reports in the literature on the determination of BMZ in the presence of its degradation products using near infrared spectroscopy. Therefore, in this study a method for determining the content of the BMZ pharmaceutical ingredient in the presence of its main degradation products by near infrared spectroscopy associated with multivariate calibration was developed. Regression methods with variable selection, such as interval partial least squares regression (iPLS) and synergy interval partial least squares regression (siPLS), were applied in order to select the spectral regions that produce models with the smallest errors. The best model using the iPLS algorithm was obtained when the spectrum was divided into 12 sub-intervals and interval 11 was selected (RSEP = 1.37%). When the spectrum was divided into 16 intervals, the combination of subintervals 9, 13 and 18 yielded the best model for the siPLS algorithm (RSEP = 1.30%). The proposed method can be considered selective, as it allows determining BMZ in the presence of its degradation products. DOI: http://dx.doi.org/10.17807/orbital.v7i4.741
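    The interval-selection loop at the heart of iPLS can be sketched as follows. For brevity, the per-interval model here is a univariate least-squares fit on the interval's mean absorbance rather than a full PLS decomposition, and the spectra are invented:

```python
# Split the spectrum into equal sub-intervals, fit a calibration model on
# each, and keep the interval with the lowest RMSE (simplified iPLS spirit).
def best_interval(spectra, conc, n_intervals):
    width = len(spectra[0]) // n_intervals
    best = None
    for k in range(n_intervals):
        # mean absorbance of sub-interval k for every sample
        xs = [sum(s[k * width:(k + 1) * width]) / width for s in spectra]
        n = len(xs)
        mx, my = sum(xs) / n, sum(conc) / n
        sxx = sum((xi - mx) ** 2 for xi in xs)
        if sxx == 0:                       # flat interval carries no signal
            continue
        b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, conc)) / sxx
        b0 = my - b1 * mx
        rmse = (sum((b0 + b1 * xi - yi) ** 2
                    for xi, yi in zip(xs, conc)) / n) ** 0.5
        if best is None or rmse < best[1]:
            best = (k, rmse)
    return best

# Interval 0 is constant; interval 1 varies linearly with concentration.
spectra = [[0.3, 0.7, 0.1 * c, 0.1 * c + 0.02] for c in (1, 2, 3, 4)]
k_best, rmse = best_interval(spectra, [1, 2, 3, 4], 2)
```

    siPLS extends the same idea by scoring combinations of sub-intervals rather than single ones.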

  12. Mathematical models application for mapping soils spatial distribution on the example of the farm from the North of Udmurt Republic of Russia

    Science.gov (United States)

    Dokuchaev, P. M.; Meshalkina, J. L.; Yaroslavtsev, A. M.

    2018-01-01

    Comparative analysis of soil geospatial modeling using multinomial logistic regression, decision trees, random forest, regression trees and support vector machine algorithms was conducted. The visual interpretation of the digital maps obtained and their comparison with the existing map, as well as the quantitative assessment of the overall accuracy of individual soil group detection and of the models' kappa, showed that the multinomial logistic regression, support vector machine, and random forest models with spatial prediction of the conditional soil group distribution can be reliably used for mapping of the study area. Detection was most accurate for lightly and moderately eroded sod-podzolic soils (Phaeozems Albic). In second place, according to the mean overall accuracy of the prediction, were non-eroded and warp sod-podzolic soils, as well as sod-gley soils (Umbrisols Gleyic) and alluvial soils (Fluvisols Dystric, Umbric). Heavily eroded sod-podzolic and gray forest soils (Phaeozems Albic) were detected worst of all by the automatic classification methods.
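    Overall accuracy and kappa, the two agreement measures used to compare the models, can both be computed from a confusion matrix of mapped versus reference classes. A minimal sketch; the 2x2 matrix is invented:

```python
# Overall accuracy and Cohen's kappa from a confusion matrix
# (rows: mapped class, columns: reference class).
def accuracy_and_kappa(cm):
    total = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / total            # observed agreement
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(len(cm))) / total ** 2                # chance agreement
    return po, (po - pe) / (1 - pe)

oa, kappa = accuracy_and_kappa([[45, 5], [10, 40]])
```

    Kappa discounts the agreement expected by chance, which is why it is reported alongside raw overall accuracy when class frequencies are uneven.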

  13. Regression in autistic spectrum disorders.

    Science.gov (United States)

    Stefanatos, Gerry A

    2008-12-01

    A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsiveness, but often entails both. This paper critically reviews the phenomenon of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

  14. Determination of the Calorific Power of Gasoline Samples Using Near-Infrared Spectroscopy and Multivariate Regression

    Directory of Open Access Journals (Sweden)

    Janice Zulma Francesquett

    2013-08-01

    Full Text Available The aim of this study was to quantify the calorific power of 111 gasoline samples available at filling stations using near infrared spectroscopy in conjunction with multivariate regression. The calorific power of the fuels was determined using an adiabatic bomb calorimeter (ASTM D 4809 standard). For the construction of the multivariate regression models, 2/3 of the samples were used for calibration and the remainder for prediction, using the interval partial least squares (iPLS) and synergy interval partial least squares (siPLS) algorithms. The best iPLS model selected the spectral range from 5561 to 6650 cm-1, obtaining an RMSEP of 102 cal g-1 and a correlation coefficient (r) of 0.8218, with errors of 0.71% for calibration and 0.47% for prediction. The best siPLS model divided the spectrum into 32 intervals grouped into three, selecting the region below 6000 cm-1 and above 6500 cm-1, and presented an RMSECV of 89.8 cal g-1 and an RMSEP of 96.7 cal g-1, with correlation coefficients for cross-validation and prediction of 0.7834 and 0.7293, respectively. The methodology proposed in this work is efficient, with prediction errors lower than 1%, and is a clean, fast, safe and practical alternative.

  15. Four-dimensional optoacoustic temperature mapping in laser-induced thermotherapy

    Science.gov (United States)

    Oyaga Landa, Francisco Javier; Deán-Ben, Xosé Luís.; Sroka, Ronald; Razansky, Daniel

    2018-02-01

    Photoablative laser therapy is in common use for selective destruction of malignant masses, vascular and brain abnormalities. Tissue ablation and coagulation are irreversible processes occurring shortly after crossing a certain thermal exposure threshold. As a result, accurate mapping of the temperature field is essential for optimizing the outcome of these clinical interventions. Here we demonstrate four-dimensional optoacoustic temperature mapping of the entire photoablated region. Accuracy of the method is investigated in tissue-mimicking phantom experiments. Deviations of the volumetric optoacoustic temperature readings provided at 40 ms intervals remained below 10% for temperature elevations above 3°C, as validated by simultaneous thermocouple measurements. The excellent spatio-temporal resolution of the new temperature monitoring approach aims at improving the safety and efficacy of laser-based photothermal procedures.

  16. Generation and Assessment of Urban Land Cover Maps Using High-Resolution Multispectral Aerial Images

    DEFF Research Database (Denmark)

    Höhle, Joachim; Höhle, Michael

    2013-01-01

    a unique method for the automatic generation of urban land cover maps. In the present paper, imagery of a new medium-format aerial camera and advanced geoprocessing software are applied to derive normalized digital surface models and vegetation maps. These two intermediate products then become input...... to a tree structured classifier, which automatically derives land cover maps in 2D or 3D. We investigate the thematic accuracy of the produced land cover map by a class-wise stratified design and provide a method for deriving necessary sample sizes. Corresponding survey adjusted accuracy measures...... and their associated confidence intervals are used to adequately reflect uncertainty in the assessment based on the chosen sample size. Proof of concept for the method is given for an urban area in Switzerland. Here, the produced land cover map with six classes (building, wall and carport, road and parking lot, hedge...
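    The sample sizes behind a class-wise stratified accuracy assessment are commonly derived from the normal approximation for a proportion: n = z²·p·(1−p)/d² for half-width d at confidence level z. A generic sketch of that calculation; the paper's survey-adjusted derivation may differ:

```python
import math

# Required sample size per stratum to estimate an accuracy proportion p
# within half-width d at ~95% confidence (z = 1.96, normal approximation).
def sample_size(p=0.5, d=0.05, z=1.959964):
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

n_worst_case = sample_size()   # p = 0.5 maximizes the required n
```

    Using p = 0.5 gives a conservative size when the class accuracy is unknown in advance; a class already believed to be 90% accurate needs far fewer samples.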

  17. Short communication: QTL mapping for ear tip-barrenness in maize

    Energy Technology Data Exchange (ETDEWEB)

    Ding, J.; Ma, J.; Chen, J.; Ai, T.; Li, Z.; Tian, Z.; Wu, S.; Chen, W.; Wu, J.

    2016-11-01

    Barren tip on the corn ear is an important agronomic trait in maize, which is highly associated with grain yield. Understanding the genetic basis of tip-barrenness may help to reduce ear tip-barrenness in breeding programs. In this study, ear tip-barrenness was evaluated in two environments in an F2:3 population, and it showed significant genotypic variation in both environments. Using the mixed-model composite interval mapping method, three additive-effect quantitative trait loci (QTLs) for ear tip-barrenness were mapped on chromosomes 2, 3 and 6, respectively. They explained 16.6% of the phenotypic variation, and no significant QTL × environment interactions or digenic interactions were detected. The results indicated that additive effects were the main genetic basis for ear tip-barrenness in maize. This is the first report of QTLs mapped for ear tip-barrenness in maize. (Author)

  18. Producing landslide susceptibility maps by utilizing machine learning methods. The case of Finikas catchment basin, North Peloponnese, Greece.

    Science.gov (United States)

    Tsangaratos, Paraskevas; Ilia, Ioanna; Loupasakis, Constantinos; Papadakis, Michalis; Karimalis, Antonios

    2017-04-01

    The main objective of the present study was to apply two machine learning methods for the production of a landslide susceptibility map of the Finikas catchment basin, located in North Peloponnese, Greece, and to compare their results. Specifically, Logistic Regression and Random Forest were utilized, based on a database of 40 sites classified into two categories, non-landslide and landslide areas, that were separated into a training dataset (70% of the total data) and a validation dataset (the remaining 30%). The identification of the areas was established by analyzing airborne imagery, extensive field investigation and the examination of previous research studies. Six landslide-related variables were analyzed, namely: lithology, elevation, slope, aspect, distance to rivers and distance to faults. Within the Finikas catchment basin most of the reported landslides were located along the road network and within the residential complexes, classified as rotational and translational slides, and rockfalls, mainly caused by the physical conditions and the general geotechnical behavior of the geological formations that cover the area. Each landslide susceptibility map was reclassified by applying the Geometric Interval classification technique into five classes, namely: very low, low, moderate, high, and very high susceptibility. The comparison and validation of the outcomes of each model were achieved using statistical evaluation measures, the receiver operating characteristic, and the area under the success and predictive rate curves. The computation process was carried out using RStudio, an integrated development environment for the R language, and ArcGIS 10.1 for compiling the data and producing the landslide susceptibility maps. The Logistic Regression analysis indicated that the highest b coefficients were those of lithology and slope, at 2.8423 and 1.5841, respectively. From the
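    The receiver operating characteristic is usually summarized as the area under the curve (AUC), which can be computed directly from the model scores via the rank (Mann-Whitney) formulation: the probability that a randomly chosen landslide site scores higher than a randomly chosen non-landslide site. A minimal sketch with invented scores:

```python
# AUC from raw susceptibility scores of positive (landslide) and negative
# (non-landslide) sites; ties count as half a win.
def auc(scores_pos, scores_neg):
    wins = sum((sp > sn) + 0.5 * (sp == sn)
               for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

a = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
```

    An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which makes it a convenient single number for comparing the two classifiers.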

  19. Detailed forest formation mapping in the land cover map series for the Caribbean islands

    Science.gov (United States)

    Helmer, E. H.; Schill, S.; Pedreros, D. H.; Tieszen, L. L.; Kennaway, T.; Cushing, M.; Ruzycki, T.

    2006-12-01

    Forest formation and land cover maps for several Caribbean islands were developed from Landsat ETM+ imagery as part of a multi-organizational project. The spatially explicit data on forest formation types will permit more refined estimates of some forest attributes. The woody vegetation classification scheme relates closely to that of Areces-Malea et al. (1), who classify Caribbean vegetation according to standards of the US Federal Geographic Data Committee (FGDC, 1997), with modifications similar to those in Helmer et al. (2). For several of the islands, we developed image mosaics that filled cloudy parts of scenes with data from other scene dates after using regression tree normalization (3). The regression tree procedure permitted us to develop mosaics for wet and drought seasons for a few of the islands. The resulting multiseason imagery facilitated separation between classes such as seasonal evergreen forest, semi-deciduous forest (including semi-evergreen forest), and drought deciduous forest or woodland formations. We used decision tree classification methods to classify the Landsat image mosaics into detailed forest formations and land cover for Puerto Rico (4), St. Kitts and Nevis, St. Lucia, St. Vincent and the Grenadines and Grenada. The decision trees classified a stack of raster layers for each mapping area that included the Landsat image bands and various ancillary raster data layers. For Puerto Rico, for example, the ancillary data included climate parameters (5). For some islands, the ancillary data included topographic derivatives such as aspect, slope and slope position, SRTM (6) or other topographic data. Mapping forest formations with decision tree classifiers, ancillary geospatial data, and cloud-free image mosaics accurately distinguished spectrally similar forest formations, without the aid of ecological zone maps, on the islands where the approach was used.
The approach resulted in maps of forest formations with comparable or better detail

  20. Understanding logistic regression analysis

    OpenAIRE

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratio in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is to avoid confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using ex...
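The procedure described above, fitting a logistic model and exponentiating its coefficients to obtain odds ratios, can be sketched with scikit-learn on simulated data (the variable names and effect sizes below are illustrative, not from the article):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Two explanatory variables: 'exposure' raises the odds, 'age' is noise here.
exposure = rng.integers(0, 2, n)
age = rng.normal(50, 10, n)
# True model: log-odds = -1 + 1.2*exposure (age has no real effect).
logit = -1 + 1.2 * exposure
p = 1 / (1 + np.exp(-logit))
y = rng.binomial(1, p)

# Analyzing both variables together guards against confounding effects.
X = np.column_stack([exposure, (age - age.mean()) / age.std()])
model = LogisticRegression().fit(X, y)

# Odds ratio for each variable = exp(fitted coefficient).
odds_ratios = np.exp(model.coef_[0])
print(dict(zip(["exposure", "age"], odds_ratios.round(2))))
```

With the true log-odds coefficient of 1.2, the fitted odds ratio for `exposure` should land near exp(1.2) ≈ 3.3, while the inert `age` variable stays near 1.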

  1. Correlation maps to assess soybean yield from EVI data in Paraná State, Brazil

    Directory of Open Access Journals (Sweden)

    Gleyce Kelly Dantas Araújo Figueiredo

    Full Text Available ABSTRACT Vegetation indices are widely used to monitor crop development and are generally used as input data in models to forecast yield. The first step of this study consisted of using monthly Maximum Value Composites to create correlation maps using the Enhanced Vegetation Index (EVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor mounted on the Terra satellite and historical yield during the soybean crop cycle in Paraná State, Brazil, from 2000/2001 to 2010/2011. We compared the ability of forecasting crop yield based on correlation maps and crop specific masks. We ran a preliminary regression model to test its ability to estimate yield for four municipalities during the soybean growing season. A regression model was developed for both methodologies to forecast soybean crop yield using leave-one-out cross validation. The Root Mean Squared Error (RMSE) values in the implementation of the model ranged from 0.037 t ha−1 to 0.19 t ha−1 using correlation maps, while for crop specific masks, it varied from 0.21 t ha−1 to 0.35 t ha−1. The model was able to explain 96 % to 98 % of the variance in estimated yield from correlation maps, while it was able to explain only 2 % to 67 % for the crop specific mask approach. The results showed that the correlation maps could be used to predict crop yield more effectively than crop specific masks. In addition, this method can provide an indication of soybean yield prior to harvesting.
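The leave-one-out cross-validation used above to obtain an RMSE can be sketched as follows; the EVI and yield numbers are simulated stand-ins, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
# Hypothetical data: one EVI value per season, 11 seasons (2000/01-2010/11).
evi = rng.uniform(0.3, 0.7, 11)
yield_t_ha = 1.5 + 3.0 * evi + rng.normal(0, 0.05, 11)  # yield in t/ha

errors = []
for train_idx, test_idx in LeaveOneOut().split(evi):
    # Fit on all seasons but one, predict the held-out season.
    m = LinearRegression().fit(evi[train_idx, None], yield_t_ha[train_idx])
    pred = m.predict(evi[test_idx, None])
    errors.append(float(pred - yield_t_ha[test_idx]))

rmse = float(np.sqrt(np.mean(np.square(errors))))
print(f"LOOCV RMSE: {rmse:.3f} t/ha")
```

Each observation serves exactly once as the validation point, which is why the method suits the short (11-season) series used in the study.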

  2. User guide to the UNC1NLI1 package and three utility programs for computation of nonlinear confidence and prediction intervals using MODFLOW-2000

    DEFF Research Database (Denmark)

    Christensen, Steen; Cooley, R.L.

    a model (for example when using the Parameter-Estimation Process of MODFLOW-2000) it is advantageous to also use regression-based methods to quantify uncertainty. For this reason the UNC Process computes (1) confidence intervals for parameters of the Parameter-Estimation Process and (2) confidence...

  3. A prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2, based on simple clinical parameters.

    Science.gov (United States)

    Koeneman, Margot M; van Lint, Freyja H M; van Kuijk, Sander M J; Smits, Luc J M; Kooreman, Loes F S; Kruitwagen, Roy F P M; Kruse, Arnold J

    2017-01-01

    This study aims to develop a prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2 (CIN 2) lesions based on simple clinicopathological parameters. The study was conducted at Maastricht University Medical Center, the Netherlands. The prediction model was developed in a retrospective cohort of 129 women with a histologic diagnosis of CIN 2 who were managed by watchful waiting for 6 to 24 months. Five potential predictors for spontaneous regression were selected based on the literature and expert opinion and were analyzed in a multivariable logistic regression model, followed by backward stepwise deletion based on the Wald test. The prediction model was internally validated by the bootstrapping method. Discriminative capacity and accuracy were tested by assessing the area under the receiver operating characteristic curve (AUC) and a calibration plot. Disease regression within 24 months was seen in 91 (71%) of 129 patients. A prediction model was developed including the following variables: smoking, Papanicolaou test outcome before the CIN 2 diagnosis, concomitant CIN 1 diagnosis in the same biopsy, and more than 1 biopsy containing CIN 2. Not smoking and Papanicolaou class were predictive of disease regression. The AUC was 69.2% (95% confidence interval, 58.5%-79.9%), indicating a moderate discriminative ability of the model. The calibration plot indicated good calibration of the predicted probabilities. This prediction model for spontaneous regression of CIN 2 may aid physicians in the personalized management of these lesions. Copyright © 2016 Elsevier Inc. All rights reserved.
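The validation steps described, an AUC with a bootstrap confidence interval, can be sketched like this; the outcomes and predicted probabilities are simulated, and only the cohort size of 129 is taken from the abstract:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 129  # cohort size from the abstract; everything below is simulated
y = rng.binomial(1, 0.71, n)  # ~71% regression rate, as reported
# Simulated predicted probabilities that are mildly informative about y.
p = np.clip(0.5 + 0.3 * (y - 0.5) + rng.normal(0, 0.25, n), 0.01, 0.99)

auc = roc_auc_score(y, p)

# Percentile bootstrap for a 95% CI of the AUC.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    if len(set(y[idx])) < 2:  # AUC needs both classes in the resample
        continue
    boot.append(roc_auc_score(y[idx], p[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Resampling patients with replacement, as here, is the same bootstrapping idea the authors used for internal validation, though they also corrected for optimism when refitting the model per resample.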

  4. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  5. Regression analysis of MCS Intensity and peak ground motion data in Italy

    Science.gov (United States)

    Faenza, L.; Michelini, A.

    2009-04-01

    Intensity scales are historically important because no instrumentation is necessary, and useful measurements of earthquake shaking can be made by an unequipped observer. The use of macroseismic data is essential for the revision of historical seismicity and of great importance for seismic hazard assessment of vulnerable areas. The procedure ShakeMap (Wald et al., Earthquake Spectra, 15, 1999) provides instrumentally based estimates of intensity maps. In Italy, intensities have been hitherto reported through the use of the MCS (Mercalli-Cancani-Sieberg) intensity scale. The DBMI2004 (and the most recent DBMI08) report intensities for earthquakes in Italy that date back to the Roman age. In order to exploit fully the potential of such a long intensity catalogue for past large events and with the aim of presenting ShakeMaps using an intensity scale consistent with that of the past, we have re-calibrated the relationships between MCS intensity and observed peak ground motion (PGM) values in terms of both peak-ground acceleration and peak-ground velocity. To this end, we have used the two most updated and complete datasets available for Italy - the strong-motion ITACA database and the DBMI08 macroseismic database. In this work we have first assembled a data set consisting of PGM-intensity pairs and we have then determined the most suitable regression parameters. Many tests have been made to quantify the accuracy and robustness of the results. The new instrumental intensity scale is going to be adopted for mapping the level of shaking resulting from earthquakes in Italy, replacing the instrumental Modified Mercalli scale currently in use (Michelini et al., SRL, 79, 2008), and to determine shakemaps for historical events.

  6. Mapping QTL Contributing to Variation in Posterior Lobe Morphology between Strains of Drosophila melanogaster.

    Directory of Open Access Journals (Sweden)

    Jennifer L Hackett

    Full Text Available Closely-related, and otherwise morphologically similar insect species frequently show striking divergence in the shape and/or size of male genital structures, a phenomenon thought to be driven by sexual selection. Comparative interspecific studies can help elucidate the evolutionary forces acting on genital structures to drive this rapid differentiation. However, genetic dissection of sexual trait divergence between species is frequently hampered by the difficulty generating interspecific recombinants. Intraspecific variation can be leveraged to investigate the genetics of rapidly-evolving sexual traits, and here we carry out a genetic analysis of variation in the posterior lobe within D. melanogaster. The lobe is a male-specific process emerging from the genital arch of D. melanogaster and three closely-related species, is essential for copulation, and shows radical divergence in form across species. There is also abundant variation within species in the shape and size of the lobe, and while this variation is considerably more subtle than that seen among species, it nonetheless provides the raw material for QTL mapping. We created an advanced intercross population from a pair of phenotypically-different inbred strains, and after phenotyping and genotyping-by-sequencing the recombinants, mapped several QTL contributing to various measures of lobe morphology. The additional generations of crossing over in our mapping population led to QTL intervals that are smaller than is typical for an F2 mapping design. The intervals we map overlap with a pair of lobe QTL we previously identified in an independent mapping cross, potentially suggesting a level of shared genetic control of trait variation. Our QTL additionally implicate a suite of genes that have been shown to contribute to the development of the posterior lobe. These loci are strong candidates to harbor naturally-segregating sites contributing to phenotypic variation within D. melanogaster, and

  7. A Matlab program for stepwise regression

    Directory of Open Access Journals (Sweden)

    Yanhong Qi

    2016-03-01

    Full Text Available The stepwise linear regression is a multi-variable regression for identifying statistically significant variables in the linear regression equation. In the present study, we present a Matlab program for stepwise regression.
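The program itself is in Matlab; as a rough Python analogue, forward stepwise selection driven by AIC (one common selection criterion, assumed here for illustration) might look like:

```python
import numpy as np

def aic(y, X):
    """AIC of an ordinary least-squares fit of y on X (intercept added)."""
    Xd = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = float(np.sum((y - Xd @ beta) ** 2))
    n, k = len(y), Xd.shape[1]
    return n * np.log(rss / n) + 2 * k

def forward_stepwise(y, X):
    """Greedily add the predictor that most improves AIC; stop when none helps."""
    remaining = list(range(X.shape[1]))
    chosen = []
    best = aic(y, X[:, []])  # intercept-only baseline
    while remaining:
        score, j = min((aic(y, X[:, chosen + [j]]), j) for j in remaining)
        if score >= best:
            break
        best = score
        remaining.remove(j)
        chosen.append(j)
    return chosen

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 1.0, n)  # only cols 0, 3 matter
selected = forward_stepwise(y, X)
print("selected columns:", sorted(selected))  # typically recovers columns 0 and 3
```

Matlab's own `stepwiselm` uses p-value thresholds by default rather than AIC; the greedy add-one-at-a-time structure is the same.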

  8. A new hybrid nonlinear congruential number generator based on higher functional power of logistic maps

    International Nuclear Information System (INIS)

    Cecen, Songul; Demirer, R. Murat; Bayrak, Coskun

    2009-01-01

    We propose a nonlinear congruential pseudorandom number generator consisting of summation of higher order composition of random logistic maps under certain congruential mappings. We change both bifurcation parameters of logistic maps in the interval of U=[3.5599,4) and coefficients of the polynomials in each higher order composition of terms up to degree d. This helped us to obtain a perfect random decorrelated generator which is infinite and aperiodic. It is observed from the simulation results that our new PRNG has good uniformity and power spectrum properties with very flat white noise characteristics. The results are interesting, new and may have applications in cryptography and in Monte Carlo simulations.
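A toy version of this construction, summing higher-order compositions of logistic maps with bifurcation parameters in [3.5599, 4) and reducing the sum modulo 1, can be sketched in pure Python (the specific parameters and coefficients below are made up for illustration, not the authors'):

```python
def logistic(x, r):
    """One logistic-map step: x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

def compose(x, r, order):
    """Apply the logistic map 'order' times (a higher functional power)."""
    for _ in range(order):
        x = logistic(x, r)
    return x

def prng(seed, n, rs=(3.99, 3.97, 3.9), coeffs=(1.0, 2.0, 3.0)):
    """Sum weighted compositions of logistic maps with bifurcation
    parameters in [3.5599, 4), then keep the fractional part as a
    simple congruential reduction mod 1."""
    x, out = seed, []
    for _ in range(n):
        x = logistic(x, rs[0])  # advance the underlying chaotic state
        s = sum(c * compose(x, r, d + 1)
                for d, (c, r) in enumerate(zip(coeffs, rs)))
        out.append(s % 1.0)     # congruential mapping into [0, 1)
    return out

vals = prng(seed=0.123456, n=10000)
print(min(vals), max(vals))
```

A real implementation would also vary the polynomial coefficients per term and be subjected to uniformity and spectral testing, as the abstract describes.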

  9. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    S. Kuzio

    2004-01-01

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the "Saturated Zone Flow and Transport Model Abstraction" (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be
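The basic computation described, spacings measured between the midpoints of flowing intervals, can be sketched with numpy (the interval depths below are hypothetical, not from the report):

```python
import numpy as np

# Hypothetical flowing intervals from a borehole flow-meter survey,
# given as (top_depth, bottom_depth) pairs in metres.
intervals = [(412.0, 418.5), (455.2, 457.0), (480.0, 489.3), (530.1, 531.0)]

# Midpoint of each interval, sorted by depth.
midpoints = np.array([(top + bot) / 2.0 for top, bot in sorted(intervals)])
# Flowing-interval spacing: midpoint-to-midpoint distances.
spacings = np.diff(midpoints)

print("midpoints:", midpoints)
print("spacings :", spacings)
# Summary statistics like these would feed the uncertainty distribution.
print("mean spacing:", round(float(spacings.mean()), 2), "m")
```

The report's actual product is a probability distribution fitted to many such spacings across boreholes, not a single summary number.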

  10. High-resolution genetic maps of Eucalyptus improve Eucalyptus grandis genome assembly.

    Science.gov (United States)

    Bartholomé, Jérôme; Mandrou, Eric; Mabiala, André; Jenkins, Jerry; Nabihoudine, Ibouniyamine; Klopp, Christophe; Schmutz, Jeremy; Plomion, Christophe; Gion, Jean-Marc

    2015-06-01

    Genetic maps are key tools in genetic research as they constitute the framework for many applications, such as quantitative trait locus analysis, and support the assembly of genome sequences. The resequencing of the two parents of a cross between Eucalyptus urophylla and Eucalyptus grandis was used to design a single nucleotide polymorphism (SNP) array of 6000 markers evenly distributed along the E. grandis genome. The genotyping of 1025 offspring enabled the construction of two high-resolution genetic maps containing 1832 and 1773 markers with an average marker interval of 0.45 and 0.5 cM for E. grandis and E. urophylla, respectively. The comparison between genetic maps and the reference genome highlighted 85% of collinear regions. A total of 43 noncollinear regions and 13 nonsyntenic regions were detected and corrected in the new genome assembly. This improved version contains 4943 scaffolds totalling 691.3 Mb of which 88.6% were captured by the 11 chromosomes. The mapping data were also used to investigate the effect of population size and number of markers on linkage mapping accuracy. This study provides the most reliable linkage maps for Eucalyptus and version 2.0 of the E. grandis genome. © 2014 CIRAD. New Phytologist © 2014 New Phytologist Trust.

  11. Segregation of a QTL cluster for home-cage activity using a new mapping method based on regression analysis of congenic mouse strains

    Science.gov (United States)

    Kato, S; Ishii, A; Nishi, A; Kuriki, S; Koide, T

    2014-01-01

    Recent genetic studies have shown that genetic loci with significant effects in whole-genome quantitative trait loci (QTL) analyses were lost or weakened in congenic strains. Characterisation of the genetic basis of this attenuated QTL effect is important to our understanding of the genetic mechanisms of complex traits. We previously found that a consomic strain, B6-Chr6CMSM, which carries chromosome 6 of a wild-derived strain MSM/Ms on the genetic background of C57BL/6J, exhibited lower home-cage activity than C57BL/6J. In the present study, we conducted a composite interval QTL analysis using the F2 mice derived from a cross between C57BL/6J and B6-Chr6CMSM. We found one QTL peak that spans 17.6 Mbp of chromosome 6. A subconsomic strain that covers the entire QTL region also showed lower home-cage activity at the same level as the consomic strain. We developed 15 congenic strains, each of which carries a shorter MSM/Ms-derived chromosomal segment from the subconsomic strain. Given that the results of home-cage activity tests on the congenic strains cannot be explained by a simple single-gene model, we applied regression analysis to segregate the multiple genetic loci. The results revealed three loci (loci 1–3) that have the effect of reducing home-cage activity and one locus (locus 4) that increases activity. We also found that the combination of loci 3 and 4 cancels out the effects of the congenic strains, which indicates the existence of a genetic mechanism related to the loss of QTLs. PMID:24781804

  12. Regression and local control rates after radiotherapy for jugulotympanic paragangliomas: Systematic review and meta-analysis

    International Nuclear Information System (INIS)

    Hulsteijn, Leonie T. van; Corssmit, Eleonora P.M.; Coremans, Ida E.M.; Smit, Johannes W.A.; Jansen, Jeroen C.; Dekkers, Olaf M.

    2013-01-01

    The primary treatment goal of radiotherapy for paragangliomas of the head and neck region (HNPGLs) is local control of the tumor, i.e. stabilization of tumor volume. Interestingly, regression of tumor volume has also been reported. Up to the present, no meta-analysis has been performed giving an overview of regression rates after radiotherapy in HNPGLs. The main objective was to perform a systematic review and meta-analysis to assess regression of tumor volume in HNPGL-patients after radiotherapy. A second outcome was local tumor control. The study was designed as a systematic review with meta-analysis. PubMed, EMBASE, Web of Science, COCHRANE and Academic Search Premier and references of key articles were searched in March 2012 to identify potentially relevant studies. Considering the indolent course of HNPGLs, only studies with ⩾12 months follow-up were eligible. Main outcomes were the pooled proportions of regression and local control after radiotherapy as initial, combined (i.e. directly post-operatively or post-embolization) or salvage treatment (i.e. after initial treatment has failed) for HNPGLs. A meta-analysis was performed with an exact likelihood approach using a logistic regression with a random effect at the study level. Pooled proportions with 95% confidence intervals (CI) were reported. Fifteen studies were included, concerning a total of 283 jugulotympanic HNPGLs in 276 patients. Pooled regression proportions for initial, combined and salvage treatment were respectively 21%, 33% and 52% in radiosurgery studies and 4%, 0% and 64% in external beam radiotherapy studies. Pooled local control proportions for radiotherapy as initial, combined and salvage treatment ranged from 79% to 100%. Radiotherapy for jugulotympanic paragangliomas results in excellent local tumor control and therefore is a valuable treatment for these types of tumors. The effects of radiotherapy on regression of tumor volume remain ambiguous, although the data suggest that regression can

  13. Risk factors for low birth weight according to the multiple logistic regression model. A retrospective cohort study in José María Morelos municipality, Quintana Roo, Mexico.

    Science.gov (United States)

    Franco Monsreal, José; Tun Cobos, Miriam Del Ruby; Hernández Gómez, José Ricardo; Serralta Peraza, Lidia Esther Del Socorro

    2018-01-17

    Low birth weight has been an enigma for science over time. There have been many researches on its causes and its effects. Low birth weight is an indicator that predicts the probability of a child surviving. In fact, there is an exponential relationship between weight deficit, gestational age, and perinatal mortality. Multiple logistic regression is one of the most expressive and versatile statistical instruments available for the analysis of data in both clinical and epidemiology settings, as well as in public health. To assess in a multivariate fashion the importance of 17 independent variables in low birth weight (dependent variable) of children born in the Mayan municipality of José María Morelos, Quintana Roo, Mexico. Analytical observational epidemiological cohort study with retrospective temporality. Births that met the inclusion criteria occurred in the "Hospital Integral Jose Maria Morelos" of the Ministry of Health corresponding to the Maya municipality of Jose Maria Morelos during the period from August 1, 2014 to July 31, 2015. The total number of newborns recorded was 1,147; 84 of which (7.32%) had low birth weight. To estimate the independent association between the explanatory variables (potential risk factors) and the response variable, a multiple logistic regression analysis was performed using the IBM SPSS Statistics 22 software. In ascending numerical order values of odds ratio > 1 indicated the positive contribution of explanatory variables or possible risk factors: "unmarried" marital status (1.076, 95% confidence interval: 0.550 to 2.104); age at menarche ≤ 12 years (1.08, 95% confidence interval: 0.64 to 1.84); history of abortion(s) (1.14, 95% confidence interval: 0.44 to 2.93); maternal weight < 50 kg (1.51, 95% confidence interval: 0.83 to 2.76); number of prenatal consultations ≤ 5 (1.86, 95% confidence interval: 0.94 to 3.66); maternal age ≥ 36 years (3.5, 95% confidence interval: 0.40 to 30.47); maternal age ≤ 19 years (3

  14. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of Quantile Regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues of model validity and diagnostic tools. Each methodological aspect is explored and
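The property at the heart of quantile regression, that minimizing the check (pinball) loss recovers the conditional quantile, can be illustrated in the simplest intercept-only case (a sketch, not code from the book):

```python
import numpy as np

def pinball_loss(y, c, q):
    """Mean check (pinball) loss of predicting the constant c at level q."""
    r = y - c
    return np.mean(np.where(r >= 0, q * r, (q - 1) * r))

rng = np.random.default_rng(4)
y = rng.exponential(scale=2.0, size=5000)  # a right-skewed sample

# Minimizing the check loss over a grid of constants recovers the
# empirical q-quantile, the defining property behind quantile regression.
minimizers = {}
grid = np.linspace(y.min(), y.max(), 2001)
for q in (0.25, 0.5, 0.75):
    losses = [pinball_loss(y, c, q) for c in grid]
    minimizers[q] = float(grid[int(np.argmin(losses))])
    print(q, round(minimizers[q], 3), round(float(np.quantile(y, q)), 3))
```

Full quantile regression replaces the constant c with a linear predictor x'beta and minimizes the same loss over beta, usually via linear programming.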

  15. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Detection of high GS risk group prostate tumors by diffusion tensor imaging and logistic regression modelling.

    Science.gov (United States)

    Ertas, Gokhan

    2018-07-01

    To assess the value of joint evaluation of diffusion tensor imaging (DTI) measures by using logistic regression modelling to detect high GS risk group prostate tumors. Fifty tumors imaged using DTI on a 3 T MRI device were analyzed. Regions of interest focusing on the center of tumor foci and noncancerous tissue on the maps of mean diffusivity (MD) and fractional anisotropy (FA) were used to extract the minimum, the maximum and the mean measures. A measure ratio was computed by dividing the tumor measure by the noncancerous tissue measure. Logistic regression models were fitted for all possible pair combinations of the measures using 5-fold cross validation. Systematic differences are present for all MD measures and also for all FA measures in distinguishing the high risk tumors [GS ≥ 7(4 + 3)] from the low risk tumors [GS ≤ 7(3 + 4)]. Logistic regression modelling provides a favorable solution for the joint evaluations, easily adoptable in clinical practice. Copyright © 2018 Elsevier Inc. All rights reserved.
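The evaluation scheme described, logistic regression on pairs of tumor-to-noncancerous measure ratios with 5-fold cross-validation, can be sketched with scikit-learn; the MD/FA ratios below are simulated, and the direction of their association with risk is assumed purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 50  # fifty tumors, as in the abstract; the values below are simulated
high_risk = rng.binomial(1, 0.5, n)
# Simulated tumor/noncancerous ratios: MD ratio assumed lower and FA ratio
# assumed higher in high-risk tumors (illustrative effect directions).
md_ratio = 0.7 - 0.15 * high_risk + rng.normal(0, 0.08, n)
fa_ratio = 1.1 + 0.25 * high_risk + rng.normal(0, 0.12, n)

# One pair combination of measures evaluated jointly, with 5-fold CV.
X = np.column_stack([md_ratio, fa_ratio])
scores = cross_val_score(LogisticRegression(), X, high_risk,
                         cv=5, scoring="roc_auc")
print("per-fold AUC:", scores.round(2), "mean:", scores.mean().round(2))
```

The study repeats this fit for every pair of extracted measures and compares the cross-validated performance across pairs.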

  17. Bayesian linear regression with skew-symmetric error distributions with applications to survival analysis

    KAUST Repository

    Rubio, Francisco J.

    2016-02-09

    We study Bayesian linear regression models with skew-symmetric scale mixtures of normal error distributions. These kinds of models can be used to capture departures from the usual assumption of normality of the errors in terms of heavy tails and asymmetry. We propose a general noninformative prior structure for these regression models and show that the corresponding posterior distribution is proper under mild conditions. We extend these propriety results to cases where the response variables are censored. The latter scenario is of interest in the context of accelerated failure time models, which are relevant in survival analysis. We present a simulation study that demonstrates good frequentist properties of the posterior credible intervals associated with the proposed priors. This study also sheds some light on the trade-off between increased model flexibility and the risk of over-fitting. We illustrate the performance of the proposed models with real data. Although we focus on models with univariate response variables, we also present some extensions to the multivariate case in the Supporting Information.

  18. Genetic, Physical and Comparative Mapping of the Powdery Mildew Resistance Gene Pm21 Originating from Dasypyrum villosum

    Directory of Open Access Journals (Sweden)

    Huagang He

    2017-11-01

    Full Text Available Pm21, originating from the wheat wild relative Dasypyrum villosum, confers immunity to all known races of Blumeria graminis f. sp. tritici (Bgt) and has been widely utilized in wheat breeding. However, little is known about the genetic basis of the Pm21 locus. In the present study, four seedling-susceptible D. villosum lines (DvSus-1 ∼ DvSus-4) were identified from different natural populations. Based on the collinearity among the genomes of Brachypodium distachyon, Oryza, and Triticeae, a set of 25 gene-derived markers was developed that revealed polymorphisms between DvRes-1, which carries Pm21, and DvSus-1. Fine genetic mapping of Pm21 was conducted using an extremely large F2 segregating population derived from the cross DvSus-1/DvRes-1. Pm21 was then narrowed to a 0.01-cM genetic interval defined by the markers 6VS-08.4b and 6VS-10b. Three DNA markers, including a resistance gene analog marker, were confirmed to co-segregate with Pm21. Moreover, based on the susceptible deletion line Y18-S6, induced by ethyl methanesulfonate treatment of Yangmai 18, Pm21 was physically mapped into a similar interval. Comparative analysis revealed that the orthologous regions of the interval carrying Pm21 were narrowed to a 112.5 kb genomic region harboring 18 genes in Brachypodium, and a 23.2 kb region harboring two genes in rice, respectively. This study provides a high-density integrated map of the Pm21 locus, which will contribute to map-based cloning of Pm21.

  19. Effect of Interval to Definitive Breast Surgery on Clinical Presentation and Survival in Early-Stage Invasive Breast Cancer

    International Nuclear Information System (INIS)

    Vujovic, Olga; Yu, Edward; Cherian, Anil; Perera, Francisco; Dar, A. Rashid; Stitt, Larry; Hammond, A.

    2009-01-01

    Purpose: To examine the effect of clinical presentation and interval to breast surgery on local recurrence and survival in early-stage breast cancer. Methods and Materials: The data from 397 patients with Stage T1-T2N0 breast carcinoma treated with conservative surgery and breast radiotherapy between 1985 and 1992 were reviewed at the London Regional Cancer Program. The clinical presentation consisted of a mammogram finding or a palpable lump. The intervals from clinical presentation to definitive breast surgery used for analysis were 0-4, >4-12, and >12 weeks. The Kaplan-Meier estimates of the time to local recurrence, disease-free survival, and cause-specific survival were determined for the three groups. Cox regression analysis was used to evaluate the effect of clinical presentation and interval to definitive surgery on survival. Results: The median follow-up was 11.2 years. No statistically significant difference was found in local recurrence as a function of the interval to definitive surgery (p = .424). A significant difference was noted in disease-free survival (p = .040) and cause-specific survival (p = .006) with an interval of >12 weeks to definitive breast surgery. However, the interval to definitive surgery was dependent on the presentation for cause-specific survival, with a substantial effect for patients with a mammographic presentation and a negligible effect for patients with a lump presentation (interaction p = .041). Conclusion: The results of this study suggest that an interval of >12 weeks to breast surgery might be associated with decreased survival for patients with a mammographic presentation, but it appeared to have no effect on survival for patients presenting with a palpable breast lump.
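The Kaplan-Meier estimates used in this analysis can be computed with a few lines of numpy; the follow-up times and censoring flags below are a toy example, not the study's data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up time per patient; events: 1 = event, 0 = censored."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    event_times = np.unique(times[events == 1])  # steps occur at event times
    curve, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(times >= t)                   # still under follow-up
        d = np.sum((times == t) & (events == 1))       # events at time t
        s *= 1.0 - d / at_risk                         # Kaplan-Meier step
        curve.append((float(t), float(s)))
    return curve

# Toy cohort: times in years; 0 marks a censored observation.
curve = kaplan_meier(times=[1, 2, 2, 3, 5, 8, 9, 11],
                     events=[1, 1, 0, 1, 0, 1, 0, 1])
for t, s in curve:
    print(t, round(s, 3))
```

Censored patients drop out of the risk set without triggering a step, which is exactly how the method handles unequal follow-up; comparing curves between presentation groups and adjusting via Cox regression, as the study does, builds on this estimator.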

  20. Aneurysmal subarachnoid hemorrhage prognostic decision-making algorithm using classification and regression tree analysis.

    Science.gov (United States)

    Lo, Benjamin W Y; Fukuda, Hitoshi; Angle, Mark; Teitelbaum, Jeanne; Macdonald, R Loch; Farrokhyar, Forough; Thabane, Lehana; Levine, Mitchell A H

    2016-01-01

    Classification and regression tree analysis involves the creation of a decision tree by recursive partitioning of a dataset into more homogeneous subgroups. Thus far, there is scarce literature on using this technique to create clinical prediction tools for aneurysmal subarachnoid hemorrhage (SAH). The classification and regression tree analysis technique was applied to the multicenter Tirilazad database (3551 patients) in order to create the decision-making algorithm. In order to elucidate prognostic subgroups in aneurysmal SAH, neurologic, systemic, and demographic factors were taken into account. The dependent variable used for analysis was the dichotomized Glasgow Outcome Score at 3 months. Classification and regression tree analysis revealed seven prognostic subgroups. Neurological grade, occurrence of post-admission stroke, occurrence of post-admission fever, and age represented the explanatory nodes of this decision tree. Split-sample validation revealed classification accuracy of 79% for the training dataset and 77% for the testing dataset. In addition, the occurrence of fever at 1 week post-aneurysmal SAH is associated with increased odds of post-admission stroke (odds ratio: 1.83, 95% confidence interval: 1.56-2.45). A decision tree was generated, which serves as a prediction tool to guide bedside prognostication and clinical treatment decision making. This prognostic decision-making algorithm also shed light on the complex interactions between a number of risk factors in determining outcome after aneurysmal SAH.
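Recursive partitioning of this kind can be sketched with scikit-learn's CART implementation; the predictor names echo the abstract's explanatory nodes, but the data and effect sizes are entirely simulated:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
n = 3000
# Simulated predictors loosely echoing the abstract's tree nodes.
grade = rng.integers(1, 6, n)      # neurological grade 1-5
stroke = rng.binomial(1, 0.2, n)   # post-admission stroke
fever = rng.binomial(1, 0.3, n)    # post-admission fever
age = rng.normal(55, 12, n)

# Simulated outcome: worse grade, stroke, fever and older age raise risk.
logit = -4 + 0.8 * grade + 1.2 * stroke + 0.6 * fever + 0.03 * (age - 55)
poor_outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([grade, stroke, fever, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, poor_outcome,
                                          test_size=0.3, random_state=0)
# Limit depth so the partitions stay interpretable, as in a clinical tree.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
acc = tree.score(X_te, y_te)
print("held-out accuracy:", round(acc, 3))
```

The held-out split mirrors the study's split-sample validation; each leaf of the fitted tree corresponds to one prognostic subgroup.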

  1. Geometric Least Square Models for Deriving [0,1]-Valued Interval Weights from Interval Fuzzy Preference Relations Based on Multiplicative Transitivity

    Directory of Open Access Journals (Sweden)

    Xuan Yang

    2015-01-01

    Full Text Available This paper presents a geometric least square framework for deriving [0,1]-valued interval weights from interval fuzzy preference relations. By analyzing the relationship among [0,1]-valued interval weights, multiplicatively consistent interval judgments, and planes, a geometric least square model is developed to derive a normalized [0,1]-valued interval weight vector from an interval fuzzy preference relation. Based on the difference ratio between two interval fuzzy preference relations, a geometric average difference ratio between one interval fuzzy preference relation and the others is defined and employed to determine the relative importance weights for individual interval fuzzy preference relations. A geometric least square based approach is further put forward for solving group decision making problems. An individual decision numerical example and a group decision making problem with the selection of enterprise resource planning software products are furnished to illustrate the effectiveness and applicability of the proposed models.

  2. Using Vegetation Maps to Provide Information on Soil Distribution

    Science.gov (United States)

    José Ibáñez, Juan; Pérez-Gómez, Rufino; Brevik, Eric C.; Cerdà, Artemi

    2016-04-01

    Many different types of maps (geology, hydrology, soil, vegetation, etc.) are created to inventory natural resources. Each of these resources is mapped using a unique set of criteria, including scales and taxonomies. Past research has indicated that comparing the results of different but related maps (e.g., soil and geology maps) may aid in identifying deficiencies in those maps. Therefore, this study was undertaken in the Almería Province (Andalusia, Spain) to (i) compare the underlying map structures of soil and vegetation maps and (ii) investigate whether a vegetation map can provide useful soil information that was not shown on a soil map. To accomplish this, soil and vegetation maps were imported into ArcGIS 10.1 for spatial analysis. Results of the spatial analysis were exported to Microsoft Excel worksheets for statistical analyses to evaluate fits to linear and power law regression models. Vegetative units were grouped according to the driving forces that determined their presence or absence (P/A): (i) climatophilous (climate is the only determinant of P/A); (ii) lithologic-climate (climate and parent material determine the P/A of the potential natural vegetation, PNV); and (iii) edaphophylous (soil features determine PNV P/A). The rank abundance plots for both the soil and vegetation maps conformed to Willis or Hollow Curves, meaning the underlying structures of both maps were the same. Edaphophylous map units, which represent 58.5% of the vegetation units in the study area, did not show a good correlation with the soil map. Further investigation revealed that 87% of the edaphohygrophylous units (which demand more soil water than is supplied by other soil types in the surrounding landscape) were found in ramblas, ephemeral riverbeds that are not typically classified and mapped as soils in modern systems, even though they meet the definition of soil given by the most commonly used and most modern soil taxonomic systems. Furthermore, these edaphophylous map units tend to be islands of biodiversity.
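
    The fit of rank-abundance data to a power-law model, as evaluated here, amounts to linear regression in log-log space. A minimal sketch on synthetic hollow-curve data (the exponent 1.3 is arbitrary):

```python
import numpy as np

def fit_power_law(abundances):
    """Fit abundance ~ c * rank**(-alpha) by least squares in log-log space.
    Returns (alpha, c)."""
    ranked = np.sort(np.asarray(abundances, float))[::-1]
    ranks = np.arange(1, len(ranked) + 1)
    slope, intercept = np.polyfit(np.log(ranks), np.log(ranked), 1)
    return -slope, np.exp(intercept)

# Synthetic rank-abundance data following a hollow curve
ranks = np.arange(1, 51)
abund = 1000.0 * ranks ** -1.3
alpha, c = fit_power_law(abund)
print(alpha)  # ~1.3
```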

  3. Delay-Dependent Guaranteed Cost Control of an Interval System with Interval Time-Varying Delay

    Directory of Open Access Journals (Sweden)

    Xiao Min

    2009-01-01

    This paper concerns the problem of delay-dependent robust stability and guaranteed cost control for an interval system with time-varying delay. The interval system with matrix factorization is provided and leads to less conservative conclusions than solving a square root. The time-varying delay is assumed to belong to an interval, and the derivative of the interval time-varying delay is not restricted, which allows a fast time-varying delay and broadens the applicability of the method. Based on the Lyapunov-Krasovskii approach, a delay-dependent criterion for the existence of a state feedback controller, which guarantees the closed-loop system stability, the upper bound of the cost function, and the disturbance attenuation level for all admissible uncertainties as well as external perturbations, is proposed in terms of linear matrix inequalities (LMIs). The criterion is derived using free weighting matrices that can reduce the conservatism. The effectiveness has been verified in a numerical example, and the computed results are presented to validate the proposed design method.

  4. Linear response formula for piecewise expanding unimodal maps

    International Nuclear Information System (INIS)

    Baladi, Viviane; Smania, Daniel

    2008-01-01

    The average R(t) = ∫φ dμ_t of a smooth function φ with respect to the SRB measure μ_t of a smooth one-parameter family f_t of piecewise expanding interval maps is not always Lipschitz (Baladi 2007 Commun. Math. Phys. 275 839–59; Mazzolena 2007 Master's Thesis Rome 2, Tor Vergata). We prove that if f_t is tangent to the topological class of f, and if ∂_t f_t|_{t=0} = X ∘ f, then R(t) is differentiable at zero, and R'(0) coincides with the resummation proposed (Baladi 2007) of the (a priori divergent) series given by Ruelle's conjecture. In fact, we show that the map t ↦ μ_t is differentiable as a map into Radon measures. Linear response is violated if and only if f_t is transversal to the topological class of f.
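
    The quantity R(t) can be probed numerically by Birkhoff (time) averaging along a long orbit, which converges to the SRB average for such maps. The sketch below is purely illustrative and uses choices of our own, not the paper's: a slope-a tent map with a slightly below 2 (iterating the slope-2 tent map degenerates in floating point), the observable φ(x) = x, and a crude finite-difference quotient in place of the resummation studied in the paper.

```python
import numpy as np

def birkhoff_average(a, phi, n_iter=200_000, burn=1_000, x0=0.123456789):
    """Time average of phi along an orbit of the slope-a tent map
    f(x) = a * min(x, 1 - x), a numerical proxy for the SRB average."""
    x = x0
    total = 0.0
    for i in range(n_iter + burn):
        x = a * min(x, 1.0 - x)
        if i >= burn:
            total += phi(x)
    return total / n_iter

phi = lambda x: x
eps = 0.005
R_minus = birkhoff_average(1.99 - eps, phi)
R_plus = birkhoff_average(1.99 + eps, phi)
print((R_plus - R_minus) / (2 * eps))  # crude finite-difference response estimate
```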

  5. Genomic Dissection of Leaf Angle in Maize (Zea mays L.) Using a Four-Way Cross Mapping Population.

    Directory of Open Access Journals (Sweden)

    Junqiang Ding

    Increasing grain yield by the selection for optimal plant architecture has been the key focus in modern maize breeding. As a result, leaf angle, an important determinant of plant architecture, has been significantly improved to adapt to the ever-increasing plant density in maize production over the past several decades. To extend our understanding of the genetic mechanisms of leaf angle in maize, we developed the first four-way cross mapping population, consisting of 277 lines derived from four maize inbred lines with varied leaf angles. The four-way cross mapping population together with the four parental lines were evaluated for leaf angle in two environments. In this study, we report linkage maps built in the population and quantitative trait loci (QTL) for leaf angle detected by inclusive composite interval mapping (ICIM). ICIM applies a two-step strategy to effectively separate the cofactor selection from the interval mapping, which controls the background additive and dominant effects at the same time. A total of 14 leaf angle QTL were identified, four of which were further validated in near-isogenic lines (NILs). Seven of the 14 leaf angle QTL were found to overlap with published leaf angle QTL or genes, and the remaining QTL were unique to the four-way population. This study represents the first example of QTL mapping using a four-way cross population in maize, and demonstrates that the use of a specially designed four-way cross is effective in uncovering the basis of complex and polygenic traits such as leaf angle in maize.
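
    ICIM itself adjusts for background markers before scanning; as a much-simplified illustration of the interval-mapping idea, a single-marker regression scan with LOD = (n/2)·log10(RSS0/RSS1) on simulated genotypes looks like this (marker 20 is the planted QTL; the data are entirely synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 50                       # lines, markers
geno = rng.integers(0, 2, (n, m))    # biallelic marker scores (0/1)
qtl = 20                             # hypothetical causal marker index
pheno = 5.0 + 2.0 * geno[:, qtl] + rng.normal(0, 1, n)

def lod_scan(geno, pheno):
    """LOD score of a one-marker regression at each marker:
    LOD = (n/2) * log10(RSS_null / RSS_marker)."""
    n = len(pheno)
    rss0 = np.sum((pheno - pheno.mean()) ** 2)
    lods = np.empty(geno.shape[1])
    for j in range(geno.shape[1]):
        X = np.column_stack([np.ones(n), geno[:, j]])
        resid = pheno - X @ np.linalg.lstsq(X, pheno, rcond=None)[0]
        lods[j] = (n / 2) * np.log10(rss0 / np.sum(resid ** 2))
    return lods

lods = lod_scan(geno, pheno)
print(int(np.argmax(lods)))  # recovers the planted QTL at marker 20
```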

  6. Comparative high-resolution mapping of the wax inhibitors Iw1 and Iw2 in hexaploid wheat.

    Directory of Open Access Journals (Sweden)

    Haibin Wu

    The wax (glaucousness) on wheat leaves and stems is mainly controlled by two sets of genes: glaucousness loci (W1 and W2) and non-glaucousness loci (Iw1 and Iw2). The non-glaucousness (Iw) loci act as inhibitors of the glaucousness loci (W). High-resolution comparative genetic linkage maps of the wax inhibitors Iw1, originating from Triticum dicoccoides, and Iw2, from Aegilops tauschii, were developed by comparative genomics analyses of Brachypodium, sorghum and rice genomic sequences corresponding to the syntenic regions of the Iw loci in wheat. Eleven Iw1-linked and eight Iw2-linked EST markers were developed and mapped to linkage maps on the distal regions of chromosomes 2BS and 2DS, respectively. The Iw1 locus mapped within a 0.96 cM interval flanked by the BE498358 and CA499581 EST markers that are collinear with 122 kb, 202 kb, and 466 kb genomic regions in the Brachypodium 5S chromosome, the sorghum 6S chromosome and the rice 4S chromosome, respectively. The Iw2 locus was located in a 4.1 to 5.4-cM interval in chromosome 2DS that is flanked by the CJ886319 and CJ519831 EST markers, and this region is collinear with a 2.3 cM region spanning the Iw1 locus on chromosome 2BS. Both Iw1 and Iw2 co-segregated with the BF474014 and CJ876545 EST markers, indicating they are most likely orthologs on 2BS and 2DS. These high-resolution maps can serve as a framework for chromosome landing, physical mapping and map-based cloning of the wax inhibitors in wheat.

  7. A multiscale approach to mapping seabed sediments.

    Directory of Open Access Journals (Sweden)

    Benjamin Misiuk

    Benthic habitat maps, including maps of seabed sediments, have become critical spatial-decision support tools for marine ecological management and conservation. Despite the increasing recognition that environmental variables should be considered at multiple spatial scales, variables used in habitat mapping are often implemented at a single scale. The objective of this study was to evaluate the potential for using environmental variables at multiple scales for modelling and mapping seabed sediments. Sixteen environmental variables were derived from multibeam echosounder data collected near Qikiqtarjuaq, Nunavut, Canada at eight spatial scales ranging from 5 to 275 m, and were tested as predictor variables for modelling seabed sediment distributions. Using grain size data obtained from grab samples, we tested which scales of each predictor variable contributed most to sediment models. Results showed that the default scale was often not the best. Out of 129 potential scale-dependent variables, 11 were selected to model the additive log-ratio of mud and sand at five different scales, and 15 were selected to model the additive log-ratio of gravel and sand, also at five different scales. Boosted Regression Tree models that explained between 46.4 and 56.3% of statistical deviance produced multiscale predictions of mud, sand, and gravel that were correlated with cross-validated test data (Spearman's ρmud = 0.77, ρsand = 0.71, ρgravel = 0.58). Predictions of individual size fractions were classified to produce a map of seabed sediments that is useful for marine spatial planning. Based on the scale-dependence of variables in this study, we concluded that spatial scale consideration is at least as important as variable selection in seabed mapping.
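
    The additive log-ratio (alr) transform used for the mud/sand and gravel/sand models maps compositional fractions to unconstrained coordinates that regression models can predict, and its inverse maps predictions back to fractions summing to one. A minimal sketch on three hypothetical sediment fractions:

```python
import numpy as np

def alr(parts, ref=-1):
    """Additive log-ratio transform of a composition, using parts[ref]
    as the denominator (e.g. log(mud / sand), log(gravel / sand))."""
    parts = np.asarray(parts, float)
    return np.log(np.delete(parts, ref) / parts[ref])

def alr_inverse(coords):
    """Map alr coordinates back to a composition summing to 1."""
    expanded = np.append(np.exp(coords), 1.0)
    return expanded / expanded.sum()

sediment = np.array([0.2, 0.3, 0.5])   # mud, gravel, sand fractions
coords = alr(sediment)                  # ratios taken against sand
print(alr_inverse(coords))              # recovers the original composition
```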

  8. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    Science.gov (United States)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes besides target values for some attributes. Such techniques are considered comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. The values of the decision matrix and target-based attributes can be provided as intervals in some of such problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance of interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection for hip and knee prostheses are discussed. Preference degree-based ranking lists for subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.
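
    The paper's interval-distance and preference formulas are its own; as a stand-in, one common possibility-degree formula for comparing interval numbers a = [a1, a2] and b = [b1, b2] is p(a ≥ b) = min{max{(a2 − b1)/(l_a + l_b), 0}, 1}, where l_a and l_b are the interval widths:

```python
def preference_degree(a, b):
    """Degree to which interval a = (a1, a2) is preferred to b = (b1, b2),
    using a common possibility-degree formula (see lead-in)."""
    a1, a2 = a
    b1, b2 = b
    la, lb = a2 - a1, b2 - b1
    if la + lb == 0:                      # both degenerate (crisp) numbers
        return 1.0 if a1 > b1 else (0.5 if a1 == b1 else 0.0)
    return min(max((a2 - b1) / (la + lb), 0.0), 1.0)

print(preference_degree((1.0, 3.0), (2.0, 4.0)))  # 0.25
print(preference_degree((2.0, 4.0), (1.0, 3.0)))  # 0.75; the two sum to 1
```

Note the complementarity p(a ≥ b) + p(b ≥ a) = 1, which is what makes a pairwise preference matrix built from these degrees usable for ranking.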

  9. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces all indices of multicollinearity diagnoses, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, including all calculating processes of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster, and accurate statistical analysis is achieved through principal component regression with SPSS.
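
    The same principal component regression workflow the paper walks through in SPSS can be reproduced programmatically; in scikit-learn it is a pipeline of standardization, PCA, and ordinary least squares. The collinear data below are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Two nearly collinear predictors plus an independent one -> multicollinearity
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 3.0 * x1 + 1.5 * x3 + rng.normal(scale=0.5, size=n)

# Principal component regression: standardize, project onto the leading
# components, then regress y on the component scores
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print(f"R^2 = {pcr.score(X, y):.3f}")
```

Dropping the trailing component discards the near-degenerate direction created by the collinear pair, which is exactly how PCR sidesteps the unstable coefficients of ordinary least squares.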

  10. Using Logistic Regression to Predict the Probability of Debris Flows in Areas Burned by Wildfires, Southern California, 2003-2006

    Science.gov (United States)

    Rupert, Michael G.; Cannon, Susan H.; Gartner, Joseph E.; Michael, John A.; Helsel, Dennis R.

    2008-01-01

    Logistic regression was used to develop statistical models that can be used to predict the probability of debris flows in areas recently burned by wildfires by using data from 14 wildfires that burned in southern California during 2003-2006. Twenty-eight independent variables describing the basin morphology, burn severity, rainfall, and soil properties of 306 drainage basins located within those burned areas were evaluated. The models were developed as follows: (1) Basins that did and did not produce debris flows soon after the 2003 to 2006 fires were delineated from data in the National Elevation Dataset using a geographic information system; (2) Data describing the basin morphology, burn severity, rainfall, and soil properties were compiled for each basin. These data were then input to a statistics software package for analysis using logistic regression; and (3) Relations between the occurrence or absence of debris flows and the basin morphology, burn severity, rainfall, and soil properties were evaluated, and five multivariate logistic regression models were constructed. All possible combinations of independent variables were evaluated to determine which combinations produced the most effective models, and the multivariate models that best predicted the occurrence of debris flows were identified. Percentage of high burn severity and 3-hour peak rainfall intensity were significant variables in all models. Soil organic matter content and soil clay content were significant variables in all models except Model 5. Soil slope was a significant variable in all models except Model 4. The most suitable model can be selected from these five models on the basis of the availability of independent variables in the particular area of interest and field checking of probability maps. The multivariate logistic regression models can be entered into a geographic information system, and maps showing the probability of debris flows can be constructed for recently burned areas of southern California.
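
    A logistic regression of debris-flow occurrence on basin predictors, as in these models, can be sketched as follows; the two predictors and the coefficients used to simulate outcomes are synthetic stand-ins, not the paper's fitted values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for two of the paper's significant predictors
burn_severity = rng.uniform(0, 100, n)      # % of basin at high burn severity
peak_rain = rng.uniform(0, 30, n)           # 3-hour peak rainfall intensity
X = np.column_stack([burn_severity, peak_rain])
logit = -6.0 + 0.05 * burn_severity + 0.15 * peak_rain
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
# Probability of a debris flow for a heavily burned basin in an intense storm
p = model.predict_proba([[90.0, 25.0]])[0, 1]
print(p)
```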

  11. Correcting for Blood Arrival Time in Global Mean Regression Enhances Functional Connectivity Analysis of Resting State fMRI-BOLD Signals.

    Science.gov (United States)

    Erdoğan, Sinem B; Tong, Yunjie; Hocke, Lia M; Lindsey, Kimberly P; deB Frederick, Blaise

    2016-01-01

    Resting state functional connectivity analysis is a widely used method for mapping intrinsic functional organization of the brain. Global signal regression (GSR) is commonly employed for removing systemic global variance from resting state BOLD-fMRI data; however, recent studies have demonstrated that GSR may introduce spurious negative correlations within and between functional networks, calling into question the meaning of anticorrelations reported between some networks. In the present study, we propose that global signal from resting state fMRI is composed primarily of systemic low frequency oscillations (sLFOs) that propagate with cerebral blood circulation throughout the brain. We introduce a novel systemic noise removal strategy for resting state fMRI data, "dynamic global signal regression" (dGSR), which applies a voxel-specific optimal time delay to the global signal prior to regression from voxel-wise time series. We test our hypothesis on two functional systems that are suggested to be intrinsically organized into anticorrelated networks: the default mode network (DMN) and task positive network (TPN). We evaluate the efficacy of dGSR and compare its performance with the conventional "static" global regression (sGSR) method in terms of (i) explaining systemic variance in the data and (ii) enhancing specificity and sensitivity of functional connectivity measures. dGSR increases the amount of BOLD signal variance being modeled and removed relative to sGSR while reducing spurious negative correlations introduced in reference regions by sGSR, and attenuating inflated positive connectivity measures. We conclude that incorporating time delay information for sLFOs into global noise removal strategies is of crucial importance for optimal noise removal from resting state functional connectivity maps.
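
    The core of dGSR, finding a voxel-specific delay for the global signal before regressing it out, can be sketched with a cross-correlation search over a small lag window. The sinusoidal "global" signal and the 4-sample simulated blood-arrival delay below are toy choices, not the paper's data or exact algorithm:

```python
import numpy as np

def regress_out_delayed(voxel_ts, global_ts, max_lag=10):
    """Shift the global signal by the lag that maximizes its correlation
    with the voxel time series, then regress the shifted copy out."""
    lags = range(-max_lag, max_lag + 1)
    best = max(lags, key=lambda lag: abs(
        np.corrcoef(voxel_ts, np.roll(global_ts, lag))[0, 1]))
    g = np.roll(global_ts, best)
    X = np.column_stack([np.ones_like(g), g])
    beta, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
    return voxel_ts - X @ beta, best

rng = np.random.default_rng(0)
t = np.arange(400)
global_sig = np.sin(2 * np.pi * t / 50)           # slow systemic oscillation
voxel = 0.8 * np.roll(global_sig, 4) + 0.1 * rng.normal(size=len(t))
cleaned, lag = regress_out_delayed(voxel, global_sig)
print(lag)   # recovers the simulated 4-sample blood-arrival delay
```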

  12. Landslide susceptibility assessment using logistic regression and its comparison with a rock mass classification system, along a road section in the northern Himalayas (India)

    Science.gov (United States)

    Das, Iswar; Sahoo, Sashikant; van Westen, Cees; Stein, Alfred; Hack, Robert

    2010-02-01

    Landslide studies are commonly guided by ground knowledge and field measurements of rock strength and slope failure criteria. With increasing sophistication of GIS-based statistical methods, however, landslide susceptibility studies benefit from the integration of data collected from various sources and methods at different scales. This study presents a logistic regression method for landslide susceptibility mapping and verifies the result by comparing it with the geotechnical-based slope stability probability classification (SSPC) methodology. The study was carried out in a landslide-prone national highway road section in the northern Himalayas, India. Logistic regression model performance was assessed by the receiver operator characteristics (ROC) curve, showing an area under the curve equal to 0.83. Field validation of the SSPC results showed a correspondence of 72% between the high and very high susceptibility classes with present landslide occurrences. A spatial comparison of the two susceptibility maps revealed the significance of the geotechnical-based SSPC method as 90% of the area classified as high and very high susceptible zones by the logistic regression method corresponds to the high and very high class in the SSPC method. On the other hand, only 34% of the area classified as high and very high by the SSPC method falls in the high and very high classes of the logistic regression method. The underestimation by the logistic regression method can be attributed to the generalisation made by the statistical methods, so that a number of slopes existing in critical equilibrium condition might not be classified as high or very high susceptible zones.
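
    The ROC area under the curve used to assess the logistic model here can be computed with scikit-learn from observed occurrences and modelled susceptibility scores; the values below are toy data, not the study's:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Observed landslide occurrence (1/0) and modelled susceptibility scores
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.3, 0.2, 0.65, 0.4, 0.1, 0.55])
auc = roc_auc_score(y_true, y_score)
print(round(auc, 2))  # 0.96: one of 25 positive/negative pairs is mis-ordered
```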

  13. MR relaxometry in chronic liver diseases: Comparison of T1 mapping, T2 mapping, and diffusion-weighted imaging for assessing cirrhosis diagnosis and severity

    Energy Technology Data Exchange (ETDEWEB)

    Cassinotto, Christophe, E-mail: christophe.cassinotto@chu-bordeaux.fr [Department of Diagnostic and Interventional Imaging, Hôpital Haut-Lévêque, Centre Hospitalier Universitaire et Université de Bordeaux, 1 Avenue de Magellan, 33604 Pessac (France); INSERM U1053, Université Bordeaux, Bordeaux (France); Feldis, Matthieu, E-mail: matthieu.feldis@chu-bordeaux.fr [Department of Diagnostic and Interventional Imaging, Hôpital Haut-Lévêque, Centre Hospitalier Universitaire et Université de Bordeaux, 1 Avenue de Magellan, 33604 Pessac (France); Vergniol, Julien, E-mail: julien.vergniol@chu-bordeaux.fr [Centre D’investigation de la Fibrose Hépatique, Hôpital Haut-Lévêque, Centre Hospitalier Universitaire de Bordeaux, 1 Avenue de Magellan, 33604 Pessac (France); Mouries, Amaury, E-mail: amaury.mouries@chu-bordeaux.fr [Department of Diagnostic and Interventional Imaging, Hôpital Haut-Lévêque, Centre Hospitalier Universitaire et Université de Bordeaux, 1 Avenue de Magellan, 33604 Pessac (France); Cochet, Hubert, E-mail: hubert.cochet@chu-bordeaux.fr [Department of Diagnostic and Interventional Imaging, Hôpital Haut-Lévêque, Centre Hospitalier Universitaire et Université de Bordeaux, 1 Avenue de Magellan, 33604 Pessac (France); and others

    2015-08-15

    Highlights: • The use of MR to classify cirrhosis in different stages is a new interesting field. • We compared liver and spleen T1 mapping, T2 mapping and diffusion-weighted imaging. • MR relaxometry using liver T1 mapping is accurate for the diagnosis of cirrhosis. • Liver T1 mapping shows that values increase with the severity of cirrhosis. • Diffusion-weighted imaging is less accurate than T1 mapping while T2 mapping is not reliable. - Abstract: Background: MR relaxometry has been extensively studied in the field of cardiac diseases, but its contribution to liver imaging is unclear. We aimed to compare liver and spleen T1 mapping, T2 mapping, and diffusion-weighted MR imaging (DWI) for assessing the diagnosis and severity of cirrhosis. Methods: We prospectively included 129 patients with normal (n = 40) and cirrhotic livers (n = 89) from May to September 2014. Non-enhanced liver T1 mapping, splenic T2 mapping, and liver and splenic DWI were measured and compared for assessing cirrhosis severity using the Child-Pugh score, MELD score, and presence or not of large esophageal varices (EVs) and liver stiffness measurements using Fibroscan® as reference. Results: Liver T1 mapping was the only variable demonstrating significant differences between normal patients (500 ± 79 ms), Child-Pugh A patients (574 ± 84 ms) and Child-Pugh B/C patients (690 ± 147 ms; all p-values <0.00001). Liver T1 mapping had a significant correlation with the Child-Pugh score (Pearson's correlation coefficient of 0.46), MELD score (0.30), and liver stiffness measurement (0.52). Areas under the receiver operating characteristic curves of liver T1 mapping for the diagnosis of cirrhosis (0.85; 95% confidence interval (CI), 0.77–0.91), Child-Pugh B/C cirrhosis (0.87; 95% CI, 0.76–0.93), and large EVs (0.75; 95% CI, 0.63–0.83) were greater than that of spleen T2 mapping, liver and spleen DWI (all p-values < 0.01). Conclusion: Liver T1 mapping is a promising new diagnostic tool for assessing cirrhosis diagnosis and severity.
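
    T1 mapping rests on fitting a relaxation model to signals acquired at several inversion times. A minimal sketch with a signed inversion-recovery model S(TI) = A(1 − 2·exp(−TI/T1)) and a simulated cirrhotic-range T1 of 690 ms; the sequence timings and noise level are illustrative, not taken from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def inversion_recovery(ti, a, t1):
    """Signed inversion-recovery signal model S(TI) = A * (1 - 2*exp(-TI/T1))."""
    return a * (1.0 - 2.0 * np.exp(-ti / t1))

# Simulated inversion times (ms) and noisy signal for T1 = 690 ms
ti = np.array([50, 100, 200, 400, 800, 1600, 3200], float)
rng = np.random.default_rng(0)
signal = inversion_recovery(ti, 100.0, 690.0) + rng.normal(0, 1.0, ti.size)

(a_fit, t1_fit), _ = curve_fit(inversion_recovery, ti, signal, p0=(90.0, 500.0))
print(round(t1_fit))  # close to the simulated 690 ms
```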

  14. MR relaxometry in chronic liver diseases: Comparison of T1 mapping, T2 mapping, and diffusion-weighted imaging for assessing cirrhosis diagnosis and severity

    International Nuclear Information System (INIS)

    Cassinotto, Christophe; Feldis, Matthieu; Vergniol, Julien; Mouries, Amaury; Cochet, Hubert

    2015-01-01

    Highlights: • The use of MR to classify cirrhosis in different stages is a new interesting field. • We compared liver and spleen T1 mapping, T2 mapping and diffusion-weighted imaging. • MR relaxometry using liver T1 mapping is accurate for the diagnosis of cirrhosis. • Liver T1 mapping shows that values increase with the severity of cirrhosis. • Diffusion-weighted imaging is less accurate than T1 mapping while T2 mapping is not reliable. - Abstract: Background: MR relaxometry has been extensively studied in the field of cardiac diseases, but its contribution to liver imaging is unclear. We aimed to compare liver and spleen T1 mapping, T2 mapping, and diffusion-weighted MR imaging (DWI) for assessing the diagnosis and severity of cirrhosis. Methods: We prospectively included 129 patients with normal (n = 40) and cirrhotic livers (n = 89) from May to September 2014. Non-enhanced liver T1 mapping, splenic T2 mapping, and liver and splenic DWI were measured and compared for assessing cirrhosis severity using the Child-Pugh score, MELD score, and presence or not of large esophageal varices (EVs) and liver stiffness measurements using Fibroscan® as reference. Results: Liver T1 mapping was the only variable demonstrating significant differences between normal patients (500 ± 79 ms), Child-Pugh A patients (574 ± 84 ms) and Child-Pugh B/C patients (690 ± 147 ms; all p-values <0.00001). Liver T1 mapping had a significant correlation with the Child-Pugh score (Pearson's correlation coefficient of 0.46), MELD score (0.30), and liver stiffness measurement (0.52). Areas under the receiver operating characteristic curves of liver T1 mapping for the diagnosis of cirrhosis (0.85; 95% confidence interval (CI), 0.77–0.91), Child-Pugh B/C cirrhosis (0.87; 95% CI, 0.76–0.93), and large EVs (0.75; 95% CI, 0.63–0.83) were greater than that of spleen T2 mapping, liver and spleen DWI (all p-values < 0.01). Conclusion: Liver T1 mapping is a promising new diagnostic tool for assessing cirrhosis diagnosis and severity.

  15. High-resolution physical map for chromosome 16q12.1-q13, the Blau syndrome locus

    Directory of Open Access Journals (Sweden)

    Bonavita Gina

    2002-08-01

    Background: The Blau syndrome (MIM 186580), an autosomal dominant granulomatous disease, was previously mapped to chromosome 16p12-q21. However, inconsistent physical maps of the region, and consequently an unknown order of microsatellite markers, hampered us from further refining the genetic locus for the Blau syndrome. To address this problem, we constructed our own high-resolution physical map for the Blau susceptibility region. Results: We generated a high-resolution physical map that provides more than 90% coverage of a refined Blau susceptibility region. The map consists of four contigs of sequence tagged site-based bacterial artificial chromosomes with a total of 124 bacterial artificial chromosomes, and spans approximately 7.5 Mbp; however, three gaps still exist in this map with sizes of 425, 530 and 375 kbp, respectively, estimated from radiation hybrid mapping. Conclusions: Our high-resolution map will assist genetic studies of loci in the interval from D16S3080, near D16S409, and D16S408 (16q12.1 to 16q13).

  16. Recurrence interval analysis of trading volumes.

    Science.gov (United States)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
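
    Extracting the recurrence intervals τ above a threshold q reduces to differencing the exceedance indices. The log-normal volumes below are an i.i.d. stand-in for real trading volumes, so their intervals are geometrically distributed rather than power-law tailed and show no memory effects:

```python
import numpy as np

def recurrence_intervals(series, q):
    """Intervals (in samples) between successive values exceeding threshold q."""
    idx = np.flatnonzero(np.asarray(series) > q)
    return np.diff(idx)

rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # heavy-tailed proxy
q = np.quantile(volumes, 0.95)           # threshold at the 95th percentile
tau = recurrence_intervals(volumes, q)
print(tau.mean())   # for i.i.d. data the mean interval is ~1/0.05 = 20
```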

  17. Data-Driven Jump Detection Thresholds for Application in Jump Regressions

    Directory of Open Access Journals (Sweden)

    Robert Davies

    2018-03-01

    This paper develops a method to select the threshold in threshold-based jump detection methods. The method is motivated by an analysis of threshold-based jump detection methods in the context of jump-diffusion models. We show that, over the range of sampling frequencies a researcher is most likely to encounter, the usual in-fill asymptotics provide a poor guide for selecting the jump threshold. Because of this we develop a sample-based method. Our method estimates the number of jumps over a grid of thresholds and selects the optimal threshold at what we term the 'take-off' point in the estimated number of jumps. We show that this method consistently estimates the jumps and their indices as the sampling interval goes to zero. In several Monte Carlo studies we evaluate the performance of our method based on its ability to accurately locate jumps and its ability to distinguish between true jumps and large diffusive moves. In one of these Monte Carlo studies we evaluate the performance of our method in a jump regression context. Finally, we apply our method in two empirical studies. In one we estimate the number of jumps and report the jump threshold our method selects for three commonly used market indices. In the other empirical application we perform a series of jump regressions using our method to select the jump threshold.
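
    The 'take-off' selection rule itself is the paper's contribution; a toy version of the underlying thresholding idea, counting detections over a grid of thresholds on simulated returns with three planted jumps, looks like this (the stabilization criterion below is a simplification, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
returns = rng.normal(0, 0.01, n)          # purely diffusive moves
jump_idx = [250, 900, 1500]
returns[jump_idx] += 0.15                 # three simulated jumps

# Count detections over a grid of thresholds; the count stabilizes at the
# true number of jumps once the threshold clears the largest diffusive move
thresholds = np.linspace(0.02, 0.12, 51)
counts = np.array([(np.abs(returns) > u).sum() for u in thresholds])
flat = thresholds[np.argmax(counts == 3)]  # first threshold detecting exactly 3
detected = np.flatnonzero(np.abs(returns) > flat)
print(list(detected))   # recovers the planted jump indices
```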

  18. Expressing Intervals in Automated Service Negotiation

    Science.gov (United States)

    Clark, Kassidy P.; Warnier, Martijn; van Splunter, Sander; Brazier, Frances M. T.

    During automated negotiation of services between autonomous agents, utility functions are used to evaluate the terms of negotiation. These terms often include intervals of values which are prone to misinterpretation. It is often unclear if an interval embodies a continuum of real numbers or a subset of natural numbers. Furthermore, it is often unclear if an agent is expected to choose only one value, multiple values, a sub-interval or even multiple sub-intervals. Additional semantics are needed to clarify these issues. Normally, these semantics are stored in a domain ontology. However, ontologies are typically domain specific and static in nature. For dynamic environments, in which autonomous agents negotiate resources whose attributes and relationships change rapidly, semantics should be made explicit in the service negotiation. This paper identifies issues that are prone to misinterpretation and proposes a notation for expressing intervals. This notation is illustrated using an example in WS-Agreement.

  19. QTL Mapping of Genome Regions Controlling Manganese Uptake in Lentil Seed

    Directory of Open Access Journals (Sweden)

    Duygu Ates

    2018-05-01

    This study evaluated Mn concentration in the seeds of 120 RILs of lentil developed from the cross “CDC Redberry” × “ILL7502”. Micronutrient analysis using atomic absorption spectrometry indicated mean seed manganese (Mn) concentrations ranging from 8.5 to 26.8 mg/kg, based on replicated field trials grown at three locations in Turkey in 2012 and 2013. A linkage map of lentil was constructed and consisted of seven linkage groups with 5,385 DNA markers. The total map length was 973.1 cM, with an average distance between markers of 0.18 cM. A total of 6 QTL for Mn concentration were identified using composite interval mapping (CIM). All QTL were statistically significant and explained 15.3–24.1% of the phenotypic variation, with LOD scores ranging from 3.00 to 4.42. The high-density genetic map reported in this study will increase fundamental knowledge of the genome structure of lentil, and will be the basis for the development of micronutrient-enriched lentil genotypes to support biofortification efforts.

  20. QTL Mapping of Genome Regions Controlling Manganese Uptake in Lentil Seed.

    Science.gov (United States)

    Ates, Duygu; Aldemir, Secil; Yagmur, Bulent; Kahraman, Abdullah; Ozkan, Hakan; Vandenberg, Albert; Tanyolac, Muhammed Bahattin

    2018-05-04

    This study evaluated Mn concentration in the seeds of 120 RILs of lentil developed from the cross "CDC Redberry" × "ILL7502". Micronutrient analysis using atomic absorption spectrometry indicated mean seed manganese (Mn) concentrations ranging from 8.5 to 26.8 mg/kg, based on replicated field trials grown at three locations in Turkey in 2012 and 2013. A linkage map of lentil was constructed and consisted of seven linkage groups with 5,385 DNA markers. The total map length was 973.1 cM, with an average distance between markers of 0.18 cM. A total of 6 QTL for Mn concentration were identified using composite interval mapping (CIM). All QTL were statistically significant and explained 15.3-24.1% of the phenotypic variation, with LOD scores ranging from 3.00 to 4.42. The high-density genetic map reported in this study will increase fundamental knowledge of the genome structure of lentil, and will be the basis for the development of micronutrient-enriched lentil genotypes to support biofortification efforts. Copyright © 2018 Ates et al.