FBH1 Catalyzes Regression of Stalled Replication Forks
DEFF Research Database (Denmark)
Fugger, Kasper; Mistrik, Martin; Neelsen, Kai J
2015-01-01
DNA replication fork perturbation is a major challenge to the maintenance of genome integrity. It has been suggested that processing of stalled forks might involve fork regression, in which the fork reverses and the two nascent DNA strands anneal. Here, we show that FBH1 catalyzes regression of a model replication fork in vitro and promotes fork regression in vivo in response to replication perturbation. Cells respond to fork stalling by activating checkpoint responses requiring signaling through stress-activated protein kinases. Importantly, we show that FBH1, through its helicase activity… We propose a model whereby FBH1 promotes early checkpoint signaling by remodeling of stalled DNA replication forks.
Directory of Open Access Journals (Sweden)
George O Agogo
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess-zero reference measurements by a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
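The attenuation-and-calibration mechanics described in this abstract can be illustrated with a deliberately simplified one-part, linear sketch (the paper's two-part model for episodically consumed foods is more involved). All numbers and the measurement-error model below are invented for illustration: a noisy questionnaire measure attenuates the diet-outcome slope, and regressing an unbiased reference measure on the questionnaire recovers it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
beta_true = 0.5                               # true diet-outcome association

t = rng.normal(5.0, 1.0, n)                   # unobserved long-term intake
q = 0.5 + 0.8 * t + rng.normal(0, 1.0, n)     # error-prone questionnaire measure
r = t + rng.normal(0, 0.7, n)                 # unbiased reference (e.g. 24-h recall)
y = beta_true * t + rng.normal(0, 1.0, n)     # continuous outcome

def slope(x, resp):
    """OLS slope of resp on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, resp, rcond=None)[0][1]

beta_naive = slope(q, y)                      # attenuated by measurement error
lam = slope(q, r)                             # calibration model: E[T | Q]
t_hat = r.mean() + lam * (q - q.mean())       # predicted true intake
beta_cal = slope(t_hat, y)                    # regression-calibration corrected
```

With these settings the naive slope comes out near 0.24 while the calibrated slope recovers roughly 0.5, which is the attenuation correction the abstract refers to.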
FBH1 Catalyzes Regression of Stalled Replication Forks
Directory of Open Access Journals (Sweden)
Kasper Fugger
2015-03-01
DNA replication fork perturbation is a major challenge to the maintenance of genome integrity. It has been suggested that processing of stalled forks might involve fork regression, in which the fork reverses and the two nascent DNA strands anneal. Here, we show that FBH1 catalyzes regression of a model replication fork in vitro and promotes fork regression in vivo in response to replication perturbation. Cells respond to fork stalling by activating checkpoint responses requiring signaling through stress-activated protein kinases. Importantly, we show that FBH1, through its helicase activity, is required for early phosphorylation of ATM substrates such as CHK2 and CtIP as well as hyperphosphorylation of RPA. These phosphorylations occur prior to apparent DNA double-strand break formation. Furthermore, FBH1-dependent signaling promotes checkpoint control and preserves genome integrity. We propose a model whereby FBH1 promotes early checkpoint signaling by remodeling of stalled DNA replication forks.
Agogo, G.O.; Voet, van der H.; Veer, van 't P.; Ferrari, P.; Leenders, M.; Muller, D.C.; Sánchez-Cantalejo, E.; Bamia, C.; Braaten, T.; Knüppel, S.; Johansson, I.; Eeuwijk, van F.A.; Boshuizen, H.C.
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required…
Agogo, George O; der Voet, Hilko van; Veer, Pieter Van't; Ferrari, Pietro; Leenders, Max; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required…
Logistic regression applied to natural hazards: rare event logistic regression with replications
Directory of Open Access Journals (Sweden)
M. Guns
2012-06-01
Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulation into rare event logistic regression. This technique, called rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods, and overcomes some of the limitations of previous developments through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.
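The replication idea can be sketched as follows: refit a rare event logistic model on many random control subsamples and keep only predictors whose coefficients are stable across replications. This is a minimal numpy illustration of the concept, not the authors' exact procedure; the data, the |coefficient| > 0.5 stability criterion, and the Newton-Raphson fitter are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=(n, 2))                       # two candidate controlling factors
logit = -4.0 + 1.0 * x[:, 0]                      # only the first factor matters
y = rng.random(n) < 1 / (1 + np.exp(-logit))      # rare events (a few percent)

def fit_logistic(X, yv, iters=25, ridge=1e-6):
    """Logistic regression by Newton-Raphson (numpy only)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ w))
        H = (Xb * (p * (1 - p))[:, None]).T @ Xb + ridge * np.eye(Xb.shape[1])
        w += np.linalg.solve(H, Xb.T @ (yv - p))
    return w

events = np.flatnonzero(y)
controls = np.flatnonzero(~y)
reps = 200
keep = np.zeros(2)
for _ in range(reps):                             # replications: resample the controls
    sub = np.concatenate([events, rng.choice(controls, len(events), replace=False)])
    w = fit_logistic(x[sub], y[sub].astype(float))
    keep += np.abs(w[1:]) > 0.5                   # crude stability criterion
freq = keep / reps                                # selection frequency per factor
```

The true controlling factor is selected in almost every replication, while the noise factor rarely clears the criterion, which is the sample-dependence problem the replications are designed to expose.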
Machwe, Amrita; Karale, Rajashree; Xu, Xiaohua; Liu, Yilun; Orren, David K
2011-08-16
Cells cope with blockage of replication fork progression in a manner that allows DNA synthesis to be completed and genomic instability minimized. Models for resolution of blocked replication involve fork regression to form Holliday junction structures. The human RecQ helicases WRN and BLM (deficient in Werner and Bloom syndromes, respectively) are critical for maintaining genomic stability and thought to function in accurate resolution of replication blockage. Consistent with this notion, WRN and BLM localize to sites of blocked replication after certain DNA-damaging treatments and exhibit enhanced activity on replication and recombination intermediates. Here we examine the actions of WRN and BLM on a special Holliday junction substrate reflective of a regressed replication fork. Our results demonstrate that, in reactions requiring ATP hydrolysis, both WRN and BLM convert this Holliday junction substrate primarily to a four-stranded replication fork structure, suggesting they target the Holliday junction to initiate branch migration. In agreement, the Holliday junction binding protein RuvA inhibits the WRN- and BLM-mediated conversion reactions. Importantly, this conversion product is suitable for replication with its leading daughter strand readily extended by DNA polymerases. Furthermore, binding to and conversion of this Holliday junction are optimal at low MgCl2 concentrations, suggesting that WRN and BLM preferentially act on the square planar (open) conformation of Holliday junctions. Our findings suggest that, subsequent to fork regression events, WRN and/or BLM could re-establish functional replication forks to help overcome fork blockage. Such a function is highly consistent with phenotypes associated with WRN- and BLM-deficient cells.
Kalkavan, Halime; Sharma, Piyush; Kasper, Stefan; Helfrich, Iris; Pandyra, Aleksandra A.; Gassa, Asmae; Virchow, Isabel; Flatz, Lukas; Brandenburg, Tim; Namineni, Sukumar; Heikenwalder, Mathias; Höchst, Bastian; Knolle, Percy A.; Wollmann, Guido; von Laer, Dorothee; Drexler, Ingo; Rathbun, Jessica; Cannon, Paula M.; Scheu, Stefanie; Bauer, Jens; Chauhan, Jagat; Häussinger, Dieter; Willimsky, Gerald; Löhning, Max; Schadendorf, Dirk; Brandau, Sven; Schuler, Martin; Lang, Philipp A.; Lang, Karl S.
2017-01-01
Immune-mediated effector molecules can limit cancer growth, but lack of sustained immune activation in the tumour microenvironment restricts antitumour immunity. New therapeutic approaches that induce a strong and prolonged immune activation would represent a major immunotherapeutic advance. Here we show that the arenaviruses lymphocytic choriomeningitis virus (LCMV) and the clinically used Junin virus vaccine (Candid#1) preferentially replicate in tumour cells in a variety of murine and human cancer models. Viral replication leads to prolonged local immune activation, rapid regression of localized and metastatic cancers, and long-term disease control. Mechanistically, LCMV induces antitumour immunity, which depends on the recruitment of interferon-producing Ly6C+ monocytes and additionally enhances tumour-specific CD8+ T cells. In comparison with other clinically evaluated oncolytic viruses and to PD-1 blockade, LCMV treatment shows promising antitumoural benefits. In conclusion, therapeutically administered arenavirus replicates in cancer cells and induces tumour regression by enhancing local immune responses. PMID:28248314
Experimental design and priority PLS regression
DEFF Research Database (Denmark)
Høskuldsson, Agnar
1996-01-01
Rules, ideas and algorithms of the H-principle are used to analyse models that are derived from experimental design. Some of the basic ideas of experimental design are reviewed and related to the methodology of the H-principle. New methods of optimal response surfaces are developed…
Discovery and Replication of Gene Influences on Brain Structure Using LASSO Regression.
Kohannim, Omid; Hibar, Derrek P; Stein, Jason L; Jahanshad, Neda; Hua, Xue; Rajagopalan, Priya; Toga, Arthur W; Jack, Clifford R; Weiner, Michael W; de Zubicaray, Greig I; McMahon, Katie L; Hansell, Narelle K; Martin, Nicholas G; Wright, Margaret J; Thompson, Paul M
2012-01-01
We implemented least absolute shrinkage and selection operator (LASSO) regression to evaluate gene effects in genome-wide association studies (GWAS) of brain images, using an MRI-derived temporal lobe volume measure from 729 subjects scanned as part of the Alzheimer's Disease Neuroimaging Initiative (ADNI). Sparse groups of SNPs in individual genes were selected by LASSO, which identifies efficient sets of variants influencing the data. These SNPs were considered jointly when assessing their association with neuroimaging measures. We discovered 22 genes that passed genome-wide significance for influencing temporal lobe volume. This was a substantially greater number of significant genes compared to those found with standard, univariate GWAS. These top genes are all expressed in the brain and include genes previously related to brain function or neuropsychiatric disorders such as MACROD2, SORCS2, GRIN2B, MAGI2, NPAS3, CLSTN2, GABRG3, NRXN3, PRKAG2, GAS7, RBFOX1, ADARB2, CHD4, and CDH13. The top genes we identified with this method also displayed significant and widespread post hoc effects on voxelwise, tensor-based morphometry (TBM) maps of the temporal lobes. The most significantly associated gene was an autism susceptibility gene known as MACROD2. We were able to successfully replicate the effect of the MACROD2 gene in an independent cohort of 564 young, Australian healthy adult twins and siblings scanned with MRI (mean age: 23.8 ± 2.2 SD years). Our approach powerfully complements univariate techniques in detecting influences of genes on the living brain.
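The core mechanic of the study, LASSO selecting a sparse group of correlated predictors jointly rather than testing each one univariately, can be sketched with a small coordinate-descent implementation. The data here are synthetic stand-ins (200 subjects, 50 SNP-like predictors, three truly associated variants); the penalty value and dimensions are illustrative, not those used on the ADNI cohort.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 200, 50                       # subjects x SNP-like predictors
X = rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)       # standardize predictors
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]          # only three variants truly matter
y = X @ beta + rng.normal(0, 0.5, n)
y = y - y.mean()

def lasso_cd(X, y, lam, sweeps=50):
    """LASSO by cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(0) / n
    for _ in range(sweeps):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]            # partial residual
            rho = X[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

w = lasso_cd(X, y, lam=0.1)
selected = np.flatnonzero(w)         # the sparse set of selected variants
```

The soft-thresholding step is what zeroes out the weakly associated predictors, leaving an efficient set of variants that can then be assessed jointly, as the abstract describes.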
A Bayesian Nonparametric Causal Model for Regression Discontinuity Designs
Karabatsos, George; Walker, Stephen G.
2013-01-01
The regression discontinuity (RD) design (Thistlethwaite & Campbell, 1960; Cook, 2008) provides a framework to identify and estimate causal effects from a non-randomized design. Each subject of a RD design is assigned to the treatment (versus assignment to a non-treatment) whenever her/his observed value of the assignment variable equals or…
Regression Discontinuity Designs with Multiple Rating-Score Variables
Reardon, Sean F.; Robinson, Joseph P.
2012-01-01
In the absence of a randomized control trial, regression discontinuity (RD) designs can produce plausible estimates of the treatment effect on an outcome for individuals near a cutoff score. In the standard RD design, individuals with rating scores higher than some exogenously determined cutoff score are assigned to one treatment condition; those…
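The standard sharp RD estimate described here, comparing regression fits on either side of an exogenous cutoff, can be sketched numerically. The data-generating process, cutoff, and bandwidth below are invented for illustration: the treatment effect is the jump in the conditional mean at the cutoff, estimated from local linear fits.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
x = rng.uniform(-1, 1, n)                  # rating-score (assignment) variable
d = (x >= 0).astype(float)                 # treated iff score clears the cutoff
y = 1.0 + 0.8 * x + 2.0 * d + rng.normal(0, 0.3, n)   # true jump of 2 at x = 0

def boundary_fit(xs, ys):
    """Intercept at x = 0 of a linear fit (local linear regression)."""
    X = np.column_stack([np.ones_like(xs), xs])
    return np.linalg.lstsq(X, ys, rcond=None)[0][0]

h = 0.5                                    # bandwidth around the cutoff
left = (x < 0) & (x > -h)
right = (x >= 0) & (x < h)
effect = boundary_fit(x[right], y[right]) - boundary_fit(x[left], y[left])
```

Fitting separate lines on each side, rather than one pooled line, is what keeps the slope of the assignment variable from contaminating the estimated jump; the estimate here lands close to the true effect of 2.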
DESIGN SAMPLING AND REPLICATION ASSIGNMENT UNDER FIXED COMPUTING BUDGET
Institute of Scientific and Technical Information of China (English)
Loo Hay LEE; Ek Peng CHEW
2005-01-01
For many real-world problems, when the design space is huge and unstructured and time-consuming simulation is needed to estimate the performance measure, it is important to decide how many designs to sample and how long to run each design alternative, given only a fixed amount of computing time. In this paper, we present a simulation study on how the distribution of the performance measures and the distribution of the estimation errors (noise) affect this decision. From the analysis, it is observed that when the underlying distribution of the noise is bounded and there is a high chance of obtaining the smallest noise, the decision will be to sample as many designs as possible; but if the noise is unbounded, it becomes important to reduce the noise level first by assigning more replications to each design. On the other hand, if the distribution of the performance measure indicates a high chance of getting good designs, the suggestion is also to reduce the noise level; otherwise, we need to sample more designs to increase the chances of finding good designs. For the special case when the distributions of both the performance measures and the noise are normal, we are able to estimate the number of designs to sample and the number of replications to run in order to obtain the best performance.
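The trade-off the paper studies, spending a fixed budget on more designs versus more replications per design, can be demonstrated with a small Monte Carlo sketch. The normal/normal setting and all parameter values below are illustrative assumptions, not the paper's experiments; lower true performance is better.

```python
import numpy as np

rng = np.random.default_rng(11)

def selected_quality(n_designs, n_reps, noise_sd, budget=1000, trials=2000):
    """Average true performance (lower is better) of the design whose
    noisy sample average looks best, under a fixed simulation budget."""
    assert n_designs * n_reps == budget
    picked = np.empty(trials)
    for t in range(trials):
        mu = rng.normal(0.0, 1.0, n_designs)    # true performances of sampled designs
        est = mu + rng.normal(0.0, noise_sd / np.sqrt(n_reps), n_designs)
        picked[t] = mu[np.argmin(est)]          # design chosen from noisy averages
    return picked.mean()

many_low = selected_quality(100, 10, noise_sd=0.1)   # many designs, few replications
few_low = selected_quality(10, 100, noise_sd=0.1)    # few designs, many replications
many_high = selected_quality(100, 10, noise_sd=5.0)  # heavy noise, few replications
```

With low noise, sampling many designs wins because the noisy averages still rank the designs correctly; heavy noise degrades the many-designs allocation, which is the regime where the paper recommends shifting budget toward replications.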
Regression analysis application for designing the vibration dampers
Directory of Open Access Journals (Sweden)
A. V. Ivanov
2014-01-01
Multi-frequency vibration dampers protect overhead power lines and fiber-optic communication channels against Aeolian vibration. For maximum efficiency, the natural frequencies of a damper should be evenly distributed over the entire operating frequency range from 3 to 150 Hz. The traditional approach to damper design is to investigate damper behavior using full-scale models, draw conclusions about the damper's capabilities, and make design changes to achieve the required natural frequencies. This article describes a direct optimization method for designing dampers. The method yields a clear-cut definition of the geometrical and mass parameters of a damper from its natural frequencies, and is based on an active plan and designed experiment. Using regression analysis, a second-order polynomial regression model is obtained that establishes a unique relation between the input design parameters (element dimensions and cargo weights) and the output parameters (natural frequencies). Different damper design problems are considered using the developed regression models. The results show that satisfactory accuracy of the mathematical models relating the input design parameters to the output ones is achieved. Depending on the number of input parameters and the nature of the restrictions, the statement of the design objective, including an optimization objective, can differ when restrictions on design parameters must meet conflicting requirements. The proposed optimization method for the direct design problem allows the damper element dimensions to be determined directly for any natural frequencies and, at the initial stage of analysis, using methods of nonlinear programming, reveals problems that have no solution. The developed approach can be successfully applied to the design of various mechanical systems with complicated nonlinear interactions between the input and output parameters.
Simulation Experiments in Practice : Statistical Design and Regression Analysis
Kleijnen, J.P.C.
2007-01-01
In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic t
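The contrast drawn here, one-factor-at-a-time changes versus a designed experiment analyzed by regression, is easy to show concretely: a one-factor-at-a-time sweep cannot estimate an interaction, while a replicated 2x2 factorial with a regression fit recovers it. The factor effects and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
reps = 5
A, B = np.meshgrid([-1.0, 1.0], [-1.0, 1.0])
A = np.repeat(A.ravel(), reps)             # 2^2 factorial, 5 replicates per cell
B = np.repeat(B.ravel(), reps)
# Simulated response with a genuine A*B interaction of 1.5
y = 1.0 + 2.0 * A + 3.0 * B + 1.5 * A * B + rng.normal(0, 0.5, A.size)

X = np.column_stack([np.ones_like(A), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # [intercept, A, B, A*B]
```

Because the coded factor columns are orthogonal, each regression coefficient is estimated independently and the interaction term is identified, which a one-factor-at-a-time plan confounds with the main effects.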
The Analysis of the Regression-Discontinuity Design in R
Thoemmes, Felix; Liao, Wang; Jin, Ze
2017-01-01
This article describes the analysis of regression-discontinuity designs (RDDs) using the R packages rdd, rdrobust, and rddtools. We discuss similarities and differences between these packages and provide directions on how to use them effectively. We use real data from the Carolina Abecedarian Project to show how an analysis of an RDD can be…
Chi, Olivia L.; Dow, Aaron W.
2014-01-01
This study focuses on how matching, a method of preprocessing data prior to estimation and analysis, can be used to reduce imbalance between treatment and control group in regression discontinuity design. To examine the effects of academic probation on student outcomes, researchers replicate and expand upon research conducted by Lindo, Sanders,…
Feest, Uljana
2016-08-01
This paper revisits the debate between Harry Collins and Allan Franklin, concerning the experimenters' regress. Focusing my attention on a case study from recent psychology (regarding experimental evidence for the existence of a Mozart Effect), I argue that Franklin is right to highlight the role of epistemological strategies in scientific practice, but that his account does not sufficiently appreciate Collins's point about the importance of tacit knowledge in experimental practice. In turn, Collins rightly highlights the epistemic uncertainty (and skepticism) surrounding much experimental research. However, I will argue that his analysis of tacit knowledge fails to elucidate the reasons why scientists often are (and should be) skeptical of other researchers' experimental results. I will present an analysis of tacit knowledge in experimental research that not only answers to this desideratum, but also shows how such skepticism can in fact be a vital enabling factor for the dynamic processes of experimental knowledge generation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Residuals and outliers in replicate design crossover studies.
Schall, Robert; Endrenyi, Laszlo; Ring, Arne
2010-07-01
Outliers in bioequivalence trials may arise through various mechanisms, requiring different interpretation and handling of such data points. For example, regulatory authorities might permit exclusion from analysis of outliers caused by product or process failure, while exclusion of outliers caused by subject-by-treatment interaction generally is not acceptable. In standard 2 x 2 crossover studies it is not possible to distinguish between relevant types of outliers based on statistical criteria alone. However, in replicate design (2-treatment, 4-period) crossover studies three types of outliers can be distinguished: (i) Subject outliers are usually unproblematic, at least regarding the analysis of bioequivalence, and may require no further action; (ii) Subject-by-formulation outliers may affect the outcome of the bioequivalence test but generally cannot simply be removed from analysis; and (iii) Removal of single-data-point outliers from analysis may be justified in certain cases. As a very simple but effective diagnostic tool for the identification and classification of outliers in replicate design crossover studies we propose to calculate and plot three types of residual corresponding to the three different types of outliers that can be distinguished. The residuals are obtained from four mutually orthogonal linear contrasts of the four data points associated with each subject. If preferred, outlier tests can be applied to the resulting sets of residuals after suitable standardization.
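The diagnostic proposed in the abstract, four mutually orthogonal contrasts of each subject's four data points, can be sketched numerically. The simulated log-scale data, the size of the injected outliers, and the |z| > 3 flagging cutoff are all illustrative assumptions; the period ordering is taken as T1, R1, T2, R2.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 24
subj = rng.normal(0.0, 0.3, n)                           # subject effects (log scale)
base = subj[:, None] + np.array([0.05, 0.0, 0.05, 0.0])  # small formulation effect
data = base + rng.normal(0, 0.15, (n, 4))                # columns: T1, R1, T2, R2
data[3] += 2.5                                           # subject outlier (all 4 points)
data[7, 0] += 2.0                                        # single-data-point outlier (T1)

# Four mutually orthogonal within-subject contrasts:
c_subj = data @ np.array([1.0, 1.0, 1.0, 1.0]) / 4       # subject level
c_form = data @ np.array([1.0, -1.0, 1.0, -1.0]) / 2     # subject-by-formulation
c_test = data @ np.array([1.0, 0.0, -1.0, 0.0])          # T1 - T2 (single points)
c_ref = data @ np.array([0.0, 1.0, 0.0, -1.0])           # R1 - R2 (single points)

def z(c):
    """Standardized residuals of one contrast across subjects."""
    return (c - c.mean()) / c.std(ddof=1)
```

Plotting or thresholding the standardized contrasts separates the outlier types: the whole-subject outlier shows up only in the subject-level residuals, while the single-data-point outlier shows up in the within-formulation difference, matching the classification logic of the abstract.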
Torregrosa-Muñumer, Rubén; Goffart, Steffi; Haikonen, Juha A; Pohjoismäki, Jaakko L O
2015-11-15
Mitochondrial DNA is prone to damage by various intrinsic as well as environmental stressors. DNA damage can in turn cause problems for replication, resulting in replication stalling and double-strand breaks, which are suspected to be the leading cause of pathological mtDNA rearrangements. In this study, we exposed cells to subtle levels of oxidative stress or UV radiation and followed their effects on mtDNA maintenance. Although the damage did not influence mtDNA copy number, we detected a massive accumulation of RNA:DNA hybrid-containing replication intermediates, followed by an increase in cruciform DNA molecules, as well as in bidirectional replication initiation outside of the main replication origin, OH. Our results suggest that mitochondria maintain two different types of replication as an adaptation to different cellular environments; the RNA:DNA hybrid-involving replication mode maintains mtDNA integrity in tissues with low oxidative stress, and the potentially more error tolerant conventional strand-coupled replication operates when stress is high.
Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs
Directory of Open Access Journals (Sweden)
Faqir Muhammad
2007-01-01
In this study, a comparison is made among different sampling designs, using the HIES data of the North West Frontier Province (NWFP) for 2001-02 and 1998-99, collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators is also examined using the bootstrap and the jackknife. A two-stage stratified random sample design is adopted by the HIES. In the first stage, enumeration blocks and villages are treated as first-stage Primary Sampling Units (PSUs), and the sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSUs), i.e. households, are selected by systematic sampling with a random start. The HIES used a single study variable. We compared the HIES technique with several other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). The jackknife and bootstrap were used for variance estimation. Simple random sampling with sample sizes of 462 to 561 gave moderate variances by both the jackknife and the bootstrap. Applying systematic sampling, we obtained moderate variance with a sample size of 467. In the jackknife with systematic sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator became greater than that of the regression estimator. The most efficient design turned out to be ranked set sampling: with the jackknife and bootstrap, it gave the minimum variance even at the smallest sample size (467). Two-phase sampling performed poorly. The multi-stage sampling applied by the HIES gave large variances, especially when used with a single study variable.
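The estimators compared in this abstract can be sketched on synthetic data: ratio and regression estimators of mean income using household size as the auxiliary variable, with a jackknife variance estimate. The population model and sample size below are invented for illustration, not the HIES figures.

```python
import numpy as np

rng = np.random.default_rng(13)
N = 10000
x_pop = rng.poisson(4.0, N) + 1                 # household size (auxiliary variable)
y_pop = 500.0 * x_pop + rng.normal(0, 800, N)   # household income (study variable)
X_bar = x_pop.mean()                            # population mean of x, assumed known

n = 200
idx = rng.choice(N, n, replace=False)           # simple random sample
x, y = x_pop[idx].astype(float), y_pop[idx]

def reg_estimate(x, y):
    """Regression estimator of the population mean of y."""
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y.mean() + b * (X_bar - x.mean())

ratio_est = y.mean() * X_bar / x.mean()         # ratio estimator
reg_est = reg_estimate(x, y)                    # regression estimator

# Jackknife variance of the regression estimator (leave-one-out replicates)
loo = np.array([reg_estimate(np.delete(x, i), np.delete(y, i)) for i in range(n)])
jack_se = np.sqrt((n - 1) / n * ((loo - loo.mean()) ** 2).sum())
srs_se = y.std(ddof=1) / np.sqrt(n)             # SE of the plain sample mean
```

Because income is strongly correlated with household size, both auxiliary-variable estimators track the population mean closely, and the jackknife SE of the regression estimator comes out well below the plain sample-mean SE.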
Meaney, Christopher; Moineddin, Rahim
2014-01-24
In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to those of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the…
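The two-sample Monte Carlo design described above can be sketched in miniature: simulate beta-distributed responses in two groups with a pre-specified mean difference, estimate the difference, and track bias and interval coverage. The group means, dispersion, sample size, and replication count below are illustrative, not the study's full grid; only the difference-of-means (linear regression) estimator is shown.

```python
import numpy as np

rng = np.random.default_rng(17)
mu0, mu1, phi = 0.40, 0.50, 10.0        # group means and common dispersion
n, reps = 25, 2000
true_diff = mu1 - mu0

def beta_sample(mu, n):
    """Draw n beta responses with mean mu and dispersion phi."""
    return rng.beta(mu * phi, (1 - mu) * phi, n)

est = np.empty(reps)
cover = 0
for r in range(reps):
    y0, y1 = beta_sample(mu0, n), beta_sample(mu1, n)
    est[r] = y1.mean() - y0.mean()      # two-group linear regression estimate
    se = np.sqrt(y0.var(ddof=1) / n + y1.var(ddof=1) / n)
    cover += abs(est[r] - true_diff) < 1.96 * se

bias = est.mean() - true_diff
coverage = cover / reps
```

Consistent with the constant-dispersion case reported in the abstract, the simple difference-of-means estimator is essentially unbiased here and its normal-theory interval covers at close to the nominal rate.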
Simulation Experiments in Practice : Statistical Design and Regression Analysis
Kleijnen, J.P.C.
2007-01-01
In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obta
Replication protocol analysis: a method for the study of real-world design thinking
DEFF Research Database (Denmark)
Galle, Per; Kovacs, L. B.
1996-01-01
Given the brief of an architectural competition on site planning, and the design awarded the first prize, the first author (trained as an architect but not a participant in the competition) produced a line of reasoning that might have led from brief to design. In the paper, such ‘design replication’ is refined into a method called ‘replication protocol analysis’ (RPA), and discussed from a methodological perspective of design research. It is argued that for the study of real-world design thinking this method offers distinct advantages over traditional ‘design protocol analysis’, which seeks to capture the designer’s authentic line of reasoning. To illustrate how RPA can be used, the site planning case is briefly presented, and part of the replicated line of reasoning analysed. One result of the analysis is a glimpse of a ‘logic of design’; another is an insight which sheds new light on Darke’s classical…
Design and analysis of experiments classical and regression approaches with SAS
Onyiah, Leonard C
2008-01-01
Introductory Statistical Inference and Regression Analysis Elementary Statistical Inference Regression Analysis Experiments, the Completely Randomized Design (CRD)-Classical and Regression Approaches Experiments Experiments to Compare Treatments Some Basic Ideas Requirements of a Good Experiment One-Way Experimental Layout or the CRD: Design and Analysis Analysis of Experimental Data (Fixed Effects Model) Expected Values for the Sums of Squares The Analysis of Variance (ANOVA) Table Follow-Up Analysis to Check fo
Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D.
2013-01-01
In a traditional regression-discontinuity design (RDD), units are assigned to treatment on the basis of a cutoff score and a continuous assignment variable. The treatment effect is measured at a single cutoff location along the assignment variable. This article introduces the multivariate regression-discontinuity design (MRDD), where multiple…
Directory of Open Access Journals (Sweden)
Gerritsen Winald R
2008-01-01
Metastatic osteosarcoma (OS) has a very poor prognosis, and new treatments are therefore wanted. The conditionally replicative adenovirus Ad5-Δ24RGD has shown promising anti-tumor effects on local cancers, including OS. The purpose of this study was to determine whether intravenous administration of Ad5-Δ24RGD could suppress the growth of human OS lung metastases. Mice bearing SaOs-lm7 OS lung metastases were treated with Ad5-Δ24RGD at weeks 1, 2 and 3 or weeks 5, 6 and 7 after tumor cell injection. Virus treatment at weeks 1-3 did not cause a statistically significant effect on lung weight or total body weight; however, the number of macroscopic lung tumor nodules was reduced from a median of >158 in PBS-treated control mice to 58 in Ad5-Δ24RGD-treated mice (p = 0.15). Moreover, mice treated at weeks 5-7 showed a significantly reduced lung weight (decrease of tumor mass, p 149, p = 0.12) compared to PBS-treated control animals. Adenovirus hexon expression was detected in lung tumor nodules at sacrifice, three weeks after the last intravenous adenovirus administration, suggesting ongoing viral infection. These findings suggest that systemic administration of Ad5-Δ24RGD might be a promising new treatment strategy for metastatic osteosarcoma.
A brief introduction to regression designs and mixed-effects modelling by a recent convert
Balling, Laura Winther
2008-01-01
This article discusses the advantages of multiple regression designs over the factorial designs traditionally used in many psycholinguistic experiments. It is shown that regression designs are typically more informative, statistically more powerful and better suited to the analysis of naturalistic tasks. The advantages of including both fixed and random effects are demonstrated with reference to linear mixed-effects models, and problems of collinearity, variable distribution and variable selection…
Shobo, Yetty; Wong, Jen D.; Bell, Angie
2014-01-01
Regression discontinuity (RD), an "as good as randomized," research design is increasingly prominent in education research in recent years; the design gets eligible quasi-experimental designs as close as possible to experimental designs by using a stated threshold on a continuous baseline variable to assign individuals to a…
A regression-based Kansei engineering system based on form feature lines for product form design
Directory of Open Access Journals (Sweden)
Yan Xiong
2016-06-01
When developing new products, it is important for designers to understand users’ perceptions and to develop product forms that match those perceptions. To establish an effective mapping between users’ perceptions and product design features, this study presents a regression-based Kansei engineering system based on form feature lines for product form design. First, according to the characteristics of design concept representation, product form feature lines were defined as the product form features. Second, Kansei words were chosen to describe image perceptions of product samples. Then, multiple linear regression and support vector regression were used to construct models that predict users’ image perceptions. Using mobile phones as experimental samples, Kansei prediction models were established based on the front-view form feature lines of the samples. The experimental results showed that both prediction models adapted well to the data, but the predictive performance of the support vector regression model was better, making support vector regression more suitable for form regression prediction. The case results show that the proposed method provides an effective means for designers to manipulate product features as a whole, and that it can optimize the Kansei model and improve practical value.
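The multiple linear regression half of the mapping described above can be sketched numerically: numeric form-feature-line parameters in, a predicted Kansei rating out. The feature names (curvature, aspect ratio, taper), the simulated rating model, and all coefficients below are hypothetical illustrations, not the study's measured data.

```python
import numpy as np

rng = np.random.default_rng(21)
n = 60
# Hypothetical form-feature-line parameters: curvature, aspect ratio, taper
F = rng.uniform(0, 1, (n, 3))
# Simulated mean rating on a Kansei word pair (e.g. "soft - hard")
rating = 2.0 + 1.5 * F[:, 0] - 2.0 * F[:, 1] + 0.8 * F[:, 2] + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), F])               # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)  # [intercept, curvature, aspect, taper]
pred = X @ coef
r2 = 1 - ((rating - pred) ** 2).sum() / ((rating - rating.mean()) ** 2).sum()
```

A fitted model like this lets a designer read off how each feature-line parameter shifts the predicted perception; the support vector regression variant in the study replaces the linear fit with a kernel regressor over the same features.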
Kim, Dal Young; Atasheva, Svetlana; McAuley, Alexander J; Plante, Jessica A; Frolova, Elena I; Beasley, David W C; Frolov, Ilya
2014-07-22
Since the development of infectious cDNA clones of viral RNA genomes and the means of delivery of the in vitro-synthesized RNA into cells, alphaviruses have become an attractive system for expression of heterologous genetic information. Alphaviruses replicate exclusively in the cytoplasm, and their genetic material cannot recombine with cellular DNA. Alphavirus genome-based, self-replicating RNAs (replicons) are widely used vectors for expression of heterologous proteins. Their current design relies on replacement of structural genes, encoded by subgenomic RNAs (SG RNA), with heterologous sequences of interest. The SG RNA is transcribed from a promoter located in the alphavirus-specific RNA replication intermediate and is not further amplified. In this study, we have applied the accumulated knowledge of the mechanism of alphavirus replication and promoter structures, in particular, to increase the expression level of heterologous proteins from Venezuelan equine encephalitis virus (VEEV)-based replicons. During VEEV infection, replication enzymes are produced in excess to RNA replication intermediates, and a large fraction of them are not involved in RNA synthesis. The newly designed constructs encode SG RNAs, which are not only transcribed from the SG promoter, but are additionally amplified by the previously underused VEEV replication enzymes. These replicons produce SG RNAs and encoded proteins of interest 10- to 50-fold more efficiently than those using a traditional design. A modified replicon encoding West Nile virus (WNV) premembrane and envelope proteins efficiently produced subviral particles and, after a single immunization, elicited high titers of neutralizing antibodies, which protected mice from lethal challenge with WNV.
Mandell, Marvin B.
2008-01-01
Both true experiments and regression discontinuity (RD) designs produce unbiased estimates of effects. However, true experiments are, of course, often criticized on equity grounds, whereas RD designs entail sacrifices in terms of statistical precision. In this article, a hybrid of true experiments and RD designs is considered. This hybrid entails…
Antretter, Elfi; Dunkel, Dirk; Osvath, Peter; Voros, Viktor; Fekete, Sandor; Haring, Christian
2006-06-01
The prospective investigation of repetitive nonfatal suicidal behavior is associated with two methodological problems. Due to the commonly used definitions of nonfatal suicidal behavior, clinical samples usually consist of patients with a considerable between-person variability. Second, repeated nonfatal suicidal episodes of the same subjects are likely to be correlated. We examined three regression techniques to comparatively evaluate their efficiency in addressing the given methodological problems. Repeated episodes of nonfatal suicidal behavior were assessed in two independent patient samples during a 2-year follow-up period. The first regression design modeled repetitive nonfatal suicidal behavior as a summary measure. The second regression model treated repeated episodes of the same subject as independent events. The third regression model represented a hierarchical linear model. The estimated mean effects of the first model were likely to be nonrepresentative for a considerable part of the study subjects. The second regression design overemphasized the impact of the predictor variables. The hierarchical linear model most appropriately accounted for the heterogeneity of the samples and the correlated data structure. The nonhierarchical regression designs did not provide appropriate statistical models for the prospective investigation of repetitive nonfatal suicidal behavior. Multilevel modeling provides a convenient alternative.
Stahel-Donoho kernel estimation for fixed design nonparametric regression models
Institute of Scientific and Technical Information of China (English)
LIN; Lu
2006-01-01
This paper reports a robust kernel estimation for fixed design nonparametric regression models. A Stahel-Donoho kernel estimation is introduced, in which the weight functions depend on both the depths of the data and the distances between the design points and the estimation points. Based on a local approximation, a computational technique is given to approximate the incomputable depths of the errors. As a result, the new estimator is computationally efficient. The proposed estimator attains a high breakdown point and has desirable asymptotic behaviors such as asymptotic normality and convergence in mean squared error. Unlike the depth-weighted estimator for parametric regression models, this depth-weighted nonparametric estimator has a simple variance structure, which allows its efficiency to be compared with that of the original estimator. Simulations show that the new method can smooth the regression estimation and achieve a desirable balance between robustness and efficiency.
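The distance-plus-weight idea behind such depth-weighted kernel estimators can be illustrated with a plain Nadaraya-Watson smoother in which each observation carries an extra robustness weight. This is only a schematic stand-in for the Stahel-Donoho depth weights described above, and all data are invented:

```python
import math

def kernel_smooth(xs, ys, x0, bandwidth=0.5, robust_weights=None):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel.

    robust_weights (optional) play the role of the depth-based
    weights in the Stahel-Donoho idea: 'deep' (typical) points get
    weight near 1, outliers get weight near 0.
    """
    if robust_weights is None:
        robust_weights = [1.0] * len(xs)
    num = den = 0.0
    for x, y, w in zip(xs, ys, robust_weights):
        k = math.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
        num += w * k * y
        den += w * k
    return num / den

# Invented fixed design points on a sine curve, with one gross outlier.
xs = [i / 10 for i in range(21)]
ys = [math.sin(x) for x in xs]
ys[10] = 25.0                      # outlier at x = 1.0
w = [1.0] * 21
w[10] = 0.0                        # robustness weight kills the outlier

naive = kernel_smooth(xs, ys, 1.0)
robust = kernel_smooth(xs, ys, 1.0, robust_weights=w)
# robust lands near sin(1.0); naive is dragged upward by the outlier.
print(round(naive, 2), round(robust, 2))
```

Downweighting by a data-depth measure, rather than zeroing a known outlier by hand, is what gives the actual estimator its high breakdown point.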
Fatigue design of a cellular phone folder using regression model-based multi-objective optimization
Kim, Young Gyun; Lee, Jongsoo
2016-08-01
In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.
Agha, Salah R; Alnahhal, Mohammed J
2012-11-01
The current study investigates the possibility of obtaining the anthropometric dimensions, critical to school furniture design, without measuring all of them. The study first selects some anthropometric dimensions that are easy to measure. Two methods are then used to check if these easy-to-measure dimensions can predict the dimensions critical to the furniture design. These methods are multiple linear regression and neural networks. Each dimension that is deemed necessary to ergonomically design school furniture is expressed as a function of some other measured anthropometric dimensions. Results show that out of the five dimensions needed for chair design, four can be related to other dimensions that can be measured while children are standing. Therefore, the method suggested here would definitely save time and effort and avoid the difficulty of dealing with students while measuring these dimensions. In general, it was found that neural networks perform better than multiple linear regression in the current study.
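The regression half of the approach described above amounts to expressing a hard-to-measure dimension as a function of an easy-to-measure one. A minimal closed-form least-squares sketch, with invented stature and popliteal-height values (not the study's data):

```python
def fit_simple_ols(xs, ys):
    """Closed-form least squares for y ≈ a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical data: stature (cm) vs popliteal height (cm), roughly
# following the rule-of-thumb popliteal ≈ 0.25 * stature.
stature = [120, 125, 130, 135, 140, 145, 150]
popliteal = [30.5, 31.0, 32.8, 33.9, 34.6, 36.4, 37.3]

a, b = fit_simple_ols(stature, popliteal)
predicted = a + b * 142            # predict for an unmeasured child
print(round(predicted, 1))         # → 35.4
```

The study's actual models use several predictors (and neural networks), but each one reduces to this pattern: measure the easy dimensions, predict the hard ones.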
Lipsey, Mark W.; Weiland, Christina; Yoshikawa, Hirokazu; Wilson, Sandra Jo; Hofer, Kerry G.
2015-01-01
Much of the currently available evidence on the causal effects of public prekindergarten programs on school readiness outcomes comes from studies that use a regression-discontinuity design (RDD) with the age cutoff to enter a program in a given year as the basis for assignment to treatment and control conditions. Because the RDD has high internal…
Kleijnen, J.P.C.
1995-01-01
This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D.
2009-01-01
This paper introduces a generalization of the regression-discontinuity design (RDD). Traditionally, RDD is considered in a two-dimensional framework, with a single assignment variable and cutoff. Treatment effects are measured at a single location along the assignment variable. However, this represents a specialized (and straight-forward)…
Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D.
2012-01-01
In a traditional regression-discontinuity design (RDD), units are assigned to treatment and comparison conditions solely on the basis of a single cutoff score on a continuous assignment variable. The discontinuity in the functional form of the outcome at the cutoff represents the treatment effect, or the average treatment effect at the cutoff.…
Ulrich, David; Parkhouse, Bonnie L.
1982-01-01
An alumni-based model is proposed as an alternative to sports management curriculum design procedures. The model relies on the assessment of curriculum by sport management alumni and uses performance ratings of employers and measures of satisfaction by alumni in a regression model to identify curriculum leading to increased work performance and…
Directory of Open Access Journals (Sweden)
Rachna Aggarwal
2014-12-01
Full Text Available This paper presents a Reliability Based Design Optimization (RBDO) model to deal with uncertainties involved in the concrete mix design process. The optimization problem is formulated in such a way that probabilistic concrete mix input parameters showing random characteristics are determined by minimizing the cost of concrete, subject to a concrete compressive strength constraint for a given target reliability. Linear and quadratic models based on Ordinary Least Square Regression (OLSR), Traditional Ridge Regression (TRR) and Generalized Ridge Regression (GRR) techniques have been explored to select the best model to explicitly represent compressive strength of concrete. The RBDO model is solved by the Sequential Optimization and Reliability Assessment (SORA) method using the fully quadratic GRR model. Optimization results for a wide range of target compressive strengths and reliability levels of 0.90, 0.95 and 0.99 have been reported. Also, safety-factor-based Deterministic Design Optimization (DDO) designs for each case are obtained. It has been observed that deterministic optimal designs are cost effective, but the proposed RBDO model gives improved design performance.
Thrust estimator design based on least squares support vector regression machine
Institute of Scientific and Technical Information of China (English)
ZHAO Yong-ping; SUN Jian-guo
2010-01-01
In order to realize direct thrust control instead of traditional sensor-based control for aero-engines, it is indispensable to design a thrust estimator with high accuracy, so a scheme for thrust estimator design based on the least squares support vector regression machine is proposed to solve this problem. Numerical simulations confirm the effectiveness of the presented scheme. During the process of estimator design, a wrapper criterion is proposed to select input variables for the estimator; it not only reduces the computational complexity but also enhances the generalization performance.
Product Design Time Forecasting by Kernel-Based Regression with Gaussian Distribution Weights
Directory of Open Access Journals (Sweden)
Zhi-Gen Shang
2016-06-01
Full Text Available There exist problems of small samples and heteroscedastic noise in design time forecasts. To solve them, a kernel-based regression with Gaussian distribution weights (GDW-KR) is proposed here. GDW-KR maintains a Gaussian distribution over weight vectors for the regression. It is applied to seek the least informative distribution from those that keep the target value within the confidence interval of the forecast value. GDW-KR inherits the benefits of Gaussian margin machines. By assuming a Gaussian distribution over weight vectors, it can simultaneously offer a point forecast and its confidence interval, thus providing more information about product design time. Our experiments with real examples verify the effectiveness and flexibility of GDW-KR.
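As a rough illustration of a kernel regression that reports both a point forecast and an interval, the sketch below uses Gaussian distance weights and a kernel-weighted residual spread. It is not the GDW-KR algorithm itself (which places the Gaussian over weight vectors, not over distances), and the complexity/time data are invented:

```python
import math

def kernel_forecast(xs, ys, x0, bandwidth=1.0, z=1.96):
    """Kernel-weighted point forecast plus a crude interval.

    The interval here comes from the kernel-weighted spread of the
    responses around the forecast, only loosely analogous to the
    confidence interval GDW-KR derives from its weight distribution.
    """
    w = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    total = sum(w)
    mean = sum(wi * y for wi, y in zip(w, ys)) / total
    var = sum(wi * (y - mean) ** 2 for wi, y in zip(w, ys)) / total
    half = z * math.sqrt(var)
    return mean, (mean - half, mean + half)

# Hypothetical records: design complexity score vs design time (days).
complexity = [1, 2, 3, 4, 5, 6, 7, 8]
days = [3.0, 4.1, 5.2, 5.8, 7.1, 8.0, 9.2, 9.9]

point, (lo, hi) = kernel_forecast(complexity, days, 4.5)
print(round(point, 1), lo < point < hi)
```

Reporting the interval alongside the point forecast is the practical payoff the abstract emphasizes: a planner sees not just an expected design time but how uncertain it is.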
Institute of Scientific and Technical Information of China (English)
郑力会; 王金凤; 李潇鹏; 张燕; 李都
2008-01-01
In order to optimize the formula of a circulating micro-bubble drilling fluid with a plastic viscosity of 18 mPa·s, orthogonal and uniform experimental design methods were applied, and the plastic viscosities of 36 and 24 groups of agents were tested, respectively. These two experimental design methods show drawbacks: the amount of agent is difficult to determine, and the results are not fully optimized. Therefore, a multiple regression experimental method was used to design the experimental formula. By randomly selecting agents with amounts within the recommended ranges, 17 groups of drilling fluid formulas were designed, and the plastic viscosity of each experimental formula was measured. With plastic viscosity as the objective function, a quadratic regression model was obtained through multiple regression, and its correlation coefficient meets the requirement. Setting target plastic viscosities of 18, 20 and 22 mPa·s and using the trial method, 5 drilling fluid formulas were obtained with accuracies of 0.000 3, 0.000 1 and 0.000 3. Two formulas under each target value were arbitrarily selected for experimental verification; the errors between the theoretical and tested plastic viscosities are less than 5%, confirming that the regression model can be applied to optimizing the plastic viscosity of circulating micro-bubble drilling fluid. Subject to the precision requirements of different drilling fluid formulations and other constraints, the method optimizes the parameters of the circulating micro-bubble drilling fluid.
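The fit-then-invert workflow described above (fit a quadratic regression to viscosity measurements, then solve for the agent amount that hits a target viscosity) can be sketched as follows. The dosage and viscosity numbers are invented, and bisection stands in for the paper's trial method:

```python
def solve3(A, b):
    """Gaussian elimination for a small 3x3 system (no pivoting)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = M[i][i]
        M[i] = [v / p for v in M[i]]
        for j in range(3):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [M[k][3] for k in range(3)]

def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via normal equations."""
    S = lambda p: sum(x ** p for x in xs)
    Sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[S(0), S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
    return solve3(A, [Sy(0), Sy(1), Sy(2)])

def dose_for_target(coef, target, lo=0.0, hi=4.0):
    """Bisection on the fitted curve: find x with predicted y == target.
    Assumes the curve is monotone and brackets the target on [lo, hi]."""
    f = lambda x: coef[0] + coef[1] * x + coef[2] * x * x - target
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical agent dosages (%) and measured plastic viscosities (mPa·s).
dose = [0, 1, 2, 3, 4]
visc = [10.0, 14.5, 20.0, 26.5, 34.0]

coef = fit_quadratic(dose, visc)
x18 = dose_for_target(coef, 18.0)  # dosage predicted to give 18 mPa·s
print(round(x18, 3))               # → 1.657
```

The paper fits the quadratic in several agent amounts at once; the one-variable version above just shows the shape of the calculation.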
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
Thermally induced errors can account for as much as 70% of the dimensional errors on a workpiece. Accurate modeling of these errors is an essential part of error compensation. Based on an analysis of existing approaches to thermal error modeling for machine tools, a new approach using regression orthogonal design is proposed, which combines statistical theory with machine structure, surrounding conditions, engineering judgement, and modeling experience. A complete computation and analysis procedure is given. ...
Linden, Ariel; Adams, John L; Roberts, Nancy
2006-04-01
Although disease management (DM) has been in existence for over a decade, there is still much uncertainty as to its effectiveness in improving health status and reducing medical cost. The main reason is that most programme evaluations typically follow weak observational study designs that are subject to bias, most notably selection bias and regression to the mean. The regression discontinuity (RD) design may be the best alternative to randomized studies for evaluating DM programme effectiveness. The most crucial element of the RD design is its use of a 'cut-off' score on a pre-test measure to determine assignment to intervention or control. A valuable feature of this technique is that the pre-test measure does not have to be the same as the outcome measure, thus maximizing the programme's ability to use research-based practice guidelines, survey instruments and other tools to identify those individuals in greatest need of the programme intervention. Similarly, the cut-off score can be based on clinical understanding of the disease process, empirically derived, or resource-based. In the RD design, programme effectiveness is determined by a change in the pre-post relationship at the cut-off point. While the RD design is uniquely suitable for DM programme evaluation, its success will depend, in large part, on fundamental changes being made in the way DM programmes identify and assign individuals to the programme intervention.
Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro
2016-01-01
The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.
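One simple way to ask whether the difference between two regression models is statistically meaningful, in the spirit of the comparison methodology above, is a paired sign test over per-dataset errors. This is a generic sketch, not the RRegrs procedure, and the RMSE differences below are invented:

```python
from math import comb

def sign_test_p(diffs):
    """Two-sided sign test: are paired error differences
    systematically on one side of zero?"""
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(1 for d in nonzero if d > 0)
    tail = min(k, n - k)
    p = sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(1.0, 2 * p)

# Hypothetical per-dataset RMSE differences (model A minus model B):
# A is worse on 9 of 10 datasets.
diffs = [0.12, 0.08, 0.15, 0.03, -0.02, 0.05, 0.09, 0.11, 0.04, 0.07]
p = sign_test_p(diffs)
print(round(p, 4))  # → 0.0215
```

A small p-value says the win pattern is unlikely under "the models are equivalent", which is exactly the kind of statement the abstract argues should accompany any claim that one model is best.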
Directory of Open Access Journals (Sweden)
Carlos Fernandez-Lozano
2016-12-01
Full Text Available The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithms. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.
Estimating Unbiased Treatment Effects in Education Using a Regression Discontinuity Design
Directory of Open Access Journals (Sweden)
William C. Smith
2014-08-01
Full Text Available The ability of regression discontinuity (RD) designs to provide an unbiased treatment effect while overcoming the ethical concerns that plague randomized control trials (RCTs) makes RD a valuable and useful approach in education evaluation. RD is the only quasi-experimental approach explicitly recognized by the Institute of Education Sciences as meeting the prerequisites of a causal relationship. Unfortunately, the statistical complexity of the RD design has limited its application in education research. This article provides a less technical introduction to RD for education researchers and practitioners. Using visual analysis to aid conceptual understanding, the article walks readers through the essential steps of a sharp RD design using hypothetical, but realistic, district intervention data and provides additional resources for further exploration.
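The essential computation of a sharp RD design, fitting a line on each side of the cutoff and taking the gap at the cutoff as the treatment effect, can be sketched as follows (all data hypothetical):

```python
def ols(xs, ys):
    """Closed-form simple least squares: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def sharp_rd_effect(scores, outcomes, cutoff):
    """Treatment effect at the cutoff: difference between the two
    side-specific linear fits, evaluated at score == cutoff."""
    left = [(s - cutoff, y) for s, y in zip(scores, outcomes) if s < cutoff]
    right = [(s - cutoff, y) for s, y in zip(scores, outcomes) if s >= cutoff]
    a_left, _ = ols(*zip(*left))
    a_right, _ = ols(*zip(*right))
    return a_right - a_left

# Hypothetical district data: a test score of 50 assigns the intervention,
# and the intervention shifts the outcome up by 5 points.
scores = list(range(40, 61))
outcomes = [2 + 0.5 * s + (5 if s >= 50 else 0) for s in scores]
effect = sharp_rd_effect(scores, outcomes, 50)
print(round(effect, 2))  # → 5.0
```

Real RD analyses add bandwidth selection, local polynomial fits, and diagnostics for manipulation of the assignment score; the gap-at-the-cutoff logic above is the common core.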
Multi-Objective Optimization Algorithms Design based on Support Vector Regression Metamodeling
Directory of Open Access Journals (Sweden)
Qi Zhang
2013-11-01
Full Text Available In order to solve multi-objective optimization problems in complex engineering, this paper presents an NSGA-II multi-objective optimization algorithm based on support vector regression metamodeling. Appropriate design parameter samples are selected by experimental design theory, and the response samples are obtained from experiments or numerical simulations. The SVR method is used to establish metamodels of the objective performance functions and constraints, and the original optimization problem is reconstructed. The reconstructed metamodels are solved by the NSGA-II algorithm, and the structural optimization of a microwave power divider is taken as an example to illustrate the proposed methodology and solve the multi-objective optimization problem. The results show that this methodology is feasible and highly effective, and thus it can be used in the optimal design of engineering fields.
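The selection step at the heart of NSGA-II relies on Pareto dominance. A minimal sketch of extracting the non-dominated front from a set of candidate designs (two objectives, both minimized; the values are invented, not metamodel outputs):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and
    strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical designs evaluated on (cost, loss) — both to be minimized.
designs = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0),
           (4.0, 4.0), (5.0, 5.0), (6.0, 2.0)]
front = pareto_front(designs)
print(front)
```

NSGA-II repeats this sorting over successive fronts and adds crowding-distance selection and genetic operators; running it on cheap SVR metamodels instead of the real simulations is what makes the paper's approach affordable.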
Directory of Open Access Journals (Sweden)
A.Muthukumaravel
2011-08-01
Full Text Available This paper presents an implementation of the locally weighted projection regression (LWPR) network method for concurrency control while developing the dial of a fork using Autodesk Inventor 2008. The LWPR learns the objects and the type of transactions to be performed based on which node in the output layer of the network exceeds a threshold value. Learning stops once all the objects have been exposed to LWPR. During testing, performance metrics are analyzed. We have attempted to use LWPR for storing lock information when multiple users are working on Computer-Aided Design (CAD) models. The memory requirements of the proposed method are minimal in processing locks during transactions.
Analysis of Wind Tunnel Polar Replicates Using the Modern Design of Experiments
Deloach, Richard; Micol, John R.
2010-01-01
The role of variance in a Modern Design of Experiments analysis of wind tunnel data is reviewed, with distinctions made between explained and unexplained variance. The partitioning of unexplained variance into systematic and random components is illustrated, with examples of the elusive systematic component provided for various types of real-world tests. The importance of detecting and defending against systematic unexplained variance in wind tunnel testing is discussed, and the random and systematic components of unexplained variance are examined for a representative wind tunnel data set acquired in a test in which a missile is used as a test article. The adverse impact of correlated (non-independent) experimental errors is described, and recommendations are offered for replication strategies that facilitate the quantification of random and systematic unexplained variance.
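The explained/unexplained split discussed above corresponds to the usual sum-of-squares decomposition around a fitted model. A minimal sketch for a straight-line fit, with invented angle-of-attack and drag values (not the missile test data):

```python
def partition_variance(xs, ys):
    """Split the total sum of squares into explained (model) and
    unexplained (residual) parts for a straight-line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    fitted = [a + b * x for x in xs]
    ss_total = sum((y - my) ** 2 for y in ys)
    ss_model = sum((f - my) ** 2 for f in fitted)
    ss_resid = sum((y - f) ** 2 for y, f in zip(ys, fitted))
    return ss_total, ss_model, ss_resid

# Hypothetical polar data: angle of attack (deg) vs drag coefficient.
alpha = [0, 2, 4, 6, 8]
cd = [0.020, 0.024, 0.029, 0.033, 0.039]

sst, ssm, sse = partition_variance(alpha, cd)
print(round(ssm / sst, 3))  # fraction of variance the model explains
```

The residual term `sse` is the "unexplained variance" of the article; replicates are what let an analyst split it further into a random part and a systematic part, which a single fit like this cannot do.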
Dougherty, Andrew W.
Metal oxides are a staple of the sensor industry. The combination of their sensitivity to a number of gases and the electrical nature of their sensing mechanism makes them particularly attractive in solid state devices. The high temperature stability of the ceramic material also makes them ideal for detecting combustion byproducts, where exhaust temperatures can be high. However, problems do exist with metal oxide sensors. They are not very selective, as they all tend to be sensitive to a number of reduction and oxidation reactions on the oxide's surface. This makes arrays with large numbers of sensors interesting to study as a method for introducing orthogonality to the system. Also, the sensors tend to suffer from long term drift for a number of reasons. In this thesis I will develop a system for intelligently modeling metal oxide sensors and determining their suitability for use in large arrays designed to analyze exhaust gas streams. It will introduce prior knowledge of the metal oxide sensors' response mechanisms in order to produce a response function for each sensor from sparse training data. The system will use the same technique to model and remove any long term drift from the sensor response. It will also provide an efficient means for determining the orthogonality of the sensors to determine whether they are useful in gas sensing arrays. The system is based on least squares support vector regression using the reciprocal kernel. The reciprocal kernel is introduced along with a method of optimizing the free parameters of the reciprocal kernel support vector machine. The reciprocal kernel is shown to be simpler and to perform better than an earlier kernel, the modified reciprocal kernel. Least squares support vector regression is chosen as it uses all of the training points, and an emphasis was placed throughout this research on extracting the maximum information from very sparse data. The reciprocal kernel is shown to be effective in modeling the sensor
Institute of Scientific and Technical Information of China (English)
HADJMOHAMMADI,M.R.; KAMEL,K.
2008-01-01
The chemometrics approach was applied to the optimization of the separation of quinolines in micellar liquid chromatography, investigated by means of multivariate analysis. The factors considered were the concentration of sodium dodecyl sulfate (SDS), the organic modifier concentration and the length of its alkyl chain, and the pH of the mobile phase. The experiments were performed according to a face-centered cube response surface experimental design. In order to optimize the separation, a Pareto-optimality method was employed. The models were verified, as good agreement was observed between the predicted and experimental values of the chromatographic response function under the optimal conditions. The obtained regression models were characterized by both descriptive and predictive ability (R2 ≥ 0.97 and R2cv ≥ 0.92) and allowed the chromatographic separation of the quinolines with good resolution and a total analysis time of 50 min.
Louie, Josephine; Rhoads, Christopher; Mark, June
2016-01-01
Interest in the regression discontinuity (RD) design as an alternative to randomized control trials (RCTs) has grown in recent years. There is little practical guidance, however, on conditions that would lead to a successful RD evaluation or the utility of studies with underpowered RD designs. This article describes the use of RD design to…
Smith, Justin D.; Handler, Leonard; Nash, Michael R.
2010-01-01
The Therapeutic Assessment (TA) model is a relatively new treatment approach that fuses assessment and psychotherapy. The study examines the efficacy of this model with preadolescent boys with oppositional defiant disorder and their families. A replicated single-case time-series design with daily measures is used to assess the effects of TA and to…
The Monitored Atherosclerosis Regression Study (MARS). Design, methods and baseline results.
Cashin-Hemphill, L; Kramsch, D M; Azen, S P; DeMets, D; DeBoer, L W; Hwang, I; Vailas, L; Hirsch, L J; Mack, W J; DeBoer, L
1992-10-23
The Monitored Atherosclerosis Regression Study (MARS) was designed to evaluate the effect of cholesterol lowering by monotherapy with an HMG-CoA reductase inhibitor on progression/regression of atherosclerosis in subjects with angiographically documented coronary artery disease. The purpose of this paper is to present the design, methods, and baseline results of MARS. MARS is a prospective, randomized, double-blind, placebo-controlled trial with baseline, 2-year, and 4-year coronary angiography as well as carotid, brachial, and popliteal ultrasonography. Outpatient clinics at the University of Southern California School of Medicine and the University of Wisconsin School of Medicine. Two hundred seventy participants of both sexes were recruited directly from the cardiac catheterization laboratory or by chart review of patients having undergone cardiac catheterization in the past. Subjects were considered eligible if they had angiographically demonstrable atherosclerosis in 2 or more coronary artery segments, unaltered by angioplasty, with at least 1 lesion > or = 50% but or = 500 mg/dL; premenopausal females; uncontrolled hypertension; diabetes mellitus; untreated thyroid disease; liver dysfunction; renal insufficiency; congestive heart failure; major arrhythmia; left ventricular conduction defects; or any life-threatening disease. Subjects were placed on a low-fat, low-cholesterol diet and either 40 mg b.i.d. lovastatin (Mevacor) or placebo. Randomization was stratified by sex, smoking status, and TC. Per-subject average change in %S as determined by quantitative coronary angiography (QCA) is the primary angiographic endpoint. Secondary endpoints are: categorical analyses of the proportion of subjects with progression; human panel reading of coronary angiograms; and change in minimum lumen diameter (MLD) in mm by QCA. Carotid, brachial, and popliteal ultrasonography is also being performed. The subjects randomized into MARS are 91.5% male with an age range of 37 to
Directory of Open Access Journals (Sweden)
Xian-Xia Zhang
2013-01-01
Full Text Available This paper presents a reference function based 3D FLC design methodology using support vector regression (SVR) learning. The concept of a reference function is introduced to the 3D FLC for the generation of 3D membership functions (MFs), which enhances the capability of the 3D FLC to cope with more kinds of MFs. The nonlinear mathematical expression of the reference function based 3D FLC is derived, and spatial fuzzy basis functions are defined. By relating the spatial fuzzy basis functions of a 3D FLC to the kernel functions of an SVR, an equivalence relationship between a 3D FLC and an SVR is established. Therefore, a 3D FLC can be constructed using the learned results of an SVR. Furthermore, the universal approximation capability of the proposed 3D fuzzy system is proven in terms of the finite covering theorem. Finally, the proposed method is applied to a catalytic packed-bed reactor, and simulation results have verified its effectiveness.
Rational design of human DNA ligase inhibitors that target cellular DNA replication and repair.
Chen, Xi; Zhong, Shijun; Zhu, Xiao; Dziegielewska, Barbara; Ellenberger, Tom; Wilson, Gerald M; MacKerell, Alexander D; Tomkinson, Alan E
2008-05-01
Based on the crystal structure of human DNA ligase I complexed with nicked DNA, computer-aided drug design was used to identify compounds in a database of 1.5 million commercially available low molecular weight chemicals that were predicted to bind to a DNA-binding pocket within the DNA-binding domain of DNA ligase I, thereby inhibiting DNA joining. Ten of 192 candidates specifically inhibited purified human DNA ligase I. Notably, a subset of these compounds was also active against the other human DNA ligases. Three compounds that differed in their specificity for the three human DNA ligases were analyzed further. L82 inhibited DNA ligase I, L67 inhibited DNA ligases I and III, and L189 inhibited DNA ligases I, III, and IV in DNA joining assays with purified proteins and in cell extract assays of DNA replication, base excision repair, and nonhomologous end-joining. L67 and L189 are simple competitive inhibitors with respect to nicked DNA, whereas L82 is an uncompetitive inhibitor that stabilized complex formation between DNA ligase I and nicked DNA. In cell culture assays, L82 was cytostatic whereas L67 and L189 were cytotoxic. Concordant with their ability to inhibit DNA repair in vitro, subtoxic concentrations of L67 and L189 significantly increased the cytotoxicity of DNA-damaging agents. Interestingly, the ligase inhibitors specifically sensitized cancer cells to DNA damage. Thus, these novel human DNA ligase inhibitors will not only provide insights into the cellular function of these enzymes but also serve as lead compounds for the development of anticancer agents.
Design and Evaluation of Dynamic Replication Strategies for a High-Performance Data Grid
Institute of Scientific and Technical Information of China (English)
KavithaRanganathan; IanFoster
2001-01-01
Physics experiments that generate large amounts of data need to share them with researchers around the world. High-performance grids facilitate the distribution of such data to geographically remote places. Dynamic replication can be used as a technique to reduce bandwidth consumption and access latency when accessing these huge amounts of data. We describe a simulation framework that we have developed to model a grid scenario, which enables comparative studies of alternative dynamic replication strategies. We present preliminary results obtained with this simulator, in which we evaluate the performance of six different replication strategies for three different kinds of access patterns. The simulation results show that the best strategy yields significant savings in latency and bandwidth consumption if the access patterns contain a moderate amount of geographical locality.
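A toy sketch of the kind of comparison such a simulator enables: under access patterns with locality, replicating a file on first access slashes remote fetches. The file count, locality level and the two strategies below are illustrative stand-ins, not the paper's six strategies and three access patterns.

```python
import random

random.seed(1)
FILES = [f"f{i}" for i in range(100)]

def run(strategy, n_requests=2000, locality=0.8):
    cache, remote_fetches = set(), 0
    last = random.choice(FILES)
    for _ in range(n_requests):
        # temporal/geographical locality: often re-request a recent file
        f = last if random.random() < locality else random.choice(FILES)
        if f not in cache:
            remote_fetches += 1              # pay wide-area latency + bandwidth
            if strategy == "replicate_on_access":
                cache.add(f)                 # keep a local replica from now on
        last = f
    return remote_fetches

no_rep = run("none")
with_rep = run("replicate_on_access")
print(no_rep, with_rep)   # replication cuts remote fetches dramatically
```

With no replication every request pays the wide-area cost; with replicate-on-access the remote-fetch count is bounded by the number of distinct files touched.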
Choice of second-order response surface designs for logistic and Poisson regression models
Johnson, Rachel T.; Montgomery, Douglas C.
2009-01-01
Response surface methodology is widely used for process development and optimisation, product design, and as part of the modern framework for robust parameter design. For normally distributed responses, the standard second-order designs such as the central composite design and the Box-Behnken design have relatively high D and G efficiencies. In situations where these designs are inappropriate, standard computer software can be used to construct D-optimal and I-optimal designs for fitting s...
Sørensen, By Ole H
2016-10-01
Organizational-level occupational health interventions have great potential to improve employees' health and well-being. However, they often compare unfavourably to individual-level interventions. This calls for improving methods for designing, implementing and evaluating organizational interventions. This paper presents and discusses the regression discontinuity design because, like the randomized control trial, it is a strong summative experimental design, but it typically fits organizational-level interventions better. The paper explores advantages and disadvantages of a regression discontinuity design with an embedded randomized control trial. It provides an example from an intervention study focusing on reducing sickness absence in 196 preschools. The paper demonstrates that such a design fits the organizational context, because it allows management to focus on organizations or workgroups with the most salient problems. In addition, organizations may accept an embedded randomized design because the organizations or groups with most salient needs receive obligatory treatment as part of the regression discontinuity design. Copyright © 2016 John Wiley & Sons, Ltd.
Analysis of designed experiments by stabilised PLS Regression and jack-knifing
DEFF Research Database (Denmark)
Martens, Harald; Høy, M.; Westad, F.
2001-01-01
Pragmatic, visually oriented methods for assessing and optimising bi-linear regression models are described and applied to PLS Regression (PLSR) analysis of multi-response data from controlled experiments. The paper outlines some ways to stabilise the PLSR method to extend its range...... the reliability of the linear and bi-linear model parameter estimates. The paper illustrates how the obtained PLSR "significance" probabilities are similar to those from conventional factorial ANOVA, but the PLSR is shown to give important additional overview plots of the main relevant structures in the multi...
Leake, Meg; Lesik, Sally A.
2007-01-01
This paper presents an illustration of how the regression-discontinuity design can be used to obtain an unbiased estimate of the causal effect of participating in a university remedial English program on first-year grade point average. First-year grade point averages were collected for 197 first-time, full-time university students, along with the…
Melguizo, Tatiana; Bos, Johannes M.; Ngo, Federick; Mills, Nicholas; Prather, George
2016-01-01
This study evaluates the effectiveness of math placement policies for entering community college students on these students' academic success in math. We estimate the impact of placement decisions by using a discrete-time survival model within a regression discontinuity framework. The primary conclusion that emerges is that initial placement in a…
Maas, Iris L; Nolte, Sandra; Walter, Otto B; Berger, Thomas; Hautzinger, Martin; Hohagen, Fritz; Lutz, Wolfgang; Meyer, Björn; Schröder, Johanna; Späth, Christina; Klein, Jan Philipp; Moritz, Steffen; Rose, Matthias
2017-02-01
To compare treatment effect estimates obtained from a regression discontinuity (RD) design with results from an actual randomized controlled trial (RCT). Data from an RCT (EVIDENT), which studied the effect of an Internet intervention on depressive symptoms measured with the Patient Health Questionnaire (PHQ-9), were used to perform an RD analysis, in which treatment allocation was determined by a cutoff value at baseline (PHQ-9 = 10). A linear regression model was fitted to the data, selecting participants above the cutoff who had received the intervention (n = 317) and control participants below the cutoff (n = 187). Outcome was PHQ-9 sum score 12 weeks after baseline. Robustness of the effect estimate was studied; the estimate was compared with the RCT treatment effect. The final regression model showed a regression coefficient of -2.29 [95% confidence interval (CI): -3.72 to -.85] compared with a treatment effect found in the RCT of -1.57 (95% CI: -2.07 to -1.07). Although the estimates obtained from two designs are not equal, their confidence intervals overlap, suggesting that an RD design can be a valid alternative for RCTs. This finding is particularly important for situations where an RCT may not be feasible or ethical as is often the case in clinical research settings. Copyright © 2016 Elsevier Inc. All rights reserved.
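A hedged numeric sketch of the RD analysis just described: treatment is assigned by a baseline cutoff (10 here, mimicking PHQ-9 ≥ 10), and the treatment effect is the jump at the cutoff estimated from a linear model in the centred running variable. The simulated effect of -2.0 and all other numbers are invented for illustration, not the EVIDENT data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
baseline = rng.uniform(0, 20, n)               # running variable (baseline score)
treated = (baseline >= 10).astype(float)       # sharp cutoff assignment
outcome = 0.8 * baseline - 2.0 * treated + rng.normal(0, 1.5, n)

# Fit y = b0 + b1 * (baseline - cutoff) + tau * treated; tau is the RD effect,
# i.e. the discontinuity in the regression line at the cutoff.
X = np.column_stack([np.ones(n), baseline - 10, treated])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
tau = coef[2]
print(round(tau, 2))   # estimate near the simulated effect of -2.0
```

Because the assignment rule is fully known (a deterministic function of baseline), conditioning on the running variable identifies the effect without randomization, which is the design's appeal in settings where an RCT is infeasible.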
Institute of Scientific and Technical Information of China (English)
Saeid Shokri; Mohammad Taghi Sadeghi; Mahdi Ahmadi Marvast; Shankar Narasimhan
2015-01-01
A novel method for developing a reliable data-driven soft sensor to improve the prediction accuracy of sulfur content in the hydrodesulfurization (HDS) process was proposed. To this end, an integrated approach using support vector regression (SVR) based on wavelet transform (WT) and principal component analysis (PCA) was used. Experimental data from the HDS setup were employed to validate the proposed model. The results reveal that the integrated WT-PCA with SVR model was able to increase the prediction accuracy of the SVR model. Implementation of the proposed model delivers the most satisfactory predictive performance (EAARE = 0.058 and R2 = 0.97) in comparison with SVR. The obtained results indicate that the proposed model is more reliable and more precise than multiple linear regression (MLR), SVR and PCA-SVR.
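A hedged sketch of the PCA → SVR soft-sensor stage (the wavelet-transform preprocessing is omitted, and the synthetic latent-factor data below merely stand in for HDS process measurements):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Correlated "process measurements" driven by a few latent factors, as is
# typical of plant data -- which is why PCA compression helps before SVR.
rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 3))                         # latent process factors
X = Z @ rng.normal(size=(3, 10)) + 0.05 * rng.normal(size=(300, 10))
y = 2.0 * Z[:, 0] + np.sin(Z[:, 1]) + 0.1 * rng.normal(size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR(C=10.0))
r2 = r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
print(round(r2, 2))   # the 3 principal components recover the latent factors
```

Compressing the correlated inputs to a few components before the kernel regression is the design choice that makes the soft sensor robust to redundant, noisy measurements.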
Price Sensitivity of Demand for Prescription Drugs: Exploiting a Regression Kink Design
DEFF Research Database (Denmark)
Simonsen, Marianne; Skipper, Lars; Skipper, Niels
This paper investigates the price sensitivity of demand for prescription drugs using drug purchase records for a 20% random sample of the Danish population. We identify price responsiveness by exploiting exogenous variation in prices caused by kinked reimbursement schemes and implement a regression ...... education and income are, however, more responsive to the price. Also, essential drugs that prevent deterioration in health and prolong life have lower associated average price sensitivity....
Retro-priming, priming, and double testing: psi and replication in a test-retest design.
Rabeyron, Thomas
2014-01-01
Numerous experiments have been conducted in recent years on anomalous retroactive influences on cognition and affect (Bem, 2010), yet more data are needed to understand these processes precisely. For this purpose, we carried out an initial retro-priming study in which the response times of 162 participants were measured (Rabeyron and Watt, 2010). In the current paper, we present the results of a second study in which we selected those participants who demonstrated the strongest retro-priming effect during the first study, in order to see if we could replicate this effect and thereby select high-scoring participants. An additional objective was to try to find correlations between psychological characteristics (anomalous experiences, mental health, mental boundaries, trauma, negative life events) and retro-priming results for the high-scoring participants. The retro-priming effect was also compared with performance on a classical priming task. Twenty-eight participants returned to the laboratory for this new study. The results for the whole group on the retro-priming task were negative and non-significant (es = -0.25, ns), while the results on the priming task were significant (es = 0.63), with no significant retro-priming results for any of the sub-groups (students, male, female). Ten participants were found to have positive results in the two retro-priming studies, but no specific psychological variables were found for these participants compared to the others. Several hypotheses are considered in explaining these results, and the author provides some final thoughts concerning psi and replicability.
Directory of Open Access Journals (Sweden)
Chang Li
2014-01-01
Full Text Available Much of the previous work on D-optimal design for regression models with correlated errors focused on polynomial models with a single predictor variable, in large part because of the intractability of an analytic solution. In this paper, we present a modified, improved simulated annealing algorithm, providing practical approaches to the specification of the annealing cooling parameters, thresholds, and search neighborhoods for the perturbation scheme, which finds approximate D-optimal designs for 2-way and 3-way polynomial regression for a variety of specific correlation structures with a given correlation coefficient. Results in each correlated-errors case are compared with the traditional simulated annealing algorithm, that is, the SA algorithm without our improvement. Our improved simulated annealing results had generally higher D-efficiency than the traditional simulated annealing algorithm, especially when the correlation parameter was well away from 0.
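A minimal simulated-annealing sketch of the underlying optimization (independent errors, one predictor, and illustrative cooling/perturbation settings rather than the paper's tuned scheme): place 6 observation points in [-1, 1] for the quadratic model y = b0 + b1·x + b2·x² so as to maximize det(X'X).

```python
import math, random

random.seed(7)

def det_info(pts):
    # det(X'X) for the quadratic model, built from the design moments sum(x^k)
    m = [sum(p ** k for p in pts) for k in range(5)]
    M = [[m[0], m[1], m[2]], [m[1], m[2], m[3]], [m[2], m[3], m[4]]]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

pts = [random.uniform(-1, 1) for _ in range(6)]
cur, T = det_info(pts), 1.0
for _ in range(20000):
    cand = pts[:]
    i = random.randrange(6)                      # perturb one design point
    cand[i] = min(1.0, max(-1.0, cand[i] + random.gauss(0, 0.1)))
    d = det_info(cand)
    # Accept improvements always; accept worse designs with Boltzmann probability
    if d > cur or random.random() < math.exp((d - cur) / T):
        pts, cur = cand, d
    T = max(1e-3, T * 0.9995)                    # geometric cooling with a floor

# The exact D-optimal 6-point design puts two points at each of -1, 0, 1
# (det = 32); the annealed design should end up close to it.
print(sorted(round(p, 2) for p in pts), round(cur, 1))
```

The annealing temperature lets the search cross the shallow ridges between locally optimal point allocations, which a pure hill climber can get stuck on.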
Optimal Design for Two-Level Random Assignment and Regression Discontinuity Studies
Rhoads, Christopher H.; Dye, Charles
2016-01-01
An important concern when planning research studies is to obtain maximum precision of an estimate of a treatment effect given a budget constraint. When research designs have a "multilevel" or "hierarchical" structure changes in sample size at different levels of the design will impact precision differently. Furthermore, there…
Global Optimization Problems in Optimal Design of Experiments in Regression Models
Boer, E.P.J.; Hendrix, E.M.T.
2000-01-01
In this paper we show that optimal design of experiments, a specific topic in statistics, constitutes a challenging application field for global optimization. This paper shows how various structures in optimal design of experiments problems determine the structure of corresponding challenging global
Braess, Dietrich; Dette, Holger
2004-01-01
We consider maximin and Bayesian D -optimal designs for nonlinear regression models. The maximin criterion requires the specification of a region for the nonlinear parameters in the model, while the Bayesian optimality criterion assumes that a prior distribution for these parameters is available. It was observed empirically by many authors that an increase of uncertainty in the prior information (i.e. a larger range for the parameter space in the maximin criterion or a larger variance of the ...
A least angle regression method for fMRI activation detection in phase-encoded experimental designs.
Li, Xingfeng; Coyle, Damien; Maguire, Liam; McGinnity, Thomas M; Watson, David R; Benali, Habib
2010-10-01
This paper presents a new regression method for functional magnetic resonance imaging (fMRI) activation detection. Unlike general linear models (GLM), this method selects models for activation detection adaptively, which overcomes the limitation of requiring a predefined design matrix in GLM. This limitation arises because GLM designs assume that the response of the neuron populations will be the same for the same stimuli, which is often not the case. In this work, the fMRI hemodynamic response model is selected from a series of models constructed online by the least angle regression (LARS) method. The slow drift terms in the design matrix for activation detection are determined adaptively according to the fMRI response in order to achieve the best fit for each fMRI response. The LARS method is then applied along with the Moore-Penrose pseudoinverse (PINV) and fast orthogonal search (FOS) algorithms for implementation of the selected model to include the drift effects in the design matrix. Comparisons with GLM were made using 11 normal subjects to test the superiority of the method. We found that GLM with a fixed design matrix was inferior to the described LARS method for fMRI activation detection in a phase-encoded experimental design. In addition, the proposed method has the advantage of increasing the degrees of freedom in the regression analysis. We conclude that the method described provides a new and novel approach to the detection of fMRI activation that is better than GLM-based analyses.
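A hedged sketch of the model-selection idea: LARS adds regressors in order of their correlation with the residual instead of fixing the design in advance. The "task" regressor, drift term and noise column below are synthetic stand-ins, not an actual fMRI design.

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(3)
n = 200
t = np.arange(n)
task = np.sin(2 * np.pi * t / 40)        # stand-in stimulus-response regressor
drift = t / n                            # slow scanner drift term
noise_col = rng.normal(size=n)           # irrelevant candidate regressor
X = np.column_stack([task, drift, noise_col])
y = 3.0 * task + 2.0 * drift + rng.normal(0, 0.3, n)

# LARS enters variables one at a time by residual correlation; limiting the
# active set to 2 keeps the irrelevant column out of the model.
lars = Lars(n_nonzero_coefs=2).fit(X, y)
print(lars.coef_)   # task and drift selected; the noise column stays at zero
```

Selecting the drift terms from the data in this way, rather than prespecifying them, is what lets each voxel's fit adapt to its own slow trends.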
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
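A distribution-free prediction interval of the kind described above can be sketched in split-conformal style: the interval half-width is the 95% quantile of held-out absolute residuals, so no parametric error law is assumed. The linear data-generating process below is illustrative, not an example from the text.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = rng.uniform(0, 10, n)
y = 1.5 * x + rng.standard_t(df=3, size=n)      # heavy-tailed, non-normal errors

fit_idx, cal_idx = np.arange(600), np.arange(600, n)
b1, b0 = np.polyfit(x[fit_idx], y[fit_idx], 1)  # slope, intercept on fit split
resid = np.abs(y[cal_idx] - (b0 + b1 * x[cal_idx]))
q = np.quantile(resid, 0.95)                    # calibrated 95% half-width

x_new = 4.0
print(round(b0 + b1 * x_new - q, 2), round(b0 + b1 * x_new + q, 2))
```

Because the width comes from an independent calibration split, the interval keeps close to its nominal coverage even though the errors here are Student-t rather than normal.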
Mapping of multiple quantitative trait loci by simple regression in half-sib designs
Koning, de D.J.; Schulman, N.; Elo, K.; Moisio, S.; Kinos, R.; Vilkki, J.; Maki-Tanila, A.
2001-01-01
Detection of QTL in outbred half-sib family structures has mainly been based on interval mapping of single QTL on individual chromosomes. Methods to account for linked and unlinked QTL have been developed, but most of them are only applicable in designs with inbred species or pose great demands on c
Institute of Scientific and Technical Information of China (English)
CHENG Jia; ZHU Yu; JI Linhong
2012-01-01
The geometry of an inductively coupled plasma (ICP) etcher is usually considered an important factor in determining both plasma and process uniformity over a large wafer. During the past few decades, these parameters were determined by the "trial and error" method, wasting time and funds. In this paper, a new approach of regression orthogonal design with plasma simulation experiments is proposed to investigate the sensitivity of the structural parameters on the uniformity of plasma characteristics. The tool for simulating plasma is CFD-ACE+, commercial multi-physics modeling software that has been proven to be accurate for plasma simulation. The simulated experimental results are analyzed to obtain a regression equation in three structural parameters. Through this equation, engineers can compute the uniformity of the electron number density rapidly without modeling in CFD-ACE+. An optimization performed at the end produces good results.
Hierarchical design of a polymeric nanovehicle for efficient tumor regression and imaging
An, Jinxia; Guo, Qianqian; Zhang, Peng; Sinclair, Andrew; Zhao, Yu; Zhang, Xinge; Wu, Kan; Sun, Fang; Hung, Hsiang-Chieh; Li, Chaoxing; Jiang, Shaoyi
2016-04-01
Effective delivery of therapeutics to disease sites significantly contributes to drug efficacy, toxicity and clearance. Here we designed a hierarchical polymeric nanoparticle structure for anti-cancer chemotherapy delivery by utilizing state-of-the-art polymer chemistry and co-assembly techniques. This novel structural design combines the most desired merits for drug delivery in a single particle, including a long in vivo circulation time, inhibited non-specific cell uptake, enhanced tumor cell internalization, pH-controlled drug release and simultaneous imaging. This co-assembled nanoparticle showed exceptional stability in complex biological media. Benefiting from the synergistic effects of zwitterionic and multivalent galactose polymers, drug-loaded nanoparticles were selectively internalized by cancer cells rather than normal tissue cells. In addition, the pH-responsive core retained their cargo within their polymeric coating through hydrophobic interaction and released it under slightly acidic conditions. In vivo pharmacokinetic studies in mice showed minimal uptake of nanoparticles by the mononuclear phagocyte system and excellent blood circulation half-lives of 14.4 h. As a result, tumor growth was completely inhibited and no damage was observed for normal organ tissues. This newly developed drug nanovehicle has great potential in cancer therapy, and the hierarchical design principle should provide valuable information for the development of the next generation of drug delivery systems.
Nonlinear decoupling controller design based on least squares support vector regression
Institute of Scientific and Technical Information of China (English)
WEN Xiang-jun; ZHANG Yu-nong; YAN Wei-wu; XU Xiao-ming
2006-01-01
Support Vector Machines (SVMs) have been widely used in pattern recognition and have also drawn considerable interest in control areas. Based on a method of least squares SVM (LS-SVM) for multivariate function estimation, a generalized inverse system is developed for the linearization and decoupling control of a general nonlinear continuous system. The approach of inverse modelling via LS-SVM and parameter optimization using the Bayesian evidence framework is discussed in detail. In this paper, a complex high-order nonlinear system is decoupled into a number of pseudo-linear Single Input Single Output (SISO) subsystems with linear dynamic components. The poles of the pseudo-linear subsystems can be configured to desired positions. The proposed method provides an effective alternative for the controller design of plants whose accurate mathematical model is unknown or whose state variables are difficult or impossible to measure. Simulation results showed the efficacy of the method.
Vermillion, James E.
The presence of artifactual bias in analysis of covariance (ANCOVA) and in matching nonequivalent control group (NECG) designs was empirically investigated. The data set was obtained from a study of the effects of a television program on children from three day care centers in Mexico in which the subjects had been randomly selected within centers.…
Manga, Punita; Klingeman, Dawn M; Lu, Tse-Yuan S; Mehlhorn, Tonia L; Pelletier, Dale A; Hauser, Loren J; Wilson, Charlotte M; Brown, Steven D
2016-01-01
RNA-seq is being used increasingly for gene expression studies and it is revolutionizing the fields of genomics and transcriptomics. However, the field of RNA-seq analysis is still evolving. We therefore specifically designed this study to contain large numbers of reads and four biological replicates per condition so that we could alter these parameters and assess their impact on differential expression results. Bacillus thuringiensis strains ATCC10792 and CT43 were grown in two Luria broth medium lots on four dates, and transcriptomics data were generated using one lane of sequence output from an Illumina HiSeq2000 instrument for each of the 32 samples, which were then analyzed using DESeq2. Genome coverages across samples ranged from 87 to 465X, with medium lots and culture dates identified as major sources of variation. Significantly differentially expressed genes (5% FDR, two-fold change) were detected for cultures grown using different medium lots and between different dates. The highly differentially expressed iron acquisition and metabolism genes were a likely consequence of differing amounts of iron in the two medium lots. Indeed, in this study RNA-seq was a tool for predictive biology, since we hypothesized and confirmed that the two LB medium lots had different iron contents (~two-fold difference). This study shows that the noise in data can be controlled and minimized with appropriate experimental design and by having the appropriate number of replicates and reads for the system being studied. We outline parameters for an efficient and cost-effective microbial transcriptomics study.
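The "5% FDR, two-fold change" criterion used above can be sketched with a Benjamini-Hochberg adjustment on per-gene p-values. This is an illustrative stand-in (not DESeq2 itself), and the p-values and fold changes below are made up:

```python
import numpy as np

def benjamini_hochberg(pvals):
    # Classic step-up BH adjustment: padj_(i) = min over j >= i of p_(j) * n / j
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(adj)
    out[order] = np.minimum(adj, 1.0)
    return out

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.60, 0.74, 0.91])
log2fc = np.array([2.1, -1.4, 1.1, 0.4, -2.0, 0.1, 1.5, -0.2])
padj = benjamini_hochberg(pvals)
significant = (padj < 0.05) & (np.abs(log2fc) >= 1.0)   # 5% FDR, two-fold change
print(np.where(significant)[0])   # → [0 1]
```

Note that gene 4 has a large fold change but fails the FDR cut once the p-values are adjusted, which is exactly the multiple-testing control the threshold provides.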
Directory of Open Access Journals (Sweden)
Panatchai Chetchotisak
2015-09-01
Full Text Available Because of the nonlinear strain distributions caused by abrupt changes in geometry or loading in deep beams, the approach used for conventional beams is not applicable. Consequently, the strut-and-tie model (STM) has been applied as the most rational and simple method for strength prediction and design of reinforced concrete deep beams. A deep beam is idealized by the STM as a truss-like structure consisting of diagonal concrete struts and tension ties. There have been numerous works proposing STMs for deep beams. However, uncertainty and complexity in shear strength computations of deep beams can be found in some STMs. Therefore, improvement of methods for predicting the shear strength of deep beams is still needed. By means of a large experimental database of 406 deep beam test results covering a wide range of influencing parameters, several shapes and geometries of STM, and six state-of-the-art formulations of the efficiency factors found in design codes and the literature, new STMs for predicting the shear strength of simply supported reinforced concrete deep beams are proposed in this paper using multiple linear regression analysis. Furthermore, regression diagnostics and a validation process are included in this study. Finally, two numerical examples are provided for illustration.
Saleem, Muhammad Rizwan; Ali, Rizwan; Honkanen, Seppo; Turunen, Jari
2015-03-01
We demonstrated the design, fabrication and characterization of three Resonant Waveguide Gratings (RWGs) with different polymer substrate materials [polycarbonate (PC), cyclic-olefin-copolymer (COC) and Ormocomp]. The RWGs are designed by the Fourier Modal Method and fabricated by Electron Beam Lithography, Nanoimprinting and Atomic Layer Deposition. The RWGs are investigated for athermal filtering device operation over a wide range of temperatures. Spectral shifts of the RWGs are described in terms of the thermal expansion and thermo-optic coefficients of the selected substrate and waveguide materials. Furthermore, the spectral shifts are explained on the basis of shrinkage strains, frozen-in stresses and the molecular chain orientation in polymeric materials. The thermal spectral stability of these filters was compared by theoretical calculations and experimental measurements. For the PC gratings, there is good agreement between calculated and measured results, with a net spectral shift of 0.8 nm over a 75 °C wide temperature range. Optical spectral characterization of the COC and Ormocomp gratings showed larger red spectral shifts than predicted by theoretical calculations. The deviation (0-1.5 nm) for the COC grating may result from its high modulus and inherent stresses, which were relaxed during heating, accompanied by the predominance of the thermal expansion coefficient. The Ormocomp gratings were subjected to UV-irradiation, causing the generation of compressive (shrinkage) strains, which were relieved on heating with a net result of expansion of the material, demonstrated by thermal spectral shifts towards longer wavelengths (0-2.5 nm). The spectral shifts might also be caused partially by the reorientation and reconfiguration of the polymer chains.
Cheng, Yu-Huei
2015-01-01
Primers play an important role in polymerase chain reaction (PCR) experiments, so it is necessary to select primers with suitable characteristics. Unfortunately, manual primer design is time-consuming and prone to human error because many PCR constraints must be considered simultaneously. Automatic programs for primer design are therefore urgently needed. In this study, the teaching-learning-based optimization (TLBO) method, which is robust and free of algorithm-specific parameters, is applied to screen primers that conform to primer constraints. The optimal primer frequency (OPF) based on three known melting-temperature formulas is estimated over 500 runs for each different number of generations. We selected optimal primers from fifty random nucleotide sequences of Homo sapiens at NCBI. The results indicate that SantaLucia's formula, coupled with the method, yields a higher optimal primer frequency and shorter CPU time than Wallace's formula and the Bolton and McCarthy formula. Through regression analysis, we also find that the number of generations is significantly associated with the optimal primer frequency. These results are helpful for developing novel TLBO-based computational methods to design feasible primers.
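One of the screening constraints above can be sketched directly: the Wallace rule, the simplest of the compared melting-temperature formulas, estimates Tm = 2(A+T) + 4(G+C) for short primers (the SantaLucia nearest-neighbour model is considerably more involved). The example primer sequence is arbitrary.

```python
def wallace_tm(primer):
    # Wallace rule: 2 degrees per A/T base, 4 degrees per G/C base
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

primer = "AGCTTCGGATCAGTCCAGTA"   # 10 A/T and 10 G/C bases
print(wallace_tm(primer))        # → 60
```

In an optimizer such as TLBO, a candidate primer whose computed Tm falls outside the target window (commonly around 50-62 °C) would simply be penalized in the fitness function.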
Chaibub Neto, Elias; Bare, J Christopher; Margolin, Adam A
2014-01-01
New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often times, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where "omics" features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights.
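A minimal p ≫ n simulation in the spirit of the comparison above: a sparse true model with many irrelevant features. The penalty strengths are illustrative, not tuned by cross-validation as a real study would do.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n, p, k = 150, 1000, 5                  # samples, features, true nonzero coefs
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = 3.0                          # sparse, strong signal
y = X @ beta + rng.normal(size=n)
X_te = rng.normal(size=(500, p))
y_te = X_te @ beta + rng.normal(size=500)

scores = {}
for model in (Ridge(alpha=1.0), Lasso(alpha=0.5), ElasticNet(alpha=0.5, l1_ratio=0.5)):
    scores[type(model).__name__] = r2_score(y_te, model.fit(X, y).predict(X_te))
print({name: round(v, 2) for name, v in scores.items()})
```

Under high sparsity the l1-penalized models recover the signal far better than ridge, matching the "true model sparsity" effect the study isolates; with a dense true model the ranking would change.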
Hao, Lingxin
2007-01-01
Quantile Regression, the first book of Hao and Naiman's two-book series, establishes the seldom recognized link between inequality studies and quantile regression models. Though separate methodological literature exists for each subject, the authors seek to explore the natural connections between this increasingly sought-after tool and research topics in the social sciences. Quantile regression as a method does not rely on assumptions as restrictive as those for the classical linear regression; though more traditional models such as least squares linear regression are more widely utilized, Hao
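The core idea can be sketched briefly: the τ-th regression quantile minimizes the pinball (check) loss ρ_τ(u) = u·(τ − 1[u < 0]), rather than squared error. Below, a median line (τ = 0.5) is fitted by plain subgradient descent; production code would use a dedicated solver such as statsmodels' QuantReg, and all data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4000
x = rng.uniform(0, 10, n)
y = 2.0 * x + rng.normal(0, 1.0, n)      # symmetric noise: true median slope is 2.0

def fit_quantile(x, y, tau, lr=0.01, iters=3000):
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        u = y - (b0 + b1 * x)
        g = np.where(u < 0, 1.0, 0.0) - tau   # pinball subgradient: 1[u<0] - tau
        b0 -= lr * g.mean()
        b1 -= lr * (g * x).mean()
    return b0, b1

b0, b1 = fit_quantile(x, y, tau=0.5)
print(round(b1, 2))   # close to the true median slope of 2.0
```

Because the loss weights under- and over-predictions asymmetrically by τ and 1 − τ, refitting with τ = 0.1 or τ = 0.9 traces out the lower and upper conditional quantiles, which is what makes the method useful for inequality studies.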
Directory of Open Access Journals (Sweden)
Weerapon Nuantong
2016-01-01
Full Text Available This research study aimed to develop a new concept design of a very low head (VLH) turbine using advanced optimization methodologies. A potential local site was chosen for the turbine based on its local conditions, namely a water head level of <2 meters and a flow rate of <5 m3/s. The study focused on the optimization of the turbine blade and guide vane profiles because of their major impact on the efficiency of the VLH axial flow turbine. A fluid flow simulation was first conducted for the axial turbine, followed by applying regression analysis to develop a turbine mathematical model in which the leading- and trailing-edge angles of the guide vanes and turbine blades were related to the efficiency, total head and flow rate. A genetic algorithm (GA) with a multi-objective function was also used to locate the optimal blade angles. Thereafter, the refined design was re-simulated. Following this procedure, the turbine efficiency was improved from 82.59% to 83.96% at a flow rate of 4.2 m3/s and a total head of 2 meters.
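A toy GA sketch of the optimization step described above: maximize a smooth stand-in "efficiency" surface over two angle parameters. The real objective was a regression model fitted to CFD runs with multiple objectives; this single-objective surface and its peak at (30, 55) are invented for illustration.

```python
import random

random.seed(2)

def efficiency(a, b):
    # Hypothetical quadratic efficiency surface with its peak at (30, 55)
    return 84.0 - 0.01 * (a - 30.0) ** 2 - 0.02 * (b - 55.0) ** 2

def ga(pop_size=40, gens=60):
    pop = [(random.uniform(0, 90), random.uniform(0, 90)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: efficiency(*p), reverse=True)
        elite = pop[: pop_size // 2]             # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            (a1, b1), (a2, b2) = random.sample(elite, 2)
            children.append(((a1 + a2) / 2 + random.gauss(0, 1.0),   # crossover
                             (b1 + b2) / 2 + random.gauss(0, 1.0)))  # + mutation
        pop = elite + children
    return max(pop, key=lambda p: efficiency(*p))

a, b = ga()
print(round(a, 1), round(b, 1))   # near the surface's optimum at (30, 55)
```

Replacing `efficiency` with a regression model fitted to a handful of expensive CFD simulations is what makes this kind of search affordable in practice.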
Ventura, Cristina; Latino, Diogo A R S; Martins, Filomena
2013-01-01
The performance of two QSAR methodologies, namely Multiple Linear Regressions (MLR) and Neural Networks (NN), towards the modeling and prediction of antitubercular activity was evaluated and compared. A data set of 173 potentially active compounds belonging to the hydrazide family and represented by 96 descriptors was analyzed. Models were built with Multiple Linear Regressions (MLR), single Feed-Forward Neural Networks (FFNNs), ensembles of FFNNs, and Associative Neural Networks (AsNNs) using four different data sets and different types of descriptors. The predictive ability of the different techniques was assessed and discussed on the basis of different validation criteria, and the results show, in general, better performance of AsNNs in terms of learning ability and prediction of antitubercular behavior when compared with all other methods. MLRs have, however, the advantage of pinpointing the most relevant molecular characteristics responsible for the behavior of these compounds against Mycobacterium tuberculosis. The best results for the larger data set (94 compounds in the training set and 18 in the test set) were obtained with AsNNs using seven descriptors (R(2) of 0.874 and RMSE of 0.437 against R(2) of 0.845 and RMSE of 0.472 in MLRs, for the test set). Counter-Propagation Neural Networks (CPNNs) were trained with the same data sets and descriptors. From the scrutiny of the weight levels in each CPNN and the information retrieved from MLRs, a rational design of potentially active compounds was attempted. Two new compounds were synthesized and tested against M. tuberculosis, showing an activity close to that predicted by the majority of the models.
Kemme, Bettina
2010-01-01
Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even to geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and
Directory of Open Access Journals (Sweden)
Greg Francis
2016-09-01
Full Text Available In response to concerns about the validity of empirical findings in psychology, some scientists use replication studies as a way to validate good science and to identify poor science. Such efforts are resource intensive and sometimes controversial (with accusations of researcher incompetence when a replication fails to show a previous result). An alternative approach is to examine the statistical properties of the reported literature to identify some cases of poor science. This review discusses some details of this process for prominent findings about racial bias, where a set of studies seems too good to be true. This kind of analysis is based on the original studies, so it avoids criticism from the original authors about the validity of replication studies. The analysis is also much easier to perform than a new empirical study. A variation of the analysis can also be used to explore whether it makes sense to run a replication study. As demonstrated here, there are situations where the existing data suggest that a direct replication of a set of studies is not worth the effort. Such a conclusion should motivate scientists to generate alternative experimental designs that better test theoretical ideas.
Hren, Darko; Marušić, Matko; Marušić, Ana
2011-03-30
Moral reasoning is important for developing medical professionalism, but current evidence for the relationship between education and moral reasoning does not clearly apply to medical students. We used a combined study design to test the effect of clinical teaching on moral reasoning. We used the Defining Issues Test-2 as a measure of moral judgment, with 3 general moral schemas: Personal Interest, Maintaining Norms, and Postconventional Schema. The test was applied to 3 consecutive cohorts of second-year students in 2002 (n = 207), 2003 (n = 192), and 2004 (n = 139), and to 707 students of all 6 study years in a 2004 cross-sectional study. We also tested 298 age-matched controls without university education. In the cross-sectional study, there was a significant main effect of the study year for Postconventional (F(5,679) = 3.67, P = 0.003) and Personal Interest scores (F(5,679) = 3.38, P = 0.005). There was no effect of the study year for Maintaining Norms scores. 3rd-year medical students scored higher on the Postconventional schema score than all other study years; students regressed from Postconventional to Maintaining Norms schema-based reasoning after entering the clinical part of the curriculum. Our study demonstrated a direct causal relationship between the regression in moral reasoning development and clinical teaching during the medical curriculum. The reasons may include the hierarchical organization of clinical practice, the specific nature of moral dilemmas faced by medical students, and the hidden medical curriculum.
Institute of Scientific and Technical Information of China (English)
Jinhong YOU; CHEN Min; Gemai CHEN
2004-01-01
Consider a semiparametric regression model with linear time series errors Y_k = x_k'β + g(t_k) + ε_k, 1 ≤ k ≤ n, where the Y_k are responses, x_k = (x_{k1}, x_{k2}, …, x_{kp})' and t_k ∈ T ⊂ R are fixed design points, β = (β_1, β_2, …, β_p)' is an unknown parameter vector, g(·) is an unknown bounded real-valued function defined on a compact subset T of the real line R, and ε_k is a linear process given by ε_k = Σ_{j=0}^∞ ψ_j e_{k−j}, ψ_0 = 1, where Σ_{j=0}^∞ |ψ_j| < ∞, and the e_j, j = 0, ±1, ±2, …, are i.i.d. random variables. In this paper we establish the asymptotic normality of the least squares estimator of β, a smooth estimator of g(·), and estimators of the autocovariance and autocorrelation functions of the linear process ε_k.
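A minimal simulation of this model (with an assumed g(t) = sin(2πt), a truncated ψ_j = 0.5^j linear process, and a simple difference-based estimator standing in for the paper's least squares estimator) illustrates that β remains estimable despite the unknown g and the serially correlated errors:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 1000, 2
beta = np.array([1.0, -0.5])
t = np.sort(rng.uniform(0.0, 1.0, n))      # fixed design points on [0, 1]
g = np.sin(2 * np.pi * t)                  # the unknown smooth function (assumed here)
X = rng.standard_normal((n, p))

# linear-process errors eps_k = sum_j psi_j e_{k-j}, psi_j = 0.5**j, truncated at 30 terms
psi = 0.5 ** np.arange(30)
e = rng.standard_normal(n + 29)
eps = np.convolve(e, psi, mode="valid")    # length n
y = X @ beta + g + eps

# difference-based estimate of beta: with t sorted and g smooth,
# g(t_k) - g(t_{k-1}) is negligible, so differencing removes g
beta_hat, *_ = np.linalg.lstsq(np.diff(X, axis=0), np.diff(y), rcond=None)
print(beta_hat)
```

The differenced errors remain serially correlated, so the estimator is consistent but its standard errors must account for the autocovariance structure, which is precisely what the asymptotic normality result above characterizes.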
Kahane, Leo H
2007-01-01
Using a friendly, nontechnical approach, the Second Edition of Regression Basics introduces readers to the fundamentals of regression. Accessible to anyone with an introductory statistics background, this book builds from a simple two-variable model to a model of greater complexity. Author Leo H. Kahane weaves four engaging examples throughout the text to illustrate not only the techniques of regression but also how this empirical tool can be applied in creative ways to consider a broad array of topics. New to the Second Edition Offers greater coverage of simple panel-data estimation:
DEFF Research Database (Denmark)
Pop, Paul; Izosimov, Viacheslav; Eles, Petru;
2009-01-01
We present an approach to the synthesis of fault-tolerant hard real-time systems for safety-critical applications. We use checkpointing with rollback recovery and active replication for tolerating transient faults. Processes and communications are statically scheduled. Our synthesis approach deci...
Directory of Open Access Journals (Sweden)
Ahmet DEMIR
2015-07-01
Full Text Available Artificial neural network models have already been used successfully in many different fields. Many studies show that ANN models provide better optimal results than competing models. But do they still provide optimal solutions when the ANN is used as part of a hybrid model? This research answers that question by applying these models to forecasting GDP growth of Japan. Multiple regression models were used as competitors against a hybrid ANN (ANN + multiple regression) model. The results show that the hybrid model performs better than the multiple regression models. Additionally, the variables significantly affecting GDP growth were identified, and some of the variables assumed to affect GDP growth of Japan were eliminated statistically.
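One common way to build such a hybrid (sketched here on synthetic data, not the paper's GDP series; the residual-correction structure is an assumption about the hybrid's form) is to fit the regression first and let the ANN model its residuals:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
n = 400
X = rng.uniform(-2, 2, (n, 3))
# linear trend plus a nonlinear term a purely linear model cannot capture
y = X @ np.array([1.0, -0.5, 0.3]) + np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, n)

lin = LinearRegression().fit(X, y)
resid = y - lin.predict(X)                     # what the linear part misses
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X, resid)  # ANN models the residual structure

mse_lin = mean_squared_error(y, lin.predict(X))
mse_hybrid = mean_squared_error(y, lin.predict(X) + ann.predict(X))
print(f"linear MSE = {mse_lin:.3f}, hybrid MSE = {mse_hybrid:.3f}")
```

The hybrid's advantage comes entirely from the nonlinear residual component; when the relationship is truly linear, the regression alone suffices and the ANN stage adds only variance.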
Steenbergen-Hu, Saiying; Olszewski-Kubilius, Paula
2016-01-01
The article by Davis, Engberg, Epple, Sieg, and Zimmer (2010) represents one of the recent research efforts from economists in evaluating the impact of gifted programs. It can serve as a worked example of the implementation of the regression discontinuity (RD) design method in gifted education research. In this commentary, we first illustrate the…
Matson, Johnny L.; Kozlowski, Alison M.
2010-01-01
Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…
Nick, Todd G; Campbell, Kathleen M
2007-01-01
The Medical Subject Headings (MeSH) thesaurus used by the National Library of Medicine defines logistic regression models as "statistical models which describe the relationship between a qualitative dependent variable (that is, one which can take only certain discrete values, such as the presence or absence of a disease) and an independent variable." Logistic regression models are used to study effects of predictor variables on categorical outcomes, and normally the outcome is binary, such as presence or absence of disease (e.g., non-Hodgkin's lymphoma), in which case the model is called a binary logistic model. When there are multiple predictors (e.g., risk factors and treatments), the model is referred to as a multiple or multivariable logistic regression model and is one of the most frequently used statistical models in medical journals. In this chapter, we examine both simple and multiple binary logistic regression models and present related issues, including interaction, categorical predictor variables, continuous predictor variables, and goodness of fit.
Directory of Open Access Journals (Sweden)
Anton V Bryksin
Full Text Available BACKGROUND: Most plasmids replicate only within a particular genus or family. METHODOLOGY/PRINCIPAL FINDINGS: Here we describe an engineered high copy number expression vector, pBAV1K-T5, that produces varying quantities of active reporter proteins in Escherichia coli, Acinetobacter baylyi ADP1, Agrobacterium tumefaciens (all gram-negative), Streptococcus pneumoniae, Leifsonia shinshuensis, Paenibacillus sp. S18-36 and Bacillus subtilis (gram-positive). CONCLUSIONS/SIGNIFICANCE: Our results demonstrate the efficiency of pBAV1K-T5 replication in different bacterial species, thereby facilitating the study of proteins that don't fold well in E. coli and pathogens not amenable to existing genetic tools.
Archfield, Stacey A.; Pugliese, Alessio; Castellarin, Attilio; Skøien, Jon O.; Kiang, Julie E.
2013-01-01
In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation techniques of point data have been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. Literature reports successful applications of two techniques, canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks taking both the catchment area and nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimate flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared to GLS regression equations developed for the prediction of 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation when using regression-based or spatial interpolation methods to estimate flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.
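The leave-one-out comparison logic used here can be sketched generically; in this sketch ordinary least squares stands in for the GLS regression equations and inverse-distance weighting stands in for kriging, on synthetic catchment-descriptor data (all names and values are illustrative, not the study's 61 streamgauges):

```python
import numpy as np

def loo_rmse(predict, X, y):
    """Leave-one-out cross-validation: refit on n-1 sites, predict the held-out one."""
    n = len(y)
    errs = [y[i] - predict(np.delete(X, i, axis=0), np.delete(y, i), X[i])
            for i in range(n)]
    return float(np.sqrt(np.mean(np.square(errs))))

def ols_predict(X_tr, y_tr, x_new):
    # regression on catchment descriptors (stand-in for the GLS equations)
    A = np.column_stack([np.ones(len(y_tr)), X_tr])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return coef[0] + x_new @ coef[1:]

def idw_predict(X_tr, y_tr, x_new):
    # inverse-distance weighting in descriptor space (crude stand-in for CK/TK)
    w = 1.0 / (np.linalg.norm(X_tr - x_new, axis=1) ** 2 + 1e-9)
    return float(w @ y_tr / w.sum())

rng = np.random.default_rng(8)
X = rng.standard_normal((60, 2))                              # two synthetic descriptors
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(0, 0.1, 60)  # synthetic flood quantile
print("OLS LOO-RMSE:", loo_rmse(ols_predict, X, y))
print("IDW LOO-RMSE:", loo_rmse(idw_predict, X, y))
```

Which family wins depends on the data-generating process: here the truth is linear in the descriptors, so the regression stand-in dominates; the study's finding that top-kriging outperforms GLS reflects strong spatial correlation along river networks that a descriptor-space regression cannot exploit.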
Directory of Open Access Journals (Sweden)
S. A. Archfield
2013-04-01
Full Text Available In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation techniques of point data have been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. Literature reports successful applications of two techniques, canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks taking both the catchment area and nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimate flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared to GLS regression equations developed for the prediction of 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation when using regression-based or spatial interpolation methods to estimate flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.
Idkaidek, Nasir M; Al-Ghazawi, Ahmad; Najib, Naji M
2004-12-01
The purpose of this study was to apply a replicate design approach to a bioequivalence study of an amoxicillin/clavulanic acid combination following a 250/125 mg oral dose to 23 subjects, and to compare the analysis of individual bioequivalence with average bioequivalence. This was conducted as a 2-treatment, 2-sequence, 4-period crossover study. Average bioequivalence was shown, whereas the individual bioequivalence approach failed to show bioequivalence. In conclusion, the individual bioequivalence approach is a stronger statistical tool than the average bioequivalence approach for testing intra-subject variances as well as the subject-by-formulation interaction variance.
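The average-bioequivalence decision mentioned here is conventionally a 90% confidence interval for the test/reference geometric mean ratio falling within 0.80 and 1.25; it can be sketched on hypothetical log-scale crossover data (the subject count, ratio, and variability below are assumed, not the study's measurements):

```python
import numpy as np
from scipy import stats

# hypothetical within-subject differences d_i = log(AUC_test) - log(AUC_reference)
rng = np.random.default_rng(4)
d = rng.normal(loc=np.log(1.02), scale=0.18, size=24)   # 24 assumed subjects

n = len(d)
mean_d = d.mean()
se = d.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)                    # two-sided 90% interval
lo, hi = np.exp(mean_d - t_crit * se), np.exp(mean_d + t_crit * se)
bioequivalent = (lo >= 0.80) and (hi <= 1.25)
print(f"GMR = {np.exp(mean_d):.3f}, 90% CI = ({lo:.3f}, {hi:.3f}), BE = {bioequivalent}")
```

Individual bioequivalence goes further by comparing within-subject variances and the subject-by-formulation interaction between formulations, which is why it can fail even when this average criterion passes.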
Freund, Rudolf J; Sa, Ping
2006-01-01
The book provides complete coverage of the classical methods of statistical analysis. It is designed to give students an understanding of the purpose of statistical analyses, to allow the student to determine, at least to some degree, the correct type of statistical analysis to be performed in a given situation, and to gain some appreciation of what constitutes good experimental design.
Razoumny, Yury N.
2016-12-01
Based on the theoretical results presented in the previous papers of this series for a traditional one-tiered constellation formed on orbits with the same altitude and inclination for all satellites, a method is developed for constellation design using compound satellite structures on orbits with different altitudes and inclinations and synchronized nodal regression. Compound, multi-tiered satellite structures (constellations) are based on orbits with different values of altitude and inclination that provide nodal regression synchronization. It is shown that using compound satellite constellations for periodic Earth coverage makes it possible to substantially improve the coverage, as compared to traditional constellations based on orbits with a common altitude and inclination for all satellites, and, as a consequence, to open new opportunities in satellite constellation design for different types of prospective space systems, whether for increasing the quality of observations or for minimizing the number of satellites required.
Directory of Open Access Journals (Sweden)
Halil Ibrahim Cebeci
2009-12-01
Full Text Available This study explores the relationship between student performance and instructional design. The research was conducted at the E-Learning School at a university in Turkey. A list of design factors that had potential influence on student success was created through a review of the literature and interviews with relevant experts. From this, the five most important design factors were chosen. The experts scored 25 university courses on the extent to which they demonstrated the chosen design factors. Multiple-regression and supervised artificial neural network (ANN) models were used to examine the relationship between student grade point averages and the scores on the five design factors. The results indicated that there is no statistical difference between the two models. Both models identified the use of examples and applications as the most influential factor. The ANN model provided more information and was used to predict the course-specific factor values required for a desired level of success.
Xu, Wangli; Zhou, Haibo
2012-09-01
Two-stage design is a well-known cost-effective way for conducting biomedical studies when the exposure variable is expensive or difficult to measure. Recent research development further allowed one or both stages of the two-stage design to be outcome dependent on a continuous outcome variable. This outcome-dependent sampling feature enables further efficiency gain in parameter estimation and overall cost reduction of the study (e.g. Wang, X. and Zhou, H., 2010. Design and inference for cancer biomarker study with an outcome and auxiliary-dependent subsampling. Biometrics 66, 502-511; Zhou, H., Song, R., Wu, Y. and Qin, J., 2011. Statistical inference for a two-stage outcome-dependent sampling design with a continuous outcome. Biometrics 67, 194-202). In this paper, we develop a semiparametric mixed effect regression model for data from a two-stage design where the second-stage data are sampled with an outcome-auxiliary-dependent sample (OADS) scheme. Our method allows the cluster- or center-effects of the study subjects to be accounted for. We propose an estimated likelihood function to estimate the regression parameters. Simulation study indicates that greater study efficiency gains can be achieved under the proposed two-stage OADS design with center-effects when compared with other alternative sampling schemes. We illustrate the proposed method by analyzing a dataset from the Collaborative Perinatal Project.
Lulla, Valeria; Lulla, Aleksei; Wernike, Kerstin; Aebischer, Andrea; Beer, Martin
2016-01-01
ABSTRACT African horse sickness virus (AHSV), an orbivirus in the Reoviridae family with nine different serotypes, causes devastating disease in equids. The virion particle is composed of seven proteins organized in three concentric layers: an outer layer made of VP2 and VP5, a middle layer made of VP7, and an inner layer made of VP3 that encloses a replicase complex of VP1, VP4, and VP6 and a genome of 10 double-stranded RNA segments. In this study, we sought to develop highly efficacious candidate vaccines against all AHSV serotypes, taking into account not only immunogenic and safety properties but also virus productivity and stability parameters, which are essential criteria for vaccine candidates. To achieve this goal, we first established a highly efficient reverse genetics (RG) system for AHSV serotype 1 (AHSV1) and, subsequently, a VP6-defective AHSV1 strain in combination with in trans complementation of VP6. This was then used to generate defective particles of all nine serotypes, which required the exchange of two to five RNA segments to achieve equivalent titers of particles. All reassortant-defective viruses could be amplified and propagated to high titers in cells complemented with VP6 but were totally incompetent in any other cells. Furthermore, these replication-incompetent AHSV particles were demonstrated to be highly protective against homologous virulent virus challenges in type I interferon receptor (IFNAR)-knockout mice. Thus, these defective viruses have the potential to be used for the development of safe and stable vaccine candidates. The RG system also provides a powerful tool for the study of the role of individual AHSV proteins in virus assembly, morphogenesis, and pathogenesis. IMPORTANCE African horse sickness virus is transmitted by biting midges and causes African horse sickness in equids, with mortality reaching up to 95% in naive horses. Therefore, the development of efficient vaccines is extremely important due to major economic
Directory of Open Access Journals (Sweden)
Sankaran Venugopal
2011-10-01
Full Text Available This paper presents the design of a hybrid rocket motor and the experiments carried out for investigation of hybrid combustion and regression rates for a combination of the liquid oxidiser red fuming nitric acid with the solid fuel hydroxyl-terminated polybutadiene. The regression rate is enhanced with the addition of a small quantity of the solid oxidiser ammonium perchlorate in the fuel. The characteristics of the combustion products were calculated using the NASA CEA code and were used in a ballistic code developed for predicting the performance of the hybrid rocket motor. A lab-scale motor was designed, and the oxidiser mass flow requirements of the hybrid motor for the above combination of fuel and oxidiser were calculated using the developed ballistic code. A static rocket motor testing facility was realised for conducting the hybrid experiments. A series of tests was conducted, and proper ignition with stable combustion in the hybrid mode was established. The regression rate correlations were obtained from the experiments as a function of the oxidiser mass flux and chamber pressure for the various combinations. Defence Science Journal, 2011, 61(6), pp. 515-522, DOI: http://dx.doi.org/10.14429/dsj.61.873
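Regression-rate correlations of the form r = a·G^n in the oxidiser mass flux G are typically fitted by least squares on the log-transformed law; the data below are simulated under assumed coefficients, not the paper's measurements:

```python
import numpy as np

# hypothetical data: oxidiser mass flux G (kg/m^2/s) vs measured fuel
# regression rate r (mm/s), generated from an assumed law r = a * G**n
rng = np.random.default_rng(5)
a_true, n_true = 0.08, 0.6
G = rng.uniform(50, 400, 30)
r = a_true * G**n_true * np.exp(rng.normal(0, 0.05, 30))   # multiplicative scatter

# linearize: log r = log a + n log G, then ordinary least squares
A = np.column_stack([np.ones_like(G), np.log(G)])
coef, *_ = np.linalg.lstsq(A, np.log(r), rcond=None)
a_hat, n_hat = np.exp(coef[0]), coef[1]
print(f"fitted correlation: r = {a_hat:.3f} * G^{n_hat:.2f}")
```

The fitted exponent n characterizes how strongly the fuel regression rate responds to oxidiser mass flux, which is the key quantity the ballistic code needs to size the oxidiser feed.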
Louie, Josephine; Rhoads, Christopher; Mark, June
2014-01-01
Recent federal legislation, such as the "No Child Left Behind Act" of 2001 and the "Education Sciences Reform Act (ESRA)" of 2002, has insisted that educational evaluations use rigorous research designs with quantitative outcome measures. In particular, the Institute of Education Sciences (IES) at the U.S. Department of…
Transductive Ordinal Regression
Seah, Chun-Wei; Ong, Yew-Soon
2011-01-01
Ordinal regression is commonly formulated as a multi-class problem with ordinal constraints. The challenge of designing accurate classifiers for ordinal regression generally increases with the number of classes involved, due to the large number of labeled patterns that are needed. Ordinal class labels, however, are often costly to calibrate or difficult to obtain. Unlabeled patterns, on the other hand, often exist in much greater abundance and are freely available. To benefit from the abundance of unlabeled patterns, we present a novel transductive learning paradigm for ordinal regression in this paper, namely Transductive Ordinal Regression (TOR). The key challenge of the present study lies in the precise estimation of both the ordinal class labels of the unlabeled data and the decision functions of the ordinal classes, simultaneously. The core elements of the proposed TOR include an objective function that caters to several commonly used loss functions cast in a transductive setting...
LHCb experience with LFC replication
Bonifazi, F; Perez, E D; D'Apice, A; dell'Agnello, L; Düllmann, D; Girone, M; Re, G L; Martelli, B; Peco, G; Ricci, P P; Sapunenko, V; Vagnoni, V; Vitlacil, D
2008-01-01
Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.
LHCb experience with LFC replication
Carbone, Angelo; Dafonte Perez, Eva; D'Apice, Antimo; dell'Agnello, Luca; Duellmann, Dirk; Girone, Maria; Lo Re, Giuseppe; Martelli, Barbara; Peco, Gianluca; Ricci, Pier Paolo; Sapunenko, Vladimir; Vagnoni, Vincenzo; Vitlacil, Dejan
2007-01-01
Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog, i.e. a database which stores information about files spread across the GRID, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation with performance going much beyond the LHCb requirements.
Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay
2016-04-01
Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford
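The D-optimal selection step can be sketched greedily for a binary logistic model using the rank-one determinant update det(M + w·xx') = det(M)·(1 + w·x'M⁻¹x); the working estimate, sample sizes, and data below are assumptions for illustration, not the DSCVR implementation:

```python
import numpy as np

def greedy_d_optimal(X, n_select, beta):
    """Greedily pick rows maximizing the determinant of the logistic Fisher
    information sum_i w_i x_i x_i', with weights w_i = p_i (1 - p_i) evaluated
    at a working parameter estimate beta."""
    p = 1 / (1 + np.exp(-X @ beta))
    w = p * (1 - p)
    chosen = []
    M = 1e-6 * np.eye(X.shape[1])          # small ridge keeps M invertible
    for _ in range(n_select):
        Minv = np.linalg.inv(M)
        best, best_gain = None, -np.inf
        for i in range(len(X)):
            if i in chosen:
                continue
            # det(M + w x x') / det(M) = 1 + w x' Minv x, so this is the gain
            gain = w[i] * X[i] @ Minv @ X[i]
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        M = M + w[best] * np.outer(X[best], X[best])
    return chosen

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])  # 200 candidate charts
idx = greedy_d_optimal(X, n_select=20, beta=np.zeros(3))            # 20 charts to validate
```

Each selected index corresponds to a chart whose predictor values add the most information about the model parameters, which is the intuition behind validating those cases first rather than a random sample.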
Kelman, Lori M; Kelman, Zvi
2014-01-01
DNA replication is essential for all life forms. Although the process is fundamentally conserved in the three domains of life, bioinformatic, biochemical, structural, and genetic studies have demonstrated that the process and the proteins involved in archaeal DNA replication are more similar to those in eukaryal DNA replication than in bacterial DNA replication, but have some archaeal-specific features. The archaeal replication system, however, is not monolithic, and there are some differences in the replication process between different species. In this review, the current knowledge of the mechanisms governing DNA replication in Archaea is summarized. The general features of the replication process as well as some of the differences are discussed.
Replication forks reverse at high frequency upon replication stress in Physarum polycephalum.
Maric, Chrystelle; Bénard, Marianne
2014-12-01
The addition of hydroxyurea after the onset of S phase allows replication to start and permits the successive detection of replication-dependent joint DNA molecules and chicken foot structures in the synchronous nuclei of Physarum polycephalum. We find evidence for a very high frequency of reversed replication forks upon replication stress. The formation of these reversed forks is dependent on the presence of joint DNA molecules, the impediment of replication fork progression by hydroxyurea, and likely on the propensity of some replication origins to reinitiate replication to counteract the action of this compound. As hydroxyurea treatment enables us to successively detect the appearance of joint DNA molecules and then of reversed replication forks, we propose that chicken foot structures are formed both from the regression of hydroxyurea-frozen joint DNA molecules and from hydroxyurea-stalled replication forks. These experiments underscore the transient nature of replication fork regression, which becomes detectable due to the hydroxyurea-induced slowing down of replication fork progression.
Directory of Open Access Journals (Sweden)
Pratima Kunwar
Full Text Available A successful HIV vaccine will likely induce both humoral and cell-mediated immunity; however, the enormous diversity of HIV has hampered the development of a vaccine that effectively elicits both arms of the adaptive immune response. To tackle the problem of viral diversity, T cell-based vaccine approaches have focused on two main strategies: (i) increasing the breadth of vaccine-induced responses or (ii) increasing vaccine-induced responses targeting only conserved regions of the virus. The relative extent to which set-point viremia is impacted by epitope conservation of CD8(+) T cell responses elicited during early HIV infection is unknown but has important implications for vaccine design. To address this question, we comprehensively mapped HIV-1 CD8(+) T cell epitope specificities in 23 ART-naïve individuals during early infection and computed their conservation score (CS) by three different methods (prevalence, entropy and conseq) on clade-B and group-M sequence alignments. The majority of CD8(+) T cell responses were directed against variable epitopes (p<0.01). Interestingly, increasing breadth of CD8(+) T cell responses specifically recognizing conserved epitopes was associated with lower set-point viremia (r = -0.65, p = 0.009). Moreover, subjects possessing CD8(+) T cells recognizing at least one conserved epitope had 1.4 log10 lower set-point viremia compared to those recognizing only variable epitopes (p = 0.021). The association between viral control and the breadth of conserved CD8(+) T cell responses may be influenced by the method of CS definition and the sequences used to determine conservation levels. Strikingly, targeting variable versus conserved epitopes was independent of HLA type (p = 0.215). The associations with viral control were independent of functional avidity of CD8(+) T cell responses elicited during early infection. Taken together, these data suggest that the next-generation of T-cell based HIV-1 vaccines should focus
Pedrini, D. T.; Pedrini, Bonnie C.
Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…
Karim, Aziz; Zhao, Zhen; Slater, Margaret; Bradford, Dawn; Schuster, Jennifer; Laurent, Aziz
2007-07-01
An open-label, randomized, 2-sequence, 4-period crossover (7-day washout period between treatments), replicate-design study was conducted in 37 healthy subjects to assess intersubject and intrasubject variability in the peak (Cmax) and total (AUC) exposures to 2 oral antidiabetic drugs, pioglitazone and glimepiride, after single doses of 30 mg pioglitazone and 4 mg glimepiride, given in the fasted state as coadministered commercial tablets or as a single fixed-dose combination tablet. Variabilities in AUC(infinity) for the coadministered and fixed-dose combination treatments were similar: 16% to 19% (intrasubject) and 23% to 25% (intersubject) for pioglitazone, and 18% to 19% (intrasubject) and 29% to 30% (intersubject, excluding 1 poor metabolizer) for glimepiride. Fixed-dose combination/coadministered least squares mean ratios of >=0.86, with the 90% confidence intervals of these ratios for pioglitazone and glimepiride between 0.80 and 1.25 for Cmax, AUC(lqc), and AUC(infinity), met the bioequivalence standards. Gender analysis showed that women had mean exposures 16% and 30% higher than men for glimepiride (excluding 1 poor metabolizer) and pioglitazone, respectively. There was considerable overlap in the AUC(infinity) values, making gender-dependent dosing unnecessary. Patients taking pioglitazone and glimepiride as cotherapy may replace their medication with a single fixed-dose combination tablet containing these 2 oral antidiabetic drugs.
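The average-bioequivalence criterion applied above (the 90% confidence interval of the geometric mean ratio must fall within 0.80 to 1.25) can be sketched numerically. This is a simplified paired log-scale analysis on synthetic AUC data, not the full crossover ANOVA used in the study; all values are illustrative.

```python
import numpy as np
from scipy import stats

def gmr_90ci(test, ref):
    # Paired analysis on the log scale; a simplification of the
    # crossover ANOVA used in the actual study.
    d = np.log(np.asarray(test)) - np.log(np.asarray(ref))
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.95, n - 1)  # two-sided 90% CI
    return np.exp(d.mean()), np.exp(d.mean() - t * se), np.exp(d.mean() + t * se)

rng = np.random.default_rng(0)
auc_ref = rng.lognormal(mean=3.0, sigma=0.2, size=24)             # synthetic AUCs
auc_test = auc_ref * rng.lognormal(mean=0.0, sigma=0.1, size=24)  # similar product
gmr, lo, hi = gmr_90ci(auc_test, auc_ref)
print(f"GMR={gmr:.3f}  90% CI=({lo:.3f}, {hi:.3f})")
print("bioequivalent:", lo >= 0.80 and hi <= 1.25)
```

With low intrasubject variability and a true ratio near 1, the interval sits comfortably inside the 0.80 to 1.25 limits; widening sigma in the test product shows how variability alone can fail the criterion.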
Cosbey, Joanna; Muldoon, Deirdre
2017-01-01
This study evaluated the effectiveness of a family-centered feeding intervention, Easing Anxiety Together with Understanding and Perseverance (EAT-UP™), for promoting food acceptance of children with autism spectrum disorder at home. A concurrent multiple-baseline design was used with systematic replication across three families. Baseline was…
Aoki, Shigeru; Uchiyama, Jumpei; Ito, Manabu
2014-06-01
Differences between laboratory and commercial tablet presses are frequently observed during scale-up of a tableting process. These scale-up issues result from differences in total compression time, which is the sum of the consolidation and dwell times. When a lubricated blend is compressed into tablets, the tablet thickness produced by a commercial tablet press is often greater than that produced by a laboratory tablet press. A new punch shape design, designated shape adjusted for scale-up (SAS), was developed and used to demonstrate the ability to replicate scale-up issues in commercial-scale tableting processes. It was found that the consolidation time can be slightly shortened by changing the vertical curvature of the conventional punch head rim. However, this approach is not enough to replicate the consolidation time. A second, two-stage SAS punch design with an embossed punch head was designed to replicate the consolidation and dwell times on a laboratory tablet press to match those of a commercial tablet press. The resulting tablet thickness using this second SAS punch on a laboratory tablet press was greater than when using a conventional punch in the same laboratory tablet press. The second SAS punches are more useful tools for replicating and understanding potential scale-up issues. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Software Design of Quadratic Orthogonal Regression Experiment Based on Delphi
Institute of Scientific and Technical Information of China (English)
吴笛
2011-01-01
The orthogonal regression experiment is an important and efficient method of scientific experimentation, with particular advantages in fields including materials science, chemical engineering, and life science engineering. However, its shortcomings are complex data processing, a large amount of computation, an error-prone calculation process, and low efficiency. In this paper, through the experimental optimization of hot-dip aluminizing process parameters, the Delphi language is used to design two-factor quadratic orthogonal regression experiment software. The software greatly improves the efficiency and accuracy of data processing, reduces errors, and provides a reliable data processing platform for similar experiments.
Liu, Yun; Hu, Chaoying; Liu, Gangyi; Jia, Jingying; Yu, Chen; Zhu, Jianmin; Zheng, Qingsi; Zhang, Kanyin E
2014-10-01
Artesun-Plus is a fixed-dose combination antimalarial agent containing artesunate and amodiaquine. The current study was conducted to compare the pharmacokinetic and safety profiles of Artesun-Plus and the WHO-designated comparator product Artesunate Amodiaquine Winthrop. To overcome the high intrasubject variability of artesunate, the study applied a two-sequence and four-period crossover (2 by 4), replicate study design to assess bioequivalence between the two products in 31 healthy male Chinese volunteers under fasting conditions. The results showed that the values of the geometric mean ratios of maximum concentration of drug in plasma (Cmax) and area under the concentration-time curve from time zero to the last blood sample collection (AUC0-last) for the artesunate component in the test and reference products were 95.9% and 93.9%, respectively, and that the corresponding 90% confidence intervals were 84.5% to 108.7% and 87.2% to 101.1%, while the geometric mean ratios for the amodiaquine component in the test and reference products were 95.0% and 100.0%, respectively, and the corresponding 90% confidence intervals were 86.7% to 104.1% and 93.5% to 107.0%. In conclusion, bioequivalence between the two artesunate and amodiaquine fixed-dose combination products was demonstrated for both components. The study also confirmed high intrasubject variability, especially for artesunate: the coefficients of variation (CV) of Cmax values for the test and reference products were 39.2% and 43.7%, respectively, while those for amodiaquine were 30.6% and 30.2%, respectively.
Heat shock and heat shock protein 70i enhance the oncolytic effect of replicative adenovirus.
Haviv, Y S; Blackwell, J L; Li, H; Wang, M; Lei, X; Curiel, D T
2001-12-01
Replication-competent viruses are currently being evaluated for their cancer cell-killing properties. These vectors are designed to induce tumor regression after selective viral propagation within the tumor. However, replication-competent viruses have not heretofore resulted in complete tumor eradication in the clinical setting. Recently, heat shock has been reported to partially alleviate replication restriction on an avian adenovirus (Ad) in a human lung cancer cell line. Therefore, we hypothesized that heat shock and overexpression of heat shock protein (hsp) would support the oncolytic effect of a replication-competent human Ad. To this end, we tested the oncolytic and burst kinetics of a replication-competent Ad after exposure to heat shock or to inducible hsp 70 overexpression by a replication-deficient Ad (Adhsp 70i). Heat shock resulted in augmentation of Ad burst and oncolysis while decreasing total intracellular Ad DNA. Overexpression of hsp 70i also enhanced Ad-mediated oncolysis but did not decrease intracellular Ad DNA levels. We conclude that heat shock and Adhsp 70i enhance the Ad cell-killing potential via distinct mechanisms. A potential therapeutic implication would be the use of local hyperthermia to augment oncolysis by increasing the burst of replication-competent Ad. The role of hsp in Ad-mediated oncolysis should be explored further.
Ghaedi, M; Dashtian, K; Ghaedi, A M; Dehghanian, N
2016-05-11
The aim of this work is to study the predictive ability of a hybrid model of support vector regression with genetic algorithm optimization (GA-SVR) for the adsorption of malachite green (MG) onto multi-walled carbon nanotubes (MWCNTs). Various factors were investigated by central composite design, and the optimum conditions were set as: pH 8, 0.018 g MWCNTs, and 8 mg L(-1) dye mixed thoroughly with 50 mL of solution for 10 min. The Langmuir, Freundlich, Temkin and D-R isothermal models were applied to fit the experimental data, and the data were well explained by the Langmuir model, with a maximum adsorption capacity of 62.11-80.64 mg g(-1) in a short time at 25 °C. Kinetic studies at various adsorbent dosages and initial MG concentrations show that maximum MG removal was achieved within 10 min of the start of every experiment under most conditions. The adsorption obeys the pseudo-second-order rate equation in addition to the intraparticle diffusion model. The optimal parameters (C of 0.2509, σ(2) of 0.1288 and ε of 0.2018) for the SVR model were obtained with the GA. For the testing data set, an MSE value of 0.0034 and a coefficient of determination (R(2)) of 0.9195 were achieved.
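The Langmuir fit reported above, qe = qm·K·Ce/(1 + K·Ce), can be reproduced with a nonlinear least-squares fit. The concentrations and parameter values below are synthetic illustrations, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, K):
    # qe = qm * K * Ce / (1 + K * Ce): monolayer capacity qm, affinity K.
    return qm * K * Ce / (1.0 + K * Ce)

# Synthetic equilibrium data with 2% multiplicative noise (illustrative).
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
rng = np.random.default_rng(8)
qe = langmuir(Ce, 75.0, 0.4) * (1.0 + 0.02 * rng.normal(size=Ce.size))

(qm_hat, K_hat), _ = curve_fit(langmuir, Ce, qe, p0=(50.0, 0.1))
print(f"qm = {qm_hat:.1f} mg/g, K = {K_hat:.3f} L/mg")
```

The fitted qm recovers the plateau capacity analogous to the 62-80 mg/g range reported in the abstract; linearized Langmuir plots (Ce/qe vs Ce) are a common alternative but weight errors differently.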
Kim, Junghoon; Lee, Junwye; Hamada, Shogo; Murata, Satoshi; Ha Park, Sung
2015-06-01
Biology provides numerous examples of self-replicating machines, but artificially engineering such complex systems remains a formidable challenge. In particular, although simple artificial self-replicating systems including wooden blocks, magnetic systems, modular robots and synthetic molecular systems have been devised, such kinematic self-replicators are rare compared with examples of theoretical cellular self-replication. One of the principal reasons for this is the amount of complexity that arises when you try to incorporate self-replication into a physical medium. In this regard, DNA is a prime candidate material for constructing self-replicating systems due to its ability to self-assemble through molecular recognition. Here, we show that DNA T-motifs, which self-assemble into ring structures, can be designed to self-replicate through toehold-mediated strand displacement reactions. The inherent design of these rings allows the population dynamics of the systems to be controlled. We also analyse the replication scheme within a universal framework of self-replication and derive a quantitative metric of the self-replicability of the rings.
Tang, Yang; Cook, Thomas D.; Kisbu-Sakarya, Yasemin
2015-01-01
Regression discontinuity design (RD) has been widely used to produce reliable causal estimates. Researchers have validated the accuracy of RD design using within study comparisons (Cook, Shadish & Wong, 2008; Cook & Steiner, 2010; Shadish et al, 2011). Within study comparisons examines the validity of a quasi-experiment by comparing its…
Replication Restart in Bacteria.
Michel, Bénédicte; Sandler, Steven J
2017-07-01
In bacteria, replication forks assembled at a replication origin travel to the terminus, often a few megabases away. They may encounter obstacles that trigger replisome disassembly, rendering replication restart from abandoned forks crucial for cell viability. During the past 25 years, the genes that encode replication restart proteins have been identified and genetically characterized. In parallel, the enzymes were purified and analyzed in vitro, where they can catalyze replication initiation in a sequence-independent manner from fork-like DNA structures. This work also revealed a close link between replication and homologous recombination, as replication restart from recombination intermediates is an essential step of DNA double-strand break repair in bacteria and, conversely, arrested replication forks can be acted upon by recombination proteins and converted into various recombination substrates. In this review, we summarize this intense period of research that led to the characterization of the ubiquitous replication restart protein PriA and its partners, to the definition of several replication restart pathways in vivo, and to the description of tight links between replication and homologous recombination, responsible for the importance of replication restart in the maintenance of genome stability. Copyright © 2017 American Society for Microbiology.
Regression analysis by example
National Research Council Canada - National Science Library
Chatterjee, Samprit; Hadi, Ali S
2012-01-01
.... The emphasis continues to be on exploratory data analysis rather than statistical theory. The coverage offers in-depth treatment of regression diagnostics, transformation, multicollinearity, logistic regression, and robust regression...
Replicator dynamics in value chains
DEFF Research Database (Denmark)
Cantner, Uwe; Savin, Ivan; Vannuccini, Simone
2016-01-01
The pure model of replicator dynamics, though providing important insights into the evolution of markets, has not found much empirical support. This paper extends the model to the case of firms vertically integrated in value chains. We show that i) by taking value chains into account, the replicator dynamics may revert its effect. In these regressive developments of market selection, firms with low fitness expand because of being integrated with highly fit partners, and the other way around; ii) allowing partner switching within a value chain illustrates that periods of instability in the early stage of the industry life-cycle may be the result of an 'optimization' of partners within a value chain, providing a novel and simple explanation for the evidence discussed by Mazzucato (1998); iii) there are distinct differences in the contribution to market selection between the layers of a value chain…
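The pure replicator dynamics that the paper extends can be sketched with a discrete-time simulation: each firm's market share grows in proportion to its fitness advantage over the share-weighted average. The fitness values below are arbitrary illustrations.

```python
import numpy as np

def replicator_step(shares, fitness, dt=0.01):
    # ds_i = dt * s_i * (f_i - mean fitness): shares above-average
    # fitness expand, below-average shares shrink; total stays 1.
    mean_fitness = shares @ fitness
    return shares + dt * shares * (fitness - mean_fitness)

shares = np.array([0.3, 0.3, 0.4])    # three firms' market shares
fitness = np.array([1.0, 1.2, 0.9])   # illustrative fitness levels
for _ in range(5000):
    shares = replicator_step(shares, fitness)
print(shares.round(3))
```

In this pure form the fittest firm always takes over; the paper's point is that coupling firms across value-chain layers can reverse this, letting a low-fitness firm survive via a highly fit partner.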
DEFF Research Database (Denmark)
Johansen, Søren
2008-01-01
The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating e...
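A common way to compute the reduced rank regression estimator (under identity error covariance, consistent with its link to canonical correlations) is an ordinary least squares fit followed by an SVD truncation of the fitted values. A minimal sketch on synthetic data:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    # OLS fit, then project the coefficients onto the leading right
    # singular vectors of the fitted values: the classical reduced
    # rank estimator for identity error covariance.
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T
    return B_ols @ V @ V.T

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
B_true = rng.normal(size=(5, 1)) @ rng.normal(size=(1, 4))  # rank-1 truth
Y = X @ B_true + 0.1 * rng.normal(size=(200, 4))
B_rrr = reduced_rank_regression(X, Y, rank=1)
print(np.linalg.matrix_rank(B_rrr))
```

The estimated coefficient matrix is exactly rank 1 by construction, while unrestricted OLS would return a full-rank matrix that merely approximates the low-rank truth.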
DEFF Research Database (Denmark)
Boyer, Anne-Sophie; Walter, David; Sørensen, Claus Storgaard
2016-01-01
A dividing cell has to duplicate its DNA precisely once during the cell cycle to preserve genome integrity avoiding the accumulation of genetic aberrations that promote diseases such as cancer. A large number of endogenous impacts can challenge DNA replication and cells harbor a battery of pathways...... causing DNA replication stress and genome instability. Further, we describe cellular and systemic responses to these insults with a focus on DNA replication restart pathways. Finally, we discuss the therapeutic potential of exploiting intrinsic replicative stress in cancer cells for targeted therapy....
Regression analysis by example
Chatterjee, Samprit
2012-01-01
Praise for the Fourth Edition: ""This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable."" -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded
Unitary Response Regression Models
Lipovetsky, S.
2007-01-01
The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…
Flexible survival regression modelling
DEFF Research Database (Denmark)
Cortese, Giuliana; Scheike, Thomas H; Martinussen, Torben
2009-01-01
Regression analysis of survival data, and more generally event history data, is typically based on Cox's regression model. We here review some recent methodology, focusing on the limitations of Cox's regression model. The key limitation is that the model is not well suited to represent time-varyi...
DEFF Research Database (Denmark)
Fitzenberger, Bernd; Wilke, Ralf Andreas
2015-01-01
Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by m...... treatment of the topic is based on the perspective of applied researchers using quantile regression in their empirical work....
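The extra detail quantile regression provides over a conditional-mean model comes from minimizing the pinball (check) loss at each quantile level. A minimal sketch on a synthetic heteroscedastic model, where the 0.1 and 0.9 quantile slopes genuinely differ; production work would use a dedicated solver such as statsmodels' QuantReg:

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, tau):
    # Check/pinball loss: weight positive residuals by tau,
    # negative residuals by (1 - tau).
    r = y - X @ beta
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

def quantile_regression(x, y, tau):
    X = np.column_stack([np.ones_like(x), x])
    b0, *_ = np.linalg.lstsq(X, y, rcond=None)  # warm start at OLS
    res = minimize(pinball_loss, b0, args=(X, y, tau),
                   method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 5000})
    return res.x  # (intercept, slope)

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 500)
y = 2.0 * x + rng.normal(0, 1 + 0.3 * x)  # noise grows with x
b10 = quantile_regression(x, y, 0.10)
b90 = quantile_regression(x, y, 0.90)
print("slope at tau=0.1:", round(b10[1], 2), " at tau=0.9:", round(b90[1], 2))
```

The mean model sees a single slope of about 2, whereas the two quantile fits reveal the fanning-out of the conditional distribution: a steeper slope at the upper quantile than at the lower one.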
Replicating animal mitochondrial DNA
Directory of Open Access Journals (Sweden)
Emily A. McKinney
2013-01-01
Full Text Available The field of mitochondrial DNA (mtDNA replication has been experiencing incredible progress in recent years, and yet little is certain about the mechanism(s used by animal cells to replicate this plasmid-like genome. The long-standing strand-displacement model of mammalian mtDNA replication (for which single-stranded DNA intermediates are a hallmark has been intensively challenged by a new set of data, which suggests that replication proceeds via coupled leading-and lagging-strand synthesis (resembling bacterial genome replication and/or via long stretches of RNA intermediates laid on the mtDNA lagging-strand (the so called RITOLS. The set of proteins required for mtDNA replication is small and includes the catalytic and accessory subunits of DNA polymerase y, the mtDNA helicase Twinkle, the mitochondrial single-stranded DNA-binding protein, and the mitochondrial RNA polymerase (which most likely functions as the mtDNA primase. Mutations in the genes coding for the first three proteins are associated with human diseases and premature aging, justifying the research interest in the genetic, biochemical and structural properties of the mtDNA replication machinery. Here we summarize these properties and discuss the current models of mtDNA replication in animal cells.
Logistic regression: a brief primer.
Stoltzfus, Jill C
2011-10-01
Regression techniques are versatile in their application to medical research because they can measure associations, predict outcomes, and control for confounding variable effects. As one such technique, logistic regression is an efficient and powerful way to analyze the effect of a group of independent variables on a binary outcome by quantifying each independent variable's unique contribution. Using components of linear regression reflected in the logit scale, logistic regression iteratively identifies the strongest linear combination of variables with the greatest probability of detecting the observed outcome. Important considerations when conducting logistic regression include selecting independent variables, ensuring that relevant assumptions are met, and choosing an appropriate model building strategy. For independent variable selection, one should be guided by such factors as accepted theory, previous empirical investigations, clinical considerations, and univariate statistical analyses, with acknowledgement of potential confounding variables that should be accounted for. Basic assumptions that must be met for logistic regression include independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers. Additionally, there should be an adequate number of events per independent variable to avoid an overfit model, with commonly recommended minimum "rules of thumb" ranging from 10 to 20 events per covariate. Regarding model building strategies, the three general types are direct/standard, sequential/hierarchical, and stepwise/statistical, with each having a different emphasis and purpose. Before reaching definitive conclusions from the results of any of these methods, one should formally quantify the model's internal validity (i.e., replicability within the same data set) and external validity (i.e., generalizability beyond the current sample). The resulting logistic regression model
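The iterative fitting and the events-per-variable rule of thumb described above can be sketched in a few lines. This is a bare gradient-ascent fit on synthetic data, not a substitute for a statistics package (which would use Newton/IRLS and report standard errors and diagnostics):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=20000):
    # Plain gradient ascent on the log-likelihood; packages use
    # Newton/IRLS, but the fitted coefficients are the same MLE.
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        beta += lr * Xd.T @ (y - p) / len(y)
    return beta

def events_per_variable(y, n_covariates):
    # The rule-of-thumb check from the text (10-20 events per covariate);
    # 'events' is the rarer of the two outcome classes.
    events = min(y.sum(), len(y) - y.sum())
    return events / n_covariates

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 2))
true_logit = 0.5 + 1.0 * X[:, 0] - 2.0 * X[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))
print("events per variable:", events_per_variable(y, 2))
beta = fit_logistic(X, y)
print("coefficients (intercept, x1, x2):", beta.round(2))
```

With 400 observations and 2 covariates the EPV check passes comfortably; shrinking the sample or adding covariates until EPV drops below 10 is a quick way to see the overfitting the text warns about.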
Naghshpour, Shahdad
2012-01-01
Regression analysis is the most commonly used statistical method in the world. Although few would characterize this technique as simple, regression is in fact both simple and elegant. The complexity that many attribute to regression analysis is often a reflection of their lack of familiarity with the language of mathematics. But regression analysis can be understood even without a mastery of sophisticated mathematical concepts. This book provides the foundation and will help demystify regression analysis using examples from economics and with real data to show the applications of the method. T
Cactus: An Introduction to Regression
Hyde, Hartley
2008-01-01
When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…
Replication NAND gate with light as input and output.
Samiappan, Manickasundaram; Dadon, Zehavit; Ashkenasy, Gonen
2011-01-14
Logic operations can highlight information transfer within complex molecular networks. We describe here the design of a peptide-based replication system that can be detected by following its fluorescence quenching. This process is used to negate the signal of light-activated replication, and thus to prepare the first replication NAND gate.
The Replication Recipe: What makes for a convincing replication?
Brandt, M.J.; IJzerman, H.; Dijksterhuis, A.J.; Farach, F.J.; Geller, J.; Giner-Sorolla, R.; Grange, J.A.; Perugini, M.; Spies, J.R.; Veer, A. van 't
2014-01-01
Psychological scientists have recently started to reconsider the importance of close replications in building a cumulative knowledge base; however, there is no consensus about what constitutes a convincing close replication study. To facilitate convincing close replication attempts we have developed
Autistic epileptiform regression.
Canitano, Roberto; Zappella, Michele
2006-01-01
Autistic regression is a well-known condition that occurs in one third of children with pervasive developmental disorders who, after normal development in the first year of life, undergo a global regression during the second year that encompasses language, social skills and play. In a portion of these subjects, epileptiform abnormalities are present with or without seizures, resembling, in some respects, other epileptiform regressions of language and behaviour such as Landau-Kleffner syndrome. In these cases, for a more accurate definition of the clinical entity, the term autistic epileptiform regression has been suggested. As in other epileptic syndromes with regression, the relationships between EEG abnormalities, language and behaviour in autism are still unclear. We describe two cases of autistic epileptiform regression selected from a larger group of children with autistic spectrum disorders, with the aim of discussing the clinical features of the condition, the therapeutic approach and the outcome.
Scaled Sparse Linear Regression
Sun, Tingni
2011-01-01
Scaled sparse linear regression jointly estimates the regression coefficients and noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating the noise level via the mean residual squares and scaling the penalty in proportion to the estimated noise level. The iterative algorithm costs nearly nothing beyond the computation of a path of the sparse regression estimator for penalty levels above a threshold. For the scaled Lasso, the algorithm is a gradient descent in a convex minimization of a penalized joint loss function for the regression coefficients and noise level. Under mild regularity conditions, we prove that the method yields simultaneously an estimator for the noise level and an estimated coefficient vector in the Lasso path satisfying certain oracle inequalities for the estimation of the noise level, prediction, and the estimation of regression coefficients. These oracle inequalities provide sufficient conditions for the consistency and asymptotic...
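The iteration described in the abstract, fitting a sparse regression at a penalty proportional to the current noise estimate and then re-estimating the noise from the mean residual squares, can be sketched as follows. The coordinate-descent lasso and the choice lam0 = 0.1 are illustrative simplifications, not the paper's recommended penalty level.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=100):
    # Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1.
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            partial = X[:, j] @ (y - X @ beta) + col_ss[j] * beta[j]
            beta[j] = soft_threshold(partial, n * lam) / col_ss[j]
    return beta

def scaled_lasso(X, y, lam0=0.1, n_outer=15):
    # Alternate: lasso fit at penalty lam0 * sigma, then re-estimate
    # sigma via the mean residual squares (the abstract's iteration).
    sigma = y.std()
    for _ in range(n_outer):
        beta = lasso_cd(X, y, lam0 * sigma)
        sigma = np.sqrt(np.mean((y - X @ beta) ** 2))
    return beta, sigma

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + 0.5 * rng.normal(size=n)
beta, sigma = scaled_lasso(X, y)
print("sigma estimate:", round(float(sigma), 3))
print("selected:", np.flatnonzero(np.abs(beta) > 0.5))
```

The joint estimate recovers both the sparse support and a noise level close to the true 0.5, which is the equilibrium property the abstract describes.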
Rolling Regressions with Stata
Kit Baum
2004-01-01
This talk will describe some work underway to add a "rolling regression" capability to Stata's suite of time series features. Although commands such as "statsby" permit analysis of non-overlapping subsamples in the time domain, they are not suited to the analysis of overlapping (e.g. "moving window") samples. Both moving-window and widening-window techniques are often used to judge the stability of time series regression relationships. We will present an implementation of a rolling regression...
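A moving-window regression of the kind described can be sketched without Stata: plain NumPy refits OLS on each overlapping window of a series whose true slope shifts mid-sample, so the instability shows up as drifting coefficients.

```python
import numpy as np

def rolling_ols(x, y, window):
    # Refit an OLS line on every overlapping ("moving") window.
    coefs = []
    for start in range(len(x) - window + 1):
        sl = slice(start, start + window)
        slope, intercept = np.polyfit(x[sl], y[sl], 1)
        coefs.append((slope, intercept))
    return np.array(coefs)

rng = np.random.default_rng(5)
t = np.arange(120, dtype=float)
# the true slope shifts from 1.0 to 2.0 halfway through the sample
y = np.where(t < 60, t, 2.0 * t - 60.0) + rng.normal(0, 1, size=120)
coefs = rolling_ols(t, y, window=30)
print("first-window slope:", round(coefs[0, 0], 2))
print("last-window slope: ", round(coefs[-1, 0], 2))
```

Plotting the slope column against window position makes the structural break visible, which is exactly the stability diagnostic the talk describes; a widening-window variant only moves the window's start, never its end.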
Bennett, Joan
1998-01-01
Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)
Eukaryotic DNA Replication Fork.
Burgers, Peter M J; Kunkel, Thomas A
2017-06-20
This review focuses on the biogenesis and composition of the eukaryotic DNA replication fork, with an emphasis on the enzymes that synthesize DNA and repair discontinuities on the lagging strand of the replication fork. Physical and genetic methodologies aimed at understanding these processes are discussed. The preponderance of evidence supports a model in which DNA polymerase ε (Pol ε) carries out the bulk of leading strand DNA synthesis at an undisturbed replication fork. DNA polymerases α and δ carry out the initiation of Okazaki fragment synthesis and its elongation and maturation, respectively. This review also discusses alternative proposals, including cellular processes during which alternative forks may be utilized, and new biochemical studies with purified proteins that are aimed at reconstituting leading and lagging strand DNA synthesis separately and as an integrated replication fork.
Institute of Scientific and Technical Information of China (English)
Guijun YANG; Lu LIN; Runchu ZHANG
2007-01-01
Quasi-regression, motivated by problems arising in computer experiments, focuses mainly on speeding up evaluation. However, its theoretical properties have not been explored systematically. This paper shows that quasi-regression is unbiased, strongly convergent and asymptotically normal for parameter estimation, but it is biased for curve fitting. Furthermore, a new method called unbiased quasi-regression is proposed. In addition to retaining the above asymptotic behavior of parameter estimation, unbiased quasi-regression is unbiased for curve fitting.
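The speed-up behind quasi-regression comes from estimating the coefficients of an orthonormal expansion by Monte Carlo averaging instead of solving a least squares system. A sketch with Legendre polynomials, orthonormalized for uniform sampling on [-1, 1]; the target function is illustrative:

```python
import numpy as np

def quasi_regression_coefs(f, n_samples, degree, rng):
    # beta_k = E[f(X) * phi_k(X)] estimated by a Monte Carlo average,
    # where phi_k = sqrt(2k+1) * P_k is orthonormal for X ~ U(-1, 1).
    x = rng.uniform(-1.0, 1.0, n_samples)
    fx = f(x)
    coefs = []
    for k in range(degree + 1):
        phi_k = np.sqrt(2 * k + 1) * np.polynomial.legendre.Legendre.basis(k)(x)
        coefs.append(np.mean(fx * phi_k))
    return np.array(coefs)

rng = np.random.default_rng(6)
coefs = quasi_regression_coefs(lambda x: 2.0 + 3.0 * x, 50000, 3, rng)
print(coefs.round(3))
```

Each coefficient estimate is a plain sample mean, so it is unbiased for the parameters, exactly the property the paper proves; the bias it identifies arises only when the truncated expansion is used as a fitted curve.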
Introduction to regression graphics
Cook, R Dennis
2009-01-01
Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, written in the Xlisp-Stat language and called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava…
Weisberg, Sanford
2005-01-01
Master linear regression techniques with a new edition of a classic text. Reviews of the Second Edition: "I found it enjoyable reading and so full of interesting material that even the well-informed reader will probably find something new . . . a necessity for all of those who do linear regression." -Technometrics, February 1987. "Overall, I feel that the book is a valuable addition to the now considerable list of texts on applied linear regression. It should be a strong contender as the leading text for a first serious course in regression analysis." -American Scientist, May-June 1987
Meyer, Adam J; Ellefson, Jared W; Ellington, Andrew D
2012-12-18
The key to the origins of life is the replication of information. Linear polymers such as nucleic acids that both carry information and can be replicated are currently what we consider to be the basis of living systems. However, these two properties are not necessarily coupled. The ability to mutate in a discrete or quantized way, without frequent reversion, may be an additional requirement for Darwinian evolution, in which case the notion that Darwinian evolution defines life may be less of a tautology than previously thought. In this Account, we examine a variety of in vitro systems of increasing complexity, from simple chemical replicators up to complex systems based on in vitro transcription and translation. Comparing and contrasting these systems provides an interesting window onto the molecular origins of life. For nucleic acids, the story likely begins with simple chemical replication, perhaps of the form A + B → T, in which T serves as a template for the joining of A and B. Molecular variants capable of faster replication would come to dominate a population, and the development of cycles in which templates could foster one another's replication would have led to increasingly complex replicators and from thence to the initial genomes. The initial genomes may have been propagated by RNA replicases, ribozymes capable of joining oligonucleotides and eventually polymerizing mononucleotide substrates. As ribozymes were added to the genome to fill gaps in the chemistry necessary for replication, the backbone of a putative RNA world would have emerged. It is likely that such replicators would have been plagued by molecular parasites, which would have been passively replicated by the RNA world machinery without contributing to it. These molecular parasites would have been a major driver for the development of compartmentalization/cellularization, as more robust compartments could have outcompeted parasite-ridden compartments. The eventual outsourcing of metabolic
Hoeben, Rob C.; Uil, Taco G.
2013-01-01
Adenoviruses have attracted much attention as probes to study biological processes such as DNA replication, transcription, splicing, and cellular transformation. More recently these viruses have been used as gene-transfer vectors and oncolytic agents. On the other hand, adenoviruses are notorious pathogens in people with compromised immune functions. This article will briefly summarize the basic replication strategy of adenoviruses and the key proteins involved and will deal with the new deve...
Cebeci, Halil Ibrahim; Yazgan, Harun Resit; Geyik, Abdulkadir
2009-01-01
This study explores the relationship between the student performance and instructional design. The research was conducted at the E-Learning School at a university in Turkey. A list of design factors that had potential influence on student success was created through a review of the literature and interviews with relevant experts. From this, the…
Energy Technology Data Exchange (ETDEWEB)
Gerber, Samuel [Univ. of Utah, Salt Lake City, UT (United States); Rubel, Oliver [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bremer, Peer -Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Whitaker, Ross T. [Univ. of Utah, Salt Lake City, UT (United States)
2012-01-19
This paper introduces a novel partition-based regression approach that incorporates topological information. Partition-based regression typically introduces a quality-of-fit-driven decomposition of the domain. The emphasis in this work is on a topologically meaningful segmentation. Thus, the proposed regression approach is based on a segmentation induced by a discrete approximation of the Morse–Smale complex. This yields a segmentation with partitions corresponding to regions of the function with a single minimum and maximum that are often well approximated by a linear model. This approach yields regression models that are amenable to interpretation and have good predictive capacity. Typically, regression estimates are quantified by their geometrical accuracy. For the proposed regression, an important aspect is the quality of the segmentation itself. Thus, this article introduces a new criterion that measures the topological accuracy of the estimate. The topological accuracy provides a complementary measure to the classical geometrical error measures and is very sensitive to overfitting. The Morse–Smale regression is compared to state-of-the-art approaches in terms of geometry and topology and yields comparable or improved fits in many cases. Finally, a detailed study on climate-simulation data demonstrates the application of the Morse–Smale regression. Supplementary Materials are available online and contain an implementation of the proposed approach in the R package msr, an analysis and simulations on the stability of the Morse–Smale complex approximation, and additional tables for the climate-simulation study.
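In one dimension the construction above is easy to visualize: the Morse–Smale partitions reduce to the monotone segments between critical points of the response, and each segment gets its own linear model. The following numpy sketch illustrates that idea only (it is not the msr package, and the toy response is an arbitrary choice):

```python
import numpy as np

def monotone_partitions(x, y):
    """Split a 1-D domain at interior critical points of the response.

    In one dimension the Morse-Smale complex reduces to the monotone
    segments between local extrema.
    """
    d = np.diff(y)
    crit = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
    return np.split(np.arange(len(x)), crit)

def fit_partitioned_linear(x, y, parts):
    """Fit an independent least-squares line on each partition."""
    models = []
    for idx in parts:
        A = np.vstack([x[idx], np.ones(len(idx))]).T
        slope, intercept = np.linalg.lstsq(A, y[idx], rcond=None)[0]
        models.append((slope, intercept))
    return models

# toy response with a single interior maximum at x = 0.5
x = np.linspace(0.0, 1.0, 201)
y = -(x - 0.5) ** 2
parts = monotone_partitions(x, y)
models = fit_partitioned_linear(x, y, parts)
print(len(parts))                            # two monotone segments
print(models[0][0] > 0, models[1][0] < 0)    # rising slope, then falling slope
```

Each partition here corresponds to a region with a single minimum and maximum, which is why a linear model per partition often fits well.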
DEFF Research Database (Denmark)
Bordacconi, Mats Joe; Larsen, Martin Vinæs
2014-01-01
Humans are fundamentally primed for making causal attributions based on correlations. This implies that researchers must be careful to present their results in a manner that inhibits unwarranted causal attribution. In this paper, we present the results of an experiment that suggests regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results...... of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Our experiment implies that scholars using regression......
Krude, T; Knippers, R
1994-08-19
Single-stranded circular DNA, containing the SV40 origin sequence, was used as a template for complementary DNA strand synthesis in cytosolic extracts from HeLa cells. In the presence of the replication-dependent chromatin assembly factor CAF-1, defined numbers of nucleosomes were assembled during complementary DNA strand synthesis. These minichromosomes were then induced to semiconservatively replicate by the addition of the SV40 initiator protein T antigen (re-replication). The results indicate that re-replication of minichromosomes appears to be inhibited by two independent mechanisms. One acts at the initiation of minichromosome re-replication, and the other affects replicative chain elongation. To directly demonstrate the inhibitory effect of replicatively assembled nucleosomes, two types of minichromosomes were prepared: (i) post-replicative minichromosomes were assembled in a reaction coupled to replication as above; (ii) pre-replicative minichromosomes were assembled independently of replication on double-stranded DNA. Both types of minichromosomes were used as templates for DNA replication under identical conditions. Replicative fork movement was found to be impeded only on post-replicative minichromosome templates. In contrast, pre-replicative minichromosomes allowed one unconstrained replication cycle, but re-replication was inhibited due to a block in fork movement. Thus, replicatively assembled chromatin may have a profound influence on the re-replication of DNA.
Investigating variation in replicability: A "Many Labs" replication project
Klein, R.A.; Ratliff, K.A.; Vianello, M.; Adams, R.B.; Bahnik, S.; Bernstein, M.J.; Bocian, K.; Brandt, M.J.; Brooks, B.; Brumbaugh, C.C.; Cemalcilar, Z.; Chandler, J.; Cheong, W.; Davis, W.E.; Devos, T.; Eisner, M.; Frankowska, N.; Furrow, D.; Galliani, E.M.; Hasselman, F.W.; Hicks, J.A.; Hovermale, J.F.; Hunt, S.J.; Huntsinger, J.R.; IJzerman, H.; John, M.S.; Joy-Gaba, J.A.; Kappes, H.B.; Krueger, L.E.; Kurtz, J.; Levitan, C.A.; Mallett, R.K.; Morris, W.L.; Nelson, A.J.; Nier, J.A.; Packard, G.; Pilati, R.; Rutchick, A.M.; Schmidt, K.; Skorinko, J.L.M.; Smith, R.; Steiner, T.G.; Storbeck, J.; Van Swol, L.M.; Thompson, D.; Veer, A.E. van 't; Vaughn, L.A.; Vranka, M.; Wichman, A.L.; Woodzicka, J.A.; Nosek, B.A.
2014-01-01
Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of 13 classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, 10 effects replicated consistently.
Directory of Open Access Journals (Sweden)
Matthias Schmid
Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
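The classical maximum-likelihood fit that the boosted method is compared against can be sketched directly: model the mean on (0,1) through a logit link and maximize the beta log-likelihood. A minimal scipy sketch on simulated data (the coefficient and precision values are arbitrary choices, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

# simulated percentage-type response; true coefficients are arbitrary choices
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(-1.0, 1.0, n)
mu = expit(0.5 + 1.5 * x)                 # mean on (0,1) via a logit link
phi = 20.0                                # precision parameter
y = rng.beta(mu * phi, (1.0 - mu) * phi)

def negloglik(theta):
    """Negative beta log-likelihood with logit-linked mean and log-precision."""
    b0, b1, log_phi = theta
    m = expit(b0 + b1 * x)
    p = np.exp(log_phi)
    a, b = m * p, (1.0 - m) * p
    ll = (gammaln(p) - gammaln(a) - gammaln(b)
          + (a - 1.0) * np.log(y) + (b - 1.0) * np.log1p(-y))
    return -ll.sum()

res = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
b0_hat, b1_hat, log_phi_hat = res.x
print(b0_hat, b1_hat)   # should land near the true values 0.5 and 1.5
```

Boosting, as described in the abstract, replaces this one-shot optimization with stagewise updates that perform variable selection along the way.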
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Hepadnaviruses, including human hepatitis B virus (HBV), replicate through reverse transcription of an RNA intermediate, the pregenomic RNA (pgRNA). Despite this kinship to retroviruses, there are fundamental differences beyond the fact that hepadnavirions contain DNA instead of RNA. Most peculiar is the initiation of reverse transcription: it occurs by protein-priming, is strictly committed to using an RNA hairpin on the pgRNA, ε, as template, and depends on cellular chaperones; moreover, proper replication can apparently occur only in the specialized environment of intact nucleocapsids. This complexity has hampered an in-depth mechanistic understanding. The recent successful reconstitution in the test tube of active replication initiation complexes from purified components, for duck HBV (DHBV), now allows for the analysis of the biochemistry of hepadnaviral replication at the molecular level. Here we review the current state of knowledge at all steps of the hepadnaviral genome replication cycle, with emphasis on new insights that turned up by the use of such cell-free systems. At this time, they can, unfortunately, not be complemented by three-dimensional structural information on the involved components. However, at least for the ε RNA element such information is emerging, raising expectations that combining biophysics with biochemistry and genetics will soon provide a powerful integrated approach for solving the many outstanding questions. The ultimate, though most challenging, goal will be to visualize the hepadnaviral reverse transcriptase in the act of synthesizing DNA, which will also have strong implications for drug development.
Hosmer, David W; Sturdivant, Rodney X
2013-01-01
A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-
Weisberg, Sanford
2013-01-01
Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus
Psychology, replication & beyond.
Laws, Keith R
2016-06-01
Modern psychology is apparently in crisis and the prevailing view is that this partly reflects an inability to replicate past findings. If a crisis does exist, then it is some kind of 'chronic' crisis, as psychologists have been censuring themselves over replicability for decades. While the debate in psychology is not new, the lack of progress across the decades is disappointing. Recently though, we have seen a veritable surfeit of debate alongside multiple orchestrated and well-publicised replication initiatives. The spotlight is being shone on certain areas and although not everyone agrees on how we should interpret the outcomes, the debate is happening and impassioned. The issue of reproducibility occupies a central place in our whig history of psychology.
Handley, Zöe
2014-01-01
This paper argues that the goal of Computer-Assisted Language Learning (CALL) research should be to construct a reliable evidence-base with "engineering power" and generality upon which the design of future CALL software and activities can be based. In order to establish such an evidence base for future CALL design, it suggests that CALL…
Nonparametric Predictive Regression
Ioannis Kasparis; Elena Andreou; Phillips, Peter C.B.
2012-01-01
A unifying framework for inference is developed in predictive regressions where the predictor has unknown integration properties and may be stationary or nonstationary. Two easily implemented nonparametric F-tests are proposed. The test statistics are related to those of Kasparis and Phillips (2012) and are obtained by kernel regression. The limit distribution of these predictive tests holds for a wide range of predictors including stationary as well as non-stationary fractional and near unit...
Directory of Open Access Journals (Sweden)
V Haridas
Full Text Available BACKGROUND: Japanese encephalitis virus (JEV is a major cause of viral encephalitis in South and South-East Asia. Lack of antivirals and non-availability of affordable vaccines in these endemic areas are a major setback in combating JEV and other closely related viruses such as West Nile virus and dengue virus. Protein secondary structure mimetics are excellent candidates for inhibiting the protein-protein interactions and therefore serve as an attractive tool in drug development. We synthesized derivatives containing the backbone of naturally occurring lupin alkaloid, sparteine, which act as protein secondary structure mimetics and show that these compounds exhibit antiviral properties. METHODOLOGY/PRINCIPAL FINDINGS: In this study we have identified 3,7-diazabicyclo[3.3.1]nonane, commonly called bispidine, as a privileged scaffold to synthesize effective antiviral agents. We have synthesized derivatives of bispidine conjugated with amino acids and found that hydrophobic amino acid residues showed antiviral properties against JEV. We identified a tryptophan derivative, Bisp-W, which at 5 µM concentration inhibited JEV infection in neuroblastoma cells by more than 100-fold. Viral inhibition was at a stage post-entry and prior to viral protein translation possibly at viral RNA replication. We show that similar concentration of Bisp-W was capable of inhibiting viral infection of two other encephalitic viruses namely, West Nile virus and Chandipura virus. CONCLUSIONS/SIGNIFICANCE: We have demonstrated that the amino-acid conjugates of 3,7-diazabicyclo[3.3.1]nonane can serve as a molecular scaffold for development of potent antivirals against encephalitic viruses. Our findings will provide a novel platform to develop effective inhibitors of JEV and perhaps other RNA viruses causing encephalitis.
DNA replication origins in archaea
Zhenfang eWu; Jingfang eLiu; Haibo eYang; Hua eXiang
2014-01-01
DNA replication initiation, which starts at specific chromosomal sites (known as replication origins), is the key regulatory stage of chromosome replication. Archaea, the third domain of life, use a single or multiple origin(s) to initiate replication of their circular chromosomes. The basic structure of replication origins is conserved among archaea, typically including an AT-rich unwinding region flanked by several conserved repeats (origin recognition box, ORB) that are located adjacent to ...
Lopez, M Veronica; Rivera, Angel A; Viale, Diego L; Benedetti, Lorena; Cuneo, Nicasio; Kimball, Kristopher J; Wang, Minghui; Douglas, Joanne T; Zhu, Zeng B; Bravo, Alicia I; Gidekel, Manuel; Alvarez, Ronald D; Curiel, David T; Podhajcer, Osvaldo L
2012-01-01
Targeting the tumor stroma in addition to the malignant cell compartment is of paramount importance to achieve complete tumor regression. In this work, we modified a previously designed tumor stroma-targeted conditionally replicative adenovirus (CRAd) based on the SPARC promoter by introducing a mutated E1A unable to bind pRB and pseudotyped with a chimeric Ad5/3 fiber (Ad F512v1), and assessed its replication/lytic capacity in ovary cancer in vitro and in vivo. AdF512v1 was able to replicate in fresh samples obtained from patients: (i) with primary human ovary cancer; (ii) that underwent neoadjuvant treatment; (iii) with metastatic disease. In addition, we show that four intraperitoneal (i.p.) injections of 5 × 1010 v.p. eliminated 50% of xenografted human ovary tumors disseminated in nude mice. Moreover, AdF512v1 replication in tumor models was enhanced 15–40-fold when the tumor contained a mix of malignant and SPARC-expressing stromal cells (fibroblasts and endothelial cells). Unlike the wild-type virus, AdF512v1 was unable to replicate in normal human ovary samples. This study provides evidence on the lytic capacity of this CRAd and highlights the importance of targeting the stromal tissue in addition to the malignant cell compartment to achieve tumor regression. PMID:22948673
Replication studies in longevity
DEFF Research Database (Denmark)
Varcasia, O; Garasto, S; Rizza, T
2001-01-01
In Danes we replicated the 3'APOB-VNTR gene/longevity association study previously carried out in Italians, by which the Small alleles (less than 35 repeats) had been identified as frailty alleles for longevity. In Danes, neither genotype nor allele frequencies differed between centenarians and 20...
Duderstadt, Karl E.; Reyes-Lamothe, Rodrigo; van Oijen, Antoine M.; Sherratt, David J.
2014-01-01
The proliferation of all organisms depends on the coordination of enzymatic events within large multiprotein replisomes that duplicate chromosomes. Whereas the structure and function of many core replisome components have been clarified, the timing and order of molecular events during replication re
Coronavirus Attachment and Replication
1988-03-28
[Fragmentary scanned-document text: reference entries (e.g., J. Virol. 49:303-309; Pedersen, N.C. 1976a, on feline infectious peritonitis) and table-of-contents headings on coronavirus receptors on intestinal brush border membranes from normal host species: canine (CCV), feline (FIPV), porcine (TGEV), and human (HCV).]
Optimal Allocation of Replicates for Measurement Evaluation Studies
Institute of Scientific and Technical Information of China (English)
Stanislav O.Zakharkin; Kyoungmi Kim; Alfred A.Bartolucci; Grier P.Page; David B.Allison
2006-01-01
Optimal experimental design is important for the efficient use of modern high-throughput technologies such as microarrays and proteomics. Multiple factors, including the reliability of the measurement system, which itself must be estimated from prior experimental work, could influence design decisions. In this study, we describe how the optimal number of replicate measures (technical replicates) for each biological sample (biological replicate) can be determined. Different allocations of biological and technical replicates were evaluated by minimizing the variance of the ratio of technical variance (measurement error) to the total variance (sum of sampling error and measurement error). We demonstrate that if the number of biological replicates and the number of technical replicates per biological sample are variable, while the total number of available measures is fixed, then the optimal allocation of replicates for measurement evaluation experiments requires two technical replicates for each biological replicate. Therefore, it is recommended to use two technical replicates for each biological replicate if the goal is to evaluate the reproducibility of measurements.
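The allocation question can be explored with a small Monte Carlo sketch: for a fixed total number of measures, simulate a nested design, estimate the ratio of technical to total variance by one-way ANOVA, and compare the variability of that estimate across allocations. The variance values below are arbitrary; consistent with the paper's recommendation, allocating more biological replicates with two technical replicates each should estimate the ratio more stably than the reverse:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2_b, sigma2_e = 4.0, 1.0   # biological and technical variance (arbitrary)

def estimate_ratio(b, t):
    """Estimate measurement/total variance from a b x t nested design via one-way ANOVA."""
    y = (rng.normal(0.0, np.sqrt(sigma2_b), (b, 1))      # biological effects
         + rng.normal(0.0, np.sqrt(sigma2_e), (b, t)))   # technical error
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (b * (t - 1))
    msb = t * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (b - 1)
    s2_e = msw
    s2_b = max((msb - msw) / t, 0.0)
    return s2_e / (s2_e + s2_b)

# same total of 24 measures split in different ways
results = {}
for b, t in [(12, 2), (8, 3), (6, 4), (4, 6)]:
    est = [estimate_ratio(b, t) for _ in range(2000)]
    results[(b, t)] = np.var(est)
    print(b, "biological x", t, "technical:", round(results[(b, t)], 4))
```

With few biological replicates the between-sample mean square has very few degrees of freedom, which is what makes the ratio estimate unstable in the bottom rows.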
[Understanding logistic regression].
El Sanharawi, M; Naudet, F
2013-10-01
Logistic regression is one of the most common multivariate analysis models utilized in epidemiology. It allows the measurement of the association between the occurrence of an event (qualitative dependent variable) and factors susceptible to influence it (explanatory variables). The choice of explanatory variables that should be included in the logistic regression model is based on prior knowledge of the disease pathophysiology and the statistical association between the variable and the event, as measured by the odds ratio. The main steps for the procedure, the conditions of application, and the essential tools for its interpretation are discussed concisely. We also discuss the importance of the choice of variables that must be included and retained in the regression model in order to avoid the omission of important confounding factors. Finally, by way of illustration, we provide an example from the literature, which should help the reader test his or her knowledge.
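The odds ratio described above is simply the exponentiated coefficient of a fitted logistic model. A minimal sketch with a hypothetical binary exposure (Newton-Raphson fit in plain numpy; the exposure and baseline values are invented for illustration):

```python
import numpy as np

def logistic_fit(x, y, n_iter=25):
    """Fit a logistic model by Newton-Raphson; returns (intercept, coefficient)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = (X * W[:, None]).T @ X            # observed information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# hypothetical binary risk factor with a true log-odds ratio of 0.7
rng = np.random.default_rng(2)
exposure = rng.binomial(1, 0.5, 2000)
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 0.7 * exposure)))
event = rng.binomial(1, p_true)

beta = logistic_fit(exposure, event)
odds_ratio = np.exp(beta[1])     # measures the exposure-event association
print(odds_ratio)                # should be near exp(0.7), about 2.0
```

An odds ratio above 1 indicates a positive association between the factor and the event, which is the quantity used for variable selection in the abstract.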
Constrained Sparse Galerkin Regression
Loiseau, Jean-Christophe
2016-01-01
In this work, we demonstrate the use of sparse regression techniques from machine learning to identify nonlinear low-order models of a fluid system purely from measurement data. In particular, we extend the sparse identification of nonlinear dynamics (SINDy) algorithm to enforce physical constraints in the regression, leading to energy conservation. The resulting models are closely related to Galerkin projection models, but the present method does not require the use of a full-order or high-fidelity Navier-Stokes solver to project onto basis modes. Instead, the most parsimonious nonlinear model is determined that is consistent with observed measurement data and satisfies necessary constraints. The constrained Galerkin regression algorithm is implemented on the fluid flow past a circular cylinder, demonstrating the ability to accurately construct models from data.
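The regression step that the paper extends is SINDy's sequentially thresholded least squares over a library of candidate terms. A minimal sketch on a noise-free toy linear system (plain STLSQ, without the energy-conservation constraints the paper adds):

```python
import numpy as np

def stlsq(Theta, dXdt, lam=0.1, n_iter=10):
    """Sequentially thresholded least squares, the core SINDy regression step."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < lam                 # prune negligible coefficients
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):           # refit each equation on survivors
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# noise-free toy system: dx/dt = -2x, dy/dt = 3x - y
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
dXdt = X @ np.array([[-2.0, 0.0], [3.0, -1.0]]).T

x, y = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])  # candidate library
Xi = stlsq(Theta, dXdt)
print(Xi.round(2))   # only the x and y rows survive: [-2, 3] and [0, -1]
```

The constrained variant replaces the unconstrained least-squares refits with constrained ones so that, e.g., energy-conserving quadratic structure is enforced.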
Practical Session: Logistic Regression
Clausel, M.; Grégoire, G.
2014-12-01
An exercise is proposed to illustrate the logistic regression. One investigates the different risk factors in the apparition of coronary heart disease. It has been proposed in Chapter 5 of the book of D.G. Kleinbaum and M. Klein, "Logistic Regression", Statistics for Biology and Health, Springer Science Business Media, LLC (2010) and also by D. Chessel and A.B. Dufour in Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr341.pdf). This example is based on data given in the file evans.txt coming from http://www.sph.emory.edu/dkleinb/logreg3.htm#data.
DEFF Research Database (Denmark)
Bache, Stefan Holst
A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear- and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice but whether it has theoretical justification is still an open question.
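For reference, the usual quantile regression estimator that the minimax estimator is compared against minimizes the check ('pinball') loss. A minimal sketch for a linear conditional median on simulated data (the optimizer and data are illustrative choices, not the paper's algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def quantile_regression(x, y, tau):
    """Linear quantile regression by minimizing the check (pinball) loss."""
    def loss(beta):
        r = y - (beta[0] + beta[1] * x)
        return np.where(r >= 0.0, tau * r, (tau - 1.0) * r).sum()
    return minimize(loss, x0=np.zeros(2), method="Nelder-Mead",
                    options={"maxiter": 2000}).x

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 10.0, 1000)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, 1000)   # symmetric noise about the line
b_med = quantile_regression(x, y, tau=0.5)        # conditional median fit
print(b_med)   # intercept and slope near 1.0 and 2.0
```

With tau = 0.5 the loss reduces to least absolute deviations, so the fit tracks the conditional median rather than the mean.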
Institute of Scientific and Technical Information of China (English)
赵文芝; 田铮; 夏志明
2009-01-01
A wavelet method for the detection and estimation of change points in nonparametric regression models under random design is proposed. The confidence bound of our test is derived using test statistics based on empirical wavelet coefficients, obtained by wavelet transformation of the data, which is observed with noise. Moreover, the consistency of the test is proved and the rate of convergence is given. The method turns out to be effective after being tested on simulated examples and applied to IBM stock market data.
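The idea of localizing a change point through empirical wavelet coefficients can be sketched with the simplest case, Haar-type details: a jump in the regression function produces a large local difference of window means. A minimal illustration (the design, window size, and noise level are arbitrary choices):

```python
import numpy as np

def haar_detail(y, half):
    """Haar-type empirical detail: right-window mean minus left-window mean."""
    n = len(y)
    stat = np.zeros(n)
    for i in range(half, n - half):
        stat[i] = y[i:i + half].mean() - y[i - half:i].mean()
    return stat

rng = np.random.default_rng(5)
n = 400
signal = np.where(np.arange(n) < 250, 0.0, 1.5)    # jump at index 250
y = signal + rng.normal(0.0, 0.5, n)
change_point = int(np.abs(haar_detail(y, half=20)).argmax())
print(change_point)   # should localize the jump near index 250
```

Thresholding the same statistic against its noise-only distribution yields a test for whether any change point is present at all.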
Anouzla, Abdelkader; Abrouki, Younes; Souabi, Salah; Safi, Mohammed; Rhbal, Hicham
2009-07-30
The investigation presented here focused on FeCl3-rich steel industrial wastewater (SIWW) as a novel coagulant for treating synthetic textile wastewater. Response surface methodology was used to study the cumulative effect of the various parameters, namely coagulant dosage, initial pH of the dye solution, and dye concentration, and to optimize the process conditions for the decolourization and COD reduction of disperse blue 79 solution. For obtaining the mutual interaction between the variables and optimizing these variables, a 2^3 full factorial central composite rotatable design using response surface methodology was employed. At optimum conditions, decolourization and COD reduction efficiencies of 99% and 94%, respectively, were accomplished for disperse blue 79 solution.
Vectors, a tool in statistical regression theory
Corsten, L.C.A.
1958-01-01
Using linear algebra this thesis developed linear regression analysis including analysis of variance, covariance analysis, special experimental designs, linear and fertility adjustments, analysis of experiments at different places and times. The determination of the orthogonal projection, yielding e
Reversible Switching of Cooperating Replicators
Urtel, Georg C.; Rind, Thomas; Braun, Dieter
2017-02-01
How can molecules with short lifetimes preserve their information over millions of years? For evolution to occur, information-carrying molecules have to replicate before they degrade. Our experiments reveal a robust, reversible cooperation mechanism in oligonucleotide replication. Two inherently slow replicating hairpin molecules can transfer their information to fast crossbreed replicators that outgrow the hairpins. The reverse is also possible. When one replication initiation site is missing, single hairpins reemerge from the crossbreed. With this mechanism, interacting replicators can switch between the hairpin and crossbreed mode, revealing a flexible adaptation to different boundary conditions.
Ritz, Christian; Parmigiani, Giovanni
2009-01-01
R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.
Multiple linear regression analysis
Edwards, T. R.
1980-01-01
Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
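The stepwise procedure described above can be sketched as a greedy forward search that repeatedly adds the predictor giving the largest drop in the residual sum of squares (a simplified stand-in for the program's significance-based stopping rule, and in Python rather than FORTRAN IV):

```python
import numpy as np

def rss(X_sub, y):
    """Residual sum of squares of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    r = y - A @ beta
    return r @ r

def forward_stepwise(X, y, max_vars):
    """Greedy forward selection: repeatedly add the predictor that most lowers RSS."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(max_vars):
        best = min(remaining, key=lambda j: rss(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 0] - 3.0 * X[:, 3] + rng.normal(0.0, 0.5, 300)
chosen = forward_stepwise(X, y, max_vars=2)
print(sorted(chosen))   # the two informative predictors, columns 0 and 3
```

A production implementation would stop when the next candidate fails a significance test rather than at a fixed count, as the abstract describes.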
Adaptive metric kernel regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
2000-01-01
regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
Software Regression Verification
2013-12-11
of recursive procedures. Acta Informatica, 45(6):403–439, 2008. [GS11] Benny Godlin and Ofer Strichman. Regression verification. Technical Report...functions. Therefore, we need to redefine m-term. – Mutual termination. If either function f or function f′ (or both) is non-deterministic, then their
Seber, George A F
2012-01-01
Concise, mathematically clear, and comprehensive treatment of the subject. Expanded coverage of diagnostics and methods of model fitting. Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models. More than 200 problems throughout the book plus outline solutions for the exercises. This revision has been extensively class-tested.
Ruiz, M Esperanza; Fagiolino, Pietro; de Buschiazzo, Perla M; Volonté, M Guillermina
2011-12-01
The aim of the present study was to evaluate the suitability of saliva as a biological fluid in relative bioavailability (RBA) studies, with the focus on the statistical design and data variability. A randomized, open-label, four-period, two-sequence (4 × 2) crossover RBA study in saliva of two phenytoin (PHT) 100 mg immediate-release capsules was performed. PHT is a narrow-therapeutic-index drug that has been widely used for epilepsy treatment for many years. Published information regarding its bioavailability is available, but it was assessed in plasma. This study was designed and performed using saliva as the biological fluid and the simplest conditions that produce results coherent with previously published plasma studies. Pharmacokinetic parameters (Cmax, Tmax, AUC(0-t), AUC(0-inf), Cmax/AUC(0-t), Ke, and t1/2) for each volunteer at each period were calculated. Four different BE calculations were performed: individual bioequivalence, by the method of moments, and three average bioequivalence calculations, with data averaged over the two administrations and with data of periods 1-2 and 3-4. The ANOVA calculation showed no significant subject-by-formulation interaction, period, or sequence effects. The intra-subject variabilities were at least 20-fold lower than the inter-subject ones for Cmax, AUC(0-t), and AUC(0-inf). In all four BE calculations, the 90% CIs for the T/R ratios of the studied pharmacokinetic parameters fell within the 80-125% range proposed by most regulatory agencies.
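The average-bioequivalence criterion mentioned above amounts to a 90% confidence interval for the test/reference geometric-mean ratio on the log scale, compared against the 80-125% acceptance range. A minimal sketch on hypothetical paired log-AUC data (the subject count and variability values are invented, not the study's):

```python
import numpy as np
from scipy import stats

def average_be_ci(log_t, log_r):
    """90% CI for the test/reference geometric-mean ratio, paired log-scale data."""
    d = log_t - log_r                       # within-subject log differences
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    tcrit = stats.t.ppf(0.95, n - 1)        # two one-sided tests at alpha = 0.05
    lo, hi = d.mean() - tcrit * se, d.mean() + tcrit * se
    return np.exp(lo) * 100.0, np.exp(hi) * 100.0

# hypothetical log-AUC values for 12 subjects under reference and test products
rng = np.random.default_rng(9)
log_r = rng.normal(3.0, 0.4, 12)
log_t = log_r + rng.normal(0.02, 0.08, 12)   # small formulation difference

lo, hi = average_be_ci(log_t, log_r)
print(round(lo, 1), round(hi, 1))   # bioequivalent if the interval lies within 80-125
```

Because the analysis is on log differences, the large inter-subject variability cancels and only the much smaller intra-subject variability drives the interval width, which is what the abstract's 20-fold comparison highlights.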
Low rank Multivariate regression
Giraud, Christophe
2010-01-01
We consider in this paper the multivariate regression problem, when the target regression matrix $A$ is close to a low rank matrix. Our primary interest in on the practical case where the variance of the noise is unknown. Our main contribution is to propose in this setting a criterion to select among a family of low rank estimators and prove a non-asymptotic oracle inequality for the resulting estimator. We also investigate the easier case where the variance of the noise is known and outline that the penalties appearing in our criterions are minimal (in some sense). These penalties involve the expected value of the Ky-Fan quasi-norm of some random matrices. These quantities can be evaluated easily in practice and upper-bounds can be derived from recent results in random matrix theory.
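A standard constructive route to such low rank estimators, for a fixed rank r, is reduced-rank regression: project the OLS fit onto its top r response directions. A minimal numpy sketch (the paper's contribution is the criterion for selecting among such estimators, which is not reproduced here):

```python
import numpy as np

def reduced_rank_regression(X, Y, r):
    """Rank-r multivariate regression: OLS fit projected onto its top-r response directions."""
    B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:r].T @ Vt[:r]          # projector in response space
    return B_ols @ P

rng = np.random.default_rng(7)
n, p, q, r = 200, 8, 6, 2
A = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))  # true rank-2 coefficient matrix
X = rng.normal(size=(n, p))
Y = X @ A + 0.1 * rng.normal(size=(n, q))

B = reduced_rank_regression(X, Y, r)
rank_B = np.linalg.matrix_rank(B)
rel_err = np.linalg.norm(B - A) / np.linalg.norm(A)
print(rank_B, round(rel_err, 3))   # rank 2, small relative error
```

In practice the rank r is unknown, which is exactly where a selection criterion of the kind the paper proposes is needed.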
Subset selection in regression
Miller, Alan
2002-01-01
Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition:A separate chapter on Bayesian methodsComplete revision of the chapter on estimationA major example from the field of near infrared spectroscopyMore emphasis on cross-validationGreater focus on bootstrappingStochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible Software available on the Internet for implementing many of the algorithms presentedMore examplesSubset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...
Initiation of adenovirus DNA replication.
Reiter, T; Fütterer, J; Weingärtner, B; Winnacker, E L
1980-01-01
In an attempt to study the mechanism of initiation of adenovirus DNA replication, an assay was developed to investigate the pattern of DNA synthesis in early replicative intermediates of adenovirus DNA. By using wild-type virus-infected cells, it was possible to place the origin of adenovirus type 2 DNA replication within the terminal 350 to 500 base pairs from either of the two molecular termini. In addition, a variety of parameters characteristic of adenovirus DNA replication were compared ...
Chromatin replication and epigenome maintenance
DEFF Research Database (Denmark)
Alabert, Constance; Groth, Anja
2012-01-01
Stability and function of eukaryotic genomes are closely linked to chromatin structure and organization. During cell division the entire genome must be accurately replicated and the chromatin landscape reproduced on new DNA. Chromatin and nuclear structure influence where and when DNA replication initiates, whereas the replication process itself disrupts chromatin and challenges established patterns of genome regulation. Specialized replication-coupled mechanisms assemble new DNA into chromatin, but epigenome maintenance is a continuous process taking place throughout the cell cycle. If DNA...
Classification and regression trees
Breiman, Leo; Olshen, Richard A; Stone, Charles J
1984-01-01
The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.
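The core splitting idea behind regression trees can be sketched in a few lines. The data, function names, and single-predictor restriction below are illustrative assumptions, not the book's full algorithm: CART adds recursive partitioning, pruning, and cross-validation on top of this basic step of choosing the split that minimizes the summed squared error of the two leaf means.

```python
# Minimal sketch of one CART regression split (illustrative only):
# pick the threshold on a single predictor that minimizes the total
# squared error around the two resulting leaf means.

def sse(ys):
    """Sum of squared deviations from the mean (0 for an empty leaf)."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    """Return (threshold, total_sse) for the best binary split on xs."""
    best = (None, float("inf"))
    for t in sorted(set(xs))[:-1]:          # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        total = sse(left) + sse(right)
        if total < best[1]:
            best = (t, total)
    return best

# Made-up data with an obvious break between x = 3 and x = 10:
xs = [1, 2, 3, 10, 11, 12]
ys = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
t, err = best_split(xs, ys)   # splits at t = 3
```

A full tree applies `best_split` recursively to each leaf until a stopping rule fires, then prunes back using cross-validated error.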
DEFF Research Database (Denmark)
Hansen, Henrik; Tarp, Finn
2001-01-01
There are, however, decreasing returns to aid, and the estimated effectiveness of aid is highly sensitive to the choice of estimator and the set of control variables. When investment and human capital are controlled for, no positive effect of aid is found. Yet, aid continues to impact on growth via investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regression is used for policy purposes.
Robust Nonstationary Regression
1993-01-01
This paper provides a robust statistical approach to nonstationary time series regression and inference. Fully modified extensions of traditional robust statistical procedures are developed which allow for endogeneities in the nonstationary regressors and serial dependence in the shocks that drive the regressors and the errors that appear in the equation being estimated. The suggested estimators involve semiparametric corrections to accommodate these possibilities and they belong to the same ...
McVey, Gail L; Davis, Ron; Tweed, Stacey; Shaw, Brian F
2004-07-01
The purpose of the current study was to evaluate the effectiveness of a life-skills promotion program designed to improve body image satisfaction and global self-esteem, while reducing negative eating attitudes and behaviors and feelings of perfectionism, all of which have been identified as predisposing factors to disordered eating. A total of 258 girls with a mean age of 11.8 years (intervention group = 182 and control group = 76) completed questionnaires before, and 1 week after, the six-session school-based program, and again 6 and 12 months later. The intervention was successful in improving body image satisfaction and global self-esteem and in reducing dieting attitude scores at post intervention only. The gains were not maintained at the 12-month follow-up. The need to assess the influence of health promotion programs on predisposing risk factors, compared with problem-based outcome measures, is discussed. Copyright 2004 by Wiley Periodicals, Inc.
TWO REGRESSION CREDIBILITY MODELS
Directory of Open Access Journals (Sweden)
Constanţa-Nicoleta BODEA
2010-03-01
In this communication we discuss two regression credibility models from non-life insurance mathematics that can be solved by means of matrix theory. In the first regression credibility model, starting from a well-known representation formula of the inverse for a special class of matrices, a risk premium is calculated for a contract with risk parameter θ. In the second regression credibility model, we obtain a credibility solution in the form of a linear combination of the individual estimate (based on the data of a particular state) and the collective estimate (based on aggregate USA data). To illustrate the solution with the properties mentioned above, we need the well-known representation theorem for a special class of matrices, the properties of the trace of a square matrix, the scalar product of two vectors, the norm with respect to a positive definite matrix given in advance, and the mathematical properties of conditional expectations and conditional covariances.
Replication Research and Special Education
Travers, Jason C.; Cook, Bryan G.; Therrien, William J.; Coyne, Michael D.
2016-01-01
Replicating previously reported empirical research is a necessary aspect of an evidence-based field of special education, but little formal investigation into the prevalence of replication research in the special education research literature has been conducted. Various factors may explain the lack of attention to replication of special education…
Replication data collection highlights value in diversity of replication attempts
DeSoto, K. Andrew; Schweinsberg, Martin
2017-01-01
Researchers agree that replicability and reproducibility are key aspects of science. A collection of Data Descriptors published in Scientific Data presents data obtained in the process of attempting to replicate previously published research. These new replication data describe published and unpublished projects. The different papers in this collection highlight the many ways that scientific replications can be conducted, and they reveal the benefits and challenges of crucial replication research. The organizers of this collection encourage scientists to reuse the data contained in the collection for their own work, and also believe that these replication examples can serve as educational resources for students, early-career researchers, and experienced scientists alike who are interested in learning more about the process of replication. PMID:28291224
The replication of expansive production knowledge
DEFF Research Database (Denmark)
Wæhrens, Brian Vejrum; Yang, Cheng; Madsen, Erik Skov
2012-01-01
Purpose – With the aim of supporting offshore production line replication, this paper explores the use of templates and principles to transfer the expansive productive knowledge embedded in a production line, and seeks to understand the contingencies that influence the mix of these approaches. Design/methodology/approach – Two case studies are introduced. Empirical data were collected over a period of two years based on interviews and participating observations. Findings – The findings show that (1) knowledge transfer within the replication of a production line is a stepwise expansive process... and principles to transfer productive knowledge in a specific context, which, in this paper, is a production line.
Introduction to the use of regression models in epidemiology.
Bender, Ralf
2009-01-01
Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models, the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer occurrence can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs, so they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed depending on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrative examples from cancer research.
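Of the four model families named above, linear regression for a continuous outcome is the only one with a simple closed-form fit, which makes it easy to sketch. The data and helper function below are made up for illustration; logistic, Cox, and Poisson regression relate covariates to an outcome in the same spirit but require iterative fitting.

```python
# Closed-form ordinary least squares for the simple linear model
# y = a + b*x (illustrative sketch; data are hypothetical).

def ols_simple(xs, ys):
    """Return (intercept, slope) minimizing sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx          # slope: covariance over variance
    return my - b * mx, b  # intercept from the means

# Hypothetical exposure/response pairs lying exactly on y = 1 + 2x:
xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
a, b = ols_simple(xs, ys)
```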
Anatomy of Mammalian Replication Domains
Takebayashi, Shin-ichiro; Ogata, Masato; Okumura, Katsuzumi
2017-01-01
Genetic information is faithfully copied by DNA replication through many rounds of cell division. In mammals, DNA is replicated in Mb-sized chromosomal units called “replication domains.” While genome-wide maps in multiple cell types and disease states have uncovered both dynamic and static properties of replication domains, we are still in the process of understanding the mechanisms that give rise to these properties. A better understanding of the molecular basis of replication domain regulation will bring new insights into chromosome structure and function. PMID:28350365
Content replication and placement in mobile networks
La, Chi-Anh; Casetti, Claudio; Chiasserini, Carla-Fabiana; Fiore, Marco
2011-01-01
Performance and reliability of content access in mobile networks is conditioned by the number and location of content replicas deployed at the network nodes. Location theory has been the traditional, centralized approach to study content replication: computing the number and placement of replicas in a static network can be cast as a facility location problem. The endeavor of this work is to design a practical solution to the above joint optimization problem that is suitable for mobile wireless environments. We thus seek a replication algorithm that is lightweight, distributed, and reactive to network dynamics. We devise a solution that lets nodes (i) share the burden of storing and providing content, so as to achieve load balancing, and (ii) autonomously decide whether to replicate or drop the information, so as to adapt the content availability to dynamic demands and time-varying network topologies. We evaluate our mechanism through simulation, by exploring a wide range of settings, including different node ...
Tax Evasion, Information Reporting, and the Regressive Bias Prediction
DEFF Research Database (Denmark)
Boserup, Simon Halphen; Pinje, Jori Veng
2013-01-01
Models of rational tax evasion and optimal enforcement invariably predict a regressive bias in the effective tax system, which reduces redistribution in the economy. Using Danish administrative data, we show that a calibrated structural model of this type replicates moments and correlations of tax evasion ... agency's use of information reports and revenue-maximizing disposition of audit resources.
Self-replication with magnetic dipolar colloids.
Dempster, Joshua M; Zhang, Rui; Olvera de la Cruz, Monica
2015-10-01
Colloidal self-replication represents an exciting research frontier in soft matter physics. Currently, all reported self-replication schemes involve coating colloidal particles with stimuli-responsive molecules to allow switchable interactions. In this paper, we introduce a scheme using ferromagnetic dipolar colloids and preprogrammed external magnetic fields to create an autonomous self-replication system. Interparticle dipole-dipole forces and periodically varying weak-strong magnetic fields cooperate to drive colloid monomers from the solute onto templates, bind them into replicas, and dissolve template complexes. We present three general design principles for autonomous linear replicators, derived from a focused study of a minimalist sphere-dimer magnetic system in which single binding sites allow formation of dimeric templates. We show via statistical models and computer simulations that our system exhibits nonlinear growth of templates and produces nearly exponential growth (low error rate) upon adding an optimized competing electrostatic potential. We devise experimental strategies for constructing the required magnetic colloids based on documented laboratory techniques. We also present qualitative ideas about building more complex self-replicating structures utilizing magnetic colloids.
Spacetime replication of continuous variable quantum information
Hayden, Patrick; Nezami, Sepehr; Salton, Grant; Sanders, Barry C.
2016-08-01
The theory of relativity requires that no information travel faster than light, whereas the unitarity of quantum mechanics ensures that quantum information cannot be cloned. These conditions provide the basic constraints that appear in information replication tasks, which formalize aspects of the behavior of information in relativistic quantum mechanics. In this article, we provide continuous variable (CV) strategies for spacetime quantum information replication that are directly amenable to optical or mechanical implementation. We use a new class of homologically constructed CV quantum error correcting codes to provide efficient solutions for the general case of information replication. As compared to schemes encoding qubits, our CV solution requires half as many shares per encoded system. We also provide an optimized five-mode strategy for replicating quantum information in a particular configuration of four spacetime regions designed not to be reducible to previously performed experiments. For this optimized strategy, we provide detailed encoding and decoding procedures using standard optical apparatus and calculate the recovery fidelity when finite squeezing is used. As such we provide a scheme for experimentally realizing quantum information replication using quantum optics.
Institute of Scientific and Technical Information of China (English)
裴晓换; 郭鹏江
2012-01-01
Aim: To obtain the property of consistency for the estimates of β and g(·) in a semiparametric regression model with missing data under fixed design. Methods: Using a lemma, some inequalities, and the given conditions. Results: The strong consistency of the estimates of β and g(·) is proved. Conclusion: In the semiparametric regression model with missing data, the least-squares estimate of β and the nonparametric kernel estimate of the unknown function g(·) are strongly consistent.
Modeling inhomogeneous DNA replication kinetics.
Directory of Open Access Journals (Sweden)
Michel G Gauthier
In eukaryotic organisms, DNA replication is initiated at a series of chromosomal locations called origins, where replication forks are assembled and proceed bidirectionally to replicate the genome. The distribution and firing rate of these origins, in conjunction with the velocity at which forks progress, dictate the program of the replication process. Previous attempts at modeling DNA replication in eukaryotes have focused on cases where the firing rate and the velocity of replication forks are homogeneous, or uniform, across the genome. However, it is now known that there are large variations in origin activity along the genome, and variations in fork velocities can also take place. Here, we generalize previous approaches to modeling replication to allow for arbitrary spatial variation of initiation rates and fork velocities. We derive rate equations for left- and right-moving forks and for the replication probability over time that can be solved numerically to obtain the mean-field replication program. This method accurately reproduces the results of DNA replication simulation. We also successfully adapted our approach to the inverse problem of fitting measurements of DNA replication performed on single DNA molecules. Since such measurements are performed on a specified portion of the genome, the examined DNA molecules may be replicated by forks that originate either within the studied molecule or outside of it. This problem was solved by using an effective flux of incoming replication forks at the model boundaries to represent origin activity outside the studied region. Using this approach, we show that reliable inferences can be made about the replication of specific portions of the genome even if the amount of data that can be obtained from single-molecule experiments is generally limited.
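A toy deterministic version of this fork picture makes the geometry concrete. The origin positions, firing times, and constant fork velocity below are assumed for illustration; the paper's mean-field rate equations generalize this to stochastic, spatially varying initiation rates and velocities.

```python
# Toy deterministic fork model: each genomic position is replicated by
# the first fork to reach it, where forks spread bidirectionally at
# velocity v from each origin after it fires.  (Illustrative sketch;
# positions, firing times, and v are assumed.)

def replication_time(x, origins, v=1.0):
    """Time at which position x is replicated (earliest arriving fork)."""
    return min(t0 + abs(x - x0) / v for x0, t0 in origins)

# Two assumed origins as (position, firing time) pairs:
origins = [(0.0, 0.0), (10.0, 2.0)]
t5 = replication_time(5.0, origins)   # reached first by the origin at 0
t9 = replication_time(9.0, origins)   # reached first by the later origin
```

Scanning `replication_time` over all positions traces out the replication profile; the paper replaces the fixed firing times with position-dependent initiation rates and solves for the mean-field analogue.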
Dunlop, Joanna Leigh; Vandal, Alain Charles; de Zoysa, Janak Rashme; Gabriel, Ruvin Sampath; Haloob, Imad Adbi; Hood, Christopher John; Matheson, Philip James; McGregor, David Owen Ross; Rabindranath, Kannaiyan Samuel; Semple, David John; Marshall, Mark Roger
2015-08-01
After the publication of our paper Dunlop et al. "Rationale and design of the Sodium Lowering In Dialysate (SoLID) trial: a randomised controlled trial of low versus standard dialysate sodium concentration during hemodialysis for regression of left ventricular mass", we became aware of further data correlating left ventricular (LV) mass index at baseline with the corresponding mass at 12 months, using cardiac magnetic resonance imaging (MRI) in patients on hemodialysis. The original published sample size for the SoLID trial of 118 was a conservative estimate, calculated using analysis of covariance and a within-person Pearson correlation for LV mass index of 0.75. New data communicated to the SoLID trial group have resulted in re-calculation of the sample size, based upon a within-person Pearson correlation of 0.8 but otherwise unchanged assumptions. As a result, the SoLID trial will now recruit 96 participants.
DEFF Research Database (Denmark)
Calaon, M.; Tosello, G.; Garnaes, J.
The present study investigates the capabilities of the two employed processes, injection molding (IM) and injection compression molding (ICM), in replicating different channel cross sections. Statistical design of experiments was adopted to optimize the replication quality of produced polymer parts wit...
Replicated Spectrographs in Astronomy
Hill, Gary J
2014-01-01
As telescope apertures increase, the challenge of scaling spectrographic astronomical instruments becomes acute. The next generation of extremely large telescopes (ELTs) strain the availability of glass blanks for optics and engineering to provide sufficient mechanical stability. While breaking the relationship between telescope diameter and instrument pupil size by adaptive optics is a clear path for small fields of view, survey instruments exploiting multiplex advantages will be pressed to find cost-effective solutions. In this review we argue that exploiting the full potential of ELTs will require the barrier of the cost and engineering difficulty of monolithic instruments to be broken by the use of large-scale replication of spectrographs. The first steps in this direction have already been taken with the soon to be commissioned MUSE and VIRUS instruments for the Very Large Telescope and the Hobby-Eberly Telescope, respectively. MUSE employs 24 spectrograph channels, while VIRUS has 150 channels. We compa...
Modified Regression Correlation Coefficient for Poisson Regression Model
Kaengthong, Nattacha; Domthong, Uthumporn
2017-09-01
This study focuses on indicators of predictive power for the widely used generalized linear model (GLM), which often has some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables, E(Y|X), for the Poisson regression model. The dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and root mean square error (RMSE).
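The traditional quantity being modified here, the Pearson correlation between Y and the conditional mean E(Y|X), can be sketched for a Poisson model with a log link. The coefficients and counts below are assumed for illustration and are not taken from the paper.

```python
# Traditional regression correlation coefficient for a Poisson model:
# the Pearson correlation between observed Y and E(Y|X) = exp(b0 + b1*x).
# Coefficients and data are assumed (illustrative sketch).

import math

def pearson(u, v):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

b0, b1 = 0.1, 0.5                         # assumed "fitted" coefficients
xs = [0, 1, 2, 3, 4]
ys = [1, 2, 3, 4, 8]                      # made-up observed counts
mu = [math.exp(b0 + b1 * x) for x in xs]  # Poisson conditional means E(Y|X)
r = pearson(ys, mu)                       # predictive power, high when mu tracks ys
```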
Directory of Open Access Journals (Sweden)
Karim Hardani
2012-05-01
A 10-month-old baby presented with developmental delay. He had flaccid paralysis on physical examination. An MRI of the spine revealed malformation of the ninth and tenth thoracic vertebral bodies with complete agenesis of the rest of the spine below that level. The thoracic spinal cord ended at the level of the fifth thoracic vertebra, with agenesis of the posterior arches of the eighth, ninth, and tenth thoracic vertebral bodies. The roots of the cauda equina appeared tightened down and backward and ended in a subdermal fibrous fatty tissue at the level of the ninth and tenth thoracic vertebral bodies (closed meningocele). These findings are consistent with caudal regression syndrome.
Energy Technology Data Exchange (ETDEWEB)
Chang, Pei-Ching [Institute of Microbiology and Immunology, National Yang-Ming University, Taipei 112, Taiwan (China); Kung, Hsing-Jien, E-mail: hkung@nhri.org.tw [Institute for Translational Medicine, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan (China); Department of Biochemistry and Molecular Medicine, University of California, Davis, CA 95616 (United States); UC Davis Cancer Center, University of California, Davis, CA 95616 (United States); Division of Molecular and Genomic Medicine, National Health Research Institutes, 35 Keyan Road, Zhunan, Miaoli County 35053, Taiwan (China)
2014-09-29
Small Ubiquitin-related MOdifier (SUMO) modification was initially identified as a reversible post-translational modification that affects the regulation of diverse cellular processes, including signal transduction, protein trafficking, chromosome segregation, and DNA repair. Increasing evidence suggests that the SUMO system also plays an important role in regulating chromatin organization and transcription. It is thus not surprising that double-stranded DNA viruses, such as Kaposi’s sarcoma-associated herpesvirus (KSHV), have exploited SUMO modification as a means of modulating viral chromatin remodeling during the latent-lytic switch. In addition, SUMO regulation allows the disassembly and assembly of promyelocytic leukemia protein-nuclear bodies (PML-NBs), an intrinsic antiviral host defense, during the viral replication cycle. Overcoming PML-NB-mediated cellular intrinsic immunity is essential to allow the initial transcription and replication of the herpesvirus genome after de novo infection. As a consequence, KSHV has evolved ways to produce multiple SUMO regulatory viral proteins that modulate the cellular SUMO environment in a dynamic way during its life cycle. Remarkably, KSHV encodes one gene product (K-bZIP) with SUMO-ligase activities and one gene product (K-Rta) that exhibits SUMO-targeting ubiquitin ligase (STUbL) activity. In addition, at least two viral products are sumoylated and have functional importance. Furthermore, sumoylation can be modulated by other viral gene products, such as the viral protein kinase Orf36. Interference with the sumoylation of specific viral targets represents a potential therapeutic strategy for treating KSHV, as well as other oncogenic herpesviruses. Here, we summarize the different ways KSHV exploits and manipulates the cellular SUMO system and explore the multi-faceted functions of SUMO during KSHV’s life cycle and pathogenesis.
Optical tweezers reveal how proteins alter replication
Chaurasiya, Kathy
... acids. We use single-molecule DNA stretching to show that the nucleocapsid protein (NC) of the yeast retrotransposon Ty3, which is likely to be an ancestor of HIV NC, has optimal nucleic acid chaperone activity with only a single zinc finger. We also show that the chaperone activity of the ORF1 protein is responsible for successful replication of the mouse LINE-1 retrotransposon. LINE-1 also makes up 17% of the human genome, where it generates insertion mutations and alters gene expression. Retrotransposons such as LINE-1 and Ty3 are likely to be ancestors of retroviruses such as HIV. Human APOBEC3G (A3G) inhibits HIV-1 replication via cytidine deamination of the viral ssDNA genome, as well as via a distinct deamination-independent mechanism. Efficient deamination requires rapid on-off binding kinetics, but a slow dissociation rate is required for the proposed deaminase-independent mechanism. We resolve this apparent contradiction with a new quantitative single-molecule method, which shows that A3G initially binds ssDNA with fast on-off rates and subsequently converts to a slow binding mode. This suggests that oligomerization transforms A3G from a fast enzyme to a slow binding protein, which is the biophysical mechanism that allows A3G to inhibit HIV replication. A complete understanding of the mechanism of A3G-mediated antiviral activity is required to design drugs that disrupt the viral response to A3G, enhance A3G packaging inside the viral core, and other potential strategies for long-term treatment of HIV infection. We use single-molecule biophysics to explore the function of proteins involved in bacterial DNA replication, endogenous retrotransposition of retroelements in eukaryotic hosts such as yeast and mice, and HIV replication in human cells. Our quantitative results provide insight into protein function in a range of complex biological systems and have wide-ranging implications for human health.
Efficient usage of Adabas replication
Storr, Dieter W
2011-01-01
In today's IT organizations, replication is becoming more and more essential, which makes Software AG's Event Replicator for Adabas an important part of your data processing. Setting the right parameters and establishing the best network communication, as well as selecting efficient target components, is essential for successfully implementing replication. This book provides comprehensive information and unique best-practice experience in the field of Event Replicator for Adabas. It also includes sample codes and configurations, making your start very easy. It describes all components ne...
Solving the Telomere Replication Problem
Maestroni, Laetitia; Matmati, Samah; Coulon, Stéphane
2017-01-01
Telomeres are complex nucleoprotein structures that protect the extremities of linear chromosomes. Telomere replication is a major challenge because many obstacles to the progression of the replication fork are concentrated at the ends of the chromosomes. This is known as the telomere replication problem. In this article, different and new aspects of telomere replication, that can threaten the integrity of telomeres, will be reviewed. In particular, we will focus on the functions of shelterin and the replisome for the preservation of telomere integrity. PMID:28146113
Impact of replicate types on proteomic expression analysis.
Karp, Natasha A; Spencer, Matthew; Lindsay, Helen; O'Dell, Kevin; Lilley, Kathryn S
2005-01-01
In expression proteomics, the samples utilized within an experimental design may include technical, biological, or pooled replicates. This manuscript discusses various experimental designs and the conclusions that can be drawn from them. Specifically, it addresses the impact of mixing replicate types on the statistical analysis which can be performed. This study focuses on difference gel electrophoresis (DiGE), but the issues are equally applicable to all quantitative methodologies assessing relative changes in protein expression.
Recursive Algorithm For Linear Regression
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
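The brief's order-recursive equations are not reproduced here; as a related illustration, the sample-recursive form of least squares (classic recursive least squares) shows how regression coefficients can be updated without redoing the whole fit. The model, data, and prior covariance below are assumed, and this is not the algorithm from the brief itself.

```python
# Classic recursive least squares (RLS) for y = a + b*x, parameter
# vector w = [a, b] and regressor phi = [1, x].  Illustrative sketch:
# each new observation updates w and the 2x2 inverse-information matrix P
# without refitting from scratch.

def rls_update(w, P, x, y):
    """One RLS step: gain, innovation, coefficient and covariance update."""
    phi = [1.0, x]
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = 1.0 + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    k = [Pphi[0] / denom, Pphi[1] / denom]        # Kalman-style gain
    err = y - (w[0] * phi[0] + w[1] * phi[1])     # prediction error
    w = [w[0] + k[0] * err, w[1] + k[1] * err]
    P = [[P[i][j] - k[i] * Pphi[j] for j in range(2)] for i in range(2)]
    return w, P

w = [0.0, 0.0]
P = [[1e6, 0.0], [0.0, 1e6]]   # large prior covariance (weak prior)
for x, y in [(0, 1.0), (1, 3.0), (2, 5.0), (3, 7.0)]:
    w, P = rls_update(w, P, x, y)
# w converges to the batch OLS solution a = 1, b = 2 on this exact line
```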
Charter School Replication. Policy Guide
Rhim, Lauren Morando
2009-01-01
"Replication" is the practice of a single charter school board or management organization opening several more schools that are each based on the same school model. The most rapid strategy to increase the number of new high-quality charter schools available to children is to encourage the replication of existing quality schools. This policy guide…
Replication and Inhibitors of Enteroviruses and Parechoviruses
Directory of Open Access Journals (Sweden)
Lonneke van der Linden
2015-08-01
The Enterovirus (EV) and Parechovirus genera of the picornavirus family include many important human pathogens, including poliovirus, rhinovirus, EV-A71, EV-D68, and the human parechoviruses (HPeV). They cause a wide variety of diseases, ranging from a simple common cold to life-threatening diseases such as encephalitis and myocarditis. At the moment, no antiviral therapy is available against these viruses, and it is not feasible to develop vaccines against all EVs and HPeVs due to the great number of serotypes. Therefore, a lot of effort is being invested in the development of antiviral drugs. Both viral proteins and host proteins essential for virus replication can be used as targets for virus inhibitors. As such, the design of antiviral strategies goes hand in hand with a good understanding of the complex process of virus replication. In this review, we give an overview of the current state of knowledge of EV and HPeV replication and how it can be inhibited by small-molecule inhibitors.
DATABASE REPLICATION IN HETEROGENOUS PLATFORM
Directory of Open Access Journals (Sweden)
Hendro Nindito
2014-01-01
The application of diverse database technologies in enterprises today is increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that can replicate databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the source data are stored in an MSSQL database server running on Windows, and the data are replicated to MySQL running on Linux as the destination. The method applied in this research is prototyping, in which the processes of development and testing can be done interactively and repeatedly. The key result of this research is that the replication technology applied, Oracle GoldenGate, successfully manages to replicate data in real time across heterogeneous platforms.
Relative risk regression analysis of epidemiologic data.
Prentice, R L
1985-11-01
Relative risk regression methods are described. These methods provide a unified approach to a range of data analysis problems in environmental risk assessment and in the study of disease risk factors more generally. Relative risk regression methods are most readily viewed as an outgrowth of Cox's regression and life model. They can also be viewed as a regression generalization of more classical epidemiologic procedures, such as that due to Mantel and Haenszel. In the context of an epidemiologic cohort study, relative risk regression methods extend conventional survival data methods and binary response (e.g., logistic) regression models by taking explicit account of the time to disease occurrence while allowing arbitrary baseline disease rates, general censorship, and time-varying risk factors. This latter feature is particularly relevant to many environmental risk assessment problems wherein one wishes to relate disease rates at a particular point in time to aspects of a preceding risk factor history. Relative risk regression methods also adapt readily to time-matched case-control studies and to certain less standard designs. The uses of relative risk regression methods are illustrated and the state of development of these procedures is discussed. It is argued that asymptotic partial likelihood estimation techniques are now well developed in the important special case in which the disease rates of interest have interpretations as counting process intensity functions. Estimation of relative risk processes corresponding to disease rates falling outside this class has, however, received limited attention. The general area of relative risk regression model criticism has, as yet, not been thoroughly studied, though a number of statistical groups are studying such features as tests of fit, residuals, diagnostics, and graphical procedures. Most such studies have been restricted to exponential form relative risks, as have simulation studies of relative risk estimation.
Regression in autistic spectrum disorders.
Stefanatos, Gerry A
2008-12-01
A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsivity, but often entails both. This paper critically reviews the phenomenon of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.
Combining Alphas via Bounded Regression
Directory of Open Access Journals (Sweden)
Zura Kakushadze
2015-11-01
Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or a skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
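As a rough illustration of imposing bounds inside the regression itself, the sketch below (synthetic data, hypothetical sizes; not the authors' published source code) solves a box-bounded least-squares problem with projected gradient descent:

```python
import numpy as np

def bounded_regression(X, y, lo, hi, steps=5000):
    """Box-bounded least squares via projected gradient descent
    (a simple stand-in for a proper QP solver)."""
    lr = 1.0 / np.linalg.norm(X.T @ X, 2)       # step size from the Lipschitz constant
    w = np.full(X.shape[1], (lo + hi) / 2.0)
    for _ in range(steps):
        w = np.clip(w - lr * (X.T @ (X @ w - y)), lo, hi)
    return w

rng = np.random.default_rng(42)
R = rng.normal(size=(60, 5))                    # short history of 5 hypothetical alpha streams
true_w = np.array([0.4, 0.3, 0.2, 0.1, 0.0])
y = R @ true_w + 0.1 * rng.normal(size=60)

# Bounds keep each alpha weight in [0, 0.5] for diversification.
w = bounded_regression(R, y, 0.0, 0.5)
```

Because the problem is convex, the projected solution coincides with the constrained optimum; the bounds guarantee no weight is negative or over-concentrated.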
O'Neill, Paul
2010-01-01
One of the most important and high-profile issues in public education reform today is the replication of successful public charter school programs. With more than 5,000 failing public schools in the United States, there is a tremendous need for strong alternatives for parents and students. Replicating successful charter school models is an…
Institute of Scientific and Technical Information of China (English)
李正农; 梁笑寒; 吴卫祥; 王志峰
2012-01-01
The purpose of this study is to save materials by optimizing the related rods. In this study, the plane and space truss rod sizes were taken as the design parameters, with displacement and stress as the constraints. The Uniform Design Method was adopted to approximately optimize a heliostat under wind load, and approximation functions between the objectives (displacement, stress, and consumption of steel) and the parameters were obtained. Finally, this study gives the recommended rod specifications under different conditions. Compared with the existing structure, the optimized structure saves 13.8% of steel. The whole process is simple, and the optimization results are reliable.
Linear regression in astronomy. I
Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh
1990-01-01
Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
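The OLS bisector favored above for symmetric problems is computed directly from the two OLS slopes. A minimal sketch on synthetic data, using the bisector slope formula reported by Isobe et al. (1990):

```python
import numpy as np

def ols_bisector_slope(x, y):
    """Bisector of OLS(Y|X) and OLS(X|Y); slope formula as in Isobe et al. (1990)."""
    cxy = np.cov(x, y, bias=True)[0, 1]
    b1 = cxy / np.var(x)            # OLS(Y|X) slope
    b2 = np.var(y) / cxy            # OLS(X|Y) slope
    b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
    return b1, b2, b3

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(scale=0.5, size=500)
b1, b2, b3 = ols_bisector_slope(x, y)
```

With scatter only in y, OLS(Y|X) is nearly unbiased while OLS(X|Y) is steeper, and the bisector falls between the two lines, treating the variables symmetrically.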
Exponential self-replication enabled through a fibre elongation/breakage mechanism
Colomb-Delsuc, Mathieu; Mattia, Elio; Sadownik, Jan W; Otto, Sijbren
2015-01-01
Self-replicating molecules are likely to have played a central role in the origin of life. Most scenarios of Darwinian evolution at the molecular level require self-replicators capable of exponential growth, yet only very few exponential replicators have been reported to date and general design crit
Epistasis analysis for quantitative traits by functional regression model.
Zhang, Futao; Boerwinkle, Eric; Xiong, Momiao
2014-06-01
The critical barrier in interaction analysis for rare variants is that most traditional statistical methods for testing interactions were originally designed for testing the interaction between common variants and are difficult to apply to rare variants because of their prohibitive computational time and limited statistical power. The great challenges for successful detection of interactions with next-generation sequencing (NGS) data are (1) lack of methods for interaction analysis with rare variants, (2) severe multiple testing, and (3) time-consuming computations. To meet these challenges, we shift the paradigm of interaction analysis between two loci to interaction analysis between two sets of loci or genomic regions and collectively test interactions between all possible pairs of SNPs within two genomic regions. In other words, we take a genome region as a basic unit of interaction analysis and use high-dimensional data reduction and functional data analysis techniques to develop a novel functional regression model to collectively test interactions between all possible pairs of single nucleotide polymorphisms (SNPs) within two genome regions. By intensive simulations, we demonstrate that the functional regression models for interaction analysis of the quantitative trait have the correct type I error rates and a much better ability to detect interactions than the current pairwise interaction analysis. The proposed method was applied to exome sequence data from the NHLBI's Exome Sequencing Project (ESP) and CHARGE-S study. We discovered 27 pairs of genes showing significant interactions after applying the Bonferroni correction (P-values < 4.58 × 10⁻¹⁰) in the ESP, and 11 were replicated in the CHARGE-S study.
International Expansion through Flexible Replication
DEFF Research Database (Denmark)
Jonsson, Anna; Foss, Nicolai Juul
2011-01-01
to local environments and under the impact of new learning. To illuminate these issues, we draw on a longitudinal in-depth study of Swedish home furnishing giant IKEA, involving more than 70 interviews. We find that IKEA has developed organizational mechanisms that support an ongoing learning process aimed......, etc.) are replicated in a uniform manner across stores, and change only very slowly (if at all) in response to learning (“flexible replication”). We conclude by discussing the factors that influence the approach to replication adopted by an international replicator....
The Psychology of Replication and Replication in Psychology.
Francis, Gregory
2012-11-01
Like other scientists, psychologists believe experimental replication to be the final arbiter for determining the validity of an empirical finding. Reports in psychology journals often attempt to prove the validity of a hypothesis or theory with multiple experiments that replicate a finding. Unfortunately, these efforts are sometimes misguided because in a field like experimental psychology, ever more successful replication does not necessarily ensure the validity of an empirical finding. When psychological experiments are analyzed with statistics, the rules of probability dictate that random samples should sometimes be selected that do not reject the null hypothesis, even if an effect is real. As a result, it is possible for a set of experiments to have too many successful replications. When there are too many successful replications for a given set of experiments, a skeptical scientist should be suspicious that null or negative findings have been suppressed, the experiments were run improperly, or the experiments were analyzed improperly. This article describes the implications of this observation and demonstrates how to test for too much successful replication by using a set of experiments from a recent research paper.
Regulation of Replication Recovery and Genome Integrity
DEFF Research Database (Denmark)
Colding, Camilla Skettrup
Preserving genome integrity is essential for cell survival. To this end, mechanisms that supervise DNA replication and respond to replication perturbations have evolved. One such mechanism is the replication checkpoint, which responds to DNA replication stress and acts to ensure replication pausing...
Novel algorithm for constructing support vector machine regression ensemble
Institute of Scientific and Technical Information of China (English)
Li Bo; Li Xinjun; Zhao Zhiyan
2006-01-01
A novel algorithm for constructing a support vector machine regression ensemble is proposed. For regression prediction, a support vector machine regression (SVMR) ensemble is built by resampling from given training data sets repeatedly and aggregating several independent SVMRs, each of which is trained on a replicated training set. After training, the independently trained SVMRs need to be aggregated in an appropriate combination manner. Generally, linear weighting is used, like the expert weighting scores in Boosting Regression, but it has no optimization capacity. Three combination techniques are proposed: simple arithmetic mean, linear least-square-error weighting, and nonlinear hierarchical combining, which uses another upper-layer SVMR to combine several lower-layer SVMRs. Finally, simulation experiments demonstrate the accuracy and validity of the presented algorithm.
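The combination step can be sketched independently of the SVMR base learners. Assuming NumPy and using synthetic member predictions in place of trained SVMRs (an illustrative stand-in, not the paper's algorithm), simple arithmetic mean and linear least-square-error weighting compare as follows:

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(size=300)
# Synthetic stand-ins for independently trained base regressors
# (e.g., SVMRs fit on replicated training sets): truth plus member-specific noise.
preds = np.stack([y + rng.normal(scale=s, size=300) for s in (0.3, 0.5, 0.8)])

mean_combo = preds.mean(axis=0)                    # simple arithmetic mean
w, *_ = np.linalg.lstsq(preds.T, y, rcond=None)    # linear least-square-error weights
ls_combo = preds.T @ w

def mse(p):
    return np.mean((p - y) ** 2)
```

Since the equal-weight mean and each single member are themselves linear combinations of the members, the least-squares weighting can never do worse than either on the training targets — the optimization capacity the abstract contrasts with plain linear weighting.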
Experimental Replication of an Aeroengine Combustion Instability
Cohen, J. M.; Hibshman, J. R.; Proscia, W.; Rosfjord, T. J.; Wake, B. E.; McVey, J. B.; Lovett, J.; Ondas, M.; DeLaat, J.; Breisacher, K.
2000-01-01
Combustion instabilities in gas turbine engines are most frequently encountered during the late phases of engine development, at which point they are difficult and expensive to fix. The ability to replicate an engine-traceable combustion instability in a laboratory-scale experiment offers the opportunity to economically diagnose the problem (to determine the root cause), and to investigate solutions to the problem, such as active control. The development and validation of active combustion instability control requires that the causal dynamic processes be reproduced in experimental test facilities which can be used as a test bed for control system evaluation. This paper discusses the process through which a laboratory-scale experiment was designed to replicate an instability observed in a developmental engine. The scaling process used physically-based analyses to preserve the relevant geometric, acoustic and thermo-fluid features. The process increases the probability that results achieved in the single-nozzle experiment will be scalable to the engine.
Time-adaptive quantile regression
DEFF Research Database (Denmark)
Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik
2008-01-01
An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method...... and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power...... production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered....
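The objective underlying quantile regression algorithms such as this one is the pinball (check) loss, whose minimizer is the requested quantile. A small self-contained illustration of that fact (a sketch of the loss function only, not the paper's simplex-based algorithm):

```python
import numpy as np

def pinball_loss(q, y, tau):
    """Quantile (check) loss; its minimizer over q is the tau-quantile of y."""
    e = y - q
    return np.mean(np.where(e >= 0, tau * e, (tau - 1.0) * e))

rng = np.random.default_rng(3)
y = rng.exponential(size=2000)
tau = 0.9
grid = np.linspace(0.0, 6.0, 2001)
q_hat = grid[np.argmin([pinball_loss(q, y, tau) for q in grid])]
```

Because the loss is piecewise linear and convex, the full regression problem (a covariate-dependent quantile) can be posed as a linear program, which is why simplex-based and time-adaptive simplex updating schemes apply.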
Linear regression in astronomy. II
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
Tax Evasion, Information Reporting, and the Regressive Bias Prediction
DEFF Research Database (Denmark)
Boserup, Simon Halphen; Pinje, Jori Veng
2013-01-01
evasion and audit probabilities once we account for information reporting in the tax compliance game. When conditioning on information reporting, we find that both reduced-form evidence and simulations exhibit the predicted regressive bias. However, in the overall economy, this bias is negated by the tax......Models of rational tax evasion and optimal enforcement invariably predict a regressive bias in the effective tax system, which reduces redistribution in the economy. Using Danish administrative data, we show that a calibrated structural model of this type replicates moments and correlations of tax...... agency's use of information reports and revenue-maximizing disposition of audit resources....
Polynomial Regression on Riemannian Manifolds
Hinkle, Jacob; Fletcher, P Thomas; Joshi, Sarang
2012-01-01
In this paper we develop the theory of parametric polynomial regression in Riemannian manifolds and Lie groups. We show application of Riemannian polynomial regression to shape analysis in Kendall shape space. Results are presented, showing the power of polynomial regression on the classic rat skull growth data of Bookstein as well as the analysis of the shape changes associated with aging of the corpus callosum from the OASIS Alzheimer's study.
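In the Euclidean special case, parametric polynomial regression reduces to ordinary polynomial least squares. A sketch on synthetic growth-like data (the Riemannian generalization in the paper requires manifold machinery well beyond this illustration; the data here are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 100)                  # e.g., normalized age
coord = 1.0 + 2.0 * t - 1.5 * t**2 + rng.normal(scale=0.05, size=t.size)

coeffs = np.polyfit(t, coord, deg=2)            # returns highest-degree coefficient first
fitted = np.polyval(coeffs, t)
```

On a manifold, the analogue replaces the fitted curve by a Riemannian polynomial (covariant derivatives of velocity set to zero at successive orders), but the least-squares criterion plays the same role.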
Biomarkers of replicative senescence revisited
DEFF Research Database (Denmark)
Nehlin, Jan
2016-01-01
Biomarkers of replicative senescence can be defined as those ultrastructural and physiological variations as well as molecules whose changes in expression, activity or function correlate with aging, as a result of the gradual exhaustion of replicative potential and a state of permanent cell cycle...... with their chronological age and present health status, help define their current rate of aging and contribute to establish personalized therapy plans to reduce, counteract or even avoid the appearance of aging biomarkers....
Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects by comparing results to using an interactive term in linear regression. The research questions which each model answers, their…
Quantile regression theory and applications
Davino, Cristina; Vistocco, Domenico
2013-01-01
A guide to the implementation and interpretation of quantile regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on the validity of the model and diagnostic tools. Each methodological aspect is explored and
Business applications of multiple regression
Richardson, Ronny
2015-01-01
This second edition of Business Applications of Multiple Regression describes the use of the statistical procedure called multiple regression in business situations, including forecasting and understanding the relationships between variables. The book assumes a basic understanding of statistics but reviews correlation analysis and simple regression to prepare the reader to understand and use multiple regression. The techniques described in the book are illustrated using both Microsoft Excel and a professional statistical program. Along the way, several real-world data sets are analyzed in detail.
Nucleotide Metabolism and DNA Replication.
Warner, Digby F; Evans, Joanna C; Mizrahi, Valerie
2014-10-01
The development and application of a highly versatile suite of tools for mycobacterial genetics, coupled with widespread use of "omics" approaches to elucidate the structure, function, and regulation of mycobacterial proteins, has led to spectacular advances in our understanding of the metabolism and physiology of mycobacteria. In this article, we provide an update on nucleotide metabolism and DNA replication in mycobacteria, highlighting key findings from the past 10 to 15 years. In the first section, we focus on nucleotide metabolism, ranging from the biosynthesis, salvage, and interconversion of purine and pyrimidine ribonucleotides to the formation of deoxyribonucleotides. The second part of the article is devoted to DNA replication, with a focus on replication initiation and elongation, as well as DNA unwinding. We provide an overview of replication fidelity and mutation rates in mycobacteria and summarize evidence suggesting that DNA replication occurs during states of low metabolic activity, and conclude by suggesting directions for future research to address key outstanding questions. Although this article focuses primarily on observations from Mycobacterium tuberculosis, it is interspersed, where appropriate, with insights from, and comparisons with, other mycobacterial species as well as better characterized bacterial models such as Escherichia coli. Finally, a common theme underlying almost all studies of mycobacterial metabolism is the potential to identify and validate functions or pathways that can be exploited for tuberculosis drug discovery. In this context, we have specifically highlighted those processes in mycobacterial DNA replication that might satisfy this critical requirement.
Plasmid Rolling-Circle Replication.
Ruiz-Masó, J A; Machón, C; Bordanaba-Ruiseco, L; Espinosa, M; Coll, M; Del Solar, G
2015-02-01
Plasmids are DNA entities that undergo controlled replication independent of the chromosomal DNA, a crucial step that guarantees the prevalence of the plasmid in its host. DNA replication has to cope with the incapacity of the DNA polymerases to start de novo DNA synthesis, and different replication mechanisms offer diverse solutions to this problem. Rolling-circle replication (RCR) is a mechanism adopted by certain plasmids, among other genetic elements, that represents one of the simplest initiation strategies, that is, the nicking by a replication initiator protein on one parental strand to generate the primer for leading-strand initiation and a single priming site for lagging-strand synthesis. All RCR plasmid genomes consist of a number of basic elements: leading strand initiation and control, lagging strand origin, phenotypic determinants, and mobilization, generally in that order of frequency. RCR has been mainly characterized in Gram-positive bacterial plasmids, although it has also been described in Gram-negative bacterial or archaeal plasmids. Here we aim to provide an overview of the RCR plasmids' lifestyle, with emphasis on their characteristic traits, promiscuity, stability, utility as vectors, etc. While RCR is one of the best-characterized plasmid replication mechanisms, there are still many questions left unanswered, which will be pointed out along the way in this review.
Institute of Scientific and Technical Information of China (English)
战峰; 李超; 韩泳平
2011-01-01
OBJECTIVE To study the optimal extraction conditions for crocin from Crocus sativus L. METHODS Taking the total extraction ratio of crocin from Crocus sativus L. as an index, a quadratic orthogonal regression design, based on single-factor experiments covering ethanol concentration, extraction time, and extraction temperature, was employed to build the equation and optimize the extraction technique. RESULTS The optimized conditions were as follows: ethanol concentration 52%, extraction for 30 minutes at a temperature of 30 °C, giving an extraction ratio of 0.6435 mg·g⁻¹. CONCLUSION Under the optimal extraction conditions, the total extraction ratio of crocin from Crocus sativus L. is much higher.
Testing discontinuities in nonparametric regression
Dai, Wenlin
2017-01-19
In nonparametric regression, one often needs to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of Müller and Stadtmüller [13] (H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337, doi: 10.1214/aos/1018031100)
Logistic Regression: Concept and Application
Cokluk, Omay
2010-01-01
The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
Fungible weights in logistic regression.
Jones, Jeff A; Waller, Niels G
2016-06-01
In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights.
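Fungible-weight computation itself follows Waller's equations, but its starting point is an ordinary maximum-likelihood logistic fit. A hedged sketch of that fit via Newton-Raphson on synthetic data (illustrative only; the article's own code is in R, and this does not compute fungible weights):

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression weights via Newton-Raphson."""
    X1 = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ b))
        H = X1.T @ (X1 * (p * (1.0 - p))[:, None])  # observed information matrix
        b = b + np.linalg.solve(H, X1.T @ (y - p))
    return b

rng = np.random.default_rng(11)
X = rng.normal(size=(1000, 2))
true_b = np.array([0.5, 1.0, -2.0])                 # intercept and two slopes
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(true_b[0] + X @ true_b[1:]))))
b_hat = fit_logistic(X, y)
```

Fungible weights are then alternate weight vectors that degrade the fitted likelihood only slightly; examining how far they stray from `b_hat` is what reveals parameter sensitivity.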
Regression Testing Cost Reduction Suite
Directory of Open Access Journals (Sweden)
Mohamed Alaa El-Din
2014-08-01
Full Text Available The estimated cost of software maintenance exceeds 70 percent of total software costs [1], and a large portion of this maintenance expense is devoted to regression testing. Regression testing is an expensive and frequently executed maintenance activity used to revalidate modified software. Any reduction in the cost of regression testing would help to reduce the software maintenance cost. Test suites, once developed, are reused and updated frequently as the software evolves. As a result, some test cases in the test suite may become redundant when the software is modified over time, since the requirements covered by them are also covered by other test cases. Due to the resource and time constraints on re-executing large test suites, it is important to develop techniques that minimize available test suites by removing redundant test cases. In general, the test suite minimization problem is NP-complete. This paper proposes an effective approach for reducing the cost of the regression testing process and applies it to a real-time case study. It was found that the reduction in the cost of regression testing per cycle is greatest for programs containing a high number of selected statements, which in turn maximizes the benefit of using the approach in regression testing of complex software systems. The reduction in regression test suite size reduces the effort and time required by testing teams to execute the regression test suite. Since regression testing is done more frequently in the software maintenance phase, the overall software maintenance cost can be reduced considerably by applying the proposed approach.
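Although the paper's specific approach differs, the flavor of test suite minimization can be sketched with the classic greedy set-cover heuristic (test cases and requirements below are hypothetical):

```python
def minimize_suite(coverage):
    """Greedy set-cover heuristic: repeatedly keep the test case covering
    the most still-uncovered requirements. Exact minimization is NP-complete;
    greedy gives a logarithmic-factor approximation."""
    uncovered = set().union(*coverage.values())
    kept = []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        kept.append(best)
        uncovered -= coverage[best]
    return kept

# Hypothetical suite: the requirements each test case revalidates
suite = {
    "t1": {"r1", "r2", "r3"},
    "t2": {"r2", "r3"},      # redundant given t1
    "t3": {"r4"},
    "t4": {"r1", "r4"},
}
reduced = minimize_suite(suite)
```

Here the four-test suite shrinks to two test cases while every requirement remains covered — the redundancy removal the abstract describes.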
The molecular biology of Bluetongue virus replication.
Patel, Avnish; Roy, Polly
2014-03-01
The members of Orbivirus genus within the Reoviridae family are arthropod-borne viruses which are responsible for high morbidity and mortality in ruminants. Bluetongue virus (BTV) which causes disease in livestock (sheep, goat, cattle) has been in the forefront of molecular studies for the last three decades and now represents the best understood orbivirus at a molecular and structural level. The complex nature of the virion structure has been well characterised at high resolution along with the definition of the virus encoded enzymes required for RNA replication; the ordered assembly of the capsid shell as well as the protein and genome sequestration required for it; and the role of host proteins in virus entry and virus release. More recent developments of Reverse Genetics and Cell-Free Assembly systems have allowed integration of the accumulated structural and molecular knowledge to be tested at meticulous level, yielding higher insight into basic molecular virology, from which the rational design of safe efficacious vaccines has been possible. This article is centred on the molecular dissection of BTV with a view to understanding the role of each protein in the virus replication cycle. These areas are important in themselves for BTV replication but they also indicate the pathways that related viruses, which includes viruses that are pathogenic to man and animals, might also use providing an informed starting point for intervention or prevention.
Can Coloring Mandalas Reduce Anxiety? A Replication Study
van der Vennet, Renee; Serice, Susan
2012-01-01
This experimental study replicated Curry and Kasser's (2005) research that tested whether coloring a mandala would reduce anxiety. After inducing an anxious mood via a writing activity, participants were randomly assigned to three groups that colored either on a mandala design, on a plaid design, or on a blank paper. Anxiety level was measured…
Rank regression: an alternative regression approach for data with outliers.
Chen, Tian; Tang, Wan; Lu, Ying; Tu, Xin
2014-10-01
Linear regression models are widely used in mental health and related health services research. However, the classic linear regression analysis assumes that the data are normally distributed, an assumption that is not met by the data obtained in many studies. One method of dealing with this problem is to use semi-parametric models, which do not require that the data be normally distributed. But semi-parametric models are quite sensitive to outlying observations, so the generated estimates are unreliable when study data includes outliers. In this situation, some researchers trim the extreme values prior to conducting the analysis, but the ad-hoc rules used for data trimming are based on subjective criteria so different methods of adjustment can yield different results. Rank regression provides a more objective approach to dealing with non-normal data that includes outliers. This paper uses simulated and real data to illustrate this useful regression approach for dealing with outliers and compares it to the results generated using classical regression models and semi-parametric regression models.
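A minimal sketch of a rank-based estimate in the spirit described above, minimizing Jaeckel's dispersion with Wilcoxon scores on synthetic data containing gross outliers (the specific score function is an assumption for illustration; the paper does not prescribe this exact implementation):

```python
import numpy as np

def jaeckel_slope(x, y, grid):
    """Rank-based regression: minimize Jaeckel's dispersion
    sum(a(R(e_i)) * e_i) with Wilcoxon scores a(i) = sqrt(12)*(i/(n+1) - 1/2)."""
    n = len(y)
    def dispersion(b):
        e = y - b * x
        ranks = np.argsort(np.argsort(e)) + 1
        scores = np.sqrt(12.0) * (ranks / (n + 1.0) - 0.5)
        return np.sum(scores * e)
    return grid[np.argmin([dispersion(b) for b in grid])]

rng = np.random.default_rng(9)
x = rng.normal(size=200)
y = 3.0 * x + rng.normal(scale=0.5, size=200)
y[:5] += 30.0                                   # gross outliers

b_rank = jaeckel_slope(x, y, np.linspace(0.0, 6.0, 601))
```

Because the residuals enter only through their ranks, the few contaminated points have bounded influence, and the slope estimate stays close to the true value without any ad-hoc trimming.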
Directory of Open Access Journals (Sweden)
Yingxin Gu
2016-11-01
Full Text Available Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Picotte, Joshua J.; Howard, Danny; Smith, Kelcy; Nelson, Kurtis
2016-01-01
Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
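The MAD-across-randomized-replications evaluation described in these abstracts can be sketched generically. Synthetic data and a plain linear fit stand in for the Landsat-based regression tree models (everything below is invented for illustration):

```python
import numpy as np

def mad(pred, actual):
    """Mean absolute difference, the accuracy metric used in the study."""
    return np.mean(np.abs(pred - actual))

rng = np.random.default_rng(13)
x = rng.uniform(0.0, 200.0, size=500)
ndvi = x + rng.normal(scale=3.0, size=500)      # synthetic stand-in for scaled NDVI

mads = []
for _ in range(30):                             # randomized replications
    idx = rng.permutation(500)
    train, test = idx[:400], idx[400:]          # 80/20 model development split
    slope, intercept = np.polyfit(x[train], ndvi[train], 1)
    mads.append(mad(slope * x[test] + intercept, ndvi[test]))
```

The mean of `mads` estimates model accuracy, and its spread across replications estimates stability — the two quantities the study uses to pick the optimal training fraction and rule count.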
Defects of mitochondrial DNA replication.
Copeland, William C
2014-09-01
Mitochondrial DNA is replicated by DNA polymerase γ in concert with accessory proteins such as the mitochondrial DNA helicase, single-stranded DNA binding protein, topoisomerase, and initiating factors. Defects in mitochondrial DNA replication or nucleotide metabolism can cause mitochondrial genetic diseases due to mitochondrial DNA deletions, point mutations, or depletion, which ultimately cause loss of oxidative phosphorylation. These genetic diseases include mitochondrial DNA depletion syndromes such as Alpers or early infantile hepatocerebral syndromes, and mitochondrial DNA deletion disorders, such as progressive external ophthalmoplegia, ataxia-neuropathy, or mitochondrial neurogastrointestinal encephalomyopathy. This review focuses on our current knowledge of genetic defects of mitochondrial DNA replication (POLG, POLG2, C10orf2, and MGME1) that cause instability of mitochondrial DNA and mitochondrial disease.
Regulation of beta cell replication
DEFF Research Database (Denmark)
Lee, Ying C; Nielsen, Jens Høiriis
2008-01-01
Beta cell mass, at any given time, is governed by cell differentiation, neogenesis, increased or decreased cell size (cell hypertrophy or atrophy), cell death (apoptosis), and beta cell proliferation. Nutrients, hormones and growth factors coupled with their signalling intermediates have been...... suggested to play a role in beta cell mass regulation. In addition, genetic mouse model studies have indicated that cyclins and cyclin-dependent kinases that determine cell cycle progression are involved in beta cell replication, and more recently, menin in association with cyclin-dependent kinase...... inhibitors has been demonstrated to be important in beta cell growth. In this review, we consider and highlight some aspects of cell cycle regulation in relation to beta cell replication. The role of cell cycle regulation in beta cell replication is mostly from studies in rodent models, but whether...
Shell Separation for Mirror Replication
1999-01-01
NASA's Space Optics Manufacturing Center has been working to expand our view of the universe via sophisticated new telescopes. The Optics Center's goal is to develop low-cost, advanced space optics technologies for the NASA program in the 21st century - including the long-term goal of imaging Earth-like planets in distant solar systems. To reduce the cost of mirror fabrication, Marshall Space Flight Center (MSFC) has developed replication techniques, the machinery, and materials to replicate electro-formed nickel mirrors. Optics replication uses reusable forms, called mandrels, to make telescope mirrors ready for final finishing. MSFC optical physicist Bill Jones monitors a device used to chill a mandrel, causing it to shrink and separate from the telescope mirror without deforming the mirror's precisely curved surface.
Simulation Studies in Data Replication Strategies
Institute of Scientific and Technical Information of China (English)
Harvey B. Newman; Iosif C. Legrand
2001-01-01
The aim of this work is to present simulation studies evaluating different data replication strategies between Regional Centers. The simulation framework developed within the "Models of Networked Analysis at Regional Centers" (MONARC) project, as a design and optimization tool for large-scale distributed systems, has been used for these modeling studies. Remote client-server access to database servers as well as ftp-like data transfers have been realistically simulated, and the performance and limitations are presented as a function of the characteristics of the protocol used and the network parameters.
Institute of Scientific and Technical Information of China (English)
杨拓宇; 李忠芳; 陈丰; 夏显明
2012-01-01
Adopting the quadratic polynomial regression method from statistical theory, we designed the composition of a new lead-free solder. Collected composition and melting-point data for SnAgCu alloys were statistically analyzed. Based on the interaction relations between the melting point and the various elements, a mathematical model was established, giving an explicit function expression relating composition to melting point and quantitatively describing the impact of composition on the melting temperature range. After testing the equation, the mathematical model was optimized to predict the melting range outside the range of existing compositions, and a three-dimensional map of the silver-copper interaction was obtained. The alloy composition range giving a lower melting point was predicted. Germanium was added to the solder as an alloying element. Using high-frequency induction heating equipment as the heat source, a quartz tube as the melting vessel, and an inert gas for protection, the new solder was prepared. Analysis and testing show that the sample has a uniform composition and a fine microstructure.
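The fitting step can be sketched as a quadratic polynomial with an Ag-Cu interaction term estimated by least squares. This is a hedged illustration: the composition and melting-point values below are invented, not the paper's data.

```python
import numpy as np

# Hypothetical SnAgCu data: (wt% Ag, wt% Cu, melting temperature in °C)
data = np.array([
    [3.0, 0.5, 217.0],
    [3.5, 0.7, 218.0],
    [3.8, 0.7, 219.5],
    [1.0, 0.5, 227.0],
    [2.0, 0.8, 222.5],
    [4.0, 1.0, 221.0],
    [2.5, 0.3, 223.0],
    [3.0, 1.0, 218.5],
])
ag, cu, t = data[:, 0], data[:, 1], data[:, 2]

# Quadratic polynomial regression with an Ag-Cu interaction term:
# T = b0 + b1*Ag + b2*Cu + b3*Ag^2 + b4*Cu^2 + b5*Ag*Cu
X = np.column_stack([np.ones_like(ag), ag, cu, ag**2, cu**2, ag*cu])
coef, *_ = np.linalg.lstsq(X, t, rcond=None)

# explicit function expression for predicting melting temperature
predict = lambda a, c: np.array([1, a, c, a*a, c*c, a*c]) @ coef
```

Evaluating `predict` over a grid of (Ag, Cu) values yields the three-dimensional interaction surface the abstract describes.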
Personality and Academic Motivation: Replication, Extension, and Replication
Jones, Martin H.; McMichael, Stephanie N.
2015-01-01
Previous work examines the relationships between personality traits and intrinsic/extrinsic motivation. We replicate and extend previous work to examine how personality may relate to achievement goals, efficacious beliefs, and mindset about intelligence. Approximately 200 undergraduates responded to the survey, with 150 participants replicating…
Logistic regression a self-learning text
Kleinbaum, David G
1994-01-01
This textbook provides students and professionals in the health sciences with a presentation of the use of logistic regression in research. The text is self-contained, and designed to be used both in class and as a tool for self-study. It arises from the author's many years of experience teaching this material, and the notes on which it is based have been extensively used throughout the world.
Realization of Ridge Regression in MATLAB
Dimitrov, S.; Kovacheva, S.; Prodanova, K.
2008-10-01
The least squares estimator (LSE) of the coefficients in the classical linear regression model is unbiased. In the case of multicollinearity of the vectors of the design matrix, the LSE has very large variance, i.e., the estimator is unstable. A more stable (but biased) estimator can be constructed using the ridge estimator (RE). In this paper the basic methods of obtaining ridge estimators and numerical procedures for their realization in MATLAB are considered. An application to a pharmacokinetics problem is presented.
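The ridge estimator described here is beta(k) = (X'X + kI)^(-1) X'y, which reduces to the LSE at k = 0 and stabilizes the estimate under multicollinearity. The paper works in MATLAB; the following is an equivalent NumPy sketch on synthetic nearly collinear data, with the ridge parameter k = 1 chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.1, size=n)

def ridge(X, y, k):
    # ridge estimator: beta(k) = (X'X + k I)^{-1} X'y ; k = 0 gives the LSE
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

beta_ls = ridge(X, y, 0.0)     # unbiased but unstable under multicollinearity
beta_ridge = ridge(X, y, 1.0)  # biased but with smaller norm/variance
```

The individual least squares coefficients swing wildly in opposite directions along the near-collinear axis, while their sum (the well-conditioned direction) stays close to the true value of 2; ridge shrinkage damps the unstable component.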
ORDINAL REGRESSION FOR INFORMATION RETRIEVAL
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
This letter presents a new discriminative model for Information Retrieval (IR), referred to as the Ordinal Regression Model (ORM). ORM differs from most existing models in that it views IR as an ordinal regression problem (i.e. a ranking problem) instead of binary classification. Since the task of IR is to rank documents according to the user's information need, IR can naturally be viewed as ordinal regression. Two parameter learning algorithms for ORM are presented. One is a perceptron-based algorithm. The other is the ranking Support Vector Machine (SVM). The effectiveness of the proposed approach has been evaluated on the task of ad hoc retrieval using three English Text REtrieval Conference (TREC) sets and two Chinese TREC sets. Results show that ORM significantly outperforms state-of-the-art language model approaches and the OKAPI system in all test sets, and that it is more appropriate to view IR as ordinal regression rather than binary classification.
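The abstract does not specify the perceptron-based learner; a common pairwise-ranking perceptron, shown here on toy graded documents, illustrates the general idea of treating IR as ordinal regression. This is a generic sketch, not necessarily the letter's exact algorithm, and the feature vectors and relevance grades are invented.

```python
# toy documents: (feature vector, relevance grade); higher grade = more relevant
docs = [([1.0, 0.2], 2), ([0.8, 0.1], 2), ([0.4, 0.9], 1),
        ([0.3, 0.8], 1), ([0.1, 0.4], 0), ([0.0, 0.5], 0)]

w = [0.0, 0.0]
for epoch in range(50):
    for (xi, ri) in docs:
        for (xj, rj) in docs:
            if ri > rj:  # xi should be ranked above xj
                margin = sum(wk*(a - b) for wk, a, b in zip(w, xi, xj))
                if margin <= 0:  # misranked pair: perceptron update on the difference
                    w = [wk + (a - b) for wk, a, b in zip(w, xi, xj)]

score = lambda x: sum(wk*xk for wk, xk in zip(w, x))
ranked = sorted(docs, key=lambda d: -score(d[0]))
```

After training, sorting documents by the learned score reproduces the ordering induced by the relevance grades on this separable toy set.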
Multiple Regression and Its Discontents
Snell, Joel C.; Marsh, Mitchell
2012-01-01
Multiple regression is part of a larger statistical strategy originated by Gauss. The authors raise questions about the theory and suggest some changes that would make room for Mandelbrot and Serendipity.
Regression methods for medical research
Tai, Bee Choo
2013-01-01
Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the
Forecasting with Dynamic Regression Models
Pankratz, Alan
2012-01-01
One of the most widely used tools in statistical forecasting, the single-equation regression model, is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed-lag responses of output to input series, and the autocorrelation patterns of regression disturbances. It also includes six case studies.
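A distributed-lag response of an output series to an input series, one of the intertemporal patterns mentioned above, can be illustrated with a single-equation regression on lagged inputs. The data are synthetic and the lag weights are invented; this is a sketch, not an example from the book.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200
x = rng.normal(size=T)

# output responds to input with a distributed lag:
# y_t = 2.0 x_t + 1.0 x_{t-1} + 0.5 x_{t-2} + noise
y = np.zeros(T)
for t in range(2, T):
    y[t] = 2.0*x[t] + 1.0*x[t-1] + 0.5*x[t-2] + rng.normal(scale=0.1)

# single-equation dynamic regression: regress y_t on x_t, x_{t-1}, x_{t-2}
X = np.column_stack([x[2:], x[1:-1], x[:-2]])
coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
```

The estimated coefficients recover the lag weights, i.e. how a unit impulse in the input propagates into the output over subsequent periods.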
Wrong Signs in Regression Coefficients
McGee, Holly
1999-01-01
When using parametric cost estimation, it is important to note the possibility of the regression coefficients having the wrong sign. A wrong sign is defined as a sign on the regression coefficient opposite to the researcher's intuition and experience. Some possible causes for the wrong sign discussed in this paper are a small range of x's, leverage points, missing variables, multicollinearity, and computational error. Additionally, techniques for determining the cause of the wrong sign are given.
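A small simulation can reproduce the multicollinearity mechanism described above: two cost drivers that are nearly collinear over a narrow range of x's yield unstable individual coefficients that can take counterintuitive signs, while the combined effect along the collinear direction stays well determined. All data and parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
weight = rng.uniform(10, 12, n)                  # narrow range of x's
power = 5*weight + rng.normal(0, 0.05, n)        # nearly collinear with weight
cost = 100 + 10*weight + 2*power + rng.normal(0, 5, n)  # both true effects positive

def fit(cols):
    # OLS fit of cost on an intercept plus the given predictor columns
    A = np.column_stack([np.ones(n)] + cols)
    b, *_ = np.linalg.lstsq(A, cost, rcond=None)
    return b

b_both = fit([weight, power])    # collinear fit: individual signs unreliable
b_single = fit([weight])         # combined effect, well determined (about 10 + 2*5 = 20)
r = np.corrcoef(weight, power)[0, 1]
```

Even when an individual coefficient in `b_both` comes out with an intuition-defying sign, the identified combination along the collinear direction (here `b_both[1] + 5*b_both[2]`) remains close to the true total effect, which is why dropping or combining collinear predictors is a standard remedy.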
From Rasch scores to regression
DEFF Research Database (Denmark)
Christensen, Karl Bang
2006-01-01
Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties....... This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables and latent regression models based on the distribution of the score.
Regulation of Replication Recovery and Genome Integrity
DEFF Research Database (Denmark)
Colding, Camilla Skettrup
is mediated by Mrc1, which ensures Mec1 presence at the stalled replication fork thus facilitating Rad53 phosphorylation. When replication can be resumed safely, the replication checkpoint is deactivated and replication forks restart. One mechanism for checkpoint deactivation is the ubiquitin......-targeted proteasomal degradation of Mrc1. In this study, we describe a novel nuclear structure, the intranuclear quality control compartment (INQ), which regulates protein turnover and is important for recovery after replication stress. We find that upon methyl methanesulfonate (MMS)-induced replication stress, INQ...... facilitate replication recovery after MMS-induced replication stress. Our data reveal that control of Mrc1 turnover through the interplay between posttranslational modifications and INQ localization adds another layer of regulation to the replication checkpoint. We also add replication recovery to the list......
Hyperthermia stimulates HIV-1 replication.
Directory of Open Access Journals (Sweden)
Ferdinand Roesch
Full Text Available HIV-infected individuals may experience fever episodes. Fever is an elevation of the body temperature accompanied by inflammation. It is usually beneficial for the host through enhancement of immunological defenses. In cultures, transient non-physiological heat shock (42-45°C) and Heat Shock Proteins (HSPs) modulate HIV-1 replication, through poorly defined mechanisms. The effect of physiological hyperthermia (38-40°C) on HIV-1 infection has not been extensively investigated. Here, we show that culturing primary CD4+ T lymphocytes and cell lines at a fever-like temperature (39.5°C) increased the efficiency of HIV-1 replication by 2 to 7 fold. Hyperthermia did not facilitate viral entry nor reverse transcription, but increased Tat transactivation of the LTR viral promoter. Hyperthermia also boosted HIV-1 reactivation in a model of latently-infected cells. By imaging HIV-1 transcription, we further show that Hsp90 co-localized with actively transcribing provirus, and this phenomenon was enhanced at 39.5°C. The Hsp90 inhibitor 17-AAG abrogated the increase of HIV-1 replication in hyperthermic cells. Altogether, our results indicate that fever may directly stimulate HIV-1 replication, in a process involving Hsp90 and facilitation of Tat-mediated LTR activity.
Hyperthermia stimulates HIV-1 replication.
Roesch, Ferdinand; Meziane, Oussama; Kula, Anna; Nisole, Sébastien; Porrot, Françoise; Anderson, Ian; Mammano, Fabrizio; Fassati, Ariberto; Marcello, Alessandro; Benkirane, Monsef; Schwartz, Olivier
2012-01-01
HIV-infected individuals may experience fever episodes. Fever is an elevation of the body temperature accompanied by inflammation. It is usually beneficial for the host through enhancement of immunological defenses. In cultures, transient non-physiological heat shock (42-45°C) and Heat Shock Proteins (HSPs) modulate HIV-1 replication, through poorly defined mechanisms. The effect of physiological hyperthermia (38-40°C) on HIV-1 infection has not been extensively investigated. Here, we show that culturing primary CD4+ T lymphocytes and cell lines at a fever-like temperature (39.5°C) increased the efficiency of HIV-1 replication by 2 to 7 fold. Hyperthermia did not facilitate viral entry nor reverse transcription, but increased Tat transactivation of the LTR viral promoter. Hyperthermia also boosted HIV-1 reactivation in a model of latently-infected cells. By imaging HIV-1 transcription, we further show that Hsp90 co-localized with actively transcribing provirus, and this phenomenon was enhanced at 39.5°C. The Hsp90 inhibitor 17-AAG abrogated the increase of HIV-1 replication in hyperthermic cells. Altogether, our results indicate that fever may directly stimulate HIV-1 replication, in a process involving Hsp90 and facilitation of Tat-mediated LTR activity.
Cellular Responses to Replication Problems
M. Budzowska (Magdalena)
2008-01-01
textabstractDuring every S-phase cells need to duplicate their genomes so that both daughter cells inherit complete copies of genetic information. It is a tremendous task, given the large sizes of mammalian genomes and the required precision of DNA replication. A major threat to the accuracy and eff
Covert Reinforcement: A Partial Replication.
Ripstra, Constance C.; And Others
A partial replication of an investigation of the effect of covert reinforcement on a perceptual estimation task is described. The study was extended to include an extinction phase. There were five treatment groups: covert reinforcement, neutral scene reinforcement, noncontingent covert reinforcement, and two control groups. Each subject estimated…
Nonlinear wavelet estimation of regression function with random design
Institute of Scientific and Technical Information of China (English)
张双林; 郑忠国
1999-01-01
The nonlinear wavelet estimator of regression function with random design is constructed. The optimal uniform convergence rate of the estimator in a ball of the Besov space B^s_{p,q} is proved under quite general assumptions. The adaptive nonlinear wavelet estimator with near-optimal convergence rate in a wide range of smoothness function classes is also constructed. The properties of the nonlinear wavelet estimator given for random design regression, requiring only a bounded third-order moment of the error, can be compared with those of the nonlinear wavelet estimator given in the literature for equally spaced fixed design regression with i.i.d. Gaussian error.
Crinivirus replication and host interactions
Directory of Open Access Journals (Sweden)
Zsofia A Kiss
2013-05-01
Full Text Available Criniviruses comprise one of the genera within the family Closteroviridae. Members in this family are restricted to the phloem and rely on whitefly vectors of the genera Bemisia and/or Trialeurodes for plant-to-plant transmission. All criniviruses have bipartite, positive-sense ssRNA genomes, although there is an unconfirmed report of one having a tripartite genome. Lettuce infectious yellows virus (LIYV) is the type species of the genus, the best studied so far of the criniviruses and the first for which a reverse genetics system was available. LIYV RNA 1 encodes proteins predicted to be involved in replication, and alone is competent for replication in protoplasts. Replication results in accumulation of cytoplasmic vesiculated membranous structures which are characteristic of most studied members of the Closteroviridae. These membranous structures, often referred to as BYV-type vesicles, are likely sites of RNA replication. LIYV RNA 2 is replicated in trans when co-infecting cells with RNA 1, but is temporally delayed relative to RNA 1. Efficient RNA 2 replication also is dependent on the RNA 1-encoded RNA-binding protein, P34. No LIYV RNA 2-encoded proteins have been shown to affect RNA replication, but at least four, CP, CPm, Hsp70h, and p59, are virion structural components, and CPm is a determinant of whitefly transmissibility. Roles of other LIYV RNA 2-encoded proteins are largely as yet unknown, but P26 is a non-virion protein that accumulates in cells as characteristic plasmalemma deposits which in plants are localized within phloem parenchyma and companion cells over plasmodesmata connections to sieve elements. The two remaining crinivirus-conserved RNA 2-encoded proteins are P5 and P9. P5 is a 39-amino-acid protein encoded at the 5' end of RNA 2 as ORF1 and is part of the hallmark closterovirus gene array. The orthologous gene in BYV has been shown to play a role in cell-to-cell movement and indicated to be localized to the
A Matlab program for stepwise regression
Directory of Open Access Journals (Sweden)
Yanhong Qi
2016-03-01
Full Text Available Stepwise linear regression is a multi-variable regression technique for identifying statistically significant variables in a linear regression equation. In the present study, we present a Matlab program for stepwise regression.
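The article's program is in Matlab; a minimal forward-stepwise sketch in Python conveys the same idea, adding at each step the variable with the largest reduction in residual sum of squares and stopping via a partial-F entry threshold. The threshold value and data below are illustrative assumptions, not the article's settings.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 5
X = rng.normal(size=(n, p))
y = 3.0*X[:, 0] - 2.0*X[:, 2] + rng.normal(scale=0.5, size=n)  # only variables 0 and 2 matter

def sse(cols):
    # residual sum of squares for a model with an intercept plus the given columns
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ b
    return r @ r

selected, f_enter = [], 10.0   # partial-F threshold for entry (illustrative choice)
while True:
    candidates = [j for j in range(p) if j not in selected]
    if not candidates:
        break
    sse_cur = sse(selected)
    best = min(candidates, key=lambda j: sse(selected + [j]))
    sse_new = sse(selected + [best])
    df = n - len(selected) - 2          # residual df after adding the candidate
    F = (sse_cur - sse_new) / (sse_new / df)
    if F < f_enter:
        break
    selected.append(best)
```

On this synthetic example the procedure picks out the truly active variables and stops once no remaining candidate clears the entry threshold.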
XRA image segmentation using regression
Jin, Jesse S.
1996-04-01
Segmentation is an important step in image analysis. Thresholding is one of the most important approaches. There are several difficulties in segmentation, such as automatic threshold selection, dealing with intensity distortion, and noise removal. We have developed an adaptive segmentation scheme by applying the Central Limit Theorem in regression. A Gaussian regression is used to separate the distribution of background from foreground in a single-peak histogram. The separation helps to automatically determine the threshold. A small 3 by 3 window is applied, and the mode of the local histogram is used to overcome noise. Thresholding is based on local weighting, where regression is used again for parameter estimation. A connectivity test is applied to the final results to remove impulse noise. We have applied the algorithm to X-ray angiogram images to extract brain arteries. The algorithm works well for single-peak distributions where there is no valley in the histogram. The regression provides a method to apply knowledge in clustering. Extending regression to multiple-level segmentation needs further investigation.
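One way to realize the Gaussian-regression thresholding idea: since the log of a Gaussian is quadratic, a parabola fitted to the log-histogram around the background peak recovers the peak's mean and spread, from which a threshold follows even when the histogram has no valley. This sketch on synthetic angiogram-like intensities is an interpretation of the scheme, not the authors' code; the window size and 3-sigma rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
background = rng.normal(100, 10, 20000)         # dominant single-peak background
vessels = rng.uniform(150, 255, 1000)           # bright foreground (arteries)
pixels = np.clip(np.concatenate([background, vessels]), 0, 255)

counts, edges = np.histogram(pixels, bins=np.arange(257))
centers = edges[:-1] + 0.5

# Gaussian regression: the log of a Gaussian is quadratic, so fit a parabola
# to log-counts in a window around the main peak
mode = np.argmax(counts)
win = np.arange(mode - 15, mode + 16)
logc = np.log(counts[win] + 1.0)
a, b, c = np.polyfit(centers[win], logc, 2)     # a*x^2 + b*x + c, with a < 0
mu = -b / (2*a)                                  # background mean
sigma = np.sqrt(-1 / (2*a))                      # background spread
threshold = mu + 3*sigma                         # pixels beyond 3 sigma = foreground
mask = pixels > threshold
```

Because the background peak alone determines `mu` and `sigma`, the threshold does not depend on finding a valley between the two classes.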
Biplots in Reduced-Rank Regression
Braak, ter C.J.F.; Looman, C.W.N.
1994-01-01
Regression problems with a number of related response variables are typically analyzed by separate multiple regressions. This paper shows how these regressions can be visualized jointly in a biplot based on reduced-rank regression. Reduced-rank regression combines multiple regression and principal components analysis.
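A standard way to compute the reduced-rank regression solution, and the coefficient matrix a biplot would display, is to project the OLS coefficients onto the leading right singular vectors of the fitted values. The sketch below uses synthetic rank-1 data; the identity response weighting is an assumption, not necessarily the paper's choice.

```python
import numpy as np

rng = np.random.default_rng(11)
n, p, q, r = 100, 4, 3, 1            # samples, predictors, responses, target rank
X = rng.normal(size=(n, p))
# the three responses share one underlying linear combination of X (rank-1 truth)
signal = X @ np.array([1.0, -1.0, 0.5, 0.0])
Y = np.outer(signal, [1.0, 0.5, -0.5]) + rng.normal(scale=0.1, size=(n, q))

B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)     # separate multiple regressions
fitted = X @ B_ols
_, _, Vt = np.linalg.svd(fitted, full_matrices=False)
V_r = Vt[:r].T                                    # top-r response-space directions
B_rrr = B_ols @ V_r @ V_r.T                       # reduced-rank coefficient matrix
```

The rank-r factorization `B_ols @ V_r` (predictor scores) and `V_r` (response loadings) are exactly the two point sets a reduced-rank-regression biplot plots jointly.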
Replication-Uncoupled Histone Deposition during Adenovirus DNA Replication
Komatsu, Tetsuro; Nagata, Kyosuke
2012-01-01
In infected cells, the chromatin structure of the adenovirus genome DNA plays critical roles in its genome functions. Previously, we reported that in early phases of infection, incoming viral DNA is associated with both viral core protein VII and cellular histones. Here we show that in late phases of infection, newly synthesized viral DNA is also associated with histones. We also found that the knockdown of CAF-1, a histone chaperone that functions in the replication-coupled deposition of histones...
Interpretation of Standardized Regression Coefficients in Multiple Regression.
Thayer, Jerome D.
The extent to which standardized regression coefficients (beta values) can be used to determine the importance of a variable in an equation was explored. The beta value and the part correlation coefficient--also called the semi-partial correlation coefficient and reported in squared form as the incremental "r squared"--were compared for…
REPLICATION TOOL AND METHOD OF PROVIDING A REPLICATION TOOL
DEFF Research Database (Denmark)
2016-01-01
The invention relates to a replication tool (1, 1a, 1b) for producing a part (4) with a microscale textured replica surface (5a, 5b, 5c, 5d). The replication tool (1, 1a, 1b) comprises a tool surface (2a, 2b) defining a general shape of the item. The tool surface (2a, 2b) comprises a microscale...... structured master surface (3a, 3b, 3c, 3d) having a lateral master pattern and a vertical master profile. The microscale structured master surface (3a, 3b, 3c, 3d) has been provided by localized pulsed laser treatment to generate microscale phase explosions. A method for producing a part with microscale...... energy directors on flange portions thereof uses the replication tool (1, 1a, 1b) to form an item (4) with a general shape as defined by the tool surface (2a, 2b). The formed item (4) comprises a microscale textured replica surface (5a, 5b, 5c, 5d) with a lateral arrangement of polydisperse microscale......
Inferential Models for Linear Regression
Directory of Open Access Journals (Sweden)
Zuoyi Zhang
2011-09-01
Full Text Available Linear regression is arguably one of the most widely used statistical methods in applications. However, important problems, especially variable selection, remain a challenge for classical modes of inference. This paper develops a recently proposed framework of inferential models (IMs) in the linear regression context. In general, an IM is able to produce meaningful probabilistic summaries of the statistical evidence for and against assertions about the unknown parameter of interest and, moreover, these summaries are shown to be properly calibrated in a frequentist sense. Here we demonstrate, using simple examples, that the IM framework is promising for linear regression analysis --- including model checking, variable selection, and prediction --- and for uncertain inference in general.
[Is regression of atherosclerosis possible?].
Thomas, D; Richard, J L; Emmerich, J; Bruckert, E; Delahaye, F
1992-10-01
Experimental studies have shown the regression of atherosclerosis in animals given a cholesterol-rich diet and then given a normal diet or hypolipidemic therapy. Despite favourable results of clinical trials of primary prevention modifying the lipid profile, the concept of atherosclerosis regression in man remains very controversial. The methodological approach is difficult: this is based on angiographic data and requires strict standardisation of angiographic views and reliable quantitative techniques of analysis which are available with image processing. Several methodologically acceptable clinical coronary studies have shown not only stabilisation but also regression of atherosclerotic lesions with reductions of about 25% in total cholesterol levels and of about 40% in LDL cholesterol levels. These reductions were obtained either by drugs as in CLAS (Cholesterol Lowering Atherosclerosis Study), FATS (Familial Atherosclerosis Treatment Study) and SCOR (Specialized Center of Research Intervention Trial), by profound modifications in dietary habits as in the Lifestyle Heart Trial, or by surgery (ileo-caecal bypass) as in POSCH (Program On the Surgical Control of the Hyperlipidemias). On the other hand, trials with non-lipid lowering drugs such as the calcium antagonists (INTACT, MHIS) have not shown significant regression of existing atherosclerotic lesions but only a decrease in the number of new lesions. The clinical benefits of these regression studies are difficult to demonstrate given the limited period of observation, relatively small population numbers and the fact that in some cases the subjects were asymptomatic. The decrease in the number of cardiovascular events therefore seems relatively modest and concerns essentially subjects who were symptomatic initially. The clinical repercussion of studies of prevention involving a single lipid factor is probably partially due to the reduction in progression and anatomical regression of the atherosclerotic plaque
Nonparametric regression with filtered data
Linton, Oliver; Nielsen, Jens Perch; Van Keilegom, Ingrid; 10.3150/10-BEJ260
2011-01-01
We present a general principle for estimating a regression function nonparametrically, allowing for a wide variety of data filtering, for example, repeated left truncation and right censoring. Both the mean and the median regression cases are considered. The method works by first estimating the conditional hazard function or conditional survivor function and then integrating. We also investigate improved methods that take account of model structure such as independent errors and show that such methods can improve performance when the model structure is true. We establish the pointwise asymptotic normality of our estimators.
Logistic regression for circular data
Al-Daffaie, Kadhem; Khan, Shahjahan
2017-05-01
This paper considers the relationship between a binary response and a circular predictor. It develops the logistic regression model by employing the linear-circular regression approach. The maximum likelihood method is used to estimate the parameters. The Newton-Raphson numerical method is used to find the estimated values of the parameters. A data set from weather records of Toowoomba city is analysed by the proposed methods. Moreover, a simulation study is considered. The R software is used for all computations and simulations.
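A minimal version of the linear-circular approach: enter cos(theta) and sin(theta) of the circular predictor as covariates in the logistic model and maximize the likelihood by Newton-Raphson, as the paper describes. The weather-like data below are simulated, not the Toowoomba records, and the coefficient values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
theta = rng.uniform(0, 2*np.pi, n)                 # circular predictor (e.g. wind direction)
# linear-circular link: eta = b0 + b1*cos(theta) + b2*sin(theta)
X = np.column_stack([np.ones(n), np.cos(theta), np.sin(theta)])
true_b = np.array([-0.5, 1.5, 0.8])
p_true = 1/(1 + np.exp(-(X @ true_b)))
y = (rng.uniform(size=n) < p_true).astype(float)   # binary response (e.g. rain / no rain)

# Newton-Raphson iterations for the maximum likelihood estimate
b = np.zeros(3)
for _ in range(25):
    p = 1/(1 + np.exp(-(X @ b)))
    W = p*(1 - p)                       # Bernoulli variance weights
    grad = X.T @ (y - p)                # score vector
    H = X.T @ (X * W[:, None])          # observed/expected information
    b = b + np.linalg.solve(H, grad)
```

The fitted direction of peak probability is atan2(b2, b1), which is how the cos/sin parametrization maps back to the circle.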
Quasi-least squares regression
Shults, Justine
2014-01-01
Drawing on the authors' substantial expertise in modeling longitudinal and clustered data, Quasi-Least Squares Regression provides a thorough treatment of quasi-least squares (QLS) regression-a computational approach for the estimation of correlation parameters within the framework of generalized estimating equations (GEEs). The authors present a detailed evaluation of QLS methodology, demonstrating the advantages of QLS in comparison with alternative methods. They describe how QLS can be used to extend the application of the traditional GEE approach to the analysis of unequally spaced longitu
Replication, Communication, and the Population Dynamics of Scientific Discovery.
Directory of Open Access Journals (Sweden)
Richard McElreath
Full Text Available Many published research results are false (Ioannidis, 2005), and controversy continues over the roles of replication and publication policy in improving the reliability of research. Addressing these problems is frustrated by the lack of a formal framework that jointly represents hypothesis formation, replication, publication bias, and variation in research quality. We develop a mathematical model of scientific discovery that combines all of these elements. This model provides both a dynamic model of research as well as a formal framework for reasoning about the normative structure of science. We show that replication may serve as a ratchet that gradually separates true hypotheses from false, but the same factors that make initial findings unreliable also make replications unreliable. The most important factors in improving the reliability of research are the rate of false positives and the base rate of true hypotheses, and we offer suggestions for addressing each. Our results also bring clarity to verbal debates about the communication of research. Surprisingly, publication bias is not always an obstacle, but instead may have positive impacts: suppression of negative novel findings is often beneficial. We also find that communication of negative replications may aid true discovery even when attempts to replicate have diminished power. The model speaks constructively to ongoing debates about the design and conduct of science, focusing analysis and discussion on precise, internally consistent models, as well as highlighting the importance of population dynamics.
Replication, Communication, and the Population Dynamics of Scientific Discovery.
McElreath, Richard; Smaldino, Paul E
2015-01-01
Many published research results are false (Ioannidis, 2005), and controversy continues over the roles of replication and publication policy in improving the reliability of research. Addressing these problems is frustrated by the lack of a formal framework that jointly represents hypothesis formation, replication, publication bias, and variation in research quality. We develop a mathematical model of scientific discovery that combines all of these elements. This model provides both a dynamic model of research as well as a formal framework for reasoning about the normative structure of science. We show that replication may serve as a ratchet that gradually separates true hypotheses from false, but the same factors that make initial findings unreliable also make replications unreliable. The most important factors in improving the reliability of research are the rate of false positives and the base rate of true hypotheses, and we offer suggestions for addressing each. Our results also bring clarity to verbal debates about the communication of research. Surprisingly, publication bias is not always an obstacle, but instead may have positive impacts: suppression of negative novel findings is often beneficial. We also find that communication of negative replications may aid true discovery even when attempts to replicate have diminished power. The model speaks constructively to ongoing debates about the design and conduct of science, focusing analysis and discussion on precise, internally consistent models, as well as highlighting the importance of population dynamics.
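The roles of the false-positive rate, the base rate of true hypotheses, and replication as a ratchet can be illustrated with a simple Bayesian posterior calculation. This is a toy sketch in the spirit of the model, not the authors' population-dynamic framework, and all numeric rates are hypothetical.

```python
def ppv(prior, power, alpha):
    # P(hypothesis true | positive result) by Bayes' rule:
    # true positives / (true positives + false positives)
    return power*prior / (power*prior + alpha*(1 - prior))

base_rate, power, alpha = 0.1, 0.8, 0.05   # hypothetical rates
after_one = ppv(base_rate, power, alpha)   # posterior after one positive finding

# a successful replication updates the posterior again (replication as a ratchet),
# here assuming the replication attempt has diminished power
after_replication = ppv(after_one, 0.5, alpha)
```

With a low base rate of true hypotheses, a single positive result leaves substantial doubt, while even an underpowered successful replication ratchets the posterior sharply upward, consistent with the abstract's claims.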
Verifying likelihoods for low template DNA profiles using multiple replicates
Steele, Christopher D.; Greenhalgh, Matthew; Balding, David J.
2014-01-01
To date there is no generally accepted method to test the validity of algorithms used to compute likelihood ratios (LR) evaluating forensic DNA profiles from low-template and/or degraded samples. An upper bound on the LR is provided by the inverse of the match probability, which is the usual measure of weight of evidence for standard DNA profiles not subject to the stochastic effects that are the hallmark of low-template profiles. However, even for low-template profiles the LR in favour of a true prosecution hypothesis should approach this bound as the number of profiling replicates increases, provided that the queried contributor is the major contributor. Moreover, for sufficiently many replicates the standard LR for mixtures is often surpassed by the low-template LR. It follows that multiple LTDNA replicates can provide stronger evidence for a contributor to a mixture than a standard analysis of a good-quality profile. Here, we examine the performance of the likeLTD software for up to eight replicate profiling runs. We consider simulated and laboratory-generated replicates as well as resampling replicates from a real crime case. We show that LRs generated by likeLTD usually do exceed the mixture LR given sufficient replicates, are bounded above by the inverse match probability and do approach this bound closely when this is expected. We also show good performance of likeLTD even when a large majority of alleles are designated as uncertain, and suggest that there can be advantages to using different profiling sensitivities for different replicates. Overall, our results support both the validity of the underlying mathematical model and its correct implementation in the likeLTD software. PMID:25082140
Bishop, Malachy; Rumrill, Phillip D; Roessler, Richard T
2015-01-01
This article presents a replication of Rumrill, Roessler, and Fitzgerald's 2004 analysis of a three-factor model of the impact of multiple sclerosis (MS) on quality of life (QOL). The three factors in the original model included illness-related, employment-related, and psychosocial adjustment factors. To test hypothesized relationships between QOL and illness-related, employment-related, and psychosocial variables using data from a survey of the employment concerns of Americans with MS (N = 1,839). An ex post facto, multiple correlational design was employed incorporating correlational and multiple regression analyses. QOL was positively related to educational level, employment status, job satisfaction, and job-match, and negatively related to number of symptoms, severity of symptoms, and perceived stress level. The three-factor model explained approximately 37 percent of the variance in QOL scores. The results of this replication confirm the continuing value of the three-factor model for predicting the QOL of adults with MS, and demonstrate the importance of medical, mental health, and vocational rehabilitation interventions and services in promoting QOL.
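As a sketch of the kind of analysis reported, the following fits a three-predictor linear model and computes the share of variance explained (the data are synthetic stand-ins, not the MS survey):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for illness-, employment-, and psychosocial-related
# factors (simulated data, not the MS survey).
X = rng.normal(size=(n, 3))
qol = X @ np.array([0.5, 0.4, -0.3]) + rng.normal(size=n)

Xd = np.column_stack([np.ones(n), X])           # design matrix with intercept
beta, *_ = np.linalg.lstsq(Xd, qol, rcond=None)
resid = qol - Xd @ beta
r2 = 1 - resid.var() / qol.var()                # share of variance explained
print(round(r2, 2))
```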
An Efficient Local Algorithm for Distributed Multivariate Regression
National Aeronautics and Space Administration — This paper offers a local distributed algorithm for multivariate regression in large peer-to-peer environments. The algorithm is designed for distributed...
Therapeutic targeting of replicative immortality
Yaswen, Paul; MacKenzie, Karen L.; Keith, W. Nicol; Hentosh, Patricia; Rodier, Francis; Zhu, Jiyue; Firestone, Gary L.; Matheu, Ander; Carnero, Amancio; Bilsland, Alan; Sundin, Tabetha; Honoki, Kanya; Fujii, Hiromasa; Georgakilas, Alexandros G.; Amedei, Amedeo
2015-01-01
One of the hallmarks of malignant cell populations is the ability to undergo continuous proliferation. This property allows clonal lineages to acquire sequential aberrations that can fuel increasingly autonomous growth, invasiveness, and therapeutic resistance. Innate cellular mechanisms have evolved to regulate replicative potential as a hedge against malignant progression. When activated in the absence of normal terminal differentiation cues, these mechanisms can result in a state of persis...
Alphavirus polymerase and RNA replication.
Pietilä, Maija K; Hellström, Kirsi; Ahola, Tero
2017-01-16
Alphaviruses are typically arthropod-borne, and many are important pathogens such as chikungunya virus. Alphaviruses encode four nonstructural proteins (nsP1-4), initially produced as a polyprotein P1234. nsP4 is the core RNA-dependent RNA polymerase but all four nsPs are required for RNA synthesis. The early replication complex (RC) formed by the polyprotein P123 and nsP4 synthesizes minus RNA strands, and the late RC composed of fully processed nsP1-nsP4 is responsible for the production of genomic and subgenomic plus strands. Different parts of nsP4 recognize the promoters for minus and plus strands but the binding also requires the other nsPs. The alphavirus polymerase has been purified and is capable of de novo RNA synthesis only in the presence of the other nsPs. The purified nsP4 also has terminal adenylyltransferase activity, which may generate the poly(A) tail at the 3' end of the genome. Membrane association of the nsPs is vital for replication, and alphaviruses induce membrane invaginations called spherules, which form a microenvironment for RNA synthesis by concentrating replication components and protecting double-stranded RNA intermediates. The RCs isolated as crude membrane preparations are active in RNA synthesis in vitro, but high-resolution structure of the RC has not been achieved, and thus the arrangement of viral and possible host components remains unknown. For some alphaviruses, Ras-GTPase-activating protein (Src-homology 3 (SH3) domain)-binding proteins (G3BPs) and amphiphysins have been shown to be essential for RNA replication and are present in the RCs. Host factors offer an additional target for antivirals, as only few alphavirus polymerase inhibitors have been described.
Regression of lumbar disk herniation
Directory of Open Access Journals (Sweden)
G. Yu Evzikov
2015-01-01
Full Text Available Compression of the spinal nerve root, giving rise to pain and sensory and motor disorders in the area of its innervation, is the most vivid manifestation of a herniated intervertebral disk. Different treatment modalities for these conditions, including neurosurgery, are discussed. There has been recent evidence that herniated disks can regress spontaneously. The paper describes a female patient with a large lateralized disc extrusion that caused compression of the nerve root S1, leading to obvious myotonic and radicular syndrome. Magnetic resonance imaging showed that the clinical manifestations of discogenic radiculopathy, as well as the myotonic syndrome and morphological changes, completely regressed 8 months later. The likely mechanism is inflammation-induced resorption of a large herniated disk fragment, which agrees with the data available in the literature. A decision to perform neurosurgery, for which the patient had indications, was made during her first consultation. After regression of discogenic radiculopathy, there was only moderate pain caused by musculoskeletal diseases (facet syndrome, piriformis syndrome) that were successfully eliminated by minimally invasive techniques.
Heteroscedasticity checks for regression models
Institute of Scientific and Technical Information of China (English)
(author not listed)
2001-01-01
For checking on heteroscedasticity in regression models, a unified approach is proposed to constructing test statistics in parametric and nonparametric regression models. For nonparametric regression, the test is not affected sensitively by the choice of smoothing parameters which are involved in estimation of the nonparametric regression function. The limiting null distribution of the test statistic remains the same in a wide range of the smoothing parameters. When the covariate is one-dimensional, the tests are, under some conditions, asymptotically distribution-free. In the high-dimensional cases, the validity of bootstrap approximations is investigated. It is shown that a variant of the wild bootstrap is consistent while the classical bootstrap is not in the general case, but is applicable if some extra assumption on conditional variance of the squared error is imposed. A simulation study is performed to provide evidence of how the tests work and compare with tests that have appeared in the literature. The approach may readily be extended to handle partial linear, and linear autoregressive models.
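As a concrete, classical baseline for such checks (a simple Breusch–Pagan-type statistic on synthetic data, not the empirical-process tests proposed in the paper):

```python
import numpy as np

def breusch_pagan(x, y):
    """n*R^2 from regressing squared OLS residuals on the covariate;
    compare to a chi-square(1) critical value (3.84 at the 5% level)."""
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e2 = (y - X @ beta) ** 2                    # squared residuals
    g, *_ = np.linalg.lstsq(X, e2, rcond=None)  # auxiliary regression
    r2 = 1.0 - ((e2 - X @ g) ** 2).sum() / ((e2 - e2.mean()) ** 2).sum()
    return n * r2

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 300)
y_het = 1 + x + rng.normal(scale=0.2 + x, size=300)   # variance grows with x
y_hom = 1 + x + rng.normal(scale=1.0, size=300)       # constant variance

print(breusch_pagan(x, y_het), breusch_pagan(x, y_hom))
```

The statistic is large when the residual variance tracks the covariate, and small under homoscedasticity.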
Growth Regression and Economic Theory
Elbers, Chris; Gunning, Jan Willem
2002-01-01
In this note we show that the standard, loglinear growth regression specification is consistent with one and only one model in the class of stochastic Ramsey models. This model is highly restrictive: it requires a Cobb-Douglas technology and a 100% depreciation rate, and it implies that risk does not af...
Correlation Weights in Multiple Regression
Waller, Niels G.; Jones, Jeff A.
2010-01-01
A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting…
Ridge Regression for Interactive Models.
Tate, Richard L.
1988-01-01
An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are favorable to…
Dynamic replication of Web contents
Institute of Scientific and Technical Information of China (English)
(author not listed)
2007-01-01
The phenomenal growth of the World Wide Web has brought a huge increase in traffic to popular web sites. Long delays and denial of service experienced by end-users, especially during peak hours, continue to be a common problem when accessing popular sites. Replicating some of the objects at multiple sites in a distributed web-server environment is one possible solution to improve the response time/latency. The decision of what and where to replicate requires solving a constraint optimization problem, which is NP-complete in general. In this paper, we consider the problem of placing copies of objects in a distributed web server system to minimize the cost of serving read and write requests when the web servers have limited storage capacity. We formulate the problem as a 0-1 optimization problem and present a polynomial-time greedy algorithm with backtracking to dynamically replicate objects at the appropriate sites to minimize a cost function. To reduce the solution search space, we present necessary conditions for a site to have a replica of an object in order to minimize the cost function. We present simulation results for a variety of problems to illustrate the accuracy and efficiency of the proposed algorithms and compare them with those of some well-known algorithms. The simulation results demonstrate the superiority of the proposed algorithms.
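A minimal greedy sketch of the replica-placement idea (hypothetical cost model and data; the paper's algorithm additionally uses backtracking and necessary conditions to prune the search):

```python
import itertools

# Hypothetical cost model: reads pay distance to the nearest replica,
# each extra copy pays a fixed write-propagation cost.
def total_cost(reads, dist, placement, write_cost):
    cost = 0.0
    for o, homes in placement.items():
        for i, r in enumerate(reads[o]):
            cost += r * min(dist[i][h] for h in homes)
        cost += write_cost * (len(homes) - 1)
    return cost

def greedy_place(reads, dist, capacity, write_cost):
    """Repeatedly add the replica with the largest cost reduction,
    respecting per-site storage capacity, until no placement helps."""
    n_sites = len(dist)
    placement = {o: {0} for o in reads}          # every object starts at site 0
    used = [0] * n_sites
    used[0] = len(reads)
    improved = True
    while improved:
        improved = False
        base = total_cost(reads, dist, placement, write_cost)
        best = None
        for o, i in itertools.product(reads, range(n_sites)):
            if i in placement[o] or used[i] >= capacity:
                continue
            trial = {k: set(v) for k, v in placement.items()}
            trial[o].add(i)
            gain = base - total_cost(reads, dist, trial, write_cost)
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, o, i)
        if best:
            _, o, i = best
            placement[o].add(i)
            used[i] += 1
            improved = True
    return placement

dist = [[0, 5, 9], [5, 0, 4], [9, 4, 0]]         # inter-site distances
reads = {"a": [1, 10, 10], "b": [10, 1, 1]}      # read rates per site
print(greedy_place(reads, dist, capacity=2, write_cost=3))
```

The greedy step only ever accepts strict improvements, so the final placement never costs more than the initial single-copy one.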
ASYMPTOTIC EFFICIENT ESTIMATION IN SEMIPARAMETRIC NONLINEAR REGRESSION MODELS
Institute of Scientific and Technical Information of China (English)
ZhuZhongyi; WeiBocheng
1999-01-01
In this paper, the estimation method based on the "generalized profile likelihood" for the conditionally parametric models in the paper given by Severini and Wong (1992) is extended to fixed design semiparametric nonlinear regression models. For these semiparametric nonlinear regression models, the resulting estimator of the parametric component of the model is shown to be asymptotically efficient, and the strong convergence rate of the nonparametric component is investigated. Many results (for example Chen (1988), Gao & Zhao (1993), Rice (1986) et al.) are extended to fixed design semiparametric nonlinear regression models.
Evaluating replicability of laboratory experiments in economics.
Camerer, Colin F; Dreber, Anna; Forsell, Eskil; Ho, Teck-Hua; Huber, Jürgen; Johannesson, Magnus; Kirchler, Michael; Almenberg, Johan; Altmejd, Adam; Chan, Taizan; Heikensten, Emma; Holzmeister, Felix; Imai, Taisuke; Isaksson, Siri; Nave, Gideon; Pfeiffer, Thomas; Razen, Michael; Wu, Hang
2016-03-25
The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and they all have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.
Regression Verification Using Impact Summaries
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program
A Paper Model of DNA Structure and Replication.
Sigismondi, Linda A.
1989-01-01
A paper model which is designed to give students a hands-on experience during lecture and blackboard instruction on DNA structure is provided. A list of materials, paper patterns, and procedures for using the models to teach DNA structure and replication are given. (CW)
The Persuasiveness of Metaphor: A Replication and Extension.
Siltanen, Susan A.
A study was conducted to replicate and extend an earlier investigation of the persuasive effects of extended, intense concluding sex and death metaphors by using a more controlled design and by mixing metaphors. Fifty-eight high school students completed pretests assessing their attitudes toward a speech topic (legalization of marijuana). Two…
Adenovirus sequences required for replication in vivo.
Wang, K.; Pearson, G D
1985-01-01
We have studied the in vivo replication properties of plasmids carrying deletion mutations within cloned adenovirus terminal sequences. Deletion mapping located the adenovirus DNA replication origin entirely within the first 67 bp of the adenovirus inverted terminal repeat. This region could be further subdivided into two functional domains: a minimal replication origin and an adjacent auxiliary region which boosted the efficiency of replication by more than 100-fold. The minimal origin occup...
Replication Origin Specification Gets a Push.
Plosky, Brian S
2015-12-03
During the gap between G1 and S phases when replication origins are licensed and fired, it is possible that DNA translocases could disrupt pre-replicative complexes (pre-RCs). In this issue of Molecular Cell, Gros et al. (2015) find that pre-RCs can be pushed along DNA and retain the ability to support replication.
Exploiting replicative stress to treat cancer
DEFF Research Database (Denmark)
Dobbelstein, Matthias; Sørensen, Claus Storgaard
2015-01-01
DNA replication in cancer cells is accompanied by stalling and collapse of the replication fork and signalling in response to DNA damage and/or premature mitosis; these processes are collectively known as 'replicative stress'. Progress is being made to increase our understanding of the mechanisms...
Bayesian model selection in Gaussian regression
Abramovich, Felix
2009-01-01
We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting estimator. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for "nearly-orthogonal" and "multicollinear" designs.
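The penalized-least-squares view can be sketched by exhaustive subset search with a 2k·log(p)-type complexity penalty (an assumed penalty form for illustration, with known noise variance, on synthetic data):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, 0, 0, 1.5, 0, 0])   # only predictors 0 and 3 matter
y = X @ beta_true + rng.normal(size=n)

def rss(cols):
    """Residual sum of squares for the least-squares fit on a subset."""
    if not cols:
        return float(y @ y)
    Xs = X[:, list(cols)]
    b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r = y - Xs @ b
    return float(r @ r)

# Penalized least squares: RSS plus a complexity penalty growing with
# model size k -- one frequentist face of a prior on the model size.
def select(sigma2=1.0):
    best = min(
        (c for k in range(p + 1) for c in itertools.combinations(range(p), k)),
        key=lambda c: rss(c) + 2 * sigma2 * len(c) * np.log(p),
    )
    return sorted(best)

print(select())
```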
Polynomial Regressions and Nonsense Inference
Directory of Open Access Journals (Sweden)
Daniel Ventosa-Santaulària
2013-11-01
Full Text Available Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis and psychology, just to mention a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips' results (Phillips, P. Understanding spurious regressions in econometrics. J. Econom. 1986, 33, 311–340) by proving that an inference drawn from polynomial specifications, under stochastic nonstationarity, is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, lead to a call to be cautious whenever practitioners estimate polynomial regressions.
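The nonsense-inference phenomenon is easy to reproduce: regressing one independent random walk on another yields t-statistics that reject far more often than the nominal level (a sketch of the classic spurious-regression simulation, not the paper's generalized polynomial setup):

```python
import numpy as np

rng = np.random.default_rng(42)

def spurious_t(T=200):
    """t-statistic from regressing one independent random walk on another."""
    x = np.cumsum(rng.normal(size=T))           # nonstationary regressor
    y = np.cumsum(rng.normal(size=T))           # independent nonstationary y
    X = np.column_stack([np.ones(T), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    s2 = e @ e / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

tstats = np.array([spurious_t() for _ in range(200)])
reject = np.mean(np.abs(tstats) > 1.96)
print(reject)   # far above the nominal 5% level
```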
Producing The New Regressive Left
DEFF Research Database (Denmark)
Crone, Christine
to be a committed artist, and how that translates into supporting al-Assad’s rule in Syria; the Ramadan programme Harrir Aqlak’s attempt to relaunch an intellectual renaissance and to promote religious pluralism; and finally, al-Mayadeen’s cooperation with the pan-Latin American TV station TeleSur and its ambitions...... becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political, strategic resistance...... coalition (Iran, Hizbollah, Syria), capitalises on a series of factors that bring them together in spite of their otherwise diverse worldviews and agendas. The New Regressive Left is united by resistance against the growing influence of Saudi Arabia in the religious, cultural, political, economic...
Quantile Regression With Measurement Error
Wei, Ying
2009-08-27
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
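The attenuation and its correction can be sketched in the simpler mean-regression setting (not Wei's joint quantile estimating equations); here the reliability ratio is computed from the simulated truth, whereas in practice it must be estimated from validation or replicate data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
x = rng.normal(size=n)                    # true covariate
w = x + rng.normal(scale=1.0, size=n)     # error-prone measurement of x
y = 2.0 * x + rng.normal(size=n)          # outcome depends on the truth

def slope(a, b):
    return float(np.cov(a, b)[0, 1] / np.var(a, ddof=1))

naive = slope(w, y)                       # attenuated toward zero
lam = np.var(x, ddof=1) / np.var(w, ddof=1)   # reliability ratio (known here)
corrected = naive / lam                   # classical regression calibration
print(naive, corrected)
```

With equal signal and error variances the reliability ratio is about 0.5, so the naive slope is roughly half the true one and the calibration undoes the shrinkage.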
Heteroscedasticity checks for regression models
Institute of Scientific and Technical Information of China (English)
ZHU; Lixing
2001-01-01
Clustered regression with unknown clusters
Barman, Kishor
2011-01-01
We consider a collection of prediction experiments, which are clustered in the sense that groups of experiments exhibit similar relationship between the predictor and response variables. The experiment clusters as well as the regression relationships are unknown. The regression relationships define the experiment clusters, and in general, the predictor and response variables may not exhibit any clustering. We call this prediction problem clustered regression with unknown clusters (CRUC) and in this paper we focus on linear regression. We study and compare several methods for CRUC, demonstrate their applicability to the Yahoo Learning-to-rank Challenge (YLRC) dataset, and investigate an associated mathematical model. CRUC is at the crossroads of many prior works and we study several prediction algorithms with diverse origins: an adaptation of the expectation-maximization algorithm, an approach inspired by K-means clustering, the singular value thresholding approach to matrix rank minimization u...
General regression and representation model for classification.
Directory of Open Access Journals (Sweden)
Jianjun Qian
Full Text Available Recently, the regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
Robust nonlinear regression in applications
Lim, Changwon; Sen, Pranab K.; Peddada, Shyamal D.
2013-01-01
Robust statistical methods, such as M-estimators, are needed for nonlinear regression models because of the presence of outliers/influential observations and heteroscedasticity. Outliers and influential observations are commonly observed in many applications, especially in toxicology and agricultural experiments. For example, dose response studies, which are routinely conducted in toxicology and agriculture, sometimes result in potential outliers, especially in the high dose gr...
Astronomical Methods for Nonparametric Regression
Steinhardt, Charles L.; Jermyn, Adam
2017-01-01
I will discuss commonly used techniques for nonparametric regression in astronomy. We find that several of them, particularly running averages and running medians, are generically biased, asymmetric between dependent and independent variables, and perform poorly in recovering the underlying function, even when errors are present only in one variable. We then examine less-commonly used techniques such as Multivariate Adaptive Regression Splines and Boosted Trees and find them superior in bias, asymmetry, and variance both theoretically and in practice under a wide range of numerical benchmarks. In this context the chief advantage of the common techniques is runtime, which even for large datasets is now measured in microseconds compared with milliseconds for the more statistically robust techniques. This points to a tradeoff between bias, variance, and computational resources which in recent years has shifted heavily in favor of the more advanced methods, primarily driven by Moore's Law. Along these lines, we also propose a new algorithm which has better overall statistical properties than all techniques examined thus far, at the cost of significantly worse runtime, in addition to providing guidance on choosing the nonparametric regression technique most suitable to any specific problem. We then examine the more general problem of errors in both variables and provide a new algorithm which performs well in most cases and lacks the clear asymmetry of existing non-parametric methods, which fail to account for errors in both variables.
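The boundary bias of a running average is easy to exhibit on a noise-free curve: in the interior the window is symmetric, while at the endpoints it is one-sided and drags the estimate toward interior values (a minimal illustration, not the paper's benchmarks):

```python
import numpy as np

x = np.linspace(0, 1, 101)
y = x ** 2        # noise-free curve: any error in the fit is pure method bias

def running_mean(y, half=10):
    """Moving average with a window of +/- `half` points, clipped at edges."""
    return np.array([y[max(0, i - half):i + half + 1].mean()
                     for i in range(len(y))])

fit = running_mean(y)
# Interior: small smoothing bias. Boundary: the clipped, one-sided window
# pulls the estimate toward interior values and misses the endpoint.
print(abs(fit[50] - y[50]), abs(fit[-1] - y[-1]))
```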
Nanoscale topographical replication of graphene architecture by manufactured DNA nanostructures
Moon, Youngkwon; Shin, Jihoon; Seo, Soonbeom; Park, Sung Ha; Ahn, Joung Real
2015-03-01
Despite many studies on how geometry can be used to control the electronic properties of graphene, certain limitations to fabrication of designed graphene nanostructures exist. Here, we demonstrate controlled topographical replication of graphene by artificial deoxyribonucleic acid (DNA) nanostructures. Owing to the high degree of geometrical freedom of DNA nanostructures, we controlled the nanoscale topography of graphene. The topography of graphene replicated from DNA nanostructures showed enhanced thermal stability and revealed an interesting negative temperature coefficient of sheet resistivity when underlying DNA nanostructures were denatured at high temperatures.
Genetics Home Reference: caudal regression syndrome
Caudal regression syndrome is a disorder that impairs the development ...
An introduction to using Bayesian linear regression with clinical data.
Baldwin, Scott A; Larson, Michael J
2017-11-01
Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
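A minimal sketch of Bayesian linear regression with a conjugate Gaussian prior and known noise variance, where the posterior is available in closed form (synthetic data; analyses like the article's would typically use MCMC and richer priors):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=n)   # true slope 0.5
X = np.column_stack([np.ones(n), x])

# Conjugate prior beta ~ N(0, tau2 * I) with known noise variance sigma2:
# the posterior is Gaussian with closed-form mean and covariance
# (a ridge-like shrinkage of the least-squares estimate).
tau2, sigma2 = 1.0, 1.0
prior_prec = np.eye(2) / tau2
post_cov = np.linalg.inv(prior_prec + X.T @ X / sigma2)
post_mean = post_cov @ (X.T @ y / sigma2)

# 95% credible interval for the slope from the Gaussian posterior.
lo = post_mean[1] - 1.96 * np.sqrt(post_cov[1, 1])
hi = post_mean[1] + 1.96 * np.sqrt(post_cov[1, 1])
print(post_mean, (lo, hi))
```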
Replication of micro and nano surface geometries
DEFF Research Database (Denmark)
Hansen, Hans Nørgaard; Hocken, R.J.; Tosello, Guido
2011-01-01
: manufacture of net-shape micro/nano surfaces, tooling (i.e. master making), and surface quality control (metrology, inspection). Replication processes and methods as well as the metrology of surfaces to determine the degree of replication are presented and classified. Examples from various application areas...... are given including replication for surface texture measurements, surface roughness standards, manufacture of micro and nano structured functional surfaces, replicated surfaces for optical applications (e.g. optical gratings), and process chains based on combinations of repeated surface replication steps....
Replication of prions in differentiated muscle cells.
Herbst, Allen; Aiken, Judd M; McKenzie, Debbie
2014-01-01
We have demonstrated that prions accumulate to high levels in non-proliferative C2C12 myotubes. C2C12 cells replicate as myoblasts but can be differentiated into myotubes. Earlier studies indicated that C2C12 myoblasts are not competent for prion replication. (1) We confirmed that observation and demonstrated, for the first time, that while replicative myoblasts do not accumulate PrP(Sc), differentiated post-mitotic myotube cultures replicate prions robustly. Here we extend our observations and describe the implication and utility of this system for replicating prions.
Directory of Open Access Journals (Sweden)
Cameron Palmer
2017-07-01
Full Text Available Genome-wide association studies (GWAS) have identified hundreds of SNPs responsible for variation in human quantitative traits. However, genome-wide-significant associations often fail to replicate across independent cohorts, in apparent inconsistency with their strong effects in discovery cohorts. This limited success of replication raises pervasive questions about the utility of the GWAS field. We identify all 332 studies of quantitative traits from the NHGRI-EBI GWAS Database with attempted replication. We find that the majority of studies provide insufficient data to evaluate replication rates. The remaining papers replicate significantly worse than expected (p < 10^-14), even when adjusting for regression-to-the-mean of effect size between discovery and replication cohorts, termed the Winner's Curse (p < 10^-16). We show this is due in part to misreporting replication cohort size as a maximum number, rather than a per-locus one. In 39 studies accurately reporting per-locus cohort size for attempted replication of 707 loci in samples with similar ancestry, the replication rate matched expectation (predicted 458, observed 457, p = 0.94). In contrast, ancestry differences between replication and discovery (13 studies, 385 loci) cause the most highly-powered decile of loci to replicate worse than expected, due to differences in linkage disequilibrium.
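The Winner's Curse mentioned here reflects a selection effect that is simple to simulate: conditioning on genome-wide significance inflates estimated effects above the truth (illustrative numbers, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
b, se = 0.10, 0.04     # hypothetical true effect and discovery standard error
z_gw = 5.45            # z threshold for genome-wide significance (~5e-8)

# Many independent discovery attempts at the same locus; only those
# crossing the significance threshold get reported ("winners").
est = b + se * rng.normal(size=200_000)
disc = est[est / se > z_gw]
print(disc.mean() / b)   # > 1: selected estimates overstate the true effect
```

Because only upwardly-fluctuated estimates clear the threshold, replication cohorts powered for the reported effect size are underpowered for the true one.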
Mission assurance increased with regression testing
Lang, R.; Spezio, M.
Knowing what to test is an important attribute in any testing campaign, especially when it has to be right or the mission could be in jeopardy. The New Horizons mission, developed and operated by the Johns Hopkins University Applied Physics Laboratory, received a planned major upgrade to their Mission Operations and Control (MOC) ground system architecture. Early in the mission planning it was recognized that the ground system platform would require an upgrade to assure continued support of technology used for spacecraft operations. With the planned update of the six-year-old operational ground architecture from Solaris 8 to Solaris 10, it was critical that the new architecture maintain critical operations and control functions. The New Horizons spacecraft is heading to its historic rendezvous with Pluto in July 2015 and then proceeding into the Kuiper Belt. This paper discusses the Independent Software Acceptance Testing (ISAT) Regression test campaign that played a critical role to assure the continued success of the New Horizons mission. The New Horizons ISAT process was designed to assure all the requirements were being met for the ground software functions developed to support the mission objectives. The ISAT team developed a test plan with a series of test case designs. The test objectives were to verify that the software developed from the requirements functioned as expected in the operational environment. As the test cases were developed and executed, a regression test suite was identified at the functional level. This regression test suite would serve as a crucial resource in assuring the operational system continued to function as required with such a large scale change being introduced. Some of the New Horizons ground software changes required modifications to the most critical functions of the operational software. Of particular concern was that the new MOC architecture (Solaris 10) is Intel-based and little-endian, while the legacy architecture (Solaris 8) was SPA...
Replicator dynamics in value chains
DEFF Research Database (Denmark)
Cantner, Uwe; Savin, Ivan; Vannuccini, Simone
2016-01-01
dynamics may revert its effect. In these regressive developments of market selection, firms with low fitness expand because of being integrated with highly fit partners, and the other way around; ii) allowing partner's switching within a value chain illustrates that periods of instability in the early...
Optorsim: A Grid Simulator for Studying Dynamic Data Replication Strategies
Bell, William H; Millar, A Paul; Capozza, Luigi; Stockinger, Kurt; Zini, Floriano
2003-01-01
Computational grids process large, computationally intensive problems on small data sets. In contrast, data grids process large computational problems that in turn require evaluating, mining and producing large amounts of data. Replication, creating geographically disparate identical copies of data, is regarded as one of the major optimization techniques for reducing data access costs. In this paper, several replication algorithms are discussed. These algorithms were studied using the Grid simulator OptorSim. OptorSim provides a modular framework within which optimization strategies can be studied under different Grid configurations. The goal is to explore the stability and transient behaviour of selected optimization techniques. We detail the design and implementation of OptorSim and analyze various replication algorithms based on different Grid workloads.
DNA replication stress: causes, resolution and disease.
Mazouzi, Abdelghani; Velimezi, Georgia; Loizou, Joanna I
2014-11-15
DNA replication is a fundamental process of the cell that ensures accurate duplication of the genetic information and subsequent transfer to daughter cells. Various perturbations, originating from endogenous or exogenous sources, can interfere with proper progression and completion of the replication process, thus threatening genome integrity. Coordinated regulation of replication and the DNA damage response is therefore fundamental to counteract these challenges and ensure accurate synthesis of the genetic material under conditions of replication stress. In this review, we summarize the main sources of replication stress and the DNA damage signaling pathways that are activated in order to preserve genome integrity during DNA replication. We also discuss the association of replication stress and DNA damage in human disease and future perspectives in the field. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Replication Stress: A Lifetime of Epigenetic Change
Directory of Open Access Journals (Sweden)
Simran Khurana
2015-09-01
Full Text Available DNA replication is essential for cell division. Challenges to the progression of DNA polymerase can result in replication stress, promoting the stalling and ultimately collapse of replication forks. The latter involves the formation of DNA double-strand breaks (DSBs and has been linked to both genome instability and irreversible cell cycle arrest (senescence. Recent technological advances have elucidated many of the factors that contribute to the sensing and repair of stalled or broken replication forks. In addition to bona fide repair factors, these efforts highlight a range of chromatin-associated changes at and near sites of replication stress, suggesting defects in epigenome maintenance as a potential outcome of aberrant DNA replication. Here, we will summarize recent insight into replication stress-induced chromatin reorganization and will speculate on possible adverse effects for gene expression, nuclear integrity and, ultimately, cell function.
Replication-selective oncolytic viruses in the treatment of cancer.
Everts, Bart; van der Poel, Henk G
2005-02-01
In the search for novel strategies, oncolytic virotherapy has recently emerged as a viable approach to specifically kill tumor cells. Unlike conventional gene therapy, it uses replication competent viruses that are able to spread through tumor tissue by virtue of viral replication and concomitant cell lysis. Recent advances in molecular biology have allowed the design of several genetically modified viruses, such as adenovirus and herpes simplex virus, that specifically replicate in, and kill, tumor cells. On the other hand, viruses with intrinsic oncolytic capacity are also being evaluated for therapeutic purposes. In this review, an overview is given of the general mechanisms and genetic modifications by which these viruses achieve tumor cell-specific replication and antitumor efficacy. However, although the oncolytic efficacy of these approaches has generally been demonstrated in preclinical studies, the therapeutic efficacy in clinical trials is still not optimal. Therefore, strategies are evaluated that could further enhance the oncolytic potential of conditionally replicating viruses. In this respect, the use of tumor-selective viruses in conjunction with other standard therapies seems most promising. However, several hurdles regarding clinical limitations and safety issues should still be overcome before this mode of therapy can become of clinical relevance.
Making Connections: Replicating and Extending the Utility Value Intervention in the Classroom
Hulleman, Chris S.; Kosovich, Jeff J.; Barron, Kenneth E.; Daniel, David B.
2017-01-01
We replicated and extended prior research investigating a theoretically guided intervention based on expectancy-value theory designed to enhance student learning outcomes (e.g., Hulleman & Harackiewicz, 2009). First, we replicated prior work by demonstrating that the utility value intervention, which manipulated whether students made…
On Weighted Support Vector Regression
DEFF Research Database (Denmark)
Han, Xixuan; Clemmensen, Line Katrine Harder
2014-01-01
We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF‐weights). This procedure directly...... the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...
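The OF-weights described above, which weight the slack variables in the SVR objective, can be approximated with scikit-learn's `sample_weight` argument, which rescales the per-sample slack penalty. This is a minimal illustrative sketch (not the authors' implementation); the weighting scheme emphasizing later observations is a hypothetical stand-in for the local time/space dependencies mentioned in the abstract.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 6, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.2, 80)

# Hypothetical scheme: weights grow with x, emphasizing "nearby" observations.
w = np.linspace(0.1, 2.0, 80)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.05)
svr.fit(X, y, sample_weight=w)  # sample_weight rescales each slack penalty
pred = svr.predict(X)
```

In terms of the paper's distinction, `sample_weight` acts on the objective-function side (OF-weights) rather than on the kernel side.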
Gmur, Stephan; Vogt, Daniel; Zabowski, Darlene; Moskal, L Monika
2012-01-01
The characterization of soil attributes using hyperspectral sensors has revealed patterns in soil spectra that are known to respond to mineral composition, organic matter, soil moisture and particle size distribution. Soil samples from different soil horizons of replicated soil series from sites located within Washington and Oregon were analyzed with the FieldSpec Spectroradiometer to measure their spectral signatures across the electromagnetic range of 400 to 1,000 nm. Similarity rankings of individual soil samples reveal differences between replicate series as well as samples within the same replicate series. Using classification and regression tree statistical methods, regression trees were fitted to each spectral response using concentrations of nitrogen, carbon, carbonate and organic matter as the response variables. The fitted trees yielded R² values of 0.91 for nitrogen and 0.98 for organic matter, demonstrating the ability to estimate attributes such as organic matter for upper soil horizons in a nondestructive method.
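The regression-tree fits described above can be sketched with scikit-learn's CART implementation. The data below are synthetic stand-ins (random "bands" with two informative wavelengths driving a nitrogen response), not the study's spectra; only the modeling technique matches the abstract.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
# Hypothetical data: 200 samples x 20 spectral bands; nitrogen driven by two bands.
bands = rng.uniform(0, 1, (200, 20))
nitrogen = 3.0 * bands[:, 4] - 1.5 * bands[:, 12] + rng.normal(0, 0.1, 200)

# CART regression tree, analogous to the fitted trees in the abstract.
tree = DecisionTreeRegressor(max_depth=4, random_state=0)
tree.fit(bands, nitrogen)
r2 = tree.score(bands, nitrogen)  # in-sample R^2 of the fitted tree
```

The study reports per-attribute R² from such trees; here `r2` plays that role for the synthetic nitrogen response.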
Learning Inverse Rig Mappings by Nonlinear Regression.
Holden, Daniel; Saito, Jun; Komura, Taku
2016-11-11
We present a framework to design inverse rig functions: functions that map low level representations of a character's pose, such as joint positions or surface geometry, to the representation used by animators, called the animation rig. Animators design scenes using an animation rig, a framework widely adopted in animation production which allows animators to design character poses and geometry via intuitive parameters and interfaces. Yet most state-of-the-art computer animation techniques control characters through raw, low level representations such as joint angles, joint positions, or vertex coordinates. This difference often hinders the adoption of state-of-the-art techniques in animation production. Our framework solves this issue by learning a mapping between the low level representations of the pose and the animation rig. We use nonlinear regression techniques, learning from example animation sequences designed by the animators. When new motions are provided in the skeleton space, the learned mapping is used to estimate the rig controls that reproduce such a motion. We introduce two nonlinear functions for producing such a mapping: Gaussian process regression and feedforward neural networks. The appropriate solution depends on the nature of the rig and the amount of data available for training. We show our framework applied to various examples including articulated biped characters, quadruped characters, facial animation rigs, and deformable characters. With our system, animators have the freedom to apply any motion synthesis algorithm to arbitrary rigging and animation pipelines for immediate editing. This greatly improves the productivity of 3D animation, while retaining the flexibility and creativity of artistic input.
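One of the two mappings named above, Gaussian process regression, reduces to kernel-weighted interpolation of the training pairs. The sketch below uses a toy one-dimensional pose-to-rig pair (a hypothetical "joint height to rig slider" mapping, not the paper's rigs) to show the GP posterior-mean computation in plain numpy.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(2)
# Hypothetical training pairs: joint height -> rig slider value.
joints = np.linspace(0, 3, 25)
rig = np.tanh(joints - 1.5) + rng.normal(0, 0.02, 25)

K = rbf(joints, joints) + 1e-4 * np.eye(25)  # kernel matrix + noise jitter
alpha = np.linalg.solve(K, rig)              # GP posterior weights

def predict(x_new):
    # Posterior mean of the rig control for new skeleton-space input.
    return rbf(x_new, joints) @ alpha

est = predict(np.array([1.5]))[0]  # true underlying value is tanh(0) = 0
```

In the paper's terms, `predict` plays the role of the learned inverse rig function evaluated on a new motion sample.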
Regression away from the mean: Theory and examples.
Schwarz, Wolf; Reike, Dennis
2017-06-30
Using a standard repeated measures model with arbitrary true score distribution and normal error variables, we present some fundamental closed-form results which explicitly indicate the conditions under which regression effects towards the mean (RTM) and away from the mean are expected. Specifically, we show that for skewed and bimodal distributions many or even most cases will show a regression effect that is in expectation away from the mean, or that is not just towards but actually beyond the mean. We illustrate our results in quantitative detail with typical examples from experimental and biometric applications, which exhibit a clear regression away from the mean ('egression from the mean') signature. We aim not to repeal cautionary advice against potential RTM effects, but to present a balanced view of regression effects, based on a clear identification of the conditions governing the form that regression effects take in repeated measures designs. © 2017 The British Psychological Society.
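The bimodal case described above is easy to reproduce by simulation. In this sketch (illustrative only; the distributions are chosen by us, not taken from the paper), true scores sit at two modes and cases measured just above the grand mean on the first occasion move, on average, further from the grand mean on retest: egression rather than regression.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
# Bimodal true scores: modes at -2 and +2; normal measurement error (sd = 1).
true = rng.choice([-2.0, 2.0], size=n)
x1 = true + rng.normal(0, 1, n)
x2 = true + rng.normal(0, 1, n)

# Select cases measured just above the grand mean (0) at time 1.
sel = (x1 > 0.0) & (x1 < 0.5)
m1, m2 = x1[sel].mean(), x2[sel].mean()
# m2 > m1: the retest mean lies further from the grand mean than the
# selection mean, i.e. regression *away* from the mean.
```

The intuition: a first measurement slightly above 0 is more likely to come from the +2 mode than the -2 mode, so the conditional expectation of the retest is pulled toward +2, past the selection band.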
Multiatlas segmentation as nonparametric regression.
Awate, Suyash P; Whitaker, Ross T
2014-09-01
This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems.
Al-Khatib, Issam A; Abu Fkhidah, Ismail; Khatib, Jumana I; Kontogianni, Stamatia
2016-03-01
Forecasting of hospital solid waste generation is a critical challenge for future planning. The proposed methodology of the present article was applied to the composition and generation rate of hospital solid waste in hospital units in order to validate the results and secure the outcomes of the management plan in national hospitals. A set of three multiple-variable regression models has been derived for estimating the daily total hospital waste, general hospital waste, and total hazardous waste as functions of the number of inpatients, number of total patients, and number of beds. The application of several key indicators and validation procedures indicates the high significance and reliability of the developed models in predicting the hospital solid waste of any hospital. Methodology data were drawn from the existing scientific literature, and useful raw data were retrieved from international organisations and the investigated hospitals' personnel. The generation outcomes are compared with those of other local hospitals and with hospitals in other countries. The main outcomes, the developed model results, are presented and analysed thoroughly. The goal is for this model to act as leverage in discussions among governmental authorities on the implementation of a national plan for safe hospital waste management in Palestine.
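A multiple-variable regression of daily waste on bed and patient counts, as described above, can be sketched with ordinary least squares. The data and coefficients below are entirely hypothetical (the study's data are not reproduced here); only the model form, waste as a function of beds, inpatients and total patients, follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 120
# Hypothetical hospital records: bed counts and derived patient loads.
beds = rng.integers(50, 400, n).astype(float)
inpatients = np.floor(beds * rng.uniform(0.5, 0.9, n))
total_patients = inpatients + rng.integers(20, 200, n)

# Hypothetical generation process: ~1.2 kg/day per inpatient plus a per-bed term.
waste = 1.2 * inpatients + 0.3 * beds + rng.normal(0, 10, n)

# Ordinary least squares fit of the multiple-variable model.
X = np.column_stack([np.ones(n), beds, inpatients, total_patients])
coef, *_ = np.linalg.lstsq(X, waste, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((waste - pred) ** 2) / np.sum((waste - waste.mean()) ** 2)
```

In practice the validation indicators mentioned in the abstract (significance tests, out-of-sample checks) would be applied on top of such a fit.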
DNA Replication via Entanglement Swapping
Pusuluk, Onur
2010-01-01
Quantum effects are mainly used for the determination of molecular shapes in molecular biology, but quantum information theory may be a more useful tool to understand the physics of life. Molecular biology assumes that function is explained by structure, the complementary geometries of molecules and weak intermolecular hydrogen bonds. However, both this assumption and its converse are possible if organic molecules and quantum circuits/protocols are considered as hardware and software of living systems that are co-optimized during evolution. In this paper, we try to model DNA replication as multiparticle entanglement swapping with a reliable qubit representation of nucleotides. In the model, molecular recognition of a nucleotide triggers an intrabase entanglement corresponding to a superposition state of different tautomer forms. Then, base pairing occurs by swapping intrabase entanglements with interbase entanglements.
Therapeutic targeting of replicative immortality.
Yaswen, Paul; MacKenzie, Karen L; Keith, W Nicol; Hentosh, Patricia; Rodier, Francis; Zhu, Jiyue; Firestone, Gary L; Matheu, Ander; Carnero, Amancio; Bilsland, Alan; Sundin, Tabetha; Honoki, Kanya; Fujii, Hiromasa; Georgakilas, Alexandros G; Amedei, Amedeo; Amin, Amr; Helferich, Bill; Boosani, Chandra S; Guha, Gunjan; Ciriolo, Maria Rosa; Chen, Sophie; Mohammed, Sulma I; Azmi, Asfar S; Bhakta, Dipita; Halicka, Dorota; Niccolai, Elena; Aquilano, Katia; Ashraf, S Salman; Nowsheen, Somaira; Yang, Xujuan
2015-12-01
One of the hallmarks of malignant cell populations is the ability to undergo continuous proliferation. This property allows clonal lineages to acquire sequential aberrations that can fuel increasingly autonomous growth, invasiveness, and therapeutic resistance. Innate cellular mechanisms have evolved to regulate replicative potential as a hedge against malignant progression. When activated in the absence of normal terminal differentiation cues, these mechanisms can result in a state of persistent cytostasis. This state, termed "senescence," can be triggered by intrinsic cellular processes such as telomere dysfunction and oncogene expression, and by exogenous factors such as DNA damaging agents or oxidative environments. Despite differences in upstream signaling, senescence often involves convergent interdependent activation of tumor suppressors p53 and p16/pRB, but can be induced, albeit with reduced sensitivity, when these suppressors are compromised. Doses of conventional genotoxic drugs required to achieve cancer cell senescence are often much lower than doses required to achieve outright cell death. Additional therapies, such as those targeting cyclin dependent kinases or components of the PI3K signaling pathway, may induce senescence specifically in cancer cells by circumventing defects in tumor suppressor pathways or exploiting cancer cells' heightened requirements for telomerase. Such treatments sufficient to induce cancer cell senescence could provide increased patient survival with fewer and less severe side effects than conventional cytotoxic regimens. This positive aspect is countered by important caveats regarding senescence reversibility, genomic instability, and paracrine effects that may increase heterogeneity and adaptive resistance of surviving cancer cells. Nevertheless, agents that effectively disrupt replicative immortality will likely be valuable components of new combinatorial approaches to cancer therapy. Copyright © 2015 The Authors
Nonparametric Least Squares Estimation of a Multivariate Convex Regression Function
Seijo, Emilio
2010-01-01
This paper deals with the consistency of the least squares estimator of a convex regression function when the predictor is multidimensional. We characterize and discuss the computation of such an estimator via the solution of certain quadratic and linear programs. Mild sufficient conditions for the consistency of this estimator and its subdifferentials in fixed and stochastic design regression settings are provided. We also consider a regression function which is known to be convex and componentwise nonincreasing and discuss the characterization, computation and consistency of its least squares estimator.
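For a one-dimensional design on a uniform grid, the least squares estimator discussed above reduces to a quadratic program: minimize the sum of squared residuals subject to nonnegative second differences (convexity). This is an illustrative sketch of that formulation using scipy's SLSQP solver, not the paper's computational procedure, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(-2, 2, 30)           # uniform design points
y = x ** 2 + rng.normal(0, 0.2, 30)  # noisy observations of a convex function

def sse(g):
    # Least squares objective over the fitted values g.
    return np.sum((g - y) ** 2)

# Convexity on a uniform grid: nonnegative second differences of g.
cons = {"type": "ineq", "fun": lambda g: g[2:] - 2 * g[1:-1] + g[:-2]}
res = minimize(sse, x0=y, constraints=cons, method="SLSQP")
g_hat = res.x
second_diff = g_hat[2:] - 2 * g_hat[1:-1] + g_hat[:-2]
```

In the multivariate setting the paper treats, the analogous program constrains supporting hyperplanes at each design point rather than second differences.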
Replication initiatives will not salvage the trustworthiness of psychology.
Coyne, James C
2016-05-31
Replication initiatives in psychology continue to gather considerable attention from far outside the field, as well as controversy from within. Some accomplishments of these initiatives are noted, but this article focuses on why they do not provide a general solution for what ails psychology. There are inherent limitations to mass replications ever being conducted in many areas of psychology, both in terms of their practicality and their prospects for improving the science. Unnecessary compromises were built into the ground rules for design and publication of the Open Science Collaboration: Psychology that undermine its effectiveness. Some ground rules could actually be flipped into guidance for how not to conduct replications. Greater adherence to best publication practices, transparency in the design and publishing of research, strengthening of independent post-publication peer review and firmer enforcement of rules about data sharing and declarations of conflict of interest would make many replications unnecessary. Yet, it has been difficult to move beyond simple endorsement of these measures to consistent implementation. Given the strong institutional support for questionable publication practices, progress will depend on effective individual and collective use of social media to expose lapses and demand reform. Some recent incidents highlight the necessity of this.
Thota, S; Khan, S M; Tippabhotla, S K; Battula, R; Gadiko, C; Vobalaboina, V
2013-11-01
Open-label, 2-treatment, 3-sequence, 3-period, single-dose, partial replicate crossover studies under fasting (n=48), fed (n=60) and fasting-applesauce (n=48) (sprinkled on one tablespoonful of applesauce) modalities were conducted in healthy adult male volunteers to evaluate bioequivalence between 2 formulations of lansoprazole delayed release capsules 30 mg. In all 3 studies, as per randomization, either test or reference formulations were administered in a crossover manner with a required washout period of at least 7 days. Blood samples were collected adequately (0-24 h) to determine lansoprazole plasma concentrations using a validated LC-MS/MS analytical method. To characterize the pharmacokinetic parameters (Cmax, AUC0-t, AUC0-∞, Tmax, Kel and T1/2) of lansoprazole, non-compartmental analysis and ANOVA were applied on ln-transformed values. The bioequivalence was tested based on within-subject variability of the reference formulation. In the fasting and fed studies (within-subject variability >30%) bioequivalence was evaluated with scaled average bioequivalence; hence for the pharmacokinetic parameters Cmax, AUC0-t and AUC0-∞, the 95% upper confidence bound for (μT − μR)² − θ·σ²WR was ≤0, and the point estimates (test-to-reference ratio) were within the regulatory acceptance limit 80.00-125.00%. In the fasting-applesauce study (within-subject variability <30%) bioequivalence was evaluated with average bioequivalence; the 90% CI of ln-transformed data of Cmax, AUC0-t and AUC0-∞ were within the regulatory acceptance limit 80.00-125.00%. Based on these statistical inferences, it was concluded that the test formulation is bioequivalent to the reference formulation.
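The scaled average bioequivalence criterion used above, (μT − μR)² − θ·σ²WR ≤ 0 with θ = (ln 1.25 / 0.25)², can be illustrated numerically. The summary statistics below are hypothetical (not the study's values), and for brevity the criterion is evaluated at its point estimate rather than via the 95% upper confidence bound the regulation actually requires.

```python
import math

# Hypothetical summary statistics on the log scale (not from the study):
mean_diff = 0.05  # estimate of mu_T - mu_R for ln(Cmax)
s_wr = 0.35       # within-subject SD of the reference; >0.294 => highly variable

theta = (math.log(1.25) / 0.25) ** 2        # regulatory constant, about 0.797
criterion = mean_diff ** 2 - theta * s_wr ** 2  # scaled ABE criterion (point form)
point_estimate = math.exp(mean_diff)        # test/reference ratio on original scale

# Pass requires a nonpositive criterion AND a point estimate within 0.80-1.25.
passes = criterion <= 0 and 0.80 <= point_estimate <= 1.25
```

With larger within-subject variability of the reference, the θ·σ²WR term grows and the criterion becomes easier to satisfy, which is the intended widening of limits for highly variable drugs.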
Prediction, Regression and Critical Realism
DEFF Research Database (Denmark)
Næss, Petter
2004-01-01
This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social...... of prediction necessary and possible in spatial planning of urban development. Finally, the political implications of positions within theory of science rejecting the possibility of predictions about social phenomena are addressed....... phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly...
Nonparametric Regression with Common Shocks
Directory of Open Access Journals (Sweden)
Eduardo A. Souza-Rodrigues
2016-09-01
Full Text Available This paper considers a nonparametric regression model for cross-sectional data in the presence of common shocks. Common shocks are allowed to be very general in nature; they do not need to be finite dimensional with a known (small number of factors. I investigate the properties of the Nadaraya-Watson kernel estimator and determine how general the common shocks can be while still obtaining meaningful kernel estimates. Restrictions on the common shocks are necessary because kernel estimators typically manipulate conditional densities, and conditional densities do not necessarily exist in the present case. By appealing to disintegration theory, I provide sufficient conditions for the existence of such conditional densities and show that the estimator converges in probability to the Kolmogorov conditional expectation given the sigma-field generated by the common shocks. I also establish the rate of convergence and the asymptotic distribution of the kernel estimator.
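The Nadaraya-Watson estimator analyzed above is a kernel-weighted average of the observed responses. This minimal sketch implements it in numpy on a simple noiseless signal (illustrative only; the common-shock structure studied in the paper is not modeled here).

```python
import numpy as np

def nadaraya_watson(x_query, x_obs, y_obs, h=0.2):
    # Gaussian kernel weights; the estimate is a weighted average of y_obs.
    w = np.exp(-0.5 * ((x_query[:, None] - x_obs[None, :]) / h) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

x_obs = np.linspace(0, 1, 200)
y_obs = 2.0 * x_obs  # simple linear signal so the target value is known
est = nadaraya_watson(np.array([0.5]), x_obs, y_obs)[0]  # true value is 1.0
```

Under common shocks, the paper shows this same estimator converges to the conditional expectation given the shock sigma-field, provided the conditional densities it implicitly manipulates exist.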
Practical Session: Multiple Linear Regression
Clausel, M.; Grégoire, G.
2014-12-01
Three exercises are proposed to illustrate simple linear regression. The first investigates the influence of several factors on atmospheric pollution. It has been proposed by D. Chessel and A.B. Dufour at Lyon 1 (see Sect. 6 of http://pbil.univ-lyon1.fr/R/pdf/tdr33.pdf) and is based on data from 20 U.S. cities. Exercise 2 is an introduction to model selection, whereas Exercise 3 provides a first example of analysis of variance. Exercises 2 and 3 have been proposed by A. Dalalyan at ENPC (see Exercises 2 and 3 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_5.pdf).
Lumbar herniated disc: spontaneous regression
Yüksel, Kasım Zafer
2017-01-01
Background: Low back pain is a frequent condition that results in substantial disability and causes admission of patients to neurosurgery clinics. The aim was to evaluate and present the therapeutic outcomes in lumbar disc hernia (LDH) patients treated by a conservative approach consisting of bed rest and medical therapy. Methods: This retrospective cohort study was carried out in the neurosurgery departments of hospitals in Kahramanmaraş city; 23 patients diagnosed with LDH at the levels of L3−L4, L4−L5 or L5−S1 were enrolled. Results: The average age was 38.4 ± 8.0 years and the chief complaint was low back pain and sciatica radiating to one or both lower extremities. Conservative treatment was administered. Neurological examination findings, durations of treatment and intervals until symptomatic recovery were recorded. Lasègue tests and neurosensory examination revealed that mild neurological deficits existed in 16 of our patients. Previously, 5 patients had received physiotherapy and 7 patients had been on medical treatment. The numbers of patients with LDH at the levels of L3−L4, L4−L5, and L5−S1 were 1, 13, and 9, respectively. All patients reported that they had benefited from medical treatment and bed rest, and radiologic improvement was observed simultaneously on MRI scans. The average duration until symptomatic recovery and/or regression of LDH symptoms was 13.6 ± 5.4 months (range: 5−22). Conclusions: It should be kept in mind that lumbar disc hernias can regress with medical treatment and rest without surgery, and there should be an awareness that these patients can recover radiologically. This must be taken into account during decision making for surgical intervention in LDH patients devoid of indications for emergent surgery. PMID:28119770
Credit Scoring Problem Based on Regression Analysis
Khassawneh, Bashar Suhil Jad Allah
2014-01-01
ABSTRACT: This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple and logistic regression models. The aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear and logistic regression models, and to analyze the resulting model functions with statistical tools. Keywords: Data mining, linear regression, logistic regression....
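Of the models named above, logistic regression is the standard choice for scoring a binary default outcome. This sketch fits one by plain gradient ascent on the log-likelihood, using entirely synthetic applicant features (the thesis's data and variables are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
# Hypothetical applicant features (standardized): income and debt ratio.
income = rng.normal(0, 1, n)
debt = rng.normal(0, 1, n)
logit = 2.0 * income - 2.0 * debt  # hypothetical true scoring model
default = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit logistic regression by gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), income, debt])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (default - p) / n  # gradient of mean log-likelihood

# Classify at the 0.5 probability threshold and measure training accuracy.
acc = np.mean(((1 / (1 + np.exp(-X @ w))) > 0.5) == (default == 1))
```

A credit scorecard would typically transform the fitted log-odds `X @ w` into points rather than using the raw probabilities directly.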
Regulation of Unperturbed DNA Replication by Ubiquitylation
Directory of Open Access Journals (Sweden)
Sara Priego Moreno
2015-06-01
Full Text Available Posttranslational modification of proteins by means of attachment of the small globular protein ubiquitin (i.e., ubiquitylation) represents one of the most abundant and versatile mechanisms of protein regulation employed by eukaryotic cells. Ubiquitylation influences almost every cellular process and its key role in coordination of the DNA damage response is well established. In this review we focus, however, on the ways ubiquitylation controls the process of unperturbed DNA replication. We summarise the accumulated knowledge showing the leading role of ubiquitin-driven protein degradation in setting up conditions favourable for replication origin licensing and S-phase entry. Importantly, we also present the emerging major role of ubiquitylation in coordination of the active DNA replication process: preventing re-replication, regulating the progression of DNA replication forks, chromatin re-establishment and disassembly of the replisome at the termination of replication forks.
Chromosome replication and segregation in bacteria.
Reyes-Lamothe, Rodrigo; Nicolas, Emilien; Sherratt, David J
2012-01-01
In dividing cells, chromosome duplication once per generation must be coordinated with faithful segregation of newly replicated chromosomes and with cell growth and division. Many of the mechanistic details of bacterial replication elongation are well established. However, an understanding of the complexities of how replication initiation is controlled and coordinated with other cellular processes is emerging only slowly. In contrast to eukaryotes, in which replication and segregation are separate in time, the segregation of most newly replicated bacterial genetic loci occurs sequentially soon after replication. We compare the strategies used by chromosomes and plasmids to ensure their accurate duplication and segregation and discuss how these processes are coordinated spatially and temporally with growth and cell division. We also describe what is known about the three conserved families of ATP-binding proteins that contribute to chromosome segregation and discuss their inter-relationships in a range of disparate bacteria.
DEFF Research Database (Denmark)
Volf, Mette
This publication is unique in its demystification and operationalization of the complex and elusive nature of the design process. The publication portrays the designer’s daily work and the creative process, which the designer is a part of. Apart from displaying the designer’s work methods...... and design parameters, the publication shows examples from renowned Danish design firms. Through these examples the reader gets an insight into the designer’s reality....
Assessing and correcting for regression toward the mean in deviance-induced social conformity.
Schnuerch, Robert; Schnuerch, Martin; Gibbons, Henning
2015-01-01
Our understanding of the mechanisms underlying social conformity has recently advanced due to the employment of neuroscience methodology and novel experimental approaches. Most prominently, several studies have demonstrated the role of neural reinforcement-learning processes in conformal adjustments using a specifically designed and frequently replicated paradigm. Only very recently, the validity of the critical behavioral effect in this very paradigm was seriously questioned, as it invites the unwanted contribution of regression toward the mean. Using a straightforward control-group design, we corroborate this recent finding and demonstrate the involvement of statistical distortions. Additionally, however, we provide conclusive evidence that the paradigm nevertheless captures behavioral effects that can only be attributed to social influence. Finally, we present a mathematical approach that allows one to isolate and quantify the paradigm's true conformity effect both at the group level and for each individual participant. These data as well as relevant theoretical considerations suggest that the groundbreaking findings regarding the brain mechanisms of social conformity that were obtained with this recently criticized paradigm were indeed valid. Moreover, we support earlier suggestions that distorted behavioral effects can be rectified by means of appropriate correction procedures.
Semiconservative replication in the quasispecies model
Tannenbaum, Emmanuel; Deeds, Eric J.; Shakhnovich, Eugene I.
2004-06-01
This paper extends Eigen's quasispecies equations to account for the semiconservative nature of DNA replication. We solve the equations in the limit of infinite sequence length for the simplest case of a static, sharply peaked fitness landscape. We show that the error catastrophe occurs when μ, the product of sequence length and per-base-pair mismatch probability, exceeds 2 ln[2/(1 + 1/k)], where k > 1 is the first-order growth rate constant of the viable "master" sequence (with all other sequences having a first-order growth rate constant of 1). This is in contrast to the result of ln k for conservative replication. In particular, as k → ∞, the error catastrophe is never reached for conservative replication, while for semiconservative replication the critical μ approaches 2 ln 2. Semiconservative replication is therefore considerably less robust than conservative replication to the effect of replication errors. We also show that the mean equilibrium fitness of a semiconservatively replicating system is given by k(2e^(−μ/2) − 1) below the error catastrophe, in contrast to the standard result of k·e^(−μ) for conservative replication (derived by Kimura and Maruyama in 1966). From this result it is readily shown that semiconservative replication is necessary to account for the observation that, at sufficiently high mutagen concentrations, faster replicating cells will die more quickly than more slowly replicating cells. Thus, in contrast to Eigen's original model, the semiconservative quasispecies equations are able to provide a mathematical basis for explaining the efficacy of mutagens as chemotherapeutic agents.
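The claims above, that the semiconservative threshold 2 ln[2/(1 + 1/k)] saturates at 2 ln 2, lies below the conservative threshold ln k, and yields lower equilibrium fitness k(2e^(−μ/2) − 1) than the conservative k·e^(−μ), can be checked numerically (a direct evaluation of the stated formulas, nothing beyond them):

```python
import math

def semi_threshold(k):
    # Critical mu for semiconservative replication: 2 ln[2 / (1 + 1/k)]
    return 2 * math.log(2 / (1 + 1 / k))

def cons_threshold(k):
    # Classic conservative-replication result: ln k
    return math.log(k)

limit = semi_threshold(1e9)  # should approach 2 ln 2 as k -> infinity
gap = cons_threshold(10) - semi_threshold(10)  # conservative threshold is higher

# Mean equilibrium fitness below the catastrophe, at k = 10, mu = 1:
semi_fit = 10 * (2 * math.exp(-0.5) - 1)  # semiconservative
cons_fit = 10 * math.exp(-1.0)            # conservative
```

Both comparisons confirm the abstract's point: for any finite μ > 0 the semiconservative system is the more fragile one.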
Hypotension and Environmental Noise: A Replication Study
Lercher, Peter; Widmann, Ulrich; Thudium, Jürg
2014-01-01
Up to now, traffic noise effect studies focused on hypertension as health outcome. Hypotension has not been considered as a potential health outcome although in experiments some people also responded to noise with decreases of blood pressure. Currently, the characteristics of these persons are not known, and whether this down-regulation of blood pressure is an experimental artifact, selection, or can also be observed in population studies is unanswered. In a cross-sectional replication study, we randomly sampled participants (age 20–75, N = 807) from circular areas (radius = 500 m) around 31 noise measurement sites from four noise exposure strata (35–44, 45–54, 55–64, >64 Leq, dBA). Repeated blood pressure measurements were available for a smaller sample (N = 570). Standardized information on socio-demographics, housing, lifestyle and health was obtained by door-to-door visits including anthropometric measurements. Noise and air pollution exposure was assigned by GIS based on both calculation and measurements. Reported hypotension or hypotension medication in the past year was the main outcome studied. Exposure-effect relationships were modeled with multiple non-linear logistic regression techniques using separate noise estimations for total, highway and rail exposure. Reported hypotension was significantly associated with rail and total noise exposure and strongly modified by weather sensitivity. Reported hypotension medication showed associations of similar size with rail and total noise exposure without effect modification by weather sensitivity. The size of the associations in the smaller sample with BMI as additional covariate was similar. Other important cofactors (sex, age, BMI, health) and moderators (weather sensitivity, adjacent main roads and associated annoyance) need to be considered as an indispensable part of the observed relationship. This study confirms a potential new noise effect pathway and discusses potential patho-physiological routes of action.
Regulation of chromosomal replication in Caulobacter crescentus.
Collier, Justine
2012-03-01
The alpha-proteobacterium Caulobacter crescentus is characterized by its asymmetric cell division, which gives rise to a replicating stalked cell and a non-replicating swarmer cell. Thus, the initiation of chromosomal replication is tightly regulated, temporally and spatially, to ensure that it is coordinated with cell differentiation and cell cycle progression. Waves of DnaA and CtrA activities control when and where the initiation of DNA replication will take place in C. crescentus cells. The conserved DnaA protein initiates chromosomal replication by directly binding to sites within the chromosomal origin (Cori), ensuring that DNA replication starts once and only once per cell cycle. The CtrA response regulator represses the initiation of DNA replication in swarmer cells and in the swarmer compartment of pre-divisional cells, probably by competing with DnaA for binding to Cori. CtrA and DnaA are controlled by multiple redundant regulatory pathways that include DNA methylation-dependent transcriptional regulation, temporally regulated proteolysis and the targeting of regulators to specific locations within the cell. Besides being critical regulators of chromosomal replication, CtrA and DnaA are also master transcriptional regulators that control the expression of many genes, thus connecting DNA replication with other events of the C. crescentus cell cycle. Copyright © 2012 Elsevier Inc. All rights reserved.
Tannenbaum, Emmanuel
2008-01-01
This paper studies the mutation-selection balance in three simplified replication models. The first model considers a population of organisms replicating via the production of asexual spores. The second model considers a sexually replicating population that produces identical gametes. The third model considers a sexually replicating population that produces distinct sperm and egg gametes. All models assume diploid organisms whose genomes consist of two chromosomes, each of which is taken to be functional if equal to some master sequence, and defective otherwise. In the asexual population, the asexual diploid spores develop directly into adult organisms. In the sexual populations, the haploid gametes enter a haploid pool, where they may fuse with other haploids. The resulting immature diploid organisms then proceed to develop into mature organisms. Based on an analysis of all three models, we find that, as organism size increases, a sexually replicating population can only outcompete an asexually replicating population if the adult organisms produce distinct sperm and egg gametes. A sexual replication strategy that is based on the production of large numbers of sperm cells to fertilize a small number of eggs is found to be necessary in order to maintain a sufficiently low cost for sex for the strategy to be selected for over a purely asexual strategy. We discuss the usefulness of this model in understanding the evolution and maintenance of sexual replication as the preferred replication strategy in complex, multicellular organisms.
Regression Models for Count Data in R
Directory of Open Access Journals (Sweden)
Christian Kleiber
2008-06-01
Full Text Available The classical Poisson, geometric and negative binomial regression models for count data belong to the family of generalized linear models and are available at the core of the statistics toolbox in the R system for statistical computing. After reviewing the conceptual and computational features of these methods, a new implementation of hurdle and zero-inflated regression models in the functions hurdle() and zeroinfl() from the package pscl is introduced. It re-uses the design and functionality of the basic R functions, just as the underlying conceptual tools extend the classical models. Both hurdle and zero-inflated models are able to incorporate over-dispersion and excess zeros, two problems that typically occur in count data sets in economics and the social sciences, better than their classical counterparts. Using cross-section data on the demand for medical care, it is illustrated how the classical as well as the zero-augmented models can be fitted, inspected and tested in practice.
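The paper's implementation lives in R's pscl package; as a language-neutral illustration of what a zero-inflated count model actually is, the hypothetical Python sketch below evaluates the zero-inflated Poisson pmf, a mixture of a point mass at zero and an ordinary Poisson component:

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson pmf: with probability pi the count is a
    structural zero; otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    if y == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

def zip_loglik(data, lam, pi):
    """Log-likelihood of a sample of counts under the ZIP model."""
    return sum(math.log(zip_pmf(y, lam, pi)) for y in data)

# A zero-inflated sample has more zeros than Poisson(lam) alone predicts.
sample = [0, 0, 0, 0, 1, 2, 0, 3, 0, 1]
print(zip_loglik(sample, lam=1.5, pi=0.4))
```

Fitting then amounts to maximizing this log-likelihood over lam and pi (pscl's zeroinfl() additionally lets both parameters depend on covariates).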
Varying-coefficient functional linear regression
Wu, Yichao; Müller, Hans-Georg; 10.3150/09-BEJ231
2011-01-01
Functional linear regression analysis aims to model regression relations which include a functional predictor. The analog of the regression parameter vector or matrix in conventional multivariate or multiple-response linear regression models is a regression parameter function in one or two arguments. If, in addition, one has scalar predictors, as is often the case in applications to longitudinal studies, the question arises how to incorporate these into a functional regression model. We study a varying-coefficient approach where the scalar covariates are modeled as additional arguments of the regression parameter function. This extension of the functional linear regression model is analogous to the extension of conventional linear regression models to varying-coefficient models and shares its advantages, such as increased flexibility; however, the details of this extension are more challenging in the functional case. Our methodology combines smoothing methods with regularization by truncation at a finite number...
Robust Mediation Analysis Based on Median Regression
Yuan, Ying; MacKinnon, David P.
2014-01-01
Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
Nonparametric additive regression for repeatedly measured data
Carroll, R. J.
2009-05-20
We develop an easily computed smooth backfitting algorithm for additive model fitting in repeated measures problems. Our methodology easily copes with various settings, such as when some covariates are the same over repeated response measurements. We allow for a working covariance matrix for the regression errors, showing that our method is most efficient when the correct covariance matrix is used. The component functions achieve the known asymptotic variance lower bound for the scalar argument case. Smooth backfitting also leads directly to design-independent biases in the local linear case. Simulations show our estimator has smaller variance than the usual kernel estimator. This is also illustrated by an example from nutritional epidemiology. © 2009 Biometrika Trust.
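As a rough illustration of the backfitting idea behind such additive-model fits (a toy sketch, not the authors' smooth backfitting algorithm for repeated measures), each component function is repeatedly re-estimated by smoothing the partial residuals left by the other components:

```python
import numpy as np

def nw_smooth(x, y, h):
    """Nadaraya-Watson Gaussian-kernel smoother evaluated at the design points."""
    d = (x[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * d ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

def backfit_additive(x1, x2, y, h=0.3, iters=50):
    """Estimate f1, f2 in y = c + f1(x1) + f2(x2) + error by backfitting."""
    c = y.mean()
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    for _ in range(iters):
        f1 = nw_smooth(x1, y - c - f2, h)
        f1 -= f1.mean()          # centre each component for identifiability
        f2 = nw_smooth(x2, y - c - f1, h)
        f2 -= f2.mean()
    return c, f1, f2

rng = np.random.default_rng(0)
n = 300
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = 1.0 + np.sin(np.pi * x1) + x2 ** 2 - (x2 ** 2).mean() + 0.1 * rng.normal(size=n)
c, f1, f2 = backfit_additive(x1, x2, y)
```

The smooth backfitting estimator of the paper refines this classical scheme to achieve the asymptotic variance lower bound and design-independent bias; the alternating structure, however, is the same.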
Auborn, K. J.; Little, R. D.; Platt, T. H. K.; Vaccariello, M. A.; Schildkraut, C. L.
1994-07-01
We have examined the structures of replication intermediates from the human papillomavirus type 11 genome in DNA extracted from papilloma lesions (laryngeal papillomas). The sites of replication initiation and termination utilized in vivo were mapped by using neutral/neutral and neutral/alkaline two-dimensional agarose gel electrophoresis methods. Initiation of replication was detected in or very close to the upstream regulatory region (URR; the noncoding, regulatory sequences upstream of the open reading frames in the papillomavirus genome). We also show that replication forks proceed bidirectionally from the origin and converge 180circ opposite the URR. These results demonstrate the feasibility of analysis of replication of viral genomes directly from infected tissue.
Functional Regression for Quasar Spectra
Ciollaro, Mattia; Freeman, Peter; Genovese, Christopher; Lei, Jing; O'Connell, Ross; Wasserman, Larry
2014-01-01
The Lyman-alpha forest is a portion of the observed light spectrum of distant galactic nuclei which allows us to probe remote regions of the Universe that are otherwise inaccessible. The observed Lyman-alpha forest of a quasar light spectrum can be modeled as a noisy realization of a smooth curve that is affected by a `damping effect' which occurs whenever the light emitted by the quasar travels through regions of the Universe with higher matter concentration. To decode the information conveyed by the Lyman-alpha forest about the matter distribution, we must be able to separate the smooth `continuum' from the noise and the contribution of the damping effect in the quasar light spectra. To predict the continuum in the Lyman-alpha forest, we use a nonparametric functional regression model in which both the response and the predictor variable (the smooth part of the damping-free portion of the spectrum) are function-valued random variables. We demonstrate that the proposed method accurately predicts the unobserv...
Knowledge and Awareness: Linear Regression
Directory of Open Access Journals (Sweden)
Monika Raghuvanshi
2016-12-01
Full Text Available Knowledge and awareness are factors guiding the development of an individual. These may seem simple and practicable, but in reality a proper combination of the two is a complex task. The economically driven state of development in younger generations is an impediment to the correct manner of development. As youths are at the learning phase, they can be molded to follow a correct lifestyle. Awareness and knowledge are important components of any formal or informal environmental education. The purpose of this study is to evaluate the relationship of these components among students of secondary/senior secondary schools who have undergone a formal study of the environment in their curricula. A suitable instrument was developed in order to measure the elements of awareness and knowledge among the participants of the study. Data were collected from various secondary and senior secondary school students in the age group 14 to 20 years using a cluster sampling technique in the city of Bikaner, India. Linear regression analysis was performed using the IBM SPSS 23 statistical tool. There exists a weak relation between knowledge and awareness about environmental issues, caused by mishandling of routine practices; hence one component can be complemented by the other for improvement in both. Knowledge and awareness are crucial factors and can provide huge opportunities in any field. Resource utilization for economic solutions may pave the way for eco-friendly products and practices. If green practices are inculcated at the learning phase, they may become normal routine. This will also help in repletion of the environment.
Streamflow forecasting using functional regression
Masselot, Pierre; Dabo-Niang, Sophie; Chebana, Fateh; Ouarda, Taha B. M. J.
2016-07-01
Streamflow, as a natural phenomenon, is continuous in time, and so are the meteorological variables which influence its variability. In practice, it can be of interest to forecast the whole flow curve instead of points (daily or hourly). To this end, this paper introduces functional linear models and adapts them to hydrological forecasting. More precisely, functional linear models are regression models based on curves instead of single values. They allow one to consider the whole process instead of a limited number of time points or features. We apply these models to analyse the flow volume and the whole streamflow curve during a given period by using precipitation curves. The functional model is shown to lead to encouraging results. The potential of functional linear models to detect special features that would have been hard to see otherwise is pointed out. The functional model is also compared to the artificial neural network approach, and the advantages and disadvantages of both models are discussed. Finally, future research directions involving the functional model in hydrology are presented.
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
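The same principal component regression steps carried out in SPSS can be sketched in a few lines of Python (an illustrative reimplementation, not the authors' procedure): standardize the predictors, keep the leading principal components, regress on the component scores, and map the coefficients back to predictor space.

```python
import numpy as np

def pcr(X, y, n_components):
    """Principal component regression: regress y on the leading principal
    components of the standardized predictors, then map back to x-space."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    # principal directions from the eigendecomposition of the correlation matrix
    evals, evecs = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
    order = np.argsort(evals)[::-1]
    V = evecs[:, order[:n_components]]
    scores = Xs @ V
    gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    beta = V @ gamma          # coefficients on the standardized predictors
    return y.mean(), beta

rng = np.random.default_rng(1)
n = 200
z = rng.normal(size=n)
# two nearly collinear predictors: the classic multicollinearity setting
X = np.column_stack([z + 0.01 * rng.normal(size=n),
                     z + 0.01 * rng.normal(size=n),
                     rng.normal(size=n)])
y = X @ np.array([1.0, 1.0, 0.5]) + 0.1 * rng.normal(size=n)
intercept, beta = pcr(X, y, n_components=2)
```

Dropping the smallest-variance component discards exactly the near-null direction of the collinear pair, which is what stabilizes the coefficient estimates.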
The Regressive Effect of STAR.
Widerquist, Karl
New York State's School Tax Relief Aid (STAR) heavily favors wealthier districts, partially reversing equalizing effects that state aid is designed to have. Normally state school aid helps bring less wealthy school districts closer to the standard of wealthier districts. It increases and makes up the lost revenue from taxpayers in the state as a…
Dunlop, Joanna Leigh; Vandal, Alain Charles; de Zoysa, Janak Rashme; Gabriel, Ruvin Sampath; Haloob, Imad Adbi; Hood, Christopher John; Matheson, Philip James; McGregor, David Owen Ross; Rabindranath, Kannaiyan Samuel; Semple, David John; Marshall, Mark Roger
2013-07-15
The current literature recognises that left ventricular hypertrophy makes a key contribution to the high rate of premature cardiovascular mortality in dialysis patients. Determining how we might intervene to ameliorate left ventricular hypertrophy in dialysis populations has become a research priority. Reducing sodium exposure through lower dialysate sodium may be a promising intervention in this regard. However, there is clinical equipoise around this intervention because the benefit has not yet been demonstrated in a robust prospective clinical trial, and several observational studies have suggested sodium lowering interventions may be deleterious in some dialysis patients. The Sodium Lowering in Dialysate (SoLID) study is funded by the Health Research Council of New Zealand. It is a multi-centre, prospective, randomised, single-blind (outcomes assessor), controlled parallel assignment 3-year clinical trial. The SoLID study is designed to study what impact low dialysate sodium has upon cardiovascular risk in dialysis patients. The study intends to enrol 118 home hemodialysis patients from 6 sites in New Zealand over 24 months and follow up each participant over 12 months. Key exclusion criteria are: patients who dialyse more frequently than 3.5 times per week, and pre-dialysis serum sodium below the protocol-specified limit. The intervention and control arms will be dialysed using dialysate sodium of 135 mM and 140 mM, respectively, for 12 months. The primary outcome measure is left ventricular mass index, as measured by cardiac magnetic resonance imaging, after 12 months of intervention. Eleven or more secondary outcomes will be studied in an attempt to better understand the physiologic and clinical mechanisms by which lower dialysate sodium alters the primary end point. The SoLID study is designed to clarify the effect of low dialysate sodium upon the cardiovascular outcomes of dialysis patients. The study results will provide much needed information about the efficacy of a cost effective, economically sustainable solution to a condition which
The Dual Nature of Nek9 in Adenovirus Replication
Jung, Richard; Radko, Sandi; Pelka, Peter
2016-01-01
To successfully replicate in an infected host cell, a virus must overcome sophisticated host defense mechanisms. Viruses, therefore, have evolved a multitude of devices designed to circumvent cellular defenses that would lead to abortive infection. Previous studies have identified Nek9, a cellular kinase, as a binding partner of adenovirus E1A, but the biology behind this association remains a mystery. Here we show that Nek9 is a transcriptional repressor that functions together with E1A to s...
Surface micro topography replication in injection moulding
DEFF Research Database (Denmark)
Arlø, Uffe Rolf
of the mechanisms controlling topography replication. Surface micro topography replication in injection moulding depends on three main elements: process conditions, plastic material, and mould topography. In this work, the process conditions are the main factor considered, but the impact of plastic material...
Replication and Robustness in Developmental Research
Duncan, Greg J.; Engel, Mimi; Claessens, Amy; Dowsett, Chantelle J.
2014-01-01
Replications and robustness checks are key elements of the scientific method and a staple in many disciplines. However, leading journals in developmental psychology rarely include explicit replications of prior research conducted by different investigators, and few require authors to establish in their articles or online appendices that their key…
Completion of DNA replication in Escherichia coli.
Wendel, Brian M; Courcelle, Charmain T; Courcelle, Justin
2014-11-18
The mechanism by which cells recognize and complete replicated regions at their precise doubling point must be remarkably efficient, occurring thousands of times per cell division along the chromosomes of humans. However, this process remains poorly understood. Here we show that, in Escherichia coli, the completion of replication involves an enzymatic system that effectively counts pairs and limits cellular replication to its doubling point by allowing converging replication forks to transiently continue through the doubling point before the excess, over-replicated regions are incised, resected, and joined. Completion requires RecBCD and involves several proteins associated with repairing double-strand breaks, including ExoI, SbcDC, and RecG. However, unlike double-strand break repair, completion occurs independently of homologous recombination and RecA. In some bacterial viruses, the completion mechanism is specifically targeted for inactivation to allow over-replication to occur during lytic replication. The results suggest that a primary cause of genomic instabilities in many double-strand-break-repair mutants arises from an impaired ability to complete replication, independent from DNA damage.
Using Replication Projects in Teaching Research Methods
Standing, Lionel G.; Grenier, Manuel; Lane, Erica A.; Roberts, Meigan S.; Sykes, Sarah J.
2014-01-01
It is suggested that replication projects may be valuable in teaching research methods, and also address the current need in psychology for more independent verification of published studies. Their use in an undergraduate methods course is described, involving student teams who performed direct replications of four well-known experiments, yielding…
How frog embryos replicate their DNA reliably
Bechhoefer, John; Marshall, Brandon
2007-03-01
Frog embryos contain three billion base pairs of DNA. In early embryos (cycles 2-12), DNA replication is extremely rapid, about 20 min., and the entire cell cycle lasts only 25 min., meaning that mitosis (cell division) takes place in about 5 min. In this stripped-down cell cycle, there are no efficient checkpoints to prevent the cell from dividing before its DNA has finished replication - a disastrous scenario. Even worse, the many origins of replication are laid down stochastically and are also initiated stochastically throughout the replication process. Despite the very tight time constraints and despite the randomness introduced by origin stochasticity, replication is extremely reliable, with cell division failing no more than once in 10,000 tries. We discuss a recent model of DNA replication that is drawn from condensed-matter theories of 1d nucleation and growth. Using our model, we discuss different strategies of replication: should one initiate all origins as early as possible, or is it better to hold back and initiate some later on? Using concepts from extreme-value statistics, we derive the distribution of replication times given a particular scenario for the initiation of origins. We show that the experimentally observed initiation strategy for frog embryos meets the reliability constraint and is close to the one that requires the fewest resources of a cell.
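A stripped-down toy version of the extreme-value argument (assuming, unlike the paper's model, that all origins fire at t = 0 and forks move at a fixed speed) makes the point that the completion time is governed by the largest inter-origin gap:

```python
import numpy as np

def completion_time(n_origins, length=1.0, v=1.0, rng=None):
    """Replication completion time when origins are placed uniformly at random
    and all fire at t = 0 with forks moving at speed v in both directions.
    The genome finishes when the largest gap is filled, so the completion
    time is an extreme-value (largest-gap) statistic."""
    rng = rng if rng is not None else np.random.default_rng()
    origins = np.sort(rng.uniform(0.0, length, n_origins))
    gaps = np.diff(origins)
    # interior gaps are covered from both sides; the chromosome ends from one
    t_interior = gaps.max() / (2.0 * v) if len(gaps) else 0.0
    t_ends = max(origins[0], length - origins[-1]) / v
    return max(t_interior, t_ends)

rng = np.random.default_rng(2)
times_dense = [completion_time(100, rng=rng) for _ in range(2000)]
times_sparse = [completion_time(20, rng=rng) for _ in range(2000)]
```

Even in this caricature, adding origins shrinks the typical largest gap only logarithmically slowly, which is why the timing and placement strategy of origin firing, not just the origin count, matters for meeting a hard replication deadline.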
Mammalian RAD52 Functions in Break-Induced Replication Repair of Collapsed DNA Replication Forks
DEFF Research Database (Denmark)
Sotiriou, Sotirios K; Kamileri, Irene; Lugli, Natalia
2016-01-01
Human cancers are characterized by the presence of oncogene-induced DNA replication stress (DRS), making them dependent on repair pathways such as break-induced replication (BIR) for damaged DNA replication forks. To better understand BIR, we performed a targeted siRNA screen for genes whose depl...
DBR1 siRNA inhibition of HIV-1 replication
Directory of Open Access Journals (Sweden)
Naidu Yathi
2005-10-01
Full Text Available Abstract Background HIV-1 and all retroviruses are related to retroelements of simpler organisms such as the yeast Ty elements. Recent work has suggested that the yeast retroelement Ty1 replicates via an unexpected RNA lariat intermediate in cDNA synthesis. The putative genomic RNA lariat intermediate is formed by a 2'-5' phosphodiester bond, like that found in pre-mRNA intron lariats, and it facilitates the minus-strand template switch during cDNA synthesis. We hypothesized that HIV-1 might also form a genomic RNA lariat and therefore that siRNA-mediated inhibition of expression of the human RNA lariat de-branching enzyme (DBR1) would specifically inhibit HIV-1 replication. Results We designed three short interfering RNA (siRNA) molecules targeting DBR1, which were capable of reducing DBR1 mRNA expression by 80% and did not significantly affect cell viability. We assessed HIV-1 replication in the presence of DBR1 siRNA and found that DBR1 knockdown led to decreases in viral cDNA and protein production. These effects could be reversed by cotransfection of a DBR1 cDNA, indicating that the inhibition of HIV-1 replication was a specific effect of DBR1 underexpression. Conclusion These data suggest that DBR1 function may be needed to debranch a putative HIV-1 genomic RNA lariat prior to completion of reverse transcription.
Spontaneous Regression of an Incidental Spinal Meningioma
National Research Council Canada - National Science Library
Yilmaz, Ali; Kizilay, Zahir; Sair, Ahmet; Avcil, Mucahit; Ozkul, Ayca
2015-01-01
AIM: The regression of meningioma has been reported in literature before. In spite of the fact that the regression may be involved by hemorrhage, calcification or some drugs withdrawal, it is rarely observed spontaneously. CASE REPORT...
Common pitfalls in statistical analysis: Logistic regression.
Ranganathan, Priya; Pramesh, C S; Aggarwal, Rakesh
2017-01-01
Logistic regression analysis is a statistical technique to evaluate the relationship between various predictor variables (either categorical or continuous) and an outcome which is binary (dichotomous). In this article, we discuss logistic regression analysis and the limitations of this technique.
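As a concrete illustration of the technique discussed (a minimal sketch, not tied to the article's examples), logistic regression coefficients are maximum-likelihood estimates, typically obtained by Newton-Raphson iteration:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS).
    X should include an intercept column; y is coded 0/1."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        W = p * (1.0 - p)                     # IRLS weights
        # Newton step: beta += (X' W X)^{-1} X' (y - p)
        H = X.T @ (W[:, None] * X)
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([-0.5, 1.5])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.uniform(size=n) < p).astype(float)
beta_hat = fit_logistic(X, y)
```

Each fitted coefficient, exponentiated, is the odds ratio per unit change in that predictor, which is the quantity usually reported in clinical applications.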
Robust Depth-Weighted Wavelet for Nonparametric Regression Models
Institute of Scientific and Technical Information of China (English)
Lu LIN
2005-01-01
In nonparametric regression models, the original regression estimators, including the kernel estimator, Fourier series estimator and wavelet estimator, are always constructed by a weighted sum of the data, and the weights depend only on the distance between the design points and estimation points. As a result, these estimators are not robust to perturbations in the data. In order to avoid this problem, a new nonparametric regression model, called the depth-weighted regression model, is introduced and then the depth-weighted wavelet estimation is defined. The new estimation is robust to perturbations in the data, attaining a very high breakdown value close to 1/2. On the other hand, some asymptotic behaviours such as asymptotic normality are obtained. Some simulations illustrate that the proposed wavelet estimator is more robust than the original wavelet estimator and, as a price to pay for the robustness, the new method is slightly less efficient than the original method.
Fuzzy rule-based support vector regression system
Institute of Scientific and Technical Information of China (English)
Ling WANG; Zhichun MU; Hui GUO
2005-01-01
In this paper, we design a fuzzy rule-based support vector regression system. The proposed system utilizes the advantages of the fuzzy model and support vector regression to extract support vectors that generate fuzzy if-then rules from the training data set. Based on the first-order linear Takagi-Sugeno (TS) model, the structure of the rules is identified by support vector regression, and then the consequent parameters of the rules are tuned by the global least squares method. Our model is applied to a real-world regression task. The simulation results give promising performance in terms of a set of fuzzy rules, which can be easily interpreted by humans.
Rescue from replication stress during mitosis.
Fragkos, Michalis; Naim, Valeria
2017-04-03
Genomic instability is a hallmark of cancer and a common feature of human disorders, characterized by growth defects, neurodegeneration, cancer predisposition, and aging. Recent evidence has shown that DNA replication stress is a major driver of genomic instability and tumorigenesis. Cells can undergo mitosis with under-replicated DNA or unresolved DNA structures, and specific pathways are dedicated to resolving these structures during mitosis, suggesting that mitotic rescue from replication stress (MRRS) is a key process influencing genome stability and cellular homeostasis. Deregulation of MRRS following oncogene activation or loss-of-function of caretaker genes may be the cause of chromosomal aberrations that promote cancer initiation and progression. In this review, we discuss the causes and consequences of replication stress, focusing on its persistence in mitosis as well as the mechanisms and factors involved in its resolution, and the potential impact of incomplete replication or aberrant MRRS on tumorigenesis, aging and disease.
Semiparametric regression during 2003–2007
Ruppert, David
2009-01-01
Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.
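The low-rank penalized splines mentioned above can be illustrated with a minimal truncated-line-basis fit (a hedged sketch of the generic method, not any specific model from the review): the fit is ordinary ridge-penalized least squares, with the penalty applied only to the spline coefficients.

```python
import numpy as np

def pspline_fit(x, y, n_knots=20, lam=1.0):
    """Low-rank penalized spline with a truncated-line basis (x - kappa)_+
    and a ridge penalty on the spline coefficients: the penalized
    least-squares fit underlying the mixed-model formulation."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    # design: intercept, linear term, and truncated-line basis functions
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)
    C = np.column_stack([np.ones_like(x), x, Z])
    # penalize only the truncated-basis coefficients, not intercept/slope
    D = np.diag([0.0, 0.0] + [1.0] * n_knots)
    coef = np.linalg.solve(C.T @ C + lam * D, C.T @ y)
    return lambda x_new: (np.column_stack(
        [np.ones_like(x_new), x_new,
         np.maximum(x_new[:, None] - knots[None, :], 0.0)]) @ coef)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=300)
fhat = pspline_fit(x, y, lam=0.1)
```

Reading the penalty as a random-effects variance ratio is exactly the mixed-model connection that lets longitudinal correlation be folded into the same fitting machinery.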
Unbalanced Regressions and the Predictive Equation
DEFF Research Database (Denmark)
Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo
Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoreti...
A New Replication Norm for Psychology
Directory of Open Access Journals (Sweden)
Etienne P LeBel
2015-10-01
Full Text Available In recent years, there has been a growing concern regarding the replicability of findings in psychology, including a mounting number of prominent findings that have failed to replicate via high-powered independent replication attempts. In the face of this replicability “crisis of confidence”, several initiatives have been implemented to increase the reliability of empirical findings. In the current article, I propose a new replication norm that aims to further boost the dependability of findings in psychology. Paralleling the extant social norm that researchers should peer review about three times as many articles as they themselves publish per year, the new replication norm states that researchers should aim to independently replicate important findings in their own research areas in proportion to the number of original studies they themselves publish per year (e.g., a 4:1 original-to-replication studies ratio). I argue this simple approach could significantly advance our science by increasing the reliability and cumulative nature of our empirical knowledge base, accelerating our theoretical understanding of psychological phenomena, instilling a focus on quality rather than quantity, and by facilitating our transformation toward a research culture where executing and reporting independent direct replications is viewed as an ordinary part of the research process. To help promote the new norm, I delineate (1) how each of the major constituencies of the research process (i.e., funders, journals, professional societies, departments, and individual researchers) can incentivize replications and promote the new norm and (2) any obstacles each constituency faces in supporting the new norm.
Standards for Standardized Logistic Regression Coefficients
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
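Menard's specific construction is not reproduced in this abstract. The sketch below shows only the common partial standardization, multiplying a fitted logit coefficient by the predictor's standard deviation; the data and the simple gradient-ascent fitter are illustrative, not from the article:

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, iters=5000):
    # one-predictor logistic regression fit by plain gradient ascent
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def sd(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))

random.seed(0)
xs = [random.gauss(0, 2) for _ in range(500)]
# simulate outcomes from a true logit of 0.5*x (illustrative values)
ys = [1 if random.random() < 1 / (1 + math.exp(-0.5 * x)) else 0 for x in xs]
b0, b1 = fit_logistic(xs, ys)
# partially standardized coefficient: log-odds change per 1-SD change in x
b1_std = b1 * sd(xs)
```

Fully standardized variants additionally divide by a scale for the latent outcome; which scale to use is exactly the point of debate the article addresses.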
Synthesizing Regression Results: A Factored Likelihood Method
Wu, Meng-Jia; Becker, Betsy Jane
2013-01-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…
Regression Analysis by Example. 5th Edition
Chatterjee, Samprit; Hadi, Ali S.
2012-01-01
Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…
Regression with Sparse Approximations of Data
DEFF Research Database (Denmark)
Noorzad, Pardis; Sturm, Bob L.
2012-01-01
We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of \(k\)-nearest neighbors regression (\(k\)-NNR), and more generally, local polynomial kernel regression. Unlike \(k\)-NNR, however, SPARROW can adapt the number of regressors to use based...
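SPARROW itself is not spelled out in this abstract; as a point of reference, the \(k\)-NNR baseline it generalizes can be sketched in a few lines (toy data, uniform neighbor weights):

```python
def knn_regress(train_x, train_y, x0, k=3):
    # k-nearest-neighbor regression: average the regressands of the k
    # training points whose regressors are closest to x0
    order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x0))
    return sum(train_y[i] for i in order[:k]) / k

train_x = [0.0, 1.0, 2.0, 3.0, 4.0]
train_y = [0.0, 1.0, 4.0, 9.0, 16.0]   # samples of y = x**2
est = knn_regress(train_x, train_y, 2.1, k=3)   # neighbors x=2,3,1 -> (4+9+1)/3
```

SPARROW replaces the fixed-\(k\) neighbor choice with a sparse approximation of the query point, letting the data decide how many regressors contribute.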
Data from Investigating Variation in Replicability: A “Many Labs” Replication Project
Directory of Open Access Journals (Sweden)
Richard A. Klein
2014-04-01
Full Text Available This dataset is from the Many Labs Replication Project in which 13 effects were replicated across 36 samples and over 6,000 participants. Data from the replications are included, along with demographic variables about the participants and contextual information about the environment in which the replication was conducted. Data were collected in-lab and online through a standardized procedure administered via an online link. The dataset is stored on the Open Science Framework website. These data could be used to further investigate the results of the included 13 effects or to study replication and generalizability more broadly.
Challenges in high accuracy surface replication for micro optics and micro fluidics manufacture
DEFF Research Database (Denmark)
Tosello, Guido; Hansen, Hans Nørgaard; Calaon, Matteo;
2014-01-01
Patterning the surface of polymer components with microstructured geometries is employed in optical and microfluidic applications. Mass fabrication of polymer micro structured products is enabled by replication technologies such as injection moulding. Micro structured tools are also produced by replication technologies such as nickel electroplating. All replication steps are enabled by a high precision master and high reproduction fidelity to ensure that the functionalities associated with the design are transferred to the final component. Engineered surface micro structures can be either distributed, e.g., to create an optical pattern, or discretised, e.g., as micro channels for fluids manipulation. Key aspects of two process chains based on replication technologies for both types of micro structures are investigated: lateral replication fidelity, dimensional control at micro scale, edge...
Cheng-Hsin Yang, Scott; Bechhoefer, John
2008-03-01
DNA synthesis in Xenopus frog embryos initiates stochastically in time at many sites (origins) along the chromosome. Stochastic initiation implies fluctuations in the replication time and may lead to cell death if replication takes longer than the cell cycle time (~25 min). Surprisingly, although the typical replication time is about 20 min., in vivo experiments show that replication fails to complete only about 1 in 250 times. How is replication timing accurately controlled despite the stochasticity? Biologists have proposed two mechanisms: the first uses a regular spatial distribution of origins, while the second uses randomly located origins but increases their probability of initiation as the cell cycle proceeds. Here, we show that both mechanisms yield similar end-time distributions, implying that regular origin spacing is not needed for control of replication time. Moreover, we show that the experimentally inferred time-dependent initiation rate satisfies the observed low failure probability and nearly optimizes the use of replicative proteins.
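A toy Monte Carlo in the spirit of this model (random origin positions, stochastic firing times, constant fork speed; all parameter values invented for illustration) can estimate an end-time distribution and a failure fraction:

```python
import random

def completion_time(n_origins=20, fork_speed=1.0, fire_rate=5.0, grid=200):
    # toy 1-D genome on [0, 1]: each origin has a random position and an
    # exponentially distributed firing time; forks move outward at constant
    # speed, so position x finishes at min_i (t_i + |x - o_i| / v)
    origins = [(random.random(), random.expovariate(fire_rate))
               for _ in range(n_origins)]
    return max(min(t + abs(g / grid - o) / fork_speed for o, t in origins)
               for g in range(grid + 1))

random.seed(1)
times = [completion_time() for _ in range(200)]
deadline = 0.6   # arbitrary "cell cycle" cutoff for this toy model
fail_prob = sum(t > deadline for t in times) / len(times)
```

Comparing `fail_prob` under uniform versus regular origin spacing is the kind of experiment the authors' analysis addresses analytically.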
Collaborative regression-based anatomical landmark detection
Gao, Yaozong; Shen, Dinggang
2015-12-01
Anatomical landmark detection plays an important role in medical image analysis, e.g. for registration, segmentation and quantitative analysis. Among the various existing methods for landmark detection, regression-based methods have recently attracted much attention due to their robustness and efficiency. In these methods, landmarks are localised through voting from all image voxels, which is completely different from the classification-based methods that use voxel-wise classification to detect landmarks. Despite their robustness, the accuracy of regression-based landmark detection methods is often limited due to (1) the inclusion of uninformative image voxels in the voting procedure, and (2) the lack of effective ways to incorporate inter-landmark spatial dependency into the detection step. In this paper, we propose a collaborative landmark detection framework to address these limitations. The concept of collaboration is reflected in two aspects. (1) Multi-resolution collaboration. A multi-resolution strategy is proposed to hierarchically localise landmarks by gradually excluding uninformative votes from faraway voxels. Moreover, for informative voxels near the landmark, a spherical sampling strategy is also designed at the training stage to improve their prediction accuracy. (2) Inter-landmark collaboration. A confidence-based landmark detection strategy is proposed to improve the detection accuracy of ‘difficult-to-detect’ landmarks by using spatial guidance from ‘easy-to-detect’ landmarks. To evaluate our method, we conducted experiments extensively on three datasets for detecting prostate landmarks and head & neck landmarks in computed tomography images, and also dental landmarks in cone beam computed tomography images. The results show the effectiveness of our collaborative landmark detection framework in improving landmark detection accuracy, compared to other state-of-the-art methods.
Targeting DNA Replication Stress for Cancer Therapy
Zhang, Jun; Dai, Qun; Park, Dongkyoo; Deng, Xingming
2016-01-01
The human cellular genome is under constant stress from extrinsic and intrinsic factors, which can lead to DNA damage and defective replication. In normal cells, DNA damage response (DDR) mediated by various checkpoints will either activate the DNA repair system or induce cellular apoptosis/senescence, therefore maintaining overall genomic integrity. Cancer cells, however, due to constitutive growth signaling and defective DDR, may exhibit “replication stress” —a phenomenon unique to cancer cells that is described as the perturbation of error-free DNA replication and slow-down of DNA synthesis. Although replication stress has been proven to induce genomic instability and tumorigenesis, recent studies have counterintuitively shown that enhancing replicative stress through further loosening of the remaining checkpoints in cancer cells to induce their catastrophic failure of proliferation may provide an alternative therapeutic approach. In this review, we discuss the rationale to enhance replicative stress in cancer cells, past approaches using traditional radiation and chemotherapy, and emerging approaches targeting the signaling cascades induced by DNA damage. We also summarize current clinical trials exploring these strategies and propose future research directions including the use of combination therapies, and the identification of potential new targets and biomarkers to track and predict treatment responses to targeting DNA replication stress. PMID:27548226
Oncogene v-jun modulates DNA replication.
Wasylyk, C; Schneikert, J; Wasylyk, B
1990-07-01
Cell transformation leads to alterations in both transcription and DNA replication. Activation of transcription by the expression of a number of transforming oncogenes is mediated by the transcription factor AP1 (Herrlich & Ponta, 1989; Imler & Wasylyk, 1989). AP1 is a composite transcription factor, consisting of members of the jun and fos gene families. c-jun and c-fos are progenitors of oncogenes, suggesting that an important transcriptional event in cell transformation is altered activity of AP1, which may arise either indirectly by oncogene expression or directly by structural modification of AP1. We report here that the v-jun oncogene and its progenitor c-jun, as fusion proteins with the lex-A-repressor DNA binding domain, can activate DNA replication from the Polyoma virus (Py) origin of replication, linked to the lex-A operator. The transcription-activation region of v-jun is required for activation of replication. When excess v-jun is expressed in the cell, replication is inhibited or 'squelched'. These results suggest that one consequence of deregulated jun activity could be altered DNA replication and that there are similarities in the way v-jun activates replication and transcription.
Targeting DNA Replication Stress for Cancer Therapy
Directory of Open Access Journals (Sweden)
Jun Zhang
2016-08-01
Full Text Available The human cellular genome is under constant stress from extrinsic and intrinsic factors, which can lead to DNA damage and defective replication. In normal cells, DNA damage response (DDR) mediated by various checkpoints will either activate the DNA repair system or induce cellular apoptosis/senescence, therefore maintaining overall genomic integrity. Cancer cells, however, due to constitutive growth signaling and defective DDR, may exhibit “replication stress” —a phenomenon unique to cancer cells that is described as the perturbation of error-free DNA replication and slow-down of DNA synthesis. Although replication stress has been proven to induce genomic instability and tumorigenesis, recent studies have counterintuitively shown that enhancing replicative stress through further loosening of the remaining checkpoints in cancer cells to induce their catastrophic failure of proliferation may provide an alternative therapeutic approach. In this review, we discuss the rationale to enhance replicative stress in cancer cells, past approaches using traditional radiation and chemotherapy, and emerging approaches targeting the signaling cascades induced by DNA damage. We also summarize current clinical trials exploring these strategies and propose future research directions including the use of combination therapies, and the identification of potential new targets and biomarkers to track and predict treatment responses to targeting DNA replication stress.
Assumptions of Multiple Regression: Correcting Two Misconceptions
Directory of Open Access Journals (Sweden)
Matt N. Williams
2013-09-01
Full Text Available In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression assumptions". While Osborne and Waters' efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares.
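The attenuation result described here (bias toward zero under uncorrelated measurement error) is easy to reproduce in simulation; the numbers below are illustrative, not from the article:

```python
import random

def ols_slope(xs, ys):
    # simple-regression slope: cov(x, y) / var(x)
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

random.seed(0)
n = 20000
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [x + random.gauss(0, 0.5) for x in x_true]      # true slope is 1.0
x_obs = [x + random.gauss(0, 1) for x in x_true]    # uncorrelated error in x

b_clean = ols_slope(x_true, y)   # close to 1.0
b_noisy = ols_slope(x_obs, y)    # attenuated toward reliability * 1.0 = 0.5
```

With correlated errors in x and y the bias can go either way, which is the article's second correction.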
Functional linear regression via canonical analysis
He, Guozhong; Wang, Jane-Ling; Yang, Wenjing; 10.3150/09-BEJ228
2011-01-01
We study regression models for the situation where both dependent and independent variables are square-integrable stochastic processes. Questions concerning the definition and existence of the corresponding functional linear regression models and some basic properties are explored for this situation. We derive a representation of the regression parameter function in terms of the canonical components of the processes involved. This representation establishes a connection between functional regression and functional canonical analysis and suggests alternative approaches for the implementation of functional linear regression analysis. A specific procedure for the estimation of the regression parameter function using canonical expansions is proposed and compared with an established functional principal component regression approach. As an example of an application, we present an analysis of mortality data for cohorts of medflies, obtained in experimental studies of aging and longevity.
A whole genome RNAi screen identifies replication stress response genes.
Kavanaugh, Gina; Ye, Fei; Mohni, Kareem N; Luzwick, Jessica W; Glick, Gloria; Cortez, David
2015-11-01
Proper DNA replication is critical to maintain genome stability. When the DNA replication machinery encounters obstacles to replication, replication forks stall and the replication stress response is activated. This response includes activation of cell cycle checkpoints, stabilization of the replication fork, and DNA damage repair and tolerance mechanisms. Defects in the replication stress response can result in alterations to the DNA sequence causing changes in protein function and expression, ultimately leading to disease states such as cancer. To identify additional genes that control the replication stress response, we performed a three-parameter, high content, whole genome siRNA screen measuring DNA replication before and after a challenge with replication stress as well as a marker of checkpoint kinase signalling. We identified over 200 replication stress response genes and subsequently analyzed how they influence cellular viability in response to replication stress. These data will serve as a useful resource for understanding the replication stress response.
Study on the micro-replication of shark skin
Institute of Scientific and Technical Information of China (English)
2008-01-01
Direct replication of animal skins to form biomimetic surfaces with relatively vivid morphology is a new application of bio-replicated forming technology. Taking shark skins as the replication templates, and micro-embossing and micro-molding as the material forming methods, the micro-replication of the outward morphology of shark skins was demonstrated. Preliminary analysis of replication precision indicates that the bio-replicated forming technology can replicate the outward morphology of shark scales with good precision, which validates the application of bio-replicated forming technology in the direct morphology replication of firm animal skins.
Regression in children with autism spectrum disorders.
Malhi, Prahbhjot; Singhi, Pratibha
2012-10-01
To understand the characteristics of autistic regression and to compare the clinical and developmental profile of children with autism spectrum disorders (ASD) in whom parents report developmental regression with age-matched ASD children in whom no regression is reported. Participants were 35 (mean age = 3.57 y, SD = 1.09) children with ASD in whom parents reported developmental regression before age 3 y and a group of age- and IQ-matched 35 ASD children in whom parents did not report regression. All children were recruited from the outpatient Child Psychology Clinic of the Department of Pediatrics of a tertiary care teaching hospital in North India. Multi-disciplinary evaluations including neurological, diagnostic, cognitive, and behavioral assessments were done. Parents were asked in detail about the age at onset of regression, type of regression, milestones lost, and event, if any, related to the regression. In addition, the Childhood Autism Rating Scale (CARS) was administered to assess symptom severity. The mean age at regression was 22.43 mo (SD = 6.57) and a large majority (66.7%) of the parents reported regression between 12 and 24 mo. Most (75%) of the parents of the regression-autistic group reported regression in the language domain, particularly in the expressive language sector, usually between 18 and 24 mo of age. Regression of language was not an isolated phenomenon, and regression in other domains was also reported, including social skills (75%) and cognition (31.25%). In the majority of the cases (75%) the regression reported was slow and subtle. There were no significant differences in the motor, social, self-help, and communication functioning between the two groups as measured by the DP II. There were also no significant differences between the two groups on the total CARS score and total number of DSM IV symptoms endorsed. However, the regressed children had significantly (t = 2.36, P = .021) more social deficits as per the DSM IV as
The replication origin of a repABC plasmid
Directory of Open Access Journals (Sweden)
Cevallos Miguel A
2011-06-01
Full Text Available Abstract Background repABC operons are present on large, low copy-number plasmids and on some secondary chromosomes in at least 19 α-proteobacterial genera, and are responsible for the replication and segregation properties of these replicons. These operons consist, with some variations, of three genes: repA, repB, and repC. RepA and RepB are involved in plasmid partitioning and in the negative regulation of their own transcription, and RepC is the limiting factor for replication. An antisense RNA encoded between the repB-repC genes modulates repC expression. Results To identify the minimal region of the Rhizobium etli p42d plasmid that is capable of autonomous replication, we amplified different regions of the repABC operon using PCR and cloned the regions into a suicide vector. The resulting vectors were then introduced into R. etli strains that did or did not contain p42d. The minimal replicon consisted of a repC open reading frame under the control of a constitutive promoter with a Shine-Dalgarno sequence that we designed. A sequence analysis of repC revealed the presence of a large A+T-rich region but no iterons or DnaA boxes. Silent mutations that modified the A+T content of this region eliminated the replication capability of the plasmid. The minimal replicon could not be introduced into R. etli strain containing p42d, but similar constructs that carried repC from Sinorhizobium meliloti pSymA or the linear chromosome of Agrobacterium tumefaciens replicated in the presence or absence of p42d, indicating that RepC is an incompatibility factor. A hybrid gene construct expressing a RepC protein with the first 362 amino acid residues from p42d RepC and the last 39 amino acid residues of RepC from SymA was able to replicate in the presence of p42d. Conclusions RepC is the only element encoded in the repABC operon of the R. etli p42d plasmid that is necessary and sufficient for plasmid replication and is probably the initiator protein. The ori
Chikungunya triggers an autophagic process which promotes viral replication
Directory of Open Access Journals (Sweden)
Briant Laurence
2011-09-01
. Conclusions Taken together, our results suggest that autophagy may play a promoting role in ChikV replication. Investigating in details the relationship between autophagy and viral replication will greatly improve our knowledge of the pathogenesis of ChikV and provide insight for the design of candidate antiviral therapeutics.
An Affine Invariant $k$-Nearest Neighbor Regression Estimate
Biau, Gérard; Dujmovic, Vida; Krzyzak, Adam
2012-01-01
We design a data-dependent metric in $\\mathbb R^d$ and use it to define the $k$-nearest neighbors of a given point. Our metric is invariant under all affine transformations. We show that, with this metric, the standard $k$-nearest neighbor regression estimate is asymptotically consistent under the usual conditions on $k$, and minimal requirements on the input data.
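The authors' data-dependent metric is not detailed in this abstract; a related, well-known affine-invariant choice is the Mahalanobis distance under the sample covariance, sketched here in 2-D to show that the neighbor set survives an affine map:

```python
import math

def cov2(pts):
    # 2x2 sample covariance of a point cloud
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in pts) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / (n - 1)
    return sxx, sxy, syy

def maha(p, q, pts):
    # Mahalanobis distance under the sample covariance of pts;
    # invariant under any nonsingular affine map applied to all points
    sxx, sxy, syy = cov2(pts)
    det = sxx * syy - sxy * sxy
    dx, dy = p[0] - q[0], p[1] - q[1]
    return math.sqrt((syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det)

def k_nearest(pts, x0, k):
    return sorted(range(len(pts)), key=lambda i: maha(pts[i], x0, pts))[:k]

pts = [(0, 0), (1, 0), (0, 1), (2, 2), (3, 1), (1, 3)]
affine = lambda p: (2 * p[0] + p[1] + 5, p[0] + 3 * p[1] - 1)
idx_before = k_nearest(pts, (1, 1), 2)
idx_after = k_nearest([affine(p) for p in pts], affine((1, 1)), 2)
```

The paper's construction differs in detail, but the invariance property being tested here is the same.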
Coverage Accuracy of Confidence Intervals in Nonparametric Regression
Institute of Scientific and Technical Information of China (English)
Song-xi Chen; Yong-song Qin
2003-01-01
Point-wise confidence intervals for a nonparametric regression function with random design points are considered. The confidence intervals are those based on the traditional normal approximation and the empirical likelihood. Their coverage accuracy is assessed by developing the Edgeworth expansions for the coverage probabilities. It is shown that the empirical likelihood confidence intervals are Bartlett correctable.
Grades, Gender, and Encouragement: A Regression Discontinuity Analysis
Owen, Ann L.
2010-01-01
The author employs a regression discontinuity design to provide direct evidence on the effects of grades earned in economics principles classes on the decision to major in economics and finds a differential effect for male and female students. Specifically, for female students, receiving an A for a final grade in the first economics class is…
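The author's specification is not given in this abstract; a generic sharp regression-discontinuity estimator (separate linear fits on each side of the cutoff, simulated data with a true jump of 1.0) looks like this:

```python
import random

def ols_line(xs, ys):
    # least-squares fit of y = a + b*x
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def rd_jump(xs, ys, cutoff=0.0, h=1.0):
    # sharp RD: fit a line on each side of the cutoff within bandwidth h,
    # then take the difference of the two fitted values at the cutoff
    left = [(x, y) for x, y in zip(xs, ys) if cutoff - h <= x < cutoff]
    right = [(x, y) for x, y in zip(xs, ys) if cutoff <= x <= cutoff + h]
    aL, bL = ols_line([p[0] for p in left], [p[1] for p in left])
    aR, bR = ols_line([p[0] for p in right], [p[1] for p in right])
    return (aR + bR * cutoff) - (aL + bL * cutoff)

random.seed(2)
xs = [random.uniform(-1, 1) for _ in range(4000)]
# smooth trend 0.5*x plus a discontinuous treatment effect of 1.0 at x >= 0
ys = [0.5 * x + (1.0 if x >= 0 else 0.0) + random.gauss(0, 0.2) for x in xs]
jump = rd_jump(xs, ys)
```

In the grades application, the running variable would be the continuous grade score and the cutoff the A threshold; bandwidth choice and separate fits by gender are where the real analysis gets its content.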
Replicated Data Management for Mobile Computing
Douglas, Terry
2008-01-01
Managing data in a mobile computing environment invariably involves caching or replication. In many cases, a mobile device has access only to data that is stored locally, and much of that data arrives via replication from other devices, PCs, and services. Given portable devices with limited resources, weak or intermittent connectivity, and security vulnerabilities, data replication serves to increase availability, reduce communication costs, foster sharing, and enhance survivability of critical information. Mobile systems have employed a variety of distributed architectures from client-server
DEFF Research Database (Denmark)
Jensen, Ole B.; Pettiway, Keon
2017-01-01
In this chapter, Ole B. Jensen takes a situational approach to mobilities to examine how ordinary life activities are structured by technology and design. Using “staging mobilities” as a theoretical approach, Jensen considers mobilities as overlapping actions, interactions and decisions by designers, planners, etc. (staging from above) and mobile subjects (staging from below). A research agenda for studying situated practices of mobility and mobilities design is outlined in three directions: foci of studies, methods and approaches, and epistemologies and frames of thinking. Jensen begins with a brief description of how movement is studied within social sciences after the “mobilities turn” versus the idea of physical movement in transport geography and engineering. He then explains how “mobilities design” was derived from connections between traffic and architecture. Jensen concludes...
DEFF Research Database (Denmark)
Volf, Mette
Design - proces & metode (iBog®) is unique in its focus on demystifying and operationalizing the fleeting and complex character of the design process. The publication goes behind the designer's daily work and offers insight into the creative process of which the designer is a part. Beyond a broad insight into the designer's working methods and design parameters, Design - proces & metode provides a series of examples from recognized design companies that make it possible to get very close to the designer's reality.
Learning a Nonnegative Sparse Graph for Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung
2015-09-01
Previous graph-based semisupervised learning (G-SSL) methods have the following drawbacks: 1) they usually predefine the graph structure and then use it to perform label prediction, which cannot guarantee an overall optimum and 2) they only focus on the label prediction or the graph structure construction but are not competent in handling new samples. To this end, a novel nonnegative sparse graph (NNSG) learning method was first proposed. Then, both the label prediction and projection learning were integrated into linear regression. Finally, the linear regression and graph structure learning were unified within the same framework to overcome these two drawbacks. Therefore, a novel method, named learning a NNSG for linear regression was presented, in which the linear regression and graph learning were simultaneously performed to guarantee an overall optimum. In the learning process, the label information can be accurately propagated via the graph structure so that the linear regression can learn a discriminative projection to better fit sample labels and accurately classify new samples. An effective algorithm was designed to solve the corresponding optimization problem with fast convergence. Furthermore, NNSG provides a unified perceptiveness for a number of graph-based learning methods and linear regression methods. The experimental results showed that NNSG can obtain very high classification accuracy and greatly outperforms conventional G-SSL methods, especially some conventional graph construction methods.
Using Regression Mixture Analysis in Educational Research
Directory of Open Access Journals (Sweden)
Cody S. Ding
2006-11-01
Full Text Available Conventional regression analysis is typically used in educational research. Usually such an analysis implicitly assumes that a common set of regression parameter estimates captures the population characteristics represented in the sample. In some situations, however, this implicit assumption may not be realistic, and the sample may contain several subpopulations such as high math achievers and low math achievers. In these cases, conventional regression models may provide biased estimates since the parameter estimates are constrained to be the same across subpopulations. This paper advocates the applications of regression mixture models, also known as latent class regression analysis, in educational research. Regression mixture analysis is more flexible than conventional regression analysis in that latent classes in the data can be identified and regression parameter estimates can vary within each latent class. An illustration of regression mixture analysis is provided based on a dataset of authentic data. The strengths and limitations of the regression mixture models are discussed in the context of educational research.
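A minimal two-component regression mixture can be fit with EM; the sketch below (fixed noise scale, simulated data from two latent-class lines) is illustrative rather than any particular package's implementation:

```python
import math
import random

def normpdf(r, s):
    return math.exp(-r * r / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def wols(xs, ys, ws):
    # weighted least-squares line y = a + b*x
    W = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / W
    my = sum(w * y for w, y in zip(ws, ys)) / W
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return my - b * mx, b

def em_mixture(xs, ys, iters=60, sigma=0.3):
    # EM for a two-component mixture of regressions with a fixed noise scale:
    # E-step computes responsibilities, M-step refits each line by weighted OLS
    a, b, pi = [0.0, 0.0], [1.0, -1.0], [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x, y in zip(xs, ys):
            p = [pi[k] * normpdf(y - a[k] - b[k] * x, sigma) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        for k in (0, 1):
            ws = [r[k] for r in resp]
            a[k], b[k] = wols(xs, ys, ws)
            pi[k] = sum(ws) / len(ws)
    return a, b, pi

random.seed(3)
xs = [random.uniform(-2, 2) for _ in range(600)]
# latent classes: half the points follow y = 2x, half follow y = -x + 1
ys = [(2 * x if random.random() < 0.5 else -x + 1) + random.gauss(0, 0.2)
      for x in xs]
a, b, pi = em_mixture(xs, ys)
```

Each latent class recovers its own slope and intercept, which is exactly the flexibility over a single pooled regression that the article advocates.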
A Lightweight Distributed Solution to Content Replication in Mobile Networks
La, Chi-Anh; Casetti, Claudio; Chiasserini, Carla-Fabiana; Fiore, Marco
2009-01-01
Performance and reliability of content access in mobile networks is conditioned by the number and location of content replicas deployed at the network nodes. Facility location theory has been the traditional, centralized approach to study content replication: computing the number and placement of replicas in a network can be cast as an uncapacitated facility location problem. The endeavour of this work is to design a distributed, lightweight solution to the above joint optimization problem, while taking into account the network dynamics. In particular, we devise a mechanism that lets nodes share the burden of storing and providing content, so as to achieve load balancing, and decide whether to replicate or drop the information so as to adapt to a dynamic content demand and time-varying topology. We evaluate our mechanism through simulation, by exploring a wide range of settings and studying realistic content access mechanisms that go beyond the traditional assumptionmatching demand points to their closest con...
A transcription and translation-coupled DNA replication system using rolling-circle replication.
Sakatani, Yoshihiro; Ichihashi, Norikazu; Kazuta, Yasuaki; Yomo, Tetsuya
2015-05-27
All living organisms have a genome replication system in which genomic DNA is replicated by a DNA polymerase translated from mRNA transcribed from the genome. The artificial reconstitution of this genome replication system is a great challenge in in vitro synthetic biology. In this study, we attempted to construct a transcription- and translation-coupled DNA replication (TTcDR) system using circular genomic DNA encoding phi29 DNA polymerase and a reconstituted transcription and translation system. In this system, phi29 DNA polymerase was translated from the genome and replicated the genome in a rolling-circle manner. When using a traditional translation system composition, almost no DNA replication was observed, because the tRNA and nucleoside triphosphates included in the translation system significantly inhibited DNA replication. To minimize these inhibitory effects, we optimized the composition of the TTcDR system and improved replication by approximately 100-fold. Using our system, genomic DNA was replicated up to 10 times in 12 hours at 30 °C. This system provides a step toward the in vitro construction of an artificial genome replication system, which is a prerequisite for the construction of an artificial cell.
Buchanan, Richard; Cross, Nigel; Durling, David; Nelson, Harold; Owen, Charles; Valtonen, Anna; Boling, Elizabeth; Gibbons, Andrew; Visscher-Voerman, Irene
2013-01-01
Scholars representing the field of design were asked to identify what they considered to be the most exciting and imaginative work currently being done in their field, as well as how that work might change our understanding. The scholars included Richard Buchanan, Nigel Cross, David Durling, Harold Nelson, Charles Owen, and Anna Valtonen. Scholars…
Using autonomous replication to physically and genetically define human origins of replication
Energy Technology Data Exchange (ETDEWEB)
Krysan, P.J.
1993-01-01
The author previously developed a system for studying autonomous replication in human cells involving the use of sequences from the Epstein-Barr virus (EBV) genome to provide extrachromosomal plasmids with a nuclear retention function. Using this system, it was demonstrated that large fragments of human genomic DNA could be isolated which replicate autonomously in human cells. In this study the DNA sequences which function as origins of replication in human cells are defined physically and genetically. These experiments demonstrated that replication initiates at multiple locations distributed throughout the plasmid. Another line of experiments addressed the DNA sequence requirements for autonomous replication in human cells. These experiments demonstrated that human DNA fragments have a higher replication activity than bacterial fragments do. It was also found, however, that the bacterial DNA sequence could support efficient replication if enough copies of it were present on the plasmid. These findings suggested that autonomous replication in human cells does not depend on extensive, specific DNA sequences. The autonomous replication system which the author has employed for these experiments utilizes a cis-acting sequence from the EBV origin and the trans-acting EBNA-1 protein to provide plasmids with a nuclear retention function. It was therefore relevant to verify that the autonomous replication of human DNA fragments did not depend on the replication activity associated with the EBV sequences utilized for nuclear retention. To accomplish this goal, the author demonstrated that plasmids carrying the EBV sequences and large fragments of human DNA could support long-term autonomous replication in hamster cells, which are not permissive for EBV replication.
Wu, LiHong; Liu, Yang; Kong, DaoChun
2014-05-01
Chromosomal DNA replication is one of the central biological events occurring inside cells. Because of the genome's large size, replication of eukaryotic genomic DNA initiates at hundreds to tens of thousands of sites, called DNA origins, so that replication can be completed in a limited time. Further, eukaryotic DNA replication is tightly regulated, which guarantees that each origin fires once per S phase and each segment of DNA is duplicated exactly once per cell cycle. The first step of replication initiation is the assembly of the pre-replication complex (pre-RC). Since 1973, four proteins, Cdc6/Cdc18, MCM, ORC and Cdt1, have been extensively studied and proved to be pre-RC components. Recently, a novel pre-RC component called Sap1/Girdin was identified; it is required for loading Cdc18/Cdc6 onto origins for pre-RC assembly in fission yeast and human cells, respectively. At the G1-to-S transition, the pre-RC is activated by two kinases, cyclin-dependent kinase (CDK) and Dbf4-dependent kinase (DDK); subsequently, RPA, primase-polα, PCNA, topoisomerase, Cdc45, polδ, and polɛ are recruited to DNA origins, creating two bi-directional replication forks and initiating DNA replication. As replication forks move along chromatin DNA, they frequently stall at the many replication barriers present there, such as secondary DNA structures, protein/DNA complexes, DNA lesions, and ongoing gene transcription. Stalled forks require checkpoint regulation for their stabilization; otherwise they collapse, resulting in incomplete DNA replication and genomic instability. This short review gives a concise introduction to the current understanding of replication initiation and replication fork stabilization.
Replicating chromatin: a tale of histones
DEFF Research Database (Denmark)
Groth, Anja
2009-01-01
Chromatin serves structural and functional roles crucial for genome stability and correct gene expression. This organization must be reproduced on daughter strands during replication to maintain proper overlay of epigenetic fabric onto genetic sequence. Nucleosomes constitute the structural...
Control of chromosome replication in caulobacter crescentus.
Marczynski, Gregory T; Shapiro, Lucy
2002-01-01
Caulobacter crescentus permits detailed analysis of chromosome replication control during a developmental cell cycle. Its chromosome replication origin (Cori) may be prototypical of the large and diverse class of alpha-proteobacteria. Cori has features that both affiliate and distinguish it from the Escherichia coli chromosome replication origin. For example, requirements for DnaA protein and RNA transcription affiliate both origins. However, Cori is distinguished by several features, and especially by five binding sites for the CtrA response regulator protein. To selectively repress and limit chromosome replication, CtrA receives both protein degradation and protein phosphorylation signals. The signal mediators, proteases, response regulators, and kinases, as well as Cori DNA and the replisome, all show distinct patterns of temporal and spatial organization during cell cycle progression. Future studies should integrate our knowledge of biochemical activities at Cori with our emerging understanding of cytological dynamics in C. crescentus and other bacteria.
LHCb Data Replication During SC3
Smith, A
2006-01-01
LHCb's participation in LCG's Service Challenge 3 involves testing the bulk data transfer infrastructure developed to allow high bandwidth distribution of data across the grid in accordance with the computing model. To enable reliable bulk replication of data, LHCb's DIRAC system has been integrated with gLite's File Transfer Service middleware component to make use of dedicated network links between LHCb computing centres. DIRAC's Data Management tools previously allowed the replication, registration and deletion of files on the grid. For SC3 supplementary functionality has been added to allow bulk replication of data (using FTS) and efficient mass registration to the LFC replica catalog. Provisional performance results have shown that the system developed can meet the expected data replication rate required by the computing model in 2007. This paper details the experience and results of integration and utilisation of DIRAC with the SC3 transfer machinery.
Initiation of Replication in Escherichia coli
DEFF Research Database (Denmark)
Frimodt-Møller, Jakob
The circular chromosome of Escherichia coli is replicated by two replisomes assembled at the unique origin and moving in opposite directions until they meet in the less well defined terminus. The key protein in initiation of replication, DnaA, facilitates the unwinding of double-stranded DNA...... to single-stranded DNA in oriC. Although DnaA is able to bind both ADP and ATP, it is only active in initiation when bound to ATP. Although initiation of replication and its regulation have been thoroughly investigated, they are still not fully understood. The overall aim of the thesis was to investigate...... the regulation of initiation, the effect on the cell when regulation fails, and whether regulation is interlinked with chromosomal organization. This thesis uncovers that there exists a subtle balance between chromosome replication and reactive oxygen species (ROS)-inflicted DNA damage. Thus, failure in regulation......
Surface Micro Topography Replication in Injection Moulding
DEFF Research Database (Denmark)
Arlø, Uffe Rolf; Hansen, Hans Nørgaard; Kjær, Erik Michael
2005-01-01
carried out with rough EDM (electrical discharge machining) mould surfaces, a PS grade, and by applying established three-dimensional topography parameters. Significant quantitative relationships between process parameters and topography parameters were established. It further appeared that replication...
Regression modeling methods, theory, and computation with SAS
Panik, Michael
2009-01-01
Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output they produce. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,
Taharaguchi, Satoshi; Matsuhiro, Takahisa; Harima, Hayato; Sato, Atsuko; Ohe, Kyoko; Sakai, Sachi; Takahashi, Toshikazu; Hara, Motonobu
2012-06-01
Feline calicivirus (FCV) is a pathogenic microorganism that causes upper respiratory diseases in cats. Recently, an FCV infection with a high mortality rate has been confirmed, and there is need to develop a treatment for cases of acute infection. We evaluated whether the replication of FCV could be prevented by RNA interference. For this study, we designed an siRNA targeted to the polymerase region of the strain FCV-B isolated from a cat that died after exhibiting neurological symptoms. Cells transfected with siR-pol dose-dependently suppressed the replication of FCV-B. siR-pol suppressed its replication by suppressing the target viral RNA.
Self-assembly and Self-replication of Short Amphiphilic β-sheet Peptides
Bourbo, Valery; Matmor, Maayan; Shtelman, Elina; Rubinov, Boris; Ashkenasy, Nurit; Ashkenasy, Gonen
2011-12-01
Most self-replicating peptide systems are made of α-helix forming sequences. However, it has been postulated that shorter and simpler peptides may also serve as templates for replication when arranged into well-defined structures. We describe here the design and characterization of new peptides that form soluble β-sheet aggregates that serve to significantly accelerate their ligation and self-replication. We then discuss the relevance of these phenomena to early molecular evolution, in light of additional functionality associated with β-sheet assemblies.
Analyzing industrial energy use through ordinary least squares regression models
Golden, Allyson Katherine
Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone that possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and
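The change-point regression models discussed above can be illustrated with a small grid-search sketch. The three-parameter cooling model, the temperatures, and the energy values below are synthetic assumptions for illustration, not the thesis's case-study data:

```python
import numpy as np

def fit_change_point(T, E, candidates):
    """Grid-search a three-parameter cooling change-point model:
    E = b0 + b1 * max(0, T - Tb).  Returns (b0, b1, Tb, sse)."""
    best = None
    for Tb in candidates:
        x = np.maximum(0.0, T - Tb)
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, E, rcond=None)
        sse = ((E - X @ beta) ** 2).sum()
        if best is None or sse < best[3]:
            best = (beta[0], beta[1], Tb, sse)
    return best

rng = np.random.default_rng(0)
T = rng.uniform(0, 35, 120)                                   # daily mean temperature
E = 100 + 5 * np.maximum(0, T - 18) + rng.normal(0, 3, 120)   # synthetic facility kWh
b0, b1, Tb, sse = fit_change_point(T, E, np.arange(5, 30, 0.5))
print(round(b0, 1), round(b1, 2), Tb)
```

The recovered intercept is the weather-independent base load, the slope is the cooling sensitivity, and the breakpoint is the balance-point temperature, which matches the spreadsheet-level workflow the thesis describes as accessible to practitioners.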
Beta blockers & left ventricular hypertrophy regression.
George, Thomas; Ajit, Mullasari S; Abraham, Georgi
2010-01-01
Left ventricular hypertrophy (LVH), particularly in hypertensive patients, is a strong predictor of adverse cardiovascular events. Identifying LVH helps not only in prognostication but also in the choice of therapeutic drugs. The prevalence of LVH is age linked and correlates directly with the severity of hypertension. Adequate control of blood pressure, most importantly central aortic pressure, and blocking the effects of cardiomyocyte stimulatory growth factors such as angiotensin II help in regression of LVH. Among the various antihypertensives, ACE inhibitors and angiotensin receptor blockers are more potent than other drugs in regressing LVH. Beta blockers, especially the newer cardioselective ones, do still have a role in regressing LVH, albeit a minor one. A meta-analysis of various studies on LVH regression shows many lacunae. There have been no consistent criteria for defining LVH and documenting LVH regression. This article reviews current evidence on the role of beta blockers in LVH regression.
Applied regression analysis a research tool
Pantula, Sastry; Dickey, David
1998-01-01
Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...
Commercial Building Partnerships Replication and Diffusion
Energy Technology Data Exchange (ETDEWEB)
Antonopoulos, Chrissi A.; Dillon, Heather E.; Baechler, Michael C.
2013-09-16
This study presents findings from survey and interview data investigating replication efforts of Commercial Building Partnership (CBP) partners that worked directly with the Pacific Northwest National Laboratory (PNNL). PNNL partnered directly with 12 organizations on new and retrofit construction projects, which represented approximately 28 percent of the entire U.S. Department of Energy (DOE) CBP program. Through a feedback survey mechanism, along with personal interviews, PNNL gathered quantitative and qualitative data relating to replication efforts by each organization. These data were analyzed to provide insight into two primary research areas: 1) CBP partners’ replication efforts of technologies and approaches used in the CBP project to the rest of the organization’s building portfolio (including replication verification), and, 2) the market potential for technology diffusion into the total U.S. commercial building stock, as a direct result of the CBP program. The first area of this research focused specifically on replication efforts underway or planned by each CBP program participant. Factors that impact replication include motivation, organizational structure and objectives firms have for implementation of energy efficient technologies. Comparing these factors between different CBP partners revealed patterns in motivation for constructing energy efficient buildings, along with better insight into market trends for green building practices. The second area of this research develops a diffusion of innovations model to analyze potential broad market impacts of the CBP program on the commercial building industry in the United States.
Mycobacterium tuberculosis replicates within necrotic human macrophages
Lerner, Thomas R.; Repnik, Urska; Herbst, Susanne; Collinson, Lucy M.; Griffiths, Gareth
2017-01-01
Mycobacterium tuberculosis modulation of macrophage cell death is a well-documented phenomenon, but its role during bacterial replication is less characterized. In this study, we investigate the impact of plasma membrane (PM) integrity on bacterial replication in different functional populations of human primary macrophages. We discovered that IFN-γ enhanced bacterial replication in macrophage colony-stimulating factor–differentiated macrophages more than in granulocyte–macrophage colony-stimulating factor–differentiated macrophages. We show that permissiveness in the different populations of macrophages to bacterial growth is the result of a differential ability to preserve PM integrity. By combining live-cell imaging, correlative light electron microscopy, and single-cell analysis, we found that after infection, a population of macrophages became necrotic, providing a niche for M. tuberculosis replication before the bacteria escaped into the extracellular milieu. Thus, necrotic cells provide first a niche for bacterial replication and then a route for dissemination. Our results are relevant to understanding the environment of M. tuberculosis replication in the host. PMID:28242744
Institute of Scientific and Technical Information of China (English)
Ge-mai Chen; Jin-hong You
2005-01-01
Consider a repeated measurement partially linear regression model with an unknown parameter vector β. Based on a semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results generalize those in [2] to the case of semiparametric regressions.
Organization of Replication of Ribosomal DNA in Saccharomyces cerevisiae
Linskens, Maarten H.K.; Huberman, Joel A.
1988-01-01
Using recently developed replicon mapping techniques, we have analyzed the replication of the ribosomal DNA in Saccharomyces cerevisiae. The results show that (i) the functional origin of replication colocalizes with an autonomously replicating sequence element previously mapped to the
Dynamics of Escherichia coli Chromosome Segregation during Multifork Replication
DEFF Research Database (Denmark)
Nielsen, Henrik Jørck; Youngren, Brenda; Hansen, Flemming G.
2007-01-01
Slowly growing Escherichia coli cells have a simple cell cycle, with replication and progressive segregation of the chromosome completed before cell division. In rapidly growing cells, initiation of replication occurs before the previous replication rounds are complete. At cell division...
High-dimensional regression with unknown variance
Giraud, Christophe; Verzelen, Nicolas
2011-01-01
We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasis is put on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso estimator, and some references are collected for more general models, including multivariate regression and nonparametric regression.
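As a concrete reference point for the Lasso estimator whose tuning schemes the review compares, here is a minimal coordinate-descent sketch on synthetic data; the penalty level `lam`, the design, and the coefficients are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate-descent Lasso for (1/(2n))||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual excluding feature j
            rho = X[:, j] @ r / n
            # soft-thresholding update
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]                  # sparse truth: 3 active of 20
y = X @ beta_true + rng.normal(0, 0.5, n)
b = lasso_cd(X, y, lam=0.1)
print(np.round(b[:5], 2))
```

The choice of `lam` is exactly the tuning problem the review studies: theoretically motivated values depend on the noise variance, which is unknown in practice.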
Deep Wavelet Scattering for Quantum Energy Regression
Hirn, Matthew
Physical functionals are usually computed as solutions of variational problems or from solutions of partial differential equations, which may require huge computations for complex systems. Quantum chemistry calculation of ground state molecular energies is such an example. Indeed, if x is a quantum molecular state, then the ground state energy E0(x) is the minimum eigenvalue solution of the time-independent Schrödinger equation, which is computationally intensive for large systems. Machine learning algorithms do not simulate the physical system but estimate solutions by interpolating values provided by a training set of known examples {(x_i, E0(x_i))}_i, exploiting physical invariants. Linear regressions of E0 over a dictionary Φ = {φ_k}_k compute an approximation Ẽ0 as Ẽ0(x) = Σ_k w_k φ_k(x), where the weights {w_k}_k are selected to minimize the error between E0 and Ẽ0 on the training set. The key to such a regression approach then lies in the design of the dictionary Φ. It must be intricate enough to capture the essential variability of E0(x) over the molecular states x of interest, while simple enough that evaluation of Φ(x) is significantly less intensive than a direct quantum mechanical computation (or approximation) of E0(x). In this talk we present a novel dictionary Φ for the regression of quantum mechanical energies, based on the scattering transform of an intermediate, approximate electron density representation ρ_x of the state x. The scattering transform has the architecture of a deep convolutional network, composed of an alternating sequence of linear filters and nonlinear maps. Whereas in many deep learning tasks the linear filters are learned from the training data, here the physical properties of E0 (invariance to isometric transformations of the state x, stability to deformations of x) are leveraged to design a collection of linear filters ρ_x * ψ_λ for an appropriate wavelet ψ. These linear filters are composed with the nonlinear modulus
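The dictionary regression step, fitting weights w_k by least squares so that Ẽ0(x) = Σ_k w_k φ_k(x) matches E0 on the training set, can be sketched generically. The 1-D "states", the target function, and the Gaussian-bump dictionary below are stand-ins for the scattering features, purely for illustration:

```python
import numpy as np

# Hypothetical stand-in: states x are scalars, target E0(x) = x*sin(x),
# and the dictionary phi_k consists of Gaussian bumps (the talk's scattering
# features would replace these).
def dictionary(x, centers, width=0.5):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

rng = np.random.default_rng(0)
centers = np.linspace(0, 6, 25)
x_train = rng.uniform(0, 6, 200)
y_train = x_train * np.sin(x_train)          # "known" energies on the training set

Phi = dictionary(x_train, centers)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)   # minimize training error

# evaluate the regression away from the training points
x_test = np.linspace(0.5, 5.5, 50)
err = float(np.max(np.abs(dictionary(x_test, centers) @ w - x_test * np.sin(x_test))))
print(err)
```

Evaluating `dictionary(x) @ w` is cheap compared to re-solving the underlying problem at each new x, which is the whole appeal of the regression approach described in the abstract.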
Institute of Scientific and Technical Information of China (English)
孙纯国; 陈丽
2012-01-01
There are many influencing factors in chemical engineering production processes, and their mechanisms are complex, so it is difficult to obtain a functional relation between the impact factors and the target value. This paper introduces the quadratic general rotary combination design method and its application in chemical engineering through examples. The fitted regression equation can predict the effects of the factors on the target value under different conditions and thus effectively guide practical operation in the chemical engineering production process.
Regression calibration with heteroscedastic error variance.
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
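The basic regression calibration correction, here in its standard homoscedastic form with simulated data, can be sketched as follows: an attenuation factor λ estimated from validation data (where the gold standard is observed) rescales the naive slope. All values below are synthetic, not from the ACE or Nurses' Health Study data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0, 1, n)                  # true covariate (gold standard)
w = x + rng.normal(0, 1, n)              # error-prone surrogate measurement
y = 2.0 * x + rng.normal(0, 1, n)        # outcome; true slope = 2

# naive slope of y on w, attenuated toward zero by measurement error
cw = np.cov(w, y)
b_naive = cw[0, 1] / cw[0, 0]

# calibration: regress the gold standard x on w (validation data)
cx = np.cov(w, x)
lam = cx[0, 1] / cx[0, 0]                # ≈ var(x) / (var(x) + var(error)) = 0.5

b_corrected = b_naive / lam              # regression calibration estimate
print(round(b_naive, 2), round(b_corrected, 2))
```

With equal signal and error variances the attenuation factor is about 0.5, so the naive slope of roughly 1 is rescaled back toward the true value of 2. The paper's contribution is the extension of this correction to heteroscedastic error variance, which this homoscedastic sketch does not cover.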
Enhanced piecewise regression based on deterministic annealing
Institute of Scientific and Technical Information of China (English)
ZHANG JiangShe; YANG YuQian; CHEN XiaoWen; ZHOU ChengHu
2008-01-01
Regression is one of the important problems in statistical learning theory. This paper proves the global convergence of the piecewise regression algorithm based on deterministic annealing and the continuity of the global minimum of free energy with respect to temperature, and derives a new simplified formula to compute the initial critical temperature. A new enhanced piecewise regression algorithm using "migration of prototypes" is proposed to eliminate "empty cells" in the annealing process. Numerical experiments on several benchmark datasets show that the new algorithm can remove redundancy and improve generalization of the piecewise regression model.
Geodesic least squares regression on information manifolds
Energy Technology Data Exchange (ETDEWEB)
Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be [Department of Applied Physics, Ghent University, Ghent, Belgium and Laboratory for Plasma Physics, Royal Military Academy, Brussels (Belgium)
2014-12-05
We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we think that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful to predict or show the relationship between two or more variables as long as the dependent variable is quantitative and has a normal distribution. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R(2)) indicates the importance of the independent variables in the outcome.
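The line Y = a + bx and the coefficient of determination described above can be computed directly; a small numeric sketch with made-up data:

```python
import numpy as np

x = np.array([1.0, 2, 3, 4, 5, 6])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])   # roughly y = 2x

b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)       # slope (regression coefficient)
a = y.mean() - b * x.mean()                      # intercept: value of Y when X = 0
y_hat = a + b * x
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(a, 2), round(b, 2), round(r2, 3))
```

The slope b comes out near 2 and R(2) is close to 1, reflecting that almost all the variation in Y is explained by X, exactly the interpretation given in the abstract.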
Logistic Regression for Evolving Data Streams Classification
Institute of Scientific and Technical Information of China (English)
YIN Zhi-wu; HUANG Shang-teng; XUE Gui-rong
2007-01-01
Logistic regression is a fast classifier that can achieve higher accuracy on small training data. Moreover, it can work on both discrete and continuous attributes with nonlinear patterns. Based on these properties of logistic regression, this paper proposes an algorithm, called the evolutionary logistic regression classifier (ELRClass), to solve the classification of evolving data streams. This algorithm applies logistic regression repeatedly to a sliding window of samples in order to update the existing classifier, to retain this classifier if its performance is degraded by bursting noise, or to construct a new classifier if a major concept drift is detected. Extensive experimental results demonstrate the effectiveness of this algorithm.
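The sliding-window idea can be sketched as follows: a model trained before a concept drift loses accuracy on the new window, while refitting on the current window restores it. This is a generic numpy sketch of windowed logistic regression, not the ELRClass implementation; the drift scenario is an illustrative assumption:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Batch gradient descent for logistic regression (bias folded into X)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)

def window(slope, n=300):
    """One window of the stream; the decision boundary depends on `slope`."""
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = (slope * X[:, 1] + X[:, 2] + rng.normal(0, 0.3, n) > 0).astype(float)
    return X, y

# concept drift: the boundary rotates between the two windows
X1, y1 = window(slope=+2.0)
X2, y2 = window(slope=-2.0)

w_old = fit_logistic(X1, y1)
acc_stale = ((X2 @ w_old > 0) == (y2 == 1)).mean()   # stale model on new concept
w_new = fit_logistic(X2, y2)                          # refit on the current window
acc_new = ((X2 @ w_new > 0) == (y2 == 1)).mean()
print(round(acc_stale, 2), round(acc_new, 2))
```

A full stream classifier would additionally decide, as ELRClass does, whether the accuracy drop signals genuine drift (rebuild the model) or transient noise (keep the old model).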
New ridge parameters for ridge regression
Directory of Open Access Journals (Sweden)
A.V. Dorugade
2014-04-01
Full Text Available Hoerl and Kennard (1970a) introduced the ridge regression estimator as an alternative to the ordinary least squares (OLS) estimator in the presence of multicollinearity. In ridge regression, the ridge parameter plays an important role in parameter estimation. In this article, a new method for estimating ridge parameters in both situations of ordinary ridge regression (ORR) and generalized ridge regression (GRR) is proposed. The simulation study evaluates the performance of the proposed estimator based on the mean squared error (MSE) criterion and indicates that under certain conditions the proposed estimators perform well compared to OLS and other well-known estimators reviewed in this article.
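The ridge estimator at the center of this discussion is the closed form (X'X + kI)^{-1} X'y. A short numpy sketch on deliberately collinear synthetic data shows the shrinkage relative to OLS (recovered here as the k = 0 case); the design and the value k = 1 are illustrative, not one of the article's proposed ridge parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5
Z = rng.normal(size=(n, 1))
X = Z + 0.05 * rng.normal(size=(n, p))     # strongly collinear columns
beta = np.array([1.0, 2.0, -1.0, 0.5, 1.5])
y = X @ beta + rng.normal(0, 1, n)

def ridge(X, y, k):
    """Hoerl-Kennard ridge estimator (X'X + kI)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

b_ols = ridge(X, y, 0.0)                   # k = 0 recovers OLS
b_ridge = ridge(X, y, 1.0)
print(round(float(np.linalg.norm(b_ols)), 2),
      round(float(np.linalg.norm(b_ridge)), 2))
```

Under multicollinearity the OLS coefficients are inflated along the near-singular directions of X'X, while the ridge penalty damps exactly those directions; choosing k well, the subject of the article, trades this variance reduction against bias.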
Bulcock, J. W.
The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
Catalysis of Strand Annealing by Replication Protein A Derives from Its Strand Melting Properties*
Bartos, Jeremy D.; Willmott, Lyndsay J.; Binz, Sara K.; Wold, Marc S.; Bambara, Robert A.
2008-01-01
Eukaryotic DNA-binding protein replication protein A (RPA) has a strand melting property that assists polymerases and helicases in resolving DNA secondary structures. Curiously, previous results suggested that human RPA (hRPA) promotes undesirable recombination by facilitating annealing of flaps produced transiently during DNA replication; however, the mechanism was not understood. We designed a series of substrates, representing displaced DNA flaps generated during ma...
DEFF Research Database (Denmark)
Wang, Yunpeng; Thompson, Wesley K; Schork, Andrew J
2016-01-01
, pleiotropy) for each single nucleotide polymorphism (SNP) to enable more accurate estimation of replication probabilities, conditional on the observed test statistic ("z-score") of the SNP. We use a multiple logistic regression on z-scores to combine information from auxiliary information to derive...... a "relative enrichment score" for each SNP. For each stratum of these relative enrichment scores, we obtain nonparametric estimates of posterior expected test statistics and replication probabilities as a function of discovery z-scores, using a resampling-based approach that repeatedly and randomly partitions...... meta-analysis sub-studies into training and replication samples. We fit a scale mixture of two Gaussians model to each stratum, obtaining parameter estimates that minimize the sum of squared differences of the scale-mixture model with the stratified nonparametric estimates. We apply this approach...
Multiple regression for physiological data analysis: the problem of multicollinearity.
Slinker, B K; Glantz, S A
1985-07-01
Multiple linear regression, in which several predictor variables are related to a response variable, is a powerful statistical tool for gaining quantitative insight into complex in vivo physiological systems. For these insights to be correct, all predictor variables must be uncorrelated. However, in many physiological experiments the predictor variables cannot be precisely controlled and thus change in parallel (i.e., they are highly correlated). There is a redundancy of information about the response, a situation called multicollinearity, that leads to numerical problems in estimating the parameters in regression equations; the parameters are often of incorrect magnitude or sign or have large standard errors. Although multicollinearity can be avoided with good experimental design, not all interesting physiological questions can be studied without encountering multicollinearity. In these cases various ad hoc procedures have been proposed to mitigate multicollinearity. Although many of these procedures are controversial, they can be helpful in applying multiple linear regression to some physiological problems.
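The redundancy the authors describe is conventionally quantified by the variance inflation factor (VIF): regress each predictor on the others and compute 1/(1−R²). This numpy-only sketch is our illustration, not code from the paper:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j of design matrix X (no intercept column)."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([others, np.ones(len(X))])   # add an intercept
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ coef
    r2 = 1 - resid.var() / X[:, j].var()             # R^2 of predictor j on the rest
    return 1 / (1 - r2)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)                  # independent of x1
x3 = x1 + 0.05 * rng.normal(size=200)      # "changes in parallel" with x1
X = np.column_stack([x1, x2, x3])
```

Here `x1` and `x3` mimic predictors that cannot be controlled independently: their VIFs are large, flagging the inflated standard errors and unstable signs the abstract warns about, while the independent `x2` has a VIF near 1.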
A Comprehensive Family-Based Replication Study of Schizophrenia Genes
Aberg, Karolina A.; Liu, Youfang; Bukszár, Jozsef; McClay, Joseph L.; Khachane, Amit N.; Andreassen, Ole A.; Blackwood, Douglas; Corvin, Aiden; Djurovic, Srdjan; Gurling, Hugh; Ophoff, Roel; Pato, Carlos N.; Pato, Michele T.; Riley, Brien; Webb, Todd; Kendler, Kenneth; O’Donovan, Mick; Craddock, Nick; Kirov, George; Owen, Mike; Rujescu, Dan; St Clair, David; Werge, Thomas; Hultman, Christina M.; Delisi, Lynn E.; Sullivan, Patrick; van den Oord, Edwin J.
2017-01-01
Importance Schizophrenia (SCZ) is a devastating psychiatric condition. Identifying the specific genetic variants and pathways that increase susceptibility to SCZ is critical to improve disease understanding and address the urgent need for new drug targets. Objective To identify SCZ susceptibility genes. Design We integrated results from a meta-analysis of 18 genome-wide association studies (GWAS) involving 1 085 772 single-nucleotide polymorphisms (SNPs) and 6 databases that showed significant informativeness for SCZ. The 9380 most promising SNPs were then specifically genotyped in an independent family-based replication study that, after quality control, consisted of 8107 SNPs. Setting Linkage meta-analysis, brain transcriptome meta-analysis, candidate gene database, OMIM, relevant mouse studies, and expression quantitative trait locus databases. Patients We included 11 185 cases and 10 768 control subjects from 6 databases and, after quality control, 6298 individuals (including 3286 cases) from 1811 nuclear families. Main Outcomes and Measures Case-control status for SCZ. Results Replication results showed a highly significant enrichment of SNPs with small P values. Of the SNPs with replication values of P<.01, the proportion of SNPs that had the same direction of effects as in the GWAS meta-analysis was 89% in the combined ancestry group (sign test, P<2.20×10⁻¹⁶) and 93% in subjects of European ancestry only (P<2.20×10⁻¹⁶). Our results supported the major histocompatibility complex region, showing a 3.7-fold overall enrichment of replication values of P<.01 in subjects of European ancestry. We replicated SNPs in TCF4 (P=2.53×10⁻¹⁰) and NOTCH4 (P=3.16×10⁻⁷) that are among the most robust SCZ findings. More novel findings included POM121L2 (P=3.51×10⁻⁷), AS3MT (P=9.01×10⁻⁷), CNNM2 (P=6.07×10⁻⁷), and NT5C2 (P=4.09×10⁻⁷). To explore the many small effects, we performed pathway analyses. The most significant pathways involved neuronal function
Automation of Flight Software Regression Testing
Tashakkor, Scott B.
2016-01-01
NASA is developing the Space Launch System (SLS) to be a heavy lift launch vehicle supporting human and scientific exploration beyond Earth orbit. SLS will have a common core stage, an upper stage, and different permutations of boosters and fairings to perform various crewed or cargo missions. Marshall Space Flight Center (MSFC) is writing the Flight Software (FSW) that will operate the SLS launch vehicle. The FSW is developed in an incremental manner based on "Agile" software techniques. As the FSW is incrementally developed, the functionality of the code needs to be tested continually to ensure that the integrity of the software is maintained. Manually testing the functionality of an ever-growing set of requirements and features is not an efficient solution, so testing must be automated to remain comprehensive. To support test automation, a framework for a regression test harness has been developed and used on SLS FSW. The test harness provides a modular design approach that can compile or read in the required information specified by the developer of the test. The modularity provides independence between groups of tests and the ability to add and remove tests without disturbing others. This gives the SLS FSW team a time-saving capability that is essential to meeting SLS Program technical and programmatic requirements. During development of SLS FSW, this technique has proved to be a useful tool for ensuring all requirements have been tested and that desired functionality is maintained as changes occur. It also provides a mechanism for developers to check the functionality of the code that they have developed. With this system, automation of regression testing is accomplished through a scheduling tool and/or commit hooks. Key advantages of this test harness capability include execution support for multiple independent test cases, the ability for developers to specify precisely what they are testing and how, the ability to add
Dynamic molecular networks: from synthetic receptors to self-replicators.
Otto, Sijbren
2012-12-18
Dynamic combinatorial libraries (DCLs) are molecular networks in which the network members exchange building blocks. The resulting product distribution is initially under thermodynamic control. Addition of a guest or template molecule tends to shift the equilibrium towards compounds that are receptors for the guest. This Account gives an overview of our work in this area. We have demonstrated the template-induced amplification of synthetic receptors, which has given rise to several high-affinity binders for cationic and anionic guests in highly competitive aqueous solution. The dynamic combinatorial approach allows for the identification of new receptors unlikely to be obtained through rational design. Receptor discovery is possible and more efficient in larger libraries. The dynamic combinatorial approach has the attractive characteristic of revealing interesting structures, such as catenanes, even when they are not specifically targeted. Using a transition-state analogue as a guest we can identify receptors with catalytic activity. Although DCLs were initially used with the reductionistic view of identifying new synthetic receptors or catalysts, it is becoming increasingly apparent that DCLs are also of interest in their own right. We performed detailed computational studies of the effect of templates on the product distributions of DCLs using DCLSim software. Template effects can be rationalized by considering the entire network: the system tends to maximize global host-guest binding energy. A data-fitting analysis of the response of the global position of the DCLs to the addition of the template using DCLFit software allowed us to disentangle individual host-guest binding constants. This powerful procedure eliminates the need for isolation and purification of the various individual receptors. Furthermore, local network binding events tend to propagate through the entire network and may be harnessed for transmitting and processing of information. We demonstrated
Incremental Net Effects in Multiple Regression
Lipovetsky, Stan; Conklin, Michael
2005-01-01
A regular problem in regression analysis is estimating the comparative importance of the predictors in the model. This work considers the 'net effects', or shares of the predictors in the coefficient of multiple determination, which is a widely used characteristic of the quality of a regression model. Estimation of the net effects can be a…
Regression Analysis and the Sociological Imagination
De Maio, Fernando
2014-01-01
Regression analysis is an important aspect of most introductory statistics courses in sociology but is often presented in contexts divorced from the central concerns that bring students into the discipline. Consequently, we present five lesson ideas that emerge from a regression analysis of income inequality and mortality in the USA and Canada.
Dealing with Outliers: Robust, Resistant Regression
Glasser, Leslie
2007-01-01
Least-squares linear regression is the best of statistics and it is the worst of statistics. The reasons for this paradoxical claim, arising from possible inapplicability of the method and the excessive influence of "outliers", are discussed and substitute regression methods based on median selection, which is both robust and resistant, are…
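One robust, resistant, median-based alternative of the kind the abstract alludes to is the Theil-Sen estimator (our choice for illustration; the author may discuss other median-selection methods): the slope is the median of all pairwise slopes, so a single outlier cannot drag the line.

```python
from statistics import median

def theil_sen(xs, ys):
    """Median-based robust line fit: y ~ a + b*x."""
    # Median of all pairwise slopes; skip vertical pairs.
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    b = median(slopes)
    # Intercept: median of the residual offsets.
    a = median(y - b * x for x, y in zip(xs, ys))
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 100]       # the last point is a gross outlier
a, b = theil_sen(xs, ys)     # the fitted slope stays at 2 despite the outlier
```

An ordinary least-squares fit of the same data would be pulled heavily toward the outlying point; the median selection leaves the line through the four well-behaved points untouched.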
Competing Risks Quantile Regression at Work
DEFF Research Database (Denmark)
Dlugosz, Stephan; Lo, Simon M. S.; Wilke, Ralf
2017-01-01
Despite its emergence as a frequently used method for the empirical analysis of multivariate data, quantile regression is yet to become a mainstream tool for the analysis of duration data. We present a pioneering empirical study on the grounds of a competing risks quantile regression model. We use...
Implementing Variable Selection Techniques in Regression.
Thayer, Jerome D.
Variable selection techniques in stepwise regression analysis are discussed. In stepwise regression, variables are added or deleted from a model in sequence to produce a final "good" or "best" predictive model. Stepwise computer programs are discussed and four different variable selection strategies are described. These…
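One of the selection strategies described, forward selection, can be sketched as follows. This is an illustrative toy, not Thayer's programs: at each step the predictor that most reduces the residual sum of squares is added, stopping when no addition helps enough.

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of the least-squares fit of y on X."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(((y - X @ coef) ** 2).sum())

def forward_select(X, y, min_improve=1.0):
    n, p = X.shape
    chosen, current = [], np.ones((n, 1))     # start from intercept only
    best = rss(current, y)
    while len(chosen) < p:
        scores = {j: rss(np.column_stack([current, X[:, j]]), y)
                  for j in range(p) if j not in chosen}
        j = min(scores, key=scores.get)       # candidate with lowest RSS
        if best - scores[j] < min_improve:    # stop: no real gain
            break
        chosen.append(j)
        current = np.column_stack([current, X[:, j]])
        best = scores[j]
    return chosen

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
y = 3 * X[:, 2] + 0.1 * rng.normal(size=100)  # only column 2 matters
```

Real stepwise programs use F-to-enter/F-to-remove criteria rather than the fixed `min_improve` threshold assumed here, but the add-the-best, stop-when-flat loop is the same.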
Regression Model With Elliptically Contoured Errors
Arashi, M; Tabatabaey, S M M
2012-01-01
For the regression model where the errors follow the elliptically contoured distribution (ECD), we consider the least squares (LS), restricted LS (RLS), preliminary test (PT), Stein-type shrinkage (S) and positive-rule shrinkage (PRS) estimators for the regression parameters. We compare the quadratic risks of the estimators to determine the relative dominance properties of the five estimators.
A Simulation Investigation of Principal Component Regression.
Allen, David E.
Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…
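Principal component regression, the technique under investigation, replaces the correlated predictors with their leading principal components before fitting. A minimal numpy sketch (our illustration, not the simulation code from the study):

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression: regress y on leading PCs of X."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T           # scores on the leading components
    gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    beta = Vt[:n_components].T @ gamma     # map back to predictor space
    return beta

rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])  # collinear pair
y = X.sum(axis=1) + 0.1 * rng.normal(size=100)

beta = pcr_fit(X, y, n_components=1)   # one component captures both predictors
```

Because the two predictors are nearly identical, OLS would split their joint effect erratically between them; projecting onto the single dominant component yields stable, nearly equal coefficients.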
Replication of Space-Shuttle Computers in FPGAs and ASICs
Ferguson, Roscoe C.
2008-01-01
A document discusses the replication of the functionality of the onboard space-shuttle general-purpose computers (GPCs) in field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The purpose of the replication effort is to enable utilization of proven space-shuttle flight software and software-development facilities to the extent possible during development of software for flight computers for a new generation of launch vehicles derived from the space shuttles. The replication involves specifying the instruction set of the central processing unit and the input/output processor (IOP) of the space-shuttle GPC in a hardware description language (HDL). The HDL is synthesized to form a "core" processor in an FPGA or, less preferably, in an ASIC. The core processor can be used to create a flight-control card to be inserted into a new avionics computer. The IOP of the GPC as implemented in the core processor could be designed to support data-bus protocols other than that of a multiplexer interface adapter (MIA) used in the space shuttle. Hence, a computer containing the core processor could be tailored to communicate via the space-shuttle GPC bus and/or one or more other buses.
Should metacognition be measured by logistic regression?
Rausch, Manuel; Zehetleitner, Michael
2017-03-01
Are logistic regression slopes suitable to quantify metacognitive sensitivity, i.e. the efficiency with which subjective reports differentiate between correct and incorrect task responses? We analytically show that logistic regression slopes are independent from rating criteria in one specific model of metacognition, which assumes (i) that rating decisions are based on sensory evidence generated independently of the sensory evidence used for primary task responses and (ii) that the distributions of evidence are logistic. Given a hierarchical model of metacognition, logistic regression slopes depend on rating criteria. According to all considered models, regression slopes depend on the primary task criterion. A reanalysis of previous data revealed that massive numbers of trials are required to distinguish between hierarchical and independent models with tolerable accuracy. It is argued that researchers who wish to use logistic regression as measure of metacognitive sensitivity need to control the primary task criterion and rating criteria. Copyright © 2017 Elsevier Inc. All rights reserved.
Atherosclerotic plaque regression: fact or fiction?
Shanmugam, Nesan; Román-Rego, Ana; Ong, Peter; Kaski, Juan Carlos
2010-08-01
Coronary artery disease is the major cause of death in the western world. The formation and rapid progression of atheromatous plaques can lead to serious cardiovascular events in patients with atherosclerosis. The better understanding, in recent years, of the mechanisms leading to atheromatous plaque growth and disruption and the availability of powerful HMG CoA-reductase inhibitors (statins) has permitted the consideration of plaque regression as a realistic therapeutic goal. This article reviews the existing evidence underpinning current therapeutic strategies aimed at achieving atherosclerotic plaque regression. In this review we also discuss imaging modalities for the assessment of plaque regression, predictors of regression and whether plaque regression is associated with a survival benefit.
Pathological assessment of liver fibrosis regression
Directory of Open Access Journals (Sweden)
WANG Bingqiong
2017-03-01
Full Text Available Hepatic fibrosis is the common pathological outcome of chronic hepatic diseases. An accurate assessment of fibrosis degree provides an important reference for a definite diagnosis of diseases, treatment decision-making, treatment outcome monitoring, and prognostic evaluation. At present, many clinical studies have proven that regression of hepatic fibrosis and early-stage liver cirrhosis can be achieved by effective treatment, and a correct evaluation of fibrosis regression has become a hot topic in clinical research. Liver biopsy has long been regarded as the gold standard for the assessment of hepatic fibrosis, and thus it plays an important role in the evaluation of fibrosis regression. This article reviews the clinical application of current pathological staging systems in the evaluation of fibrosis regression from the perspectives of semi-quantitative scoring system, quantitative approach, and qualitative approach, in order to propose a better pathological evaluation system for the assessment of fibrosis regression.
Dynamics of Escherichia coli chromosome segregation during multifork replication.
Nielsen, Henrik J; Youngren, Brenda; Hansen, Flemming G; Austin, Stuart
2007-12-01
Slowly growing Escherichia coli cells have a simple cell cycle, with replication and progressive segregation of the chromosome completed before cell division. In rapidly growing cells, initiation of replication occurs before the previous replication rounds are complete. At cell division, the chromosomes contain multiple replication forks and must be segregated while this complex pattern of replication is still ongoing. Here, we show that replication and segregation continue in step, starting at the origin and progressing to the replication terminus. Thus, early-replicated markers on the multiple-branched chromosomes continue to separate soon after replication to form separate protonucleoids, even though they are not segregated into different daughter cells until later generations. The segregation pattern follows the pattern of chromosome replication and does not follow the cell division cycle. No extensive cohesion of sister DNA regions was seen at any growth rate. We conclude that segregation is driven by the progression of the replication forks.
Quantile regression provides a fuller analysis of speed data.
Hewson, Paul
2008-03-01
Considerable interest already exists in terms of assessing percentiles of speed distributions, for example monitoring the 85th percentile speed is a common feature of the investigation of many road safety interventions. However, unlike the mean, where t-tests and ANOVA can be used to provide evidence of a statistically significant change, inference on these percentiles is much less common. This paper examines the potential role of quantile regression for modelling the 85th percentile, or any other quantile. Given that crash risk may increase disproportionately with increasing relative speed, it may be argued these quantiles are of more interest than the conditional mean. In common with the more usual linear regression, quantile regression admits a simple test as to whether the 85th percentile speed has changed following an intervention in an analogous way to using the t-test to determine if the mean speed has changed by considering the significance of parameters fitted to a design matrix. Having briefly outlined the technique and briefly examined an application with a widely published dataset concerning speed measurements taken around the introduction of signs in Cambridgeshire, this paper will demonstrate the potential for quantile regression modelling by examining recent data from Northamptonshire collected in conjunction with a "community speed watch" programme. Freely available software is used to fit these models and it is hoped that the potential benefits of using quantile regression methods when examining and analysing speed data are demonstrated.
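The idea behind quantile regression is that the τ-th conditional quantile minimizes the "pinball" (check) loss. The sketch below fits only an intercept by subgradient descent, recovering the 85th percentile of a sample of speeds; it is illustrative only, not the models fitted in the paper.

```python
def fit_quantile(ys, tau, lr=0.05, steps=20000):
    """Minimize mean pinball loss sum of tau*(y-q)+ and (1-tau)*(q-y)+ over q."""
    q = 0.0
    for _ in range(steps):
        # Subgradient of the mean pinball loss with respect to q:
        # -tau for observations above q, (1 - tau) for those at or below.
        g = sum(-tau if y > q else (1 - tau) for y in ys) / len(ys)
        q -= lr * g
    return q

speeds = [28, 30, 31, 33, 34, 35, 36, 38, 40, 55]   # one extreme speeder
q85 = fit_quantile(speeds, 0.85)                     # converges near 40
```

With a design matrix instead of a lone intercept, the same loss yields the conditional 85th percentile as a function of covariates, which is what allows the t-test-style inference on percentile changes the paper describes.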
A Self-Replicating Ligase Ribozyme
Paul, Natasha; Joyce, Gerald F.
2002-01-01
A self-replicating molecule directs the covalent assembly of component molecules to form a product that is of identical composition to the parent. When the newly formed product also is able to direct the assembly of product molecules, the self-replicating system can be termed autocatalytic. A self-replicating system was developed based on a ribozyme that catalyzes the assembly of additional copies of itself through an RNA-catalyzed RNA ligation reaction. The R3C ligase ribozyme was redesigned so that it would ligate two substrates to generate an exact copy of itself, which then would behave in a similar manner. This self-replicating system depends on the catalytic nature of the RNA for the generation of copies. A linear dependence was observed between the initial rate of formation of new copies and the starting concentration of ribozyme, consistent with exponential growth. The autocatalytic rate constant was 0.011 per min, whereas the initial rate of reaction in the absence of pre-existing ribozyme was only 3.3 × 10⁻¹¹ M per min. Exponential growth was limited, however, because newly formed ribozyme molecules had greater difficulty forming a productive complex with the two substrates. Further optimization of the system may lead to the sustained exponential growth of ribozymes that undergo self-replication.
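A quick numerical illustration (ours, not the authors' analysis) of why a linear dependence of the initial rate on ribozyme concentration implies exponential growth: d[R]/dt = k[R], integrated here by Euler steps with the reported k = 0.011 per minute.

```python
k = 0.011                 # autocatalytic rate constant, per minute (from the abstract)
r = 1.0                   # starting ribozyme concentration, arbitrary units
dt = 0.01                 # minutes per Euler step
trace = []
for step in range(int(120 / dt)):   # simulate two hours of unchecked growth
    r += k * r * dt                 # rate proportional to current concentration
    trace.append(r)
```

After 120 minutes the concentration reaches about e^(0.011·120) ≈ 3.74 times its starting value; in the real system this growth stalls once product molecules begin sequestering the substrates.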
COPI is required for enterovirus 71 replication.
Directory of Open Access Journals (Sweden)
Jianmin Wang
Enterovirus 71 (EV71), a member of the Picornaviridae family, is found in Asian countries where it causes a wide range of human diseases. No effective therapy is available for the treatment of these infections. Picornaviruses undergo RNA replication in association with membranes of infected cells. COPI and COPII have been shown to be involved in the formation of picornavirus-induced vesicles. Replication of several picornaviruses, including poliovirus and Echovirus 11 (EV11), is dependent on COPI or COPII. Here, we report that COPI, but not COPII, is required for EV71 replication. Replication of EV71 was inhibited by brefeldin A and golgicide A, inhibitors of COPI activity. Furthermore, we found that the EV71 2C protein interacted with COPI subunits by co-immunoprecipitation and GST pull-down assays, indicating that the COPI coatomer might be directed to the viral replication complex through the viral 2C protein. Additionally, because the pathway is conserved among different species of enteroviruses, it may represent a novel target for antiviral therapies.
Extremal dynamics in random replicator ecosystems
Energy Technology Data Exchange (ETDEWEB)
Kärenlampi, Petri P., E-mail: petri.karenlampi@uef.fi
2015-10-02
The seminal numerical experiment by Bak and Sneppen (BS) is repeated, along with computations with replicator models including a greater number of features. Both types of models self-organize and obey power-law scaling for the size distribution of activity cycles. However, species extinction within the replicator models interferes with the BS self-organized critical (SOC) activity. Speciation–extinction dynamics ruins any stationary state that might contain a steady size distribution of activity cycles. The BS-type activity appears to be a dissimilar phenomenon in comparison to speciation–extinction dynamics in the replicator system. No criticality is found in the speciation–extinction dynamics. Neither are speciations and extinctions in real biological macroevolution known to involve any diverging distributions or self-organization towards any critical state. Consequently, biological macroevolution probably is not a self-organized critical phenomenon.
Highlights:
• Extremal dynamics organizes random replicator ecosystems into two phases in fitness space.
• Replicator systems show power-law scaling of activity.
• Species extinction interferes with Bak–Sneppen-type mutation activity.
• Speciation–extinction dynamics does not show any critical phase transition.
• Biological macroevolution probably is not a self-organized critical phenomenon.
Unlocking the potential of metagenomics through replicated experimental design
Knight, R.; Jansson, J.; Field, D.; Fierer, N.; Desai, N.; Fuhrman, J.A.; Hugenholtz, P.; Van der Lelie, D.; Meyer, F.; Stevens, R.; Bailey, M.J.; Gordon, J.I.; Kowalchuk, G.A.; Gilbert, J.A.
2012-01-01
Metagenomics holds enormous promise for discovering novel enzymes and organisms that are biomarkers or drivers of processes relevant to disease, industry and the environment. In the past two years, we have seen a paradigm shift in metagenomics to the application of cross-sectional and longitudinal s
Estimation in partial linear EV models with replicated observations
Institute of Scientific and Technical Information of China (English)
CUI; Hengjian
2004-01-01
The aim of this work is to construct parameter estimators in partial linear errors-in-variables (EV) models and explore their asymptotic properties. Unlike related references, the assumption of a known error covariance matrix is removed when the sample can be repeatedly drawn at each design point from the model. Estimators of the regression parameters of interest, the model error variance, and the nonparametric function are constructed. Under some regularity conditions, all of the estimators are proved to be strongly consistent. Meanwhile, asymptotic normality for the estimator of the regression parameter is also established. A simulation study is reported to illustrate the asymptotic results.
A Frisch-Newton Algorithm for Sparse Quantile Regression
Institute of Scientific and Technical Information of China (English)
Roger Koenker; Pin Ng
2005-01-01
Recent experience has shown that interior-point methods using a log barrier approach are far superior to classical simplex methods for computing solutions to large parametric quantile regression problems. In many large empirical applications, the design matrix has a very sparse structure. A typical example is the classical fixed-effect model for panel data where the parametric dimension of the model can be quite large, but the number of non-zero elements is quite small. Adopting recent developments in sparse linear algebra, we introduce a modified version of the Frisch-Newton algorithm for quantile regression described in Portnoy and Koenker [28]. The new algorithm substantially reduces the storage (memory) requirements and increases computational speed. The modified algorithm also facilitates the development of nonparametric quantile regression methods. The pseudo design matrices employed in nonparametric quantile regression smoothing are inherently sparse in both the fidelity and roughness penalty components. Exploiting the sparse structure of these problems opens up a whole range of new possibilities for multivariate smoothing on large data sets via ANOVA-type decomposition and partial linear models.
Smith, Owen K.; Aladjem, Mirit I.
2014-01-01
The DNA replication program is, in part, determined by the epigenetic landscape that governs local chromosome architecture and directs chromosome duplication. Replication must coordinate with other biochemical processes occurring concomitantly on chromatin, such as transcription and remodeling, to insure accurate duplication of both genetic and epigenetic features and to preserve genomic stability. The importance of genome architecture and chromatin looping in coordinating cellular processes ...
Evolution of Database Replication Technologies for WLCG
Baranowski, Zbigniew; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca
2015-01-01
In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.
Replicating Cardiovascular Condition-Birth Month Associations
Li, Li; Boland, Mary Regina; Miotto, Riccardo; Tatonetti, Nicholas P.; Dudley, Joel T.
2016-01-01
Independent replication is vital for study findings drawn from Electronic Health Records (EHR). This replication study evaluates the relationship between seasonal effects at birth and lifetime cardiovascular condition risk. We performed a Season-wide Association Study on 1,169,599 patients from Mount Sinai Hospital (MSH) to compute phenome-wide associations between birth month and CVD. We then evaluated if seasonal patterns found at MSH matched those reported at Columbia University Medical Center. Coronary arteriosclerosis, essential hypertension, angina, and pre-infarction syndrome passed phenome-wide significance and their seasonal patterns matched those previously reported. Atrial fibrillation, cardiomyopathy, and chronic myocardial ischemia had consistent patterns but were not phenome-wide significant. We confirm that CVD risk peaks for those born in the late winter/early spring among the evaluated patient populations. The replication findings bolster evidence for a seasonal birth month effect in CVD. Further study is required to identify the environmental and developmental mechanisms. PMID:27624541
Synchronization of DNA array replication kinetics
Manturov, Alexey O.; Grigoryev, Anton V.
2016-04-01
In the present work we discuss the features of DNA replication kinetics in the case of multiple simultaneously elongating DNA fragments. The interaction between replicated DNA fragments is mediated by the free protons that appear with every nucleotide attachment at the free end of an elongating DNA fragment. There is thus feedback between the free proton concentration and DNA-polymerase activity, which manifests as a dependence of the elongation rate. We develop a numerical model based on a cellular automaton that can simulate the elongation stage (growth of DNA strands) under the conditions described above, and we study the possibility of synchronizing the movement of the DNA polymerases. The results obtained numerically can be useful for detecting DNA polymerase movement and visualizing the elongation process in the case of massive DNA replication, e.g., under PCR conditions, or for evaluating "sequencing by synthesis" sequencing devices.
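The feedback loop described above can be sketched as a toy cellular automaton. This is our own illustration of the mechanism, not the authors' model: each nucleotide attachment releases a proton, and the accumulated proton concentration slows every polymerase through a common rate factor.

```python
import random

def simulate(n_strands=50, steps=500, seed=4):
    """Toy CA: strands elongate stochastically; protons provide global feedback."""
    rng = random.Random(seed)
    lengths = [0] * n_strands
    protons = 0.0
    for _ in range(steps):
        # Feedback: more free protons means a lower per-step elongation rate.
        rate = 1.0 / (1.0 + protons / 1000.0)
        for i in range(n_strands):
            if rng.random() < rate:
                lengths[i] += 1     # attach one nucleotide...
                protons += 1        # ...releasing one proton
        protons *= 0.99             # protons slowly diffuse away
    return lengths, protons

lengths, protons = simulate()
```

Because every strand experiences the same proton-dependent rate, bursts of fast elongation raise the proton level and throttle all polymerases together, which is the coupling that makes synchronized fork movement plausible in the full model.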
GFLV replication in electroporated grapevine protoplasts.
Valat; Toutain; Courtois; Gaire; Decout; Pinck; Mauro; Burrus
2000-06-29
Grapevine fanleaf virus (GFLV), responsible for the economically important court-noué disease, is exclusively transmitted to its natural host in the vineyards through Xiphinema nematodes. We have developed direct inoculation of GFLV into grapevine through protoplast electroporation. Protoplasts were isolated from mesophyll of in vitro-grown plants and from embryogenic cell suspensions. Permeation conditions were determined by monitoring calcein uptake. Low salt poration medium was selected. Electrical conditions leading to strong transient gene expression were also tested for GFLV inoculation (isolate F13). GFLV replication was detected with either virus particles (2 µg) or viral RNA (10 ng) in both protoplast populations, as shown by anti-P38 Western blotting. Direct inoculation and replication were also observed with Arabis mosaic virus (ArMV), a closely related nepovirus, as well as with another GFLV isolate. These results will be valuable in grapevine biotechnology, for GFLV replication studies, transgenic plant screening for GFLV resistance, and biorisk evaluation.
Quantile regression applied to spectral distance decay
Rocchini, D.; Cade, B.S.
2008-01-01
Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regression can instead be used to evaluate trends in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regression to estimate the decay of species similarity versus spectral distance. The estimated decay rates were statistically different from zero, with species similarity increasing as habitats become spectrally more similar. We demonstrated the power of applying quantile regression to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
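The contrast between a mean trend and an upper-quantile trend can be sketched on synthetic zero-inflated distance-decay data. The data-generating process below is illustrative, and the quantile fit uses an iteratively-reweighted-least-squares approximation to the pinball-loss estimator rather than the exact linear-programming solution:

```python
import numpy as np

# Illustrative distance-decay data: species similarity has a decaying upper
# envelope, with many low values scattered below it (the "noise of zeroes").
rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 300)                    # spectral distance
ceiling = 1.0 - 0.005 * x                           # assumed upper envelope
y = ceiling * rng.uniform(0.2, 1.0, size=x.size)    # zero-inflated scatter

def fit(x, y, q=None, iters=300, eps=1e-8):
    """OLS fit when q is None; else an IRLS approximation of the q-th
    quantile regression (weights derived from the pinball loss)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    if q is None:
        return beta
    for _ in range(iters):
        r = y - X @ beta
        w = np.where(r > 0, q, 1 - q) / np.maximum(np.abs(r), eps)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

b_ols = fit(x, y)            # mean trend, pulled down by the low values
b_q90 = fit(x, y, q=0.9)     # upper-quantile trend, close to the envelope
```

The 0.9-quantile line tracks the decaying envelope, while the OLS line sits well below it, which is the pattern the letter exploits.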
Hypotheses testing for fuzzy robust regression parameters
Energy Technology Data Exchange (ETDEWEB)
Kula, Kamile Sanli [Ahi Evran University, Department of Mathematics, 40200 Kirsehir (Turkey)], E-mail: sanli2004@hotmail.com; Apaydin, Aysen [Ankara University, Department of Statistics, 06100 Ankara (Turkey)], E-mail: apaydin@science.ankara.edu.tr
2009-11-30
The classical least squares (LS) method is widely used in regression analysis because its estimates are easy to compute and well established. However, LS estimators are very sensitive to outliers and to other deviations from the basic assumptions of normal theory [Huynh H. A comparison of four approaches to robust regression. Psychol Bull 1982;92:505-12; Stephenson D. 2000. Available from: (http://folk.uib.no/ngbnk/kurs/notes/node38.html); Xu R, Li C. Multidimensional least-squares fitting with a fuzzy model. Fuzzy Sets and Systems 2001;119:215-23.]. If outliers exist in the data set, robust methods are preferred for estimating parameter values. We propose a fuzzy robust regression method using fuzzy numbers when x is crisp and Y is a triangular fuzzy number; in case of outliers in the data set, a weight matrix is defined by the membership function of the residuals. In fuzzy robust regression, fuzzy sets and fuzzy regression analysis are used for ranking residuals and for estimating regression parameters, respectively [Sanli K, Apaydin A. Fuzzy robust regression analysis based on the ranking of fuzzy sets. Internat. J. Uncertainty Fuzziness and Knowledge-Based Syst 2008;16:663-81.]. In this study, standard deviation estimates are obtained for the parameters via the defined weight matrix. Moreover, we propose another point of view on hypothesis testing for the parameters.
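The core idea, a residual-driven weight matrix that downweights outliers, can be sketched for crisp data. The triangular membership function and the c = 4 cutoff below are illustrative choices, not the authors' fuzzy-set construction:

```python
import numpy as np

# Iteratively reweighted least squares where each observation's weight is a
# triangular membership function of its scaled residual: weight 1 at zero
# residual, falling linearly to 0 at c * scale (illustrative form).
def robust_fit(x, y, iters=25, c=4.0):
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        w = np.clip(1.0 - np.abs(r) / (c * scale), 0.0, 1.0)          # membership weight
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

x = np.arange(20.0)
y = 2.0 + 0.5 * x
y[0], y[1] = 50.0, 60.0          # two gross outliers
beta = robust_fit(x, y)           # recovers intercept ~2 and slope ~0.5
```

The two corrupted points end up with membership weight zero, so the final fit is driven by the clean observations, which is the behavior the paper's weight matrix is designed to produce.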
Regression modeling of ground-water flow
Cooley, R.L.; Naff, R.L.
1985-01-01
Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)
Enders, Felicity
2013-12-01
Although regression is widely used for reading and publishing in the medical literature, no instruments were previously available to assess students' understanding. The goal of this study was to design and assess such an instrument for graduate students in Clinical and Translational Science and Public Health. A 27-item REsearch on Global Regression Expectations in StatisticS (REGRESS) quiz was developed through an iterative process. Consenting students taking a course on linear regression in a Clinical and Translational Science program completed the quiz pre- and postcourse. Student results were compared to practicing statisticians with a master's or doctoral degree in statistics or a closely related field. Fifty-two students responded precourse, 59 responded postcourse, and 22 practicing statisticians completed the quiz. The mean (SD) score was 9.3 (4.3) for students precourse and 19.0 (3.5) postcourse; the difference was statistically significant. The REGRESS quiz was internally reliable (Cronbach's alpha 0.89). The initial validation is quite promising, with statistically significant and meaningful differences across time and study populations. Further work is needed to validate the quiz across multiple institutions. © 2013 Wiley Periodicals, Inc.
Suppression of Adenovirus Replication by Cardiotonic Steroids.
Grosso, Filomena; Stoilov, Peter; Lingwood, Clifford; Brown, Martha; Cochrane, Alan
2017-02-01
The dependence of adenovirus on the host pre-mRNA splicing machinery for expression of its complete genome potentially makes it vulnerable to modulators of RNA splicing, such as digoxin and digitoxin. Both drugs reduced the yields of four human adenoviruses (HAdV-A31, -B35, and -C5 and a species D conjunctivitis isolate) by at least 2 to 3 logs by affecting one or more steps needed for genome replication. Immediate early E1A protein levels are unaffected by the drugs, but synthesis of the delayed protein E4orf6 and the major late capsid protein hexon is compromised. Quantitative reverse transcription-PCR (qRT-PCR) analyses revealed that both drugs altered E1A RNA splicing (favoring the production of 13S over 12S RNA) early in infection and partially blocked the transition from 12S and 13S to 9S RNA at late stages of virus replication. Expression of multiple late viral protein mRNAs was lost in the presence of either drug, consistent with the observed block in viral DNA replication. The antiviral effect was dependent on the continued presence of the drug and was rapidly reversible. RIDK34, a derivative of convallotoxin, although having more potent antiviral activity, did not show an improved selectivity index. All three drugs reduced metabolic activity to some degree without evidence of cell death. By blocking adenovirus replication at one or more steps beyond the onset of E1A expression and prior to genome replication, digoxin and digitoxin show potential as antiviral agents for treatment of serious adenovirus infections. Furthermore, understanding the mechanism(s) by which digoxin and digitoxin inhibit adenovirus replication will guide the development of novel antiviral therapies.
Deep Human Parsing with Active Template Regression.
Liang, Xiaodan; Liu, Si; Shen, Xiaohui; Yang, Jianchao; Liu, Luoqi; Dong, Jian; Lin, Liang; Yan, Shuicheng
2015-12-01
In this work, the human parsing task, namely decomposing a human image into semantic fashion/body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion/body item is expressed as a linear combination of learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale, and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. A deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks: the first CNN, with max-pooling, is designed to predict the template coefficients for each label mask, while the second CNN, without max-pooling, preserves sensitivity to label mask position and accurately predicts the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset demonstrate the significant superiority of the ATR framework over other state-of-the-art methods for human parsing. In particular, the F1-score reaches 64.38 percent with our ATR framework, significantly higher than the 44.76 percent achieved by the state-of-the-art algorithm [28].
Variable and subset selection in PLS regression
DEFF Research Database (Denmark)
Høskuldsson, Agnar
2001-01-01
The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion...... is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than...
Applied Regression Modeling A Business Approach
Pardoe, Iain
2012-01-01
An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a
Regressive language in severe head injury.
Thomsen, I V; Skinhoj, E
1976-09-01
In a follow-up study of 50 patients with severe head injuries three patients had echolalia. One patient with initially global aphasia had echolalia for some weeks when he started talking. Another patient with severe diffuse brain damage, dementia, and emotional regression had echolalia. The dysfunction was considered a detour performance. In the third patient echolalia and palilalia were details in a total pattern of regression lasting for months. The patient, who had extensive frontal atrophy secondary to a very severe head trauma, presented an extreme state of regression returning to a foetal-body pattern and behaving like a baby.
Regression of altitude-produced cardiac hypertrophy.
Sizemore, D. A.; Mcintyre, T. W.; Van Liere, E. J.; Wilson, M. F.
1973-01-01
The rate of regression of cardiac hypertrophy with time has been determined in adult male albino rats. The hypertrophy was induced by intermittent exposure to simulated high altitude. The percentage hypertrophy was much greater (46%) in the right ventricle than in the left (16%). The regression could be adequately fitted to a single exponential function with a half-time of 6.73 plus or minus 0.71 days (90% CI). There was no significant difference in the rates of regression for the two ventricles.
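The single-exponential regression with a half-time can be reproduced on synthetic data. The decay curve below is noiseless and illustrative; fitting proceeds by linear regression on log-transformed values, and the half-time follows from the rate constant as t½ = ln 2 / k:

```python
import math
import numpy as np

# Synthetic regression data: H(t) = H0 * exp(-k t), with k chosen so the
# half-time matches the reported 6.73 days (illustrative, noiseless data).
k_true = math.log(2) / 6.73
t = np.arange(0.0, 40.0, 2.0)            # days after return to normal altitude
h = 46.0 * np.exp(-k_true * t)           # percent hypertrophy remaining

# Fit the single exponential by ordinary least squares on log-transformed data.
slope, intercept = np.polyfit(t, np.log(h), 1)
k_hat = -slope
half_time = math.log(2) / k_hat          # recovers ~6.73 days
```

With real, noisy measurements, a nonlinear fit of the exponential itself (rather than the log-linear shortcut) would give the confidence interval quoted in the abstract.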
Chromatin challenges during DNA replication and repair
DEFF Research Database (Denmark)
Groth, Anja; Rocha, Walter; Verreault, Alain
2007-01-01
Inheritance and maintenance of the DNA sequence and its organization into chromatin are central for eukaryotic life. To orchestrate DNA-replication and -repair processes in the context of chromatin is a challenge, both in terms of accessibility and maintenance of chromatin organization. To meet...... the challenge of maintenance, cells have evolved efficient nucleosome-assembly pathways and chromatin-maturation mechanisms that reproduce chromatin organization in the wake of DNA replication and repair. The aim of this Review is to describe how these pathways operate and to highlight how the epigenetic...
Involvement of Autophagy in Coronavirus Replication
Directory of Open Access Journals (Sweden)
Paul Britton
2012-11-01
Full Text Available Coronaviruses are single stranded, positive sense RNA viruses, which induce the rearrangement of cellular membranes upon infection of a host cell. This provides the virus with a platform for the assembly of viral replication complexes, improving efficiency of RNA synthesis. The membranes observed in coronavirus infected cells include double membrane vesicles. By nature of their double membrane, these vesicles resemble cellular autophagosomes, generated during the cellular autophagy pathway. In addition, coronavirus infection has been demonstrated to induce autophagy. Here we review current knowledge of coronavirus induced membrane rearrangements and the involvement of autophagy or autophagy protein microtubule associated protein 1B light chain 3 (LC3 in coronavirus replication.
Replication, recombination, and repair: going for the gold.
Klein, Hannah L; Kreuzer, Kenneth N
2002-03-01
DNA recombination is now appreciated to be integral to DNA replication and cell survival. Recombination allows replication to successfully maneuver through the roadblocks of damaged or collapsed replication forks. The signals and controls that permit cells to transition between replication and recombination modes are now being identified.
Direct visualization of replication dynamics in early zebrafish embryos.
Kuriya, Kenji; Higashiyama, Eriko; Avşar-Ban, Eriko; Okochi, Nanami; Hattori, Kaede; Ogata, Shin; Takebayashi, Shin-Ichiro; Ogata, Masato; Tamaru, Yutaka; Okumura, Katsuzumi
2016-05-01
We analyzed DNA replication in early zebrafish embryos. The replicating DNA of whole embryos was labeled with the thymidine analog 5-ethynyl-2'-deoxyuridine (EdU), and spatial regulation of replication sites was visualized in single embryo-derived cells. The results unveiled uncharacterized replication dynamics during zebrafish early embryogenesis.
In precision agriculture, regression has been used widely to quantify the relationship between soil attributes and other environmental variables. However, spatial correlation existing in soil samples usually makes the regression model suboptimal. In this study, a regression-kriging method was attemp...
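The regression-kriging idea, a regression trend plus kriged interpolation of its residuals, can be sketched in a few lines. The exponential covariance model and its parameters below are illustrative assumptions, not those of the study:

```python
import numpy as np

# Minimal regression-kriging sketch: (1) fit the trend between the soil
# attribute and covariates by OLS, (2) interpolate the OLS residuals by
# simple kriging with an assumed exponential covariance C(h) = sill*exp(-h/range),
# (3) add trend and kriged residual at the prediction locations.
def regression_kriging(coords, X, y, coords_new, X_new,
                       range_=10.0, sill=1.0, nugget=1e-6):
    A = np.column_stack([np.ones(len(X)), X])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]          # step 1: trend
    resid = y - A @ beta
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    C = sill * np.exp(-d / range_) + nugget * np.eye(len(y))
    d0 = np.linalg.norm(coords[:, None, :] - coords_new[None, :, :], axis=2)
    c0 = sill * np.exp(-d0 / range_)
    resid_pred = np.linalg.solve(C, c0).T @ resid        # step 2: kriged residuals
    A_new = np.column_stack([np.ones(len(X_new)), X_new])
    return A_new @ beta + resid_pred                     # step 3: trend + residual

# Illustrative samples: attribute = 2 + 3*covariate + spatially varying residual.
coords = np.array([[0., 0.], [0., 5.], [5., 0.], [5., 5.], [2., 3.]])
X = np.array([1., 2., 3., 4., 5.])
y = 2.0 + 3.0 * X + np.array([0.5, -0.3, 0.2, 0.1, -0.4])
pred = regression_kriging(coords, X, y, coords[[2]], X[[2]])
```

At an observed location the kriged residual reproduces the OLS residual almost exactly (up to the nugget), so the prediction honors the data, which plain regression does not.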
Spatial regulation and organization of DNA replication within the nucleus.
Natsume, Toyoaki; Tanaka, Tomoyuki U
2010-01-01
Duplication of chromosomal DNA is a temporally and spatially regulated process. The timing of DNA replication initiation at various origins is highly coordinated; some origins fire early and others late during S phase. Moreover, inside the nuclei, the bulk of DNA replication is physically organized in replication factories, consisting of DNA polymerases and other replication proteins. In this review article, we discuss how DNA replication is organized and regulated spatially within the nucleus and how this spatial organization is linked to temporal regulation. We focus on DNA replication in budding yeast and fission yeast and, where applicable, compare yeast DNA replication with that in bacteria and metazoans.
Regulation of eukaryotic DNA replication and nuclear structure
Institute of Scientific and Technical Information of China (English)
WU Jiarui
1999-01-01
In eukaryotes, nuclear structure is a key component of cellular function. More and more evidence shows that nuclear structure plays an important role in regulating DNA replication. The nuclear structure provides a physical barrier for replication licensing, participates in the decision of where DNA replication initiates, and organizes replication proteins into replication factories for DNA replication. Through these works, new concepts on the regulation of DNA replication have emerged, which will be discussed in this minireview.
Support vector regression-based internal model control
Institute of Scientific and Technical Information of China (English)
HUANG Yan-wei; PENG Tie-gen
2007-01-01
This paper proposes a design of internal model control systems for processes with delay using support vector regression (SVR). The proposed system fully exploits the excellent nonlinear estimation performance of SVR and its structural risk minimization principle. Closed-loop stability and steady-state error are analyzed in the presence of modeling errors. Simulations show that the proposed control system achieves better control performance than a neural-network-based one when the training samples are few and noisy.
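The model-identification step at the heart of the scheme, learning a nonlinear plant model from input/output samples, can be sketched as follows. Kernel ridge regression is used here as a numpy-only stand-in for SVR (both are kernel methods; the epsilon-insensitive loss of true SVR is replaced by a squared loss), and the static tanh plant is an illustrative assumption:

```python
import numpy as np

# Identify an internal model of a (static, illustrative) nonlinear plant from
# input/output data, using kernel ridge regression with an RBF kernel as a
# simple stand-in for support vector regression.
def fit_kernel_model(u, y, gamma=0.5, lam=1e-3):
    K = np.exp(-gamma * (u[:, None] - u[None, :]) ** 2)    # RBF Gram matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(u)), y)   # dual coefficients
    return lambda u_new: np.exp(-gamma * (u_new[:, None] - u[None, :]) ** 2) @ alpha

u = np.linspace(-3.0, 3.0, 60)     # excitation inputs
y = np.tanh(u)                     # plant response (assumed plant)
model = fit_kernel_model(u, y)     # internal model for the controller
```

In an internal model control loop this fitted model runs in parallel with the real plant, and the controller acts on the difference between the two outputs.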
Assembly of Slx4 signaling complexes behind DNA replication forks.
Balint, Attila; Kim, TaeHyung; Gallo, David; Cussiol, Jose Renato; Bastos de Oliveira, Francisco M; Yimit, Askar; Ou, Jiongwen; Nakato, Ryuichiro; Gurevich, Alexey; Shirahige, Katsuhiko; Smolka, Marcus B; Zhang, Zhaolei; Brown, Grant W
2015-08-13
Obstructions to replication fork progression, referred to collectively as DNA replication stress, challenge genome stability. In Saccharomyces cerevisiae, cells lacking RTT107 or SLX4 show genome instability and sensitivity to DNA replication stress and are defective in the completion of DNA replication during recovery from replication stress. We demonstrate that Slx4 is recruited to chromatin behind stressed replication forks, in a region that is spatially distinct from that occupied by the replication machinery. Slx4 complex formation is nucleated by Mec1 phosphorylation of histone H2A, which is recognized by the constitutive Slx4 binding partner Rtt107. Slx4 is essential for recruiting the Mec1 activator Dpb11 behind stressed replication forks, and Slx4 complexes are important for full activity of Mec1. We propose that Slx4 complexes promote robust checkpoint signaling by Mec1 by stably recruiting Dpb11 within a discrete domain behind the replication fork, during DNA replication stress.
Multiple Instance Regression with Structured Data
Wagstaff, Kiri L.; Lane, Terran; Roper, Alex
2008-01-01
This slide presentation reviews the use of multiple instance regression with structured data from multiple related data sets. It applies the concept to a practical problem: estimating crop yield from remotely sensed, country-wide weekly observations.
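In multiple instance regression each label (e.g., a county's crop yield) is attached to a bag of instances (e.g., its weekly observations), not to a single feature vector. The simplest baseline, shown below on illustrative synthetic data, collapses each bag to its mean and regresses on the bag-level labels; the full MIR formulation instead models which instances in a bag are relevant:

```python
import numpy as np

# "Aggregate then regress" baseline for multiple instance regression:
# collapse each bag to its per-feature mean, then fit OLS on bag labels.
def mir_mean_baseline(bags, labels):
    Z = np.array([b.mean(axis=0) for b in bags])          # one row per bag
    A = np.column_stack([np.ones(len(Z)), Z])
    return np.linalg.lstsq(A, np.array(labels), rcond=None)[0]

# Illustrative data: 4 bags of 8 instances with 2 features each; the label is
# an exact linear function of the bag's mean features.
rng = np.random.default_rng(0)
bags = [rng.normal(loc=m, size=(8, 2)) for m in (0.0, 1.0, 2.0, 3.0)]
labels = [2.0 + b[:, 0].mean() + b[:, 1].mean() for b in bags]
beta = mir_mean_baseline(bags, labels)    # recovers intercept 2 and slopes (1, 1)
```

This baseline is exact here because the labels were generated from bag means; on real remote-sensing data, where only some observations drive yield, instance-selection MIR methods outperform it.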