Variable selection for multiply-imputed data with application to dioxin exposure study.
Chen, Qixuan; Wang, Sijian
2013-09-20
Multiple imputation (MI) is a commonly used technique for handling missing data in large-scale medical and public health studies. However, variable selection on multiply-imputed data remains an important and longstanding statistical problem. If a variable selection method is applied to each imputed dataset separately, it may select different variables for different imputed datasets, which makes it difficult to interpret the final model or draw scientific conclusions. In this paper, we propose a novel multiple imputation-least absolute shrinkage and selection operator (MI-LASSO) variable selection method as an extension of the least absolute shrinkage and selection operator (LASSO) method to multiply-imputed data. The MI-LASSO method treats the estimated regression coefficients of the same variable across all imputed datasets as a group and applies the group LASSO penalty to yield a consistent variable selection across multiply-imputed datasets. We use a simulation study to demonstrate the advantage of the MI-LASSO method compared with the alternatives. We also apply the MI-LASSO method to the University of Michigan Dioxin Exposure Study to identify important circumstances and exposure factors that are associated with human serum dioxin concentration in Midland, Michigan.
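The group-penalty idea in this abstract can be illustrated with the group soft-thresholding operator that drives standard group-LASSO algorithms. The sketch below is hypothetical (it is not the authors' implementation, and the coefficient values are made up); it only shows why grouping a variable's coefficients across imputed datasets forces an all-or-nothing selection decision:

```python
import numpy as np

def group_soft_threshold(z, lam):
    """Group soft-thresholding operator used in group-LASSO updates:
    shrinks the whole coefficient group z (one variable's coefficients
    across all D imputed datasets) toward zero, and sets it exactly to
    zero when its l2 norm falls at or below lam."""
    norm = np.linalg.norm(z)
    if norm <= lam:
        return np.zeros_like(z)
    return (1.0 - lam / norm) * z

# Toy example: one variable's coefficients across D = 5 imputed datasets.
z_weak = np.array([0.05, -0.02, 0.04, 0.01, -0.03])   # weak, inconsistent signal
z_strong = np.array([0.8, 0.9, 0.85, 0.95, 0.88])     # strong, consistent signal

lam = 0.2
print(group_soft_threshold(z_weak, lam))    # whole group zeroed -> variable dropped in all datasets
print(group_soft_threshold(z_strong, lam))  # group shrunk but kept -> variable selected in all datasets
```

Because the entire group is zeroed or kept together, the selected variable set is by construction identical across the imputed datasets, which is the consistency property the abstract describes.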
Differential network analysis with multiply imputed lipidomic data.
Maiju Kujala
Full Text Available The importance of lipids for cell function and health has been widely recognized; e.g., disorders in the lipid composition of cells have been related to atherosclerosis-caused cardiovascular disease (CVD). Lipidomics analyses are characterized by a large, though not huge, number of mutually correlated measured variables, and their associations with outcomes are potentially of a complex nature. Differential network analysis provides a formal statistical method capable of inferential analysis to examine differences in the network structures of the lipids under two biological conditions. It also guides us to identify potential relationships requiring further biological investigation. We provide a recipe for conducting a permutation test on association scores resulting from partial least squares regression with multiply imputed lipidomic data from the LUdwigshafen RIsk and Cardiovascular Health (LURIC) study, paying particular attention to the left-censored missing values typical for a wide range of data sets in the life sciences. Left-censored missing values are low-level concentrations that are known to exist somewhere between zero and a lower limit of quantification. To make full use of the LURIC data with the missing values, we utilize state-of-the-art multiple imputation techniques and propose solutions to the challenges that incomplete data sets bring to differential network analysis. The customized network analysis helps us to understand the complexities of the underlying biological processes by identifying lipids and lipid classes that interact with each other, and by recognizing the most important differentially expressed lipids between two subgroups of coronary artery disease (CAD) patients: those who had a fatal CVD event and those who remained stable during two-year follow-up.
Assessing the Fit of Structural Equation Models With Multiply Imputed Data.
Enders, Craig K; Mansolf, Maxwell
2016-11-28
Multiple imputation has enjoyed widespread use in social science applications, yet the application of imputation-based inference to structural equation modeling has received virtually no attention in the literature. Thus, this study has 2 overarching goals: evaluate the application of Meng and Rubin's (1992) pooling procedure for the likelihood ratio statistic to the SEM test of model fit, and explore the possibility of using this test statistic to define imputation-based versions of common fit indices such as the TLI, CFI, and RMSEA. Computer simulation results suggested that, when applied to a correctly specified model, the pooled likelihood ratio statistic performed well as a global test of model fit and was closely calibrated to the corresponding full information maximum likelihood (FIML) test statistic. However, when applied to misspecified models with high rates of missingness (30%-40%), the imputation-based test statistic generally exhibited lower power than that of FIML. Using the pooled test statistic to construct imputation-based versions of the TLI, CFI, and RMSEA worked well and produced indices that were well calibrated with those of full information maximum likelihood estimation. This article gives Mplus and R code to implement the pooled test statistic, and it offers a number of recommendations for future research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
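Meng and Rubin's (1992) pooling of likelihood ratio statistics, as it is usually stated, can be sketched as follows. This is an illustrative implementation under that usual statement, not the authors' code; the two input arrays are assumed to come from the analyst's SEM software (per-imputation LR statistics, and the same statistics re-evaluated at the pooled parameter estimates):

```python
import numpy as np

def pooled_lrt(d_bar_stats, d_tilde_stats, k):
    """Meng & Rubin (1992) pooled likelihood ratio test (often called D3).

    d_bar_stats   : LR statistics from each imputed dataset, each evaluated
                    at that dataset's own parameter estimates (length m).
    d_tilde_stats : LR statistics re-evaluated at the pooled (averaged)
                    parameter estimates (length m).
    k             : degrees of freedom of the test.
    Returns the pooled statistic D3 (referred to an F(k, v) distribution)
    and r3, the estimated average relative increase in variance.
    """
    m = len(d_bar_stats)
    d_bar = np.mean(d_bar_stats)
    d_tilde = np.mean(d_tilde_stats)
    r3 = (m + 1) / (k * (m - 1)) * (d_bar - d_tilde)
    D3 = d_tilde / (k * (1.0 + r3))
    return D3, r3

# Hypothetical statistics from m = 3 imputations of a k = 5 df model test.
D3, r3 = pooled_lrt([12.0, 14.0, 13.0], [11.0, 12.0, 13.0], k=5)
print(round(D3, 3), round(r3, 3))  # 1.714 0.4
```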
Synthetic Multiple-Imputation Procedure for Multistage Complex Samples
Zhou Hanzhi
2016-03-01
Full Text Available Multiple imputation (MI) is commonly used when item-level missing data are present. However, MI requires that survey design information be built into the imputation models. For multistage stratified clustered designs, this requires dummy variables to represent strata as well as primary sampling units (PSUs) nested within each stratum in the imputation model. Such a modeling strategy is not only operationally burdensome but also inferentially inefficient when there are many strata in the sample design. Complexity only increases when sampling weights need to be modeled. This article develops a general-purpose analytic strategy for population inference from complex sample designs with item-level missingness. In a simulation study, the proposed procedures demonstrate efficient estimation and good coverage properties. We also consider an application to accommodate missing body mass index (BMI) data in the analysis of BMI percentiles using National Health and Nutrition Examination Survey (NHANES) III data. We argue that the proposed methods offer an easy-to-implement solution to problems that are not well handled by current MI techniques. Note that, while the proposed method borrows from the MI framework to develop its inferential methods, it is not designed as an alternative strategy to release multiply imputed datasets for complex sample design data, but rather as an analytic strategy in and of itself.
Multiple imputation using chained equations: Issues and guidance for practice.
White, Ian R; Royston, Patrick; Wood, Angela M
2011-02-20
Multiple imputation by chained equations is a flexible and practical approach to handling missing data. We describe the principles of the method and show how to impute categorical and quantitative variables, including skewed variables. We give guidance on how to specify the imputation model and how many imputations are needed. We describe the practical analysis of multiply imputed data, including model building and model checking. We stress the limitations of the method and discuss the possible pitfalls. We illustrate the ideas using a data set in mental health, giving Stata code fragments.
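The chained-equations cycle described above can be sketched in a few lines for numeric variables. This is a deliberately bare-bones illustration: it adds noise drawn from the residual spread but, unlike a fully proper imputation, does not redraw the regression parameters themselves, and real analyses should use a tested package such as Stata's mi impute chained or R's mice:

```python
import numpy as np

rng = np.random.default_rng(0)

def mice_numeric(X, n_iter=10):
    """Bare-bones chained-equations sketch for a numeric data matrix with
    NaNs. Each variable with missing values is regressed on all the other
    variables, and its missing entries are refilled from the fitted values
    plus residual-scale noise; the cycle repeats n_iter times."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)          # crude starting fill
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            sigma = (X[obs, j] - A[obs] @ beta).std()
            pred = A[miss[:, j]] @ beta
            X[miss[:, j], j] = pred + rng.normal(0, sigma, size=pred.shape)
    return X

# Toy data with holes in two correlated columns
X = rng.normal(size=(50, 3))
X[:, 1] = 0.6 * X[:, 0] + rng.normal(scale=0.5, size=50)
X[rng.random(50) < 0.2, 1] = np.nan
X[rng.random(50) < 0.1, 2] = np.nan
completed = mice_numeric(X)
print(np.isnan(completed).any())  # False: every hole filled
```

Running the whole cycle m times with different seeds would give the m completed datasets that multiple imputation requires.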
Comprehensive evaluation of imputation performance in African Americans.
Chanda, Pritam; Yuhki, Naoya; Li, Man; Bader, Joel S; Hartz, Alex; Boerwinkle, Eric; Kao, W H Linda; Arking, Dan E
2012-07-01
Imputation of genome-wide single-nucleotide polymorphism (SNP) arrays to a larger known reference panel of SNPs has become a standard and essential part of genome-wide association studies. However, little is known about the behavior of imputation in African Americans with respect to the different imputation algorithms, the reference population(s) and the reference SNP panels used. Genome-wide SNP data (Affymetrix 6.0) from 3207 African American samples in the Atherosclerosis Risk in Communities Study (ARIC) were used to systematically evaluate imputation quality and yield. Imputation was performed with the imputation algorithms MACH, IMPUTE and BEAGLE using several combinations of three reference panels of HapMap III (ASW, YRI and CEU) and 1000 Genomes Project (pilot 1 YRI June 2010 release, EUR and AFR August 2010 and June 2011 releases) panels with SNP data on chromosomes 18, 20 and 22. About 10% of the directly genotyped SNPs from each chromosome were masked, and SNPs common between the reference panels were used for evaluating imputation quality using two statistical metrics: concordance accuracy and Cohen's kappa (κ) coefficient. The dependencies of these metrics on the minor allele frequencies (MAF) and specific genotype categories (minor allele homozygotes, heterozygotes and major allele homozygotes) were thoroughly investigated to determine the best panel and method for imputation in African Americans. In addition, the power to detect imputed SNPs associated with simulated phenotypes was studied using the mean genotype of each masked SNP in the imputed data. Our results indicate that the genotype concordances after stratification into each genotype category and Cohen's κ coefficient are considerably better equipped to differentiate imputation performance than the traditionally used total concordance statistic, and both statistics improved with increasing MAF irrespective of the imputation method. We also find that both MACH and IMPUTE
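The two evaluation metrics are straightforward to compute. Below is a minimal sketch for genotypes coded 0/1/2 (copies of the minor allele); the deliberately naive example shows why kappa, which corrects for chance agreement, is more informative than raw concordance for rare variants:

```python
import numpy as np

def concordance_and_kappa(true_g, imputed_g):
    """Overall concordance and Cohen's kappa for genotypes coded 0/1/2.
    Kappa adjusts the raw agreement for the agreement expected by chance,
    which matters for rare variants where always guessing the major-allele
    homozygote already scores a high concordance."""
    true_g = np.asarray(true_g)
    imputed_g = np.asarray(imputed_g)
    po = np.mean(true_g == imputed_g)  # observed agreement
    pe = sum(np.mean(true_g == c) * np.mean(imputed_g == c) for c in (0, 1, 2))
    return po, (po - pe) / (1.0 - pe)

# A rare variant: imputing everyone as major-allele homozygote (0) looks
# 90% concordant, yet has zero skill beyond chance.
true_g = [0] * 90 + [1] * 8 + [2] * 2
imputed_g = [0] * 100
po, kappa = concordance_and_kappa(true_g, imputed_g)
print(po, kappa)  # 0.9 0.0
```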
Schlatholter, T; Hoekstra, R; Morgenstern, R
1997-01-01
We investigate fragmentation of CO molecules by collisions of He2+ ions at energies between 2 and 11 keV/amu by means of a reflectron time-of-flight (TOF) spectrometer. The kinetic-energy-release (KER) in the center of mass system of the molecule can be determined from the flight times of these
Molgenis-impute: imputation pipeline in a box.
Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A
2015-08-19
Genotype imputation is an important procedure in current genomic analyses such as genome-wide association studies, meta-analyses and fine mapping. Although high-quality tools are available that perform the steps of this process, considerable effort and expertise are required to set up and run a best-practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the setup and running of all the steps of the imputation process. These steps include genome build liftover (liftovering), genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute in different locations and have imputed over 30,000 samples so far using the 1000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command-line interface, without sacrificing flexibility to adapt or limiting the options of the underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS compute workflow framework to enable customization with additional
Public Undertakings and Imputability
Ølykke, Grith Skovgaard
2013-01-01
In this article, the issue of imputability to the State of public undertakings' decision-making is analysed and discussed in the context of the DSBFirst case. DSBFirst is owned by the independent public undertaking DSB and the private undertaking FirstGroup plc and won the contracts in the 2008 … in Article 107(1) TFEU is analysed. It is concluded that where the public undertaking transgresses the control system put in place by the State, the conditions for imputability are not fulfilled, and it is argued that in the current state of the law, there is no conditional link between the level of control … that this is not the case. Lastly, it is discussed whether other legal instruments, namely competition law, public procurement law, or the Transparency Directive, regulate public undertakings' market behaviour. It is found that those rules are not sufficient to mend the gap created by the imputability requirement. Legal…
2013-01-01
A few weeks ago, I had a vague notion of what TED was and how it worked, but now I’m a confirmed fan. It was my privilege to host CERN’s first TEDx event last Friday, and I can honestly say that I can’t remember a time when I was exposed to so much brilliance in such a short time. TEDxCERN was designed to give a platform to science. That’s why we called it Multiplying Dimensions – a nod towards the work we do here, while pointing to the broader importance of science in society. We had talks ranging from the most subtle pondering on the nature of consciousness to an eighteen-year-old researcher urging us to be patient, and to learn from our mistakes. We had musical interludes that included encounters between the choirs of local schools and will.i.am, between an Israeli pianist and an Iranian percussionist, and between Grand Opera and high humour. And although I opened the event by announcing it as a day off from physics, we had a quite brill...
Multiple imputation for threshold-crossing data with interval censoring.
Dorey, F J; Little, R J; Schenker, N
1993-09-15
Medical statistics often involve measurements of the time when a variable crosses a threshold value. The time to threshold crossing may be the outcome variable in a survival analysis, or a time-dependent covariate in the analysis of a subsequent event. This paper presents new methods for analysing threshold-crossing data that are interval censored in that the time of threshold crossing is known only within a specified interval. Such data typically arise in event-history studies when the threshold is crossed at some time between data-collection points, such as visits to a clinic. We propose methods based on multiple imputation of the threshold-crossing time with use of models that take into account values recorded at the times of visits. We apply the methods to two real data sets, one involving hip replacements and the other on the prostate specific antigen (PSA) assay for prostate cancer. In addition, we compare our methods with the common practice of imputing the threshold-crossing time as the right endpoint of the interval. The two examples require different imputation models, but both lead to simple analyses of the multiply imputed data that automatically take into account variability due to imputation.
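The contrast with right-endpoint imputation can be illustrated crudely. In the sketch below the unknown crossing time is simply drawn uniformly within its censoring interval; this is only a stand-in for the paper's approach, whose imputation models additionally condition on the values recorded at clinic visits:

```python
import numpy as np

rng = np.random.default_rng(1)

def impute_crossing_times(left, right, m=20):
    """Draw the unknown threshold-crossing time uniformly within its
    censoring interval [left, right], m times per subject, giving m
    imputed datasets instead of one deterministic right-endpoint fill."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    return rng.uniform(left, right, size=(m, len(left)))

# Three subjects whose crossing happened between two visits (years).
left = np.array([2.0, 4.0, 1.0])
right = np.array([3.0, 6.0, 5.0])
draws = impute_crossing_times(left, right)
print(draws.mean(axis=0))  # near the interval midpoints
print(right)               # the right-endpoint convention ignores within-interval uncertainty
```

Analysing each of the m completed datasets and combining with Rubin's rules then propagates the imputation uncertainty that right-endpoint imputation discards.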
Imputation of missing genotypes: an empirical evaluation of IMPUTE
Steinberg Martin H
2008-12-01
Full Text Available Abstract Background Imputation of missing genotypes is becoming a very popular solution for synchronizing genotype data collected with different microarray platforms, but the effects of ethnic background, subject ascertainment, and amount of missing data on the accuracy of imputation are not well understood. Results We evaluated the accuracy of the program IMPUTE in generating the genotype data of partially or fully untyped single nucleotide polymorphisms (SNPs). The program uses a model-based approach to imputation that reconstructs the genotype distribution given a set of reference haplotypes and the observed data, and uses this distribution to compute the marginal probability of each missing genotype for each individual subject, which is then used to impute the missing data. We assembled genome-wide data from five different studies and three different ethnic groups comprising Caucasians, African Americans and Asians. We randomly removed genotype data and then compared the observed genotypes with those generated by IMPUTE. Our analysis shows 97% median accuracy in Caucasian subjects when less than 10% of the SNPs are untyped and missing genotypes are accepted regardless of their posterior probability. The median accuracy increases to 99% when we require a 0.95 minimum posterior probability for an imputed genotype to be acceptable. The accuracy decreases to 86% or 94% when subjects are African Americans or Asians, respectively. We propose a strategy to improve the accuracy by leveraging the level of admixture in African Americans. Conclusion Our analysis suggests that IMPUTE is very accurate in samples of Caucasian origin, slightly less accurate in samples of Asian background, but substantially less accurate in samples of admixed background such as African Americans. Sample size and ascertainment do not seem to affect the accuracy of imputation.
Performance of genotype imputations using data from the 1000 Genomes Project.
Sung, Yun Ju; Wang, Lihua; Rankinen, Tuomo; Bouchard, Claude; Rao, D C
2012-01-01
Genotype imputations based on 1000 Genomes (1KG) Project data have the advantage of imputing many more SNPs than imputations based on HapMap data. They also provide an opportunity to discover associations with relatively rare variants. Recent investigations are increasingly using 1KG data for genotype imputations, but only limited evaluations of the performance of this approach are available. In this paper, we empirically evaluated imputation performance using 1KG data by comparing imputation results to those using the HapMap Phase II data that have been widely used. We used three reference panels: the CEU panel consisting of 120 haplotypes from HapMap II and 1KG data (June 2010 release) and the EUR panel consisting of 566 haplotypes, also from 1KG data (August 2010 release). We used Illumina 324,607 autosomal SNPs genotyped in 501 individuals of European ancestry. Our most important finding was that both 1KG reference panels provided much higher imputation yield than the HapMap II panel: there were more than twice as many successfully imputed SNPs as with the HapMap II panel (6.7 million vs. 2.5 million). Our second most important finding was that accuracy using both 1KG panels was high and almost identical to accuracy using the HapMap II panel. Furthermore, after removing SNPs with MACH Rsq … As the 1000 Genomes Project is still underway, we expect that later versions will provide even better imputation performance.
Restrictive Imputation of Incomplete Survey Data
Vink, G.
2015-01-01
This dissertation focuses on finding plausible imputations when there is some restriction posed on the imputation model. In these restrictive situations, current imputation methodology does not lead to satisfactory imputations. The restrictions, and the resulting missing data problems are real-life
A two-step semiparametric method to accommodate sampling weights in multiple imputation.
Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E
2016-03-01
Multiple imputation (MI) is a well-established method to handle item-nonresponse in sample surveys. Survey data obtained from complex sampling designs often involve features that include unequal probability of selection. MI requires imputation to be congenial, that is, for the imputations to come from a Bayesian predictive distribution and for the observed- and complete-data estimator to equal the posterior mean given the observed or complete data, and similarly for the observed- and complete-data variance estimator to equal the posterior variance given the observed or complete data; more colloquially, the analyst and imputer make similar modeling assumptions. Yet multiply imputed data sets from complex sample designs with unequal sampling weights are typically imputed under simple random sampling assumptions and then analyzed using methods that account for the sampling weights. This is a setting in which the analyst assumes more than the imputer, which can lead to biased estimates and anti-conservative inference. Less commonly used alternatives, such as including case weights as predictors in the imputation model, typically require interaction terms for more complex estimators such as regression coefficients, and can be vulnerable to model misspecification and difficult to implement. We develop a simple two-step MI framework that accounts for sampling weights using a weighted finite population Bayesian bootstrap method to validly impute the whole population (including item nonresponse) from the observed data. In the second step, having generated posterior predictive distributions of the entire population, we use standard IID imputation to handle the item nonresponse. Simulation results show that the proposed method has good frequentist properties and is robust to model misspecification compared to alternative approaches. We apply the proposed method to accommodate missing data in the Behavioral Risk Factor Surveillance System when estimating means and parameters of
Holder Roger L
2009-07-01
Full Text Available Abstract Background Multiple imputation (MI) provides an effective approach to handle missing covariate data within prognostic modelling studies, as it can properly account for the missing data uncertainty. The multiply imputed datasets are each analysed using standard prognostic modelling techniques to obtain the estimates of interest. The estimates from each imputed dataset are then combined into one overall estimate and variance, incorporating both the within- and between-imputation variability. Rubin's rules for combining these multiply imputed estimates are based on asymptotic theory. The resulting combined estimates may be more accurate if the posterior distribution of the population parameter of interest is better approximated by the normal distribution. However, the normality assumption may not be appropriate for all the parameters of interest when analysing prognostic modelling studies, such as predicted survival probabilities and model performance measures. Methods Guidelines for combining the estimates of interest when analysing prognostic modelling studies are provided. A literature review is performed to identify current practice for combining such estimates in prognostic modelling studies. Results Methods for combining all reported estimates after MI were not well reported in the current literature. Rubin's rules without applying any transformations were the standard approach used, when any method was stated. Conclusion The proposed simple guidelines for combining estimates after MI may lead to a wider and more appropriate use of MI in future prognostic modelling studies.
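Rubin's rules on the untransformed scale, the default practice this abstract identifies, can be sketched as follows (the example numbers are invented; whether a transformation such as log(-log) for survival probabilities should be applied before pooling is exactly the question such guidelines address):

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Combine m per-imputation point estimates and their within-imputation
    variances into one pooled estimate, total variance, and approximate
    degrees of freedom (Rubin's rules)."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    q_bar = q.mean()                       # pooled point estimate
    w = u.mean()                           # within-imputation variance
    b = q.var(ddof=1)                      # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b            # total variance
    df = (m - 1) * (1.0 + w / ((1.0 + 1.0 / m) * b)) ** 2
    return q_bar, t, df

# Hypothetical estimates of one coefficient from m = 5 imputed datasets.
est = [0.52, 0.49, 0.55, 0.50, 0.54]
var = [0.010, 0.011, 0.009, 0.010, 0.012]
q_bar, t, df = rubins_rules(est, var)
print(round(q_bar, 4), round(t, 5))  # 0.52 0.01118
```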
Multiple imputation and its application
Carpenter, James
2013-01-01
A practical guide to analysing partially observed data. Collecting, analysing and drawing inferences from data is central to research in the medical and social sciences. Unfortunately, it is rarely possible to collect all the intended data. The literature on inference from the resulting incomplete data is now huge, and continues to grow both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enable researchers to apply these methods. This book focuses on a particular statistical method for analysing and drawing inferences from incomplete data, called Multiple Imputation (MI). MI is attractive because it is both practical and widely applicable. The authors aim is to clarify the issues raised by missing data, describing the rationale for MI, the relationship between the various imputation models and associated algorithms and its application to increasingly complex data structures. Multiple Imputation and its Application: Discusses the issues ...
Assessment of genotype imputation performance using 1000 Genomes in African American studies.
Dana B Hancock
Full Text Available Genotype imputation, used in genome-wide association studies to expand coverage of single nucleotide polymorphisms (SNPs), has performed poorly in African Americans compared to less admixed populations. Overall, imputation has typically relied on HapMap reference haplotype panels from Africans (YRI), European Americans (CEU), and Asians (CHB/JPT). The 1000 Genomes project offers a wider range of reference populations, such as African Americans (ASW), but their imputation performance has had limited evaluation. Using 595 African Americans genotyped on Illumina's HumanHap550v3 BeadChip, we compared imputation results from four software programs (IMPUTE2, BEAGLE, MaCH, and MaCH-Admix) and three reference panels consisting of different combinations of 1000 Genomes populations (February 2012 release): (1) 3 specifically selected populations (YRI, CEU, and ASW); (2) 8 populations of diverse African (AFR) or European (EUR) descent; and (3) all 14 available populations (ALL). Based on chromosome 22, we calculated three performance metrics: (1) concordance (percentage of masked genotyped SNPs with imputed and true genotype agreement); (2) imputation quality score (IQS; concordance adjusted for chance agreement), which is particularly informative for low minor allele frequency [MAF] SNPs; and (3) average r2hat (estimated correlation between the imputed and true genotypes) for all imputed SNPs. Across the reference panels, IMPUTE2 and MaCH had the highest concordance (91%-93%), but IMPUTE2 had the highest IQS (81%-83%) and average r2hat (0.68 using YRI+ASW+CEU, 0.62 using AFR+EUR, and 0.55 using ALL). Imputation quality for most programs was reduced by the addition of more distantly related reference populations, due entirely to the introduction of low frequency SNPs (MAF ≤ 2%) that are monomorphic in the more closely related panels. While imputation was optimized by using IMPUTE2 with reference to the ALL panel (average r2hat = 0.86 for SNPs with MAF > 2%), use of the ALL
Predictive mean matching imputation of semicontinuous variables
Vink, G.; Frank, L.E.; Pannekoek, J.; Buuren, S. van
2014-01-01
Multiple imputation methods properly account for the uncertainty of missing data. One such method for creating multiple imputations is predictive mean matching (PMM), a general-purpose method. Little is known about the performance of PMM in imputing non-normal semicontinuous data (skewed data w
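A minimal single-imputation sketch of predictive mean matching: regress the outcome on a covariate among observed cases, predict for everyone, and fill each missing value with the observed value of a randomly chosen donor among the k nearest predicted means. A proper multiple-imputation version (as in R's mice) would also redraw the regression coefficients for each imputation; the variable names here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)

def pmm_impute(x, y, k=5):
    """Predictive mean matching sketch. Because imputed values are always
    real donor values from the observed data, PMM never produces impossible
    values and copes gracefully with skewed or semicontinuous outcomes."""
    miss = np.isnan(y)
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A[~miss], y[~miss], rcond=None)
    yhat = A @ beta                       # predicted means for everyone
    y_out = y.copy()
    obs_idx = np.flatnonzero(~miss)
    for i in np.flatnonzero(miss):
        dist = np.abs(yhat[obs_idx] - yhat[i])
        donors = obs_idx[np.argsort(dist)[:k]]   # k closest predicted means
        y_out[i] = y[rng.choice(donors)]         # donate an observed value
    return y_out

# Skewed (log-normal) outcome with about 25% missingness.
x = rng.normal(size=200)
y = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.3, size=200))
y[rng.random(200) < 0.25] = np.nan
y_done = pmm_impute(x, y)
print(np.isnan(y_done).any())  # False
```

Note that every imputed value is strictly positive here, because each one is a real observed value of the log-normal outcome; a normal-theory regression imputation could produce negative values.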
Effect of Genome-Wide Genotyping and Reference Panels on Rare Variants Imputation
Hou-Feng Zheng; Martin Ladouceur; Celia M.T. Greenwood; J.Brent Richards
2012-01-01
Common variants explain little of the variance of most common diseases, prompting large-scale sequencing studies to understand the contribution of rare variants to these diseases. Imputation of rare variants from genome-wide genotypic arrays offers a cost-efficient strategy to achieve the sample sizes required for adequate statistical power. To estimate the performance of imputation of rare variants, we imputed 153 individuals, each of whom was genotyped on 3 different genotype arrays including 317k, 610k and 1 million single nucleotide polymorphisms (SNPs), to two different reference panels: HapMap2 and the 1000 Genomes pilot March 2010 release (1KGpilot), using IMPUTE version 2. We found that more than 94% and 84% of all SNPs yield acceptable accuracy (info > 0.4) in HapMap2- and 1KGpilot-based imputation, respectively. For rare variants (minor allele frequency (MAF) ≤ 5%), the proportion of well-imputed SNPs increased as the MAF increased from 0.3% to 5% across all 3 genome-wide association study (GWAS) datasets. The proportion of well-imputed SNPs was 69%, 60% and 49% for SNPs with a MAF from 0.3% to 5% for 1M, 610k and 317k, respectively. None of the very rare variants (MAF ≤ 0.3%) were well imputed. We conclude that the imputation accuracy of rare variants increases with higher density of genome-wide genotyping arrays when the size of the reference panel is small. Variants with lower MAF are more difficult to impute. These findings have important implications for the design and replication of large-scale sequencing studies.
Machelle D. Wilson
2014-01-01
Full Text Available The imputation of missing data is often a crucial step in the analysis of survey data. This study reviews typical problems with missing data and discusses a method for the imputation of missing survey data with a large number of categorical variables which do not have a monotone missing pattern. We develop a method for constructing a monotone missing pattern that allows for imputation of categorical data in data sets with a large number of variables using a model-based MCMC approach. We report the results of imputing the missing data from a case study, using educational, sociopsychological, and socioeconomic data from the National Latino and Asian American Study (NLAAS). We then report the results of a substantive logistic regression analysis on the multiply imputed data, predicting socioeconomic success from several educational, sociopsychological, and familial variables. We compare the results of conducting inference using a single imputed data set to those using a combined test over several imputations. Findings indicate that, for all variables in the model, all of the single tests were consistent with the combined test.
Pierce, Paul E.
1986-01-01
A hardware processor is disclosed which in the described embodiment is a memory mapped multiplier processor that can operate in parallel with a 16 bit microcomputer. The multiplier processor decodes the address bus to receive specific instructions so that in one access it can write and automatically perform single or double precision multiplication involving a number written to it with or without addition or subtraction with a previously stored number. It can also, on a single read command automatically round and scale a previously stored number. The multiplier processor includes two concatenated 16 bit multiplier registers, two 16 bit concatenated 16 bit multipliers, and four 16 bit product registers connected to an internal 16 bit data bus. A high level address decoder determines when the multiplier processor is being addressed and first and second low level address decoders generate control signals. In addition, certain low order address lines are used to carry uncoded control signals. First and second control circuits coupled to the decoders generate further control signals and generate a plurality of clocking pulse trains in response to the decoded and address control signals.
Missing Data Imputation for Supervised Learning
Poulos, Jason; Valle, Rafael
2016-01-01
This paper compares methods for imputing missing categorical data for supervised learning tasks. The ability of researchers to accurately fit a model and yield unbiased estimates may be compromised by missing data, which are prevalent in survey-based social science research. We experiment on two machine learning benchmark datasets with missing categorical data, comparing classifiers trained on non-imputed (i.e., one-hot encoded) or imputed data with different degrees of missing-data perturbat...
16 CFR 1115.11 - Imputed knowledge.
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Imputed knowledge. 1115.11 Section 1115.11... PRODUCT HAZARD REPORTS General Interpretation § 1115.11 Imputed knowledge. (a) In evaluating whether or... care to ascertain the truth of complaints or other representations. This includes the knowledge a...
Dual imputation model for incomplete longitudinal data
Jolani, S.; Frank, L.E.; Buuren, S. van
2014-01-01
Missing values are a practical issue in the analysis of longitudinal data. Multiple imputation (MI) is a well-known likelihood-based method that has optimal properties in terms of efficiency and consistency if the imputation model is correctly specified. Doubly robust (DR) weighting-based methods pro
Buckley, T. N.
2008-12-01
The application of optimisation theory to vegetation processes has rarely extended beyond the context of diurnal to intra-annual gas exchange of individual leaves and crowns. One reason is that the Lagrange multipliers in the leaf-scale solutions, which are marginal products for allocatable photosynthetic resource inputs (water and nitrogen), are mysterious in origin, and their numerical values are difficult to measure -- let alone to predict or interpret in concrete physiological or ecological terms. These difficulties disappear, however, when the optimisation paradigm itself is extended to encompass carbon allocation and growth at the lifespan scale. The trajectories of leaf (and canopy) level marginal products are then implicit in the trajectory of plant and stand structure predicted by optimal carbon allocation. Furthermore, because the input and product are the same resource -- carbon -- in the whole plant optimisation, the product in one time step defines the input constraint, and hence implicitly the marginal product for carbon, in the next time step. This effectively converts the problem from a constrained optimisation of a definite integral, in which the multipliers are undetermined, to an unconstrained maximisation of a state, in which the multipliers are all implicit. This talk will explore how the marginal products for photosynthetic inputs as well as the marginal product for carbon -- i.e., the 'final multiplier,' omega -- are predicted to vary over time and in relation to environmental change during tree growth.
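The marginal-product interpretation of the Lagrange multipliers discussed above can be stated compactly in the standard leaf-scale formulation (a sketch using conventional notation from stomatal optimisation theory; the symbols are illustrative, not taken from the talk):

```latex
% Constrained form: maximise carbon gain A over transpiration E(t),
% subject to a fixed water budget W over the interval [0, T]:
\max_{E(t)} \int_0^T A\big(E(t)\big)\,dt
\quad \text{subject to} \quad \int_0^T E(t)\,dt = W .
% At the optimum, the Lagrange multiplier is the marginal product of water,
% constant along the optimal trajectory:
\frac{\partial A}{\partial E} = \lambda
\quad \text{(carbon gained per unit water spent).}
```

The abstract's point is that when the optimisation is extended to whole-plant carbon allocation over the lifespan, such multipliers need not be imposed as unknown constants; they emerge implicitly from the state trajectory.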
Nelson, Jane Bray
2012-01-01
As a new physics teacher, I was explaining how to find the weight of an object sitting on a table near the surface of the Earth. It bothered me when a student asked, "The object is not accelerating so why do you multiply the mass of the object by the acceleration due to gravity?" I answered something like, "That's true, but if the table were not…
Comparison of HLA allelic imputation programs
Shaffer, Christian M.; Bastarache, Lisa; Gaudieri, Silvana; Glazer, Andrew M.; Steiner, Heidi E.; Mosley, Jonathan D.; Mallal, Simon; Denny, Joshua C.; Phillips, Elizabeth J.; Roden, Dan M.
2017-01-01
Imputation of human leukocyte antigen (HLA) alleles from SNP-level data is attractive due to the importance of HLA alleles in human disease, the widespread availability of genome-wide association study (GWAS) data, and the expertise required for HLA sequencing. However, comprehensive evaluations of HLA imputation programs are limited. We compared HLA imputation results of HIBAG, SNP2HLA, and HLA*IMP:02 to sequenced HLA alleles in 3,265 samples from BioVU, a de-identified electronic health record database coupled to a DNA biorepository. We performed four-digit HLA sequencing for HLA-A, -B, -C, -DRB1, -DPB1, and -DQB1 using long-read 454 FLX sequencing. All samples were genotyped using both the Illumina HumanExome BeadChip platform and a GWAS platform. Call rates and concordance rates were compared by platform, allele frequency, and race/ethnicity. Overall concordance rates were similar between programs in European Americans (EA) (0.975 [SNP2HLA]; 0.939 [HLA*IMP:02]; 0.976 [HIBAG]). SNP2HLA provided a significant advantage in terms of call rate and the number of alleles imputed. Concordance rates were lower overall for African Americans (AAs). These observations were consistent when accuracy was compared across HLA loci. All imputation programs performed similarly for low-frequency HLA alleles. Higher concordance rates were observed when HLA alleles were imputed from GWAS platforms versus the HumanExome BeadChip, suggesting that high genomic coverage is preferred as input for HLA allelic imputation. These findings provide guidance on the best use of HLA imputation methods and elucidate their limitations. PMID:28207879
Multi-population classical HLA type imputation.
Alexander Dilthey
Full Text Available Statistical imputation of classical HLA alleles in case-control studies has become established as a valuable tool for identifying and fine-mapping signals of disease association in the MHC. Imputation into diverse populations has, however, remained challenging, mainly because of the additional haplotypic heterogeneity introduced by combining reference panels of different sources. We present an HLA type imputation model, HLA*IMP:02, designed to operate on a multi-population reference panel. HLA*IMP:02 is based on a graphical representation of haplotype structure. We present a probabilistic algorithm to build such models for the HLA region, accommodating genotyping error, haplotypic heterogeneity and the need for maximum accuracy at the HLA loci, generalizing the work of Browning and Browning (2007) and Ron et al. (1998). HLA*IMP:02 achieves an average 4-digit imputation accuracy on diverse European panels of 97% (call rate 97%). On non-European samples, 2-digit performance is over 90% for most loci and ethnicities where data are available. HLA*IMP:02 supports imputation of HLA-DPB1 and HLA-DRB3-5, is highly tolerant of missing data in the imputation panel and works on standard genotype data from popular genotyping chips. It is publicly available in source code and as a user-friendly web service framework.
Jones, Rachael M; Stayner, Leslie T; Demirtas, Hakan
2014-10-01
Drinking water may contain pollutants that harm human health. Pollutant monitoring may occur quarterly, annually, or less frequently, depending upon the pollutant, its concentration, and the community water system. However, birth and other health outcomes are associated with narrow time-windows of exposure, and infrequent monitoring impedes linkage between water quality and health outcomes for epidemiological analyses. We evaluated the performance of multiple imputation to fill in water quality values between measurements in community water systems (CWSs). The multiple imputation method was implemented in a simulated setting using data from the Atrazine Monitoring Program (AMP; 2006-2009, in five Midwestern states). Values were deleted from the AMP data to leave one measurement per month. Four patterns reflecting drinking water monitoring regulations were used to delete months of data in each CWS: three patterns were missing at random and one pattern was missing not at random. Synthetic health outcome data were created using a linear and a Poisson exposure-response relationship with five levels of hypothesized association, respectively. The multiple imputation method was evaluated by comparing the exposure-response relationships estimated from the multiply imputed data with the hypothesized association. The four patterns deleted 65-92% of the months of atrazine observations in the AMP data. Even with these high rates of missing information, our procedure was able to recover most of the missing information when the synthetic health outcome was included, for missing at random patterns and for missing not at random patterns with low-to-moderate exposure-response relationships. Multiple imputation appears to be an effective method for filling in water quality values between measurements. Copyright © 2014 Elsevier Inc. All rights reserved.
McElwee Joshua
2009-06-01
Full Text Available Abstract Background Although high-throughput genotyping arrays have made whole-genome association studies (WGAS) feasible, only a small proportion of SNPs in the human genome are actually surveyed in such studies. In addition, various SNP arrays assay different sets of SNPs, which leads to challenges in comparing results and merging data for meta-analyses. Genome-wide imputation of untyped markers allows us to address these issues in a direct fashion. Methods 384 Caucasian American liver donors were genotyped using Illumina 650Y (Ilmn650Y) arrays, from which we also derived genotypes from the Ilmn317K array. On these data, we compared two imputation methods: MACH and BEAGLE. We imputed 2.5 million HapMap Release22 SNPs, and conducted GWAS on ~40,000 liver mRNA expression traits (eQTL analysis). In addition, 200 Caucasian American and 200 African American subjects were genotyped using the Affymetrix 500 K array plus a custom 164 K fill-in chip. We then imputed the HapMap SNPs and quantified the accuracy by randomly masking observed SNPs. Results MACH and BEAGLE perform similarly with respect to imputation accuracy. The Ilmn650Y results in excellent imputation performance, and it outperforms the Affx500K or Ilmn317K sets. For Caucasian Americans, 90% of the HapMap SNPs were imputed at 98% accuracy. As expected, imputation of poorly tagged SNPs (untyped SNPs in weak LD with typed markers) was not as successful. It was more challenging to impute genotypes in the African American population, given (1) shorter LD blocks and (2) admixture with Caucasian populations in this population. To address issue (2), we pooled HapMap CEU and YRI data as an imputation reference set, which greatly improved overall performance. The approximate 40,000 phenotypes scored in these populations provide a path to determine empirically how the power to detect associations is affected by the imputation procedures. That is, at a fixed false discovery rate, the number of cis...
Multiple imputation: dealing with missing data.
de Goeij, Moniek C M; van Diepen, Merel; Jager, Kitty J; Tripepi, Giovanni; Zoccali, Carmine; Dekker, Friedo W
2013-10-01
In many fields, including the field of nephrology, missing data are unfortunately an unavoidable problem in clinical/epidemiological research. The most common methods for dealing with missing data are complete case analysis (excluding patients with missing data), mean substitution (replacing missing values of a variable with the average of known values for that variable), and last observation carried forward. However, these methods have severe drawbacks, potentially resulting in biased estimates and/or standard errors. In recent years, a new method has arisen for dealing with missing data, called multiple imputation. This method predicts missing values based on other data present in the same patient. This procedure is repeated several times, resulting in multiple imputed datasets. Thereafter, estimates and standard errors are calculated in each imputed dataset and pooled into one overall estimate and standard error. The main advantage of this method is that missing-data uncertainty is taken into account. Another advantage is that multiple imputation gives unbiased results when data are missing at random, which is the most common type of missing data in clinical practice, whereas conventional methods do not. However, the method of multiple imputation has scarcely been used in the medical literature. We, therefore, encourage authors to do so in the future when possible.
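The pooling step described above follows Rubin's rules: average the per-imputation estimates, and combine within- and between-imputation variance. A minimal sketch (function name is illustrative):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool per-imputation estimates and their squared standard errors
    across m imputed datasets using Rubin's rules."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    qbar = estimates.mean()            # pooled point estimate
    w = variances.mean()               # within-imputation variance
    b = estimates.var(ddof=1)          # between-imputation variance
    t = w + (1 + 1 / m) * b            # total variance
    return qbar, np.sqrt(t)            # pooled estimate and standard error
```

The `(1 + 1/m)` factor is the finite-m correction; it is what makes the pooled standard error reflect missing-data uncertainty rather than only sampling variability.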
Estimating the accuracy of geographical imputation
Boscoe Francis P
2008-01-01
Full Text Available Abstract Background To reduce the number of non-geocoded cases researchers and organizations sometimes include cases geocoded to postal code centroids along with cases geocoded with the greater precision of a full street address. Some analysts then use the postal code to assign information to the cases from finer-level geographies such as a census tract. Assignment is commonly completed using either a postal centroid or by a geographical imputation method which assigns a location by using both the demographic characteristics of the case and the population characteristics of the postal delivery area. To date no systematic evaluation of geographical imputation methods ("geo-imputation") has been completed. The objective of this study was to determine the accuracy of census tract assignment using geo-imputation. Methods Using a large dataset of breast, prostate and colorectal cancer cases reported to the New Jersey Cancer Registry, we determined how often cases were assigned to the correct census tract using alternate strategies of demographic based geo-imputation, and using assignments obtained from postal code centroids. Assignment accuracy was measured by comparing the tract assigned with the tract originally identified from the full street address. Results Assigning cases to census tracts using the race/ethnicity population distribution within a postal code resulted in more correctly assigned cases than when using postal code centroids. The addition of age characteristics increased the match rates even further. Match rates were highly dependent on both the geographic distribution of race/ethnicity groups and population density. Conclusion Geo-imputation appears to offer some advantages and no serious drawbacks as compared with the alternative of assigning cases to census tracts based on postal code centroids. For a specific analysis, researchers will still need to consider the potential impact of geocoding quality on their results and evaluate...
A Radix-10 Combinational Multiplier
Lang, Tomas; Nannarelli, Alberto
2006-01-01
... reduces the number of partial product precomputations and uses counters to eliminate the need of the decimal equivalent of a 4:2 adder. The results of the implementation show that the combinational decimal multiplier offers a good compromise between latency and area when compared to other decimal multiply...
Missing value imputation for epistatic MAPs
Ryan, Colm
2010-04-20
Abstract Background Epistatic miniarray profiling (E-MAPs) is a high-throughput approach capable of quantifying aggravating or alleviating genetic interactions between gene pairs. The datasets resulting from E-MAP experiments typically take the form of a symmetric pairwise matrix of interaction scores. These datasets have a significant number of missing values - up to 35% - that can reduce the effectiveness of some data analysis techniques and prevent the use of others. An effective method for imputing interactions would therefore increase the types of possible analysis, as well as increase the potential to identify novel functional interactions between gene pairs. Several methods have been developed to handle missing values in microarray data, but it is unclear how applicable these methods are to E-MAP data because of their pairwise nature and the significantly larger number of missing values. Here we evaluate four alternative imputation strategies, three local (Nearest neighbor-based) and one global (PCA-based), that have been modified to work with symmetric pairwise data. Results We identify different categories for the missing data based on their underlying cause, and show that values from the largest category can be imputed effectively. We compare local and global imputation approaches across a variety of distinct E-MAP datasets, showing that both are competitive and preferable to filling in with zeros. In addition we show that these methods are effective in an E-MAP from a different species, suggesting that pairwise imputation techniques will be increasingly useful as analogous epistasis mapping techniques are developed in different species. We show that strongly alleviating interactions are significantly more difficult to predict than strongly aggravating interactions. Finally we show that imputed interactions, generated using nearest neighbor methods, are enriched for annotations in the same manner as measured interactions. Therefore our method potentially
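A minimal sketch of the local (nearest-neighbour) strategy for a symmetric pairwise interaction matrix: fill each missing entry from the most similar rows, then restore symmetry. This is an illustration of the general idea under simplified assumptions, not the paper's exact weighting or neighbour-selection scheme:

```python
import numpy as np

def knn_impute_symmetric(M, k=3):
    """Fill missing entries (NaN) of a symmetric interaction matrix with
    the mean of the k most similar rows' values in the same column."""
    M = np.array(M, float)
    n = M.shape[0]
    filled = M.copy()
    for i in range(n):
        for j in range(n):
            if np.isnan(M[i, j]):
                candidates = []
                for r in range(n):
                    if r == i or np.isnan(M[r, j]):
                        continue
                    # distance over entries observed in both rows
                    mask = ~np.isnan(M[i]) & ~np.isnan(M[r])
                    if mask.sum() == 0:
                        continue
                    d = np.mean((M[i, mask] - M[r, mask]) ** 2)
                    candidates.append((d, M[r, j]))
                candidates.sort(key=lambda t: t[0])
                if candidates:
                    filled[i, j] = np.mean([v for _, v in candidates[:k]])
    # the two fills of a symmetric pair may disagree; average them
    return (filled + filled.T) / 2
```

The final symmetrization step reflects the pairwise nature of E-MAP data, where entry (i, j) and entry (j, i) measure the same interaction.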
Calibrated hot deck imputation for numerical data under edit restrictions
de Waal, A.G.; Coutinho, Wieger; Shlomo, Natalie
2017-01-01
We develop a non-parametric imputation method for item non-response based on the well-known hot-deck approach. The proposed imputation method is developed for imputing numerical data that ensure that all record-level edit rules are satisfied and previously estimated or known totals are exactly prese
Data driven estimation of imputation error-a strategy for imputation with a reject option
Bak, Nikolaj; Hansen, Lars Kai
2016-01-01
We note that the effects of imputation can be strongly dependent on what is missing. To help make decisions about which records should be imputed, we propose to use a machine learning approach to estimate the imputation error for each case with missing data, weighing the "true errors" by similarity. The method can also be used to test the performance of different imputation methods. A universal numerical threshold of acceptable error cannot be set, since this will differ according to the data, research question, and analysis method; the effect of the threshold can, however, be estimated using the complete cases. The user can set an a priori relevant threshold for what is acceptable, or use cross-validation with the final analysis to choose the threshold. The choice can then be presented along with argumentation, rather than imputing indiscriminately.
Nicolazzi, E L; Biffani, S; Jansen, G
2013-04-01
Routine genomic evaluations frequently include a preliminary imputation step, requiring high accuracy and reduced computing time. A new algorithm, PedImpute (http://dekoppel.eu/pedimpute/), was developed and compared with findhap (http://aipl.arsusda.gov/software/findhap/) and BEAGLE (http://faculty.washington.edu/browning/beagle/beagle.html), using 19,904 Holstein genotypes from a 4-country international collaboration (United States, Canada, UK, and Italy). Different scenarios were evaluated on a sample subset that included only single nucleotide polymorphisms from the Bovine low-density (LD) Illumina BeadChip (Illumina Inc., San Diego, CA). Comparative criteria were computing time, percentage of missing alleles, percentage of wrongly imputed alleles, and the allelic squared correlation. Imputation accuracy on ungenotyped animals was also analyzed. The algorithm PedImpute was slightly more accurate and faster than findhap and BEAGLE when sire, dam, and maternal grandsire were genotyped at high density. On the other hand, BEAGLE performed better than both PedImpute and findhap for animals with at least one close relative not genotyped or genotyped at low density. However, computing time and resources using BEAGLE were incompatible with routine genomic evaluations in Italy. Error rate and allelic squared correlation attained by PedImpute ranged from 0.2 to 1.1% and from 96.6 to 99.3%, respectively. When complete genomic information on sire, dam, and maternal grandsire is available, as expected to be the case in the near future in (at least) dairy cattle, and considering the accuracies obtained and computation time required, PedImpute represents a valuable choice for routine evaluations among the algorithms tested.
Improving accuracy of rare variant imputation with a two-step imputation approach
Kreiner-Møller, Eskil; Medina-Gomez, Carolina; Uitterlinden, André G;
2015-01-01
Genotype imputation has been the pillar of the success of genome-wide association studies (GWAS) for identifying common variants associated with common diseases. However, most GWAS have been run using only 60 HapMap samples as reference for imputation, meaning less frequent and rare variants not ... in the low-frequency spectrum and is a cost-effective strategy in large epidemiological studies...
Comparing performance of modern genotype imputation methods in different ethnicities
Roshyara, Nab Raj; Horn, Katrin; Kirsten, Holger; Ahnert, Peter; Scholz, Markus
2016-10-01
A variety of modern software packages are available for genotype imputation, relying on advanced concepts such as pre-phasing of the target dataset or utilization of admixed reference panels. In this study, we performed a comprehensive evaluation of the accuracy of modern imputation methods on the basis of the publicly available POPRES samples. Good quality genotypes were masked and re-imputed by different imputation frameworks: namely MaCH, IMPUTE2, MaCH-Minimac, SHAPEIT-IMPUTE2 and MaCH-Admix. Results were compared to evaluate the relative merit of pre-phasing and the usage of admixed references. We showed that the pre-phasing framework SHAPEIT-IMPUTE2 can overestimate the certainty of genotype distributions, resulting in the lowest percentage of correctly imputed genotypes in our case. MaCH-Minimac performed better than SHAPEIT-IMPUTE2. Pre-phasing always reduced imputation accuracy. IMPUTE2 and MaCH-Admix, both relying on admixed reference panels, showed comparable results. MaCH showed superior results if well-matched references were available (Nei's GST ≤ 0.010). For small to medium datasets, frameworks using the genetically closest reference panel are recommended if the genetic distance between target and reference dataset is small. Our results are valid for small to medium datasets. As shown on a larger dataset of population-based German samples, the disadvantage of pre-phasing decreases for larger sample sizes.
NULL Convention Floating Point Multiplier
Anitha Juliette Albert
2015-01-01
Full Text Available Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
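The field-level operations such a multiplier performs can be sketched in software: XOR the signs, add the biased exponents, multiply the significands, and normalize. The sketch below is a simplification (normal, non-zero operands only; truncation in place of rounding, matching the paper's choice to omit rounding); it is not the NULL convention logic design itself:

```python
import struct

def f32_bits(x):
    # reinterpret a Python float as its IEEE 754 single-precision bit pattern
    return struct.unpack('>I', struct.pack('>f', x))[0]

def bits_f32(b):
    return struct.unpack('>f', struct.pack('>I', b))[0]

def fmul32(a, b):
    """Multiply two normal single-precision floats at the bit level."""
    ba, bb = f32_bits(a), f32_bits(b)
    s = (ba >> 31) ^ (bb >> 31)                      # sign: XOR of operand signs
    ea, eb = (ba >> 23) & 0xFF, (bb >> 23) & 0xFF    # biased exponents
    ma = (ba & 0x7FFFFF) | 0x800000                  # restore implicit leading 1
    mb = (bb & 0x7FFFFF) | 0x800000
    prod = ma * mb                                   # 48-bit significand product
    e = ea + eb - 127                                # remove one bias
    if prod & (1 << 47):                             # product in [2, 4): shift right
        prod >>= 24
        e += 1
    else:                                            # product in [1, 2)
        prod >>= 23
    return bits_f32((s << 31) | ((e & 0xFF) << 23) | (prod & 0x7FFFFF))
```

Because the low product bits are simply discarded, results are truncated toward zero rather than rounded to nearest, which is exactly the trade-off the abstract describes.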
Low Power CMOS Analog Multiplier
Shipra Sachan
2015-12-01
Full Text Available In this paper a low-power, low-voltage CMOS analog multiplier circuit is proposed. It is based on a flipped voltage follower and consists of four voltage adders and a multiplier core. The circuit was analyzed and designed in a 0.18 µm CMOS process model, and simulation results show that, under a single 0.9 V supply voltage, it consumes only 31.8 µW of quiescent power and achieves 110 MHz of bandwidth.
Microwave Frequency Multiplier
Velazco, J. E.
2017-02-01
High-power microwave radiation is used in the Deep Space Network (DSN) and Goldstone Solar System Radar (GSSR) for uplink communications with spacecraft and for monitoring asteroids and space debris, respectively. Intense X-band (7.1 to 8.6 GHz) microwave signals are produced for these applications via klystron and traveling-wave microwave vacuum tubes. In order to achieve higher data rate communications with spacecraft, the DSN is planning to gradually furnish several of its deep space stations with uplink systems that employ Ka-band (34-GHz) radiation. Also, the next generation of planetary radar, such as Ka-Band Objects Observation and Monitoring (KaBOOM), is considering frequencies in the Ka-band range (34 to 36 GHz) in order to achieve higher target resolution. Current commercial Ka-band sources are limited to power levels that range from hundreds of watts up to a kilowatt and, at the high-power end, tend to suffer from poor reliability. In either case, there is a clear need for stable Ka-band sources that can produce kilowatts of power with high reliability. In this article, we present a new concept for high-power, high-frequency generation (including Ka-band) that we refer to as the microwave frequency multiplier (MFM). The MFM is a two-cavity vacuum tube concept where low-frequency (2 to 8 GHz) power is fed into the input cavity to modulate and accelerate an electron beam. In the second cavity, the modulated electron beam excites and amplifies high-power microwaves at a frequency that is a multiple integer of the input cavity's frequency. Frequency multiplication factors in the 4 to 10 range are being considered for the current application, although higher multiplication factors are feasible. This novel beam-wave interaction allows the MFM to produce high-power, high-frequency radiation with high efficiency. A key feature of the MFM is that it uses significantly larger cavities than its klystron counterparts, thus greatly reducing power density and arcing
Krithika S
2012-05-01
Full Text Available Abstract Background We explored the imputation performance of the program IMPUTE in an admixed sample from Mexico City. The following issues were evaluated: (a) the impact of different reference panels (HapMap vs. 1000 Genomes) on imputation; (b) potential differences in imputation performance between single-step vs. two-step (phasing and imputation) approaches; (c) the effect of different INFO score thresholds on imputation performance and (d) imputation performance in common vs. rare markers. Methods The sample from Mexico City comprised 1,310 individuals genotyped with the Affymetrix 5.0 array. We randomly masked 5% of the markers directly genotyped on chromosome 12 (n = 1,046) and compared the imputed genotypes with the microarray genotype calls. Imputation was carried out with the program IMPUTE. The concordance rates between the imputed and observed genotypes were used as a measure of imputation accuracy and the proportion of non-missing genotypes as a measure of imputation efficacy. Results The single-step imputation approach produced slightly higher concordance rates than the two-step strategy (99.1% vs. 98.4% when using the HapMap phase II combined panel), but at the expense of a lower proportion of non-missing genotypes (85.5% vs. 90.1%). The 1000 Genomes reference sample produced similar concordance rates to the HapMap phase II panel (98.4% for both datasets, using the two-step strategy). However, the 1000 Genomes reference sample increased substantially the proportion of non-missing genotypes (94.7% vs. 90.1%). Rare variants ... Conclusions The program IMPUTE had an excellent imputation performance for common alleles in an admixed sample from Mexico City, which has primarily Native American (62%) and European (33%) contributions. Genotype concordances were higher than 98.4% using all the imputation strategies, in spite of the fact that no Native American samples are present in the HapMap and 1000 Genomes reference panels. The best balance of...
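The masking evaluation described above, with concordance as accuracy and the non-missing proportion as efficacy, can be sketched generically; `mask_and_score` and `impute_fn` are illustrative names, the latter a placeholder for any imputation program:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_and_score(genotypes, impute_fn, frac=0.05):
    """Randomly mask a fraction of observed genotype calls, re-impute them,
    and report (concordance, efficacy). `impute_fn` maps a masked array
    (NaN = missing) to an imputed array that may still contain NaN where
    no confident call was made."""
    g = np.array(genotypes, float)
    mask = rng.random(g.shape) < frac      # entries to hide
    masked = g.copy()
    masked[mask] = np.nan
    imputed = impute_fn(masked)
    called = mask & ~np.isnan(imputed)     # masked entries that got a call
    concordance = np.mean(imputed[called] == g[called]) if called.any() else float('nan')
    efficacy = called.sum() / mask.sum()   # proportion of masked entries called
    return concordance, efficacy
```

Because only deliberately hidden entries are scored, the true calls serve as ground truth, which is why this design recurs across the imputation-benchmarking studies collected here.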
Clustering with Missing Values: No Imputation Required
Wagstaff, Kiri
2004-01-01
Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.
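A common concrete form of the marginalization alternative mentioned above is a partial distance: compare items over jointly observed features only, rescaled to the full dimension. This sketches the general idea, not the paper's KSC algorithm:

```python
import numpy as np

def partial_distance(x, y):
    """Euclidean distance over features observed in both items,
    rescaled to the full feature count (returns NaN if no overlap)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mask = ~np.isnan(x) & ~np.isnan(y)   # jointly observed features
    if not mask.any():
        return float('nan')
    d2 = np.sum((x[mask] - y[mask]) ** 2)
    return np.sqrt(d2 * len(x) / mask.sum())  # scale up for missing dims
```

The rescaling keeps distances comparable between pairs with different amounts of overlap, which matters when such distances feed a clustering algorithm.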
An imputation approach for oligonucleotide microarrays.
Ming Li
Full Text Available Oligonucleotide microarrays are commonly adopted for detecting and quantifying the abundance of molecules in biological samples. Analysis of microarray data starts with recording and interpreting hybridization signals from CEL images. However, many CEL images may be blemished by noise from various sources, observed as "bright spots", "dark clouds", and "shadowy circles", etc. It is crucial that these image defects are correctly identified and properly processed. Existing approaches mainly focus on detecting defect areas and removing affected intensities. In this article, we propose to use a mixed effect model for imputing the affected intensities. The proposed imputation procedure is a single-array-based approach which does not require any biological replicate or between-array normalization. We further examine its performance by using Affymetrix high-density SNP arrays. The results show that this imputation procedure significantly reduces genotyping error rates. We also discuss the necessary adjustments for its potential extension to other oligonucleotide microarrays, such as gene expression profiling. The R source code for the implementation of the approach is freely available upon request.
Kamatani Naoyuki
2011-05-01
Full Text Available Abstract Background Missing genotype imputation and haplotype reconstruction are valuable in genome-wide association studies (GWASs). By modeling the patterns of linkage disequilibrium in a reference panel, genotypes not directly measured in the study samples can be imputed and used for GWASs. Since millions of single nucleotide polymorphisms need to be imputed in a GWAS, faster methods for genotype imputation and haplotype reconstruction are required. Results We developed a program package for parallel computation of genotype imputation and haplotype reconstruction. Our program package, ParaHaplo 3.0, is intended for use in workstation clusters using the Intel Message Passing Interface. We compared the performance of ParaHaplo 3.0 on the Japanese in Tokyo, Japan (JPT) and Han Chinese in Beijing, China (CHB) samples from the HapMap dataset. A parallel version of ParaHaplo 3.0 can conduct genotype imputation 20 times faster than a non-parallel version of ParaHaplo. Conclusions ParaHaplo 3.0 is an invaluable tool for conducting haplotype-based GWASs. The need for faster genotype imputation and haplotype reconstruction using parallel computing will become increasingly important as the data sizes of such projects continue to increase. ParaHaplo executable binaries and program sources are available at http://en.sourceforge.jp/projects/parallelgwas/releases/.
Last Multipliers on Lie Algebroids
Mircea Crasmareanu; Cristina-Elena Hreţcanu
2009-06-01
In this paper we extend the theory of last multipliers as solutions of the Liouville’s transport equation to Lie algebroids with their top exterior power as trivial line bundle (previously developed for vector fields and multivectors). We define the notion of exact section and the Liouville equation on Lie algebroids. The aim of the present work is to develop the theory of this extension from the tangent bundle algebroid to a general Lie algebroid (e.g. the set of sections with a prescribed last multiplier is still a Gerstenhaber subalgebra). We present some characterizations of this extension in terms of Witten and Marsden differentials.
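Concretely, in the classical case the paper generalises (the tangent-bundle algebroid, i.e. ordinary vector fields), a last multiplier $m$ of a vector field $X$ is a solution of the Liouville transport equation, which in divergence form reads (a sketch of the standard definition, with respect to a fixed volume form):

```latex
% m is a last multiplier for X iff the rescaled field mX is divergence-free:
\operatorname{div}(mX) \;=\; X(m) + m\,\operatorname{div}(X) \;=\; 0 ,
% equivalently, the transport equation
X(m) \;=\; -\,m\,\operatorname{div}(X).
```

The paper's extension replaces the tangent bundle by a general Lie algebroid, with the top exterior power playing the role of the volume line bundle.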
On combining reference data to improve imputation accuracy.
Jun Chen
Genotype imputation is an important tool in human genetics studies, which uses reference sets with known genotypes and prior knowledge on linkage disequilibrium and recombination rates to infer untyped alleles for human genetic variations at a low cost. The reference sets used by current imputation approaches are based on HapMap data, and/or on recently available next-generation sequencing (NGS) data such as the data generated by the 1000 Genomes Project. However, with different coverage and call rates for different NGS data sets, how to integrate NGS data sets of different accuracy, as well as previously available reference data, as references in imputation is not an easy task and has not been systematically investigated. In this study, we performed a comprehensive assessment of three strategies for using NGS data and previously available reference data in genotype imputation, for both simulated data and empirical data, in order to obtain guidelines for optimal reference set construction. Briefly, we considered three strategies: strategy 1 uses one NGS data set as the reference; strategy 2 imputes samples by using multiple individual data sets of different accuracy as independent references and then, where imputed values overlap, keeps the value based on the higher-accuracy reference; and strategy 3 combines multiple available data sets into a single reference after imputing each against the others. We used three software packages (MACH, IMPUTE2, and BEAGLE) to assess the performance of these three strategies. Our results show that strategy 2 and strategy 3 have higher imputation accuracy than strategy 1. In particular, strategy 2 is the best strategy across all the conditions that we investigated, producing the best imputation accuracy for rare variants. Our study is helpful in guiding the application of imputation methods in next-generation association analyses.
El tratamiento penal del delincuente imputable peligroso
Armaza Armaza, Emilio José
2011-01-01
XI, 529 p. There is no doubt that one of the issues that has gained the greatest importance in criminal policy over recent decades is the penal treatment that the State should apply to the dangerous imputable offender convicted of serious crimes. In this sense, even though throughout history the various human societies have had to contend with the actions of such individuals, it was not until well into the second half of the ...
Cost reduction for web-based data imputation
Li, Zhixu
2014-01-01
Web-based data imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity of these keywords and the complexity of data on the Web, different queries may retrieve different answers for the same absent field value. To decide the most likely correct answer for each absent field value, the existing method issues quite a few imputation queries for each absent value and then votes to decide the most likely correct answer. As a result, a large number of imputation queries must be issued to fill all absent values in an incomplete data set, which brings a large overhead. In this paper, we work on reducing the cost of web-based data imputation in two aspects: First, we propose a query execution scheme which can secure the most likely correct answer to an absent field value by issuing as few imputation queries as possible. Second, we recognize and prune, a priori, queries that will probably fail to return any answers. Our extensive experimental evaluation shows that our proposed techniques substantially reduce the cost of web-based imputation without hurting its high imputation accuracy. © 2014 Springer International Publishing Switzerland.
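The voting idea the abstract builds on can be sketched as follows. This is only one plausible reading of the paper's query-execution scheme, not its actual algorithm: issue queries one at a time and stop as soon as the leading answer's margin exceeds the number of queries still unissued. The function `issue_query` is a hypothetical stand-in for issuing a single web imputation query.

```python
from collections import Counter

def impute_by_voting(queries, issue_query):
    """Issue imputation queries one at a time and vote on the answers.

    Stops early once the leading answer cannot be overtaken by the
    remaining queries, saving query cost. `issue_query` is a
    hypothetical callable returning a candidate value for the absent
    field, or None when the query fails.
    """
    votes = Counter()
    remaining = len(queries)
    for q in queries:
        remaining -= 1
        answer = issue_query(q)
        if answer is not None:
            votes[answer] += 1
        if votes:
            (best, n_best), *rest = votes.most_common(2) + [(None, 0)]
            n_second = rest[0][1]
            if n_best - n_second > remaining:   # lead is now unassailable
                return best
    return votes.most_common(1)[0][0] if votes else None
```

With three queries that agree after the first two answers, the third query is never issued.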
12 CFR 367.9 - Imputation of causes.
2010-01-01
... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Imputation of causes. 367.9 Section 367.9 Banks... SUSPENSION AND EXCLUSION OF CONTRACTOR AND TERMINATION OF CONTRACTS § 367.9 Imputation of causes. (a) Where there is cause to suspend and/or exclude any affiliated business entity of the contractor, that...
Short communication: Imputation of markers on the bovine X chromosome.
Mao, Xiaowei; Johansson, Anna Maria; Sahana, Goutam; Guldbrandtsen, Bernt; De Koning, Dirk-Jan
2016-09-01
Imputation is a cost-effective approach to augment marker data for genomic selection and genome-wide association studies. However, most imputation studies have focused on autosomes. Here, we assessed the imputation of markers on the X chromosome in Holstein cattle for nongenotyped animals and animals genotyped with low-density (Illumina BovineLD, Illumina Inc., San Diego, CA) chips, using animals genotyped with medium-density (Illumina BovineSNP50) chips. A total of 26,884 genotyped Holstein individuals genotyped with medium-density chips were used in this study. Imputation was carried out using FImpute V2.2. The following parameters were examined: treating the pseudoautosomal region as autosomal or as X specific, different sizes of reference groups, different male/female proportions in the reference group, and cumulated degree of relationship between the reference group and target group. The imputation accuracy of markers on the X chromosome was improved if the pseudoautosomal region was treated as autosomal. Increasing the proportion of females in the reference group improved the imputation accuracy for the X chromosome. Imputation for nongenotyped animals in general had lower accuracy compared with animals genotyped with the low-density single nucleotide polymorphism array. In addition, higher cumulative pedigree relationships between the reference group and the target animal led to higher imputation accuracy. In the future, better marker coverage of the X chromosome should be developed to facilitate genomic studies involving the X chromosome.
mice: Multivariate Imputation by Chained Equations in R
van Buuren, Stef; Groothuis-Oudshoorn, Catharina Gerarda Maria
2011-01-01
The R package mice imputes incomplete multivariate data by chained equations. The software mice 1.0 appeared in the year 2000 as an S-PLUS library, and in 2001 as an R package. mice 1.0 introduced predictor selection, passive imputation and automatic pooling. This article documents mice, which
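The chained-equations idea that mice implements can be illustrated with a minimal sketch. This is not the package's algorithm (mice offers many imputation models, predictor selection, and pooling); it is only the core fully-conditional-specification loop with a linear model per column, written in plain NumPy.

```python
import numpy as np

def chained_equations(X, n_iter=10, rng=None):
    """Minimal sketch of fully conditional specification (chained
    equations): each incomplete column is repeatedly re-imputed by a
    linear regression on all other columns, with noise drawn from the
    observed residuals. `X` is a 2-D float array with np.nan marking
    missing entries. For proper multiple imputation, the whole
    procedure would be repeated with different random seeds.
    """
    rng = np.random.default_rng(rng)
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):          # start from column means
        X[miss[:, j], j] = col_mean[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            m = miss[:, j]
            if not m.any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            beta, *_ = np.linalg.lstsq(A[~m], X[~m, j], rcond=None)
            resid = X[~m, j] - A[~m] @ beta
            # predicted value plus a bootstrapped residual
            X[m, j] = A[m] @ beta + rng.choice(resid, size=m.sum())
    return X
```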
MULTIPLE IMPUTATION OF MISSING DATA IN SUSTAINABLE DEVELOPMENT MODELLING
Roberto Benedetti; Rita Lima; Alessandro Pandimiglio
2006-01-01
A multiple imputation technique is proposed to measure sustainable development using structural equation models (LISREL) for the treatment of missing data. The reliability of this technique is verified by comparing the model estimated with missing data to the model estimated with imputed data. The results show that the missing data problem significantly affects the estimation.
A Comparison of Imputation Methods for Bayesian Factor Analysis Models
Merkle, Edgar C.
2011-01-01
Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…
Money Multiplier under Reserve Option Mechanism
Halit AKTURK; Gocen, Hasan; Duran, Suleyman
2015-01-01
This paper introduces a generalized money (M2) multiplier formula to the literature for a monetary system with Reserve Option Mechanism (ROM). Various features of the proposed multiplier are then explored using monthly Turkish data during the decade 2005 to 2015. We report a step increase in the magnitude and a slight upward adjustment in the long-run trend of the multiplier with the adoption of ROM. We provide evidence for substantial change in the seasonal pattern of the multiplier, cash ra...
The multipliers of multiple trigonometric Fourier series
Ydyrys, Aizhan; Sarybekova, Lyazzat; Tleukhanova, Nazerke
2016-11-01
We study the multipliers of multiple Fourier series with respect to a regular system on anisotropic Lorentz spaces. In particular, we obtain sufficient conditions for a sequence of complex numbers {λk}k∈Zn to be a multiplier of multiple trigonometric Fourier series from Lp[0,1]n to Lq[0,1]n, p > q. These conditions include the conditions of Lizorkin's theorem on multipliers.
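To fix the notation used in the abstract above (this is the standard definition, not taken from the paper itself): a sequence {λk} is a multiplier from Lp to Lq when the operator it induces on trigonometric series is bounded between those spaces,

```latex
T_\lambda f(x) = \sum_{k \in \mathbb{Z}^n} \lambda_k \,\hat{f}(k)\, e^{2\pi i k \cdot x},
\qquad
\|T_\lambda f\|_{L^q[0,1]^n} \le C\, \|f\|_{L^p[0,1]^n},
```

where \(\hat{f}(k)\) denotes the k-th Fourier coefficient of \(f\) and \(C\) is independent of \(f\).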
Multiplier theorems for special Hermite expansions on
张震球; 郑维行
2000-01-01
The weak type (1,1) estimate for special Hermite expansions on Cn is proved by using the Calderon-Zygmund decomposition. Then the multiplier theorem in Lp (1 < p < ∞) is obtained. The special Hermite expansions in twisted Hardy space are also considered. As an application, the multipliers for a certain kind of Laguerre expansions are given in Lp space.
Interregional multipliers : looking backward, looking forward
Dietzenbacher, Erik
2002-01-01
Backward linkages are usually measured using output multipliers as based on the input matrix. Similarly, value-added and import multipliers are derived by additionally using the corresponding primary input coefficients. For measuring forward linkages, input multipliers have been frequently used. Wit
Shah, Anoop D; Bartlett, Jonathan W; Carpenter, James; Nicholas, Owen; Hemingway, Harry
2014-03-15
Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.
Pipelined Vedic-Array Multiplier Architecture
Vaijyanath Kunchigik
2014-05-01
In this paper, a pipelined Vedic-array multiplier architecture is proposed. The most significant aspect of the proposed method is that the architecture is designed by combining the Vedic and array methods of multiplication. The multiplier architecture is optimized in terms of multiplication and addition to achieve efficiency in area, delay, and power. This also allows a modular design in which smaller blocks are used to build larger ones, so design complexity is reduced for inputs with a larger number of bits and modularity is increased. The proposed Vedic-array multiplier is coded in Verilog, synthesized, and simulated using the EDA (Electronic Design Automation) tool Xilinx ISE 12.3 (Spartan 3E, Speed Grade -4). Finally, the results are compared with array and Booth multiplier architectures. The proposed multiplier is better in terms of delay than the Booth multiplier and in terms of area than the array multiplier. The proposed multiplier architecture can be used for high-speed requirements.
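The Vedic method referenced above is commonly the Urdhva Tiryagbhyam ("vertically and crosswise") sutra. As an illustration of the arithmetic only (not the paper's Verilog hardware design), the crosswise column-sum pattern can be modeled in software; hardware versions compute all column sums in parallel.

```python
def digits(n, base):
    """Digits of a non-negative integer, least significant first."""
    out = []
    while True:
        out.append(n % base)
        n //= base
        if n == 0:
            return out

def urdhva_multiply(a, b, base=10):
    """Urdhva Tiryagbhyam multiplication pattern: each result column
    is the sum of crosswise digit products, resolved afterwards by
    carry propagation. Base-generic; base=2 models the binary case
    used in hardware multipliers.
    """
    da, db = digits(a, base), digits(b, base)
    cols = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            cols[i + j] += x * y          # crosswise partial products
    result, carry = 0, 0
    for k, c in enumerate(cols):          # carry propagation
        s = c + carry
        result += (s % base) * base ** k
        carry = s // base
    return result + carry * base ** len(cols)
```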
Imputing Gene Expression in Uncollected Tissues Within and Beyond GTEx
Wang, Jiebiao; Gamazon, Eric R.; Pierce, Brandon L.; Stranger, Barbara E.; Im, Hae Kyung; Gibbons, Robert D.; Cox, Nancy J.; Nicolae, Dan L.; Chen, Lin S.
2016-01-01
Gene expression and its regulation can vary substantially across tissue types. In order to generate knowledge about gene expression in human tissues, the Genotype-Tissue Expression (GTEx) program has collected transcriptome data in a wide variety of tissue types from post-mortem donors. However, many tissue types are difficult to access and are not collected in every GTEx individual. Furthermore, in non-GTEx studies, the accessibility of certain tissue types greatly limits the feasibility and scale of studies of multi-tissue expression. In this work, we developed multi-tissue imputation methods to impute gene expression in uncollected or inaccessible tissues. Via simulation studies, we showed that the proposed methods outperform existing imputation methods in multi-tissue expression imputation and that incorporating imputed expression data can improve power to detect phenotype-expression correlations. By analyzing data from nine selected tissue types in the GTEx pilot project, we demonstrated that harnessing expression quantitative trait loci (eQTLs) and tissue-tissue expression-level correlations can aid imputation of transcriptome data from uncollected GTEx tissues. More importantly, we showed that by using GTEx data as a reference, one can impute expression levels in inaccessible tissues in non-GTEx expression studies. PMID:27040689
TR01: Time-continuous Sparse Imputation
Gemmeke, J F
2009-01-01
An effective way to increase the noise robustness of automatic speech recognition is to label noisy speech features as either reliable or unreliable (missing) prior to decoding, and to replace the missing ones by clean speech estimates. We present a novel method to obtain such clean speech estimates. Unlike previous imputation frameworks which work on a frame-by-frame basis, our method focuses on exploiting information from a large time-context. Using a sliding window approach, denoised speech representations are constructed using a sparse representation of the reliable features in an overcomplete basis of fixed-length exemplar fragments. We demonstrate the potential of our approach with experiments on the AURORA-2 connected digit database.
Mistler, Stephen A.; Enders, Craig K.
2017-01-01
Multiple imputation methods can generally be divided into two broad frameworks: joint model (JM) imputation and fully conditional specification (FCS) imputation. JM draws missing values simultaneously for all incomplete variables using a multivariate distribution, whereas FCS imputes variables one at a time from a series of univariate conditional…
missForest: Nonparametric missing value imputation using random forest
Stekhoven, Daniel J.
2015-05-01
missForest imputes missing values particularly in the case of mixed-type data. It uses a random forest trained on the observed values of a data matrix to predict the missing values. It can be used to impute continuous and/or categorical data including complex interactions and non-linear relations. It yields an out-of-bag (OOB) imputation error estimate without the need of a test set or elaborate cross-validation and can be run in parallel to save computation time. missForest has been used to, among other things, impute variable star colors in an All-Sky Automated Survey (ASAS) dataset of variable stars with no NOMAD match.
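The missForest loop can be sketched as below, assuming scikit-learn's RandomForestRegressor as the forest. The real package also handles categorical columns, orders columns by missingness, stops when the imputations stop changing, and reports the OOB error; none of that is reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_impute(X, n_iter=5, seed=0):
    """missForest-style imputation sketch: initialise missing entries
    with column means, then cycle over incomplete columns, training a
    random forest on the observed rows and predicting the missing
    ones, until the iteration budget is spent.
    """
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_mean[j]
    for _ in range(n_iter):
        for j in np.flatnonzero(miss.any(axis=0)):
            m = miss[:, j]
            others = np.delete(X, j, axis=1)
            rf = RandomForestRegressor(n_estimators=50, random_state=seed)
            rf.fit(others[~m], X[~m, j])      # learn from observed rows
            X[m, j] = rf.predict(others[m])   # fill the missing rows
    return X
```

Because the forest is nonparametric, a nonlinear relation such as y = x² is recovered without specifying a model, which is the property the abstract emphasizes.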
Faster and Low Power Twin Precision Multiplier
Sreedeep, V; Kittur, Harish M
2011-01-01
In this work, faster unsigned multiplication is achieved by combining the High Performance Multiplication (HPM) column-reduction technique, implementing an N-bit multiplier from four N/2-bit multipliers (recursive multiplication), and accelerating the final addition with a hybrid adder. Low power is achieved by using a clock gating technique. Based on the proposed technique, 16- and 32-bit multipliers are developed. The performance of the proposed multiplier is analyzed by evaluating the delay, area, and power, with TCBNPHP 90 nm process technology on interconnect and layout using the Cadence NC launch, RTL Compiler, and ENCOUNTER tools. The results show that the proposed 32-bit multiplier is as much as 22% faster, occupies only 3% more area, and consumes 30% less power than the recently reported twin precision multiplier.
Traditional and Truncation schemes for Different Multiplier
Yogesh M. Motey
2013-03-01
A fast and power-efficient multiplier is always vital in the electronics industry, e.g., in DSP, image processing, and microprocessor ALUs. The multiplier is an important block with respect to power consumption and area occupied in a system. In order to meet the demand for high speed, various parallel array multiplication algorithms have been proposed by a number of authors. Array multipliers use a large amount of hardware and consequently consume a large amount of power. One of the methods for multiplication is based on Indian Vedic mathematics. Vedic mathematics is based on sixteen sutras (word formulae) and manifests a unified structure of mathematics. Parallel multipliers, for example the radix-2 and radix-4 Booth multipliers, do the computations using fewer adders and fewer iterative steps, with the result that they occupy less space than a serial multiplier. Truncated multipliers offer noteworthy enhancements in area, delay, and power. Truncated multiplication provides a different method for reducing the power dissipation and area of rounded parallel multipliers in DSP systems. Since in a truncated multiplier the x less significant bits of the full-width product are discarded, some partial products are removed and replaced by suitable compensation equations, trading accuracy against hardware cost. A pseudo-carry compensation truncation (PCT) scheme, intended for the multiplexer-based array multiplier, yields the lowest average error among existing truncation methods. After studying many research papers, it is found that certain multiplier schemes are suitable because of the uniqueness of their multiplication approach. Such schemes are listed in this paper, for example the different truncation schemes such as constant-correction truncation (CCT), variable-correction truncation (VCT), and pseudo-carry compensation truncation (PCT), which are most suitable for truncated multipliers.
Analysis of Gilbert Multiplier Using Pspice
Mayank Kumar,
2014-05-01
In this paper, the Gilbert multiplier is implemented using PSpice. Three analyses of the Gilbert multiplier are performed, namely DC, AC, and transient analysis, with the help of the SPICE software. SPICE is a general-purpose circuit program that simulates electronic circuits and can perform various analyses of them; with its help, the analysis of the Gilbert multiplier is presented in this paper.
WIMP: web server tool for missing data imputation.
Urda, D; Subirats, J L; García-Laencina, P J; Franco, L; Sancho-Gómez, J L; Jerez, J M
2012-12-01
The imputation of unknown or missing data is a crucial task in the analysis of biomedical datasets. There are several situations where it is necessary to classify or identify instances given incomplete vectors, and the existence of missing values can greatly degrade the performance of the algorithms used for classification/recognition. The task of learning accurately from incomplete data raises a number of issues, some of which have not been completely solved in machine learning applications. In this sense, effective missing value estimation methods are required. Different methods for missing data imputation exist, but most of the time selecting the appropriate technique involves testing several methods, comparing them, and choosing the right one. Furthermore, applying these methods is in most cases not straightforward, as they involve several technical details; in particular, when dealing with microarray datasets, applying the methods requires huge computational resources. As far as we know, there is no public software application that can provide the computing capabilities required for carrying out the task of data imputation. This paper presents a new public tool for missing data imputation that is attached to a computer cluster in order to execute highly computational tasks. The software WIMP (Web IMPutation) is a publicly available web site where registered users can create, execute, analyze, and store their simulations related to missing data imputation.
A web-based approach to data imputation
Li, Zhixu
2013-10-24
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, for improving the accuracy and efficiency of WebPut. Moreover, several optimization techniques are also proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple level and the database level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques. © 2013 Springer Science+Business Media New York.
A SPATIOTEMPORAL APPROACH FOR HIGH RESOLUTION TRAFFIC FLOW IMPUTATION
Han, Lee [University of Tennessee, Knoxville (UTK); Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL
2016-01-01
Along with the rapid development of Intelligent Transportation Systems (ITS), traffic data collection technologies have been evolving dramatically. The emergence of innovative data collection technologies such as Remote Traffic Microwave Sensor (RTMS), Bluetooth sensor, GPS-based Floating Car method, automated license plate recognition (ALPR) (1), etc., creates an explosion of traffic data, which brings transportation engineering into the new era of Big Data. However, despite the advance of technologies, the missing data issue is still inevitable and has posed great challenges for research such as traffic forecasting, real-time incident detection and management, dynamic route guidance, and massive evacuation optimization, because the degree of success of these endeavors depends on the timely availability of relatively complete and reasonably accurate traffic data. A thorough literature review suggests most current imputation models, if not all, focus largely on the temporal nature of the traffic data and fail to consider the fact that traffic stream characteristics at a certain location are closely related to those at neighboring locations and utilize these correlations for data imputation. To this end, this paper presents a Kriging based spatiotemporal data imputation approach that is able to fully utilize the spatiotemporal information underlying in traffic data. Imputation performance of the proposed approach was tested using simulated scenarios and achieved stable imputation accuracy. Moreover, the proposed Kriging imputation model is more flexible compared to current models.
High speed multiplier design using Decomposition Logic
Ramanathan Palaniappan
2009-01-01
The multiplier forms the core of a digital signal processor and is a major source of power dissipation. Often, the multiplier is the limiting factor for the maximum speed of operation of a digital signal processor. Due to ever-increasing integration density and the growing needs of portable devices, low-power, high-performance design is of prime importance. A new technique for implementing a multiplier circuit using decomposition logic is proposed here, which improves speed with very little increase in power dissipation compared to tree-structured Dadda multipliers. Tanner EDA was used for simulation in the TSMC 180 nm technology.
Multiplier phenomenology in random multiplicative cascade processes
Jouault, B; Greiner, M; Jouault, Bruno; Lipa, Peter; Greiner, Martin
1999-01-01
We demonstrate that the correlations observed in conditioned multiplier distributions of the energy dissipation in fully developed turbulence can be understood as an unavoidable artefact of the observation procedure. Taking the latter into account, all reported properties of both unconditioned and conditioned multiplier distributions can be reproduced by cascade models with uncorrelated random weights if their bivariate splitting function is non-energy conserving. For the alpha-model we show that the simulated multiplier distributions converge to a limiting form, which is very close to the experimentally observed one. If random translations of the observation window are accounted for, also the subtle effects found in conditioned multiplier distributions are precisely reproduced.
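A minimal simulation of such a cascade with uncorrelated random weights can make the setup concrete. This is a sketch only: the two-valued weight distribution and the probabilities below are illustrative, not the paper's fitted splitting function.

```python
import numpy as np

def alpha_cascade(levels, q_plus=1.5, q_minus=0.5, p=0.5, rng=None):
    """Non-energy-conserving alpha-model cascade: the unit interval is
    repeatedly halved and each half's energy density is multiplied by
    an independent random weight, q_plus with probability p and
    q_minus otherwise. Returns the energy field after `levels` steps.
    """
    rng = np.random.default_rng(rng)
    field = np.ones(1)
    for _ in range(levels):
        w = rng.choice([q_plus, q_minus], size=2 * len(field), p=[p, 1 - p])
        field = np.repeat(field, 2) * w    # split each cell, weight both halves
    return field

def measured_multipliers(field):
    """Left-child multiplier observed at the finest level:
    m = e_left / (e_left + e_right) for each sibling pair."""
    pairs = field.reshape(-1, 2)
    return pairs[:, 0] / pairs.sum(axis=1)
```

Histograms of `measured_multipliers` under shifted observation windows are the kind of object the abstract compares with experimental multiplier distributions.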
Ma, Yan; Zhang, Wei; Lyman, Stephen; Huang, Yihe
2017-05-04
To identify the most appropriate imputation method for missing data in the HCUP State Inpatient Databases (SID) and assess the impact of different missing data methods on racial disparities research. HCUP SID. A novel simulation study compared four imputation methods (random draw, hot deck, joint multiple imputation [MI], conditional MI) for missing values for multiple variables, including race, gender, admission source, median household income, and total charges. The simulation was built on real data from the SID to retain their hierarchical data structures and missing data patterns. Additional predictive information from the U.S. Census and American Hospital Association (AHA) database was incorporated into the imputation. Conditional MI prediction was equivalent or superior to the best performing alternatives for all missing data structures and substantially outperformed each of the alternatives in various scenarios. Conditional MI substantially improved statistical inferences for racial health disparities research with the SID. © Health Research and Educational Trust.
Multiply Phased Traveling BPS Vortex
Kimm, Kyoungtae; Cho, Y M
2016-01-01
We present the multiply phased current carrying vortex solutions in the U(1) gauge theory coupled to an $(N+1)$-component SU(N+1) scalar multiplet in the Bogomolny limit. Our vortex solutions correspond to the static vortex dressed with traveling waves along the axis of symmetry. What is notable in our vortex solutions is that the frequencies of traveling waves in each component of the scalar field can have different values. The energy of the static vortex is proportional to the topological charge of $CP^N$ model in the BPS limit, and the multiple phase of the vortex supplies additional energy contribution which is proportional to the Noether charge associated to the remaining symmetry.
A CMOS floating point multiplier
Uya, M.; Kaneko, K.; Yasui, J.
1984-10-01
This paper describes a 32-bit CMOS floating point multiplier. The chip can perform 32-bit floating point multiplication (based on the proposed IEEE Standard format) and 24-bit fixed point multiplication (two's complement format) in less than 78.7 and 71.1 ns, respectively, and the typical power dissipation is 195 mW at 10 million operations per second. High-speed multiplication techniques - a modified Booth's algorithm, a carry save adder scheme, a high-speed CMOS full adder, and a modified carry select adder - are used to achieve the above high performance. The chip is designed for compatibility with 16-bit microcomputer systems and is fabricated in 2 micron n-well CMOS technology; it contains about 23,000 transistors in a die 5.75 x 5.67 mm² in size.
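The modified Booth's algorithm mentioned above is radix-4 recoding, which halves the number of partial products versus plain shift-and-add. A software model of the arithmetic (not the CMOS circuit) looks like this:

```python
def booth_radix4_multiply(a, b, nbits=32):
    """Radix-4 (modified) Booth multiplication, modeled in software.

    Overlapping 3-bit groups of the multiplier `b` are recoded into
    digits in {-2, -1, 0, 1, 2}; each digit selects a shifted and
    possibly negated copy of the multiplicand `a`. `b` is interpreted
    as a two's-complement value in `nbits` (even) bits.
    """
    mask = (1 << nbits) - 1
    ub = b & mask                       # two's-complement bit pattern of b
    # bits[k] holds bit (k-1) of b; bits[0] is the implicit b_{-1} = 0
    bits = [0] + [(ub >> i) & 1 for i in range(nbits)]
    product = 0
    for i in range(0, nbits, 2):
        digit = bits[i] + bits[i + 1] - 2 * bits[i + 2]
        product += (digit * a) << i     # one partial product per digit pair
    return product
```

Only nbits/2 partial products are accumulated, which is what the hardware exploits with its carry save adder tree.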
Design of optimized Interval Arithmetic Multiplier
Rajashekar B.Shettar
2011-07-01
Many DSP and control applications require the user to know how various numerical errors (uncertainty) affect the result. This uncertainty is eliminated by replacing non-interval values with intervals. Since most DSPs operate in real-time environments, fast processors are required to implement interval arithmetic. The goal is to develop a platform in which interval arithmetic operations are performed at the same computational speed as on present-day signal processors. We therefore propose the design and implementation of an interval arithmetic multiplier that operates on IEEE 754 numbers. The proposed unit consists of a floating-point CSD multiplier and an interval operation selector. This architecture implements an algorithm that is faster than the conventional interval multiplier algorithm. The cost overhead of the proposed unit is 30% with respect to a conventional floating-point multiplier. The performance of the proposed architecture is better than that of a conventional CSD floating-point multiplier, as it can perform interval multiplication and floating-point multiplication as well as interval comparisons.
The Utility of Nonparametric Transformations for Imputation of Survey Data
Robbins Michael W.
2014-12-01
Missing values present a prevalent problem in the analysis of establishment survey data. Multivariate imputation algorithms (which are used to fill in missing observations) tend to have the common limitation that imputations for continuous variables are sampled from Gaussian distributions. This limitation is addressed here through the use of robust marginal transformations. Specifically, kernel-density and empirical distribution-type transformations are discussed and are shown to have favorable properties when used for imputation of complex survey data. Although such techniques have wide applicability (i.e., they may be easily applied in conjunction with a wide array of imputation techniques), the proposed methodology is applied here with an algorithm for imputation in the USDA's Agricultural Resource Management Survey. Data analysis and simulation results are used to illustrate the specific advantages of the robust methods when compared to the fully parametric techniques and to other relevant techniques such as predictive mean matching. To summarize, transformations based upon parametric densities are shown to distort several data characteristics in circumstances where the parametric model is ill fit; however, no circumstances are found in which the transformations based upon parametric models outperform the nonparametric transformations. As a result, the transformation based upon the empirical distribution (which is the most computationally efficient) is recommended over the other transformation procedures in practice.
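The empirical-distribution transformation can be sketched as a rank-based map of each margin to the Gaussian scale and back. This is an illustration of the general idea under stated assumptions, not the paper's algorithm (whose kernel-density variant smooths the empirical CDF); it uses only the standard library's `NormalDist`.

```python
import numpy as np
from statistics import NormalDist

def to_gaussian(x):
    """Forward transform: map each value to its rescaled empirical
    rank, then through the standard normal quantile function, so a
    Gaussian imputation model can be applied to a non-Gaussian margin.
    """
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1          # ranks 1..n
    u = ranks / (n + 1)                            # keep u strictly in (0, 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in u])

def from_gaussian(z, observed):
    """Back-transform: send a Gaussian-scale value to the matching
    empirical quantile of the observed margin."""
    nd = NormalDist()
    u = np.array([nd.cdf(v) for v in z])
    return np.quantile(observed, u)
```

Imputations drawn on the Gaussian scale and mapped back through `from_gaussian` inherit the observed margin's shape instead of a parametric one.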
Doubly robust and multiple-imputation-based generalized estimating equations.
Birhanu, Teshome; Molenberghs, Geert; Sotto, Cristina; Kenward, Michael G
2011-03-01
Generalized estimating equations (GEE), proposed by Liang and Zeger (1986), provide a popular method to analyze correlated non-Gaussian data. When data are incomplete, the GEE method suffers from its frequentist nature and inferences under this method are valid only under the strong assumption that the missing data are missing completely at random. When response data are missing at random, two modifications of GEE can be considered, based on inverse-probability weighting or on multiple imputation. The weighted GEE (WGEE) method involves weighting observations by the inverse of their probability of being observed. Imputation methods involve filling in missing observations with values predicted by an assumed imputation model, multiple times. The so-called doubly robust (DR) methods involve both a model for the weights and a predictive model for the missing observations given the observed ones. To yield consistent estimates, WGEE needs correct specification of the dropout model while imputation-based methodology needs a correctly specified imputation model. DR methods need correct specification of either the weight or the predictive model, but not necessarily both. Focusing on incomplete binary repeated measures, we study the relative performance of the singly robust and doubly robust versions of GEE in a variety of correctly and incorrectly specified models using simulation studies. Data from a clinical trial in onychomycosis further illustrate the method.
The multiple imputation method: a case study involving secondary data analysis.
Walani, Salimah R; Cleland, Charles M
2015-05-01
To illustrate, with the example of a secondary data analysis study, the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiply imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiply imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
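After fitting the same regression to each of the five imputed datasets, the per-dataset estimates are typically combined with Rubin's rules; a minimal sketch (the coefficient and variance values below are made up for illustration):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine per-imputation estimates with Rubin's rules.

    estimates, variances: length-m sequences of a coefficient and its
    squared standard error from each of the m imputed datasets.
    Returns the pooled estimate and its total variance T = W + (1 + 1/m)B.
    """
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = len(est)
    qbar = est.mean()                      # pooled point estimate
    w = var.mean()                         # within-imputation variance
    b = est.var(ddof=1) if m > 1 else 0.0  # between-imputation variance
    t = w + (1 + 1 / m) * b
    return qbar, t

# five imputed datasets give slightly different wage-gap coefficients
q, t = pool_rubin([0.10, 0.12, 0.09, 0.11, 0.10],
                  [0.0004, 0.0004, 0.0005, 0.0004, 0.0004])
```

The between-imputation component `b` is what carries the imputation uncertainty into the pooled standard error.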
TRIP: An interactive retrieving-inferring data imputation approach
Li, Zhixu
2016-06-25
Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to nonquantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing ones from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help by formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches a high imputation precision and recall, but on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.
Variable selection under multiple imputation using the bootstrap in a prognostic study
Heymans, M.W.; Buuren, S. van; Knol, D.L.; Mechelen, W. van; Vet, H.C.W. de
2007-01-01
Background. Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty that allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variables.
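One common way to combine bootstrapping with variable selection (a simplified stand-in for the paper's MI-plus-bootstrap procedure, without the imputation step) is to refit the model on bootstrap resamples and track how often each predictor is selected; all names and thresholds below are illustrative assumptions:

```python
import numpy as np

def bootstrap_inclusion(X, y, n_boot=200, t_thresh=2.0, seed=1):
    """Bootstrap inclusion frequencies for OLS predictors.

    On each bootstrap resample, fit OLS and flag predictors whose
    |t|-statistic exceeds t_thresh; the inclusion frequency is the
    fraction of resamples in which a predictor is flagged.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample rows with replacement
        Xb, yb = X[idx], y[idx]
        beta = np.linalg.lstsq(Xb, yb, rcond=None)[0]
        resid = yb - Xb @ beta
        sigma2 = resid @ resid / (n - p)
        cov = sigma2 * np.linalg.inv(Xb.T @ Xb)
        tstat = beta / np.sqrt(np.diag(cov))
        counts += (np.abs(tstat) > t_thresh)
    return counts / n_boot

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] + rng.normal(size=n)            # only the first predictor matters
freq = bootstrap_inclusion(X, y)
```

Predictors with high inclusion frequency across resamples are the stable candidates for the final prognostic model.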
Using imputation to provide location information for nongeocoded addresses.
Frank C Curriero
Full Text Available BACKGROUND: The importance of geography as a source of variation in health research continues to receive sustained attention in the literature. The inclusion of geographic information in such research often begins by adding data to a map which is predicated by some knowledge of location. A precise level of spatial information is conventionally achieved through geocoding, the geographic information system (GIS) process of translating mailing address information to coordinates on a map. The geocoding process is not without its limitations, though, since there is always a percentage of addresses which cannot be converted successfully (nongeocodable). This raises concerns regarding bias since traditionally the practice has been to exclude nongeocoded data records from analysis. METHODOLOGY/PRINCIPAL FINDINGS: In this manuscript we develop and evaluate a set of imputation strategies for dealing with missing spatial information from nongeocoded addresses. The strategies are developed assuming a known zip code with increasing use of collateral information, namely the spatial distribution of the population at risk. Strategies are evaluated using prostate cancer data obtained from the Maryland Cancer Registry. We consider total case enumerations at the Census county, tract, and block group level as the outcome of interest when applying and evaluating the methods. Multiple imputation is used to provide estimated total case counts based on complete data (geocodes plus imputed nongeocodes) with a measure of uncertainty. Results indicate that the imputation strategy based on using available population-based age, gender, and race information performed the best overall at the county, tract, and block group levels. CONCLUSIONS/SIGNIFICANCE: The procedure allows for the potentially biased and likely underreported outcome, case enumerations based on only the geocoded records, to be presented with a statistically adjusted count (imputed count) with a measure of
Multiple imputation for handling missing outcome data when estimating the relative risk.
Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B
2017-09-06
Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. However, fully conditional specification is not without its
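The distinction between the two effect measures matters because, for common outcomes, the odds ratio from a logistic model overstates the relative risk that a log binomial model targets; a quick numeric check (the counts are invented for illustration):

```python
def risk_measures(exposed_events, exposed_n, unexposed_events, unexposed_n):
    """Relative risk and odds ratio from a 2x2 table."""
    r1 = exposed_events / exposed_n          # risk in the exposed group
    r0 = unexposed_events / unexposed_n      # risk in the unexposed group
    rr = r1 / r0                             # relative risk
    odds_ratio = (r1 / (1 - r1)) / (r0 / (1 - r0))
    return rr, odds_ratio

# common outcome: 40% risk in exposed vs 20% in unexposed
rr, or_ = risk_measures(40, 100, 20, 100)
```

Here the relative risk is 2.0 while the odds ratio is about 2.67; the gap grows with outcome prevalence, which is why imputation models built around logistic or normal assumptions can bias relative-risk estimates.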
Designing a Novel Ternary Multiplier Using CNTFET
Nooshin Azimi
2014-11-01
Full Text Available Today, multipliers are included as substantial keys of many systems with high efficiency such as FIR filters, microprocessors and digital signal processors. The efficiency of these systems is mainly determined by the capability of their multipliers, since multipliers are generally the slowest components of a system while occupying the most space. Multiple Valued Logic reduces the number of operations required to implement a function and decreases the chip area. Carbon Nanotube Field Effect Transistors (CNTFETs) are considered good substitutes for silicon transistors (MOSFETs). Combining the abilities of carbon nanotube transistors with the advantages of Multiple Valued Logic can provide a unique design with higher speed and less complexity. In this paper, a new multiplier is presented using nanotechnology and ternary logic that improves power consumption, raises the speed and decreases the chip area as well. The presented design is simulated using the Stanford University CNTFET model and HSPICE software, and the results are compared with other instances.
IMPLEMENTATION OF VEDIC MULTIPLIER USING REVERSIBLE GATES
P. Koti Lakshmi
2015-07-01
Full Text Available With DSP applications evolving continuously, there is continuous need for improved multipliers which are faster and power efficient. Reversible logic is a new and promising field which addresses the problem of power dissipation. It has been shown to consume zero power theoretically. Vedic mathematics techniques have always proven to be fast and efficient for solving various problems. Therefore, in this paper we implement Urdhva Tiryagbhyam algorithm using reversible logic thereby addressing two important issues – speed and power consumption of implementation of multipliers. In this work, the design of 4x4 Vedic multiplier is optimized by reducing the number of logic gates, constant inputs, and garbage outputs. This multiplier can find its application in various fields like convolution, filter applications, cryptography, and communication.
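The Urdhva Tiryagbhyam ("vertically and crosswise") pattern that the paper maps onto reversible gates can be sketched in software: each result column sums the crosswise digit products before carries are propagated. This is a behavioral model of the arithmetic only, not the gate-level design:

```python
def urdhva_multiply(a, b, base=10):
    """Multiply via the Urdhva Tiryagbhyam (vertically and crosswise) pattern.

    Digits are cross-multiplied column by column (all pairs i, j with
    i + j = k contribute to column k), then carries are propagated --
    the same partial-product structure a hardware Vedic multiplier wires up.
    """
    def digits(x):
        out = []
        while x:
            out.append(x % base)
            x //= base
        return out or [0]

    da, db = digits(a), digits(b)
    cols = [0] * (len(da) + len(db))
    for i, ai in enumerate(da):
        for j, bj in enumerate(db):
            cols[i + j] += ai * bj            # crosswise column sums
    carry, out = 0, 0
    for k, c in enumerate(cols):              # propagate carries
        s = c + carry
        out += (s % base) * base ** k
        carry = s // base
    return out + carry * base ** len(cols)
```

A 4x4 hardware version fixes `base=2` and four digits per operand; the column sums then become the partial products the reversible adders compress.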
A Comparison of Item-Level and Scale-Level Multiple Imputation for Questionnaire Batteries
Gottschall, Amanda C.; West, Stephen G.; Enders, Craig K.
2012-01-01
Behavioral science researchers routinely use scale scores that sum or average a set of questionnaire items to address their substantive questions. A researcher applying multiple imputation to incomplete questionnaire data can either impute the incomplete items prior to computing scale scores or impute the scale scores directly from other scale…
Hyperbolicity of semigroups and Fourier multipliers
Latushkin, Yuri; Shvidkoy, Roman
2001-01-01
We present a characterization of hyperbolicity for strongly continuous semigroups on Banach spaces in terms of Fourier multiplier properties of the resolvent of the generator. Hyperbolicity with respect to classical solutions is also considered. Our approach unifies and simplifies the M. Kaashoek and S. Verduyn Lunel theory and multiplier-type results previously obtained by S. Clark, M. Hieber, S. Montgomery-Smith, F. Räbiger, T. Randolph, and L. Weis.
Multiple Imputation Strategies for Multiple Group Structural Equation Models
Enders, Craig K.; Gottschall, Amanda C.
2011-01-01
Although structural equation modeling software packages use maximum likelihood estimation by default, there are situations where one might prefer to use multiple imputation to handle missing data rather than maximum likelihood estimation (e.g., when incorporating auxiliary variables). The selection of variables is one of the nuances associated…
Multiple Imputation of Predictor Variables Using Generalized Additive Models
de Jong, Roel; van Buuren, Stef; Spiess, Martin
2016-01-01
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The performance…
Imputing amino acid polymorphisms in human leukocyte antigens.
Xiaoming Jia
Full Text Available DNA sequence variation within human leukocyte antigen (HLA) genes mediates susceptibility to a wide range of human diseases. The complex genetic structure of the major histocompatibility complex (MHC) makes it difficult, however, to collect genotyping data in large cohorts. Long-range linkage disequilibrium between HLA loci and SNP markers across the MHC region offers an alternative approach through imputation to interrogate HLA variation in existing GWAS data sets. Here we describe a computational strategy, SNP2HLA, to impute classical alleles and amino acid polymorphisms at class I (HLA-A, -B, -C) and class II (-DPA1, -DPB1, -DQA1, -DQB1, and -DRB1) loci. To characterize performance of SNP2HLA, we constructed two European ancestry reference panels, one based on data collected in HapMap-CEPH pedigrees (90 individuals) and another based on data collected by the Type 1 Diabetes Genetics Consortium (T1DGC, 5,225 individuals). We imputed HLA alleles in an independent data set from the British 1958 Birth Cohort (N = 918) with gold standard four-digit HLA types and SNPs genotyped using the Affymetrix GeneChip 500 K and Illumina Immunochip microarrays. We demonstrate that the sample size of the reference panel, rather than SNP density of the genotyping platform, is critical to achieve high imputation accuracy. Using the larger T1DGC reference panel, the average accuracy at four-digit resolution is 94.7% using the low-density Affymetrix GeneChip 500 K, and 96.7% using the high-density Illumina Immunochip. For amino acid polymorphisms within HLA genes, we achieve 98.6% and 99.3% accuracy using the Affymetrix GeneChip 500 K and Illumina Immunochip, respectively. Finally, we demonstrate how imputation and association testing at amino acid resolution can facilitate fine-mapping of primary MHC association signals, giving a specific example from type 1 diabetes.
Sequence imputation of HPV16 genomes for genetic association studies.
Benjamin Smith
Full Text Available BACKGROUND: Human Papillomavirus type 16 (HPV16) causes over half of all cervical cancer and some HPV16 variants are more oncogenic than others. The genetic basis for the extraordinary oncogenic properties of HPV16 compared to other HPVs is unknown. In addition, we neither know which nucleotides vary across and within HPV types and lineages, nor which of the single nucleotide polymorphisms (SNPs) determine oncogenicity. METHODS: A reference set of 62 HPV16 complete genome sequences was established and used to examine patterns of evolutionary relatedness amongst variants using a pairwise identity heatmap and HPV16 phylogeny. A BLAST-based algorithm was developed to impute complete genome data from partial sequence information using the reference database. To interrogate the oncogenic risk of determined and imputed HPV16 SNPs, odds-ratios for each SNP were calculated in a case-control viral genome-wide association study (VWAS) using biopsy confirmed high-grade cervix neoplasia and self-limited HPV16 infections from Guanacaste, Costa Rica. RESULTS: HPV16 variants display evolutionarily stable lineages that contain conserved diagnostic SNPs. The imputation algorithm indicated that an average of 97.5±1.03% of SNPs could be accurately imputed. The VWAS revealed specific HPV16 viral SNPs associated with variant lineages and elevated odds ratios; however, individual causal SNPs could not be distinguished with certainty due to the nature of HPV evolution. CONCLUSIONS: Conserved and lineage-specific SNPs can be imputed with a high degree of accuracy from limited viral polymorphic data due to the lack of recombination and the stochastic mechanism of variation accumulation in the HPV genome. However, to determine the role of novel variants or non-lineage-specific SNPs by VWAS will require direct sequence analysis. The investigation of patterns of genetic variation and the identification of diagnostic SNPs for lineages of HPV16 variants provides a valuable
Application of imputation methods to genomic selection in Chinese Holstein cattle
Weng Ziqing
2012-02-01
Full Text Available Missing genotypes are a common feature of high density SNP datasets obtained using SNP chip technology, and this is likely to decrease the accuracy of genomic selection. This problem can be circumvented by imputing the missing genotypes with estimated genotypes. When implementing imputation, the criteria used for SNP data quality control and whether to perform imputation before or after data quality control need to be considered. In this paper, we compared six strategies of imputation and quality control using different imputation methods, different quality control criteria and by changing the order of imputation and quality control, on a real dataset of milk production traits in Chinese Holstein cattle. The results demonstrated that, no matter what imputation method and quality control criteria were used, strategies with imputation before quality control performed better than strategies with imputation after quality control in terms of accuracy of genomic selection. The different imputation methods and quality control criteria did not significantly influence the accuracy of genomic selection. We concluded that performing imputation before quality control could increase the accuracy of genomic selection, especially when the rate of missing genotypes is high and the reference population is small.
Multiple Imputation with Diagnostics (mi) in R: Opening Windows into the Black Box
Yu-Sung Su
2011-12-01
Full Text Available Our mi package in R has several features that allow the user to get inside the imputation process and evaluate the reasonableness of the resulting models and imputations. These features include: choice of predictors, models, and transformations for chained imputation models; standard and binned residual plots for checking the fit of the conditional distributions used for imputation; and plots for comparing the distributions of observed and imputed data. In addition, we use Bayesian models and weakly informative prior distributions to construct more stable estimates of imputation models. Our goal is to have a demonstration package that (a) avoids many of the practical problems that arise with existing multivariate imputation programs, and (b) demonstrates state-of-the-art diagnostics that can be applied more generally and can be incorporated into the software of others.
Efficient Realization of BCD Multipliers Using FPGAs
Shuli Gao
2017-01-01
Full Text Available In this paper, a novel BCD multiplier approach is proposed. The main highlight of the proposed architecture is the generation of the partial products and parallel binary operations based on 2-digit columns. 1 × 1-digit multipliers used for the partial product generation are implemented directly by 4-bit binary multipliers without any code conversion. The binary results of the 1 × 1-digit multiplications are organized according to their two-digit positions to generate the 2-digit column-based partial products. A binary-decimal compressor structure is developed and used for partial product reduction. These reduced partial products are added in optimized 6-LUT BCD adders. The parallel binary operations and the improved BCD addition result in improved performance and reduced resource usage. The proposed approach was implemented on Xilinx Virtex-5 and Virtex-6 FPGAs with emphasis on the critical path delay reduction. Pipelined BCD multipliers were implemented for 4 × 4, 8 × 8, and 16 × 16-digit multipliers. Our realizations achieve an increase in speed by up to 22% and a reduction of LUT count by up to 14% over previously reported results.
Low Power Complex Multiplier based FFT Processor
V.Sarada
2015-08-01
Full Text Available High speed processing of signals has led to the requirement of very high speed conversion of signals from the time domain to the frequency domain. In recent years there has been increasing demand for low power designs in the field of digital signal processing. Power consumption is the most important aspect when considering system performance. In order to design and realize a high performance Fast Fourier Transform (FFT), an efficient internal structure is required. In this paper we present an FFT Single-Path Delay Feedback (SDF) pipeline architecture using the radix-2^4 algorithm, which reduces computation complexity. The complex multiplier is realized using a digit-slicing, multiplier-less architecture. The proposed design has been coded in Verilog HDL and synthesized with Cadence tools. The results demonstrate that power is reduced compared with complex multiplication using a CSD (Canonic Signed Digit) multiplier.
Lotka-Volterra system with Volterra multiplier.
Gürlebeck, Klaus; Ji, Xinhua
2011-01-01
With the aid of the Volterra multiplier, we study ecological equations for both tree systems and cycle systems. We obtain a set of sufficient conditions for the ultimate boundedness of nonautonomous n-dimensional Lotka-Volterra tree systems with continuous time delay. The criteria are applicable to the cooperative model, competition model, and predator-prey model. As to the cycle system, we consider a three-dimensional predator-prey Lotka-Volterra system. In order to get a condition under which the system is globally asymptotically stable, we obtain a Volterra multiplier such that, in a parameter region, the system is globally stable. We have also proved that in regions in which the condition is not satisfied, the system is unstable or at least not globally stable. Therefore, we say that the three-dimensional cycle system exhibits a global bifurcation.
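For context, the Volterra multiplier construction for a general Lotka-Volterra system (the standard textbook form, not the paper's specific delayed system) can be written as:

```latex
% Lotka-Volterra system
\dot{x}_i = x_i\Bigl(r_i + \sum_{j=1}^{n} a_{ij}\, x_j\Bigr), \qquad i = 1, \dots, n.
% Volterra's Lyapunov function, built from positive multipliers c_i at an
% interior equilibrium x^*:
V(x) = \sum_{i=1}^{n} c_i \left( x_i - x_i^{*} - x_i^{*} \ln \frac{x_i}{x_i^{*}} \right).
% V decreases along trajectories when C A + A^{\top} C is negative
% semidefinite, where A = (a_{ij}) and C = \operatorname{diag}(c_1,\dots,c_n);
% such a diagonal matrix C is the Volterra multiplier.
```

Finding the parameter region in which a valid `C` exists is what yields the global stability condition described in the abstract.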
Problem of Electromagnetoviscoelasticity for Multiply Connected Plates
Kaloerov, S. A.; Samodurov, A. A.
2015-11-01
A method for solving the problem of electromagnetoviscoelasticity for multiply connected plates is proposed. The small-parameter method is used to reduce this problem to a recursive sequence of problems of electromagnetoelasticity, which are solved by using complex potentials. A procedure is developed to determine, using complex potentials, approximations of the basic characteristics (stresses, electromagnetic-field strength, electromagnetic-flux density) of the electromagnetoelastic state at any time after application of a load. A plate with an elliptic hole is considered as an example. The variation in the electromagnetoelastic state of the multiply connected plate with time is studied.
An Imputation Model for Dropouts in Unemployment Data
Nilsson Petra
2016-09-01
Full Text Available Incomplete unemployment data is a fundamental problem when evaluating labour market policies in several countries. Many unemployment spells end for unknown reasons; in the Swedish Public Employment Service’s register as many as 20 percent. This leads to an ambiguity regarding destination states (employment, unemployment, retired, etc.). According to complete combined administrative data, the employment rate among dropouts was close to 50 percent for the years 1992 to 2006, but from 2007 the employment rate has dropped to 40 percent or less. This article explores an imputation approach. We investigate imputation models estimated both on survey data from 2005/2006 and on complete combined administrative data from 2005/2006 and 2011/2012. The models are evaluated in terms of their ability to make correct predictions. The models have relatively high predictive power.
Missing Data and Multiple Imputation: An Unbiased Approach
Foy, M.; VanBaalen, M.; Wear, M.; Mendez, C.; Mason, S.; Meyers, V.; Alexander, D.; Law, J.
2014-01-01
The default method of dealing with missing data in statistical analyses is to only use the complete observations (complete case analysis), which can lead to unexpected bias when data do not meet the assumption of missing completely at random (MCAR). For the assumption of MCAR to be met, missingness cannot be related to either the observed or unobserved variables. A less stringent assumption, missing at random (MAR), requires that missingness not be associated with the value of the missing variable itself, but can be associated with the other observed variables. When data are truly MAR as opposed to MCAR, the default complete case analysis method can lead to biased results. There are statistical options available to adjust for data that are MAR, including multiple imputation (MI) which is consistent and efficient at estimating effects. Multiple imputation uses informing variables to determine statistical distributions for each piece of missing data. Then multiple datasets are created by randomly drawing on the distributions for each piece of missing data. Since MI is efficient, only a limited number, usually less than 20, of imputed datasets are required to get stable estimates. Each imputed dataset is analyzed using standard statistical techniques, and then results are combined to get overall estimates of effect. A simulation study will be demonstrated to show the results of using the default complete case analysis, and MI in a linear regression of MCAR and MAR simulated data. Further, MI was successfully applied to the association study of CO2 levels and headaches when initial analysis showed there may be an underlying association between missing CO2 levels and reported headaches. Through MI, we were able to show that there is a strong association between average CO2 levels and the risk of headaches. Each unit increase in CO2 (mmHg) resulted in a doubling in the odds of reported headaches.
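The MCAR-versus-MAR contrast described above is easy to reproduce in a small simulation (a hedged sketch with made-up data, unrelated to the CO2 study): when missingness depends on an observed covariate, the complete-case mean is badly biased even though it remains fine under MCAR.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)              # fully observed covariate
y = x + rng.normal(size=n)          # outcome with true mean E[y] = 0

# MCAR: each y missing with probability 0.5, unrelated to anything
mcar_obs = rng.random(n) > 0.5
# MAR: y missing whenever the *observed* covariate x is positive
mar_obs = x <= 0

cc_mean_mcar = y[mcar_obs].mean()   # complete-case mean under MCAR
cc_mean_mar = y[mar_obs].mean()     # complete-case mean under MAR
```

Under MCAR the complete-case mean stays near the true value 0, while under MAR it collapses to roughly E[y | x <= 0] = -sqrt(2/pi), about -0.80; multiple imputation using `x` as an informing variable would remove that bias.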
Weigh-In-Motion Data Checking and Imputation
Wei, Ting; Fricker, Jon D.
2003-01-01
There are about 46 weigh-in-motion (WIM) stations in Indiana. When operating properly, they provide valuable information on traffic volumes, vehicle classifications, and axle weights. Because great amounts of WIM data are collected every day, the quality of these data should be monitored without delay. The first objective of this study is to develop effective and efficient methods to identify missing or erroneous WIM data. The second objective is to develop a data imputation method...
Scott, Paul
2009-01-01
These days, multiplying two numbers together is a breeze. One just enters the two numbers into one's calculator, presses a button, and there is the answer! It never used to be this easy. Generations of students struggled with tables of logarithms, and thought it was a miracle when the slide rule first appeared. In this article, the author discusses…
Delay Reduction in Optimized Reversible Multiplier Circuit
Mohammad Assarian
2012-01-01
Full Text Available In this study a novel reversible multiplier is presented. Reversible logic can play a significant role in the computing domain. This logic can be applied in quantum computing, optical computing, DNA computing, and nanotechnology. One condition for reversibility of a computational model is that the number of inputs equals the number of outputs. Reversible multiplier circuits are used frequently in computer systems. For this reason, optimization of a reversible multiplier circuit can reduce its hardware volume on the one hand and increase the speed of a reversible system on the other. One of the important parameters in optimizing a reversible circuit is the reduction of delays in the performance of the circuit. This paper investigates the performance characteristics of the gates, the circuits, and methods of optimizing the performance of reversible multiplier circuits. Results showed that reducing the number of reversible circuit layers leads to improved performance due to the reduction of the propagation delay between input and output. All the designs are in the nanometric scale.
Imputation of missing data in time series for air pollutants
Junger, W. L.; Ponce de Leon, A.
2015-02-01
Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method that is suitable for multivariate time series data, which uses the EM algorithm under the assumption of normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas the validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations obtained valid results, even under missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
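A stripped-down version of EM-style imputation under multivariate normality (not the mtsdi implementation, which also filters the temporal component) alternates between estimating the mean and covariance and replacing missing cells with their conditional means:

```python
import numpy as np

def em_impute(X, n_iter=50):
    """Impute missing entries (np.nan) assuming multivariate normality.

    A simplified EM-style scheme: estimate the mean and covariance from the
    current completed matrix, replace each row's missing block with its
    conditional expectation given the observed block, and repeat. (Full EM
    would also correct the covariance with the conditional variance of the
    missing block.)
    """
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # initial mean fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False)
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            S_oo = S[np.ix_(o, o)] + 1e-9 * np.eye(o.sum())  # ridge for stability
            S_mo = S[np.ix_(m, o)]
            X[i, m] = mu[m] + S_mo @ np.linalg.solve(S_oo, X[i, o] - mu[o])
    return X

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))
Z[:, 2] = Z[:, 0] + 0.1 * rng.normal(size=200)        # strongly correlated column
Z_miss = Z.copy()
Z_miss[rng.random(200) < 0.2, 2] = np.nan             # ~20% missing in one column
Z_hat = em_impute(Z_miss)
```

Because column 2 is nearly determined by column 0, the conditional-mean imputations land close to the deleted true values.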
Pipelined C2 Mos Register High Speed Modified Booth Multiplier
N.Ravi
2011-07-01
Full Text Available This paper presents a C2MOS-register Pipelined Modified Booth Multiplier (PMBM) that improves the speed of the multiplier by allowing parallel data flow. The pipeline registers are designed with two p-MOS and two n-MOS transistors in series, forming a C2MOS stage. A Wallace multiplier with carry-save addition is also used to improve speed. 16-transistor full adders are used for better performance of the multiplier. The PMBM is 28.51% faster than the Modified Booth Multiplier (MBM). This is calculated with TSMC 0.18 um technology using HSPICE.
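The modified Booth recoding at the heart of such a multiplier can be modeled behaviorally (a software sketch of the arithmetic only, not the C2MOS pipeline): the multiplier operand is recoded into radix-4 digits in {-2, -1, 0, 1, 2} from overlapping bit triplets, halving the number of partial products.

```python
def booth_radix4_multiply(a, b, bits=16):
    """Radix-4 (modified) Booth multiplication of signed integers.

    b is recoded into digits d_k = b_{2k} + b_{2k-1} - 2*b_{2k+1} taken from
    overlapping bit triplets of its two's-complement representation; the
    product is the sum of d_k * a * 4^k. bits must be even and b must fit
    in `bits` signed bits.
    """
    mask = (1 << bits) - 1
    ub = b & mask                                  # two's-complement bits of b
    product = 0
    prev = 0                                       # implicit bit b_{-1} = 0
    for i in range(0, bits, 2):
        triplet = ((ub >> i) & 0b11) << 1 | prev   # bits b_{i+1} b_i b_{i-1}
        digit = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
                 0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}[triplet]
        product += (digit * a) << i                # partial product, weight 4^(i/2)
        prev = (ub >> (i + 1)) & 1
    return product
```

Each loop iteration corresponds to one partial-product row of the hardware array; the carry-save Wallace tree then compresses those rows.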
Missing value imputation in multi-environment trials: Reconsidering the Krzanowski method
Sergio Arciniegas-Alarcón
2016-07-01
Full Text Available We propose a new methodology for multiple imputation when faced with missing data in multi-environment trials with genotype-by-environment interaction, based on the imputation system developed by Krzanowski that uses the singular value decomposition (SVD) of a matrix. Several different iterative variants are described; differential weights can also be included in each variant to represent the influence of different components of the SVD in the imputation process. The methods are compared through a simulation study based on three real data matrices that have values deleted randomly at different percentages, using as a measure of overall accuracy a combination of the variance between imputations and their mean square deviations relative to the deleted values. The best results are shown by two of the iterative schemes that use weights belonging to the interval [0.75, 1]. These schemes provide imputations that have higher quality when compared with other multiple imputation methods based on the Krzanowski method.
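The core of a Krzanowski-style SVD imputation can be sketched as an iterate-until-stable loop (a simplified variant without the differential weights discussed above; function name and rank choice are illustrative):

```python
import numpy as np

def svd_impute(X, rank=2, n_iter=100, tol=1e-8):
    """Iterative SVD imputation in the spirit of Krzanowski's scheme.

    Missing cells (np.nan) are seeded with column means; the matrix is then
    repeatedly approximated by its rank-`rank` SVD truncation and the missing
    cells are overwritten with that approximation until they stabilise.
    """
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])       # initial fill
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # truncated SVD
        delta = np.abs(X[miss] - low_rank[miss]).max()
        X[miss] = low_rank[miss]                          # update only missing cells
        if delta < tol:
            break
    return X

rng = np.random.default_rng(0)
G = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 8))    # exact rank-2 matrix
G_miss = G.copy()
G_miss[rng.random(G.shape) < 0.1] = np.nan                # ~10% cells missing
G_hat = svd_impute(G_miss)
```

On a genotype-by-environment table, the leading SVD components capture the interaction structure, so the converged low-rank fits make plausible imputations for the deleted cells.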
Methods and Strategies to Impute Missing Genotypes for Improving Genomic Prediction
Ma, Peipei
Genomic prediction has been widely used in dairy cattle breeding. Genotype imputation is a key procedure to efficiently utilize marker data from different chips and to obtain high density marker data while minimizing cost. This thesis investigated methods and strategies for genotype imputation to improve genomic prediction.
Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi
2016-06-21
Genotype imputation is an important tool for prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available and can either employ universal machine learning methods, or deploy algorithms dedicated to inferring missing genotypes. In this research the performance of eight machine learning methods: Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost was compared in terms of imputation accuracy, computation time and the factors affecting imputation accuracy. The methods were applied to real and simulated datasets to impute the un-typed SNPs in parent-offspring trios. The tested methods show that imputation of parent-offspring trios can be accurate. The Random Forest and Support Vector Machine were more accurate than the other machine learning methods. The TotalBoost performed slightly worse than the other methods. The running times differed between methods: the ELM was always the fastest algorithm, while with increasing sample size the RBF required a long imputation time. The tested methods can be an alternative for imputation of un-typed SNPs when the missing rate of the data is low. However, it is recommended that other machine learning methods also be explored for imputation.
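As a flavor of the K-Nearest Neighbors approach in this setting (a from-scratch toy, not any of the packages benchmarked above), a missing genotype can be imputed by majority vote among the individuals most similar on the jointly observed SNPs; the 0/1/2 genotype coding and the helper name are assumptions for this example:

```python
import numpy as np

def knn_impute_genotypes(G, k=3):
    """Impute missing genotypes (coded 0/1/2, missing = -1) by k-NN vote.

    For each individual with a missing SNP, the k individuals closest on
    the jointly observed SNPs (Hamming-style distance) vote, and the most
    common genotype among the voters is imputed.
    """
    G = np.array(G, dtype=int)
    out = G.copy()
    n, p = G.shape
    for i in range(n):
        for j in range(p):
            if G[i, j] != -1:
                continue
            dists = []
            for t in range(n):
                if t == i or G[t, j] == -1:
                    continue
                both = (G[i] != -1) & (G[t] != -1)   # jointly observed SNPs
                if not both.any():
                    continue
                d = np.mean(G[i, both] != G[t, both])
                dists.append((d, G[t, j]))
            dists.sort(key=lambda z: z[0])
            votes = [g for _, g in dists[:k]]
            out[i, j] = np.bincount(votes).argmax() if votes else 0
    return out

G = np.array([
    [0, 1, 2, 0, -1],   # individual 0 is missing the last SNP
    [0, 1, 2, 0, 2],
    [0, 1, 2, 1, 2],
    [2, 0, 0, 2, 0],
    [2, 0, 0, 2, 0],
])
G_hat = knn_impute_genotypes(G)
```

Individual 0's nearest neighbors carry genotype 2 at the missing SNP, so 2 is imputed; production tools add distance weighting, trio phasing and linkage information on top of this idea.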
Imputation and quality control steps for combining multiple genome-wide datasets
Shefali S Verma
2014-12-01
The electronic MEdical Records and GEnomics (eMERGE) network brings together DNA biobanks linked to electronic health records (EHRs) from multiple institutions. Approximately 52,000 DNA samples from distinct individuals have been genotyped using genome-wide SNP arrays across the nine sites of the network. The eMERGE Coordinating Center and the Genomics Workgroup developed a pipeline to impute and merge genomic data across the different SNP arrays to maximize sample size and power to detect associations with a variety of clinical endpoints. The 1000 Genomes cosmopolitan reference panel was used for imputation. Imputation results were evaluated using the following metrics: accuracy of imputation, allelic R2 (the estimated correlation between the imputed and true genotypes), and the relationship between allelic R2 and minor allele frequency. Computation time and memory resources required by two different software packages (BEAGLE and IMPUTE2) were also evaluated. A number of challenges were encountered due to the complexity of using two different imputation software packages, multiple ancestral populations, and many different genotyping platforms. We present lessons learned and describe the pipeline implemented here to impute and merge genomic data sets. The eMERGE imputed dataset will serve as a valuable resource for discovery, leveraging the clinical data that can be mined from the EHR.
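One of the evaluation metrics named above, allelic R², is the squared Pearson correlation between imputed dosages and true genotypes. A small sketch (function and variable names assumed, not from the eMERGE pipeline):

```python
# Allelic R²: squared Pearson correlation between true genotypes (0/1/2)
# and imputed dosages (continuous in [0, 2]).

def allelic_r2(true_genotypes, imputed_dosages):
    n = len(true_genotypes)
    mt = sum(true_genotypes) / n
    md = sum(imputed_dosages) / n
    cov = sum((t - mt) * (d - md) for t, d in zip(true_genotypes, imputed_dosages))
    vt = sum((t - mt) ** 2 for t in true_genotypes)
    vd = sum((d - md) ** 2 for d in imputed_dosages)
    return cov * cov / (vt * vd)

# perfect linear agreement gives R² = 1
r2 = allelic_r2([0, 1, 2, 2], [0.0, 1.0, 2.0, 2.0])  # → 1.0
```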
Noncommutative Figa-Talamanca-Herz algebras for Schur multipliers
Arhancet, Cédric
2009-01-01
We introduce a noncommutative analogue of the Figà-Talamanca-Herz algebra $A_p(G)$ on the natural predual of the operator space $\frak{M}_{p,cb}$ of completely bounded Schur multipliers on the Schatten space $S_p$. We determine the isometric Schur multipliers and prove that the space $\frak{M}_{p}$ of bounded Schur multipliers on the Schatten space $S_p$ is the closure, in the weak operator topology, of the span of the isometric multipliers.
Design of a High Speed Multiplier (Ancient Vedic Mathematics Approach)
R. Sridevi, Anirudh Palakurthi, Akhila Sadhula, Hafsa Mahreen
2013-07-01
In this paper, an area efficient multiplier architecture is presented. The architecture is based on Ancient algorithms of the Vedas, propounded in the Vedic Mathematics scripture of Sri Bharati Krishna Tirthaji Maharaja. The multiplication algorithm used here is called Nikhilam Navatascaramam Dasatah. The multiplier based on the ancient technique is compared with the modern multiplier to highlight the speed and power superiority of the Vedic Multipliers.
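The Nikhilam Navatascaramam Dasatah ("all from 9 and the last from 10") algorithm named above multiplies numbers close to a power-of-ten base via their deficits from that base. A sketch of the basic idea (simplified: it assumes the deficit product stays below the base, so no carry step is needed):

```python
# Nikhilam sutra for operands near a base (e.g. 100): work with the deficits
# rather than the full operands, so the "hard" multiplication shrinks to a
# product of two small numbers.

def nikhilam(x, y, base=100):
    dx, dy = base - x, base - y   # deficits from the base
    left = x - dy                 # cross-subtraction (equals y - dx)
    right = dx * dy               # product of the deficits
    return left * base + right    # assumes dx * dy < base (no carry handling)

print(nikhilam(97, 96))  # → 9312, since 97 * 96 = 9312
```

For 97 × 96 the deficits are 3 and 4: the left part is 97 − 4 = 93 and the right part is 3 × 4 = 12, giving 9312.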
Design of a High Performance Reversible Multiplier
Md.Belayet Ali
2011-11-01
Reversible logic circuits are increasingly used for power minimization, with applications such as low-power CMOS design, optical information processing, DNA computing, bioinformatics, quantum computing, and nanotechnology. Minimizing the number of garbage outputs is an important issue in reversible logic design. In this paper we propose a new 4x4 universal reversible logic gate. The proposed reversible gate can be used to synthesize any given Boolean function and can also serve as a full adder circuit. We use the Peres gate and the proposed Modified HNG (MHNG) gate to construct a reversible fault-tolerant multiplier circuit. We show that the proposed 4x4 reversible multiplier circuit has lower hardware complexity and is better optimized in terms of the number of reversible gates and the number of garbage outputs compared with existing counterparts.
ALU Using Area Optimized Vedic Multiplier
Anshul Khare
2014-07-01
The load on general-purpose processors is increasing, and fast arithmetic operations are of extreme importance in the arithmetic unit, whose performance depends greatly on its multipliers. Researchers therefore continuously search for new approaches and hardware to implement arithmetic operations more efficiently in terms of speed and area. Vedic Mathematics is an old system of mathematics with distinct calculation techniques based on a total of 16 sutras. The proposed work discusses the Urdhva Triyakbhyam Vedic approach to multiplication, which differs from the conventional multiplication process: it allows parallel generation of the partial products, eliminates undesired multiplication steps with zeros, and is mapped to higher bit widths using the Karatsuba technique, giving compatibility with various data types. Since the conventional adders needed to accumulate the partial products introduce considerable delay, the Urdhva Triyakbhyam Vedic multiplier is further optimized by replacing the traditional adder with a carry-save adder. The proposed design shows improved speed compared with traditional designs. The proposed efficient Vedic multiplier is then used to implement an arithmetic unit; it is useful not only for improving the efficiency of the arithmetic module of an ALU but also in the area of digital signal processing. The RTL entry of the proposed arithmetic unit was done in VHDL, and it was synthesized and simulated with the Xilinx ISE EDA tool. Finally, the proposed arithmetic unit was validated on a Virtex-IV FPGA device.
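The Urdhva Triyakbhyam ("vertically and crosswise") pattern mentioned above forms each column of the product as a sum of cross products, which is what lets hardware generate all partial products in parallel. A software sketch of the pattern on base-10 digit lists (illustrative only; the paper's design is in VHDL):

```python
# Urdhva Triyakbhyam: column k of the product collects all cross products
# a[i] * b[k - i]; carries are resolved afterwards, column by column.

def urdhva(a_digits, b_digits):
    """Multiply two little-endian digit lists, base 10."""
    n, m = len(a_digits), len(b_digits)
    cols = [0] * (n + m)
    for k in range(n + m - 1):
        cols[k] = sum(a_digits[i] * b_digits[k - i]
                      for i in range(n) if 0 <= k - i < m)
    carry = 0
    out = []
    for c in cols:                      # resolve carries column by column
        carry, digit = divmod(c + carry, 10)
        out.append(digit)
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

# 123 * 45 = 5535; digit lists are little-endian
print(urdhva([3, 2, 1], [5, 4]))  # → [5, 3, 5, 5]
```

In hardware, all the column sums are formed concurrently and only the carry-resolution stage is sequential, which is why the choice of final adder (here, the carry-save adder) matters.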
Multiply manifolded molten carbonate fuel cells
Krumpelt, M.; Roche, M.F.; Geyer, H.K.; Johnson, S.A.
1994-08-01
This study consists of research and development activities related to the concept of a molten carbonate fuel cell (MCFC) with multiple manifolds. The objective is to develop an MCFC having a higher power density and a longer life than other MCFC designs. The higher power density will result from thinner gas flow channels; the extended life will result from reduced temperature gradients. Simplification of the gas flow channels and current collectors may also significantly reduce the cost of the multiply manifolded MCFC.
Multiplier-free filters for wideband SAR
Dall, Jørgen; Christensen, Erik Lintz
2001-01-01
This paper derives a set of parameters to be optimized when designing filters for digital demodulation and range prefiltering in SAR systems. Aiming at an implementation in field-programmable gate arrays (FPGAs), an approach for the design of multiplier-free filters is outlined. Design results are presented in terms of filter complexity and performance. One filter has been coded in VHDL, and preliminary results indicate that the filter can meet a 2 GHz input sample rate.
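A "multiplier-free" filter restricts each coefficient to a sum of signed powers of two, so every product reduces to shifts and adds. A minimal software sketch of that property (illustrative only, not the paper's VHDL; the coefficient encoding is invented):

```python
# Multiplier-free FIR tap: each coefficient h[i] is encoded as a list of
# (sign, shift) pairs, i.e. h[i] = sum(sign * 2**shift), so the inner loop
# uses only shift-and-add, never a hardware multiplier.

def mulfree_fir(x, shift_sets):
    y = []
    for n in range(len(x)):
        acc = 0
        for i, terms in enumerate(shift_sets):
            if n - i < 0:
                continue
            for sign, sh in terms:
                acc += sign * (x[n - i] << sh)   # shift-and-add, no multiply
        y.append(acc)
    return y

# coefficients h = [3, -2] encoded as 3 = 2^1 + 2^0 and -2 = -2^1
h = [[(+1, 1), (+1, 0)], [(-1, 1)]]
print(mulfree_fir([1, 2, 3], h))  # → [3, 4, 5]
```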
Automobile Industry Retail Price Equivalent and Indirect Cost Multipliers
This report develops a modified multiplier, referred to as an indirect cost (IC) multiplier, which specifically evaluates the components of indirect costs that are likely to be affected by vehicle modifications associated with environmental regulation. A range of IC multipliers a...
Fu, Yong-Bi
2014-03-13
Genotyping by sequencing (GBS) has recently emerged as a promising genomic approach for assessing genetic diversity on a genome-wide scale. However, concerns remain about the unusually large imbalance in GBS genotype data. Although genotype imputation methods have been proposed to infer missing observations, little is known about the reliability of a genetic diversity analysis of GBS data with up to 90% of observations missing. Here we performed an empirical assessment of accuracy in genetic diversity analysis of highly incomplete single nucleotide polymorphism genotypes with imputations. Three large single-nucleotide polymorphism genotype data sets for corn, wheat, and rice were acquired, and data sets with up to 90% missing observations were randomly generated and then imputed with three map-independent imputation methods. Estimating heterozygosity and the inbreeding coefficient from original, missing, and imputed data revealed variable patterns of bias across the assessed levels of missingness and genotype imputation, but the estimation biases were smaller for missing data without genotype imputation. The estimates of genetic differentiation were rather robust up to 90% of missing observations but became substantially biased when missing genotypes were imputed. The estimates of topology accuracy for four representative samples of the groups of interest generally declined with increased levels of missing genotypes. Probabilistic principal component analysis-based imputation performed better in terms of topology accuracy than analyses of missing data without genotype imputation. These findings are not only significant for understanding the reliability of genetic diversity analysis with respect to large missing data and genotype imputation but are also instructive for performing a proper genetic diversity analysis of highly incomplete GBS or other genotype data.
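The two diversity statistics estimated above follow standard formulas: observed heterozygosity Ho (the fraction of heterozygotes), expected heterozygosity He = 2p(1 − p) from the allele frequency p, and inbreeding coefficient F = 1 − Ho/He. A sketch per SNP (variable names assumed):

```python
# Per-SNP diversity statistics from 0/1/2 allele-count genotypes
# (None = missing observation, simply skipped here).

def diversity_stats(genotypes):
    obs = [g for g in genotypes if g is not None]
    n = len(obs)
    ho = sum(1 for g in obs if g == 1) / n   # observed heterozygosity
    p = sum(obs) / (2 * n)                   # minor/major allele frequency
    he = 2 * p * (1 - p)                     # expected heterozygosity
    f = 1 - ho / he if he > 0 else 0.0       # inbreeding coefficient
    return ho, he, f

ho, he, f = diversity_stats([0, 1, 1, 2])
# p = 0.5, so He = 0.5; Ho = 0.5 as well, hence F = 0
```

Biased imputation of the missing entries shifts p and Ho, which is how the biases reported above propagate into these estimates.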
Imputation strategies for missing binary outcomes in cluster randomized trials
Akhtar-Danesh Noori
2011-02-01
Abstract Background Attrition, which leads to missing data, is a common problem in cluster randomized trials (CRTs), where groups of patients rather than individuals are randomized. Standard multiple imputation (MI) strategies may not be appropriate for imputing missing data from CRTs, since they assume independent data. In this paper, under the assumptions of missing completely at random and covariate-dependent missingness, we compared six MI strategies that account for the intra-cluster correlation of missing binary outcomes in CRTs with the standard imputation strategies and the complete case analysis approach, using a simulation study. Method We considered three within-cluster and three across-cluster MI strategies for missing binary outcomes in CRTs. The three within-cluster MI strategies are the logistic regression method, the propensity score method, and the Markov chain Monte Carlo (MCMC) method, which apply standard MI strategies within each cluster. The three across-cluster MI strategies are the propensity score method, the random-effects (RE) logistic regression approach, and logistic regression with cluster as a fixed effect. Based on the community hypertension assessment trial (CHAT), which has complete data, we designed a simulation study to investigate the performance of the above MI strategies. Results The estimated treatment effect and its 95% confidence interval (CI) from a generalized estimating equations (GEE) model based on the CHAT complete dataset are 1.14 (0.76, 1.70). When 30% of the binary outcomes are missing completely at random, the simulation study shows that the estimated treatment effects and the corresponding 95% CIs from the GEE model are 1.15 (0.76, 1.75) if complete case analysis is used, 1.12 (0.72, 1.73) if the within-cluster MCMC method is used, 1.21 (0.80, 1.81) if across-cluster RE logistic regression is used, and 1.16 (0.82, 1.64) if standard logistic regression, which does not account for clustering, is used. Conclusion When the percentage of missing data is low or intra
Ma, Peipei; Brøndum, Rasmus Froberg; Qin, Zhang
2013-01-01
This study investigated the imputation accuracy of different methods, considering both the minor allele frequency and relatedness between individuals in the reference and test data sets. Two data sets from the combined population of Swedish and Finnish Red Cattle were used to test the influence of these factors on the accuracy of imputation. Data set 1 consisted of 2,931 reference bulls and 971 test bulls, and was used for validation of imputation from 3,000 markers (3K) to 54,000 markers (54K). Data set 2 contained 341 bulls in the reference set and 117 in the test set, and was used for validation of imputation from 54K to high density [777,000 markers (777K)]. Both test sets were divided into 4 groups according to their relationship to the reference population. Five imputation methods (Beagle, IMPUTE2, findhap, AlphaImpute, and FImpute) were used in this study. Imputation accuracy was measured...
Systolic multipliers for finite fields GF(2^m)
Yeh, C.-S.; Reed, I. S.; Truong, T. K.
1984-01-01
Two systolic architectures are developed for performing the product-sum computation AB + C in the finite field GF(2^m) of 2^m elements, where A, B, and C are arbitrary elements of GF(2^m). The first multiplier is a serial-in, serial-out one-dimensional systolic array, while the second multiplier is a parallel-in, parallel-out two-dimensional systolic array. The first multiplier requires a smaller number of basic cells than the second multiplier. The second multiplier needs less average time per computation than the first multiplier, if a number of computations are performed consecutively. To perform single computations both multipliers require the same computational time. In both cases the architectures are simple and regular and possess the properties of concurrency and modularity. As a consequence, they are well suited for use in VLSI systems.
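The product-sum AB + C that the systolic arrays compute can be sketched in software as a carry-less multiplication followed by reduction modulo an irreducible polynomial; addition in GF(2^m) is XOR. The example below uses GF(2^8) with the AES field polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) for concreteness (the paper's arrays are general in m):

```python
# Multiplication in GF(2^m): shift-and-XOR for the carry-less product,
# then reduce the high bits with the field polynomial.

def gf_mul(a, b, m=8, poly=0x11B):
    result = 0
    for i in range(m):
        if (b >> i) & 1:
            result ^= a << i           # XOR replaces addition of partial products
    for i in range(2 * m - 2, m - 1, -1):
        if (result >> i) & 1:
            result ^= poly << (i - m)  # reduce modulo the field polynomial
    return result

def gf_mul_add(a, b, c):
    """The product-sum AB + C computed by the systolic arrays."""
    return gf_mul(a, b) ^ c

print(hex(gf_mul(0x53, 0xCA)))  # → 0x1 (0x53 and 0xCA are inverses in the AES field)
```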
Consequences of Splitting Sequencing Effort over Multiple Breeds on Imputation Accuracy
Bouwman, A.C.; Veerkamp, R.F.
2014-01-01
Imputation from a high-density SNP panel (777K) to whole-genome sequence with a reference population of 20 Holsteins resulted in an average imputation accuracy of 0.70, which increased to 0.83 when the reference population was enlarged by including 3 other dairy breeds with 20 animals each. When the
Taking don't knows as valid responses: a multiple complete random imputation of missing data
Kroh, Martin
2006-01-01
Incomplete data is a common problem in survey research. Recent work on multiple imputation techniques has increased analysts' awareness of the biasing effects of missing data and has also provided a convenient solution. Imputation methods replace non-response with estimates of the unobserved scores.
A Method for Imputing Response Options for Missing Data on Multiple-Choice Assessments
Wolkowitz, Amanda A.; Skorupski, William P.
2013-01-01
When missing values are present in item response data, there are a number of ways one might impute a correct or incorrect response to a multiple-choice item. There are significantly fewer methods for imputing the actual response option an examinee may have provided if he or she had not omitted the item either purposely or accidentally. This…
Estimation of missing rainfall data using spatial interpolation and imputation methods
Radi, Noor Fadhilah Ahmad; Zakaria, Roslinazairimah; Azman, Muhammad Az-zuhri
2015-02-01
This study aims to estimate missing rainfall data by dividing the analysis into three different percentages, namely 5%, 10% and 20%, in order to represent various cases of missing data. In practice, spatial interpolation methods are chosen in the first place to estimate missing data. These methods include the normal ratio (NR), arithmetic average (AA), coefficient of correlation (CC) and inverse distance (ID) weighting methods. The methods consider the distance between the target and the neighbouring stations as well as the correlations between them. An alternative approach to handling missing data is imputation, the process of replacing missing data with substituted values. A once-common method is single imputation, which allows parameter estimation. However, single imputation ignores the estimation of variability, which leads to underestimation of standard errors and confidence intervals. To overcome this underestimation problem, multiple imputation is used, where each missing value is estimated with a distribution of imputations that reflects the uncertainty about the missing data. In this study, a comparison of spatial interpolation methods and multiple imputation is presented for estimating missing rainfall data. The performance of the estimation methods is assessed using the similarity index (S-index), mean absolute error (MAE) and coefficient of correlation (R).
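Of the spatial interpolation methods listed, the inverse distance (ID) weighting estimator is the simplest to sketch: the missing value at the target station is a distance-weighted average of its neighbours. A minimal illustration (station coordinates and rainfall values are invented):

```python
# Inverse distance weighting: weight each neighbouring station by
# 1 / distance**power, then take the weighted average of its rainfall.

def idw_estimate(target, stations, power=2):
    """stations: list of ((x, y), rainfall) for neighbours with observed data."""
    num = den = 0.0
    for (x, y), value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        w = 1.0 / d2 ** (power / 2)   # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den

neighbours = [((0, 1), 10.0), ((1, 0), 10.0), ((0, -2), 16.0)]
estimate = idw_estimate((0, 0), neighbours)
```

The two nearby stations dominate the estimate, while the wetter but more distant station contributes only a quarter of their weight.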
Whole-Genome Sequencing Coupled to Imputation Discovers Genetic Signals for Anthropometric Traits
Tachmazidou, Ioanna; Süveges, Dániel; Min, Josine L
2017-01-01
Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader alleli...
Li Li
Genotype imputation has the potential to assess human genetic variation at a lower cost than assaying the variants using laboratory techniques. The performance of imputation for rare variants has not been comprehensively studied. We utilized 8865 human samples with high-depth resequencing data for the exons and flanking regions of 202 genes and Genome-Wide Association Study (GWAS) data to characterize the performance of genotype imputation for rare variants. We evaluated reference sets ranging from 100 to 3713 subjects for imputing into samples typed on the Affymetrix (500K and 6.0) and Illumina 550K GWAS panels. The proportion of variants that could be well imputed (true r² > 0.7) with a reference panel of 3713 individuals was: 31% (Illumina 550K) or 25% (Affymetrix 500K) with minor allele frequency (MAF) ≤ 0.001, and 48% or 35% with 0.0010.05. The performance for common SNPs (MAF > 0.05) within exons and flanking regions is comparable to imputation of more uniformly distributed SNPs. The performance for rare SNPs (0.01
Recursive partitioning for missing data imputation in the presence of interaction effects
Doove, L. L.; Van Buuren, S.; Dusseldorp, E.
2014-01-01
Standard approaches to implement multiple imputation do not automatically incorporate nonlinear relations like interaction effects. This leads to biased parameter estimates when interactions are present in a dataset. With the aim of providing an imputation method which preserves interactions in the
48 CFR 1830.7002-4 - Determining imputed cost of money.
2010-10-01
... money. 1830.7002-4 Section 1830.7002-4 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND... Determining imputed cost of money. (a) Determine the imputed cost of money for an asset under construction, fabrication, or development by applying a cost of money rate (see 1830.7002-2) to the...
5 CFR 919.630 - May the OPM impute conduct of one person to another?
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false May the OPM impute conduct of one person to another? 919.630 Section 919.630 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT...) General Principles Relating to Suspension and Debarment Actions § 919.630 May the OPM impute conduct...
Evaluation of Multi-parameter Test Statistics for Multiple Imputation.
Liu, Yu; Enders, Craig K
2017-01-01
In Ordinary Least Squares regression, researchers are often interested in knowing whether a set of parameters is different from zero. With complete data, this could be achieved using the gain in prediction test, hierarchical multiple regression, or an omnibus F test. However, in substantive research scenarios, missing data often exist. In the context of multiple imputation, one of the current state-of-the-art missing data strategies, there are several analogous multi-parameter tests of the joint significance of a set of parameters, and these multi-parameter test statistics can be referenced to various distributions to make statistical inferences. However, little is known about the performance of these tests, and virtually no research study has compared the Type 1 error rates and statistical power of these tests in scenarios that are typical of behavioral science data (e.g., small to moderate samples). This paper uses Monte Carlo simulation techniques to examine the performance of these multi-parameter test statistics for multiple imputation under a variety of realistic conditions. We provide a number of practical recommendations for substantive researchers based on the simulation results, and illustrate the calculation of these test statistics with an empirical example.
Imputation of KIR Types from SNP Variation Data.
Vukcevic, Damjan; Traherne, James A; Næss, Sigrid; Ellinghaus, Eva; Kamatani, Yoichiro; Dilthey, Alexander; Lathrop, Mark; Karlsen, Tom H; Franke, Andre; Moffatt, Miriam; Cookson, William; Trowsdale, John; McVean, Gil; Sawcer, Stephen; Leslie, Stephen
2015-10-01
Large population studies of immune system genes are essential for characterizing their role in diseases, including autoimmune conditions. Of key interest are a group of genes encoding the killer cell immunoglobulin-like receptors (KIRs), which have known and hypothesized roles in autoimmune diseases, resistance to viruses, reproductive conditions, and cancer. These genes are highly polymorphic, which makes typing expensive and time consuming. Consequently, despite their importance, KIRs have been little studied in large cohorts. Statistical imputation methods developed for other complex loci (e.g., human leukocyte antigen [HLA]) on the basis of SNP data provide an inexpensive high-throughput alternative to direct laboratory typing of these loci and have enabled important findings and insights for many diseases. We present KIR∗IMP, a method for imputation of KIR copy number. We show that KIR∗IMP is highly accurate and thus allows the study of KIRs in large cohorts and enables detailed investigation of the role of KIRs in human disease.
[Imputing missing data in public health: general concepts and application to dichotomous variables].
Hernández, Gilma; Moriña, David; Navarro, Albert
The presence of missing data in collected variables is common in health surveys, but the subsequent imputation thereof at the time of analysis is not. Working with imputed data may have certain benefits regarding the precision of the estimators and the unbiased identification of associations between variables. The imputation process is probably still little understood by many non-statisticians, who view it as highly complex and with an uncertain goal. To clarify these questions, this note aims to provide a straightforward, non-exhaustive overview of the imputation process to enable public health researchers to ascertain its strengths, all in the context of dichotomous variables, which are commonplace in public health. To illustrate these concepts, an example in which missing data are handled by means of simple and multiple imputation is introduced. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Wenzhi Li
2016-09-01
The data presented in this article are related to the research article entitled "High-accuracy haplotype imputation using unphased genotype data as the references", which reports that unphased genotype data can be used as the reference for haplotype imputation [1]. This article reports the generation pipelines of the different implementations, the results of a performance comparison between the implementations (A, B, and C), and a comparison between HiFi and three major imputation software tools. Our data show that the performance of the three implementations is similar in accuracy, with implementation B slightly but consistently more accurate than A and C. HiFi performed better on haplotype imputation accuracy, and the three other software tools performed slightly better on genotype imputation accuracy. These data may provide a strategy for choosing an optimal phasing pipeline and software for different studies.
A study of gas electron multiplier
AN Shao-Hui; LI Cheng; ZHOU Yi; XU Zi-Zong
2004-01-01
A new kind of gas detector based on the gas electron multiplier (GEM) is studied for high-luminosity X-ray imaging. A single-GEM device is designed to test the properties of the GEM foil. The effective gain and counting capability of a double-GEM detector are measured with an X-ray tube with a Cu target. An initial X-ray imaging experiment is carried out using a triple-GEM detector, and a position resolution of better than 0.1 mm is achieved. The 3D distribution of the electrostatic field of the GEM mesh is also presented.
VLSI binary multiplier using residue number systems
Barsi, F.; Di Cola, A.
1982-01-01
The idea of performing multiplication of n-bit binary numbers using hardware based on residue number systems is considered. This paper develops the design of a VLSI chip, deriving area and time upper bounds for an n-bit multiplier. To perform multiplication using residue arithmetic, numbers are converted from binary to residue representation and, after residue multiplication, the result is reconverted to the original notation. It is shown that the proposed design requires an area A = O(n² log n) and an execution time T = O(log² n).
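The convert–multiply–reconvert scheme described above can be sketched in a few lines: residues are taken modulo pairwise-coprime moduli, multiplied componentwise, and the product is recovered with the Chinese Remainder Theorem (the moduli below are illustrative, not the paper's hardware choice; `pow(x, -1, m)` needs Python 3.8+):

```python
# Residue number system multiplication: componentwise products of residues,
# then CRT reconstruction. Valid while the true product stays below the
# dynamic range (the product of the moduli).

MODULI = (7, 11, 13, 15)   # pairwise coprime; dynamic range = 7*11*13*15 = 15015

def to_residues(x):
    return tuple(x % m for m in MODULI)

def residue_mul(xs, ys):
    return tuple((a * b) % m for a, b, m in zip(xs, ys, MODULI))

def from_residues(rs):
    M = 1
    for m in MODULI:
        M *= m
    x = 0
    for r, m in zip(rs, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

a, b = 93, 57
product = from_residues(residue_mul(to_residues(a), to_residues(b)))
print(product)  # → 5301, valid because 93*57 < 15015
```

The hardware appeal is that each residue channel is a small independent multiplier, so the channels run in parallel with no carries between them.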
Combining multiple imputation and meta-analysis with individual participant data.
Burgess, Stephen; White, Ian R; Resche-Rigon, Matthieu; Wood, Angela M
2013-11-20
Multiple imputation is a strategy for the analysis of incomplete data such that the impact of the missingness on the power and bias of estimates is mitigated. When data from multiple studies are collated, we can propose both within-study and multilevel imputation models to impute missing data on covariates. It is not clear how to choose between imputation models or how to combine imputation and inverse-variance weighted meta-analysis methods. This is especially important as often different studies measure data on different variables, meaning that we may need to impute data on a variable which is systematically missing in a particular study. In this paper, we consider a simulation analysis of sporadically missing data in a single covariate with a linear analysis model and discuss how the results would be applicable to the case of systematically missing data. We find in this context that ensuring the congeniality of the imputation and analysis models is important to give correct standard errors and confidence intervals. For example, if the analysis model allows between-study heterogeneity of a parameter, then we should incorporate this heterogeneity into the imputation model to maintain the congeniality of the two models. In an inverse-variance weighted meta-analysis, we should impute missing data and apply Rubin's rules at the study level prior to meta-analysis, rather than meta-analyzing each of the multiple imputations and then combining the meta-analysis estimates using Rubin's rules. We illustrate the results using data from the Emerging Risk Factors Collaboration.
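The study-level pooling the paper recommends applies Rubin's rules to the m within-study estimates before meta-analysis: the pooled estimate is the mean, and the total variance adds the within-imputation and between-imputation components. A minimal sketch (function name and inputs assumed):

```python
# Rubin's rules for m imputed analyses: pooled point estimate Q̄ and total
# variance T = W̄ + (1 + 1/m) * B, where W̄ is the mean within-imputation
# variance and B the between-imputation variance of the estimates.

def rubins_rules(estimates, variances):
    m = len(estimates)
    q_bar = sum(estimates) / m                               # pooled estimate
    w_bar = sum(variances) / m                               # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    total_var = w_bar + (1 + 1 / m) * b
    return q_bar, total_var

est, var = rubins_rules([0.9, 1.1, 1.0], [0.04, 0.05, 0.04])
# est = 1.0; total variance = mean(W) + (1 + 1/3) * B
```

Per the paper's finding, this pooling happens within each study first; the pooled study-level results then enter the inverse-variance weighted meta-analysis.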
Rhinoplasty for the multiply revised nose.
Foda, Hossam M T
2005-01-01
To evaluate the problems encountered in revising a multiply operated nose and the methods used in correcting such problems. The study included 50 cases presenting for revision rhinoplasty after 2 or more previous rhinoplasties. An external rhinoplasty approach was used in all cases. Simultaneous septal surgery was done whenever indicated. All cases were followed for a mean period of 32 months (range, 1.5-8 years). Evaluation of the surgical result depended on clinical examination, comparison of pre- and postoperative photographs, and the degree of patients' satisfaction with their aesthetic and functional outcome. Functionally, 68% suffered nasal obstruction, mainly caused by septal deviations and nasal valve problems. Aesthetically, the most common deformities of the upper two thirds of the nose included pollybeak (64%), dorsal irregularities (54%), dorsal saddle (44%), and open roof deformity (42%), whereas the deformities of the lower third included depressed tip (68%), tip contour irregularities (60%), and overrotated tip (42%). Nasal grafting was necessary in all cases; usually more than 1 type of graft was used in each case. Postoperatively, 79% of the patients with preoperative nasal obstruction reported improved breathing; 84% were satisfied with their aesthetic result; and only 8 cases (16%) requested further revision to correct minor deformities. Revision of a multiply operated nose is a complex and technically demanding task; yet, in a good percentage of cases, aesthetic as well as functional improvement is still possible.
A Comparative Performance Analysis of Low Power Bypassing Array Multipliers
Nirlakalla Ravi
2013-07-01
Low-power design of VLSI circuits has been identified as a vital technology for battery-powered portable electronic devices and signal processing applications such as digital signal processors (DSPs). The multiplier has an important role in DSPs, so low-power parallel multipliers that do not degrade processor performance need to be designed. Bypassing is a widely used technique in DSPs when an input operand of the multiplier is zero. A row-based bypassing multiplier with a compressor at the final addition of the ripple carry adder (RCA) is designed, focusing on low power and high speed. The proposed bypassing multiplier with compressor shows higher performance and energy efficiency than the Kuo multiplier with a carry save adder (CSA) at the final RCA.
An Overview and Evaluation of Recent Machine Learning Imputation Methods Using Cardiac Imaging Data.
Liu, Yuzhe; Gopalakrishnan, Vanathi
2017-03-01
Many clinical research datasets have a large percentage of missing values that directly impacts their usefulness in yielding high accuracy classifiers when used for training in supervised machine learning. While missing value imputation methods have been shown to work well with smaller percentages of missing values, their ability to impute sparse clinical research data can be problem specific. We previously attempted to learn quantitative guidelines for ordering cardiac magnetic resonance imaging during the evaluation for pediatric cardiomyopathy, but missing data significantly reduced our usable sample size. In this work, we sought to determine if increasing the usable sample size through imputation would allow us to learn better guidelines. We first review several machine learning methods for estimating missing data. Then, we apply four popular methods (mean imputation, decision tree, k-nearest neighbors, and self-organizing maps) to a clinical research dataset of pediatric patients undergoing evaluation for cardiomyopathy. Using Bayesian Rule Learning (BRL) to learn ruleset models, we compared the performance of imputation-augmented models versus unaugmented models. We found that all four imputation-augmented models performed similarly to unaugmented models. While imputation did not improve performance, it did provide evidence for the robustness of our learned models.
On multipliers of Fourier series in the Lorentz space
Ydyrys, Aizhan Zh.; Tleukhanova, Nazerke T.
2016-08-01
We study multipliers of Fourier series on the Lorentz spaces; in particular, sufficient conditions for a sequence of complex numbers {λ_k}, k ∈ Z, to be a multiplier of trigonometric Fourier series from the space L^{p,r}[0,1] to L^{q,r}[0,1]. The paper gives a new multiplier theorem that supplements the well-known theorems, together with a counterexample.
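For orientation, the standard definition behind the abstract can be written out (notation follows the abstract; this is the textbook definition, not the paper's new theorem):

```latex
% \{\lambda_k\}_{k \in \mathbb{Z}} is a multiplier from L^{p,r}[0,1] to
% L^{q,r}[0,1] when the operator T_\lambda below is bounded between them:
T_\lambda \colon \sum_{k \in \mathbb{Z}} c_k e^{2\pi i k x}
  \;\longmapsto\; \sum_{k \in \mathbb{Z}} \lambda_k c_k e^{2\pi i k x},
\qquad
\| T_\lambda f \|_{L^{q,r}[0,1]} \le C \, \| f \|_{L^{p,r}[0,1]} .
```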
Implementation of MAC by using Modified Vedic Multiplier
2013-01-01
The multiplier-accumulator unit (MAC) is a part of digital signal processors. The speed of the MAC depends on the speed of its multiplier, so using an efficient Vedic multiplier, which excels in terms of speed, power and area, can increase the performance of the MAC. A fast method of multiplication based on ancient Indian Vedic mathematics is proposed in this paper. Among the various methods of multiplication in Vedic mathematics, Urdhva Tiryagbhyam is used, and the multiplication is for 32 X 32 bits....
Multiplier Accounting of Indian Mining Industry--The Concept
Hussain, A.; Karmakar, N. C.
2015-04-01
Input-output multipliers are indicators used for predicting the total impact on an economy due to the changes in its industrial demand and output. Also, input-output tables provide detailed dissection of the intermediate transactions in an economy. The aim of the paper is to put forward a basic framework of input-output economics as well as the multiplier concept. The outline of the methodology for calculating the multiplier associated with Indian mining industry is also presented.
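The multiplier concept outlined above rests on the Leontief inverse: total (direct plus indirect) requirements are (I - A)^-1, and the output multiplier of a sector is the corresponding column sum. A sketch with a hypothetical two-sector technical-coefficient matrix (the numbers are invented for illustration, not taken from the Indian mining data):

```python
import numpy as np

# Hypothetical 2-sector technical-coefficient matrix A:
# A[i, j] = input from sector i needed per unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])

# Leontief inverse: total (direct + indirect) requirements.
L = np.linalg.inv(np.eye(2) - A)

# Output multiplier of sector j = column sum of the Leontief inverse.
output_multipliers = L.sum(axis=0)
print(output_multipliers)
```

Each entry says how much total economy-wide output is generated per unit of final demand for that sector, which is exactly the impact-prediction use described in the abstract.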
Genotype Imputation for Latinos Using the HapMap and 1000 Genomes Project Reference Panels
Xiaoyi Gao
2012-06-01
Full Text Available Genotype imputation is a vital tool in genome-wide association studies (GWAS) and meta-analyses of multiple GWAS results. Imputation enables researchers to increase genomic coverage and to pool data generated using different genotyping platforms. HapMap samples are often employed as the reference panel. More recently, the 1000 Genomes Project resource is becoming the primary source for reference panels. Multiple GWAS and meta-analyses are targeting Latinos, the most populous and fastest growing minority group in the US. However, genotype imputation resources for Latinos are rather limited compared to individuals of European ancestry at present, largely because of the lack of good reference data. One choice of reference panel for Latinos is one derived from the population of Mexican individuals in Los Angeles contained in the HapMap Phase 3 project and the 1000 Genomes Project. However, a detailed evaluation of the quality of the imputed genotypes derived from the public reference panels has not yet been reported. Using simulation studies, the Illumina OmniExpress GWAS data from the Los Angeles Latino Eye Study, and the MACH software package, we evaluated the accuracy of genotype imputation in Latinos. Our results show that the 1000 Genomes Project AMR+CEU+YRI reference panel provides the highest imputation accuracy for Latinos, and that also including Asian samples in the panel can reduce imputation accuracy. We also provide the imputation accuracy for each autosomal chromosome using the 1000 Genomes Project panel for Latinos. Our results serve as a guide to future imputation-based analysis in Latinos.
Genotype Imputation for Latinos Using the HapMap and 1000 Genomes Project Reference Panels.
Gao, Xiaoyi; Haritunians, Talin; Marjoram, Paul; McKean-Cowdin, Roberta; Torres, Mina; Taylor, Kent D; Rotter, Jerome I; Gauderman, William J; Varma, Rohit
2012-01-01
Genotype imputation is a vital tool in genome-wide association studies (GWAS) and meta-analyses of multiple GWAS results. Imputation enables researchers to increase genomic coverage and to pool data generated using different genotyping platforms. HapMap samples are often employed as the reference panel. More recently, the 1000 Genomes Project resource is becoming the primary source for reference panels. Multiple GWAS and meta-analyses are targeting Latinos, the most populous and fastest growing minority group in the US. However, genotype imputation resources for Latinos are rather limited compared to individuals of European ancestry at present, largely because of the lack of good reference data. One choice of reference panel for Latinos is one derived from the population of Mexican individuals in Los Angeles contained in the HapMap Phase 3 project and the 1000 Genomes Project. However, a detailed evaluation of the quality of the imputed genotypes derived from the public reference panels has not yet been reported. Using simulation studies, the Illumina OmniExpress GWAS data from the Los Angeles Latino Eye Study, and the MACH software package, we evaluated the accuracy of genotype imputation in Latinos. Our results show that the 1000 Genomes Project AMR + CEU + YRI reference panel provides the highest imputation accuracy for Latinos, and that also including Asian samples in the panel can reduce imputation accuracy. We also provide the imputation accuracy for each autosomal chromosome using the 1000 Genomes Project panel for Latinos. Our results serve as a guide to future imputation-based analysis in Latinos.
Association studies with imputed variants using expectation-maximization likelihood-ratio tests.
Kuan-Chieh Huang
Full Text Available Genotype imputation has become standard practice in modern genetic studies. As sequencing-based reference panels continue to grow, increasingly more markers are being well or better imputed, but at the same time even more markers with relatively low minor allele frequency (MAF) are being imputed with low imputation quality. Here, we propose new methods that incorporate imputation uncertainty into downstream association analysis, with improved power and/or computational efficiency. We consider two scenarios: (I) when posterior probabilities of all potential genotypes are estimated; and (II) when only the one-dimensional summary statistic, imputed dosage, is available. For scenario I, we have developed an expectation-maximization likelihood-ratio test (EM-LRT) for association based on posterior probabilities. When only imputed dosages are available (scenario II), we first sample the genotype probabilities from their posterior distribution given the dosages, and then apply the EM-LRT on the sampled probabilities. Our simulations show that the type I error of the proposed EM-LRT methods under both scenarios is protected. Compared with existing methods, EM-LRT-Prob (for scenario I) offers optimal statistical power across a wide spectrum of MAF and imputation quality. EM-LRT-Dose (for scenario II) achieves a similar level of statistical power as EM-LRT-Prob and outperforms the standard Dosage method, especially for markers with relatively low MAF or imputation quality. Applications to two real data sets, the Cebu Longitudinal Health and Nutrition Survey study and the Women's Health Initiative Study, provide further support for the validity and efficiency of our proposed methods.
Optimized Modulo Multiplier Based On R.N.S
Manjula S. Doddamane
2013-07-01
Full Text Available To implement the long and repetitive multiplications of cryptographic and signal-processing algorithms, the residue number system is often adopted. In this paper, a new low-power and low-area modulo multiplier for the well-established {2^n-1, 2^n, 2^n+1} base is proposed. The radix-8 Booth encoding technique is used in the proposed modulo 2^n-1 and modulo 2^n+1 multipliers. In the proposed modulo 2^n-1 multiplier, the number of partial products is lowered to [n/3]+1. For modulo 2^n+1 multiplication, the aggregate bias due to the hard multiple and the modulo-reduced partial-product generation is composed of a multiplier-dependent dynamic bias and a multiplier-independent static bias. In the proposed modulo 2^n+1 multiplier, the number of partial products is lowered to n/3+6. Compared with radix-4 Booth-encoded and non-encoded modulo multipliers, the proposed modulo multipliers consume less area and have lower power dissipation.
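The modular arithmetic that a modulo 2^n-1 multiplier implements can be sketched in software via the end-around-carry identity 2^n ≡ 1 (mod 2^n-1). This illustrates only the arithmetic, not the paper's radix-8 Booth partial-product hardware:

```python
def mul_mod_mersenne(a, b, n):
    """Multiply modulo 2^n - 1 using end-around carry:
    split the product into low and high n-bit halves and add them,
    since 2^n ≡ 1 (mod 2^n - 1); repeat until the sum fits in n bits."""
    m = (1 << n) - 1
    p = a * b
    while p > m:
        p = (p & m) + (p >> n)
    return 0 if p == m else p  # 2^n - 1 itself represents 0

n = 8
assert mul_mod_mersenne(200, 123, n) == (200 * 123) % 255
```

Hardware modulo 2^n-1 multipliers exploit the same identity, folding the carry-out back into the least significant bit instead of performing a full division.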
Fission of Multiply Charged Alkali Clusters
Barnett, Robert N.; Yannouleas, Constantine; Landman, Uzi
2001-03-01
We use ab initio molecular dynamics simulations to investigate the fission of multiply charged pure and mixed alkali clusters. Positive (+2 to +4) clusters of up to 30 atoms are considered. The clusters are initially equilibrated with a charge of +1 or +2 (depending on size) and at temperatures of 150 to 800 K. Subsequently, the clusters are further ionized and their evolution is followed. For doubly charged clusters binary fission occurs, while higher charged clusters fission through ternary or quaternary channels. The most common occurrence is the emission of a singly charged 3-atom cluster, which may occur repeatedly until the remaining cluster is stable. The dynamics of the fission process is discussed, and the results are compared with experiments and with the predictions of the liquid-drop and shell-corrected jellium models.
Gas Electron multipliers for low energy beams
Arnold, F; Ropelewski, L; Spanggaard, J; Tranquille, G
2010-01-01
Gas Electron Multipliers (GEM) find their way to more and more applications in beam instrumentation. Gas Electron Multiplication uses a very similar physical phenomenon to that of Multi Wire Proportional Chambers (MWPC) but for small profile monitors they are much more cost efficient both to produce and to maintain. This paper presents the new GEM profile monitors intended to replace the MWPCs currently used at CERN’s low energy Antiproton Decelerator (AD). It will be shown how GEMs overcome the documented problems of profile measurements with MWPCs for low energy beams, where the interaction of the beam with the detector has a large influence on the measured profile. Results will be shown of profile measurements performed at 5 MeV using four different GEM prototypes, with discussion on the possible use of GEMs at even lower energies needed at the AD in 2013.
Four-gate transistor analog multiplier circuit
Mojarradi, Mohammad M. (Inventor); Blalock, Benjamin (Inventor); Cristoloveanu, Sorin (Inventor); Chen, Suheng (Inventor); Akarvardar, Kerem (Inventor)
2011-01-01
A differential output analog multiplier circuit utilizing four G.sup.4-FETs, each source connected to a current source. The four G.sup.4-FETs may be grouped into two pairs of two G.sup.4-FETs each, where one pair has its drains connected to a load, and the other pair has its drains connected to another load. The differential output voltage is taken at the two loads. In one embodiment, for each G.sup.4-FET, the first and second junction gates are each connected together, where a first input voltage is applied to the front gates of each pair, and a second input voltage is applied to the first junction gates of each pair. Other embodiments are described and claimed.
Satish S Bhairannawar
2014-06-01
Full Text Available Digital image processing applications like medical imaging, satellite imaging, biometric trait images, etc., rely on multipliers to improve image quality. However, existing multiplication techniques introduce errors in the output and consume more time; hence, error-free high-speed multipliers have to be designed. In this paper we propose an FPGA-based Recursive Error-Free Mitchell Log Multiplier (REFMLM) for image filters. The 2x2 error-free Mitchell log multiplier is designed with zero error by introducing an error-correction term, and is used in higher-order Karatsuba-Ofman Multiplier (KOM) architectures. The higher-order KOM multipliers are decomposed into a number of lower-order multipliers using radix 2, down to the basic 2x2 multiplier block, which is designed with the error-free Mitchell log multiplier. The 8x8 REFMLM is tested with a Gaussian filter to remove noise in a fingerprint image. The multiplier is synthesized using a Spartan 3 FPGA family device XC3S1500-5fg320. It is observed that performance parameters such as area utilization, speed, error and PSNR are better for the proposed architecture compared with existing architectures.
Multipliers for Floating-Point Double Precision and Beyond on FPGAs
Banescu, Sebastian; De Dinechin, Florent; Pasca, Bogdan; Tudoran, Radu
2010-01-01
International audience; The implementation of high-precision floating-point applications on reconfigurable hardware requires a variety of large multipliers: Standard multipliers are the core of floating-point multipliers; Truncated multipliers, trading resources for a well-controlled accuracy degradation, are useful building blocks in situations where a full multiplier is not needed. This work studies the automated generation of such multipliers using the embedded multipliers and adders prese...
Hu, Bo; Li, Liang; Greene, Tom
2016-07-30
Longitudinal cohort studies often collect both repeated measurements of longitudinal outcomes and times to clinical events whose occurrence precludes further longitudinal measurements. Although joint modeling of the clinical events and the longitudinal data can be used to provide valid statistical inference for target estimands in certain contexts, the application of joint models in medical literature is currently rather restricted because of the complexity of the joint models and the intensive computation involved. We propose a multiple imputation approach to jointly impute missing data of both the longitudinal and clinical event outcomes. With complete imputed datasets, analysts are then able to use simple and transparent statistical methods and standard statistical software to perform various analyses without dealing with the complications of missing data and joint modeling. We show that the proposed multiple imputation approach is flexible and easy to implement in practice. Numerical results are also provided to demonstrate its performance. Copyright © 2015 John Wiley & Sons, Ltd.
Imputation methods for filling missing data in urban air pollution data for Malaysia
Nur Afiqah Zakaria
2018-06-01
Full Text Available The air quality measurement data obtained from the continuous ambient air quality monitoring (CAAQM) station usually contain missing data. The missing observations usually occur due to machine failure, routine maintenance and human error. In this study, the hourly monitoring data of CO, O3, PM10, SO2, NOx, NO2, ambient temperature and humidity were used to evaluate four imputation methods (Mean Top Bottom, Linear Regression, Multiple Imputation and Nearest Neighbour). The air pollutant observations were simulated at four percentages of missing data, i.e. 5%, 10%, 15% and 20%. Performance measures, namely the Mean Absolute Error, Root Mean Squared Error, Coefficient of Determination and Index of Agreement, were used to describe the goodness of fit of the imputation methods. From the results of the performance measures, the Mean Top Bottom method was selected as the most appropriate imputation method for filling in the missing values in air pollutant data.
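The four performance measures named above can be computed as follows. This sketch uses one common definition of each (the coefficient of determination in its 1 - SSE/SST form, and Willmott's index of agreement), which may differ in detail from the study's exact formulas:

```python
import numpy as np

def imputation_scores(obs, pred):
    """Goodness-of-fit measures for comparing imputed vs. observed values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    r2 = 1 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    # Willmott's index of agreement (bounded in [0, 1], 1 = perfect).
    ioa = 1 - (err ** 2).sum() / (
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2).sum()
    return {"MAE": mae, "RMSE": rmse, "R2": r2, "IOA": ioa}

print(imputation_scores([3.0, 5.0, 7.0], [2.5, 5.5, 7.0]))
```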
A new proof of the Lagrange multiplier rule
J. Brinkhuis (Jan); V. Protassov (Vladimir)
2015-01-01
We present an elementary self-contained proof for the Lagrange multiplier rule. It does not refer to any substantial preparations and it is only based on the observation that a certain limit is positive. At the end of this note, the power of the Lagrange multiplier rule is analyzed.
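The rule asserts that at a constrained extremum of f subject to g = 0, the gradient of f is a scalar multiple λ of the gradient of g. A numerical illustration on a standard textbook example (maximizing f(x, y) = xy on the circle x² + y² = 2, with maximizer (1, 1); this example is not from the note itself):

```python
def grad(f, p, h=1e-6):
    """Central-difference gradient of f at point p."""
    return [
        (f([x + (h if i == j else 0) for j, x in enumerate(p)])
         - f([x - (h if i == j else 0) for j, x in enumerate(p)])) / (2 * h)
        for i in range(len(p))
    ]

f = lambda p: p[0] * p[1]                # objective
g = lambda p: p[0] ** 2 + p[1] ** 2 - 2  # constraint g = 0

p = [1.0, 1.0]            # known maximizer of f on the circle
gf, gg = grad(f, p), grad(g, p)
lam = gf[0] / gg[0]       # Lagrange multiplier candidate

# At the extremum, grad f is proportional to grad g.
assert abs(gf[1] - lam * gg[1]) < 1e-6
print("lambda =", lam)
```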
Dimension of the $c$-nilpotent multiplier of Lie algebras
MEHDI ARASKHAN; MOHAMMAD REZA RISMANCHIAN
2016-08-01
The purpose of this paper is to derive some inequalities for the dimension of the $c$-nilpotent multiplier of finite-dimensional Lie algebras and their factor Lie algebras. We further obtain an inequality between the dimensions of the $c$-nilpotent multiplier of a Lie algebra $L$ and the tensor product of a central ideal with its abelianized factor Lie algebra.
OPERATOR-VALUED FOURIER MULTIPLIER THEOREMS ON TRIEBEL SPACES
Bu Shangquan; Kim Jin-Myong
2005-01-01
The authors establish operator-valued Fourier multiplier theorems on Triebel spaces on RN, where the required smoothness of the multiplier functions depends on the dimension N and the indices of the Triebel spaces. This is used to give a sufficient condition of the maximal regularity in the sense of Triebel spaces for vector-valued Cauchy problems with Dirichlet boundary conditions.
Operator-valued Fourier Multipliers on Periodic Triebel Spaces
Shang Quan BU; Jin Myong KIM
2005-01-01
We establish operator-valued Fourier multiplier theorems on periodic Triebel spaces, where the required smoothness of the multipliers depends on the indices of the Triebel spaces. This is used to give a characterization of the maximal regularity in the sense of Triebel spaces for Cauchy problems with periodic boundary conditions.
Multiplier theorems for special Hermite expansions on Cn
2000-01-01
The weak type (1,1) estimate for special Hermite expansions on Cn is proved by using the Calderón-Zygmund decomposition. Then the multiplier theorem in Lp (1 < p < ∞) is obtained, and multipliers for a certain kind of Laguerre expansions are given in Lp space.
Design of Reversible Multipliers for Linear Filtering Applications in DSP
Rakshith Saligram
2012-12-01
Full Text Available Multipliers in DSP computations are crucial, so modern DSP systems need low-power multipliers to reduce power dissipation. One of the efficient ways to reduce power dissipation is the bypassing technique: if a bit in the multiplier and/or multiplicand is zero, the whole array row and/or diagonal is bypassed, hence the name bypass multipliers. This paper presents the column-bypass multiplier and the 2-D bypass multiplier using reversible logic, a prominent technology with applications in low-power CMOS and quantum computation. The switching activity of any component in the bypass multiplier depends only on the input bit coefficients. These multipliers find application in linear-filtering FFT computational units, particularly during zero padding, where there are large numbers of zeros. A bypass multiplier reduces the number of switching activities as well as the power consumption, and the reversible logic design further reduces the remaining dissipation.
Multipliers for the Absolute Euler Summability of Fourier Series
Prem Chandra
2001-05-01
In this paper, the author has investigated necessary and sufficient conditions for the absolute Euler summability of Fourier series with multipliers. These conditions are weaker than those obtained earlier by some workers. It is further shown that the multipliers are best possible in a certain sense.
An Efficient 16-Bit Multiplier based on Booth Algorithm
Khan, M. Zamin Ali; Saleem, Hussain; Afzal, Shiraz; Naseem, Jawed
2012-11-01
Multipliers are key components of many high-performance systems such as microprocessors and digital signal processors. Optimizing the speed and area of a multiplier is a major design issue; the two are usually conflicting constraints, so improving speed mostly results in larger area. A VHDL-designed architecture based on the Booth multiplication algorithm is proposed, which not only optimizes speed but is also efficient in energy use.
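Booth's algorithm recodes runs of ones in the multiplier so that each run costs one addition and one subtraction instead of an addition per bit. A minimal radix-2 software model of the recoding (illustrative only; the paper's design is a 16-bit VHDL hardware architecture):

```python
def booth_multiply(m, r, bits=16):
    """Radix-2 Booth multiplication of m by a signed `bits`-wide r.
    Scans multiplier bit pairs (r_i, r_{i-1}): 01 -> add m*2^i,
    10 -> subtract m*2^i, 00/11 -> do nothing."""
    mask = (1 << bits) - 1
    rm = r & mask          # two's-complement view of the multiplier
    acc = 0
    prev = 0
    for i in range(bits):
        cur = (rm >> i) & 1
        if (cur, prev) == (0, 1):    # end of a run of ones: add
            acc += m << i
        elif (cur, prev) == (1, 0):  # start of a run of ones: subtract
            acc -= m << i
        prev = cur
    return acc

assert booth_multiply(-7, 23) == -161
assert booth_multiply(123, -45) == 123 * -45
```

The telescoping sum of the +/- terms recovers the two's-complement value of the multiplier, which is why the recoding is exact for signed operands.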
L-R smash products for multiplier Hopf algebras
ZHAO Li-hui; LU Di-ming; FANG Xiao-li
2008-01-01
The theory of L-R smash product is extended to multiplier Hopf algebras and a sufficient condition for L-R smash product to be regular multiplier Hopf algebras is given. In particular the result of the paper implies Delvaux's main theorem in the case of smash products.
Missing value imputation for microarray gene expression data using histone acetylation information
Feng Jihua
2008-05-01
Full Text Available Abstract Background It is an important pre-processing step to accurately estimate missing values in microarray data, because complete datasets are required for numerous expression profile analyses in bioinformatics. Although several methods have been suggested, their performance is not satisfactory for datasets with high missing percentages. Results This paper explores the feasibility of imputing missing values with the help of the gene regulatory mechanism. An imputation framework called the histone acetylation information aided imputation method (HAIimpute) is presented. It incorporates histone acetylation information into the conventional KNN (k-nearest neighbor) and LLS (local least square) imputation algorithms for final prediction of the missing values. The experimental results indicated that the use of acetylation information can provide significant improvements in microarray imputation accuracy. The HAIimpute methods consistently improve the widely used KNN and LLS methods in terms of normalized root mean squared error (NRMSE). Meanwhile, the genes imputed by the HAIimpute methods are more correlated with the original complete genes in terms of Pearson correlation coefficients. Furthermore, the proposed methods also outperform GOimpute, one of the existing related methods that uses functional similarity as the external information. Conclusion We demonstrated that the use of histone acetylation information can greatly improve the performance of imputation, especially at high missing percentages. This idea can be generalized to various imputation methods to improve their performance. Moreover, with more knowledge accumulated on the gene regulatory mechanism in addition to histone acetylation, the performance of our approach can be further improved and verified.
Glitch Reduction in Low- Power Low- Frequency Multiplier
Bhethala Rajasekhar
2014-01-01
Full Text Available Multiplication is an essential arithmetic operation for common DSP applications, such as filtering and the fast Fourier transform (FFT). To achieve high execution speed, parallel array multipliers are widely used. These multipliers tend to consume most of the power in DSP computations, and thus power-efficient multipliers are very important for the design of low-power DSP systems. A straightforward approach is to design a full adder (FA) that consumes less power. Power reduction can also be achieved through structural modification; for example, rows of partial products can be ignored. In this project a 10-transistor full adder is designed for low power and used in the implementation of different types of multipliers. All these multipliers are compared across different technologies. A power-gating technique is applied by placing an MTCMOS cell at the fine-grain level so as to minimize the leakage power.
COMPARATIVE DESIGN OF REGULAR STRUCTURED MODIFIED BOOTH MULTIPLIER
Ram Racksha Tripathi
2016-04-01
Full Text Available Multiplication is a crucial function and plays a vital role in practically any DSP system. Several DSP algorithms require different types of multiplications, specifically the modified Booth multiplication algorithm. In this paper, a simple approach is proposed for generating the last partial-product row that removes the extra sign (negative) bit, to achieve a more regular structure. Compared to conventional multipliers, the proposed modified Booth multipliers achieve reductions in area of 5.9%, power of 3.2%, and delay of 0.5% for 8 x 8 multipliers. For the 16 x 16 multiplier, the achievable improvements in area, power, and delay are 4.0%, 2.3%, and 0.3%, respectively. These multipliers are implemented in Verilog HDL and synthesized using the Synopsys Design Compiler with an Artisan TSMC 90nm technology.
genipe: an automated genome-wide imputation pipeline with automatic reporting and statistical tools.
Lemieux Perreault, Louis-Philippe; Legault, Marc-André; Asselin, Géraldine; Dubé, Marie-Pierre
2016-12-01
Genotype imputation is now commonly performed following genome-wide genotyping experiments. Imputation increases the density of analyzed genotypes in the dataset, enabling fine-mapping across the genome. However, the process of imputation using the most recent publicly available reference datasets can require considerable computation power and the management of hundreds of large intermediate files. We have developed genipe, a complete genome-wide imputation pipeline which includes automatic reporting, imputed data indexing and management, and a suite of statistical tests for imputed data commonly used in genetic epidemiology (Sequence Kernel Association Test, Cox proportional hazards for survival analysis, and linear mixed models for repeated measurements in longitudinal studies). The genipe package is open-source Python software and is freely available for non-commercial use (CC BY-NC 4.0) at https://github.com/pgxcentre/genipe. Documentation and tutorials are available at http://pgxcentre.github.io/genipe. Contact: louis-philippe.lemieux.perreault@statgen.org or marie-pierre.dube@statgen.org. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
First Use of Multiple Imputation with the National Tuberculosis Surveillance System
Christopher Vinnard
2013-01-01
Full Text Available Aims. The purpose of this study was to compare methods for handling missing data in analysis of the National Tuberculosis Surveillance System of the Centers for Disease Control and Prevention. Because of the high rate of missing human immunodeficiency virus (HIV) infection status in this dataset, we used multiple imputation methods to minimize the bias that may result from less sophisticated methods. Methods. We compared analysis based on multiple imputation methods with analysis based on deleting subjects with missing covariate data from regression analysis (case exclusion), and determined whether the use of increasing numbers of imputed datasets would lead to changes in the estimated association between isoniazid resistance and death. Results. Following multiple imputation, the odds ratio for initial isoniazid resistance and death was 2.07 (95% CI 1.30, 3.29); with case exclusion, this odds ratio decreased to 1.53 (95% CI 0.83, 2.83). The use of more than 5 imputed datasets did not substantively change the results. Conclusions. Our experience with the National Tuberculosis Surveillance System dataset supports the use of multiple imputation methods in epidemiologic analysis, but also demonstrates that close attention should be paid to the potential impact of missing covariates at each step of the analysis.
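Once each imputed dataset has been analyzed, per-imputation estimates are typically combined with Rubin's rules, which is how pooled odds ratios like the one above are obtained. A sketch with hypothetical log-odds-ratio estimates from m = 5 imputations (the numbers are invented, not the study's estimates):

```python
import math

def rubin_pool(estimates, variances):
    """Combine per-imputation estimates with Rubin's rules:
    pooled estimate = mean; total variance = within + (1 + 1/m) * between."""
    m = len(estimates)
    qbar = sum(estimates) / m
    ubar = sum(variances) / m                              # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    t = ubar + (1 + 1 / m) * b
    return qbar, t

# Hypothetical log-odds-ratio estimates from m = 5 imputed datasets.
qs = [0.70, 0.75, 0.68, 0.74, 0.77]
vs = [0.050, 0.052, 0.049, 0.051, 0.050]
q, t = rubin_pool(qs, vs)
se = math.sqrt(t)
print(f"pooled log-OR = {q:.3f}, 95% CI = ({q - 1.96*se:.3f}, {q + 1.96*se:.3f})")
```

The between-imputation term is what grows when the imputations disagree, so the pooled interval honestly reflects the uncertainty due to the missing data.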
Zhang, Guosheng; Huang, Kuan-Chieh; Xu, Zheng; Tzeng, Jung-Ying; Conneely, Karen N; Guan, Weihua; Kang, Jian; Li, Yun
2016-05-01
DNA methylation is a key epigenetic mark involved in both normal development and disease progression. Recent advances in high-throughput technologies have enabled genome-wide profiling of DNA methylation. However, DNA methylation profiling often employs different designs and platforms with varying resolution, which hinders joint analysis of methylation data from multiple platforms. In this study, we propose a penalized functional regression model to impute missing methylation data. By incorporating functional predictors, our model utilizes information from nonlocal probes to improve imputation quality. Here, we compared the performance of our functional model to linear regression and the best single probe surrogate in real data and via simulations. Specifically, we applied different imputation approaches to an acute myeloid leukemia dataset consisting of 194 samples and our method showed higher imputation accuracy, manifested, for example, by a 94% relative increase in information content and up to 86% more CpG sites passing post-imputation filtering. Our simulated association study further demonstrated that our method substantially improves the statistical power to identify trait-associated methylation loci. These findings indicate that the penalized functional regression model is a convenient and valuable imputation tool for methylation data, and it can boost statistical power in downstream epigenome-wide association study (EWAS).
A comparison of multiple imputation methods for incomplete longitudinal binary data.
Yamaguchi, Yusuke; Misumi, Toshihiro; Maruo, Kazushi
2017-09-08
Longitudinal binary data are commonly encountered in clinical trials. Multiple imputation is an approach for obtaining a valid estimate of treatment effects under an assumption of a missing-at-random mechanism. Although there are a variety of multiple imputation methods for longitudinal binary data, few studies have reported on the relative performances of the methods. Moreover, when focusing on the treatment effect throughout a period, which has often been used in clinical evaluations of specific disease areas, no definitive investigations comparing the methods have been available. We conducted an extensive simulation study to examine the comparative performances of six multiple imputation methods available in the SAS MI procedure for longitudinal binary data, where two endpoints of responder rates, at a specified time point and throughout a period, were assessed. The simulation study suggested that results from naïve approaches of single imputation with non-responders and complete case analysis could be very sensitive to missing data. The multiple imputation methods using a monotone method and a full conditional specification with a logistic regression imputation model were recommended for obtaining unbiased and robust estimates of the treatment effect. The methods are illustrated with data from a mental health study.
Optimizing strassen matrix multiply on GPUs
ul Hasan Khan, Ayaz
2015-06-01
© 2015 IEEE. Many-core systems are basically designed for applications having large data parallelism. Strassen Matrix Multiply (MM) can be formulated as a depth-first (DFS) traversal of a recursion tree where all cores work in parallel on computing each of the NxN sub-matrices, which reduces storage at the detriment of large data motion to gather and aggregate the results. We propose Strassen and Winograd algorithms (S-MM and W-MM) based on three optimizations: a set of basic algebra functions to reduce overhead, invoking an efficient library (CUBLAS 5.5), and parameter-tuning of parametric kernels to improve resource occupancy. On GPUs, W-MM and S-MM with one recursion level outperform the CUBLAS 5.5 library, being up to twice as fast for large arrays satisfying N>=2048 and N>=3072, respectively. Compared to the NVIDIA SDK library, S-MM and W-MM achieved a speedup between 20x and 80x for the above arrays. The proposed approach can be used to enhance the performance of the CUBLAS and MKL libraries.
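One recursion level of Strassen's algorithm replaces the 8 half-size products of the naive block decomposition with 7, which is the scheme S-MM applies before falling back to library calls. A sketch where the sub-products use ordinary matrix multiplication (standing in for the CUBLAS calls the paper uses):

```python
import numpy as np

def strassen_once(A, B):
    """One recursion level of Strassen's algorithm: 7 half-size products
    instead of 8. A and B are square with even dimension; the sub-products
    fall back to ordinary matmul, as S-MM does via an optimized BLAS."""
    n = A.shape[0]
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(4, 4)
B = np.random.rand(4, 4)
assert np.allclose(strassen_once(A, B), A @ B)
```

The extra additions are why the crossover versus a tuned GEMM only pays off for sufficiently large N, consistent with the N>=2048 threshold reported above.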
Beyond Linear Delay Multipliers in Air Transport
Seddik Belkoura
2017-01-01
Full Text Available Delays are considered one of the most important burdens of air transport, both for their social and environmental consequences and for the cost they cause for airlines and passengers. It is therefore not surprising that a large effort has been devoted to study how they propagate through the system. One of the most important indicators to assess such propagation is the delay multiplier, a ratio between outbound and inbound average delays; in spite of its widespread utilisation, its simplicity precludes capturing all details about the dynamics behind the diffusion process. Here we present a methodology that extracts a more complete relationship between the in- and outbound delays, distinguishing a linear and a nonlinear phase and thus yielding a richer description of the system’s response as a function of the delay magnitude. We validate the methodology through the study of a historical data set of flights crossing the European airspace and show how its most important airports have heterogeneous ways of reacting to extreme delays and that this reaction strongly depends on some of their global properties.
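The classical delay multiplier referred to above is simply the ratio of average outbound to average inbound delay. A minimal illustration with invented delay values (minutes) for one hypothetical airport-day:

```python
def delay_multiplier(inbound, outbound):
    """Classical delay multiplier: ratio of average outbound delay to
    average inbound delay at an airport (values in minutes)."""
    return (sum(outbound) / len(outbound)) / (sum(inbound) / len(inbound))

# Hypothetical delays for one airport-day.
inbound = [10.0, 20.0, 5.0, 15.0]
outbound = [12.0, 30.0, 6.0, 24.0]
print(delay_multiplier(inbound, outbound))  # 1.44: delays amplified on average
```

A value above 1 indicates amplification; the paper's point is that this single ratio hides the nonlinear regime that appears for extreme delays.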
An Optimized Sparse Approximate Matrix Multiply
Bock, Nicolas
2012-01-01
We present an optimized single-precision implementation of the Sparse Approximate Matrix Multiply (\\SpAMM{}) [M. Challacombe and N. Bock, arXiv {\\bf 1011.3534} (2010)], a fast algorithm for matrix-matrix multiplication for matrices with decay that achieves an $\\mathcal{O} (n \\ln n)$ computational complexity with respect to matrix dimension $n$. We find that the max norm of the error matrix achieved with a \\SpAMM{} tolerance of below $2 \\times 10^{-8}$ is lower than that of the single-precision {\\tt SGEMM} for quantum chemical test matrices, while outperforming {\\tt SGEMM} with a cross-over already for small matrices ($n \\sim 1000$). Relative to naive implementations of \\SpAMM{} using optimized versions of {\\tt SGEMM}, such as those found in Intel's Math Kernel Library ({\\tt MKL}) or AMD's Core Math Library ({\\tt ACML}), our optimized version is found to be significantly faster. Detailed performance comparisons are made for quantum chemical matrices of RHF/STO-2G and RHF/6-31G${}^{**}$ water clusters.
Vacancy rearrangement processes in multiply ionized atoms
Czarnota, M [Institute of Physics, Swietokrzyska Academy, 25-406 Kielce (Poland); Pajek, M [Institute of Physics, Swietokrzyska Academy, 25-406 Kielce (Poland); Banas, D [Institute of Physics, Swietokrzyska Academy, 25-406 Kielce (Poland); Dousse, J-Cl [Physics Department, University of Fribourg, CH-1700 Fribourg (Switzerland); Maillard, Y-P [Physics Department, University of Fribourg, CH-1700 Fribourg (Switzerland); Mauron, O [Physics Department, University of Fribourg, CH-1700 Fribourg (Switzerland); Raboud, P A [Physics Department, University of Fribourg, CH-1700 Fribourg (Switzerland); Berset, M [Physics Department, University of Fribourg, CH-1700 Fribourg (Switzerland); Hoszowska, J [European Synchrotron Radiation Facility (ESRF), F-38043 Grenoble (France); Slabkowska, K [Faculty of Chemistry, Nicholas Copernicus University, 87-100 Torun (Poland); Polasik, M [Faculty of Chemistry, Nicholas Copernicus University, 87-100 Torun (Poland); Chmielewska, D [Soltan Institute for Nuclear Studies, 05-400 Otwock-Swierk (Poland); Rzadkiewicz, J [Soltan Institute for Nuclear Studies, 05-400 Otwock-Swierk (Poland); Sujkowski, Z [Soltan Institute for Nuclear Studies, 05-400 Otwock-Swierk (Poland)
2007-03-01
We demonstrate that in order to interpret the x-ray satellite structure of Pd L{alpha}{sub 1,2}(L{sub 3}M{sub 4,5}) transitions excited by fast O ions, which was measured using a high-resolution von Hamos crystal spectrometer, the vacancy rearrangement processes taking place prior to the x-ray emission have to be taken into account. The measured spectra were compared with the predictions of the multi-configuration Dirac-Fock (MCDF) calculations using the fluorescence and Coster-Kronig yields, which were modified due to a reduced number of electrons available for relaxation processes and the effect of closing the Coster-Kronig transitions. We demonstrate that the vacancy rearrangement processes can be described in terms of the rearrangement factor, which can be calculated by solving the system of rate equations modelling the flow of vacancies in the multiply ionized atom. By using this factor, the ionization probability at the moment of collision can be extracted from the measured intensity distribution of x-ray satellites. The present results support the independent electron picture of multiple ionization and indicate the importance of using Dirac-Hartree-Fock wave functions to calculate the ionization probabilities.
Genotype Imputation To Improve the Cost-Efficiency of Genomic Selection in Farmed Atlantic Salmon
Tsai, Hsin-Yuan; Matika, Oswald; Edwards, Stefan McKinnon; Antolín–Sánchez, Roberto; Hamilton, Alastair; Guy, Derrick R.; Tinch, Alan E.; Gharbi, Karim; Stear, Michael J.; Taggart, John B.; Bron, James E.; Hickey, John M.; Houston, Ross D.
2017-01-01
Genomic selection uses genome-wide marker information to predict breeding values for traits of economic interest, and is more accurate than pedigree-based methods. The development of high density SNP arrays for Atlantic salmon has enabled genomic selection in selective breeding programs, alongside high-resolution association mapping of the genetic basis of complex traits. However, in sibling testing schemes typical of salmon breeding programs, trait records are available on many thousands of fish with close relationships to the selection candidates. Therefore, routine high density SNP genotyping may be prohibitively expensive. One means of reducing genotyping cost is the use of genotype imputation, where selected key animals (e.g., breeding program parents) are genotyped at high density, and the majority of individuals (e.g., performance tested fish and selection candidates) are genotyped at much lower density, followed by imputation to high density. The main objectives of the current study were to assess the feasibility and accuracy of genotype imputation in the context of a salmon breeding program. The specific aims were: (i) to measure the accuracy of genotype imputation using medium (25 K) and high (78 K) density mapped SNP panels, by masking varying proportions of the genotypes and assessing the correlation between the imputed genotypes and the true genotypes; and (ii) to assess the efficacy of imputed genotype data in genomic prediction of key performance traits (sea lice resistance and body weight). Imputation accuracies of up to 0.90 were observed using the simple two-generation pedigree dataset, and moderately high accuracy (0.83) was possible even with very low density SNP data (∼250 SNPs). The performance of genomic prediction using imputed genotype data was comparable to using true genotype data, and both were superior to pedigree-based prediction. These results demonstrate that the genotype imputation approach used in this study can provide a cost
The utility of low-density genotyping for imputation in the Thoroughbred horse.
Corbin, Laura J; Kranis, Andreas; Blott, Sarah C; Swinburne, June E; Vaudin, Mark; Bishop, Stephen C; Woolliams, John A
2014-02-04
Despite the dramatic reduction in the cost of high-density genotyping that has occurred over the last decade, it remains one of the limiting factors for obtaining the large datasets required for genomic studies of disease in the horse. In this study, we investigated the potential for low-density genotyping and subsequent imputation to address this problem. Using the haplotype phasing and imputation program, BEAGLE, it is possible to impute genotypes from low- to high-density (50K) in the Thoroughbred horse with reasonable to high accuracy. Analysis of the sources of variation in imputation accuracy revealed dependence both on the minor allele frequency of the single nucleotide polymorphisms (SNPs) being imputed and on the underlying linkage disequilibrium structure. Whereas equidistant spacing of the SNPs on the low-density panel worked well, optimising SNP selection to increase their minor allele frequency was advantageous, even when the panel was subsequently used in a population of different geographical origin. Replacing base pair position with linkage disequilibrium map distance reduced the variation in imputation accuracy across SNPs. Whereas a 1K SNP panel was generally sufficient to ensure that more than 80% of genotypes were correctly imputed, other studies suggest that a 2K to 3K panel is more efficient to minimize the subsequent loss of accuracy in genomic prediction analyses. The relationship between accuracy and genotyping costs for the different low-density panels suggests that a 2K SNP panel would represent good value for money. Low-density genotyping with a 2K SNP panel followed by imputation provides a compromise between cost and accuracy that could promote more widespread genotyping, and hence the use of genomic information in horses. In addition to offering a low cost alternative to high-density genotyping, imputation provides a means to combine datasets from different genotyping platforms, which is becoming necessary since researchers are
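The accuracy measure such masking studies report, the correlation between true and imputed genotypes at masked positions, can be sketched as follows. This is an illustration of the metric with made-up dosages, not code from the study:

```python
def imputation_accuracy(true, imputed):
    """Pearson correlation between true and imputed genotype dosages
    (coded 0/1/2 minor-allele counts) at the masked positions."""
    n = len(true)
    mt, mi = sum(true) / n, sum(imputed) / n
    cov = sum((t - mt) * (p - mi) for t, p in zip(true, imputed))
    sd_t = sum((t - mt) ** 2 for t in true) ** 0.5
    sd_i = sum((p - mi) ** 2 for p in imputed) ** 0.5
    return cov / (sd_t * sd_i)

# one masked-then-imputed SNP with a single heterozygote miscalled as homozygote
acc = imputation_accuracy([0, 1, 2, 1, 0, 2], [0, 1, 2, 2, 0, 2])
```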
Siddique, Juned; Harel, Ofer; Crespi, Catherine M; Hedeker, Donald
2014-07-30
The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables, which formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software.
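The "rules for nested multiple imputation" mentioned here build on Rubin's single-level combining rules, which can be sketched as follows. This is a minimal illustration, not the authors' code; nested imputation adds a second between-model variance component on top of this:

```python
from statistics import mean, variance

def pool(estimates, variances):
    """Rubin's rules: combine per-imputation point estimates and
    variances from M imputed datasets into one estimate and variance."""
    m = len(estimates)
    q_bar = mean(estimates)          # pooled point estimate
    u_bar = mean(variances)          # within-imputation variance
    b = variance(estimates)          # between-imputation variance
    t = u_bar + (1 + 1 / m) * b      # total variance
    return q_bar, t

# five hypothetical per-imputation estimates with squared standard error 0.04
estimate, total_var = pool([1.02, 0.95, 1.10, 0.99, 1.05], [0.04] * 5)
```

The between-imputation term `b` is what carries the extra uncertainty due to the missing data into the pooled standard error.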
Siddique, Juned; Harel, Ofer; Crespi, Catherine M
2012-12-01
We present a framework for generating multiple imputations for continuous data when the missing data mechanism is unknown. Imputations are generated from more than one imputation model in order to incorporate uncertainty regarding the missing data mechanism. Parameter estimates based on the different imputation models are combined using rules for nested multiple imputation. Through the use of simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal clinical trial of low-income women with depression where nonignorably missing data were a concern. We show that different assumptions regarding the missing data mechanism can have a substantial impact on inferences. Our method provides a simple approach for formalizing subjective notions regarding nonresponse so that they can be easily stated, communicated, and compared.
Dyadic Bivariate Wavelet Multipliers in L2(R2)
Zhong Yan LI; Xian Liang SHI
2011-01-01
The single 2-dilation wavelet multipliers in the one-dimensional case, and the single A-dilation wavelet multipliers (where A is any expansive matrix with integer entries and |det A| = 2) in the two-dimensional case, were completely characterized by the Wutam Consortium (1998) and by Li Z. et al. (2010). But there exist no results on multivariate wavelet multipliers corresponding to an integer expansive dilation matrix whose determinant has absolute value other than 2 in L2(R2). In this paper, we choose 2I2 = (2 0; 0 2) as the dilation matrix and consider multipliers of the 2I2-dilation multivariate wavelet Ψ = {ψ1, ψ2, ψ3} (which is called a dyadic bivariate wavelet). Here we call a measurable function family f = {f1, f2, f3} a dyadic bivariate wavelet multiplier if Ψ1 = {F^{-1}(f1ψ1), F^{-1}(f2ψ2), F^{-1}(f3ψ3)} is a dyadic bivariate wavelet for any dyadic bivariate wavelet Ψ = {ψ1, ψ2, ψ3}, where f̂ and F^{-1} denote the Fourier transform and the inverse Fourier transform of a function f, respectively. We study dyadic bivariate wavelet multipliers and give some conditions for dyadic bivariate wavelet multipliers. We also give concrete forms of linear phases of dyadic MRA bivariate wavelets.
Ming-Huei Chen
Full Text Available Imputation has been widely used in genome-wide association studies (GWAS) to infer genotypes of un-genotyped variants based on the linkage disequilibrium in external reference panels such as the HapMap and 1000 Genomes. However, imputation has only rarely been performed based on family relationships to infer genotypes of un-genotyped individuals. Using 8998 Framingham Heart Study (FHS) participants genotyped with Affymetrix 550K SNPs, we imputed genotypes of the same set of SNPs for an additional 3121 participants, most of whom were never genotyped due to lack of a DNA sample. Prior to imputation, 122 pedigrees were too large to be handled by the imputation software Merlin. Therefore, we developed a novel pedigree splitting algorithm that can maximize the number of genotyped relatives for imputing each un-genotyped individual, while keeping new sub-pedigrees under a pre-specified size. In GWAS of four phenotypes available in FHS (Alzheimer disease, circulating levels of fibrinogen, high-density lipoprotein cholesterol, and uric acid), we compared results using genotyped individuals only with results using both genotyped and imputed individuals. We studied the impact of applying different imputation quality filtering thresholds on the association results and did not find a universal threshold that always resulted in a more significant p-value for previously identified loci. However, most of these loci had a lower p-value when we only included imputed genotypes with ≥60% SNP-specific and ≥50% person-specific imputation certainty. In summary, we developed a novel algorithm for splitting large pedigrees for imputation and found a plausible imputation quality filtering threshold based on FHS. Further examination may be required to generalize this threshold to other studies.
Fix-point Multiplier Distributions in Discrete Turbulent Cascade Models
Jouault, B; Lipa, P
1998-01-01
One-point time-series measurements limit the observation of three-dimensional fully developed turbulence to one dimension. For one-dimensional models, like multiplicative branching processes, this implies that the energy flux from large to small scales is not conserved locally. This then renders the random weights used in the cascade curdling to be different from the multipliers obtained from a backward averaging procedure. The resulting multiplier distributions become solutions of a fix-point problem. With a further restoration of homogeneity, all observed correlations between multipliers in the energy dissipation field can be understood in terms of simple scale-invariant multiplicative branching processes.
Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy
Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker
2015-01-01
The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random data (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations to the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy. PMID:26283989
Design of Low Power Vedic Multiplier Based on Reversible Logic
Sagar
2017-03-01
Full Text Available Reversible logic is a new technique to reduce power dissipation. There is no loss of information in reversible logic: it produces a unique output for specified inputs and vice versa. Because no bits are lost, power dissipation is reduced. In this paper a new design for a high speed, low power and area efficient 8-bit Vedic multiplier using the Urdhva Tiryakbhyam Sutra (an ancient methodology of Indian mathematics) is introduced and implemented using reversible logic to generate products with low power dissipation. The UT Sutra generates partial products and sums in a single step with fewer adder units compared to conventional Booth and array multipliers, which reduces the delay and area utilized, while reversible logic reduces the power dissipation. An 8-bit Vedic multiplier is realized using 4-bit Vedic multipliers and modified ripple carry adders. The proposed logic blocks are implemented in the Verilog HDL programming language and simulated using Xilinx ISE software.
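The Urdhva Tiryakbhyam (vertical-and-crosswise) pattern that such hardware implements can be sketched in software. This toy version (names illustrative, digits base 10 rather than the paper's binary datapath) forms each result column as a one-step sum of crosswise digit products, then propagates carries:

```python
def urdhva_multiply(a, b):
    """Multiply two equal-length digit lists (least-significant first)
    by the vertical-and-crosswise pattern: column k is the sum of all
    digit products a[i]*b[j] with i + j == k, then carries propagate."""
    n = len(a)
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a[i] * b[j]
    result, carry = [], 0
    for c in cols:
        c += carry
        result.append(c % 10)
        carry = c // 10
    while carry:                 # leftover carry becomes high digits
        result.append(carry % 10)
        carry //= 10
    return result

# 23 * 14 = 322, digits least-significant first
digits = urdhva_multiply([3, 2], [4, 1])
```

In hardware, each column sum is generated in parallel in one step, which is where the speed advantage over sequential Booth recoding comes from.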
A LOW-PHASE NOISE FREQUENCY MULTIPLIER CHAIN ...
Consequently, the driving crystal oscillators and the first multiplier .... the upper cut off frequency of the system and its asymptotic slope. ..... (SMHz}, the order of multiplication of the "difference" ... up to 300 GHz. To go higher in frequency it is.
Multipliers of Marcinkiewicz type for spherical harmonic expansions
陆善镇; 马柏林
1996-01-01
A sufficient condition for multipliers on the unit sphere to be bounded is given. The condition is analogous to the Marcinkiewicz criterion, and extends a result of A. Bonami and J. L. Clerc.
Gauss-Bonnet dark energy by Lagrange multipliers
Capozziello, Salvatore; Odintsov, Sergei D
2013-01-01
A string-inspired effective theory of gravity, containing the Gauss-Bonnet invariant interacting with a scalar field, is considered in view of obtaining cosmological dark energy solutions. A Lagrange multiplier is inserted into the action in order to achieve the cosmological reconstruction by selecting suitable forms of couplings and potentials. Several cosmological exact solutions (including dark energy of quintessence, phantom or Little Rip type) are derived in the presence and in the absence of the Lagrange multiplier, showing the difference in the two dynamical approaches. In the models that we consider, the Lagrange multiplier behaves as a sort of dust fluid that realizes the transitions between matter dominated and dark energy epochs. The relation between Lagrange multipliers and Noether symmetries is discussed.
Single electron based binary multipliers with overflow detection
ATHARVA
Multipliers with overflow detection based on serial and parallel ... current flowing through a tunnel junction is a series of events in which only one electron ..... Processing delay based on SED and analyzed SED for parallel prefix circuit.
Efek Multiplier Zakat Terhadap Pendapatan di Propinsi DKI Jakarta
M. Nur Rianto Al Arif
2015-10-01
Full Text Available The aim of this research is to analyze the multiplier effect of zakah revenue in DKI Jakarta, a case study at Badan Amil Zakat, Infak, and Shadaqah (BAZIS) DKI Jakarta. Least squares methods are used to analyze the data. The coefficients are used to calculate the multiplier effect of zakah revenue, which is compared with the economy without zakah revenue. The results show a multiplier effect of 2.522 for zakah revenue and 3.561 for economic income without zakah revenue. This suggests that the management of zakah in BAZIS DKI Jakarta can still have a significant influence on the economy. DOI: 10.15408/aiq.v4i1.2079
Sociophysics of sexism: normal and anomalous petrie multipliers
Eliazar, Iddo
2015-07-01
A recent mathematical model by Karen Petrie explains how sexism towards women can arise in organizations where men and women are equally sexist. Indeed, the Petrie model predicts that such sexism will emerge whenever there is a male majority, and quantifies this majority bias by the ‘Petrie multiplier’: the square of the male/female ratio. In this paper—emulating the shift from ‘normal’ to ‘anomalous’ diffusion—we generalize the Petrie model to a stochastic Poisson model that accommodates heterogeneously sexist men and women, and that extends the ‘normal’ quadratic Petrie multiplier to ‘anomalous’ non-quadratic multipliers. The Petrie multipliers span a full spectrum of behaviors which we classify into four universal types. A variation of the stochastic Poisson model and its Petrie multipliers is further applied to the context of cyber warfare.
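The basic, non-stochastic Petrie argument can be sketched numerically (a hypothetical illustration, not the paper's generalized Poisson model): with an n_m:n_f majority and equal per-person remark rates, remarks received per woman exceed remarks received per man by the square of the ratio.

```python
def petrie_multiplier(n_m, n_f, r=1.0):
    """Ratio of sexist remarks received per woman to remarks received
    per man, when every person directs r remarks at the opposite sex."""
    per_woman = n_m * r / n_f   # remarks by the n_m men, spread over n_f women
    per_man = n_f * r / n_m     # remarks by the n_f women, spread over n_m men
    return per_woman / per_man  # equals (n_m / n_f) ** 2

ratio = petrie_multiplier(n_m=80, n_f=20)   # a 4:1 majority gives a 16x multiplier
```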
Dealing with missing data in a multi-question depression scale: a comparison of imputation methods
Stuart Heather
2006-12-01
Full Text Available Abstract. Background: Missing data present a challenge to many research projects. The problem is often pronounced in studies utilizing self-report scales, and literature addressing different strategies for dealing with missing data in such circumstances is scarce. The objective of this study was to compare six different imputation techniques for dealing with missing data in the Zung Self-reported Depression Scale (SDS). Methods: 1580 participants from a surgical outcomes study completed the SDS. The SDS is a 20-question scale that respondents complete by circling a value of 1 to 4 for each question. The sum of the responses is calculated and respondents are classified as exhibiting depressive symptoms when their total score is over 40. Missing values were simulated by randomly selecting questions whose values were then deleted (a missing completely at random simulation). Additionally, a missing at random and a missing not at random simulation were completed. Six imputation methods were then considered: (1) multiple imputation, (2) single regression, (3) individual mean, (4) overall mean, (5) participant's preceding response, and (6) random selection of a value from 1 to 4. For each method, the imputed mean SDS score and standard deviation were compared to the population statistics. The Spearman correlation coefficient, percent misclassified and the Kappa statistic were also calculated. Results: When 10% of values are missing, all the imputation methods except random selection produce Kappa statistics greater than 0.80, indicating 'near perfect' agreement. MI produces the most valid imputed values with a high Kappa statistic (0.89), although both single regression and individual mean imputation also produced favorable results. As the percent of missing information increased to 30%, or when unbalanced missing data were introduced, MI maintained a high Kappa statistic. The individual mean and single regression method produced Kappas in the 'substantial agreement' range
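The individual-mean strategy compared in the study can be sketched for a 1-to-4 item scale (hypothetical data, not the study's code): each missing item is filled with the rounded mean of the respondent's own answered items, after which the total score is computed as usual.

```python
def individual_mean_impute(responses):
    """responses: one respondent's items, ints 1-4 or None if missing.
    Fill each missing item with the rounded mean of that respondent's
    own observed items (the 'individual mean' strategy; the 'overall
    mean' variant would average other respondents' answers instead)."""
    observed = [r for r in responses if r is not None]
    fill = round(sum(observed) / len(observed))
    return [fill if r is None else r for r in responses]

# 8 of a respondent's items, two missing
completed = individual_mean_impute([2, 3, None, 4, 3, None, 2, 3])
score = sum(completed)   # contributes to the SDS total
```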
NOVEL REVERSIBLE VARIABLE PRECISION MULTIPLIER USING REVERSIBLE LOGIC GATES
M. Saravanan; K. Suresh Manic
2014-01-01
Multipliers play a vital role in digital systems, especially in digital processors. Many algorithms and designs have been proposed in earlier works, but there is still a need and a great interest in designing less complex, low power consuming, faster multipliers. Reversible logic design has become a promising technology, gaining greater interest due to less dissipation of heat and low power consumption. In this study a reversible logic gate based design of variable precision multi...
The Mortar Element Method with Lagrange Multipliers for Stokes Problem
Yaqin Jiang
2007-01-01
In this paper, we propose a mortar element method with Lagrange multiplier for incompressible Stokes problem, i.e., the matching constraints of velocity on mortar edges are expressed in terms of Lagrange multipliers. We also present P1 nonconforming element attached to the subdomains. By proving inf-sup condition, we derive optimal error estimates for velocity and pressure. Moreover, we obtain satisfactory approximation for normal derivatives of the velocity across the interfaces.
High Speed Area Efficient 8-point FFT using Vedic Multiplier
Avneesh Kumar Mishra
2014-12-01
Full Text Available A high speed fast Fourier transform (FFT) design using three algorithms is presented in this paper. In algorithm 3, a 4-bit Vedic multiplier based technique is used in the FFT. This technique uses three 4-bit ripple carry adders and four 2×2 Vedic multipliers. The main parameters calculated are the number of slices, 4-input LUTs, and the maximum combinational path delay.
PRIMAL: Fast and accurate pedigree-based imputation from sequence data in a founder population.
Oren E Livne
2015-03-01
Full Text Available Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy of Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed data set will enable us to better study the relative contribution of rare and common variants to human phenotypes, as well as the parental origin effect of disease risk alleles, in >1,000 individuals at minimal cost.
Siddique, Juned; Reiter, Jerome P; Brincks, Ahnalee; Gibbons, Robert D; Crespi, Catherine M; Brown, C Hendricks
2015-11-20
There are many advantages to individual participant data meta-analysis for combining data from multiple studies. These advantages include greater power to detect effects, increased sample heterogeneity, and the ability to perform more sophisticated analyses than meta-analyses that rely on published results. However, a fundamental challenge is that it is unlikely that variables of interest are measured the same way in all of the studies to be combined. We propose that this situation can be viewed as a missing data problem in which some outcomes are entirely missing within some trials and use multiple imputation to fill in missing measurements. We apply our method to five longitudinal adolescent depression trials where four studies used one depression measure and the fifth study used a different depression measure. None of the five studies contained both depression measures. We describe a multiple imputation approach for filling in missing depression measures that makes use of external calibration studies in which both depression measures were used. We discuss some practical issues in developing the imputation model including taking into account treatment group and study. We present diagnostics for checking the fit of the imputation model and investigate whether external information is appropriately incorporated into the imputed values.
SparRec: An effective matrix completion framework of missing data imputation for GWAS
Jiang, Bo; Ma, Shiqian; Causey, Jason; Qiao, Linbo; Hardin, Matthew Price; Bitts, Ian; Johnson, Daniel; Zhang, Shuzhong; Huang, Xiuzhen
2016-10-01
Genome-wide association studies present computational challenges for missing data imputation, while advances in genotyping technologies are generating datasets of large sample size with sample sets genotyped on multiple SNP chips. We present a new framework SparRec (Sparse Recovery) for imputation, with the following properties: (1) The optimization models of SparRec, based on low rank and a low number of co-clusters of matrices, are different from current statistical methods. While our low-rank matrix completion (LRMC) model is similar to Mendel-Impute, our matrix co-clustering factorization (MCCF) model is completely new. (2) SparRec, like other matrix completion methods, can flexibly be applied to missing data imputation for large meta-analyses with different cohorts genotyped on different sets of SNPs, even when there is no reference panel. This kind of meta-analysis is very challenging for current statistics-based methods. (3) SparRec has consistent performance and achieves high recovery accuracy even when the missing data rate is as high as 90%. Compared with Mendel-Impute, our low-rank based method achieves similar accuracy and efficiency, while the co-clustering based method has advantages in running time. The testing results show that SparRec has significant advantages and competitive performance over existing state-of-the-art statistical methods including Beagle and fastPhase.
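The low-rank intuition that matrix-completion imputers such as SparRec's LRMC model exploit can be shown in a minimal rank-1 toy example (purely illustrative, unrelated to SparRec's actual optimization): in a rank-1 matrix, any missing entry is pinned down by a complete 2x2 block sharing its row and column.

```python
def rank1_complete(m, i, j):
    """Fill the missing entry m[i][j] (None), assuming the matrix has
    rank 1: then m[i][j] = m[i][k] * m[l][j] / m[l][k] for any row l
    and column k whose three other entries are all observed."""
    for l, row in enumerate(m):
        if l == i or row[j] is None:
            continue
        for k, v in enumerate(m[i]):
            if k != j and v is not None and row[k] not in (None, 0):
                return v * row[j] / row[k]
    raise ValueError("no completing 2x2 block found")

m = [[1, 2, 4],
     [2, None, 8],
     [3, 6, 12]]
filled = rank1_complete(m, 1, 1)   # the rank-1 pattern forces the value 4
```

Real genotype matrices are only approximately low rank, so practical methods minimize a rank surrogate over all missing entries at once rather than solving entries one by one.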
Enders, Craig K; Keller, Brian T; Levy, Roy
2017-05-29
Specialized imputation routines for multilevel data are widely available in software packages, but these methods are generally not equipped to handle a wide range of complexities that are typical of behavioral science data. In particular, existing imputation schemes differ in their ability to handle random slopes, categorical variables, differential relations at Level-1 and Level-2, and incomplete Level-2 variables. Given the limitations of existing imputation tools, the purpose of this manuscript is to describe a flexible imputation approach that can accommodate a diverse set of 2-level analysis problems that includes any of the aforementioned features. The procedure employs a fully conditional specification (also known as chained equations) approach with a latent variable formulation for handling incomplete categorical variables. Computer simulations suggest that the proposed procedure works quite well, with trivial biases in most cases. We provide a software program that implements the imputation strategy, and we use an artificial data set to illustrate its use. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Multiple imputation as a flexible tool for missing data handling in clinical research.
Enders, Craig K
2016-11-18
The last 20 years has seen an uptick in research on missing data problems, and most software applications now implement one or more sophisticated missing data handling routines (e.g., multiple imputation or maximum likelihood estimation). Despite their superior statistical properties (e.g., less stringent assumptions, greater accuracy and power), the adoption of these modern analytic approaches is not uniform in psychology and related disciplines. Thus, the primary goal of this manuscript is to describe and illustrate the application of multiple imputation. Although maximum likelihood estimation is perhaps the easiest method to use in practice, psychological data sets often feature complexities that are currently difficult to handle appropriately in the likelihood framework (e.g., mixtures of categorical and continuous variables), but relatively simple to treat with imputation. The paper describes a number of practical issues that clinical researchers are likely to encounter when applying multiple imputation, including mixtures of categorical and continuous variables, item-level missing data in questionnaires, significance testing, interaction effects, and multilevel missing data. Analysis examples illustrate imputation with software packages that are freely available on the internet.
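A bare-bones chained-equations loop of the kind multiple imputation software implements can be sketched for one incomplete variable (a hypothetical two-variable illustration, not any particular package's algorithm): fit the conditional model on the current completed data, redraw the missing values from the fit plus noise, and iterate.

```python
import random

def fcs_impute(x, y, iters=20, seed=1):
    """Impute missing y's by iterating: fit y ~ x by least squares on
    the current completed data, then redraw each missing y from the
    fitted value plus Gaussian noise matched to the observed residuals."""
    random.seed(seed)
    y = list(y)
    miss = [i for i, v in enumerate(y) if v is None]
    obs = [i for i, v in enumerate(y) if v is not None]
    start = sum(y[i] for i in obs) / len(obs)
    for i in miss:                       # initialize with the observed mean
        y[i] = start
    n = len(x)
    for _ in range(iters):
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sum((x[i] - mx) * (y[i] - my) for i in range(n)) / sxx
        a = my - b * mx
        sd = (sum((y[i] - a - b * x[i]) ** 2 for i in obs) / len(obs)) ** 0.5
        for i in miss:                   # redraw missing values with noise
            y[i] = a + b * x[i] + random.gauss(0, sd)
    return y

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, None, 8.1, None, 12.2]
imputed = fcs_impute(x, y)
```

Running the whole loop M times with different seeds would give the M completed datasets that proper multiple imputation then analyzes and pools.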
Demirtas, Hakan; Hedeker, Donald
2007-02-20
New quasi-imputation and expansion strategies for correlated binary responses are proposed by borrowing ideas from random number generation. The core idea is to convert correlated binary outcomes to multivariate normal outcomes in a sensible way so that re-conversion to the binary scale, after performing multiple imputation, yields the original specified marginal expectations and correlations. This conversion process ensures that the correlations are transformed reasonably which in turn allows us to take advantage of well-developed imputation techniques for Gaussian outcomes. We use the phrase 'quasi' because the original observations are not guaranteed to be preserved. We argue that if the inferential goals are well-defined, it is not necessary to strictly adhere to the established definition of multiple imputation. Our expansion scheme employs a similar strategy where imputation is used as an intermediate step. It leads to proportionally inflated observed patterns, forcing the data set to a complete rectangular format. The plausibility of the proposed methodology is examined by applying it to a wide range of simulated data sets that reflect alternative assumptions on complete data populations and missing-data mechanisms. We also present an application using a data set from obesity research. We conclude that the proposed method is a promising tool for handling incomplete longitudinal or clustered binary outcomes under ignorable non-response mechanisms. Copyright 2006 John Wiley & Sons, Ltd.
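The core conversion idea, drawing correlated normals and thresholding them so each margin hits its target probability, can be sketched as follows. The correlation used on the normal scale here is a hypothetical stand-in; the method described above chooses it so the binary-scale correlations are reproduced after re-conversion.

```python
import numpy as np
from scipy.stats import norm

# Draw correlated normals, then threshold each margin at its p-quantile so
# the binary outcomes have the target success probabilities. The normal-scale
# correlation below is a hypothetical stand-in; the paper's method adjusts it
# so the binary-scale correlations are preserved after re-conversion.
rng = np.random.default_rng(0)
p = np.array([0.3, 0.6])                     # target marginal probabilities
rho = 0.5                                    # correlation on the latent normal scale
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=100_000)
y = (z < norm.ppf(p)).astype(int)            # 1 when the latent normal is below the quantile
print(y.mean(axis=0).round(2))               # approximately the target margins
```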
Multiple imputation and analysis for high-dimensional incomplete proteomics data.
Yin, Xiaoyan; Levy, Daniel; Willinger, Christine; Adourian, Aram; Larson, Martin G
2016-04-15
Multivariable analysis of proteomics data using standard statistical models is hindered by the presence of incomplete data. We faced this issue in a nested case-control study of 135 incident cases of myocardial infarction and 135 pair-matched controls from the Framingham Heart Study Offspring cohort. Plasma protein markers (K = 861) were measured on the case-control pairs (N = 135), and the majority of proteins had missing expression values for a subset of samples. In the setting of many more variables than observations (K ≫ N), we explored and documented the feasibility of multiple imputation approaches along with subsequent analysis of the imputed data sets. Initially, we selected proteins with complete expression data (K = 261) and randomly masked some values as the basis of simulation to tune the imputation and analysis process. We randomly shuffled proteins into several bins, performed multiple imputation within each bin, and followed up with stepwise selection using conditional logistic regression within each bin. This process was repeated hundreds of times. We determined the optimal method of multiple imputation, number of proteins per bin, and number of random shuffles using several performance statistics. We then applied this method to 544 proteins with incomplete expression data (≤ 40% missing values), from which we identified a panel of seven proteins that were jointly associated with myocardial infarction.
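The bin-and-impute step can be sketched as below, using scikit-learn's IterativeImputer as a stand-in for the multiple imputation engine; the dimensions, bin size, and missingness rate are illustrative, not those of the study.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Shuffle variables into bins and impute within each bin so the imputation
# model stays estimable when K >> N. Dimensions, bin size, and missingness
# are illustrative, not those of the proteomics study.
rng = np.random.default_rng(1)
N, K, bin_size = 135, 120, 40
X = rng.normal(size=(N, K))
X[rng.random(X.shape) < 0.2] = np.nan        # ~20% of values missing

order = rng.permutation(K)                   # one random shuffle of the variables
X_imp = np.empty_like(X)
for start in range(0, K, bin_size):
    cols = order[start:start + bin_size]
    imputer = IterativeImputer(random_state=0, max_iter=3)
    X_imp[:, cols] = imputer.fit_transform(X[:, cols])

print(np.isnan(X_imp).sum())                 # 0: every value filled in
```

In the study this shuffle-impute-select cycle was repeated hundreds of times; the sketch shows a single pass.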
Resche-Rigon, Matthieu; White, Ian R
2016-09-19
In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
PRIMAL: Fast and accurate pedigree-based imputation from sequence data in a founder population.
Livne, Oren E; Han, Lide; Alkorta-Aranburu, Gorka; Wentworth-Sheilds, William; Abney, Mark; Ober, Carole; Nicolae, Dan L
2015-03-01
Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy of Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed data set will enable us to better study the relative contribution of rare and common variants on human phenotypes, as well as parental origin effect of disease risk alleles in >1,000 individuals at minimal cost.
Verilog Implementation of an Efficient Multiplier Using Vedic Mathematics
Harsh Yadav
2015-07-01
In this paper, the design of a 16x16 Vedic multiplier is proposed using a 16-bit Modified Carry Select Adder and a 16-bit Kogge-Stone Adder. The Modified Carry Select Adder incorporates the Binary to Excess-1 Converter (BEC) and is faster than conventional adders. The design is implemented in the Verilog Hardware Description Language and tested using the ModelSim simulator. The code is synthesized for the Virtex-7 family with the XC7VX330T device. The Vedic multiplier has applications in digital signal processing, microprocessors, FIR filters, and communication systems. This paper compares the results of the 16x16 Vedic multiplier using the Modified Carry Select Adder with those of the 16x16 Vedic multiplier using the Kogge-Stone Adder. The results show that the 16x16 Vedic multiplier using the Modified Carry Select Adder is more efficient and has a shorter delay than the one using the Kogge-Stone Adder.
High speed multiplier using Nikhilam Sutra algorithm of Vedic mathematics
Pradhan, Manoranjan; Panda, Rutuparna
2014-03-01
This article presents the design of a new high-speed multiplier architecture using the Nikhilam Sutra of Vedic mathematics. The proposed architecture computes the complement of each large operand from its nearest base to perform the multiplication, reducing the multiplication of two large operands to the multiplication of their complements plus an addition. The approach is most efficient when the magnitudes of both operands exceed half of their maximum values. A carry save adder in the multiplier architecture speeds up the addition of partial products. The multiplier circuit is synthesised and simulated using Xilinx ISE 10.1 software and implemented on a Spartan 2 FPGA device (XC2S30-5pq208). Output parameters such as propagation delay and device utilisation are calculated from the synthesis results, and performance in terms of speed and device utilisation is compared with earlier multiplier architectures. The proposed design offers speed improvements over multiplier architectures reported in the literature.
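Mathematically, the Nikhilam Sutra rests on an exact identity: for operands x and y near a base B, x*y = (x + (y - B))*B + (x - B)*(y - B). A small Python model of the arithmetic (the paper's contribution is the hardware realisation, not this identity):

```python
# Multiply via complements from the nearest base, as in the Nikhilam Sutra:
# x*y = (x + (y - base))*base + (x - base)*(y - base), exactly.
def nikhilam_mul(x, y, base):
    dx, dy = x - base, y - base     # signed deviations ("complements") from the base
    left = x + dy                   # equivalently y + dx
    right = dx * dy                 # small product of the deviations
    return left * base + right

assert nikhilam_mul(97, 96, 100) == 97 * 96       # classic example: 9312
assert nikhilam_mul(104, 98, 100) == 104 * 98     # works on either side of the base
assert nikhilam_mul(997, 988, 1000) == 997 * 988
```

The hardware win comes from `right` being a much narrower multiplication than the original operands when both are close to the base.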
Sung, Yun J; Gu, C Charles; Tiwari, Hemant K; Arnett, Donna K; Broeckel, Ulrich; Rao, Dabeeru C
2012-07-01
Genotype imputation infers genotypes at untyped single nucleotide polymorphisms (SNPs) that are present on a reference panel such as those from the HapMap Project. It is popular for increasing statistical power and for comparing results across studies that use different platforms. Imputation for African American populations is challenging because their linkage disequilibrium blocks are shorter and because no ideal reference panel is available due to admixture. In this paper, we evaluated three imputation strategies for African Americans. The intersection strategy used a combined panel consisting of SNPs polymorphic in both CEU and YRI. The union strategy used a panel consisting of SNPs polymorphic in either CEU or YRI. The merge strategy merged results from two separate imputations, one using CEU and the other using YRI. Because investigators are increasingly using data from the 1000 Genomes (1KG) Project for genotype imputation, we evaluated both 1KG-based and HapMap-based imputations. We used 23,707 SNPs from chromosomes 21 and 22 on the Affymetrix SNP Array 6.0 genotyped for 1,075 HyperGEN African Americans. We found that 1KG-based imputations provided a substantially larger number of variants than HapMap-based imputations: about three times as many common variants and eight times as many rare and low-frequency variants. This higher yield is expected because the 1KG panel includes more SNPs. Accuracy rates using 1KG data were slightly lower than those using HapMap data before filtering, but slightly higher after filtering. The union strategy provided the highest imputation yield with the next highest accuracy. The intersection strategy provided the lowest imputation yield but the highest accuracy. The merge strategy provided the lowest imputation accuracy. We observed that SNPs polymorphic only in CEU had much lower accuracy, reducing the accuracy of the union strategy. Our findings suggest that 1KG-based imputations can facilitate discovery of
A New Missing Data Imputation Algorithm Applied to Electrical Data Loggers
Concepción Crespo Turrado
2015-12-01
Nowadays, data collection is a key process in the study of electrical power networks when searching for harmonics and imbalance among phases. In this context, missing data for any of the main electrical variables (phase-to-neutral voltage, phase-to-phase voltage, current in each phase, and power factor) adversely affects any time-series study performed. When this occurs, a data imputation process must be carried out to substitute estimated values for the missing data. This paper presents a novel missing-data imputation method based on multivariate adaptive regression splines (MARS) and compares it with the well-known technique of multivariate imputation by chained equations (MICE). The results obtained demonstrate how the proposed method outperforms the MICE algorithm.
Graffelman, Jan; Nelson, S.; Gogarten, S. M.; Weir, B. S.
2015-01-01
This paper addresses the issue of exact-test-based statistical inference for Hardy-Weinberg equilibrium in the presence of missing genotype data. Missing genotypes are often discarded when markers are tested for Hardy-Weinberg equilibrium, which can lead to bias in the statistical inference about equilibrium. Single and multiple imputation can improve inference on equilibrium. We develop tests for equilibrium in the presence of missingness by using both inbreeding coefficients (or, equivalently, χ2 statistics) and exact p-values. The analysis of a set of markers with a high missing rate from the GENEVA project on prematurity shows that exact inference on equilibrium can be altered considerably when missingness is taken into account. For markers with a high missing rate (>5%), we found that both single and multiple imputation tend to diminish evidence for Hardy-Weinberg disequilibrium. Depending on the imputation method used, 6-13% of the test results changed qualitatively at the 5% level. PMID:26377959
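For a biallelic marker, the chi-square statistic and the inbreeding coefficient mentioned above are tightly linked (chi-square = n*f^2). A sketch of the complete-case computation from observed genotype counts, with missing genotypes simply absent from the counts:

```python
# Chi-square test for HWE from observed genotype counts; genotypes missing
# for a marker are simply absent from these counts (the complete-case
# behaviour whose bias the paper examines). For a biallelic marker the
# statistic equals n * f_hat**2, with f_hat the inbreeding coefficient.
def hwe_chisq(n_aa, n_ab, n_bb):
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)              # frequency of allele A
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    f_hat = 1 - n_ab / (2 * n * p * q)           # inbreeding coefficient
    return chi2, f_hat

chi2, f_hat = hwe_chisq(30, 50, 20)
assert abs(chi2 - (30 + 50 + 20) * f_hat ** 2) < 1e-9   # chi2 == n * f_hat^2
```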
Kristin Meseck
2016-05-01
The main purposes of the present study were to assess the impact of global positioning system (GPS) signal lapses on physical activity analyses, to discover any associations between missing GPS data and environmental and demographic attributes, and to determine whether imputation is an accurate and viable method for correcting GPS data loss. Accelerometer and GPS data of 782 participants from 8 studies were pooled to represent a range of lifestyles and interactions with the built environment. Periods of GPS signal lapse were identified and extracted. Generalised linear mixed models were run with the number of lapses and the length of lapses as outcomes. The signal lapses were imputed using a simple ruleset, and the imputation was validated against person-worn camera imagery. A final generalised linear mixed model was used to identify the difference between the amount of GPS minutes pre- and post-imputation for the activity categories of sedentary, light, and moderate-to-vigorous physical activity. Over 17% of the dataset consisted of GPS data lapses. No strong associations were found between lapse length or number of lapses and the demographic and built environment variables. A significant difference was found between the pre- and post-imputation minutes for each activity category. No demographic or environmental bias was found for length or number of lapses, but imputation of GPS data may make a significant difference for inclusion of physical activity data that occurred during a lapse. Imputing GPS data lapses is a viable technique for returning spatial context to accelerometer data and improving the completeness of the dataset.
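A simple ruleset of this kind can be sketched with pandas: carry the last known coordinate forward through short lapses only, leaving longer lapses missing. The 3-epoch threshold and the latitude trace below are hypothetical, not the study's actual rules.

```python
import pandas as pd

# Fill short GPS lapses by carrying the last fix forward, leaving lapses
# longer than the threshold missing. The 3-epoch limit and the latitude
# trace below are hypothetical.
max_gap = 3
lat = pd.Series([32.71, 32.71, None, None, 32.72,
                 None, None, None, None, 32.73])
filled = lat.ffill(limit=max_gap)
print(int(filled.isna().sum()))   # the 4-epoch lapse keeps one value missing
```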
Ward Judson A
2013-01-01
Background: Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results: Genotyping by Sequencing (GBS) was used to produce highly saturated maps for an R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM, respectively, over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions: GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation
A suggested approach for imputation of missing dietary data for young children in daycare
June Stevens
2015-12-01
Background: Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare, making accurate proxy reports by parents difficult. Objective: The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Design: Data were from children aged 2-5 years in the My Parenting SOS project (n=308; 870 24-h diet recalls). Mixed models were used to simultaneously predict breakfast, dinner, and evening snacks (B+D+ES); lunch; and daytime snacks for all children after adjusting for age, sex, and body mass index (BMI). From these models, we imputed the missing weekday daycare lunches by interpolation using the mean lunch to B+D+ES [L/(B+D+ES)] ratio among non-daycare children on weekdays and the L/(B+D+ES) ratio for all children on weekends. Daytime snack data were used to impute snacks. Results: The reported mean (±standard deviation) weekday intake was lower for daycare children [725 (±324) kcal] compared to non-daycare children [1,048 (±463) kcal]. Weekend intake for all children was 1,173 (±427) kcal. After imputation, weekday caloric intake for daycare children was 1,230 (±409) kcal. Daily intakes that included imputed data were associated with age and sex but not with BMI. Conclusion: This work indicates that imputation is a promising method for improving the precision of daily nutrient data from young children.
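The ratio interpolation can be illustrated with made-up numbers: the missing daycare lunch is imputed by scaling the child's reported B+D+ES intake by an L/(B+D+ES) ratio estimated from children with complete weekday records. Both numbers below are hypothetical, not estimates from the study.

```python
# Impute a missing daycare lunch by scaling reported breakfast + dinner +
# evening snack (B+D+ES) intake by an L/(B+D+ES) ratio. Both numbers here
# are hypothetical, not estimates from the study.
ratio_weekday = 0.45              # assumed L/(B+D+ES) ratio from complete weekday records
b_d_es_kcal = 725.0               # child's reported B+D+ES intake

imputed_lunch = ratio_weekday * b_d_es_kcal
total_kcal = b_d_es_kcal + imputed_lunch
print(round(imputed_lunch), round(total_kcal))
```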
Handling Out-of-Sequence Data: Kalman Filter Methods or Statistical Imputation?
Bhekisipho Twala
2010-01-01
The issue of handling sensor measurement data delayed over single and multiple lags, also known as out-of-sequence measurement (OOSM), is considered. It is argued that this problem can also be addressed using model-based imputation strategies, and their application is demonstrated in comparison with Kalman filter (KF)-based approaches for a multi-sensor tracking prediction problem. The effectiveness of two model-based imputation procedures against five OOSM methods was investigated in Monte Carlo simulation experiments. The delayed measurements were either incorporated (or fused) at the time they finally became available (using OOSM methods) or imputed in a random way, with a higher probability of delays for multiple lags and a lower probability of delays for a single lag (using single or multiple imputation). For a single lag, estimates of target tracking computed from the observed data and those based on a data set in which the delayed measurements were imputed were equally unbiased; however, the KF estimates obtained using the Bayesian framework (BF-KF) were more precise. When the measurements were delayed in a multiple-lag fashion, there were significant differences in bias or precision between multiple imputation (MI) and OOSM methods, with the former exhibiting superior performance at nearly all levels of probability of measurement delay and range of manoeuvring indices. Researchers working on sensor data are encouraged to take advantage of software to implement delayed measurements using MI, as estimates of tracking are more precise and less biased in the presence of delayed multi-sensor data than those derived from an observed-data analysis approach. Defence Science Journal, 2010, 60(1), pp. 87-99. DOI: http://dx.doi.org/10.14429/dsj.60.115
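To make the Kalman filter side of the comparison concrete, a minimal scalar (random-walk) filter is sketched below; the OOSM fusion and imputation machinery themselves are beyond a short example, and the noise values are illustrative, not from the paper.

```python
# Minimal scalar Kalman filter with a random-walk state model; q and r are
# illustrative process and measurement noise variances, not values from the
# paper. Each step predicts, then fuses one measurement z.
def kf_step(x, p, z, q=0.1, r=1.0):
    x_pred, p_pred = x, p + q              # predict (random-walk state)
    k = p_pred / (p_pred + r)              # Kalman gain
    x_new = x_pred + k * (z - x_pred)      # measurement update
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                            # initial state and variance
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kf_step(x, p, z)
assert 0.7 < x < 1.1 and p < 0.5           # estimate converges toward ~1
```

An OOSM arises when a measurement from an earlier time step arrives after later ones have already been fused; the paper's alternative is to treat that slot as missing data and impute it instead.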
Imputation-based population genetics analysis of Plasmodium falciparum malaria parasites.
Samad, Hanif; Coll, Francesc; Preston, Mark D; Ocholla, Harold; Fairhurst, Rick M; Clark, Taane G
2015-04-01
Whole-genome sequencing technologies are being increasingly applied to Plasmodium falciparum clinical isolates to identify genetic determinants of malaria pathogenesis. However, genome-wide discovery methods, such as haplotype scans for signatures of natural selection, are hindered by missing genotypes in sequence data. Poor correlation between single nucleotide polymorphisms (SNPs) in the P. falciparum genome complicates efforts to apply established missing-genotype imputation methods that leverage patterns of linkage disequilibrium (LD). The accuracy of state-of-the-art, LD-based imputation methods (IMPUTE, Beagle) was assessed by measuring allelic r2 for 459 P. falciparum samples from malaria patients in 4 countries: Thailand, Cambodia, Gambia, and Malawi. After restricting our analysis to 86k high-quality SNPs across the populations, we found that the complete-case analysis was restricted to 21k SNPs (24.5%), despite no single SNP having more than 10% missing genotypes. The accuracy of Beagle in filling in missing genotypes was consistently high across all populations (allelic r2, 0.87-0.96), but the performance of IMPUTE was mixed (allelic r2, 0.34-0.99) depending on reference haplotypes and population. Positive selection analysis using Beagle-imputed haplotypes identified loci involved in resistance to chloroquine (crt) in Thailand, Cambodia, and Gambia, sulfadoxine-pyrimethamine (dhfr, dhps) in Cambodia, and artemisinin (kelch13) in Cambodia. Tajima's D-based analysis identified genes under balancing selection that encode well-characterized vaccine candidates: apical merozoite antigen 1 (ama1) and merozoite surface protein 1 (msp1). In contrast, the complete-case analysis failed to identify any well-validated drug resistance or candidate vaccine loci, except kelch13. In a setting of low LD and modest levels of missing genotypes, using Beagle to impute P. falciparum genotypes is a viable strategy for conducting accurate large-scale population genetics and
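Allelic r², the accuracy metric used above, is the squared Pearson correlation between true genotypes (coded as 0/1/2 copies of an allele) and imputed dosages; a sketch with toy data:

```python
import numpy as np

# Allelic r^2: squared Pearson correlation between true genotype counts
# (0/1/2 copies of an allele) and imputed dosages. The toy vectors below
# are illustrative.
def allelic_r2(true_geno, dosage):
    c = np.corrcoef(true_geno, dosage)[0, 1]
    return c * c

true_g = np.array([0, 1, 2, 1, 0, 2, 1, 0])
dose = np.array([0.1, 0.9, 1.8, 1.2, 0.2, 1.9, 0.8, 0.1])
print(round(allelic_r2(true_g, dose), 2))   # high accuracy for this toy imputation
```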
Implementation of Different Low Power Multipliers Using Verilog
Koteswara Rao Ponnuru
2014-06-01
Low power consumption and small area are among the most important criteria for the fabrication of DSP systems and high-performance systems. Optimizing the speed and area of the multiplier is a major design issue, as multiplication is a fundamental building block in all DSP tasks. The objective of a good multiplier is to be physically compact and fast while consuming little power. To reduce the power consumption of a VLSI design, a good approach is to reduce its dynamic power, which is the major part of total power consumption. Two methods are common in current implementations: regular arrays and Wallace trees. Gate-level analyses have suggested that Wallace trees are not only faster than array schemes but also consume much less power. However, these analyses did not take wiring into account, resulting in optimistic timing and power estimates. Continuous advances in microelectronic technology make it possible to use energy more efficiently, encode data more effectively, and reduce power consumption; in particular, many of these technologies target low power consumption to meet the requirements of portable applications, in which a multiplier is a fundamental and widely used arithmetic unit. We compare the operation of different 8-bit multipliers in terms of the power consumed by each. The results help in choosing between serial and parallel multipliers when fabricating different systems. Because multipliers are among the most important components of many systems, analyzing the operation of different multipliers helps in designing systems with lower power consumption and smaller area.
Imputation of genotypes in Danish purebred and two-way crossbred pigs using low-density panels
Xiang, Tao; Ma, Peipei; Ostersen, Tage;
2015-01-01
in crossbred animals and, in particular, in pigs. The extent and pattern of linkage disequilibrium differ in crossbred versus purebred animals, which may impact the performance of imputation. In this study, first we compared different scenarios of imputation from 5 K to 8 K single nucleotide polymorphisms...
Erler, Nicole S; Rizopoulos, Dimitris; Rosmalen, Joost van; Jaddoe, Vincent W V; Franco, Oscar H; Lesaffre, Emmanuel M E H
2016-07-30
Incomplete data are generally a challenge to the analysis of most large studies. The current gold standard to account for missing data is multiple imputation, and more specifically multiple imputation with chained equations (MICE). Numerous studies have been conducted to illustrate the performance of MICE for missing covariate data. The results show that the method works well in various situations. However, less is known about its performance in more complex models, specifically when the outcome is multivariate as in longitudinal studies. In current practice, the multivariate nature of the longitudinal outcome is often neglected in the imputation procedure, or only the baseline outcome is used to impute missing covariates. In this work, we evaluate the performance of MICE using different strategies to include a longitudinal outcome into the imputation models and compare it with a fully Bayesian approach that jointly imputes missing values and estimates the parameters of the longitudinal model. Results from simulation and a real data example show that MICE requires the analyst to correctly specify which components of the longitudinal process need to be included in the imputation models in order to obtain unbiased results. The full Bayesian approach, on the other hand, does not require the analyst to explicitly specify how the longitudinal outcome enters the imputation models. It performed well under different scenarios. Copyright © 2016 John Wiley & Sons, Ltd.
Dassonneville, R; Brøndum, Rasmus Froberg; Druet, T
2011-01-01
The purpose of this study was to investigate the imputation error and loss of reliability of direct genomic values (DGV) or genomically enhanced breeding values (GEBV) when using genotypes imputed from a 3,000-marker single nucleotide polymorphism (SNP) panel to a 50,000-marker SNP panel. Data co...
2010-04-01
...'s knowledge, approval or acquiescence. The organization's acceptance of the benefits derived from...: (a) Conduct imputed from an individual to an organization. We may impute the fraudulent, criminal, or... associated with an organization, to that organization when the improper conduct occurred in connection...
31 CFR 19.630 - May the Department of the Treasury impute conduct of one person to another?
2010-07-01
...'s knowledge, approval or acquiescence. The organization's acceptance of the benefits derived from...: (a) Conduct imputed from an individual to an organization. We may impute the fraudulent, criminal, or... associated with an organization, to that organization when the improper conduct occurred in connection...
Brøndum, Rasmus Froberg; Guldbrandtsen, Bernt; Sahana, Goutam
2014-01-01
Background The advent of low cost next generation sequencing has made it possible to sequence a large number of dairy and beef bulls which can be used as a reference for imputation of whole genome sequence data. The aim of this study was to investigate the accuracy and speed of imputation from...
Bryan N Howie
2009-06-01
Genotype imputation methods are now being widely used in the analysis of genome-wide association studies. Most imputation analyses to date have used the HapMap as a reference dataset, but new reference panels (such as controls genotyped on multiple SNP chips and densely typed samples from the 1,000 Genomes Project) will soon allow a broader range of SNPs to be imputed with higher accuracy, thereby increasing power. We describe a genotype imputation method (IMPUTE version 2) that is designed to address the challenges presented by these new datasets. The main innovation of our approach is a flexible modelling framework that increases accuracy and combines information across multiple reference panels while remaining computationally feasible. We find that IMPUTE v2 attains higher accuracy than other methods when the HapMap provides the sole reference panel, but that the size of the panel constrains the improvements that can be made. We also find that imputation accuracy can be greatly enhanced by expanding the reference panel to contain thousands of chromosomes and that IMPUTE v2 outperforms other methods in this setting at both rare and common SNPs, with overall error rates that are 15%-20% lower than those of the closest competing method. One particularly challenging aspect of next-generation association studies is to integrate information across multiple reference panels genotyped on different sets of SNPs; we show that our approach to this problem has practical advantages over other suggested solutions.
A reference panel of 64,976 haplotypes for genotype imputation.
McCarthy, Shane; Das, Sayantan; Kretzschmar, Warren; Delaneau, Olivier; Wood, Andrew R; Teumer, Alexander; Kang, Hyun Min; Fuchsberger, Christian; Danecek, Petr; Sharp, Kevin; Luo, Yang; Sidore, Carlo; Kwong, Alan; Timpson, Nicholas; Koskinen, Seppo; Vrieze, Scott; Scott, Laura J; Zhang, He; Mahajan, Anubha; Veldink, Jan; Peters, Ulrike; Pato, Carlos; van Duijn, Cornelia M; Gillies, Christopher E; Gandin, Ilaria; Mezzavilla, Massimo; Gilly, Arthur; Cocca, Massimiliano; Traglia, Michela; Angius, Andrea; Barrett, Jeffrey C; Boomsma, Dorrett; Branham, Kari; Breen, Gerome; Brummett, Chad M; Busonero, Fabio; Campbell, Harry; Chan, Andrew; Chen, Sai; Chew, Emily; Collins, Francis S; Corbin, Laura J; Smith, George Davey; Dedoussis, George; Dorr, Marcus; Farmaki, Aliki-Eleni; Ferrucci, Luigi; Forer, Lukas; Fraser, Ross M; Gabriel, Stacey; Levy, Shawn; Groop, Leif; Harrison, Tabitha; Hattersley, Andrew; Holmen, Oddgeir L; Hveem, Kristian; Kretzler, Matthias; Lee, James C; McGue, Matt; Meitinger, Thomas; Melzer, David; Min, Josine L; Mohlke, Karen L; Vincent, John B; Nauck, Matthias; Nickerson, Deborah; Palotie, Aarno; Pato, Michele; Pirastu, Nicola; McInnis, Melvin; Richards, J Brent; Sala, Cinzia; Salomaa, Veikko; Schlessinger, David; Schoenherr, Sebastian; Slagboom, P Eline; Small, Kerrin; Spector, Timothy; Stambolian, Dwight; Tuke, Marcus; Tuomilehto, Jaakko; Van den Berg, Leonard H; Van Rheenen, Wouter; Volker, Uwe; Wijmenga, Cisca; Toniolo, Daniela; Zeggini, Eleftheria; Gasparini, Paolo; Sampson, Matthew G; Wilson, James F; Frayling, Timothy; de Bakker, Paul I W; Swertz, Morris A; McCarroll, Steven; Kooperberg, Charles; Dekker, Annelot; Altshuler, David; Willer, Cristen; Iacono, William; Ripatti, Samuli; Soranzo, Nicole; Walter, Klaudia; Swaroop, Anand; Cucca, Francesco; Anderson, Carl A; Myers, Richard M; Boehnke, Michael; McCarthy, Mark I; Durbin, Richard
2016-10-01
We describe a reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry. Using this resource leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies, and it can help to discover and refine causal loci. We describe remote server resources that allow researchers to carry out imputation and phasing consistently and efficiently.
Underwater implosions of large format photo-multiplier tubes
Diwan, Milind; Dolph, Jeffrey [Brookhaven National Laboratory, P.O. Box 5000, Bldg 510E, Upton, NY 11973 (United States); Ling, Jiajie, E-mail: jjling@bnl.gov [Brookhaven National Laboratory, P.O. Box 5000, Bldg 510E, Upton, NY 11973 (United States); Russo, Thomas; Sharma, Rahul; Sexton, Kenneth; Simos, Nikolaos; Stewart, James; Tanaka, Hidekazu [Brookhaven National Laboratory, P.O. Box 5000, Bldg 510E, Upton, NY 11973 (United States); Arnold, Douglas; Tabor, Philip; Turner, Stephen [Naval Underwater Warfare Center, Newport, RI 02841 (United States)
2012-04-01
Large, deep, well-shielded liquid detectors have become an important technology for the detection of neutrinos over a wide dynamic range from a few MeV to TeV. The critical component of this technology is the large-format semi-hemispherical photo-multiplier tube with diameters in the range of 25-50 cm. The survival of an assembled array of these photo-multiplier tubes under high hydrostatic pressure is the subject of this study. We present results from an R&D program intended to understand the modes of failure when a photo-multiplier tube implodes under hydrostatic pressure. Our tests include detailed measurements of the shock wave that results from the implosion of a photo-multiplier tube and a comparison of the test data to modern hydrodynamic simulation codes. Using these results, we can extrapolate to other tube geometries and make recommendations on deployment of photo-multiplier tubes in deep-water detectors, with a focus on mitigating the risk of an implosion shock wave causing a chain-reaction loss of multiple tubes.
Four-quadrant analogue multiplier using operational amplifier
Riewruja, Vanchai; Rerkratn, Apinai
2011-04-01
A method to realise a four-quadrant analogue multiplier using general-purpose operational amplifiers (opamps) as the only active elements is described in this article. The realisation is based on the quarter-square technique, which exploits the inherent square-law characteristic of the class AB output stage of the opamp. The multiplier can be built from the proposed structure using either bipolar or complementary metal-oxide-semiconductor (CMOS) opamps. The operating principle of the proposed multiplier has been confirmed with the PSPICE analogue simulation program. Simulation results reveal that the proposed scheme provides adequate performance for a four-quadrant analogue multiplier. Experimental implementations of the proposed multiplier using bipolar and CMOS opamps were performed to verify the circuit performance. Measured results of the experimental schemes based on bipolar and CMOS opamps with a supply voltage of ±2.4 V show worst-case relative errors of 0.32% and 0.47%, and total harmonic distortions of 0.47% and 0.98%, respectively.
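The quarter-square identity the circuit exploits can be checked numerically. The Python sketch below is an idealized model (the function name is ours); the squaring is done exactly here, whereas in the circuit it is provided by the opamp's class AB square-law characteristic:

```python
def quarter_square_multiply(x, y):
    """Four-quadrant multiplication via the quarter-square identity:
    x*y = ((x + y)**2 - (x - y)**2) / 4.
    Works for any sign combination of x and y, hence 'four-quadrant'."""
    return ((x + y) ** 2 - (x - y) ** 2) / 4.0
```

Because only sums, differences, and squares appear, two matched square-law stages plus a subtractor suffice to realise the product.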
OPTIMIZATION OF HYBRID FINAL ADDER FOR THE HIGH PERFORMANCE MULTIPLIER
RAMKUMAR B.
2013-04-01
In this work we evaluated the arrival profile of the partial-products reduction tree of an HPM-based multiplier in two ways: (1) manual delay and area calculation through logical effort, and (2) ASIC implementation. Based on the arrival profile, we examined some recently proposed optimal adders and then proposed an optimal hybrid adder for the final addition in the HPM-based parallel multiplier. The work derives mathematical expressions for the size of the different regions in the partial-product arrival profile, which helps in designing an optimal adder for each region. The performance of the proposed hybrid adder is evaluated in terms of area, power and delay using 90 nm technology. The work covers manual calculation for 8-bit and ASIC simulation of different adder designs for 8-, 16-, 32- and 64-bit multiplier sizes.
Performance evaluation of high speed compressors for high speed multipliers
Nirlakalla Ravi
2011-01-01
This paper describes high-speed compressors for high-speed parallel multipliers such as the Booth multiplier and Wallace-tree multiplier in digital signal processing (DSP). It presents 4-3, 5-3, 6-3 and 7-3 compressors for high-speed multiplication. These compressors shorten the vertical critical path more rapidly than conventional compressors: a conventional 5-3 compressor takes four steps to reduce the bits from 5 to 3, whereas the proposed 5-3 compressor takes only two. The compressors are simulated with H-SPICE at a temperature of 25 °C and a supply voltage of 2.0 V using 90 nm MOSIS technology. The power, delay, power-delay product (PDP) and energy-delay product (EDP) of the compressors are calculated to analyse the total propagation delay and energy consumption. All the compressors are designed with half adders and full adders only.
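Functionally, an n-3 compressor counts equally weighted input bits and re-encodes the count in three bits. This Python sketch (our own model, illustrating only the logical function, not the circuit-level step counts or delays the paper measures) shows what a 7-3 compressor computes:

```python
def compressor_7_3(bits):
    """Logical model of a 7-3 compressor: count the ones among up to
    seven equally weighted input bits and return the count as a 3-bit
    tuple (MSB first), reducing seven partial-product rows to three."""
    assert len(bits) <= 7 and all(b in (0, 1) for b in bits)
    count = sum(bits)
    return ((count >> 2) & 1, (count >> 1) & 1, count & 1)
```

In a reduction tree, chained compressors of this kind collapse the tall columns of partial products before the final carry-propagate addition.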
Multiplier Accounting of Indian Mining Industry: The Application
Hussain, Azhar; Karmakar, Netai Chandra
2017-10-01
In the previous paper (Hussain and Karmakar in Inst Eng India Ser, 2014. doi: 10.1007/s40033-014-0058-0), the concepts of the input-output transaction matrix and the multiplier were explained in detail. Input-output multipliers are indicators used for predicting the total impact on an economy of changes in its industrial demand and output, and they are calculated from the transaction matrix. The aim of this paper is to present an application of these concepts to the mining industry, showing the progress of different mining sectors over time and interpreting the results obtained. The analysis shows that a few mineral industries saw significant growth in their multiplier values over the years.
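As a worked illustration of the concept, the following Python sketch computes simple output multipliers as column sums of the Leontief inverse derived from a transaction matrix. The two-sector figures are made up for illustration, not data from the paper:

```python
def output_multipliers(Z, x):
    """Output (Type I) multipliers for a two-sector economy.
    Z[i][j]: inter-industry flow from sector i to sector j;
    x[j]: total output of sector j.
    The multiplier of sector j is the column sum of the Leontief
    inverse (I - A)^-1, where A[i][j] = Z[i][j] / x[j]."""
    A = [[Z[i][j] / x[j] for j in range(2)] for i in range(2)]
    # Leontief matrix L = I - A, inverted directly for the 2x2 case
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [inv[0][j] + inv[1][j] for j in range(2)]

# Hypothetical flows and outputs for two sectors (e.g. mining, manufacturing)
m = output_multipliers([[50, 20], [30, 40]], [200, 100])
```

A multiplier above 1 means each extra unit of final demand for that sector induces more than one unit of total output across the economy.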
Dark energy from modified gravity with Lagrange multipliers
Capozziello, Salvatore [Dipartimento di Scienze Fisiche, Università 'Federico II' di Napoli (Italy)] [INFN Sezione di Napoli, Complesso Universitario di Monte S. Angelo, Ed. N, via Cintia, I-80126 Napoli (Italy)]; Matsumoto, Jiro [Department of Physics, Nagoya University, Nagoya 464-8602 (Japan)]; Nojiri, Shin'ichi, E-mail: nojiri@phys.nagoya-u.ac.j [Department of Physics, Nagoya University, Nagoya 464-8602 (Japan)] [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Nagoya 464-8602 (Japan)]; Odintsov, Sergei D. [Institucio Catalana de Recerca i Estudis Avancats (ICREA) and Institut de Ciencies de l'Espai (IEEC-CSIC), Campus UAB, Facultat de Ciencies, Torre C5-Par-2a pl, E-08193 Bellaterra, Barcelona (Spain)]
2010-09-27
We study scalar-tensor theory, k-essence and modified gravity with a Lagrange multiplier constraint whose role is to reduce the number of degrees of freedom. Dark energy cosmologies of different types (ΛCDM, unified inflation with dark energy, a smooth non-phantom/phantom transition epoch) are reconstructed in such models. It is demonstrated that the presence of the Lagrange multiplier simplifies the reconstruction scenario. It is shown that the mathematical equivalence between scalar theory and F(R) gravity is broken by the presence of the constraint. The cosmological evolution is defined by a second function F_2(R) dictated by the constraint, while the convenient F(R) gravity sector remains relevant for local tests. This opens the possibility of making an originally non-realistic theory viable by adding the corresponding constraint. A general discussion of the role of Lagrange multipliers in making higher-derivative gravity canonical is developed.
Performance Evaluation of Complex Multiplier Using Advance Algorithm
Gopichand D. Khandale
2013-06-01
In this paper, a VHDL implementation of a complex-number multiplier using ancient Vedic mathematics is presented and compared with the conventional modified Booth algorithm. The idea for the multiplier unit is adopted from the ancient Indian mathematical texts, the Vedas. The Urdhva Tiryakbhyam sutra (method) was selected for implementation since it is applicable to all cases of multiplication. Multiplication using the Urdhva Tiryakbhyam sutra proceeds vertically and crosswise; the feature of this method is that any multi-bit multiplication can be reduced to single-bit multiplications and additions. On account of these formulas, the partial products and sums are generated in one step, which reduces carry propagation from LSB to MSB. The implementation of Vedic mathematics and its application to the complex multiplier ensure a substantial reduction in propagation delay. Simulation results for 4-bit multiplication using Booth's algorithm and using the Vedic sutra are illustrated.
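The "vertically and crosswise" scheme can be sketched in Python for decimal digits: every column's crosswise partial products are summed in one step, followed by a single carry-propagation pass from LSB to MSB. The function name and digit representation are illustrative, not from the paper:

```python
def urdhva_multiply(a_digits, b_digits):
    """Urdhva Tiryakbhyam ('vertically and crosswise') multiplication.
    Digits are given least-significant first. Each output column is the
    sum of crosswise digit products a[i]*b[j] with i + j fixed, so all
    partial products of a column are generated together before one
    final carry-propagation pass."""
    n, m = len(a_digits), len(b_digits)
    cols = [0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            cols[i + j] += a_digits[i] * b_digits[j]
    # single carry propagation from LSB to MSB
    result, carry = [], 0
    for c in cols:
        carry, digit = divmod(c + carry, 10)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        result.append(digit)
    return result  # least-significant digit first
```

In hardware the same structure applies to binary digits, which is what allows the column sums to be formed in parallel.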
Implementation of MAC by using Modified Vedic Multiplier
Sreelekshmi M. S.
2013-09-01
The multiplier-accumulator unit (MAC) is a part of digital signal processors, and the speed of the MAC depends on the speed of its multiplier. By using an efficient Vedic multiplier, which excels in terms of speed, power and area, the performance of the MAC can be increased. To this end, a fast method of multiplication based on ancient Indian Vedic mathematics is proposed in this paper. Among the various multiplication methods in Vedic mathematics, Urdhva Tiryagbhyam is used, and the multiplication is performed for 32 × 32 bits. Urdhva Tiryagbhyam is a general multiplication formula applicable to all cases of multiplication. The adder used is a carry look-ahead adder; the proposed design shows an improvement over a carry-save adder.
VLSI IMPLEMENTATION OF AN ANALOG MULTIPLIER FOR MODEM
SRIVIDYA .P,
2011-02-01
A modem (modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information and demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. This requires mixing signals of different frequencies or types, which motivates the use of mixers or multipliers in various RF applications. In this paper, a CMOS analog multiplier with a small number of transistors, capable of operating at high frequencies with low power and high linearity, is proposed. The multiplier works on the basis of a parallel-connected MOS operation circuit.
Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi
2016-04-01
Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31900 km2). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE) at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix
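Of the gap-filling methods compared, kNN imputation is the simplest to sketch. The Python toy below (illustrative names and data, unrelated to the inventory's traits) replaces a missing value with the mean of that variable over the k rows nearest in the observed variables:

```python
import math

def knn_impute(rows, target_idx, k=2):
    """Single imputation by k-nearest neighbours (kNN): a missing value
    (None) in column target_idx is replaced by the mean of that column
    over the k complete rows closest in the remaining columns."""
    filled = [list(r) for r in rows]
    for i, row in enumerate(rows):
        if row[target_idx] is None:
            def dist(other):
                # Euclidean distance over columns observed in both rows
                return math.sqrt(sum((row[t] - other[t]) ** 2
                                     for t in range(len(row))
                                     if t != target_idx
                                     and row[t] is not None
                                     and other[t] is not None))
            donors = [r for r in rows if r[target_idx] is not None]
            donors.sort(key=dist)
            filled[i][target_idx] = sum(r[target_idx]
                                        for r in donors[:k]) / k
    return filled
```

Species-mean imputation corresponds to the degenerate case where the "distance" is species identity; MICE instead draws each missing value from a chained sequence of conditional regression models, which is what makes it a multiple (rather than single) imputation method.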
Isometric Multipliers of $L^p(G, X)$
U B Tewari; P K Chaurasia
2005-02-01
Let $G$ be a locally compact group with a fixed right Haar measure and $X$ a separable Banach space. Let $L^p(G, X)$ be the space of $X$-valued measurable functions whose norm-functions are in the usual $L^p$. A left multiplier of $L^p(G, X)$ is a bounded linear operator on $L^p(G, X)$ which commutes with all left translations. We use the characterization of isometries of $L^p(G, X)$ onto itself to characterize the isometric, invertible, left multipliers of $L^p(G, X)$ for $1 \le p < \infty$, $p \neq 2$, under the assumption that $X$ is not the $l^p$-direct sum of two non-zero subspaces. In fact we prove that if $T$ is an isometric left multiplier of $L^p(G, X)$ onto itself then there exist a $y \in G$ and an isometry $U$ of $X$ onto itself such that $Tf(x) = U(R_y f)(x)$. As an application, we determine the isometric left multipliers of $L^1 \cap L^p(G, X)$ and $L^1 \cap C_0(G, X)$ where $G$ is non-compact and $X$ is not the $l^p$-direct sum of two non-zero subspaces. If $G$ is a locally compact abelian group and $H$ is a separable Hilbert space, we define $A^p(G, H)=\{f\in L^1(G, H):\hat{f}\in L^p(\Gamma, H)\}$ where $\Gamma$ is the dual group of $G$. We characterize the isometric, invertible, left multipliers of $A^p(G, H)$, provided $G$ is non-compact. Finally, we use the characterization of isometries of $C(G, H)$ for compact $G$ to determine the isometric left multipliers of $C(G, H)$, provided $H^*$ is strictly convex.
Energy and area efficient hierarchy multiplier architecture based on Vedic mathematics and GDI logic
Mohan Shoba
2017-02-01
A hierarchy multiplier is attractive because of its ability to carry out the multiplication operation within one clock cycle, but existing hierarchical multipliers occupy more area and incur more delay. Therefore, this paper proposes a method to reduce the computation delay of the hierarchy multiplier by employing a carry-select adder (CSLA) and a Binary to Excess-1 Converter (BEC). The use of the BEC eliminates n/4 of the adders in the conventional addition scheme, where n denotes the multiplier input width. As the area of the hierarchy multiplier is determined by its base multiplier, the base multiplier is realized with the proposed Vedic multiplier, which has a small area and operates with less delay than conventional multipliers. In addition, power consumption in the hierarchy multiplier is reduced by implementing the design with full-swing Gate Diffusion Input (GDI) logic. The performance of the proposed and existing multipliers is evaluated with the Cadence SPICE simulator using a 45 nm technology model. From the simulation results, the delay and power consumption are calculated, and the area is measured from the corresponding layout for the same technology model. The results show that the proposed multiplier operates with a 17% lower power-delay product than the most recently reported hierarchy multiplier. A Monte Carlo simulation is performed to assess the robustness of the proposed hierarchy multiplier.
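A Binary to Excess-1 Converter simply produces its input word plus one, which is why it can replace the duplicated "carry-in = 1" adder stage of a carry-select addition. A bit-level Python sketch (our own model, not the paper's circuit):

```python
def bec(bits):
    """Binary to Excess-1 Converter (BEC): returns the input word plus
    one, using only XOR/AND per bit instead of a full adder stage.
    bits: list of 0/1, least-significant first; the result carries one
    extra bit so overflow is never lost."""
    out, carry = [], 1           # adding the constant 1
    for b in bits:
        out.append(b ^ carry)    # sum bit
        carry = b & carry        # carry ripples only through trailing ones
    out.append(carry)
    return out
```

In a carry-select adder, the "carry-in = 0" sum is computed once and the BEC derives the "carry-in = 1" alternative from it, saving the second ripple-carry chain.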
Goode, Ellen L; Fridley, Brooke L; Vierkant, Robert A
2009-01-01
, CDK4, RB1, CDKN2D, and CCNE1) and one gene region (CDKN2A-CDKN2B). Because of the semi-overlapping nature of the 123 assayed tagging SNPs, we performed multiple imputation based on fastPHASE using data from White non-Hispanic study participants and participants in the international HapMap Consortium and National Institute of Environmental Health Sciences SNPs Program. Logistic regression assuming a log-additive model was done on combined and imputed data. We observed strengthened signals in imputation-based analyses at several SNPs, particularly CDKN2A-CDKN2B rs3731239; CCND1 rs602652, rs3212879, rs649392, and rs3212891; CDK2 rs2069391, rs2069414, and rs17528736; and CCNE1 rs3218036. These results exemplify the utility of imputation in candidate gene studies, lend evidence to a role of cell cycle genes in ovarian cancer etiology, and suggest a reduced set of SNPs to target in additional cases and controls.
Consequences of splitting whole-genome sequencing effort over multiple breeds on imputation accuracy
Bouwman, A.C.; Veerkamp, R.F.
2014-01-01
The aim of this study was to determine the consequences of splitting sequencing effort over multiple breeds for imputation accuracy from a high-density SNP chip towards whole-genome sequence. Such information would assist, for instance, numerically smaller cattle breeds, but also pig and chicken
Mapping change of older forest with nearest-neighbor imputation and Landsat time-series
Janet L. Ohmann; Matthew J. Gregory; Heather M. Roberts; Warren B. Cohen; Robert E. Kennedy; Zhiqiang. Yang
2012-01-01
The Northwest Forest Plan (NWFP), which aims to conserve late-successional and old-growth forests (older forests) and associated species, established new policies on federal lands in the Pacific Northwest USA. As part of monitoring for the NWFP, we tested nearest-neighbor imputation for mapping change in older forest, defined by threshold values for forest attributes...
A Comparison of Imputation Strategies for Ordinal Missing Data on Likert Scale Variables.
Wu, Wei; Jia, Fan; Enders, Craig
2015-01-01
This article compares a variety of imputation strategies for ordinal missing data on Likert scale variables (number of categories = 2, 3, 5, or 7) in recovering reliability coefficients, mean scale scores, and regression coefficients of predicting one scale score from another. The examined strategies include imputing from normal data models with naïve rounding or without rounding, from latent variable models, and from categorical data models such as discriminant analysis and binary logistic regression (for dichotomous data only) and multinomial and proportional odds logistic regression (for polytomous data only). The results suggest that both the normal model approach without rounding and the latent variable model approach perform well for either dichotomous or polytomous data regardless of sample size, missing data proportion, and asymmetry of item distributions. The discriminant analysis approach also performs well for dichotomous data. Naïvely rounding normal imputations or using logistic regression models to impute ordinal data is not recommended, as either can lead to substantial bias in all or some of the parameters.
Combining Fourier and lagged k-nearest neighbor imputation for biomedical time series data.
Rahman, Shah Atiqur; Huang, Yuxiao; Claassen, Jan; Heintzman, Nathaniel; Kleinberg, Samantha
2015-12-01
Most clinical and biomedical data contain missing values. A patient's record may be split across multiple institutions, devices may fail, and sensors may not be worn at all times. While these missing values are often ignored, this can lead to bias and error when the data are mined. Further, the data are not simply missing at random. Instead, the measurement of a variable such as blood glucose may depend on its prior values as well as on those of other variables. These dependencies exist across time as well, but current methods have yet to incorporate these temporal relationships together with multiple types of missingness. To address this, we propose an imputation method (FLk-NN) that incorporates time-lagged correlations both within and across variables by combining two imputation methods, based on an extension to k-NN and the Fourier transform. This enables imputation of missing values even when all data at a time point are missing and when there are different types of missingness both within and across variables. In comparison to other approaches on three biological datasets (simulated and actual Type 1 diabetes datasets, and multi-modality neurological ICU monitoring), the proposed method has the highest imputation accuracy. This was true for up to half the data being missing and when consecutive missing values are a significant fraction of the overall time series length.
MacNeil Vroomen, Janet; Eekhout, Iris; Dijkgraaf, Marcel G; van Hout, Hein; de Rooij, Sophia E; Heymans, Martijn W; Bosmans, Judith E
2015-01-01
Cost and effect data often have missing data because economic evaluations are frequently added onto clinical studies where cost data are rarely the primary outcome. The objective of this article was to investigate which multiple imputation strategy is most appropriate to use for missing cost-effectiveness data.
Multiple Imputation for Multivariate Missing-Data Problems: A Data Analyst's Perspective.
Schafer, Joseph L.; Olsen, Maren K.
1998-01-01
The key ideas of multiple imputation for multivariate missing data problems are reviewed. Software programs available for this analysis are described, and their use is illustrated with data from the Adolescent Alcohol Prevention Trial (W. Hansen and J. Graham, 1991). (SLD)
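The combining step at the heart of multiple imputation, Rubin's rules, is compact enough to state in code. This Python sketch pools the m per-imputation estimates and their variances (these are the standard formulas, not code from the software the article reviews):

```python
def pool_rubin(estimates, variances):
    """Rubin's rules for multiple imputation: combine m per-imputation
    point estimates and their within-imputation variances into one
    pooled estimate and its total variance."""
    m = len(estimates)
    qbar = sum(estimates) / m                   # pooled point estimate
    ubar = sum(variances) / m                   # within-imputation variance
    b = sum((q - qbar) ** 2
            for q in estimates) / (m - 1)       # between-imputation variance
    t = ubar + (1 + 1 / m) * b                  # total variance
    return qbar, t
```

The between-imputation component b is what propagates the uncertainty due to the missing data; analysing a single imputed dataset would omit it and understate the standard errors.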
Reporting the Use of Multiple Imputation for Missing Data in Higher Education Research
Manly, Catherine A.; Wells, Ryan S.
2015-01-01
Higher education researchers using survey data often face decisions about handling missing data. Multiple imputation (MI) is considered by many statisticians to be the most appropriate technique for addressing missing data in many circumstances. In particular, it has been shown to be preferable to listwise deletion, which has historically been a…
Handling Missing Data: Analysis of a Challenging Data Set Using Multiple Imputation
Pampaka, Maria; Hutcheson, Graeme; Williams, Julian
2016-01-01
Missing data is endemic in much educational research. However, practices such as step-wise regression common in the educational research literature have been shown to be dangerous when significant data are missing, and multiple imputation (MI) is generally recommended by statisticians. In this paper, we provide a review of these advances and their…
Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance
Finch, W. Holmes
2016-01-01
Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…
A hot-deck multiple imputation procedure for gaps in longitudinal recurrent event histories.
Wang, Chia-Ning; Little, Roderick; Nan, Bin; Harlow, Siobán D
2011-12-01
We propose a regression-based hot-deck multiple imputation method for gaps of missing data in longitudinal studies, where subjects experience a recurrent event process and a terminal event. Examples are repeated asthma episodes and death, or menstrual periods and menopause, as in our motivating application. Research interest concerns the onset time of a marker event, defined by the recurrent event process, or the duration from this marker event to the final event. Gaps in the recorded event history make it difficult to determine the onset time of the marker event and, hence, the duration from onset to the final event. Simple approaches such as jumping gap times or dropping cases with gaps have obvious limitations. We propose a procedure for imputing information in the gaps by substituting information from a matched individual with a completely recorded history in the corresponding interval. Predictive mean matching is used to incorporate information on longitudinal characteristics of the repeated process and the final event time. Multiple imputation is used to propagate imputation uncertainty. The procedure is applied to an important data set for assessing the timing and duration of the menopausal transition, and its performance is assessed by a simulation study.
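The matching idea behind the procedure, predictive mean matching, can be sketched as follows. This hypothetical Python toy illustrates only the donor-selection step, not the full regression-based gap imputation of the paper:

```python
def pmm_hot_deck(predicted_missing, donors):
    """Predictive mean matching, the hot-deck matching step: each
    incomplete case receives the *observed* value of the donor whose
    model-predicted value is closest to the incomplete case's own
    prediction, so imputed values are always real, recorded ones.
    donors: list of (predicted_value, observed_value) pairs."""
    imputed = []
    for pred in predicted_missing:
        _, obs = min(donors, key=lambda d: abs(d[0] - pred))
        imputed.append(obs)
    return imputed
```

Because the imputed value is copied from a real record rather than drawn from a parametric model, implausible values (e.g. negative durations) cannot be generated; repeating the draw across imputations propagates the uncertainty.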
Gosho, Masahiko; Maruo, Kazushi; Ishii, Ryota; Hirakawa, Akihiro
2016-11-16
The total score, which is calculated as the sum of scores in multiple items or questions, is repeatedly measured in longitudinal clinical studies. A mixed effects model for repeated measures method is often used to analyze these data; however, if one or more individual items are not measured, the method cannot be directly applied to the total score. We develop two simple and interpretable procedures that infer fixed effects for a longitudinal continuous composite variable. These procedures consider that the items that compose the total score are multivariate longitudinal continuous data and, simultaneously, handle subject-level and item-level missing data. One procedure is based on a multivariate marginalized random effects model with a multiple of Kronecker product covariance matrices for serial time dependence and correlation among items. The other procedure is based on a multiple imputation approach with a multivariate normal model. In terms of the type-1 error rate and the bias of treatment effect in total score, the marginalized random effects model and multiple imputation procedures performed better than the standard mixed effects model for repeated measures analysis with listwise deletion and single imputations for handling item-level missing data. In particular, the mixed effects model for repeated measures with listwise deletion resulted in substantial inflation of the type-1 error rate. The marginalized random effects model and multiple imputation methods provide for a more efficient analysis by fully utilizing the partially available data, compared to the mixed effects model for repeated measures method with listwise deletion.
The effects of reference population size and the availability of information from genotyped ancestors on the accuracy of imputation of single nucleotide polymorphisms (SNPs) were investigated for Mexican Holstein cattle. Three scenarios for reference population size were examined: (1) a local popula...
Bianca N. I. Eskelson; Hailemariam Temesgen; Valerie Lemay; Tara M. Barrett; Nicholas L. Crookston; Andrew T. Hudak
2009-01-01
Almost universally, forest inventory and monitoring databases are incomplete, ranging from missing data for only a few records and a few variables, common for small land areas, to missing data for many observations and many variables, common for large land areas. For a wide variety of applications, nearest neighbor (NN) imputation methods have been developed to fill in...
Iotchkova, Valentina; Huang, Jie; Morris, John A.; Jain, Deepti; Barbieri, Caterina; Walter, Klaudia; Min, Josine L.; Chen, Lu; Astle, William; Cocca, Massimilian; Deelen, Patrick; Elding, Heather; Farmaki, Aliki-Eleni; Franklin, Christopher S.; Franberg, Mattias; Gaunt, Tom R.; Hofman, Albert; Jiang, Tao; Kleber, Marcus E.; Lachance, Genevieve; Luan, Jianan; Malerba, Giovanni; Matchan, Angela; Mead, Daniel; Memari, Yasin; Ntalla, Ioanna; Panoutsopoulou, Kalliope; Pazoki, Raha; Perry, John R. B.; Rivadeneira, Fernando; Sabater-Lleal, Maria; Sennblad, Bengt; Shin, So-Youn; Southam, Lorraine; Traglia, Michela; van Dijk, Freerk; van Leeuwen, Elisabeth M.; Zaza, Gianluigi; Zhang, Weihua; Amin, Najaf; Butterworth, Adam; Chambers, John C.; Dedoussis, George; Dehghan, Abbas; Franco, Oscar H.; Franke, Lude; Frontini, Mattia; Gambaro, Giovanni; Gasparini, Paolo; Hamsten, Anders; Issacs, Aaron; Kooner, Jaspal S.; Kooperberg, Charles; Langenberg, Claudia; Marz, Winfried; Scott, Robert A.; Swertz, Morris A.; Toniolo, Daniela; Uitterlinden, Andre G.; van Duijn, Cornelia M.; Watkins, Hugh; Zeggini, Eleftheria; Maurano, Mathew T.; Timpson, Nicholas J.; Reiner, Alexander P.; Auer, Paul L.; Soranzo, Nicole
2016-01-01
Large-scale whole-genome sequence data sets offer novel opportunities to identify genetic variation underlying human traits. Here we apply genotype imputation based on whole-genome sequence data from the UK10K and 1000 Genomes Project into 35,981 study participants of European ancestry, followed by
Sixteen new lung function signals identified through 1000 Genomes Project reference panel imputation
Artigas, Maria Soler; Wain, Louise V.; Miller, Suzanne; Kheirallah, Abdul Kader; Huffman, Jennifer E.; Ntalla, Ioanna; Shrine, Nick; Obeidat, Ma'en; Trochet, Holly; McArdle, Wendy L.; Alves, Alexessander Couto; Hui, Jennie; Zhao, Jing Hua; Joshi, Peter K.; Teumer, Alexander; Albrecht, Eva; Imboden, Medea; Rawal, Rajesh; Lopez, Lorna M.; Marten, Jonathan; Enroth, Stefan; Surakka, Ida; Polasek, Ozren; Lyytikainen, Leo-Pekka; Granell, Raquel; Hysi, Pirro G.; Flexeder, Claudia; Mahajan, Anubha; Beilby, John; Bosse, Yohan; Brandsma, Corry-Anke; Campbell, Harry; Gieger, Christian; Glaeser, Sven; Gonzalez, Juan R.; Grallert, Harald; Hammond, Chris J.; Harris, Sarah E.; Hartikainen, Anna-Liisa; Heliovaara, Markku; Henderson, John; Hocking, Lynne; Horikoshi, Momoko; Hutri-Kahonen, Nina; Ingelsson, Erik; Johansson, Asa; Kemp, John P.; Kolcic, Ivana; Kumar, Ashish; Lind, Lars; Melen, Erik; Musk, Arthur W.; Navarro, Pau; Nickle, David C.; Padmanabhan, Sandosh; Raitakari, Olli T.; Ried, Janina S.; Ripatti, Samuli; Schulz, Holger; Scott, Robert A.; Sin, Don D.; Starr, John M.; Vinuela, Ana; Voelzke, Henry; Wild, Sarah H.; Wright, Alan F.; Zemunik, Tatijana; Jarvis, Deborah L.; Spector, Tim D.; Evans, David M.; Lehtimaki, Terho; Vitart, Veronique; Kahonen, Mika; Gyllensten, Ulf; Rudan, Igor; Deary, Ian J.; Karrasch, Stefan; Probst-Hensch, Nicole M.; Heinrich, Joachim; Stubbe, Beate; Wilson, James F.; Wareham, Nicholas J.; James, Alan L.; Morris, Andrew P.; Jarvelin, Marjo-Riitta; Hayward, Caroline; Sayers, Ian; Strachan, David P.; Hall, Ian P.; Tobin, Martin D.; Deloukas, Panos; Hansell, Anna L.; Hubbard, Richard; Jackson, Victoria E.; Marchini, Jonathan; Pavord, Ian; Thomson, Neil C.; Zeggini, Eleftheria
2015-01-01
Lung function measures are used in the diagnosis of chronic obstructive pulmonary disease. In 38,199 European ancestry individuals, we studied genome-wide association of forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC) and FEV1/FVC with 1000 Genomes Project (phase 1)-imputed genotypes.
Twisk, J.; de Boer, M.; de Vente, W.; Heymans, M.
2013-01-01
Background and Objectives: As a result of the development of sophisticated techniques, such as multiple imputation, the interest in handling missing data in longitudinal studies has increased enormously in past years. Within the field of longitudinal data analysis, there is a current debate on wheth
Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance
Finch, W. Holmes
2016-01-01
Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…
Limitations in Using Multiple Imputation to Harmonize Individual Participant Data for Meta-Analysis.
Siddique, Juned; de Chavez, Peter J; Howe, George; Cruden, Gracelyn; Brown, C Hendricks
2017-02-27
Individual participant data (IPD) meta-analysis is a meta-analysis in which the individual-level data for each study are obtained and used for synthesis. A common challenge in IPD meta-analysis is when variables of interest are measured differently in different studies. The term harmonization has been coined to describe the procedure of placing variables on the same scale in order to permit pooling of data from a large number of studies. Using data from an IPD meta-analysis of 19 adolescent depression trials, we describe a multiple imputation approach for harmonizing 10 depression measures across the 19 trials by treating those depression measures that were not used in a study as missing data. We then apply diagnostics to address the fit of our imputation model. Even after reducing the scale of our application, we were still unable to produce accurate imputations of the missing values. We describe those features of the data that made it difficult to harmonize the depression measures and provide some guidelines for using multiple imputation for harmonization in IPD meta-analysis.
High performance dc-dc conversion with voltage multipliers
Harrigill, W. T.; Myers, I. T.
1974-01-01
The voltage multipliers using capacitors and diodes, first developed by Cockcroft and Walton in 1932, were reexamined in terms of state-of-the-art fast switching transistors and diodes and high-energy-density capacitors. Because of component improvements, the voltage multiplier, used without a transformer, now appears superior in weight to systems currently in use for dc-dc conversion. An experimental 100-watt, 1000-volt dc-dc converter operating at 100 kHz was built, with a component weight of about 1 kg/kW. Calculated and measured values of output voltage and efficiency agreed within experimental error.
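The capacitor-diode ladder described above obeys simple closed-form design approximations. The sketch below computes the ideal no-load output, load-induced droop, and ripple of an n-stage half-wave Cockcroft-Walton multiplier from the standard textbook formulas; the stage count, capacitance, drive voltage, and load current are illustrative assumptions, not values taken from the converter reported above.

```python
# Classical Cockcroft-Walton (CW) multiplier design approximations -- a
# sketch using the standard textbook formulas. The component values in the
# example below are illustrative assumptions only.

def cw_multiplier(n_stages, v_peak, f_hz, c_farads, i_load):
    """Return (no-load output, load-induced droop, peak-to-peak ripple)
    for an n-stage half-wave CW multiplier."""
    v_out_ideal = 2 * n_stages * v_peak
    # Droop under load: dV = I/(f*C) * (2n^3/3 + n^2/2 + n/6)
    droop = i_load / (f_hz * c_farads) * (
        2 * n_stages**3 / 3 + n_stages**2 / 2 + n_stages / 6)
    # Peak-to-peak ripple: I/(f*C) * n(n+1)/2
    ripple = i_load / (f_hz * c_farads) * n_stages * (n_stages + 1) / 2
    return v_out_ideal, droop, ripple

v_ideal, droop, ripple = cw_multiplier(
    n_stages=5, v_peak=100.0, f_hz=100e3, c_farads=1e-6, i_load=0.1)
print(v_ideal)   # 1000.0 (volts, before droop)
```

Because the droop term grows as n³, practical high-voltage designs favor a high switching frequency and large capacitance, consistent with the 100 kHz operating point reported above.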
Optimal Final Carry Propagate Adder Design for Parallel Multipliers
B., Ramkumar
2011-01-01
Based on ASIC layout-level simulation of 7 types of adder structures, each in four different sizes (a total of 28 adders), we propose expressions for the width of each of the three regions of the final Carry Propagate Adder (CPA) to be used in parallel multipliers. We also propose the types of adders to be used in each region that would lead to the optimal performance of the hybrid final adders in parallel multipliers. This work evaluates the complete performance of the analyzed designs in terms of delay, area, and power through custom design and layout in 0.18 um CMOS process technology.
Comparative study of Braun’s Multiplier Using FPGA Devices
Anitha R.
2011-06-01
Full Text Available Because the development costs for ASICs are high, algorithms should be verified and optimized before implementation. To decrease computational delay and improve resource utilization, bypassing techniques are applied: a Braun-architecture multiplier is compared with its architectural modifications, i.e., column-bypassing and row-bypassing architectures, and the full-adder structure is replaced by a fast adder. The architectures have been implemented on Spartan 3E, Virtex 5, and Virtex 6 Lower Power devices using Xilinx ISE and Verilog HDL. Virtex 5 showed the best performance among the devices, whereas the column-bypassed multiplier showed the best performance among the three architectures.
Radial multipliers on amalgamated free products of II₁-factors
Möller, Sören
2014-01-01
Let ℳi be a family of II₁-factors, containing a common II₁-subfactor 𝒩, such that [ℳi : 𝒩] ∈ ℕ₀ for all i. Furthermore, let ϕ: ℕ₀ → ℂ. We show that if a Hankel matrix related to ϕ is trace-class, then there exists a unique completely bounded map Mϕ on the amalgamated free product of the ℳi with amalgamation over 𝒩, which acts as a radial multiplier. Hereby, we extend a result of Haagerup and the author for radial multipliers on reduced free products of unital C*- and von Neumann algebras.
Optimized Multiplier Using Reversible Multicontrol Input Toffoli Gates
H R Bhagyalakshmi
2013-01-01
Full Text Available Reversible logic is an important area for carrying computation into the world of quantum computing. In this paper a 4-bit multiplier using a new reversible logic gate called the BVPPG gate is presented. The BVPPG gate is a 5 x 5 reversible gate designed to generate the partial products required to perform multiplication and also to duplicate the operand bits, which reduces the total cost of the circuit. The Toffoli gate is the universal and also the most flexible reversible logic gate, so we have used Toffoli gates to construct the designed multiplier.
Auger neutralization rates of multiply charged ions near metal surfaces
Nedeljkovic, N.N.; Janev, R.K.; Lazur, V.Y.
1988-08-15
Transition rates for the Auger neutralization processes of multiply charged ions on metal surfaces are calculated in closed analytical form. The core potential of a multiply charged ion is represented by a pseudopotential, which accounts for the electron screening effects and allows transition to the pure Coulomb case (fully stripped ions). The relative importance of various neutralization channels in slow-ion-surface collisions is discussed for the examples of the He²⁺ + Mo(100) and C³⁺ + Mo(100) collisional systems.
Spot Pricing When Lagrange Multipliers Are Not Unique
Feng, Donghan; Xu, Zhao; Zhong, Jin
2012-01-01
Classical spot pricing theory is based on multipliers of the primal problem of an optimal market dispatch, i.e., the solution of the dual problem. However, the dual problem of market dispatch may yield multiple solutions. In these circumstances, spot pricing or any standard pricing practice based on multipliers cannot generate a unique clearing price. Although such situations are rare, they can cause significant uncertainties and complexities in market dispatch. In practice, this situation is solved through simple empirical methods, which may cause additional operations or biased allocation. Based … the results of the theoretical analysis, and further demonstrate that the method performs effectively in both uniform-pricing and nodal-pricing markets…
AN IMPROVED DESIGN OF A MULTIPLIER USING REVERSIBLE LOGIC GATES
H. R. Bhagyalakshmi
2010-08-01
Full Text Available Reversible logic gates are very much in demand for future computing technologies, as they are known to produce zero power dissipation under ideal conditions. This paper proposes an improved design of a multiplier using reversible logic gates. Multipliers are essential for the construction of various computational units of a quantum computer. The quantum cost of a reversible logic circuit can be minimized by reducing the number of reversible logic gates. For this purpose, two 4*4 reversible logic gates, a DPG gate and a BVF gate, are used.
Sun Youting
2009-01-01
Full Text Available Many missing-value (MV) imputation methods have been developed for microarray data, but only a few studies have investigated the relationship between MV imputation and classification accuracy. Furthermore, these studies are problematic in fundamental steps such as MV generation and classifier error estimation. In this work, we carry out a model-based study that addresses some of the issues in previous studies. Six popular imputation algorithms, two feature selection methods, and three classification rules are considered. The results suggest that it is beneficial to apply MV imputation when the noise level is high, variance is small, or gene-cluster correlation is strong, under small to moderate MV rates. In these cases, if data quality metrics are available, then it may be helpful to consider the data point with poor quality as missing and apply one of the most robust imputation algorithms to estimate the true signal based on the available high-quality data points. However, at large MV rates, we conclude that imputation methods are not recommended. Regarding the MV rate, our results indicate the presence of a peaking phenomenon: performance of imputation methods actually improves initially as the MV rate increases, but after an optimum point, performance quickly deteriorates with increasing MV rates.
Stanley Xu
2014-05-01
Full Text Available In studies that use electronic health record data, imputation of important data elements such as glycated hemoglobin (A1c) has become common. However, few studies have systematically examined the validity of various imputation strategies for missing A1c values. We derived a complete dataset using an incident diabetes population that has no missing values in A1c, fasting and random plasma glucose (FPG and RPG), age, and gender. We then created missing A1c values under two assumptions: missing completely at random (MCAR) and missing at random (MAR). We then imputed A1c values, compared the imputed values to the true A1c values, and used these data to assess the impact of A1c on initiation of antihyperglycemic therapy. Under MCAR, imputation of A1c based on FPG (1) estimated a continuous A1c within ±1.88% of the true A1c 68.3% of the time and (2) estimated a categorical A1c within ± one category of the true A1c about 50% of the time. Including RPG in the imputation slightly improved the precision but did not improve the accuracy. Under MAR, including gender and age in addition to FPG improved the accuracy of the imputed continuous A1c but not the categorical A1c. Moreover, imputation of up to 33% of missing A1c values did not change the accuracy and precision and did not alter the impact of A1c on initiation of antihyperglycemic therapy. When using A1c values as a predictor variable, a simple imputation algorithm based only on age, sex, and fasting plasma glucose gave acceptable results.
Hybrid Voltage-Multipliers Based Switching Power Converters
Rosas-Caro, Julio C.; Mayo-Maldonado, Jonathan C.; Vazquez-Bautista, Rene Fabian; Valderrabano-Gonzalez, Antonio; Salas-Cabrera, Ruben; Valdez-Resendiz, Jesus Elias
2011-08-01
This work presents a derivation of PWM DC-DC hybrid converters obtained by combining traditional converters with the Cockcroft-Walton voltage multiplier; the voltage multiplier of each converter is driven with the same transistor as the basic topology, which makes the structure of the new converters very simple and provides high voltage gain. The traditional topologies discussed are the boost, buck-boost, Cuk, and SEPIC. The main features of the discussed family are: (i) high voltage gain without using extreme duty cycles or transformers, which allows high switching frequency, and (ii) low voltage stress in the switching devices, along with modular structures in which more output levels can be added without modifying the main circuit, which is highly desirable in applications such as renewable energy generation systems. It is shown how a multiplier converter can become a generalized topology and how some of the traditional converters and several state-of-the-art converters can be derived from the generalized topologies and vice versa. All the discussed converters were simulated, and experimental results are additionally provided for an interleaved multiplier converter.
Multiply-Constrained Semantic Search in the Remote Associates Test
Smith, Kevin A.; Huber, David E.; Vul, Edward
2013-01-01
Many important problems require consideration of multiple constraints, such as choosing a job based on salary, location, and responsibilities. We used the Remote Associates Test to study how people solve such multiply-constrained problems by asking participants to make guesses as they came to mind. We evaluated how people generated these guesses…
Garbage-free reversible constant multipliers for arbitrary integers
Mogensen, Torben Ægidius
2013-01-01
We present a method for constructing reversible circuitry for multiplying integers by arbitrary integer constants. The method is based on Mealy machines and gives circuits whose size is (in the worst case) linear in the size of the constant. This makes the method unsuitable for large constants, but gives quite compact circuits for small constants. The circuits use no garbage or ancillary lines.
Radial multipliers on reduced free products of operator algebras
Haagerup, Uffe; Möller, Sören
2012-01-01
Let Ai be a family of unital C*-algebras, respectively, of von Neumann algebras, and let ϕ: ℕ₀ → ℂ. We show that if a Hankel matrix related to ϕ is trace-class, then there exists a unique completely bounded map Mϕ on the reduced free product of the Ai, which acts as a radial multiplier…
Design and Implementation of Analog Multiplier with Improved Linearity
Nandini A.S
2012-11-01
Full Text Available Analog multipliers are used for frequency conversion and are critical components in modern radio frequency (RF) systems. RF systems must process analog signals with a wide dynamic range at high frequencies. A mixer converts RF power at one frequency into power at another frequency to make signal processing easier and also inexpensive. A fundamental reason for frequency conversion is to allow amplification of the received signal at a frequency other than the RF, or the audio, frequency. This paper deals with two such multipliers using MOSFETs which can be used in communication systems. They were designed and implemented using a 0.5 micron CMOS process. The two multipliers were characterized for power consumption, linearity, noise, and harmonic distortion. The initial circuit simulated is a basic Gilbert cell whose gain is fairly high but which shows more power consumption and high total harmonic distortion. Our paper aims at reducing both power consumption and total harmonic distortion. The second multiplier is a new architecture that consumes 43.07 percent less power and shows 22.69 percent less total harmonic distortion when compared to the basic Gilbert cell. The common centroid layouts of both circuits have also been developed.
The Gas Electron Multiplier Chamber Exhibition LEPFest 2000
2000-01-01
The Gas Electron Multiplier (GEM) is a novel device introduced in 1996. Large-area detectors based on this technology are in construction for high energy physics detectors. This technology can also be used for high-rate X-ray imaging in medical diagnostics and for monitoring irradiation during cancer treatment.
Multiplier methods for optimization problems with Lipschitzian derivatives
Izmailov, A. F.; Kurennoy, A. S.
2012-12-01
Optimization problems for which the objective function and the constraints have locally Lipschitzian derivatives but are not assumed to be twice differentiable are examined. For such problems, analyses of the local convergence and the convergence rate of the multiplier (or augmented Lagrangian) method and the linearly constrained Lagrangian method are given.
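The multiplier (augmented Lagrangian) method analyzed above can be illustrated on a smooth toy problem. The following is a minimal first-order sketch, assuming a single equality constraint: it alternates an inner gradient-descent minimization of the augmented Lagrangian with the standard multiplier update λ ← λ + ρh(x). The function names, step sizes, and iteration counts are illustrative, not from the paper.

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
# Known solution: x* = (0.5, 0.5) with Lagrange multiplier lambda* = -1.
def f(x):      return x[0]**2 + x[1]**2
def grad_f(x): return np.array([2 * x[0], 2 * x[1]])
def h(x):      return x[0] + x[1] - 1.0
def grad_h(x): return np.array([1.0, 1.0])

def augmented_lagrangian(x, lam=0.0, rho=10.0, outer=20, inner=200, lr=0.02):
    for _ in range(outer):
        # Inner loop: gradient descent on L_rho(x, lam) =
        #   f(x) + lam*h(x) + (rho/2)*h(x)^2
        for _ in range(inner):
            g = grad_f(x) + (lam + rho * h(x)) * grad_h(x)
            x = x - lr * g
        lam = lam + rho * h(x)   # first-order multiplier update
    return x, lam

x, lam = augmented_lagrangian(np.array([0.0, 0.0]))
print(np.round(x, 4))   # ~ [0.5 0.5]
print(round(lam, 4))    # ~ -1.0
```

The multiplier update converges linearly here, which mirrors the linear convergence rates the paper establishes under weaker (Lipschitzian-derivative) smoothness assumptions.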
A Method for Deriving Transverse Masses Using Lagrange Multipliers
Gross, Eilam; Vitells, Ofer
2008-01-01
We use Lagrange multipliers to extend the traditional definition of the transverse mass used in experimental high energy physics. We demonstrate the method by applying it to derive a new transverse mass that can be used as a discriminator to distinguish between top decays via a charged W or a charged Higgs boson.
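For context, the traditional transverse mass that this construction generalizes is the standard definition used in experimental high energy physics (stated here for reference, not reproduced from the abstract): for two decay products with masses m_i and transverse momenta p_{T,i},

```latex
m_T^2 = m_1^2 + m_2^2
      + 2\left( E_{T,1}\,E_{T,2} - \vec{p}_{T,1}\cdot\vec{p}_{T,2} \right),
\qquad
E_{T,i} = \sqrt{m_i^2 + p_{T,i}^2}.
```

The paper's contribution is to generalize this definition via Lagrange multipliers, not to modify the formula above.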
New method for high performance multiply-accumulator design
Bing-jie XIA; Peng LIU; Qing-dong YAO
2009-01-01
This study presents a new method for a 4-pipelined high-performance split multiply-accumulator (MAC) architecture, capable of supporting multiple precisions, developed for media processors. To speed up the design further, a novel partial product compression circuit based on interleaved adders and a modified hybrid partial product reduction tree (PPRT) scheme are proposed. The MAC can perform 1-way 32-bit and 4-way 16-bit signed/unsigned multiply or multiply-accumulate operations and 2-way parallel multiply-add (PMADD) operations at a high frequency of 1.25 GHz under worst-case conditions and 1.67 GHz under typical-case conditions, respectively. Compared with the MAC in a 32-bit microprocessor without interlocked piped stages (MIPS), the proposed design shows a great advantage in speed. Moreover, an improvement of up to 32% in throughput is achieved. The MAC design has been fabricated with Taiwan Semiconductor Manufacturing Company (TSMC) 90-nm CMOS standard cell technology and has passed a functional test.
New approach to streaming semigroups with multiplying boundary conditions
Mohamed Boulanouar
2008-11-01
Full Text Available This paper concerns the generation of a C₀-semigroup by the streaming operator with general multiplying boundary conditions. A first approach, presented in [2], is based on the Hille-Yosida theorem. Here, we present a second approach based on the construction of the generated semigroup, without using the Hille-Yosida theorem.
Problems with Accurate Atomic Lifetime Measurements of Multiply Charged Ions
Trabert, E
2009-02-19
A number of recent atomic lifetime measurements on multiply charged ions have reported uncertainties lower than 1%. Such a level of accuracy challenges theory, which is a good thing. However, a few lessons learned from earlier precision lifetime measurements on atoms and singly charged ions suggest remaining cautious about the systematic errors of experimental techniques.
Treatment of multiply controlled destructive behavior with food reinforcement.
Adelinis, J D; Piazza, C C; Goh, H L
2001-01-01
We evaluated the extent to which the positive reinforcement of communication would reduce multiply controlled destructive behavior in the absence of relevant extinction components. When edible reinforcement for appropriate communication and nonfood reinforcers for problem behavior were available simultaneously, responding was allocated almost exclusively toward the behavior that produced edible reinforcement.
Quantum noise frequency correlations of multiply scattered light
Lodahl, Peter
2006-01-01
Frequency correlations in multiply scattered light that are present in quantum fluctuations are investigated. The speckle correlations for quantum and classical noise are compared and found to depend markedly differently on optical frequency, which was confirmed in a recent experiment. Furthermore, novel mesoscopic correlations are predicted that depend on the photon statistics of the incoming light.
Analysis of Random Jitter in a Clock Multiplying DLL Architecture
Beek, van de R.C.H; Klumperink, E.A.M.; Vaucher, C.S.; Nauta, B.
2001-01-01
In this paper, a thorough analysis of the jitter behavior of a Delay Locked Loop (DLL) based clock multiplying architecture is presented. The noise sources included in the analysis are the noise of the delay elements, the reference jitter, and the noise of the Phase Frequency Detector and Charge Pump…
Gas Electron Multiplier detectors with high reliability and stability
Ovchinnikov, B M; Ovchinnikov, Yu B
2010-01-01
The Gas Electron Multiplier detectors with wire and metallic electrodes, with a gas filling in the gap between them, were proposed and tested. The main advantage of these Gas Electron Multipliers compared to standard ones lies in their increased stability and reliability. The experimental results on testing such detectors with gaps between the electrodes of 1 and 3 mm are reported. It is demonstrated that the best gas filling for the gas electron multipliers is neon with a small admixture of quenching gases (for example, N2+H2O at ~100 ppm). This filling offers the greatest coefficient of proportional multiplication as compared with other gases, at a small electric potential difference between the GEM electrodes, in the absence of streamer discharges in the proportional region. The results on operation of the multi-channel gas electron multiplier with wire cathode and continuous anode filled with Ne, Ar, Ar+CH4, and Ar+1%Xe are also presented. Based on the experimental observations, the explanation of the mech…
Lagrangian multiplier and massive Yang-Mills fields
Li, Z.P.
1982-09-01
If we impose an appropriate constraint on the gauge-invariant Lagrangian, the variational principle of the action becomes a variational problem with a subsidiary condition. The effective Lagrangian, which contains a Lagrange multiplier, may then have a mass term for the mesons. In that case we naturally obtain the massive Yang-Mills fields discussed by Nakanishi.
A cascaded three-phase symmetrical multistage voltage multiplier
Iqbal, Shahid [Faculty of Engineering and Technology, Multimedia University, Melaka Campus, 75450 Melaka (Malaysia); Singh, G K [Faculty of Engineering and Technology, Multimedia University, Melaka Campus, 75450 Melaka (Malaysia); Besar, R [Faculty of Engineering and Technology, Multimedia University, Melaka Campus, 75450 Melaka (Malaysia); Muhammad, G [Faculty of Information Science and Technology, Multimedia University, Melaka Campus, 75450 Melaka (Malaysia)
2006-10-15
A cascaded three-phase symmetrical multistage Cockcroft-Walton voltage multiplier (CW-VM) is proposed in this report. It consists of three single-phase symmetrical voltage multipliers, which are connected in series at their smoothing columns like a string of batteries and are driven by a three-phase ac power source. The smoothing column of each voltage multiplier is charged twice every cycle independently by the respective oscillating columns and discharged in series through the load. The charging-discharging process completes six times a cycle, so the output voltage ripple frequency is six times the drive signal frequency. Thus the proposed approach eliminates the first five harmonic components of the load-generated voltage ripple, and the sixth harmonic is the major ripple component. The proposed cascaded three-phase symmetrical voltage multiplier has less than half the voltage ripple, and three times the output voltage and output power, of the conventional single-phase symmetrical CW-VM. Experimental and simulation results of the laboratory prototype are given to show the feasibility of the proposed cascaded three-phase symmetrical CW-VM.
An R function for imputation of missing cells in two-way data sets by EM-AMMI algorithm
Jakub Paderewski
2014-06-01
Full Text Available Various statistical methods for two-way classification data sets (including AMMI and GGE analyses), used in crop science for interpreting genotype-by-environment interaction, require the data to be complete, that is, to have no missing cells. If there are missing cells, however, one might impute them. The paper offers R code for imputing missing values by the EM-AMMI algorithm. In addition, a function to check the repeatability of this algorithm is proposed. This function can be used to evaluate whether the missing data were imputed reliably (unambiguously), which is important especially for small data sets.
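The EM-AMMI idea, alternately filling missing cells from the current fit and refitting an additive model plus multiplicative (SVD) terms, can be sketched outside R as well. The following is a simplified Python sketch under assumed conventions, not the paper's R implementation; the function name and defaults are illustrative.

```python
import numpy as np

def em_ammi_impute(y, n_pc=1, tol=1e-8, max_iter=500):
    """Impute missing (NaN) cells of a two-way table in the spirit of
    EM-AMMI: alternate (E) refreshing missing cells from the current fit
    with (M) refitting additive effects plus n_pc multiplicative terms
    taken from the SVD of the residuals. Simplified sketch only."""
    y = np.asarray(y, dtype=float)
    miss = np.isnan(y)
    filled = np.where(miss, np.nanmean(y), y)        # start from grand mean
    for _ in range(max_iter):
        mu = filled.mean()
        row = filled.mean(axis=1, keepdims=True) - mu
        col = filled.mean(axis=0, keepdims=True) - mu
        additive = mu + row + col
        resid = filled - additive
        u, s, vt = np.linalg.svd(resid, full_matrices=False)
        fit = additive + (u[:, :n_pc] * s[:n_pc]) @ vt[:n_pc]
        new = np.where(miss, fit, y)                 # E-step: refresh missing
        if np.max(np.abs(new - filled)) < tol:
            filled = new
            break
        filled = new
    return filled
```

With purely additive data and `n_pc=0`, this reduces to the classical additive-model iteration and recovers the missing cell exactly; a nonzero `n_pc` adds AMMI's multiplicative terms estimated from the residual SVD.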
Wilson, Machelle D; Kerstin Lueck
2014-01-01
The imputation of missing data is often a crucial step in the analysis of survey data. This study reviews typical problems with missing data and discusses a method for the imputation of missing survey data with a large number of categorical variables which do not have a monotone missing pattern. We develop a method for constructing a monotone missing pattern that allows for imputation of categorical data in data sets with a large number of variables using a model-based MCMC approach. We repor...
Forced monogamy in a multiply mating species does not impede colonisation success.
Deacon, Amy E; Barbosa, Miguel; Magurran, Anne E
2014-06-12
The guppy (Poecilia reticulata) is a successful invasive species. It is also a species that mates multiply; previous studies have demonstrated that this strategy carries fitness benefits. Guppies are routinely introduced to tanks and troughs in regions outside their native range for mosquito-control purposes, and often spread beyond these initial confines into natural water bodies with negative ecological consequences. Here, using a mesocosm setup that resembles the containers into which single guppies are typically introduced for mosquito control, we ask whether singly-mated females are at a disadvantage, relative to multiply-mated females, when it comes to founding a population. Treatments were monitored for one year. A key finding was that mating history did not predict establishment success, which was 88% in both treatments. Furthermore, analysis of behavioural traits revealed that the descendants of singly-mated females retained antipredator behaviours, and that adult males showed no decrease in courtship vigour. Also, we detected no differences in behavioural variability between treatments. These results suggest that even when denied the option of multiple mating, singly-mated female guppies can produce viable populations, at least at the founder stage. This may prove to be a critical advantage in typical introduction scenarios where few individuals are released into enclosed water bodies before finding their way into natural ecosystems.
Imputing forest carbon stock estimates from inventory plots to a nationally continuous coverage
Wilson Barry Tyler
2013-01-01
Full Text Available The U.S. has been providing national-scale estimates of forest carbon (C) stocks and stock change to meet United Nations Framework Convention on Climate Change (UNFCCC) reporting requirements for years. Although these currently are provided as national estimates by pool and year to meet greenhouse gas monitoring requirements, there is growing need to disaggregate these estimates to finer scales to enable strategic forest management and monitoring activities focused on various ecosystem services such as C storage enhancement. Through application of a nearest-neighbor imputation approach, spatially extant estimates of forest C density were developed for the conterminous U.S. using the U.S.'s annual forest inventory. Results suggest that an existing forest inventory plot imputation approach can be readily modified to provide raster maps of C density across a range of pools (e.g., live tree to soil organic carbon) and spatial scales (e.g., sub-county to biome). Comparisons among imputed maps indicate strong regional differences across C pools. The C density of pools closely related to detrital input (e.g., dead wood) is often highest in forests suffering from recent mortality events such as those in the northern Rocky Mountains (e.g., beetle infestations). In contrast, live tree carbon density is often highest on the highest quality forest sites such as those found in the Pacific Northwest. Validation results suggest strong agreement between the estimates produced from the forest inventory plots and those from the imputed maps, particularly when the C pool is closely associated with the imputation model (e.g., aboveground live biomass and live tree basal area), with weaker agreement for detrital pools (e.g., standing dead trees). Forest inventory imputed plot maps provide an efficient and flexible approach to monitoring diverse C pools at national (e.g., UNFCCC) and regional scales (e.g., Reducing Emissions from Deforestation and Forest…
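The nearest-neighbor imputation step described above reduces to a small computation: each map pixel receives the (average) carbon density of its most similar inventory plots in an auxiliary-feature space. The feature values, plot densities, and k=1 choice below are illustrative assumptions, not data from the study, which uses many predictor layers.

```python
import numpy as np

def nn_impute(plot_features, plot_carbon, pixel_features, k=1):
    """Give each pixel the mean carbon density of its k nearest inventory
    plots, measured by Euclidean distance in feature space."""
    d = np.linalg.norm(
        pixel_features[:, None, :] - plot_features[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]   # (n_pixels, k) plot indices
    return plot_carbon[nearest].mean(axis=1)

# Illustrative toy values: two auxiliary features (e.g. spectral bands),
# three inventory plots with known C density (Mg C/ha), two map pixels.
plots = np.array([[0.1, 0.2], [0.8, 0.9], [0.5, 0.4]])
carbon = np.array([30.0, 120.0, 70.0])
pixels = np.array([[0.12, 0.18], [0.75, 0.95]])
print(nn_impute(plots, carbon, pixels))   # nearest plots give 30.0 and 120.0
```

With k > 1 this becomes a k-NN average; operational imputation systems typically also weight neighbors by distance or by canonical-correlation-derived metrics.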
Andersen, Andreas; Rieckmann, Andreas
2016-01-01
In this article, we illustrate how to use mi impute chained with intreg to fit an analysis of covariance of censored and nondetectable immunological concentrations measured in a randomized pretest-posttest design…
Yan Bing; Zhang Yu-Juan
2013-01-01
The potential energy curves for neutrals and multiply charged ions of carbon monosulfide are computed with highly correlated multi-reference configuration interaction wavefunctions. The correlations of inner-shell electrons together with scalar relativistic effects are included in the present computations. The spectroscopic constants, dissociation energies, and ionization energies for the ground and low-lying excited states, together with the corresponding electronic configurations of the ions, are obtained, and good agreement between the present work and existing experiments is found. No theoretical evidence is found for adiabatically stable CSq+ (q > 2) ions according to the present ab initio calculations. The calculated values for the 1st-6th ionization energies are 11.25, 32.66, 64.82, 106.25, 159.75, and 224.64 eV, respectively. The kinetic energy release data of fragments are provided by the present work for further experimental comparisons.
A New Design for Array Multiplier with Trade off in Power and Area
Ravi, Nirlakalla; Prasad, T Jayachandra; Rao, T Subba
2011-01-01
In this paper a low-power and low-area array multiplier with a carry save adder is proposed. The proposed adder eliminates the final addition stage of the multiplier, unlike the conventional parallel array multiplier. Both the conventional and proposed multipliers are synthesized with 16-T full adders. Among the Transmission Gate, Transmission Function Adder, 14-T, and 16-T full adders, the 16-T full adder shows the best energy efficiency. In the proposed 4x4 multiplier, instead of using a Ripple Carry Adder (RCA) in the final stage to add the carry bits, the carries are fed to the input of the next left column. As a result, the proposed multiplier uses 56 fewer transistors, yielding a trade-off in power and area. The proposed multiplier shows 13.91% less power, 34.09% more speed, and 59.91% less energy consumption than the conventional multiplier for TSMC 0.18 um technology at a supply voltage of 2.0 V.
Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M
Modern statistical methods using incomplete data have been increasingly applied to a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used for evaluating diagnostic tests or biomarkers in medical research, has also seen increasingly popular development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or of the assumptions (e.g., missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. Particularly, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms.
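MI inference of the kind studied above is typically pooled with Rubin's rules: average the completed-data estimates, then combine within- and between-imputation variance. A minimal sketch follows, with illustrative AUC-like estimates rather than values from the paper; it assumes the between-imputation variance is nonzero.

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Combine m completed-data estimates and their variances with
    Rubin's rules. Returns the pooled estimate, total variance, and the
    classical degrees of freedom (assumes between-variance > 0)."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()                          # pooled point estimate
    wbar = u.mean()                          # within-imputation variance
    b = q.var(ddof=1)                        # between-imputation variance
    t = wbar + (1 + 1 / m) * b               # total variance
    df = (m - 1) * (1 + wbar / ((1 + 1 / m) * b))**2
    return qbar, t, df

# Three imputations of a hypothetical AUC with their estimated variances.
qbar, t, df = rubin_pool([0.70, 0.74, 0.72], [0.004, 0.005, 0.0045])
print(round(qbar, 3))   # 0.72
```

A Wald-type interval is then `qbar ± t_{df} * sqrt(t)`, which is the well-calibrated inference the abstract refers to under ignorable missingness.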
T R Sivapriya
2012-07-01
Full Text Available This paper compares different imputation approaches for filling missing data and proposes a combined approach to accurately estimate missing attribute values in a patient database. The present study suggests a more robust technique that is likely to supply a value close to the missing one, enabling effective classification and diagnosis. Initially the data are clustered, and the z-score method is used to select possible values for an instance with missing attribute values. Then multiple imputation using LSSVM (Least Squares Support Vector Machine) is applied to select the most appropriate values for the missing attributes. Five imputed datasets were used to demonstrate the performance of the proposed method. Experimental results show that the method outperforms conventional multiple imputation and mean substitution. Moreover, the proposed method, CZLSSVM (Clustered Z-score Least Squares Support Vector Machine), has been evaluated on two classification problems with incomplete data. The efficacy of the imputation methods was evaluated using an LSSVM classifier. Experimental results indicate that classification accuracy increases with CZLSSVM when missing attribute values are estimated. CZLSSVM is found to outperform other data imputation approaches such as decision trees, rough sets, artificial neural networks, K-NN (K-Nearest Neighbour) and SVM, yielding 95 per cent accuracy, higher than the other methods tested in the study.
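The z-score screening step described above can be sketched as selecting donor values whose standardized score is close to a target; this is an illustrative reduction (function name, tolerance and data are assumptions), and the LSSVM imputation itself is not reproduced:

```python
import statistics

def zscore_candidates(values, target_z, tol=1.0):
    """Select donor values whose z-score lies within `tol` of a target z-score.
    A simplified sketch of a cluster/z-score screening step: outliers far from
    the target standardized score are excluded as imputation candidates."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return [v for v in values if abs((v - mu) / sd - target_z) <= tol]

# the extreme value 50.0 inflates the spread but is itself screened out
donors = zscore_candidates([4.0, 5.0, 6.0, 50.0], target_z=0.0, tol=0.5)
```

The retained candidates would then be ranked by the downstream learner rather than averaged directly.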
Liquid Hole Multipliers: bubble-assisted electroluminescence in liquid xenon
Arazi, L; Coimbra, A E C; Rappaport, M L; Vartsky, D; Chepel, V; Breskin, A
2015-01-01
In this work we discuss the mechanism behind the large electroluminescence signals observed at relatively low electric fields in the holes of a Thick Gas Electron Multiplier (THGEM) electrode immersed in liquid xenon. We present strong evidence that the scintillation light is generated in xenon bubbles trapped below the THGEM holes. The process is shown to be remarkably stable over months of operation, providing - under specific thermodynamic conditions - energy resolution similar to that of present dual-phase liquid xenon experiments. The observed mechanism may serve as the basis for the development of Liquid Hole Multipliers (LHMs), capable of producing local charge-induced electroluminescence signals in large-volume single-phase noble-liquid detectors for dark matter and neutrino physics experiments.
Dynamic effects of fiscal policy and fiscal multipliers in Croatia
Milan Deskar-Škrbić
2013-06-01
Full Text Available The aim of this paper is to analyze the effects of discretionary fiscal policy measures on economic activity and to estimate the size of fiscal multipliers in Croatia. The econometric framework is based on the structural VAR model (SVAR), with the Blanchard-Perotti identification method, which uses information on the institutional characteristics of the fiscal system. The analysis is conducted on quarterly data for total expenditures and indirect taxes of central, central consolidated and general consolidated government and aggregate demand for the period 2004-2012. The results confirm our initial assumptions about the difference in the size of the multipliers of government expenditures and indirect tax revenues among the three levels of government consolidation.
Dark energy from modified gravity with Lagrange multipliers
Capozziello, Salvatore; Nojiri, Shin'ichi; Odintsov, Sergei D
2010-01-01
We study scalar-tensor theory, k-essence and modified gravity with a Lagrange multiplier constraint whose role is to reduce the number of degrees of freedom. Dark energy cosmology of different types ($\Lambda$CDM, unified inflation with dark energy, smooth non-phantom/phantom transition epoch) is reconstructed in such models. It is shown that the mathematical equivalence between scalar theory and $F(R)$ gravity is broken by the presence of the constraint. The cosmological dynamics of $F(R)$ gravity is modified by a second function $F_2(R)$ dictated by the constraint. Dark energy cosmology is defined by this function, while the standard function $F_1(R)$ is relevant for local tests (modification of the Newtonian regime). A general discussion of the role of Lagrange multipliers in making higher-derivative gravity canonical is developed.
Imputating missing values in diary records of sun-exposure study
Have, Anna Szynkowiak; Philipsen, Peter Alshede; Larsen, Jan
2001-01-01
In a sun-exposure study, questionnaires concerning sun habits were collected from 195 subjects. This paper focuses on the general problem of missing data values, which occurs when some, or even all, of the questions in a questionnaire have not been answered. Here, only missing values of low concentration are investigated. We consider and compare two different models for imputing missing values: the Gaussian model and the non-parametric K-nearest-neighbour model.
Bai, Yun-Xia; Qin, Yong-Song; Wang, Li-Rong; Li, Ling
2009-01-01
Suppose that there are two populations x and y, both with missing data, where x has an unknown distribution function F(.) and y has a distribution of known form depending on some unknown parameter θ. Fractional imputation is used to fill in the missing data. The asymptotic distributions of the semi-empirical likelihood ratio statistic are obtained under some mild conditions. Empirical likelihood confidence intervals for the differences between x and y are then constructed.
Effects of height and live crown ratio imputation strategies on stand biomass estimation
Elijah J. Allensworth; Temesgen, Hailemariam
2015-01-01
The effects of subsample design and imputation of total height (ht) and live crown ratio (cr) on the accuracy of stand-level estimates of component and total aboveground biomass are not well investigated in the current body of literature. To assess this gap in research, this study uses a data set of 3,454 Douglas-fir trees obtained from 102 stands in southwestern...
Multipliers of $A_p((0, ∞))$ with Order Convolution
Savita Bhatnagar
2005-08-01
The aim of this paper is to study the multipliers from $A_r(I)$ to $A_p(I)$, $r\neq p$, where $I=(0, \infty)$ is the locally compact topological semigroup with multiplication max and the usual topology, and $A_r(I)=\{f\in L_1(I):\hat{f}\in L_r(\hat{I})\}$ with norm $|||f|||_r=||f||_1+||\hat{f}||_r$.
Radial multipliers on reduced free products of operator algebras
Haagerup, Uffe; Møller, Søren
2012-01-01
Let $(A_i)$ be a family of unital C*-algebras (respectively, of von Neumann algebras) and $\varphi\colon\mathbb{N}_0\to\mathbb{C}$. We show that if a Hankel matrix related to $\varphi$ is trace-class, then there exists a unique completely bounded map $M_\varphi$ on the reduced free product of the $A_i$, which acts as a radial multiplier...
Lagrange Multipliers and Third Order Scalar-Tensor Field Theories
Horndeski, Gregory W.
2016-01-01
In a space of 4-dimensions, I will examine constrained variational problems in which the Lagrangian, and constraint scalar density, are concomitants of a (pseudo-Riemannian) metric tensor and its first two derivatives. The Lagrange multiplier for these constrained extremal problems will be a scalar field. For suitable choices of the Lagrangian, and constraint, we can obtain Euler-Lagrange equations which are second order in the scalar field and third order in the metric tensor. The effect of ...
High performance pipelined multiplier with fast carry-save adder
Wu, Angus
1990-01-01
A high-performance pipelined multiplier is described. Its high performance results from the fast carry-save adder basic cell, which has a simple structure and is suitable for the Gate Forest semi-custom environment. The carry-save adder computes the sum and carry within two gate delays. Results show that the proposed adder can operate at 200 MHz in a 2-micron CMOS process; better performance is expected in a Gate Forest realization.
Multiply-negatively charged aluminium clusters and fullerenes
Walsh, Noelle
2008-07-15
Multiply negatively charged aluminium clusters and fullerenes were generated in a Penning trap using the 'electron-bath' technique. Aluminium monoanions were generated using a laser vaporisation source. After this, two-, three- and four-times negatively charged aluminium clusters were generated for the first time. This research marks the first observation of tetra-anionic metal clusters in the gas phase. Additionally, doubly-negatively charged fullerenes were generated. The smallest fullerene dianion observed contained 70 atoms. (orig.)
Performance of a multianode photo multiplier cluster equipped with lenses
Gibson, V; Wotton, S A; Albrecht, E; Eklund, L; Eisenhardt, S; Muheim, F; Playfer, S; Petrolini, A; Easo, S; Halley, A; Barber, G; Duane, A; Price, D; Websdale, D M; Calvi, M; Paganoni, M; Bibby, J; Charles, M J; Harnew, N; Libby, J; Rademacker, J; Smale, N J; Topp-Jørgensen, S; Wilkinson, G; Baker, J; French, M
2001-01-01
Studies of Multi-anode Photo Multiplier Tubes (MaPMTs), which are a possible photo-detector for the LHCb RICHes, are presented. These studies include those of a cluster of MaPMTs equipped with lenses at the SPS beam during the summer of 1999. The read-out electronics used were capable of capturing the data at 40 MHz. Results on the effect of charged particles and magnetic fields on MaPMTs are also presented.
Multiplier ideal sheaves in complex and algebraic geometry
Siu, Yum-Tong
2005-01-01
The application of the method of multiplier ideal sheaves to effective problems in algebraic geometry is briefly discussed. Then its application to the deformational invariance of plurigenera for general compact algebraic manifolds is presented and discussed. Finally, its application to the conjecture of the finite generation of the canonical ring is explored, and the use of complex algebraic geometry in complex Neumann estimates is discussed.
Three states of fiscal multipliers in a small open economy
Simon Naitram; Justin Carter; Shane Lowe
2015-01-01
This research reviews the effects of fiscal expenditure on economic output in a non-linear fashion for the Barbados economy. Using the Markov-switching methodology, fiscal expenditure multipliers are estimated for each stage of the business cycle. The data indicate that a three-regime model is the best fit, capturing recession, normal-growth and boom periods. Our findings suggest that increasing capital expenditure is positively correlated with economic growth at all stages of the busine...
The gas electron multiplier (GEM): Operating principles and applications
Sauli, Fabio
2016-01-01
Introduced by the author in 1997, the Gas Electron Multiplier (GEM) constitutes a powerful addition to the family of fast radiation detectors. Originally developed for particle physics experiments, the device has spawned a large number of developments and applications; a web search yields more than 400 articles on the subject. This note is an attempt to summarize the status of the design, developments and applications of the new detector.
The role of the Jacobi last multiplier and isochronous systems
Partha Guha; Anindya Ghose Choudhury
2011-11-01
We employ Jacobi's last multiplier (JLM) to study planar differential systems. In particular, we examine its role in the transformation of the temporal variable for a system of ODEs originally analysed by Calogero–Leyvraz in the course of their identification of isochronous systems. We also show that the JLM simplifies to a great extent the proofs of isochronicity for Liénard-type equations.
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status, and it is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared with results for which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure derived from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes, or using methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
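The bootstrap-imputation idea can be sketched as repeated Bernoulli draws of each patient's condition status from its model-derived probability; this minimal sketch estimates only prevalence, not the paper's covariate associations, and all names and numbers are illustrative:

```python
import random

def bootstrap_prevalence(probs, n_boot=200, seed=0):
    """Estimate condition prevalence by repeatedly imputing each patient's
    binary status as a Bernoulli draw from a model-derived probability,
    then averaging over replicates. Illustrative sketch only."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_boot):
        imputed = [1 if rng.random() < p else 0 for p in probs]
        draws.append(sum(imputed) / len(imputed))
    return sum(draws) / n_boot

# true mean of these probabilities is 0.3875
prev = bootstrap_prevalence([0.9, 0.1, 0.5, 0.05])
```

Unlike thresholding each probability into a 0/1 code, the draws preserve the uncertainty in each patient's status, which is why the resulting prevalence estimate is nearly unbiased.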
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.
Allen, Genevera I; Tibshirani, Robert
2010-06-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
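The E-step of such EM-type imputation replaces a missing entry by its conditional mean under the current model. A toy bivariate-normal sketch of that single step (illustrative only; the paper's transposable matrix-variate model is far more general):

```python
# One E-step in miniature: a missing second coordinate is replaced by its
# conditional mean given the observed first coordinate, using moments
# estimated from the complete pairs.

def conditional_mean_impute(pairs):
    """pairs: list of (x, y) with y possibly None; returns a completed list."""
    obs = [(x, y) for x, y in pairs if y is not None]
    n = len(obs)
    mx = sum(x for x, _ in obs) / n
    my = sum(y for _, y in obs) / n
    cov = sum((x - mx) * (y - my) for x, y in obs) / n
    var = sum((x - mx) ** 2 for x, _ in obs) / n
    slope = cov / var  # regression coefficient of y on x
    return [(x, y if y is not None else my + slope * (x - mx)) for x, y in pairs]

done = conditional_mean_impute([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, None)])
```

A full EM loop would alternate this imputation with re-estimation of the (here, penalized) mean and covariance parameters until convergence.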
Data Editing and Imputation in Business Surveys Using “R”
Elena Romascanu
2014-06-01
Full Text Available Purpose – Missing data are a recurring problem that can cause bias or lead to inefficient analyses. The objective of this paper is a direct comparison of the features of the two statistical packages R and SPSS, in order to take full advantage of the existing automated methods for data editing and imputation in business surveys (with a proper design of consistency rules) as a partial alternative to the manual editing of data. Approach – Different methods for editing survey data are compared in R, with the 'editrules' and 'survey' packages (which contain transformations commonly used in official statistics), with visualization of missing-value patterns using the 'Amelia' and 'VIM' packages and imputation approaches for longitudinal data using 'VIMGUI'; the performance of another statistical package, SPSS, is compared on the same features. Findings – Data on business statistics received by NISs (National Institutes of Statistics) are not ready to be used for direct analysis due to in-record inconsistencies, errors and missing values in the collected data sets. The appropriate automated methods from the R packages offer the ability to flag the erroneous fields in edit-violating records and to verify the results after the imputation of missing values, providing users with a flexible, less time-consuming approach; such automation is easier to perform in R than with SPSS macro syntax, although macros are very handy.
Missing data imputation of solar radiation data under different atmospheric conditions.
Turrado, Concepción Crespo; López, María Del Carmen Meizoso; Lasheras, Fernando Sánchez; Gómez, Benigno Antonio Rodríguez; Rollé, José Luis Calvo; Juez, Francisco Javier de Cos
2014-10-29
Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In records of this kind, the lack of data and/or the presence of wrong values adversely affects any time-series study. Consequently, when this occurs, a data imputation process must be performed to replace missing data with estimated values. This paper evaluates the multivariate imputation of ten-minute-scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field, such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions was 13.37% for the MICE algorithm, while it was 28.19% for MLR and 31.68% for IDW.
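The chained-equations idea can be sketched for two sensors: each sensor's missing readings are repeatedly re-estimated by regressing on the other sensor until the imputations stabilize. A minimal pure-Python sketch (illustrative; the study used the full MICE algorithm over nine stations):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def chained_impute(a, b_col, iters=20):
    """Two-column chained imputation: fill with column means, then cycle
    regressions of each column on the other, updating only missing slots."""
    a, b_col = a[:], b_col[:]
    miss_a = [i for i, v in enumerate(a) if v is None]
    miss_b = [i for i, v in enumerate(b_col) if v is None]
    def col_mean(col):
        vals = [v for v in col if v is not None]
        return sum(vals) / len(vals)
    for i in miss_a:
        a[i] = col_mean(a)
    for i in miss_b:
        b_col[i] = col_mean(b_col)
    for _ in range(iters):
        s, c = fit_line(b_col, a)
        for i in miss_a:
            a[i] = s * b_col[i] + c
        s, c = fit_line(a, b_col)
        for i in miss_b:
            b_col[i] = s * a[i] + c
    return a, b_col

# sensor a follows a = b/2 on the observed rows; the missing a-value drifts toward 4.0
a, b = chained_impute([1.0, 2.0, 3.0, None], [2.0, 4.0, 6.0, 8.0])
```

Real MICE additionally draws from the conditional predictive distribution rather than plugging in the regression mean, so it preserves imputation uncertainty.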
Almost everywhere convergence of sequences of multiplier operators on local fields
郑世骏; 郑维行
1997-01-01
Let K^n be the n-dimensional vector space over a local field K. Two maximal multiplier theorems on L_p(K^n) are proved for certain multiplier operator sequences associated with regularization and dilation, respectively. Consequently, the a.e. convergence of such multiplier operator sequences is obtained. This sharpens Taibleson's main result and applies to several important singular integral operators on K^n.
On Nilpotent Multipliers of Some Verbal Products of Groups
Hokmabadi, Azam
2010-01-01
The paper is devoted to finding a homomorphic image for the $c$-nilpotent multiplier of the verbal product of a family of groups with respect to a variety ${\\mathcal V}$ when ${\\mathcal V} \\subseteq {\\mathcal N}_{c}$ or ${\\mathcal N}_{c}\\subseteq {\\mathcal V}$. Also a structure of the $c$-nilpotent multiplier of a special case of the verbal product, the nilpotent product, of cyclic groups is given. In fact, we present an explicit formula for the $c$-nilpotent multiplier of the $n$th nilpotent product of the group $G= {\\bf {Z}}\\stackrel{n}{*}...\\stackrel{n}{*}{\\bf {Z}}\\stackrel{n}{*} {\\bf {Z}}_{r_1}\\stackrel{n}{*}...\\stackrel{n}{*}{\\bf{Z}}_{r_t}$, where $r_{i+1}$ divides $r_i$ for all $i$, $1 \\leq i \\leq t-1$, and $(p,r_1)=1$ for any prime $p$ less than or equal to $n+c$, for all positive integers $n$, $c$.
Inferring polyploid phylogenies from multiply-labeled gene trees
Petri Anna
2009-08-01
Full Text Available Abstract Background Gene trees that arise in the context of reconstructing the evolutionary history of polyploid species are often multiply-labeled, that is, the same leaf label can occur several times in a single tree. This property considerably complicates the task of forming a consensus of a collection of such trees compared to usual phylogenetic trees. Results We present a method for computing a consensus tree of multiply-labeled trees. As with the well-known greedy consensus tree approach for phylogenetic trees, our method first breaks the given collection of gene trees into a set of clusters. It then aims to insert these clusters one at a time into a tree, starting with the clusters that are supported by most of the gene trees. As the problem to decide whether a cluster can be inserted into a multiply-labeled tree is computationally hard, we have developed a heuristic method for solving this problem. Conclusion We illustrate the applicability of our method using two collections of trees for plants of the genus Silene, that involve several allopolyploids at different levels.
Quick, “Imputation-free” meta-analysis with proxy-SNPs
Meesters Christian
2012-09-01
Full Text Available Abstract Background Meta-analysis (MA) is widely used to pool genome-wide association studies (GWASes) in order to (a) increase the power to detect strong or weak genotype effects or (b) serve as a verification method. As a consequence of differing SNP panels among genotyping chips, imputation is the method of choice within GWAS consortia to avoid losing too many SNPs in a MA. YAMAS (Yet Another Meta-Analysis Software), however, enables cross-GWAS conclusions prior to finished and polished imputation runs, which can be time-consuming. Results Here we present a fast method to avoid forfeiting SNPs present in only a subset of studies, without relying on imputation. This is accomplished by using reference linkage-disequilibrium data from the 1000 Genomes/HapMap projects to find proxy-SNPs, together with in-phase alleles, for SNPs missing in at least one study. MA is conducted by combining the association effect estimates of a SNP with those of its proxy-SNPs. Our algorithm is implemented in the MA software YAMAS. Association results from GWAS analysis applications can be used as input files for MA, tremendously speeding up MA compared with the conventional imputation approach. We show that our proxy algorithm is well powered and yields valuable ad hoc results, possibly providing an incentive for follow-up studies. We propose our method as a quick screening step prior to imputation-based MA, as well as a main approach for studies without available reference data matching the ethnicities of the study participants. As a proof of principle, we analyzed six dbGaP Type II Diabetes GWASes and found that the proxy algorithm clearly outperforms naïve MA on the p-value level: for 17 out of 23 we observe an improvement on the p-value level by a factor of more than two, and a maximum improvement by a factor of 2127. Conclusions YAMAS is an efficient and fast meta-analysis program which offers various methods, including conventional MA as well as inserting proxy
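The combination step can be sketched as a standard fixed-effect inverse-variance meta-analysis, with a proxy-SNP's allele-aligned estimate substituted where a study lacks the index SNP (the proxy lookup itself is assumed already done; numbers are illustrative):

```python
import math

def inverse_variance_ma(betas, ses):
    """Fixed-effect inverse-variance meta-analysis of per-study effect
    estimates. In the proxy-SNP setting, a study missing the index SNP
    contributes the allele-aligned estimate of its best proxy instead."""
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se

# two studies with equal precision: the pooled effect is their average
beta, se = inverse_variance_ma([0.10, 0.20], [0.05, 0.05])
```

Because proxy estimates are noisier in proportion to the LD between proxy and index SNP, a production implementation would also discount the proxy's weight accordingly.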
Katya L Masconi
Full Text Available Imputation techniques used to handle missing data are based on the principle of replacement. It is widely advocated that multiple imputation is superior to other imputation methods; however, studies have suggested that simple methods for filling missing data can be just as accurate as complex methods. The objective of this study was to implement a number of simple and more complex imputation methods and assess their effect on the performance of undiagnosed-diabetes risk prediction models during external validation. Data from the Cape Town Bellville-South cohort served as the basis for this study. Imputation methods and models were identified via recent systematic reviews. Model discrimination was assessed and compared using the C-statistic and non-parametric methods, before and after recalibration through simple intercept adjustment. The study sample consisted of 1256 individuals, of whom 173 were excluded due to previously diagnosed diabetes. Of the final 1083 individuals, 329 (30.4%) had missing data. Family history had the highest proportion of missing data (25%). Imputation of the outcome, undiagnosed diabetes, was highest under stochastic regression imputation (163 individuals). Overall, deletion resulted in the lowest model performance, while simple imputation yielded the highest C-statistic for the Cambridge Diabetes Risk model, the Kuwaiti Risk model, the Omani Diabetes Risk model and the Rotterdam Predictive model. Multiple imputation yielded the highest C-statistic only for the Rotterdam Predictive model, and this was matched by simpler imputation methods. Deletion was confirmed as a poor technique for handling missing data. However, despite the emphasized disadvantages of simpler imputation methods, this study showed that implementing these methods yields predictive utility for undiagnosed diabetes similar to that of multiple imputation.
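The "simple intercept adjustment" used for recalibration can be sketched as fitting a single additive intercept while keeping the published model's linear predictor fixed; this is an illustrative sketch, not the study's exact procedure, and all names and settings are assumptions:

```python
import math

def recalibrate_intercept(linear_predictors, outcomes, lr=0.1, steps=2000):
    """Intercept-only recalibration of a logistic risk model: the published
    linear predictor is held fixed and a single additive intercept `a` is
    fitted by gradient ascent on the logistic log-likelihood."""
    a = 0.0
    n = len(outcomes)
    for _ in range(steps):
        grad = sum(y - 1 / (1 + math.exp(-(a + lp)))
                   for lp, y in zip(linear_predictors, outcomes)) / n
        a += lr * grad
    return a

# A cohort where events are rarer than the model predicts pulls the intercept down
# (with all linear predictors at 0, the fitted intercept approaches log(0.1/0.9)).
a = recalibrate_intercept([0.0] * 10, [1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
```

Intercept adjustment fixes calibration-in-the-large only; it leaves the C-statistic unchanged, which is why discrimination comparisons before and after recalibration coincide.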
A Design of Modified 64 bit Wallace Multiplier using 45 nm Technology
S.Sunilkumar
2013-04-01
Full Text Available Multipliers play a vital role in digital signal and image processing. The key benefit of a 64-bit multiplier is high-precision computation, but it has to be fast as well. In this paper, we have designed a modified 64-bit Wallace multiplier. The design reduces the number of half adders, which are mainly used in the reduction phase of the multiplier and do not contribute to the reduction of partial products. For the entire multiplication process we have used only 38 half adders. The multiplier is designed in Verilog HDL and implemented using TSMC 45 nm technology. The designed multiplier has a reduced number of half adders in each stage and consumes 15.22 mW at 166 MHz.
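The reduction-phase bookkeeping behind such designs starts from the heights of the partial-product columns, which determine how many 3:2 (full adder) and 2:2 (half adder) compressors each stage needs. A small sketch of that accounting for an n×n array (illustrative only, not the authors' optimized 64-bit reduction tree):

```python
def column_heights(n):
    """Bit-column heights of the n x n partial-product array: the height rises
    to n at the middle column and falls back to 1 at the ends."""
    return [min(i + 1, n, 2 * n - 1 - i) for i in range(2 * n - 1)]

heights = column_heights(4)
assert heights == [1, 2, 3, 4, 3, 2, 1]
assert sum(heights) == 16  # n*n partial-product bits in total
```

A Wallace-style reducer repeatedly compresses each column toward height 2, preferring full adders (which remove a bit per use) over half adders (which merely reshape a column), which is exactly the saving the modified design targets.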
Wood, Andrew R; Perry, John R B; Tanaka, Toshiko; Hernandez, Dena G; Zheng, Hou-Feng; Melzer, David; Gibbs, J Raphael; Nalls, Michael A; Weedon, Michael N; Spector, Tim D; Richards, J Brent; Bandinelli, Stefania; Ferrucci, Luigi; Singleton, Andrew B; Frayling, Timothy M
2013-01-01
Genome-wide association (GWA) studies have been limited by the reliance on common variants present on microarrays or imputable from the HapMap Project data. More recently, the completion of the 1000 Genomes Project has provided variant and haplotype information for several million variants derived from sequencing over 1,000 individuals. To help understand the extent to which more variants (including low frequency (1% ≤ MAF HapMap and 1000 Genomes imputation, respectively, and 9 and 11 that reached a stricter, likely conservative, threshold of PHapMap imputed data. We also detected an association between a low frequency variant and phenotype that was previously missed by HapMap-based imputation approaches. An association between rs112635299 and alpha-1 globulin near the SERPINA gene represented the known association between rs28929474 (MAF = 0.007) and alpha-1 antitrypsin that predisposes to emphysema (P = 2.5×10^-12). Our data provide important proof of principle that 1000 Genomes imputation will detect novel, low frequency-large effect associations.
Determination of Ultimate Torque for Multiply Connected Cross Section Rod
V. L. Danilov
2015-01-01
Full Text Available The aim of this work is to determine the load-carrying capability of a rod with a multiply connected cross-section. The calculation is based on the ideal-plasticity model of the material, so that the desired ultimate torque is the torque at which the entire cross-section goes into a plastic state. The article discusses a cylindrical rod with a multiply connected cross-section. To satisfy the equilibrium equation and the plasticity condition simultaneously, two stress functions Ф and φ are introduced. By mathematical transformations it is proved that Ф is constant along each contour, and a formula to find its values on the contours is obtained. The paper also presents the rationale for the line of stress discontinuity and obtains relationships that allow us to derive the equations of the discontinuity lines for simple interactions of neighbouring contours, such as two straight lines, a straight line and a circle, and circles of different signs of curvature. After substituting the stress function Ф into the boundary condition at the end and performing mathematical transformations, a formula is obtained to determine the ultimate torque for the multiply connected cross-section rod. Using a doubly connected and a triply connected cross-section rod as examples, the application of the ultimate-torque formula is studied. For the doubly connected cross-section rod, the paper offers a formula for the torque as a function of the radius of the rod, the radius of the hole and the distance between their centers. It also demonstrates the dependence of the torque both on the ratio of the radii and on the displacement of the hole, showing that the torque is influenced more by the displacement of the hole than by the ratio of the radii. For the triply connected cross-section rod, the paper shows that the integration requires a careful choice of coordinate system. As an example, the ultimate torque is found by two methods: analytically and by 3D modeling. The method of 3D modeling is based on the Nadai
Making a graph crossing-critical by multiplying its edges
Beaudou, Laurent; Salazar, Gelasio
2011-01-01
A graph is crossing-critical if the removal of any of its edges decreases its crossing number. This work is motivated by the following question: to what extent is crossing-criticality a property that is inherent to the structure of a graph, and to what extent can it be induced on a noncritical graph by multiplying (all or some of) its edges? It is shown that if a nonplanar graph G is obtained by adding an edge to a cubic polyhedral graph, and G is sufficiently connected, then G can be made crossing-critical by a suitable multiplication of edges.
Verilog Implementation of an Efficient Multiplier Using Vedic Mathematics
2015-01-01
In this paper, the design of a 16x16 Vedic multiplier has been proposed using the 16 bit Modified Carry Select Adder and 16 bit Kogge Stone Adder. The Modified Carry Select Adder incorporates the Binary to Excess -1 Converter (BEC) and is known to be the fastest adder as compared to all the conventional adders. The design is implemented using the Verilog Hardware Description Language and tested using the Modelsim simulator. The code is synthesized using the Virtex-7 family with th...
Electron capture dissociation of singly and multiply phosphorylated peptides
Stensballe, A; Jensen, Ole Nørregaard; Olsen, J V
2000-01-01
Analysis of phosphotyrosine and phosphoserine containing peptides by nano-electrospray Fourier transform ion cyclotron resonance (FTICR) mass spectrometry established electron capture dissociation (ECD) as a viable method for phosphopeptide sequencing. In general, ECD spectra of synthetic...... and native phosphopeptides appeared less complex than conventional collision activated dissociation (CAD) mass spectra of these species. ECD of multiply protonated phosphopeptide ions generated mainly c- and z(.)-type peptide fragment ion series. No loss of water, phosphate groups or phosphoric acid from......(III)-affinity chromatography combined with nano-electrospray FTMS/ECD facilitated phosphopeptide analysis and amino acid sequencing from crude proteolytic peptide mixtures....
Robust formation control of marine surface craft using Lagrange multipliers
Ihle, Ivar-Andre F.; Jouffroy, Jerome; Fossen, Thor I.
2006-01-01
This paper presents a formation modelling scheme based on a set of inter-body constraint functions and Lagrange multipliers. Formation control for a fleet of marine craft is achieved by stabilizing the auxiliary constraints such that the desired formation configuration appears. In the proposed framework we develop robust control laws for marine surface vessels to counteract unknown, slowly varying environmental disturbances and measurement noise. Robustness with respect to time-delays in the communication channels is addressed by linearizing the system. Simulations of tugboats subject...
A First Mass Production of Gas Electron Multipliers
Barbeau, P S; Geissinger, J D; Miyamoto, J; Shipsey, I; Yang, R
2003-01-01
We report on the manufacture of a first batch of approximately 2,000 Gas Electron Multipliers (GEMs) using 3M's fully automated roll-to-roll flexible circuit production line. This process allows low-cost, reproducible fabrication of a high volume of GEMs of dimensions up to 30$\times$30 cm$^{2}$. First tests indicate that the resulting GEMs have optimal properties as radiation detectors. Production techniques and preliminary measurements of GEM performance are described. This now demonstrated industrial capability should help further establish the prominence of micropattern gas detectors in accelerator based and non-accelerator particle physics, imaging and photodetection.
Parity nonconservation in dielectronic recombination of multiply charged ions
Kozlov, M G; Currell, F J
2007-01-01
We discuss a parity nonconserving (PNC) asymmetry in the cross section of dielectronic recombination of polarized electrons on multiply charged ions with Z>40. This effect is strongly enhanced for close doubly-excited states of opposite parity in the intermediate compound ion. Such states are known for He-like ions. However, these levels have large energy and large radiative widths which hampers observation of the PNC asymmetry. We argue that accidentally degenerate states of the more complex ions may be more suitable for the corresponding experiment.
Helical channel multiplier package design for space instrumentation
Hoshiko, H. H.
1975-01-01
The package considered is intended for the channel electron multiplier (CEM) detectors which are to be used for the extreme ultraviolet telescope and helium glow detector instruments of the Apollo-Soyuz test project. In the package design selected, the cone of the CEM is supported at the front end by a silicone rubber ring which is molded in place and self-bonded to both the cone and the housing wall. The helix is supported and insulated from the housing by a fiber glass sleeve which is bonded to the inside of the housing.
Second cohomology of Lie rings and the Schur multiplier
Max Horn
2014-06-01
Full Text Available We exhibit an explicit construction of the second cohomology group $H^2(L, A)$ for a Lie ring $L$ and a trivial $L$-module $A$. We show how the elements of $H^2(L, A)$ correspond one-to-one to the equivalence classes of central extensions of $L$ by $A$, where $A$ is now considered as an abelian Lie ring. For a finite Lie ring $L$ we also show that $H^2(L, C^*) \cong M(L)$, where $M(L)$ denotes the Schur multiplier of $L$. These results match precisely the analogous situation in group theory.
The Administration's Crisis Multiplied by the Crisis of the Administrated
Alina Livia NICU
2014-11-01
Full Text Available The starting point of this work is the idea that the concept of "crisis" should be approached without fear. It should be understood as a signal that some changes are appropriate and that rationally considered actions ought to be taken in order to soften the social phenomena occurring within a crisis period. We may say that at the core of the crisis lies the basic substance of progress, and that the moment when a crisis is declared is also the moment of a new start. It is necessary to anticipate the crisis in order to prepare adequate means to soften the shocks created by its onset and to advance progress through action itself. One of the most necessary and useful instruments for smoothing a crisis's effects is early education of citizens about the behavior to be adopted in case of crisis. Officials and public servants are the social actors who constitute the interface between the citizens who will suffer the crisis and the pressure it exerts. The personnel of the public administration must assume the hardest role: reducing the crisis's effects as much as possible. Several possibilities that could reduce the effects of economic, social and political crises are analysed, among which the most important is the quality of juridical norms. Romanian legislation concerning public office is studied with respect to its capacity to motivate public servants to perform at their utmost level, during crisis periods but not only then. The paper emphasizes that panic and uncontrolled social movements in case of a crisis may multiply its negative effects. The personnel of the public administration are directly confronted with the pressure of the negative effects of the crisis, as it is received by the public administration - understood as a structure
Modified approximate 8-point multiplier less DCT like transform
Siddharth Pande
2015-05-01
Full Text Available The Discrete Cosine Transform (DCT) is a widely used transformation for compression in image and video standards such as H.264, MPEG-4, and JPEG. The newest standardized codec is High Efficiency Video Coding (HEVC, or H.265). With the help of the transformation matrix, the computational cost can be reduced dynamically. This paper proposes a novel multiplier-less modified approximate DCT-like transform algorithm and compares it with the exact DCT algorithm and an existing approximate DCT-like transform. The proposed algorithm has lower computational complexity. Furthermore, it is modular in approach and suitable for pipelined VLSI implementation.
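A transform is multiplier-less when every matrix entry is 0 or ±1, so each output coefficient is just a signed sum of inputs. The sketch below uses a hypothetical {0, ±1} matrix `T` (illustrative only, not the paper's proposed transform) to show the add/subtract-only evaluation:

```python
# Illustrative 8x8 matrix with entries in {0, +1, -1}. The rows are
# hypothetical, chosen only to demonstrate the multiplier-less idea;
# the paper's actual transform matrix differs.
T = [
    [ 1,  1,  1,  1,  1,  1,  1,  1],
    [ 1,  1,  0,  0,  0,  0, -1, -1],
    [ 1,  0,  0, -1, -1,  0,  0,  1],
    [ 0,  1, -1,  0,  0,  1, -1,  0],
    [ 1, -1, -1,  1,  1, -1, -1,  1],
    [ 1, -1,  0,  0,  0,  0,  1, -1],
    [ 0,  1, -1,  1, -1,  1, -1,  0],
    [ 0,  0,  1, -1,  1, -1,  0,  0],
]

def multiplierless_transform(x):
    """Apply T to an 8-sample block using only additions/subtractions,
    mirroring how the hardware avoids multiplier cells entirely."""
    out = []
    for row in T:
        acc = 0
        for t, xi in zip(row, x):
            if t == 1:
                acc += xi       # addition only
            elif t == -1:
                acc -= xi       # subtraction only
            # t == 0: input skipped, no hardware at all
        out.append(acc)
    return out
```

In a pipelined VLSI realization each row becomes a small adder/subtractor tree, which is where the complexity savings over an exact DCT come from.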
Lazar, Cosmin; Gatto, Laurent; Ferro, Myriam; Bruley, Christophe; Burger, Thomas
2016-04-01
Missing values are a genuine issue in label-free quantitative proteomics. Recent works have surveyed the different statistical methods to conduct imputation and have compared them on real or simulated data sets and recommended a list of missing value imputation methods for proteomics application. Although insightful, these comparisons do not account for two important facts: (i) depending on the proteomics data set, the missingness mechanism may be of different natures and (ii) each imputation method is devoted to a specific type of missingness mechanism. As a result, we believe that the question at stake is not to find the most accurate imputation method in general but instead the most appropriate one. We describe a series of comparisons that support our views: For instance, we show that a supposedly "under-performing" method (i.e., giving baseline average results), if applied at the "appropriate" time in the data-processing pipeline (before or after peptide aggregation) on a data set with the "appropriate" nature of missing values, can outperform a blindly applied, supposedly "better-performing" method (i.e., the reference method from the state-of-the-art). This leads us to formulate a few practical guidelines regarding the choice and the application of an imputation method in a proteomics context.
Shara, Nawar; Yassin, Sayf A; Valaitis, Eduardas; Wang, Hong; Howard, Barbara V; Wang, Wenyu; Lee, Elisa T; Umans, Jason G
2015-01-01
Kidney and cardiovascular disease are widespread among populations with high prevalence of diabetes, such as American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989-1991), 2 (1993-1995), and 3 (1998-1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing at random models and one non-missing at random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.
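The five strategies are named but not illustrated; two of the simpler ones applied to one participant's serial exam values (adjacent value carried forward, and the mean of that participant's observed measures) can be sketched as follows. This is a hypothetical helper for illustration, not the study's code:

```python
def impute_series(values):
    """Toy versions of two of the study's strategies for one
    participant's serial measurements (None marks a missing exam):
    'adjacent' carries the nearest earlier observed value forward;
    'serial_mean' substitutes the mean of the observed exams."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    adjacent, last = [], None
    for v in values:
        if v is not None:
            last = v
        adjacent.append(v if v is not None else last)
    serial_mean = [v if v is not None else mean for v in values]
    return adjacent, serial_mean
```

Note that neither strategy uses information from other participants, which is one reason they can fail when, as here, data are not missing at random.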
Nawar Shara
Full Text Available Kidney and cardiovascular disease are widespread among populations with high prevalence of diabetes, such as American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989-1991), 2 (1993-1995), and 3 (1998-1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing at random models and one non-missing at random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.
Jiangxiu Zhou
2014-09-01
Full Text Available The purpose of this study is to demonstrate a way of dealing with missing data in clustered randomized trials by doing multiple imputation (MI) with the PAN package in R through SAS. The procedure for doing MI with PAN through SAS is demonstrated in detail in order for researchers to be able to use this procedure with their own data. An illustration of the technique with empirical data was also included. In this illustration the PAN results were compared with pairwise deletion and three types of MI: (1) Normal Model (NM-MI) ignoring the cluster structure; (2) NM-MI with dummy-coded cluster variables (fixed cluster structure); and (3) a hybrid NM-MI which imputes half the time ignoring the cluster structure, and the other half including the dummy-coded cluster variables. The empirical analysis showed that using PAN and the other strategies produced comparable parameter estimates. However, the dummy-coded MI overestimated the intraclass correlation, whereas MI ignoring the cluster structure and the hybrid MI underestimated the intraclass correlation. When compared with PAN, the p-value and standard error for the treatment effect were higher with dummy-coded MI, and lower with MI ignoring the cluster structure, the hybrid MI approach, and pairwise deletion. Previous studies have shown that NM-MI is not appropriate for handling missing data in clustered randomized trials. This approach, in addition to the pairwise deletion approach, leads to a biased intraclass correlation and faulty statistical conclusions. Imputation in clustered randomized trials should be performed with PAN. We have demonstrated an easy way for using PAN through SAS.
Efficient Reversible Montgomery Multiplier and Its Application to Hardware Cryptography
Noor M. Nayeem
2009-01-01
Full Text Available Problem Statement: The Arithmetic Logic Unit (ALU) of a crypto-processor and microchips leak information through power consumption. Although the cryptographic protocols are secured against mathematical attacks, the attackers can break the encryption by measuring the energy consumption. Approach: To thwart attacks, this study proposed the use of reversible logic for designing the ALU of a crypto-processor. Ideally, reversible circuits do not dissipate any energy. If reversible circuits are used, then the attacker would not be able to analyze the power consumption. In order to design the reversible ALU of a crypto-processor, a reversible Carry Save Adder (CSA) using Modified TSG (MTSG) gates and an architecture of the Montgomery multiplier were proposed. For reversible implementation of the Montgomery multiplier, efficient reversible multiplexers and sequential circuits such as reversible registers and shift registers were presented. Results: This study showed that the modified designs perform better than the existing ones in terms of number of gates, number of garbage outputs and quantum cost. Lower bounds of the proposed designs were established by providing relevant theorems and lemmas. Conclusion: The application of reversible circuits is suitable to the field of hardware cryptography.
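The Montgomery multiplication the hardware implements reduces products modulo n using only shifts, masks, and additions (which is why it maps onto carry-save adders and shift registers). A standard software sketch of the REDC step, for reference rather than as the paper's design:

```python
def montgomery_multiply(a: int, b: int, n: int, r_bits: int) -> int:
    """Compute a*b*R^(-1) mod n with R = 2^r_bits (Montgomery REDC).
    Requires n odd and a, b < n. The reduction uses only a multiply,
    a mask by R, and a shift by r_bits - no division by n."""
    R = 1 << r_bits
    # n' such that n * n' == -1 (mod R); precomputed once in hardware
    n_prime = (-pow(n, -1, R)) % R
    t = a * b
    m = (t * n_prime) % R          # mask by R in hardware
    u = (t + m * n) >> r_bits      # shift by r_bits in hardware
    return u - n if u >= n else u  # single conditional subtraction
```

Operands must first be mapped into the Montgomery domain (multiply by R mod n) for chains of modular multiplications, as in RSA or ECC exponentiation.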
Low voltage electron multiplying CCD in a CMOS process
Dunford, Alice; Stefanov, Konstantin; Holland, Andrew
2016-07-01
Low light level and high-speed image sensors as required for space applications can suffer from a decrease in the signal to noise ratio (SNR) due to the photon-starved environment and limitations of the sensor's readout noise. The SNR can be increased by the implementation of Time Delay Integration (TDI) as it allows photoelectrons from multiple exposures to be summed in the charge domain with no added noise. Electron Multiplication (EM) can further improve the SNR and lead to an increase in device performance. However, both techniques have traditionally been confined to Charge Coupled Devices (CCD) due to the efficient charge transfer required. With the increase in demand for CMOS sensors with equivalent or superior functionality and performance, this paper presents findings from the characterisation of a low voltage EMCCD in a CMOS process using advanced design features to increase the electron multiplying gain. By using the CMOS process, it is possible to increase chip integration and functionality and achieve higher readout speeds and reduced pixel size. The presented characterisation results include analysis of the photon transfer curve, the dark current, the electron multiplying gain and analysis of the parameters' dependence on temperature and operating voltage.
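The electron-multiplying gain referred to above compounds over the EM register: each transfer has a small impact-ionisation probability, so the mean gain grows geometrically with the number of stages. A minimal sketch of this standard model (the stage count and probability below are illustrative, not this device's measured values):

```python
def em_gain(p_per_stage: float, n_stages: int) -> float:
    """Mean electron-multiplying gain of an EM register: each of the
    n_stages transfers multiplies the signal by (1 + p) on average,
    where p is the per-stage impact-ionisation probability."""
    return (1.0 + p_per_stage) ** n_stages
```

Because p rises steeply with the multiplication voltage and falls with temperature, the gain's dependence on operating voltage and temperature (as characterised in the paper) is strongly nonlinear.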
Multiplying optical tweezers force using a micro-lever.
Lin, Chih-Lang; Lee, Yi-Hsiung; Lin, Chin-Te; Liu, Yi-Jui; Hwang, Jiann-Lih; Chung, Tien-Tung; Baldeck, Patrice L
2011-10-10
This study presents a photo-driven micro-lever fabricated to multiply optical forces using the two-photon polymerization 3D-microfabrication technique. The micro-lever is a second class lever comprising an optical trapping sphere, a beam, and a pivot. A micro-spring is placed between the short and long arms to characterize the induced force. This design enables precise manipulation of the micro-lever by optical tweezers at the micron scale. Under optical dragging, the sphere placed on the lever beam moves, resulting in torque that induces related force on the spring. The optical force applied at the sphere is approximately 100 to 300 pN, with a laser power of 100 to 300 mW. In this study, the optical tweezers drives the micro-lever successfully. The relationship between the optical force and the spring constant can be determined by using the principle of leverage. The arm ratio design developed in this study multiplies the applied optical force by 9. The experimental results are in good agreement with the simulation of spring property.
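The force multiplication follows directly from torque balance on a second-class lever: the optical force applied on the long arm is amplified by the arm ratio at the spring. A one-line sketch of that principle (arm lengths here are illustrative, not the fabricated dimensions):

```python
def lever_output_force(f_in_pN: float, long_arm: float, short_arm: float) -> float:
    """Second-class lever at equilibrium: torque balance
    F_in * L_long = F_out * L_short, so the force delivered to the
    spring is the optical trapping force times the arm ratio."""
    return f_in_pN * (long_arm / short_arm)
```

With the arm ratio of 9 reported in the abstract, a 100-300 pN optical force becomes roughly 0.9-2.7 nN at the spring.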
MULTIPLIERS AND TENSOR PRODUCTS OF WEIGHTED LP-SPACES
Anonymous
2001-01-01
Let $G$ be a locally compact unimodular group with Haar measure $dx$ and let $\omega$ be a Beurling weight function on $G$ (Reiter, [10]). In this paper the authors define a space $A^{p,q}_\omega(G)$ and prove that $A^{p,q}_\omega(G)$ is a translation-invariant Banach space. Furthermore the authors discuss inclusion properties and show that if $G$ is a locally compact abelian group then $A^{p,q}_\omega(G)$ admits an approximate identity bounded in $L^1_\omega(G)$. It is also proved that the space $L^p_\omega(G) \otimes_{L^1_\omega} l^p_\omega(G)$ is isometrically isomorphic to the space $A^{p,q}_\omega(G)$, and the space of multipliers from $L^p_\omega(G)$ to $L^q_{\omega^{-1}}(G)$ is isometrically isomorphic to the dual of the space $A^{p,q}_\omega(G)$ if and only if $G$ satisfies a property $P_{p,q}$. At the end of this work it is shown that if $G$ is a locally compact abelian group then the space of all multipliers from $L^1_\omega(G)$ to $A^{p,q}_\omega(G)$ is the space $A^{p,q}_\omega(G)$.
Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam
2012-01-01
Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification
Using the Superpopulation Model for Imputations and Variance Computation in Survey Sampling
Petr Novák
2012-03-01
Full Text Available This study is aimed at variance computation techniques for estimates of population characteristics based on survey sampling and imputation. We use the superpopulation regression model, which means that the target variable values for each statistical unit are treated as random realizations of a linear regression model with weighted variance. We focus on regression models with one auxiliary variable and no intercept, which have many applications and straightforward interpretation in business statistics. Furthermore, we deal with cases where the estimates are not independent and thus the covariance must be computed. We also consider chained regression models with auxiliary variables as random variables instead of constants.
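Under a no-intercept superpopulation model with variance proportional to the auxiliary variable, the weighted least-squares slope reduces to the ratio of totals, and missing target values are imputed from it. A minimal sketch of that special case (hypothetical helper, not the paper's estimator code):

```python
def ratio_impute(x, y):
    """Superpopulation model y_i = b*x_i + e_i with Var(e_i)
    proportional to x_i and no intercept: the weighted least-squares
    slope over responding units is b_hat = sum(y) / sum(x), and each
    missing y (None) is imputed as b_hat * x_i."""
    responding = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    b_hat = sum(yi for _, yi in responding) / sum(xi for xi, _ in responding)
    completed = [yi if yi is not None else b_hat * xi for xi, yi in zip(x, y)]
    return b_hat, completed
```

The variance formulas in the paper then account for the fact that imputed values all depend on the same estimated slope, so the completed observations are not independent.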
Diamond Heat-Spreader for Submillimeter-Wave Frequency Multipliers
Lin, Robert H.; Schlecht, Erich T.; Chattopadhyay, Goutam; Gill, John J.; Mehdi, Imran; Siegel, Peter H.; Ward, John S.; Lee, Choonsup; Thomas, Bertrand C.; Maestrini, Alain
2010-01-01
The planar GaAs Schottky diode frequency multiplier is a critical technology for the local oscillator (LO) of submillimeter-wave heterodyne receivers due to its low mass, tunability, long lifetime, and room-temperature operation. A W-band (75-100 GHz) power amplifier followed by a frequency multiplier is the most common approach for submillimeter-wave sources. Its greatest challenge is to provide enough input power to the LO for instruments onboard future planetary missions. Recently, JPL produced 800 mW at 92.5 GHz by combining four MMICs in parallel in a balanced configuration. As more power at W-band becomes available to the multipliers, their power-handling capability becomes more important. High operating temperatures can lead to degradation of conversion efficiency or catastrophic failure. The goal of this innovation is to reduce the thermal resistance by attaching diamond film as a heat-spreader on the backside of multipliers to improve their power-handling capability. Polycrystalline diamond is deposited by hot-filament chemical vapor deposition (CVD). This diamond film acts as a heat-spreader for both the existing 250- and 300-GHz triplers and has a high thermal conductivity (1,000-1,200 W/mK), approximately 2.5 times that of copper (401 W/mK) and 20 times that of GaAs (46 W/mK). It is an electrical insulator (resistivity approximately 10^15 Ohm-cm) with a low relative dielectric constant of 5.7. According to thermal simulation, diamond heat-spreaders reduce the multiplier temperature by at least 200 C at 250 mW of input power, compared to a tripler without diamond. This superior thermal management provides a 100-percent increase in power-handling capability. For example, with this innovation, 40-mW output power has been achieved from a 250-GHz tripler at 350-mW input power, while previous triplers without diamond suffered catastrophic failures. This breakthrough provides a stepping-stone for frequency-multiplier-based LOs up to 3 THz. The future work
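The thermal-conductivity figures quoted above already indicate the margin: for the same geometry, 1-D conduction resistance scales as 1/k. A back-of-envelope sketch (the thickness and area below are illustrative, not the device geometry; real heat spreading is 3-D):

```python
def conduction_resistance(thickness_m: float, k_W_per_mK: float,
                          area_m2: float) -> float:
    """1-D conduction thermal resistance R = t / (k * A), in K/W.
    A crude proxy for comparing substrate materials of equal geometry."""
    return thickness_m / (k_W_per_mK * area_m2)

# Same illustrative slab (50 um thick, 100 um x 100 um footprint):
r_gaas    = conduction_resistance(50e-6,   46.0, 1e-8)  # GaAs, 46 W/mK
r_diamond = conduction_resistance(50e-6, 1100.0, 1e-8)  # CVD diamond
```

The ratio r_gaas / r_diamond is about 24, consistent with the abstract's "20 times greater than GaAs" conductivity figure.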
McClure, Matthew C.; Sonstegard, Tad S.; Wiggans, George R.; Van Eenennaam, Alison L.; Weber, Kristina L.; Penedo, Cecilia T.; Berry, Donagh P.; Flynn, John; Garcia, Jose F.; Carmo, Adriana S.; Regitano, Luciana C. A.; Albuquerque, Milla; Silva, Marcos V. G. B.; Machado, Marco A.; Coffey, Mike; Moore, Kirsty; Boscher, Marie-Yvonne; Genestout, Lucie; Mazza, Raffaele; Taylor, Jeremy F.; Schnabel, Robert D.; Simpson, Barry; Marques, Elisa; McEwan, John C.; Cromie, Andrew; Coutinho, Luiz L.; Kuehn, Larry A.; Keele, John W.; Piper, Emily K.; Cook, Jim; Williams, Robert; Van Tassell, Curtis P.
2013-01-01
To assist cattle producers transition from microsatellite (MS) to single nucleotide polymorphism (SNP) genotyping for parental verification we previously devised an effective and inexpensive method to impute MS alleles from SNP haplotypes. While the reported method was verified with only a limited data set (N = 479) from Brown Swiss, Guernsey, Holstein, and Jersey cattle, some of the MS-SNP haplotype associations were concordant across these phylogenetically diverse breeds. This implied that some haplotypes predate modern breed formation and remain in strong linkage disequilibrium. To expand the utility of MS allele imputation across breeds, MS and SNP data from more than 8000 animals representing 39 breeds (Bos taurus and B. indicus) were used to predict 9410 SNP haplotypes, incorporating an average of 73 SNPs per haplotype, for which alleles from 12 MS markers could be accurately imputed. Approximately 25% of the MS-SNP haplotypes were present in multiple breeds (N = 2 to 36 breeds). These shared haplotypes allowed for MS imputation in breeds that were not represented in the reference population with only a small increase in Mendelian inheritance inconsistencies. Our reported reference haplotypes can be used for any cattle breed and the reported methods can be applied to any species to aid the transition from MS to SNP genetic markers. While ~91% of the animals with imputed alleles for 12 MS markers had ≤1 Mendelian inheritance conflicts with their parents' reported MS genotypes, this figure was 96% for our reference animals, indicating potential errors in the reported MS genotypes. The workflow we suggest autocorrects for genotyping errors and rare haplotypes, by MS genotyping animals whose imputed MS alleles fail parentage verification, and then incorporating those animals into the reference dataset. PMID:24065982
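At its core the imputation step is a lookup: each phased SNP haplotype in the reference is associated with the MS allele it travels with in linkage disequilibrium. A toy sketch of that lookup (the haplotypes and allele sizes below are hypothetical, not the study's reference data):

```python
def impute_ms_genotype(haplotypes, reference):
    """Sketch of the lookup step: map each of an animal's phased SNP
    haplotypes (a sequence of 0/1 alleles) to the microsatellite (MS)
    allele recorded with it in the reference. Returns None for a
    haplotype absent from the reference - in the described workflow
    such animals would be MS-genotyped and added to the reference."""
    return tuple(reference.get(tuple(h)) for h in haplotypes)

# Toy reference table: hypothetical 4-SNP haplotypes -> MS allele size.
REFERENCE = {(0, 1, 1, 0): 182, (1, 1, 0, 0): 176, (0, 0, 1, 1): 180}
```

An animal's two imputed MS alleles can then be checked against its parents' genotypes for Mendelian consistency, which is how the study flags both imputation failures and errors in the original MS records.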
Matthew Charles Mcclure
2013-09-01
Full Text Available To assist cattle producers transition from microsatellite (MS) to single nucleotide polymorphism (SNP) genotyping for parental verification we previously devised an effective and inexpensive method to impute MS alleles from SNP haplotypes. While the reported method was verified with only a limited data set (N=479) from Brown Swiss, Guernsey, Holstein, and Jersey cattle, some of the MS-SNP haplotype associations were concordant across these phylogenetically diverse breeds. This implied that some haplotypes predate modern breed formation and remain in strong linkage disequilibrium. To expand the utility of MS allele imputation across breeds, MS and SNP data from more than 8,000 animals representing 39 breeds (Bos taurus and B. indicus) were used to predict 9,410 SNP haplotypes, incorporating an average of 73 SNPs per haplotype, for which alleles for 12 MS markers could be accurately imputed. Approximately 25% of the MS-SNP haplotypes were present in multiple breeds (N=2 to 36 breeds). These shared haplotypes allowed for MS imputation in breeds that were not represented in the reference population with only a small increase in Mendelian inheritance inconsistencies. Our reported reference haplotypes can be used for any cattle breed and the reported methods can be applied to any species to aid the transition from MS to SNP genetic markers. While ~91% of the animals with imputed alleles for 12 MS markers had ≤1 Mendelian inheritance conflicts with their parents' reported MS genotypes, this figure was 96% for our reference animals, indicating potential errors in the reported MS genotypes. The workflow we suggest autocorrects for genotyping errors and rare haplotypes, by MS genotyping animals whose imputed MS alleles fail parentage verification, and then incorporating those animals into the reference dataset.
Using full-cohort data in nested case-control and case-cohort studies by multiple imputation.
Keogh, Ruth H; White, Ian R
2013-10-15
In many large prospective cohorts, expensive exposure measurements cannot be obtained for all individuals. Exposure-disease association studies are therefore often based on nested case-control or case-cohort studies in which complete information is obtained only for sampled individuals. However, in the full cohort, there may be a large amount of information on cheaply available covariates and possibly a surrogate of the main exposure(s), which typically goes unused. We view the nested case-control or case-cohort study plus the remainder of the cohort as a full-cohort study with missing data. Hence, we propose using multiple imputation (MI) to utilise information in the full cohort when data from the sub-studies are analysed. We use the fully observed data to fit the imputation models. We consider using approximate imputation models and also using rejection sampling to draw imputed values from the true distribution of the missing values given the observed data. Simulation studies show that using MI to utilise full-cohort information in the analysis of nested case-control and case-cohort studies can result in important gains in efficiency, particularly when a surrogate of the main exposure is available in the full cohort. In simulations, this method outperforms counter-matching in nested case-control studies and a weighted analysis for case-cohort studies, both of which use some full-cohort information. Approximate imputation models perform well except when there are interactions or non-linear terms in the outcome model, where imputation using rejection sampling works well. Copyright © 2013 John Wiley & Sons, Ltd.
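After the imputation models are fitted and M completed datasets analysed, the per-imputation estimates are combined with Rubin's rules. A minimal sketch of the standard pooling formulas (these are the textbook rules, not the paper's code):

```python
def pool_rubin(estimates, variances):
    """Rubin's rules for pooling M completed-data analyses.
    Returns (pooled estimate, total variance), where total variance
    adds the average within-imputation variance to the
    (1 + 1/M)-inflated between-imputation variance."""
    m = len(estimates)
    q_bar = sum(estimates) / m                       # pooled estimate
    w_bar = sum(variances) / m                       # within-imputation
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between
    return q_bar, w_bar + (1 + 1 / m) * b
```

The between-imputation component is what carries the extra uncertainty from the exposures that were never measured in the sub-study sample.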
Cook Jonathan A
2008-08-01
Full Text Available Abstract Background Randomised controlled trials (RCTs) are perceived as the gold-standard method for evaluating healthcare interventions, and increasingly include quality of life (QoL) measures. The observed results are susceptible to bias if a substantial proportion of outcome data are missing. The review aimed to determine whether imputation was used to deal with missing QoL outcomes. Methods A random selection of 285 RCTs published during 2005/6 in the British Medical Journal, Lancet, New England Journal of Medicine and Journal of the American Medical Association were identified. Results QoL outcomes were reported in 61 (21%) trials. Six (10%) reported having no missing data, 20 (33%) reported ≤ 10% missing, eleven (18%) 11%-20% missing, and eleven (18%) reported >20% missing. Missingness was unclear in 13 (21%). Missing data were imputed in 19 (31%) of the 61 trials. Imputation was part of the primary analysis in 13 trials, but a sensitivity analysis in six. Last value carried forward was used in 12 trials and multiple imputation in two. Following imputation, the most common analysis method was analysis of covariance (10 trials). Conclusion The majority of studies did not impute missing data and carried out a complete-case analysis. For those studies that did impute missing data, researchers tended to prefer simpler methods of imputation, despite more sophisticated methods being available.
Golino, Hudson F.; Gomes, Cristiano M. A.
2016-01-01
This paper presents a non-parametric imputation technique, named random forest, from the machine learning field. The random forest procedure has two main tuning parameters: the number of trees grown in the prediction and the number of predictors used. Fifty experimental conditions were created in the imputation procedure, with different…
2010-04-01
..., or with the organization's knowledge, approval or acquiescence. The organization's acceptance of the... conduct as follows: (a) Conduct imputed from an individual to an organization. We may impute the... other individual associated with an organization, to that organization when the improper...
Lotz Meredith J
2008-01-01
Full Text Available Abstract Background Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Conclusion Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA
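One common way to quantify how far a matrix is from a low-dimensional subspace is the Shannon entropy of its normalised singular-value spectrum; the abstract's entropy measure may differ in detail, so treat this as an illustrative formulation:

```python
import math

def spectrum_entropy(singular_values):
    """Complexity score in [0, 1]: normalise the singular values to a
    probability vector and take Shannon entropy, scaled by log(k).
    Near 0: energy concentrated in few components (nearly low-rank,
    easy to impute). Near 1: energy spread evenly (hard to impute)."""
    total = sum(singular_values)
    p = [s / total for s in singular_values if s > 0]
    h = -sum(pi * math.log(pi) for pi in p)
    return h / math.log(len(singular_values))
```

An entropy-based selection scheme would compute such a score from the expression matrix's SVD and pick the imputation algorithm empirically best for that complexity regime.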
A multi breed reference improves genotype imputation accuracy in Nordic Red cattle
Brøndum, Rasmus Froberg; Ma, Peipei; Lund, Mogens Sandø;
2012-01-01
The objective of this study was to investigate if a multi breed reference would improve genotype imputation accuracy from 50K to high density (HD) single nucleotide polymorphism (SNP) marker data in Nordic Red Dairy Cattle, compared to using only a single breed reference, and to check… 612,615 SNPs on chromosomes 1-29 remained for analysis. Validation was done by masking markers in true HD data and imputing them using Beagle v. 3.3 and a reference group of either national Red, combined Red, or combined Red and Holstein bulls. Results show a decrease in allele error rate from 2.64, 1.39 and 0.87 percent to 1.75, 0.59 and 0.54 percent for respectively Danish, Swedish and Finnish Red when going from a single national reference to a combined Red reference. The larger error rate in the Danish population was caused by a subgroup of 10 animals showing a large proportion of Holstein genetics…
Andreas Pfaffel
2016-09-01
Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are often faced with a small or moderate number of applicants but must still attempt to estimate the population correlation between predictor and criterion. Therefore, in the present study we investigated the accuracy of population correlation estimates and their associated standard errors for small and moderate sample sizes. We applied multiple imputation by chained equations for continuous and naturally dichotomous criterion variables. The results show that multiple imputation by chained equations is accurate for a continuous criterion variable, even for a small number of applicants, when the selection ratio is not too small. In the case of a naturally dichotomous criterion variable, a small or moderate number of applicants leads to biased estimates when the selection ratio is small. In contrast, the standard error of the population correlation estimate is accurate over a wide range of sample sizes, selection ratios, and true population correlations, for continuous and naturally dichotomous criterion variables, and for direct and indirect range restriction scenarios. The findings of this study provide empirical evidence about the accuracy of the correction, and support researchers and evaluators in their assessment of conditions under which correlation coefficients corrected for range restriction can be trusted.
Application of the Single Imputation Method to Estimate Missing Wind Speed Data in Malaysia
Nurulkamal Masseran
2013-07-01
In almost all research fields, the procedure for handling missing values must be addressed before a detailed analysis can be made. Thus, a suitable method of imputation should be chosen to address the missing value problem. Wind speed has been found in engineering practice to be the most significant parameter in wind power. However, researchers are sometimes faced with the problem of missing wind speed data caused by equipment failure. In this study, we attempt to implement four types of single imputation methods to estimate the wind speed data from three adjacent stations in Malaysia. The methods, known as the site-dependent effect method, the hour mean method, the last and next method, and the row mean method, are compared based on the index of agreement to identify the best method for estimating the missing values. The results indicate that the last and next method is the best of the four methods for estimating the missing data for the wind stations considered.
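The "last and next" method named above can be sketched as follows; this is a minimal illustration of the idea (averaging the nearest observed values before and after a gap), not the study's implementation.

```python
import numpy as np

def last_and_next(series):
    """Fill each missing value with the mean of the last observed value
    before the gap and the next observed value after it."""
    x = np.asarray(series, dtype=float)
    out = x.copy()
    obs = np.where(~np.isnan(x))[0]          # indices of observed values
    for i in np.where(np.isnan(x))[0]:
        prev = obs[obs < i]                  # observed before the gap
        nxt = obs[obs > i]                   # observed after the gap
        cand = ([x[prev[-1]]] if prev.size else []) + \
               ([x[nxt[0]]] if nxt.size else [])
        out[i] = np.mean(cand) if cand else np.nan
    return out
```

Note that reading neighbours from the original array (not the partially filled copy) keeps multi-value gaps from chaining imputed values.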
Spatial Copula Model for Imputing Traffic Flow Data from Remote Microwave Sensors.
Ma, Xiaolei; Luan, Sen; Du, Bowen; Yu, Bin
2017-09-21
Issues of missing data have become increasingly serious with the rapid increase in usage of traffic sensors. Analyses of the Beijing ring expressway have shown that up to 50% of microwave sensors pose missing values. The imputation of missing traffic data is an urgent problem, although a precise solution cannot be easily achieved due to the significant number of missing portions. In this study, copula-based models are proposed for the spatial interpolation of traffic flow from remote traffic microwave sensors. Most existing interpolation methods rely only on covariance functions to depict spatial correlation and are unsuitable for coping with anomalies due to Gaussian assumptions. Copula theory overcomes this issue and provides a connection between the correlation function and the marginal distribution function of traffic flow. To validate copula-based models, a comparison with three kriging methods is conducted. Results indicate that copula-based models outperform kriging methods, especially on roads with irregular traffic patterns. Copula-based models demonstrate significant potential to impute missing data in large-scale transportation networks.
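The key property exploited here, that a copula separates the dependence structure from the marginal distributions, can be illustrated with a generic Gaussian-copula (normal scores) transform. This is a sketch on synthetic data, not the spatial copula model of the paper.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(x):
    """Map a sample to Gaussian-copula space through its empirical CDF."""
    u = rankdata(x) / (len(x) + 1)   # pseudo-observations in (0, 1)
    return norm.ppf(u)

# hypothetical skewed data: dependence survives the marginal transform,
# so correlation is measured free of the (non-Gaussian) marginals
rng = np.random.default_rng(0)
a = rng.exponential(size=500)
b = a + rng.normal(scale=0.5, size=500)
rho = np.corrcoef(normal_scores(a), normal_scores(b))[0, 1]
```

Interpolating in normal-scores space and mapping back through the inverse empirical CDF is one simple way such models avoid imposing Gaussian marginals on traffic flow.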
FCMPSO: An Imputation for Missing Data Features in Heart Disease Classification
Salleh, Mohd Najib Mohd; Ashikin Samat, Nurul
2017-08-01
The application of data mining and machine learning in directing clinical research into possible hidden knowledge is becoming greatly influential in medical areas. Heart disease is a killer disease around the world, and early prevention through efficient methods can help to reduce the mortality numbers. Medical data may contain many uncertainties, as they are fuzzy and vague in nature. Imprecise feature data, such as absent or missing values, can affect the quality of classification results; nevertheless, the remaining complete features can still provide useful information. Therefore, an imputation approach based on Fuzzy C-Means and Particle Swarm Optimization (FCMPSO) is developed in the preprocessing stage to help fill in the missing values. Then, the complete dataset is trained with a classification algorithm, the decision tree. The experiment is trained with the Heart Disease dataset and the performance is analysed using accuracy, precision, and ROC values. Results show that the performance of the decision tree is increased after the application of FCMPSO for imputation.
Whole-Genome Sequencing Coupled to Imputation Discovers Genetic Signals for Anthropometric Traits.
Tachmazidou, Ioanna; Süveges, Dániel; Min, Josine L; Ritchie, Graham R S; Steinberg, Julia; Walter, Klaudia; Iotchkova, Valentina; Schwartzentruber, Jeremy; Huang, Jie; Memari, Yasin; McCarthy, Shane; Crawford, Andrew A; Bombieri, Cristina; Cocca, Massimiliano; Farmaki, Aliki-Eleni; Gaunt, Tom R; Jousilahti, Pekka; Kooijman, Marjolein N; Lehne, Benjamin; Malerba, Giovanni; Männistö, Satu; Matchan, Angela; Medina-Gomez, Carolina; Metrustry, Sarah J; Nag, Abhishek; Ntalla, Ioanna; Paternoster, Lavinia; Rayner, Nigel W; Sala, Cinzia; Scott, William R; Shihab, Hashem A; Southam, Lorraine; St Pourcain, Beate; Traglia, Michela; Trajanoska, Katerina; Zaza, Gialuigi; Zhang, Weihua; Artigas, María S; Bansal, Narinder; Benn, Marianne; Chen, Zhongsheng; Danecek, Petr; Lin, Wei-Yu; Locke, Adam; Luan, Jian'an; Manning, Alisa K; Mulas, Antonella; Sidore, Carlo; Tybjaerg-Hansen, Anne; Varbo, Anette; Zoledziewska, Magdalena; Finan, Chris; Hatzikotoulas, Konstantinos; Hendricks, Audrey E; Kemp, John P; Moayyeri, Alireza; Panoutsopoulou, Kalliope; Szpak, Michal; Wilson, Scott G; Boehnke, Michael; Cucca, Francesco; Di Angelantonio, Emanuele; Langenberg, Claudia; Lindgren, Cecilia; McCarthy, Mark I; Morris, Andrew P; Nordestgaard, Børge G; Scott, Robert A; Tobin, Martin D; Wareham, Nicholas J; Burton, Paul; Chambers, John C; Smith, George Davey; Dedoussis, George; Felix, Janine F; Franco, Oscar H; Gambaro, Giovanni; Gasparini, Paolo; Hammond, Christopher J; Hofman, Albert; Jaddoe, Vincent W V; Kleber, Marcus; Kooner, Jaspal S; Perola, Markus; Relton, Caroline; Ring, Susan M; Rivadeneira, Fernando; Salomaa, Veikko; Spector, Timothy D; Stegle, Oliver; Toniolo, Daniela; Uitterlinden, André G; Barroso, Inês; Greenwood, Celia M T; Perry, John R B; Walker, Brian R; Butterworth, Adam S; Xue, Yali; Durbin, Richard; Small, Kerrin S; Soranzo, Nicole; Timpson, Nicholas J; Zeggini, Eleftheria
2017-06-01
Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader allelic architecture of 12 anthropometric traits associated with height, body mass, and fat distribution in up to 267,616 individuals. We report 106 genome-wide significant signals that have not been previously identified, including 9 low-frequency variants pointing to functional candidates. Of the 106 signals, 6 are in genomic regions that have not been implicated with related traits before, 28 are independent signals at previously reported regions, and 72 represent previously reported signals for a different anthropometric trait. 71% of signals reside within genes and fine mapping resolves 23 signals to one or two likely causal variants. We confirm genetic overlap between human monogenic and polygenic anthropometric traits and find signal enrichment in cis expression QTLs in relevant tissues. Our results highlight the potential of WGS strategies to enhance biologically relevant discoveries across the frequency spectrum. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Bridging a Survey Redesign Using Multiple Imputation: An Application to the 2014 CPS ASEC
Rothbaum Jonathan
2017-03-01
The Current Population Survey Annual Social and Economic Supplement (CPS ASEC) serves as the data source for official income, poverty, and inequality statistics in the United States. In 2014, the CPS ASEC questionnaire was redesigned to improve data quality and to reduce misreporting, item nonresponse, and errors resulting from respondent fatigue. The sample was split into two groups, with nearly 70% receiving the traditional instrument and 30% receiving the redesigned instrument. Due to the relatively small redesign sample, analyses of changes in income and poverty between this and future years may lack sufficient power, especially for subgroups. The traditional sample is treated as if the responses were missing for income sources targeted by the redesign, and multiple imputation is used to generate plausible responses. A flexible imputation technique is used to place individuals into strata along two dimensions: (1) their probability of income recipiency and (2) their expected income conditional on recipiency for each income source. By matching on these two dimensions, this approach combines the ideas of propensity score matching and predictive mean matching. In this article, this approach is implemented, the matching models are evaluated using diagnostics, and the results are analyzed.
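The donor-matching idea, imputing a missing response with the observed value of a case whose predicted mean is close, is the core of predictive mean matching. A minimal single-covariate sketch (not the article's two-dimensional stratification) might look like:

```python
import numpy as np

def pmm_impute(y_obs, x_obs, x_mis, k=5, rng=None):
    """Predictive mean matching with one covariate: fit y ~ x on the
    observed cases, then impute each missing case by drawing the observed
    y of one of its k nearest donors in predicted-mean space."""
    rng = np.random.default_rng() if rng is None else rng
    A = np.column_stack([np.ones(len(x_obs)), x_obs])
    beta, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
    pred_obs = A @ beta
    pred_mis = np.column_stack([np.ones(len(x_mis)), x_mis]) @ beta
    out = np.empty(len(x_mis))
    for j, p in enumerate(pred_mis):
        donors = np.argsort(np.abs(pred_obs - p))[:k]   # k closest donors
        out[j] = y_obs[rng.choice(donors)]
    return out
```

Because imputed values are always observed responses, PMM preserves the empirical support of the income distribution, which matters for statistics like poverty rates.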
Mägi, Reedik; Asimit, Jennifer L; Day-Williams, Aaron G; Zeggini, Eleftheria; Morris, Andrew P
2012-12-01
Genome-wide association studies have been successful in identifying loci contributing effects to a range of complex human traits. The majority of reproducible associations within these loci are with common variants, each of modest effect, which together explain only a small proportion of heritability. It has been suggested that much of the unexplained genetic component of complex traits can thus be attributed to rare variation. However, genome-wide association study genotyping chips have been designed primarily to capture common variation, and thus are underpowered to detect the effects of rare variants. Nevertheless, we demonstrate here, by simulation, that imputation from an existing scaffold of genome-wide genotype data up to high-density reference panels has the potential to identify rare variant associations with complex traits, without the need for costly re-sequencing experiments. By application of this approach to genome-wide association studies of seven common complex diseases, imputed up to publicly available reference panels, we identify genome-wide significant evidence of rare variant association in PRDM10 with coronary artery disease and multiple genes in the major histocompatibility complex (MHC) with type 1 diabetes. The results of our analyses highlight that genome-wide association studies have the potential to offer an exciting opportunity for gene discovery through association with rare variants, conceivably leading to substantial advancements in our understanding of the genetic architecture underlying complex human traits.
Low Power Floating Point Computation Sharing Multiplier for Signal Processing Applications
Sivanantham S
2013-04-01
Design of low power, higher performance digital signal processing elements is a major requirement in ultra deep sub-micron technology. This paper presents an IEEE-754 standard compatible single precision Floating-point Computation SHaring Multiplier (FCSHM) scheme suitable for low-power and high-speed signal processing applications. The floating-point multiplier used at the filter taps effectively uses the computation re-use concept. Experimental results on a 10-tap programmable FIR filter show that the proposed multiplier scheme can provide a power reduction of 39.7% and significant improvements in performance compared to conventional floating-point carry save array multiplier implementations.
Fpga Implementation of 8-Bit Vedic Multiplier by Using Complex Numbers
Gundlapalle Nandakishore,
2014-06-01
This paper describes the implementation of an 8-bit Vedic multiplier using complex numbers. A previous technique implemented an 8-bit Vedic multiplier using a barrel shifter on an FPGA. Comparing the two techniques, the present one reduces the propagation delay, so the processing speed is higher: the barrel-shifter design has a propagation delay of nearly 22 ns, whereas the complex-number design has a delay of 19 ns. The design is implemented and verified with an FPGA and the ISE simulator. The core was implemented on the Spartan 3E starter board, and the language used is Verilog.
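The vertical-and-crosswise (Urdhva Tiryagbhyam) scheme underlying Vedic multipliers can be modeled in software. A bit-level sketch for illustration only (the FPGA design itself is in Verilog):

```python
def vedic_mul(a_bits, b_bits):
    """Urdhva Tiryagbhyam (vertical-and-crosswise): column k of the
    product collects all partial products a_i & b_j with i + j == k,
    then a single carry pass resolves the columns."""
    n = len(a_bits)
    cols = [0] * (2 * n)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_bits[i] & b_bits[j]
    carry, out = 0, []
    for c in cols:
        s = c + carry
        out.append(s & 1)       # keep the column's low bit
        carry = s >> 1          # propagate the rest as carry
    return out                  # little-endian product bits

def to_bits(x, n):
    return [(x >> i) & 1 for i in range(n)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))
```

In hardware, the appeal is that all column partial products are generated in parallel, leaving only the carry resolution on the critical path.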
T-fuzzy multiply positive implicative BCC-ideals of BCC-algebras
Jianming Zhan; Zhisong Tan
2003-01-01
The concept of fuzzy multiply positive BCC-ideals of BCC-algebras is introduced, and then some related results are obtained. Moreover, we introduce the concept of T-fuzzy multiply positive implicative BCC-ideals of BCC-algebras and investigate T-product of T-fuzzy multiply positive implicative BCC-ideals of BCC-algebras, examining its properties. Using a t-norm T, the direct product and T-product of T-fuzzy multiply positive implicative BCC-ideals of BCC-algebras are discussed and their...
Improved 64-bit Radix-16 Booth Multiplier Based on Partial Product Array Height Reduction
Antelo, Elisardo; Montuschi, Paolo; Nannarelli, Alberto
2016-01-01
In this paper, we describe an optimization for binary radix-16 (modified) Booth recoded multipliers to reduce the maximum height of the partial product columns to ⌈n/4⌉ for [Formula: see text] unsigned operands. This is in contrast to the conventional maximum height of ⌈(n+1)/4⌉. Therefore, ... to be included in the partial product array without increasing the delay. The method can be extended to Booth recoded radix-8 multipliers, signed multipliers, combined signed/unsigned multipliers, and other values of n.
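For readers unfamiliar with Booth recoding, a radix-4 sketch shows the idea (the paper treats radix-16, where digits range over {-8, ..., 8}): each recoded digit selects a shifted, possibly negated copy of the multiplicand as one partial product, roughly halving the number of rows.

```python
def booth_radix4(m, n):
    """Radix-4 (modified) Booth recoding of the n-bit two's-complement
    multiplier m (n even): returns digits d_i in {-2, -1, 0, 1, 2} such
    that m == sum(d_i * 4**i)."""
    b = [(m >> i) & 1 for i in range(n)]
    ext = [0] + b            # implicit b_{-1} = 0 to the right of the LSB
    # each digit looks at an overlapping 3-bit window b_{2i-1}, b_{2i}, b_{2i+1}
    return [ext[2*i] + ext[2*i + 1] - 2*ext[2*i + 2] for i in range(n // 2)]
```

With only n/2 partial products, each a trivial shift or negation, the adder tree shrinks; the paper's contribution is squeezing the column heights of such an array further.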
Module homomorphisms and multipliers on locally compact quantum groups
Ramezanpour, M
2009-01-01
For a Banach algebra $A$ with a bounded approximate identity, we investigate the $A$-module homomorphisms of certain introverted subspaces of $A^*$, and show that all $A$-module homomorphisms of $A^*$ are normal if and only if $A$ is an ideal of $A^{**}$. We obtain some characterizations of compactness and discreteness for a locally compact quantum group $\G$. Furthermore, in the co-amenable case we prove that the multiplier algebra of $\LL$ can be identified with $\MG$. As a consequence, we prove that $\G$ is compact if and only if $\LUC={\rm WAP}(\G)$ and $\MG\cong\mathcal{Z}({\rm LUC}(\G)^*)$; this partially answers a problem raised by Volker Runde.
Antiproton beam profile measurements using Gas Electron Multipliers
Pinto, Serge Duarte; Spanggaard, Jens; Tranquille, Gerard
2011-01-01
The new beam profile measurement for the Antiproton Decelerator (AD) at CERN is based on a single Gas Electron Multiplier (GEM) with a 2D readout structure. This detector is very light, ~0.4% X_0, as required by the low energy of the antiprotons, 5.3 MeV. This overcomes the problems previously encountered with multi-wire proportional chambers (MWPC) for the same purpose, where beam interactions with the detector severely affect the obtained profiles. A prototype was installed and successfully tested in late 2010, with another five detectors now installed in the ASACUSA and AEgIS beam lines. We will provide a detailed description of the detector and discuss the results obtained. The success of these detectors in the AD makes GEM-based detectors likely candidates for upgrade of the beam profile monitors in all experimental areas at CERN. The various types of MWPC currently in use are aging and becoming increasingly difficult to maintain.
Lagrange multiplier for perishable inventory model considering warehouse capacity planning
Amran, Tiena Gustina; Fatima, Zenny
2017-06-01
This paper presents a Lagrange multiplier approach for solving perishable raw material inventory planning considering warehouse capacity. A food company faced an issue of managing perishable raw materials and marinades which have limited shelf life. Another constraint to be considered was the capacity of the warehouse. Therefore, an inventory model considering shelf life and raw material warehouse capacity is needed in order to minimize the company's inventory cost. The inventory model implemented in this study was the adapted economic order quantity (EOQ) model, which is optimized using a Lagrange multiplier. The model and solution approach were applied to a case study in a food manufacturer. The result showed that the total inventory cost decreased 2.42% after applying the proposed approach.
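A sketch of the general mechanism under simplifying assumptions (two items, linear warehouse usage w_i per unit, hypothetical data; the paper's shelf-life terms are omitted): the multiplier inflates each item's effective holding cost until the shared capacity constraint binds.

```python
import math

def eoq_with_capacity(D, S, h, w, W):
    """Adapted EOQ under a shared warehouse constraint sum_i w_i*Q_i <= W,
    via a Lagrange multiplier lam: Q_i(lam) = sqrt(2*D_i*S_i / (h_i + 2*lam*w_i)).
    D = annual demands, S = order costs, h = holding costs, w = space per unit."""
    def Q(lam):
        return [math.sqrt(2*d*s / (hh + 2*lam*ww))
                for d, s, hh, ww in zip(D, S, h, w)]
    def used(lam):
        return sum(wi*qi for wi, qi in zip(w, Q(lam)))
    if used(0.0) <= W:
        return Q(0.0)              # constraint slack: plain EOQ is optimal
    lo, hi = 0.0, 1.0
    while used(hi) > W:            # bracket the multiplier
        hi *= 2
    for _ in range(80):            # bisect until the constraint binds
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if used(mid) > W else (lo, mid)
    return Q(hi)
```

The bisection exploits monotonicity: raising lam shrinks every order quantity, so the capacity used decreases continuously toward W.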
Domingues M. O.
2013-12-01
We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution.
Electromagnetic Radiation in Multiply Connected Robertson-Walker Cosmologies
Tomaschitz, R
1993-01-01
Maxwell's equations on a topologically nontrivial cosmological background are studied. The cosmology is locally determined by a Robertson-Walker line element, but the spacelike slices are open hyperbolic manifolds, whose topology and geometry may vary in time. In this context the spectral resolution of Maxwell's equations in terms of horospherical elementary waves generated at infinity of hyperbolic space is given. The wave fronts are orthogonal to bundles of unstable geodesic rays, and the eikonal of geometric optics appears just as the phase of the horospherical waves. This fact is used to attach to the unstable geodesic rays a quantum mechanical momentum. In doing so the quantized energy-momentum tensor of the radiation field is constructed in a geometrically and dynamically transparent way, without appealing to the intricacies of the second quantization. In particular Planck's radiation formula, and the bearing of the multiply connected topology on the fluctuations in the temperature of the background rad...
Scaled AAN for Fixed-Point Multiplier-Free IDCT
P. P. Zhu
2009-01-01
An efficient algorithm derived from the AAN algorithm (proposed by Arai, Agui, and Nakajima in 1988) for computing the Inverse Discrete Cosine Transform (IDCT) is presented. We replace the multiplications in the conventional AAN algorithm with additions and shifts to realize the fixed-point and multiplier-free computation of the IDCT, and adopt coefficient and compensation matrices to improve the precision of the algorithm. Our 1D IDCT can be implemented by 46 additions and 20 shifts. Due to the absence of the multiplications, this modified algorithm takes less time than the conventional AAN algorithm. The algorithm has low drift in decoding due to the higher computational precision, which fully complies with IEEE 1180 and ISO/IEC 23002-1 specifications. The implementation of the novel fast algorithm for 32-bit hardware is discussed, and the implementations for 24-bit and 16-bit hardware are also introduced, which are more suitable for mobile communication devices.
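The multiplier-free idea, replacing each constant multiplication with a short sequence of shifts and adds, can be shown in isolation. The constant below is cos(pi/4) in 8-bit fixed point, chosen for illustration; the actual AAN coefficients and bit widths differ.

```python
def shift_add_mul(x, shifts):
    """Multiply integer x by a constant written as a sum of powers of
    two, using only shifts and adds: 181 = 128 + 32 + 16 + 4 + 1,
    i.e. shifts (7, 5, 4, 2, 0)."""
    return sum(x << s for s in shifts)

# fixed-point multiply by cos(pi/4): 181/256 is an 8-bit approximation,
# so five adds and one final right shift replace a real multiplication
def times_cos45(x):
    return shift_add_mul(x, (7, 5, 4, 2, 0)) >> 8
```

Fewer terms in the shift set mean cheaper hardware but larger rounding error, which is exactly the trade-off the paper's compensation matrices manage.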
Lagrange Multipliers and Third Order Scalar-Tensor Field Theories
Horndeski, Gregory W
2016-01-01
In a space of 4-dimensions, I will examine constrained variational problems in which the Lagrangian, and constraint scalar density, are concomitants of a (pseudo-Riemannian) metric tensor and its first two derivatives. The Lagrange multiplier for these constrained extremal problems will be a scalar field. For suitable choices of the Lagrangian, and constraint, we can obtain Euler-Lagrange equations which are second order in the scalar field and third order in the metric tensor. The effect of disformal transformations on the constraint Lagrangians, and their generalizations, is examined. This will yield other second order scalar-tensor Lagrangians which yield field equations which are at most of third order. No attempt is made to construct all possible third order scalar-tensor Euler-Lagrange equations in a 4-space, although nine classes of such field equations are presented. Two of these classes admit subclasses which yield conformally invariant field equations. A few remarks on scalar-tensor-connection theor...
MULTIPLIED SPACE IN SARTRE: HUIS-CLOS
Corina-Amelia GEORGESCU
2013-05-01
From a historical point of view, the twentieth century is the most turbulent century in the known history of humanity, characterized by several major events. It is in this context that Jean-Paul Sartre made his way into French literary life, bringing to it a new breath. Our work sets out to analyze the play Huis clos in order to show that, when speaking of the space of imprisonment, one must take into account that there is not a single type of space, and that this space multiplies precisely to convey the impossibility of escaping it.
Vortex generated fluid flows in multiply connected domains
Zemlyanova, Anna; Handley, Demond
2016-01-01
A fluid flow in a multiply connected domain generated by an arbitrary number of point vortices is considered. A stream function for this flow is constructed as a limit of a certain functional sequence using the method of images. The convergence of this sequence is discussed, and the speed of convergence is determined explicitly. The presented formulas allow for the easy computation of the values of the stream function with arbitrary precision in the case of well-separated cylinders. The considered problem is important for applications such as eddy flows in the oceans. Moreover, since finding the stream function of the flow is essentially identical to finding the modified Green's function for Laplace's equation, the presented method can be applied to a more general class of applied problems which involve solving the Dirichlet problem for Laplace's equation.
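For the single-cylinder building block, the method of images gives the stream function in closed form (the circle theorem places an opposite-sign image inside the cylinder plus a compensating vortex at the center); the paper's functional sequence iterates this construction across many cylinders. A sketch:

```python
import numpy as np

def stream_function(z, z0, gamma=1.0, a=1.0):
    """Stream function at complex point z of a point vortex of strength
    gamma at z0 (|z0| > a), outside a cylinder of radius a, via the
    method of images: image vortex at a**2/conj(z0), plus one at 0."""
    zi = a**2 / np.conj(z0)                 # image point inside the cylinder
    return -gamma / (2 * np.pi) * (np.log(abs(z - z0))
                                   - np.log(abs(z - zi))
                                   + np.log(abs(z)))
```

The defining check is that the cylinder boundary |z| = a is a streamline, i.e. the stream function is constant there.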
Fixed Width Booth Multiplier Based on PEB Circuit
V.Vidya Devi
2012-04-01
In this brief, a probabilistic estimation bias (PEB) circuit for a fixed-width two's complement Booth multiplier is proposed. The proposed PEB circuit is derived from theoretical computation, instead of exhaustive simulations and heuristic compensation strategies that tend to introduce curve-fitting errors and exponentially growing simulation time. Consequently, the proposed PEB circuit provides a smaller area and a lower truncation error compared with existing works. Implemented in an 8 × 8 2-D discrete cosine transform (DCT) core, the DCT core using the proposed PEB Booth multiplier improves the peak signal-to-noise ratio by 17 dB with only a 2% area penalty compared with the direct-truncated method.
Proximity effects in cold gases of multiply charged atoms (Review)
Chikina, I.; Shikin, V.
2016-07-01
Possible proximity effects in gases of cold, multiply charged atoms are discussed. Here we deal with rarefied gases with densities n_d of multiply charged (Z ≫ 1) atoms at low temperatures in the well-known Thomas-Fermi (TF) approximation, which can be used to evaluate the statistical properties of single atoms. In order to retain the advantages of the TF formalism, which is successful for symmetric problems, the external boundary conditions accounting for the finiteness of the density of atoms (donors), n_d ≠ 0, are also symmetrized (using a spherical Wigner-Seitz cell) and formulated in a standard way that conserves the total charge within the cell. The model shows that at zero temperature in a rarefied gas of multiply charged atoms there is an effective long-range interaction E_proxi(n_d), the sign of which depends on the properties of the outer shells of individual atoms. The long-range character of the interaction E_proxi is evaluated by comparing it with the properties of the well-known London dispersive attraction E_Lond(n_d) in gases. For the noble gases argon, krypton, and xenon E_proxi > 0, and for the alkali and alkaline-earth elements E_proxi ... neutral complexes into charged fragments. This phenomenon appears consistently in the TF theory through the temperature dependence of the different versions of E_proxi. The anomaly in the thermal proximity effect shows up in the following way: for T ≠ 0 there is no equilibrium solution of TF statistics for single multiply charged atoms in a vacuum when the effect is present. Instability is suppressed in a Wigner-Seitz model under the assumption that there are no electron fluxes through the outer boundary R^3 ∝ n_d^{-1} of a Wigner-Seitz cell. E_proxi corresponds to the definition of the correlation energy in a gas of interacting particles. This review is written so as to enable comparison of the results of the TF formalism with the standard assumptions of the correlation theory for classical plasmas. The classic
Simultaneous least squares fitter based on the Lagrange multiplier method
Guan, Yinghui; Zheng, Yangheng; Zhu, Yong-Sheng
2013-01-01
We developed a least squares fitter used for extracting expected physics parameters from correlated experimental data in high energy physics. This fitter considers the correlations among the observables and handles the nonlinearity using linearization during the $\chi^2$ minimization. This method can naturally be extended to the analysis with external inputs. By incorporating Lagrange multipliers, the fitter includes constraints among the measured observables and the parameters of interest. We applied this fitter to the study of the $D^{0}-\bar{D}^{0}$ mixing parameters as the test-bed based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties and the approach is credible.
A Lagrange multiplier based divide and conquer finite element algorithm
Farhat, C.
1991-01-01
A novel domain decomposition method based on a hybrid variational principle is presented. Prior to any computation, a given finite element mesh is torn into a set of totally disconnected submeshes. First, an incomplete solution is computed in each subdomain. Next, the compatibility of the displacement field at the interface nodes is enforced via discrete, polynomial and/or piecewise polynomial Lagrange multipliers. In the static case, each floating subdomain induces a local singularity that is resolved very efficiently. The interface problem associated with this domain decomposition method is, in general, indefinite and of variable size. A dedicated conjugate projected gradient algorithm is developed for solving the latter problem when it is not feasible to explicitly assemble the interface operator. When implemented on local memory multiprocessors, the proposed methodology requires less interprocessor communication than the classical method of substructuring. It is also suitable for parallel/vector computers with shared memory and compares favorably with factorization based parallel direct methods.
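The core saddle-point structure, subdomain stiffness blocks coupled by Lagrange multipliers that enforce interface compatibility, can be written down directly for a toy problem. This dense assembly is purely for clarity: the method described above never forms this system explicitly and treats floating (singular) subdomains separately.

```python
import numpy as np

def solve_with_interface(K1, f1, K2, f2, B1, B2):
    """Couple two subdomain systems K_i u_i = f_i by enforcing the
    interface constraint B1 @ u1 - B2 @ u2 = 0 with Lagrange
    multipliers lam, via one dense KKT (saddle-point) solve."""
    n1, n2, m = K1.shape[0], K2.shape[0], B1.shape[0]
    Z = np.zeros
    A = np.block([
        [K1,          Z((n1, n2)),  B1.T],   # K1 u1 + B1^T lam = f1
        [Z((n2, n1)), K2,          -B2.T],   # K2 u2 - B2^T lam = f2
        [B1,          -B2,          Z((m, m))],  # B1 u1 - B2 u2 = 0
    ])
    rhs = np.concatenate([f1, f2, np.zeros(m)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n1], sol[n1:n1 + n2], sol[n1 + n2:]
```

The multipliers have a physical reading as interface reaction forces; the paper's conjugate projected gradient iterates on the (smaller) interface problem in lam instead of solving the full block system.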
Aging measurements with the gas electron multiplier (GEM)
Altunbas, M C; Kappler, S; Ketzer, B; Ropelewski, Leszek; Sauli, Fabio; Simon, F
2003-01-01
Continuing previous aging measurements with detectors based on the Gas Electron Multiplier (GEM), a $31\\times 31$cm$^2$ triple-GEM detector, as used in the small area tracking of the COMPASS experiment at CERN, was investigated. With a detector identical to those installed in the experiment, long-term, high-rate exposures to $8.9$keV X-ray radiation were performed to study its aging properties. In standard operation conditions, with Ar:CO$_2$ (70:30) filling and operated at an effective gain of $8.5\\cdot 10^3$, no change in gain and energy resolution is observed after collecting a total charge of 7mC/mm$^2$, corresponding to seven years of normal operation. This observation confirms previous results demonstrating the relative insensitivity of GEM detectors to aging, even when manufactured with common materials.
Multiplying steady-state culture in multi-reactor system.
Erm, Sten; Adamberg, Kaarel; Vilu, Raivo
2014-11-01
Cultivation of microorganisms in batch experiments is fast and economical but the conditions therein change constantly, rendering quantitative data interpretation difficult. By using chemostat with controlled environmental conditions the physiological state of microorganisms is fixed; however, the unavoidable stabilization phase makes continuous methods resource consuming. Material can be spared by using micro scale devices, which however have limited analysis and process control capabilities. Described herein are a method and a system combining the high throughput of batch with the controlled environment of continuous cultivations. Microorganisms were prepared in one bioreactor followed by culture distribution into a network of bioreactors and continuation of independent steady state experiments therein. Accelerostat cultivation with statistical analysis of growth parameters demonstrated non-compromised physiological state following distribution, thus the method effectively multiplied steady state culture of microorganisms. The theoretical efficiency of the system was evaluated in inhibitory compound analysis using repeated chemostat to chemostat transfers.
Audiovisual narratives : creative processes of SIdade and MultipliSIdade
2011-01-01
Abstract: In this dissertation I reflect on the creative processes of two narrative audiovisual works of my own authorship, SIdade* and MultipliSIdade**. As a line of thought, I adopt the question of the relationship between the fragment and the whole, with regard to the poetics and the construction techniques of the works. The poetics of the work concerns the development of a perceptive gaze on the contemporary urban environment, both in relation to the significant fragments that point to an identity of the urban whole, ...
Multiply-warped product metrics and reduction of Einstein equations
Gholami, F; Haji-Badali, A
2016-01-01
It is shown that for every multidimensional metric in the multiply warped product form $\bar{M} = K\times_{f_1} M_1\times_{f_2}M_2$ with warp functions $f_1$, $f_2$, associated to the submanifolds $M_1$, $M_2$ of dimensions $n_1$, $n_2$ respectively, one can find the corresponding Einstein equations $\bar{G}_{AB}=-\bar{\Lambda}\bar{g}_{AB}$, with cosmological constant $\bar{\Lambda}$, which are reducible to the Einstein equations $G_{\alpha\beta} = -\Lambda_1 g_{\alpha\beta}$ and $G_{ij} =-\Lambda_2 h_{ij}$ on the submanifolds $M_1$, $M_2$, with cosmological constants ${\Lambda_1}$ and ${\Lambda_2}$, respectively, where $\bar{\Lambda}$, ${\Lambda_1}$ and ${\Lambda_2}$ are functions of ${f_1}$, ${f_2}$ and $n_1$, $n_2$.
On Lagrange Multipliers in Work with Quality and Reliability Assurance
Vidal, Rene Victor Valqui; Becker, P.
1986-01-01
In optimizing some property of a system, reliability say, a designer usually has to accept certain constraints regarding cost, completion time, volume, weight, etc. The solution of optimization problems with boundary constraints can be helped substantially by the use of Lagrange multiplier techniques (LMT). With representative examples of increasing complexity, the wide applicability of LMT is illustrated. Two particular features are put in focus. First, an easy to follow yet powerful new graphical approach is presented. Second, the concept of Fuller-Polya maps is shown to be helpful in the areas of sales promotion and teaching. These maps illuminate the logic structure of solution sequences. One such map is shown, illustrating the application of LMT in one of the examples.
Scaled AAN for Fixed-Point Multiplier-Free IDCT
Zhu, P. P.; Liu, J. G.; Dai, S. K.; Wang, G. Y.
2009-12-01
An efficient algorithm derived from the AAN algorithm (proposed by Arai, Agui, and Nakajima in 1988) for computing the Inverse Discrete Cosine Transform (IDCT) is presented. We replace the multiplications in the conventional AAN algorithm with additions and shifts to realize a fixed-point, multiplier-free computation of the IDCT, and adopt coefficient and compensation matrices to improve the precision of the algorithm. Our 1D IDCT can be implemented with 46 additions and 20 shifts. Owing to the absence of multiplications, this modified algorithm takes less time than the conventional AAN algorithm. The algorithm has low drift in decoding due to its higher computational precision, and it fully complies with the IEEE 1180 and ISO/IEC 23002-1 specifications. The implementation of the novel fast algorithm for 32-bit hardware is discussed, and implementations for 24-bit and 16-bit hardware, which are more suitable for mobile communication devices, are also introduced.
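The shift-and-add substitution described in this abstract can be illustrated with a minimal sketch. This is not the paper's actual coefficient or compensation matrices (which the abstract does not give); it only shows the general technique of replacing a fixed-point multiplication by a DCT constant, here 1/√2, with a short sum of right-shifts:

```python
def mul_inv_sqrt2(x: int) -> int:
    """Approximate x * (1/sqrt(2)) ~ x * 0.70711 without a multiplier,
    using only right-shifts and additions (a dyadic approximation):
    1/2 + 1/8 + 1/16 + 1/64 + 1/256 = 0.70703125."""
    return (x >> 1) + (x >> 3) + (x >> 4) + (x >> 6) + (x >> 8)

# For x = 256 the shift-add result is 181, matching round(256 / sqrt(2)).
```

Adding more shift terms trades extra additions for precision, which is the kind of trade-off the 32-, 24- and 16-bit variants mentioned above must balance.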
Multipliers of Weighted Semigroups and Associated Beurling Banach Algebras
S J Bhatt; P A Dabhi; H V Dedania
2011-11-01
Given a weighted discrete abelian semigroup $(S,\omega)$, the semigroup $M_\omega(S)$ of $\omega$-bounded multipliers as well as the Rees quotient $M_\omega(S)/S$ together with their respective weights $\overline{\omega}$ and $\overline{\omega}_q$ induced by $\omega$ are studied; for a large class of weights $\omega$, the quotient $\ell^1(M_\omega(S),\overline{\omega})/\ell^1(S,\omega)$ is realized as a Beurling algebra on the quotient semigroup $M_\omega(S)/S$; the Gel’fand spaces of these algebras are determined; and Banach algebra properties like semisimplicity, uniqueness of uniform norm and regularity of associated Beurling algebras on these semigroups are investigated. The involutive analogues of these are also considered. The results are exhibited in the context of several examples.
Planar varactor frequency multiplier devices with blocking barrier
Lieneweg, Udo (Inventor); Frerking, Margaret A. (Inventor); Maserjian, Joseph (Inventor)
1994-01-01
The invention relates to planar varactor frequency multiplier devices with a heterojunction blocking barrier for near millimeter wave radiation of moderate power from a fundamental input wave. The space charge limitation of the submillimeter frequency multiplier devices of the BIN(sup +) type is overcome by a diode structure comprising an n(sup +) doped layer of semiconductor material functioning as a low resistance back contact, a layer of semiconductor material with n-type doping functioning as a drift region grown on the back contact layer, a delta doping sheet forming a positive charge at the interface of the drift region layer with a barrier layer, and a surface metal contact. The layers thus formed on an n(sup +) doped layer may be divided into two isolated back-to-back BNN(sup +) diodes by separately depositing two surface metal contacts. By repeating the sequence of the drift region layer and the barrier layer with the delta doping sheet at the interfaces between the drift and barrier layers, a plurality of stacked diodes is formed. The novelty of the invention resides in providing n-type semiconductor material for the drift region in a GaAs/AlGaAs structure, and in stacking a plurality of such BNN(sup +) diodes, connected back-to-back for greater output power, with the n(sup +) GaAs layer as an internal back contact and a separate metal contact over an AlGaAs barrier layer on top of each stack.
Coulomb fission in multiply charged molecular clusters: Experiment and theory
Harris, Christopher; Baptiste, Joshua; Lindgren, Eric B.; Besley, Elena; Stace, Anthony J.
2017-04-01
A series of three multiply charged molecular clusters, (C6H6)nz+ (benzene), (CH3CN)nz+ (acetonitrile), and (C4H8O)nz+ (tetrahydrofuran), where the charge z is either 3 or 4, have been studied for the purpose of identifying the patterns of behaviour close to the charge instability limit. Experiments show that on a time scale of ~10^-4 s, ions close to the limit undergo Coulomb fission, where the observed pathways exhibit considerable asymmetry in the sizes of the charged fragments and are all associated with kinetic (ejection) energies of between 1.4 and 2.2 eV. Accurate kinetic energies have been determined through a computer simulation of peak profiles recorded in the experiments and the results modelled using a theory formulated to describe how charged particles of dielectric materials interact with one another [E. Bichoutskaia et al., J. Chem. Phys. 133, 024105 (2010)]. The calculated electrostatic interaction energy between separating fragments gives an accurate account of the measured kinetic energies and also supports the conclusion that +4 ions fragment into +3 and +1 products as opposed to the alternative of two +2 fragments. This close match between the theory and experiment reinforces the assumption that a significant fraction of excess charge resides on the surfaces of the fragment ions. It is proposed that the high degree of asymmetry seen in the fragmentation patterns of the multiply charged clusters is due, in part, to limits imposed by the time window during which observations are made.
Hardouin, Jean-Benoit
2011-07-14
Abstract Background Nowadays, more and more clinical scales consisting of responses given by patients to a set of items (Patient Reported Outcomes - PRO) are validated with models based on Item Response Theory, and more specifically with a Rasch model. In the validation sample, missing data are frequent. The aim of this paper is to compare sixteen methods for handling missing data (mainly based on simple imputation) in the context of psychometric validation of PRO by a Rasch model. The main indexes used for validation by a Rasch model are compared. Methods A simulation study was performed covering several cases, notably the possibility for the missing values to be informative or not and the rate of missing data. Results Several imputation methods produce bias in the psychometric indexes (generally, the imputation methods artificially improve the psychometric qualities of the scale). In particular, this is the case with the method based on the Personal Mean Score (PMS), which is the most commonly used imputation method in practice. Conclusions Several imputation methods should be avoided, in particular PMS imputation. From a general point of view, it is important to use an imputation method that considers both the ability of the patient (measured, for example, by his/her score) and the difficulty of the item (measured, for example, by its rate of favourable responses). Another recommendation is to always consider adding a random process to the imputation method, because such a process reduces the bias. Last, analysis performed without imputation of the missing data (available-case analysis) is an interesting alternative to simple imputation in this context.
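To make the PMS method discussed in this abstract concrete, here is a minimal sketch of Personal Mean Score imputation on one respondent's dichotomous item responses (the function name and data are hypothetical; the study above recommends avoiding this method precisely because it ignores item difficulty and adds no random component):

```python
def impute_pms(responses):
    """Personal Mean Score (PMS) imputation: replace each missing item
    (None) with the respondent's mean over their observed items,
    rounded to the nearest admissible item score."""
    observed = [r for r in responses if r is not None]
    pms = sum(observed) / len(observed)
    return [round(pms) if r is None else r for r in responses]

# A respondent who answered 1, 0, (missing), 1 gets the missing item
# imputed as round(2/3) = 1.
```

Note that every respondent's missing items are filled deterministically from their own score alone, which is exactly the mechanism that artificially inflates the scale's apparent psychometric quality.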
Andrew R Wood
Full Text Available Genome-wide association (GWA) studies have been limited by the reliance on common variants present on microarrays or imputable from the HapMap Project data. More recently, the completion of the 1000 Genomes Project has provided variant and haplotype information for several million variants derived from sequencing over 1,000 individuals. To help understand the extent to which more variants (including low-frequency (1% ≤ MAF < 5%) and rare variants (MAF < 1%)) can enhance previously identified associations and identify novel loci, we selected 93 quantitative circulating factors for which data were available from the InCHIANTI population study. These phenotypes included cytokines, binding proteins, hormones, vitamins and ions. We selected these phenotypes because many have known strong genetic associations and are potentially important to help understand disease processes. We performed a genome-wide scan for these 93 phenotypes in InCHIANTI. We identified 21 signals and 33 signals that reached P < 5×10^-8 based on HapMap and 1000 Genomes imputation, respectively, and 9 and 11 that reached a stricter, likely conservative, threshold of P < 5×10^-11, respectively. Imputation of 1000 Genomes genotype data modestly improved the strength of known associations. Of 20 associations detected at P < 5×10^-8 in both analyses (17 of which represent well replicated signals in the NHGRI catalogue), six were captured by the same index SNP, five were nominally more strongly associated in 1000 Genomes imputed data and one was nominally more strongly associated in HapMap imputed data. We also detected an association between a low-frequency variant and phenotype that was previously missed by HapMap-based imputation approaches. An association between rs112635299 and alpha-1 globulin near the SERPINA gene represented the known association between rs28929474 (MAF = 0.007) and alpha1-antitrypsin that predisposes to emphysema (P = 2.5×10^-12). Our data provide important proof of
Spin-down of compact stars and energy release of a first-order phase transition
Miao, Kang; Na-Na, Pan
2007-01-01
The deconfinement phase transition from hadronic matter to quark matter can occur continuously as a neutron star spins down. It leads to the release of latent heat if the transition is first order. We have investigated the energy release of such a deconfinement phase transition for a rotating hybrid star model that includes a mixed phase of hadronic matter and quark matter. The release of latent heat per baryon is calculated by studying a random process of infinitesimal compression. Finally, we can self-consistently obtain the heating luminosity of the deconfinement phase transition by inputting the EOS of the mixed phase, based on the equation of rotational structure of stars.
2010-07-01
29 Labor 4 (2010-07-01): May the Federal Mediation and Conciliation Service impute...) FEDERAL MEDIATION AND CONCILIATION SERVICE, GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT), General Principles Relating to Suspension and Debarment Actions, § 1471.630. May the Federal Mediation...
van Leeuwen, Elisabeth M.; Karssen, Lennart C.; Deelen, Joris; Isaacs, Aaron; Medina-Gomez, Carolina; Mbarek, Hamdi; Kanterakis, Alexandros; Trompet, Stella; Postmus, Iris; Verweij, Niek; van Enckevort, David J.; Huffman, Jennifer E.; White, Charles C.; Feitosa, Mary F.; Bartz, Traci M.; Manichaikul, Ani; Joshi, Peter K.; Peloso, Gina M.; Deelen, Patrick; van Dijk, Freerk; Willemsen, Gonneke; de Geus, Eco J.; Milaneschi, Yuri; Penninx, Brenda W. J. H.; Francioli, Laurent C.; Menelaou, Androniki; Pulit, Sara L.; Rivadeneira, Fernando; Hofman, Albert; Oostra, Ben A.; Franco, Oscar H.; Leach, Irene Mateo; Beekman, Marian; de Craen, Anton J. M.; Uh, Hae-Won; Trochet, Holly; Hocking, Lynne J.; Porteous, David J.; Sattar, Naveed; Packard, Chris J.; Buckley, Brendan M.; Brody, Jennifer A.; Bis, Joshua C.; Rotter, Jerome I.; Mychaleckyj, Josyf C.; Campbell, Harry; Duan, Qing; Lange, Leslie A.; Wilson, James F.; Hayward, Caroline; Polasek, Ozren; Vitart, Veronique; Rudan, Igor; Wright, Alan F.; Rich, Stephen S.; Psaty, Bruce M.; Borecki, Ingrid B.; Kearney, Patricia M.; Stott, David J.; Cupples, L. Adrienne; Jukema, J. Wouter; van der Harst, Pim; Sijbrands, Eric J.; Hottenga, Jouke-Jan; Uitterlinden, Andre G.; Swertz, Morris A.; van Ommen, Gert-Jan B.; de Bakker, Paul I. W.; Slagboom, P. Eline; Boomsma, Dorret I.; Wijmenga, Cisca; van Duijn, Cornelia M.
2015-01-01
Variants associated with blood lipid levels may be population-specific. To identify low-frequency variants associated with this phenotype, population-specific reference panels may be used. Here we impute nine large Dutch biobanks (∼35,000 samples) with the population-specific reference panel
Van Leeuwen, Elisabeth M.; Karssen, Lennart C.; Deelen, Joris; Isaacs, Aaron; Medina-Gomez, Carolina; Mbarek, Hamdi; Kanterakis, Alexandros; Trompet, Stella; Postmus, Iris; Verweij, Niek; Van Enckevort, David J.; Huffman, Jennifer E.; White, Charles C.; Feitosa, Mary F.; Bartz, Traci M.; Manichaikul, Ani; Joshi, Peter K.; Peloso, Gina M.; Deelen, Patrick; Van Dijk, Freerk; Willemsen, Gonneke; De Geus, Eco J.; Milaneschi, Yuri; Penninx, Brenda W J H; Francioli, Laurent C.; Menelaou, Androniki; Pulit, Sara L.; Rivadeneira, Fernando; Hofman, Albert; Oostra, Ben A.; Franco, Oscar H.; Leach, Irene Mateo; Beekman, Marian; De Craen, Anton J M; Uh, Hae Won; Trochet, Holly; Hocking, Lynne J.; Porteous, David J.; Sattar, Naveed; Packard, Chris J.; Buckley, Brendan M.; Brody, Jennifer A.; Bis, Joshua C.; Rotter, Jerome I.; Mychaleckyj, Josyf C.; Campbell, Harry; Duan, Qing; Lange, Leslie A.; Wilson, James F.; Hayward, Caroline; Polasek, Ozren; Vitart, Veronique; Rudan, Igor; Wright, Alan F.; Rich, Stephen S.; Psaty, Bruce M.; Borecki, Ingrid B.; Kearney, Patricia M.; Stott, David J.; Cupples, L. Adrienne; Jukema, J. Wouter; Van Der Harst, Pim; Sijbrands, Eric J.; Hottenga, Jouke Jan; Uitterlinden, Andre G.; Swertz, Morris A.; Van Ommen, Gert Jan B; De Bakker, Paul I W; Eline Slagboom, P.; Boomsma, Dorret I.; Wijmenga, Cisca; Van Duijn, Cornelia M.; Neerincx, Pieter B T; Elbers, Clara C.; Palamara, Pier Francesco; Peer, Itsik; Abdellaoui, Abdel; Kloosterman, Wigard P.; Van Oven, Mannis; Vermaat, Martijn; Li, Mingkun; Laros, Jeroen F J; Stoneking, Mark; De Knijff, Peter; Kayser, Manfred; Veldink, Jan H.; Van Den Berg, Leonard H.; Byelas, Heorhiy; Den Dunnen, Johan T.; Dijkstra, Martijn; Amin, Najaf; Van Der Velde, K. 
Joeri; Van Setten, Jessica; Kattenberg, Mathijs; Van Schaik, Barbera D C; Bot, Jan; Nijman, Isaäc J.; Mei, Hailiang; Koval, Vyacheslav; Ye, Kai; Lameijer, Eric Wubbo; Moed, Matthijs H.; Hehir-Kwa, Jayne Y.; Handsaker, Robert E.; Sunyaev, Shamil R.; Sohail, Mashaal; Hormozdiari, Fereydoun; Marschall, Tobias; Schönhuth, Alexander; Guryev, Victor; Suchiman, H. Eka D; Wolffenbuttel, Bruce H.; Platteel, Mathieu; Pitts, Steven J.; Potluri, Shobha; Cox, David R.; Li, Qibin; Li, Yingrui; Du, Yuanping; Chen, Ruoyan; Cao, Hongzhi; Li, Ning; Cao, Sujie; Wang, Jun; Bovenberg, Jasper A.; de Bakker, Paul I W
2015-01-01
Variants associated with blood lipid levels may be population-specific. To identify low-frequency variants associated with this phenotype, population-specific reference panels may be used. Here we impute nine large Dutch biobanks (∼35,000 samples) with the population-specific reference panel created
E.M. van Leeuwen (Elisa); L.C. Karssen (Lennart); J. Deelen (Joris); A. Isaacs (Aaron); M.C. Medina-Gomez (Carolina); H. Mbarek; A. Kanterakis (Alexandros); S. Trompet (Stella); D. Postmus (Douwe); N. Verweij (Niek); D. van Enckevort (David); J.E. Huffman (Jennifer); C.C. White (Charles); M.F. Feitosa (Mary Furlan); T.M. Bartz (Traci M.); A. Manichaikul (Ani); P.K. Joshi (Peter); G.M. Peloso (Gina); P. Deelen (Patrick); F. van Dijk (F.); G.A.H.M. Willemsen (Gonneke); E.J.C. de Geus (Eco); Y. Milaneschi (Yuri); B.W.J.H. Penninx (Brenda); L.C. Francioli (Laurent); A. Menelaou (Androniki); S.L. Pulit (Sara); F. Rivadeneira Ramirez (Fernando); A. Hofman (Albert); B.A. Oostra (Ben); O.H. Franco (Oscar); I.M. Leach (Irene Mateo); M. Beekman (Marian); A.J. de Craen (Anton); H.-W. Uh (Hae-Won); H. Trochet (Holly); L.J. Hocking (Lynne); D.J. Porteous (David J.); N. Sattar (Naveed); C.J. Packard (Chris J.); B.M. Buckley (Brendan M.); J. Brody (Jennifer); J.C. Bis (Joshua); J.I. Rotter (Jerome I.); J.C. Mychaleckyj (Josyf); H. Campbell (Harry); Q. Duan (Qing); L.A. Lange (Leslie); J.F. Wilson (James F); C. Hayward (Caroline); O. Polasek (Ozren); V. Vitart (Veronique); I. Rudan (Igor); A. Wright (Alan); S.S. Rich (Stephen S.); B.M. Psaty (Bruce); I.B. Borecki (Ingrid); P.M. Kearney (Patricia M.); D.J. Stott (David. J.); L.A. Cupples (Adrienne); J.W. Jukema (Jan Wouter); P. van der Harst (Pim); E.J.G. Sijbrands (Eric); J.J. Hottenga (Jouke Jan); A.G. Uitterlinden (André); M. Swertz (Morris); G.-J.B. Van Ommen (Gert-Jan B.); P.I.W. de Bakker (Paul); P. Eline Slagboom; D.I. Boomsma (Dorret); C. Wijmenga (Cisca); C.M. van Duijn (Cock); P.B.T. Neerincx (Pieter B T); C.C. Elbers (Clara); P.F. Palamara (Pier Francesco); I. Peer (Itsik); M. Abdellaoui (Mohammed); W.P. Kloosterman (Wigard); M. van Oven (Mannis); M. Vermaat (Martijn); M. Li (Mingkun); J.F.J. Laros (Jeroen F.); M. Stoneking (Mark); P. de Knijff (Peter); M.H. Kayser (Manfred); J.H. Veldink (Jan); L.H. 
van den Berg (Leonard); H. Byelas (Heorhiy); J.T. den Dunnen (Johan); M.K. Dijkstra; N. Amin (Najaf); K.J. Van Der Velde (K. Joeri); J. van Setten (Jessica); V.M. Kattenberg (Mathijs); F.D.M. Van Schaik (Fiona D.M.); J.J. Bot (Jan); I.J. Nijman (Isaac ); H. Mei (Hailiang); V. Koval (Vyacheslav); K. Ye (Kai); E.-W. Lameijer (Eric-Wubbo); H. Moed (Heleen); J. Hehir-Kwa (Jayne); R.E. Handsaker (Robert); S.R. Sunyaev (Shamil); M. Sohail (Mashaal); F. Hormozdiari (Fereydoun); T. Marschall (Tanja); A. Schönhuth (Alexander); V. Guryev (Victor); H.E.D. Suchiman (Eka); B.H.R. Wolffenbuttel (Bruce); I. Platteel (Inge); S.J. Pitts (Steven); S. Potluri (Shobha); D.R. Cox (David R.); Q. Li (Qibin); Y. Li (Yingrui); Y. Du (Yuanping); R. Chen (Ruoyan); H. Cao (Hongzhi); N. Li (Ning); S. Cao (Sujie); J. Wang (Jun); J.A. Bovenberg (Jasper)
2015-01-01
Variants associated with blood lipid levels may be population-specific. To identify low-frequency variants associated with this phenotype, population-specific reference panels may be used. Here we impute nine large Dutch biobanks (∼35,000 samples) with the population-specific reference panel
Improved imputation of low-frequency and rare variants using the UK10K haplotype reference panel
J. Huang (Jie); B. Howie (Bryan); S. McCarthy (Shane); Y. Memari (Yasin); K. Walter (Klaudia); J.L. Min (Josine L.); P. Danecek (Petr); G. Malerba (Giovanni); E. Trabetti (Elisabetta); H.-F. Zheng (Hou-Feng); G. Gambaro (Giovanni); J.B. Richards (J. Brent); R. Durbin (Richard); N. Timpson (Nicholas); J. Marchini (Jonathan); N. Soranzo (Nicole); S. Al Turki (Saeed); A. Amuzu (Antoinette); C. Anderson (Carl); R. Anney (Richard); D. Antony (Dinu); M.S. Artigas; M. Ayub (Muhammad); S. Bala (Senduran); J. Barrett (Jeffrey); I. Barroso (Inês); P.L. Beales (Philip); M. Benn (Marianne); J. Bentham (Jamie); S. Bhattacharya (Shoumo); E. Birney (Ewan); D.H.R. Blackwood (Douglas); M. Bobrow (Martin); E. Bochukova (Elena); P.F. Bolton (Patrick F.); R. Bounds (Rebecca); C. Boustred (Chris); G. Breen (Gerome); M. Calissano (Mattia); K. Carss (Keren); J.P. Casas (Juan Pablo); J.C. Chambers (John C.); R. Charlton (Ruth); K. Chatterjee (Krishna); L. Chen (Lu); A. Ciampi (Antonio); S. Cirak (Sebahattin); P. Clapham (Peter); G. Clement (Gail); G. Coates (Guy); M. Cocca (Massimiliano); D.A. Collier (David); C. Cosgrove (Catherine); T. Cox (Tony); N.J. Craddock (Nick); L. Crooks (Lucy); S. Curran (Sarah); D. Curtis (David); A. Daly (Allan); I.N.M. Day (Ian N.M.); A.G. Day-Williams (Aaron); G.V. Dedoussis (George); T. Down (Thomas); Y. Du (Yuanping); C.M. van Duijn (Cock); I. Dunham (Ian); T. Edkins (Ted); R. Ekong (Rosemary); P. Ellis (Peter); D.M. Evans (David); I.S. Farooqi (I. Sadaf); D.R. Fitzpatrick (David R.); P. Flicek (Paul); J. Floyd (James); A.R. Foley (A. Reghan); C.S. Franklin (Christopher S.); M. Futema (Marta); L. Gallagher (Louise); P. Gasparini (Paolo); T.R. Gaunt (Tom); M. Geihs (Matthias); D. Geschwind (Daniel); C.M.T. Greenwood (Celia); H. Griffin (Heather); D. Grozeva (Detelina); X. Guo (Xiaosen); X. Guo (Xueqin); H. Gurling (Hugh); D. Hart (Deborah); A.E. Hendricks (Audrey E.); P.A. Holmans (Peter A.); L. Huang (Liren); T. Hubbard (Tim); S.E. 
Humphries (Steve E.); M.E. Hurles (Matthew); P.G. Hysi (Pirro); V. Iotchkova (Valentina); A. Isaacs (Aaron); D.K. Jackson (David K.); Y. Jamshidi (Yalda); J. Johnson (Jon); C. Joyce (Chris); K.J. Karczewski (Konrad); J. Kaye (Jane); T. Keane (Thomas); J.P. Kemp (John); K. Kennedy (Karen); A. Kent (Alastair); J. Keogh (Julia); F. Khawaja (Farrah); M.E. Kleber (Marcus E.); M. Van Kogelenberg (Margriet); A. Kolb-Kokocinski (Anja); J.S. Kooner (Jaspal S.); G. Lachance (Genevieve); C. Langenberg (Claudia); C. Langford (Cordelia); D. Lawson (Daniel); I. Lee (Irene); E.M. van Leeuwen (Elisa); M. Lek (Monkol); R. Li (Rui); Y. Li (Yingrui); J. Liang (Jieqin); H. Lin (Hong); R. Liu (Ryan); J. Lönnqvist (Jouko); L.R. Lopes (Luis R.); M.C. Lopes (Margarida); J. Luan; D.G. MacArthur (Daniel G.); M. Mangino (Massimo); G. Marenne (Gaëlle); W. März (Winfried); J. Maslen (John); A. Matchan (Angela); I. Mathieson (Iain); P. McGuffin (Peter); A.M. McIntosh (Andrew); A.G. McKechanie (Andrew G.); A. McQuillin (Andrew); S. Metrustry (Sarah); N. Migone (Nicola); H.M. Mitchison (Hannah M.); A. Moayyeri (Alireza); J. Morris (James); R. Morris (Richard); D. Muddyman (Dawn); F. Muntoni; B.G. Nordestgaard (Børge G.); K. Northstone (Kate); M.C. O'donovan (Michael); S. O'Rahilly (Stephen); A. Onoufriadis (Alexandros); K. Oualkacha (Karim); M.J. Owen (Michael J.); A. Palotie (Aarno); K. Panoutsopoulou (Kalliope); V. Parker (Victoria); J.R. Parr (Jeremy R.); L. Paternoster (Lavinia); T. Paunio (Tiina); F. Payne (Felicity); S.J. Payne (Stewart J.); J.R.B. Perry (John); O.P.H. Pietiläinen (Olli); V. Plagnol (Vincent); R.C. Pollitt (Rebecca C.); S. Povey (Sue); M.A. Quail (Michael A.); L. Quaye (Lydia); L. Raymond (Lucy); K. Rehnström (Karola); C.K. Ridout (Cheryl K.); S.M. Ring (Susan); G.R.S. Ritchie (Graham R.S.); N. Roberts (Nicola); R.L. Robinson (Rachel L.); D.B. Savage (David); P.J. Scambler (Peter); S. Schiffels (Stephan); M. Schmidts (Miriam); N. Schoenmakers (Nadia); R.H. 
Scott (Richard H.); R.A. Scott (Robert); R.K. Semple (Robert K.); E. Serra (Eva); S.I. Sharp (Sally I.); A.C. Shaw (Adam C.); H.A. Shihab (Hashem A.); S.-Y. Shin (So-Youn); D. Skuse (David); K.S. Small (Kerrin); C. Smee (Carol); G.D. Smith; L. Southam (Lorraine); O. Spasic-Boskovic (Olivera); T.D. Spector (Timothy); D. St. Clair (David); B. St Pourcain (Beate); J. Stalker (Jim); E. Stevens (Elizabeth); J. Sun (Jianping); G. Surdulescu (Gabriela); J. Suvisaari (Jaana); P. Syrris (Petros); I. Tachmazidou (Ioanna); R. Taylor (Rohan); J. Tian (Jing); M.D. Tobin (Martin); D. Toniolo (Daniela); M. Traglia (Michela); A. Tybjaerg-Hansen; A.M. Valdes; A.M. Vandersteen (Anthony M.); A. Varbo (Anette); P. Vijayarangakannan (Parthiban); P.M. Visscher (Peter); L.V. Wain (Louise); J.T. Walters (James); G. Wang (Guangbiao); J. Wang (Jun); Y. Wang (Yu); K. Ward (Kirsten); E. Wheeler (Eleanor); P.H. Whincup (Peter); T. Whyte (Tamieka); H.J. Williams (Hywel J.); K.A. Williamson (Kathleen); C. Wilson (Crispian); S.G. Wilson (Scott); K. Wong (Kim); C. Xu (Changjiang); J. Yang (Jian); G. Zaza (Gianluigi); E. Zeggini (Eleftheria); F. Zhang (Feng); P. Zhang (Pingbo); W. Zhang (Weihua)
2015-01-01
Imputing genotypes from reference panels created by whole-genome sequencing (WGS) provides a cost-effective strategy for augmenting the single-nucleotide polymorphism (SNP) content of genome-wide arrays. The UK10K Cohorts project has generated a data set of 3,781 whole genomes sequenced
41 CFR 105-68.630 - May the General Services Administration impute conduct of one person to another?
2010-07-01
41 Public Contracts and Property Management 3 (2010-07-01): May the General Services Administration impute conduct of one person to another? Section 105-68.630, Public Contracts and Property Management, Federal Property Management Regulations System (Continued), GENERAL...
Cesano, Jose Daniel
2009-01-01
The purpose of this paper is to analyze the scope of the omissive structure as an instrument of personal imputation (attribution of liability) within the corporate environment.
Minica, C.C.; Dolan, C.V.; Hottenga, J.J.; Willemsen, G.; Vink, J.M.; Boomsma, D.I.
2013-01-01
When phenotypic, but no genotypic data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost the statistical power when included in such studies. Here, using simulations, we compared the performance of tw
2012-01-01
Background Multiple imputation is becoming increasingly popular. Theoretical considerations as well as simulation studies have shown that the inclusion of auxiliary variables is generally of benefit. Methods A simulation study of a linear regression with a response Y and two predictors X1 and X2 was performed on data with n = 50, 100 and 200 using complete cases or multiple imputation with 0, 10, 20, 40 and 80 auxiliary variables. Mechanisms of missingness were either 100% MCAR or 50% MAR + 50% MCAR. Auxiliary variables had low (r=.10) vs. moderate correlations (r=.50) with X’s and Y. Results The inclusion of auxiliary variables can improve a multiple imputation model. However, inclusion of too many variables leads to downward bias of regression coefficients and decreases precision. When the correlations are low, inclusion of auxiliary variables is not useful. Conclusion More research on auxiliary variables in multiple imputation should be performed. A preliminary rule of thumb could be that the ratio of variables to cases with complete data should not go below 1 : 3. PMID:23216665
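To make the multiply-then-pool mechanics behind this simulation study concrete, here is a minimal, stdlib-only sketch. It imputes from a normal fit to the observed values rather than from an imputation model with auxiliary variables, so it illustrates only the generic multiple-imputation workflow (m completed data sets, estimates pooled by averaging, which is Rubin's rule for the point estimate), not the study's design; the function name and data are hypothetical:

```python
import random
import statistics

def multiply_impute(values, m=5, seed=0):
    """Crude multiple imputation of missing entries (None) in a list:
    draw each imputation from a normal distribution fitted to the
    observed values, complete the data m times, estimate the mean on
    each completed data set, and pool by averaging the m estimates."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    mu, sd = statistics.mean(observed), statistics.stdev(observed)
    estimates = []
    for _ in range(m):
        completed = [rng.gauss(mu, sd) if v is None else v for v in values]
        estimates.append(statistics.mean(completed))
    return statistics.mean(estimates)  # pooled point estimate

pooled = multiply_impute([1.0, 2.0, 3.0, None])
```

A richer imputation model would regress the incomplete variable on the auxiliary variables; the study's caution is that piling on weakly correlated auxiliaries degrades rather than improves such a model.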
Eulenburg, Christine; Suling, Anna; Neuser, Petra; Reuss, Alexander; Canzler, Ulrich; Fehm, Tanja; Luyten, Alexander; Hellriegel, Martin; Woelber, Linn; Mahner, Sven
2016-01-01
Propensity scoring (PS) is an established tool to account for measured confounding in non-randomized studies. These methods are sensitive to missing values, which are a common problem in observational data. The combination of multiple imputation of missing values and different propensity scoring
Eekhout, Iris; de Vet, Henrica C. W.; Twisk, Jos W. R.; Brand, Jaap P. L.; de Boer, Michiel R.; Heymans, Martijn W.
2014-01-01
Objectives: Regardless of the proportion of missing values, complete-case analysis is most frequently applied, although advanced techniques such as multiple imputation (MI) are available. The objective of this study was to explore the performance of simple and more advanced methods for handling miss
Zhang, Zhongrong; Yang, Xuan; Li, Hao; Li, Weide; Yan, Haowen; Shi, Fei
2017-10-01
The techniques for data analysis have been widely developed in past years; however, missing data still represent a ubiquitous problem in many scientific fields. In particular, dealing with missing spatiotemporal data presents an enormous challenge. Nonetheless, in recent years, a considerable amount of research has focused on spatiotemporal problems, making spatiotemporal missing-data imputation methods increasingly indispensable. In this paper, a novel spatiotemporal hybrid method is proposed to verify and impute spatiotemporal missing values. This new method, termed SOM-FLSSVM, flexibly combines three advanced techniques: self-organizing feature map (SOM) clustering, the fruit fly optimization algorithm (FOA) and the least squares support vector machine (LSSVM). We employ a cross-validation (CV) procedure and an FOA swarm intelligence optimization strategy that can search the available parameters and determine the optimal imputation model. Spatiotemporal underground water data for Minqin County, China, were selected to test the reliability and imputation ability of SOM-FLSSVM. We carried out a validation experiment and compared three well-studied models with SOM-FLSSVM using missing-data ratios from 0.1 to 0.8 on the same data set. The results demonstrate that the new hybrid method performs well in terms of both robustness and accuracy for spatiotemporal missing data.
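The cross-validation idea used above, hide some observed values, impute them back, and keep the parameter setting with the lowest error, can be sketched independently of SOM, FOA and LSSVM. The toy moving-average imputer below is an illustrative stand-in for the paper's model, and all names and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "spatiotemporal" series with ~20% of entries missing
series = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.1 * rng.normal(size=400)
mask = rng.random(400) < 0.2
observed = series.copy()
observed[mask] = np.nan

def moving_average_impute(y, window):
    """Fill each NaN with the mean of its `window` nearest observed neighbours."""
    out = y.copy()
    idx_obs = np.flatnonzero(~np.isnan(y))
    for i in np.flatnonzero(np.isnan(y)):
        nearest = idx_obs[np.argsort(np.abs(idx_obs - i))[:window]]
        out[i] = y[nearest].mean()
    return out

# Model selection by cross-validation: hide some *observed* points,
# impute them back, and keep the window size with the lowest error.
holdout = rng.choice(np.flatnonzero(~mask), size=40, replace=False)
masked = observed.copy()
masked[holdout] = np.nan
errors = {w: float(np.mean((moving_average_impute(masked, w)[holdout]
                            - series[holdout]) ** 2))
          for w in (2, 5, 10, 40)}
best = min(errors, key=errors.get)
print("best window:", best)
```

In SOM-FLSSVM the same held-out error would be the fitness that the FOA swarm minimizes over the LSSVM hyperparameters, rather than a grid over a single window size.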
Improved imputation of low-frequency and rare variants using the UK10K haplotype reference panel
J. Huang (Jie); B. Howie (Bryan); S. McCarthy (Shane); Y. Memari (Yasin); K. Walter (Klaudia); J.L. Min (Josine L.); P. Danecek (Petr); G. Malerba (Giovanni); E. Trabetti (Elisabetta); H.-F. Zheng (Hou-Feng); G. Gambaro (Giovanni); J.B. Richards (Brent); R. Durbin (Richard); N.J. Timpson (Nicholas); J. Marchini (Jonathan); N. Soranzo (Nicole); S. Al Turki (Saeed); A. Amuzu (Antoinette); C. Anderson (Carl); R. Anney (Richard); D. Antony (Dinu); M.S. Artigas; M. Ayub (Muhammad); S. Bala (Senduran); J. Barrett (Jeffrey); I. Barroso (Inês); P.L. Beales (Philip); M. Benn (Marianne); J. Bentham (Jamie); S. Bhattacharya (Shoumo); E. Birney (Ewan); D.H.R. Blackwood (Douglas); M. Bobrow (Martin); E. Bochukova (Elena); P.F. Bolton (Patrick F.); R. Bounds (Rebecca); C. Boustred (Chris); G. Breen (Gerome); M. Calissano (Mattia); K. Carss (Keren); J.P. Casas (Juan Pablo); J.C. Chambers (John C.); R. Charlton (Ruth); K. Chatterjee (Krishna); L. Chen (Lu); A. Ciampi (Antonio); S. Cirak (Sebahattin); P. Clapham (Peter); G. Clement (Gail); G. Coates (Guy); M. Cocca (Massimiliano); D.A. Collier (David); C. Cosgrove (Catherine); T. Cox (Tony); N.J. Craddock (Nick); L. Crooks (Lucy); S. Curran (Sarah); D. Curtis (David); A. Daly (Allan); I.N.M. Day (Ian N.M.); A.G. Day-Williams (Aaron); G.V. Dedoussis (George); T. Down (Thomas); Y. Du (Yuanping); C.M. van Duijn (Cock); I. Dunham (Ian); T. Edkins (Ted); R. Ekong (Rosemary); P. Ellis (Peter); D.M. Evans (David); I.S. Farooqi (I. Sadaf); D.R. Fitzpatrick (David R.); P. Flicek (Paul); J. Floyd (James); A.R. Foley (A. Reghan); C.S. Franklin (Christopher S.); M. Futema (Marta); L. Gallagher (Louise); P. Gasparini (Paolo); T.R. Gaunt (Tom); M. Geihs (Matthias); D. Geschwind (Daniel); C.M.T. Greenwood (Celia); H. Griffin (Heather); D. Grozeva (Detelina); X. Guo (Xiaosen); X. Guo (Xueqin); H. Gurling (Hugh); D. Hart (Deborah); A.E. Hendricks (Audrey E.); P.A. Holmans (Peter A.); L. Huang (Liren); T. Hubbard (Tim); S.E. 
Humphries (Steve E.); M.E. Hurles (Matthew); P.G. Hysi (Pirro); V. Iotchkova (Valentina); A. Isaacs (Aaron); D.K. Jackson (David K.); Y. Jamshidi (Yalda); J. Johnson (Jon); C. Joyce (Chris); K.J. Karczewski (Konrad); J. Kaye (Jane); T. Keane (Thomas); J.P. Kemp (John); K. Kennedy (Karen); A. Kent (Alastair); J. Keogh (Julia); F. Khawaja (Farrah); M.E. Kleber (Marcus E.); M. Van Kogelenberg (Margriet); A. Kolb-Kokocinski (Anja); J.S. Kooner (Jaspal S.); G. Lachance (Genevieve); C. Langenberg (Claudia); C. Langford (Cordelia); D. Lawson (Daniel); I. Lee (Irene); E.M. van Leeuwen (Elisa); M. Lek (Monkol); R. Li (Rui); Y. Li (Yingrui); J. Liang (Jieqin); H. Lin (Hong); R. Liu (Ryan); J. Lönnqvist (Jouko); L.R. Lopes (Luis R.); M.C. Lopes (Margarida); J. Luan; D.G. MacArthur (Daniel G.); M. Mangino (Massimo); G. Marenne (Gaëlle); W. März (Winfried); J. Maslen (John); A. Matchan (Angela); I. Mathieson (Iain); P. McGuffin (Peter); A.M. McIntosh (Andrew); A.G. McKechanie (Andrew G.); A. McQuillin (Andrew); S. Metrustry (Sarah); N. Migone (Nicola); H.M. Mitchison (Hannah M.); A. Moayyeri (Alireza); J. Morris (James); R. Morris (Richard); D. Muddyman (Dawn); F. Muntoni; B.G. Nordestgaard (Børge G.); K. Northstone (Kate); M.C. O'donovan (Michael); S. O'Rahilly (Stephen); A. Onoufriadis (Alexandros); K. Oualkacha (Karim); M.J. Owen (Michael J.); A. Palotie (Aarno); K. Panoutsopoulou (Kalliope); V. Parker (Victoria); J.R. Parr (Jeremy R.); L. Paternoster (Lavinia); T. Paunio (Tiina); F. Payne (Felicity); S.J. Payne (Stewart J.); J.R.B. Perry (John); O.P.H. Pietiläinen (Olli); V. Plagnol (Vincent); R.C. Pollitt (Rebecca C.); S. Povey (Sue); M.A. Quail (Michael A.); L. Quaye (Lydia); L. Raymond (Lucy); K. Rehnström (Karola); C.K. Ridout (Cheryl K.); S.M. Ring (Susan); G.R.S. Ritchie (Graham R.S.); N. Roberts (Nicola); R.L. Robinson (Rachel L.); D.B. Savage (David); P.J. Scambler (Peter); S. Schiffels (Stephan); M. Schmidts (Miriam); N. Schoenmakers (Nadia); R.H. 
Scott (Richard H.); R.A. Scott (Robert); R.K. Semple (Robert K.); E. Serra (Eva); S.I. Sharp (Sally I.); A.C. Shaw (Adam C.); H.A. Shihab (Hashem A.); S.-Y. Shin (So-Youn); D. Skuse (David); K.S. Small (Kerrin); C. Smee (Carol); G.D. Smith; L. Southam (Lorraine); O. Spasic-Boskovic (Olivera); T.D. Spector (Timothy); D. St. Clair (David); B. St Pourcain (Beate); J. Stalker (Jim); E. Stevens (Elizabeth); J. Sun (Jianping); G. Surdulescu (Gabriela); J. Suvisaari (Jaana); P. Syrris (Petros); I. Tachmazidou (Ioanna); R. Taylor (Rohan)
2015-01-01
Imputing genotypes from reference panels created by whole-genome sequencing (WGS) provides a cost-effective strategy for augmenting the single-nucleotide polymorphism (SNP) content of genome-wide arrays. The UK10K Cohorts project has generated a data set of 3,781 whole genomes sequenced
Rosner, Bernard; Colditz, Graham A.
2011-01-01
Purpose Age at menopause, a major marker in the reproductive life, may bias results for evaluation of breast cancer risk after menopause. Methods We follow 38,948 premenopausal women in 1980 and identify 2,586 who reported hysterectomy without bilateral oophorectomy, and 31,626 who reported natural menopause during 22 years of follow-up. We evaluate risk factors for natural menopause, impute age at natural menopause for women reporting hysterectomy without bilateral oophorectomy, and estimate the hazard of reaching natural menopause in the next 2 years. We apply this imputed age at menopause both to increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased sample size substantially, and although some risk factors after menopause were weaker in the expanded model (height and alcohol use), use of hormone therapy is less biased. Conclusions Imputing age at menopause increases sample size, broadens generalizability by making it applicable to women with hysterectomy, and reduces bias. PMID:21441037
The long-run relationship between the Japanese credit and money multipliers
Mototsugu Fukushige
2013-01-01
The standard argument is that while money creation and credit creation have different channels, they provide the same theoretical size of multipliers. However, there is usually some difference in practice. Consequently, in this paper we investigate the long-run relationship between the credit and money multipliers in Japan.
A Floating Point Multiplier based FPGA Synthesis for Neural Networks Enhancement
F. BENREKIA,
2010-05-01
Full Text Available FPGA (Field Programmable Gate Array) implementation of Artificial Neural Networks (ANNs) calls for multipliers of various word lengths. In this paper, a new approach for designing a Floating-Point Multiplier (FPM) is developed and tested using VHDL. With a VHDL (VHSIC Hardware Description Language) analyzer and logic synthesis software, hardware prototypes can be implemented in FPGA.
THE REALIZATION OF MULTIPLIER HILBERT BIMODULE ON BIDUAL SPACE AND TIETZE EXTENSION THEOREM
Anonymous
2000-01-01
The multiplier bimodule of a Hilbert bimodule is introduced in a way similar to [1], and its realization on a quotient of the bidual space and a Tietze extension theorem are obtained, similar to the C*-algebra case. As a result, the multiplier bimodule here is also a Hilbert bimodule.
Singular Lagrangian, Hamiltonization and Jacobi last multiplier for certain biological systems
Guha, Partha; Ghose Choudhury, Anindya
2013-07-01
We study the construction of singular Lagrangians using Jacobi's last multiplier (JLM). We also demonstrate the significance of the last multiplier in Hamiltonian theory by explicitly constructing the Hamiltonian of the Host-Parasite model and a Lotka-Volterra mutualistic system, both of which are well known first-order systems of differential equations arising in biology.
Implementation gap between the theory and practice of biodiversity offset multipliers
Bull, Joseph William; Lloyd, Samuel; Strange, Niels
2016-01-01
when considering, for example, ecological uncertainties. We propose even larger multipliers required to satisfy previously ignored considerations – including prospect theory, taboo trades, and power relationships. Conversely, our data analyses show that multipliers are smaller in practice, regularly ... used. Further research is necessary to determine reasons ...
Jacobi Last Multiplier Method for Equations of Motion of Constrained Mechanical Systems
CHEN Xiang-Wei; MEI Feng-Xiang
2011-01-01
The Jacobi last multiplier method for holonomic and nonholonomic mechanical systems is studied, and some examples are given to illustrate applications of the method.
Jackson-type and Bernstein-type inequalities for multipliers on Herz-type Hardy spaces
XIE LinSen; LAN JiaCheng; LAN SenHua; YAN DunYan
2009-01-01
We establish Jackson-type and Bernstein-type inequalities for multipliers on Herz-type Hardy spaces. These inequalities can be applied to some important operators in Fourier analysis, such as the Bochner-Riesz multiplier over the critical index, the generalized Bochner-Riesz mean and the generalized Abel-Poisson operator.
WERKMAN, HA; JANSEN, C; KLEIN, JP; TENDUIS, HJ
1991-01-01
In a retrospective study involving 866 multiply-injured patients we demonstrated urinary tract injuries in 72 patients (8.3 per cent), 17 (2 per cent) of which were serious. Haematuria was a frequent finding in multiply-injured patients. In patients with serious lesions of the urinary tract, more th
Dimitrakopoulou, Vasiliki; Efthimiou, Orestis; Leucht, Stefan; Salanti, Georgia
2015-02-28
Missing outcome data are a problem commonly observed in randomized control trials that occurs as a result of participants leaving the study before its end. Missing such important information can bias the study estimates of the relative treatment effect and consequently affect the meta-analytic results. Therefore, methods on manipulating data sets with missing participants, with regard to incorporating the missing information in the analysis so as to avoid the loss of power and minimize the bias, are of interest. We propose a meta-analytic model that accounts for possible error in the effect sizes estimated in studies with last observation carried forward (LOCF) imputed patients. Assuming a dichotomous outcome, we decompose the probability of a successful unobserved outcome taking into account the sensitivity and specificity of the LOCF imputation process for the missing participants. We fit the proposed model within a Bayesian framework, exploring different prior formulations for sensitivity and specificity. We illustrate our methods by performing a meta-analysis of five studies comparing the efficacy of amisulpride versus conventional drugs (flupenthixol and haloperidol) on patients diagnosed with schizophrenia. Our meta-analytic models yield estimates similar to meta-analysis with LOCF-imputed patients. Allowing for uncertainty in the imputation process, precision is decreased depending on the priors used for sensitivity and specificity. Results on the significance of amisulpride versus conventional drugs differ between the standard LOCF approach and our model depending on prior beliefs on the imputation process. Our method can be regarded as a useful sensitivity analysis that can be used in the presence of concerns about the LOCF process.
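Mechanically, the decomposition described above relates the LOCF-observed success probability to the true one through the sensitivity and specificity of the imputation process. The sketch below shows that identity in its simplest non-Bayesian form; the paper itself fits a full Bayesian model with priors on both quantities, and the function name and numbers here are illustrative only:

```python
def true_success_prob(p_locf, sensitivity, specificity):
    """Invert p_locf = sens*p + (1 - spec)*(1 - p) to recover the true
    success probability p among LOCF-imputed participants."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("sensitivity + specificity must exceed 1")
    p = (p_locf - (1.0 - specificity)) / denom
    return min(max(p, 0.0), 1.0)  # clip to the [0, 1] probability range

# Perfect imputation (sens = spec = 1) leaves the estimate unchanged...
print(true_success_prob(0.40, 1.0, 1.0))   # 0.4
# ...while imperfect imputation shifts it.
print(true_success_prob(0.40, 0.9, 0.8))
```

Placing priors on `sensitivity` and `specificity` instead of fixing them is what widens the posterior intervals in the authors' analysis: uncertainty about the imputation process propagates into the treatment-effect estimate.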
Sulovari, Arvis; Li, Dawei
2014-07-19
Genome-wide association studies (GWAS) have successfully identified genes associated with complex human diseases. Although much of the heritability remains unexplained, combining single nucleotide polymorphism (SNP) genotypes from multiple studies for meta-analysis will increase the statistical power to identify new disease-associated variants. Meta-analysis requires same allele definition (nomenclature) and genome build among individual studies. Similarly, imputation, commonly-used prior to meta-analysis, requires the same consistency. However, the genotypes from various GWAS are generated using different genotyping platforms, arrays or SNP-calling approaches, resulting in use of different genome builds and allele definitions. Incorrect assumptions of identical allele definition among combined GWAS lead to a large portion of discarded genotypes or incorrect association findings. There is no published tool that predicts and converts among all major allele definitions. In this study, we have developed a tool, GACT, which stands for Genome build and Allele definition Conversion Tool, that predicts and inter-converts between any of the common SNP allele definitions and between the major genome builds. In addition, we assessed several factors that may affect imputation quality, and our results indicated that inclusion of singletons in the reference had detrimental effects while ambiguous SNPs had no measurable effect. Unexpectedly, exclusion of genotypes with missing rate > 0.001 (40% of study SNPs) showed no significant decrease of imputation quality (even significantly higher when compared to the imputation with singletons in the reference), especially for rare SNPs. GACT is a new, powerful, and user-friendly tool with both command-line and interactive online versions that can accurately predict, and convert between any of the common allele definitions and between genome builds for genome-wide meta-analysis and imputation of genotypes from SNP-arrays or deep
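The allele-definition problem GACT addresses ultimately rests on strand complements: the same SNP can be reported on either DNA strand, and A/T and C/G SNPs are ambiguous because they look identical on both strands. The following is a minimal sketch of that underlying idea, not GACT's actual code; the function names are mine:

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def flip_strand(alleles):
    """Map alleles to the opposite strand, e.g. ('A', 'G') -> ('T', 'C')."""
    return tuple(COMPLEMENT[a] for a in alleles)

def is_ambiguous(alleles):
    """A/T and C/G SNPs look the same on both strands, so strand
    cannot be inferred from the alleles alone."""
    return set(alleles) == set(flip_strand(alleles))

print(flip_strand(("A", "G")))     # ('T', 'C')
print(is_ambiguous(("A", "T")))    # True
print(is_ambiguous(("A", "G")))    # False
```

Detecting ambiguous SNPs matters for the abstract's finding: a converter can reconcile unambiguous SNPs automatically, whereas ambiguous ones must be handled (or dropped) separately, even though the authors report they had no measurable effect on imputation quality here.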
On-Chip Power-Combining for High-Power Schottky Diode Based Frequency Multipliers
Siles Perez, Jose Vicente (Inventor); Chattopadhyay, Goutam (Inventor); Lee, Choonsup (Inventor); Schlecht, Erich T. (Inventor); Jung-Kubiak, Cecile D. (Inventor); Mehdi, Imran (Inventor)
2015-01-01
A novel MMIC on-chip power-combined frequency multiplier device and a method of fabricating the same, comprising two or more multiplying structures integrated on a single chip, wherein each of the integrated multiplying structures are electrically identical and each of the multiplying structures include one input antenna (E-probe) for receiving an input signal in the millimeter-wave, submillimeter-wave or terahertz frequency range inputted on the chip, a stripline based input matching network electrically connecting the input antennas to two or more Schottky diodes in a balanced configuration, two or more Schottky diodes that are used as nonlinear semiconductor devices to generate harmonics out of the input signal and produce the multiplied output signal, stripline based output matching networks for transmitting the output signal from the Schottky diodes to an output antenna, and an output antenna (E-probe) for transmitting the output signal off the chip into the output waveguide transmission line.
Mendel-GPU: haplotyping and genotype imputation on graphics processing units.
Chen, Gary K; Wang, Kai; Stram, Alex H; Sobel, Eric M; Lange, Kenneth
2012-11-15
In modern sequencing studies, one can improve the confidence of genotype calls by phasing haplotypes using information from an external reference panel of fully typed unrelated individuals. However, the computational demands are so high that they prohibit researchers with limited computational resources from haplotyping large-scale sequence data. Our graphics processing unit based software delivers haplotyping and imputation accuracies comparable to competing programs at a fraction of the computational cost and peak memory demand. Mendel-GPU, our OpenCL software, runs on Linux platforms and is portable across AMD and nVidia GPUs. Users can download both code and documentation at http://code.google.com/p/mendel-gpu/. gary.k.chen@usc.edu. Supplementary data are available at Bioinformatics online.
Impute DC link (IDCL) cell based power converters and control thereof
Divan, Deepakraj M.; Prasai, Anish; Hernendez, Jorge; Moghe, Rohit; Iyer, Amrit; Kandula, Rajendra Prasad
2016-04-26
Power flow controllers based on Imputed DC Link (IDCL) cells are provided. The IDCL cell is a self-contained power electronic building block (PEBB). The IDCL cell may be stacked in series and parallel to achieve power flow control at higher voltage and current levels. Each IDCL cell may comprise a gate drive, a voltage sharing module, and a thermal management component in order to facilitate easy integration of the cell into a variety of applications. By providing direct AC conversion, the IDCL cell based AC/AC converters reduce device count, eliminate the use of electrolytic capacitors that have life and reliability issues, and improve system efficiency compared with similarly rated back-to-back inverter system.
Fiedler, H. [UNEP Chemicals, Chatelaine (Switzerland)
2004-09-15
The Stockholm Convention on Persistent Organic Pollutants (POPs) entered into force on 17 May 2004 with 50 Parties. In May 2004, 59 countries had ratified or acceded to the Convention. The objective of the Convention is "to protect human health and the environment from persistent organic pollutants". For intentionally produced POPs, e.g., pesticides and industrial chemicals such as hexachlorobenzene and polychlorinated biphenyls, this will be achieved by stopping production and use. For unintentionally generated POPs, such as polychlorinated dibenzo-p-dioxins (PCDD) and polychlorinated dibenzofurans (PCDF), measures have to be taken to "reduce the total releases derived from anthropogenic sources"; the final goal is ultimate elimination, where feasible. Under the Convention, Parties have to establish and maintain release inventories to prove the continuous release reduction. Since many countries do not have the technical and financial capacity to measure all releases from all potential PCDD/PCDF sources, UNEP Chemicals has developed the "Standardized Toolkit for the Identification and Quantification of Dioxin and Furan Releases" ("Toolkit" for short), a methodology to estimate annual releases from a number of sources. With this methodology, annual releases can be estimated by multiplying process-specific default emission factors provided in the Toolkit by national activity data. At the seventh session of the Intergovernmental Negotiating Committee, the Toolkit was recommended for use by countries when reporting national release data to the Conference of the Parties. The Toolkit is especially used by developing countries and countries with economies in transition where no measured data are available. Results from Uruguay, Thailand, Jordan, Philippines, and Brunei Darussalam have been published.
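The Toolkit's estimation rule described above is a simple product-sum over source classes: annual release = Σ (default emission factor × national activity). A sketch with hypothetical numbers follows; the real Toolkit tabulates its own default emission factors per source class and release vector, so the figures below are illustrative only:

```python
# Hypothetical default emission factors (µg TEQ per tonne processed) and
# national activity data (tonnes per year) for two source classes.
emission_factors = {"open_burning": 300.0, "medical_waste_incineration": 3000.0}
activity = {"open_burning": 1200.0, "medical_waste_incineration": 40.0}

# Annual release = sum over sources of (emission factor x activity)
annual_release_ug_teq = sum(emission_factors[s] * activity[s] for s in activity)
print(annual_release_ug_teq / 1e6, "g TEQ/year")   # 0.48 g TEQ/year
```

A country without measurement capacity only needs to supply the activity data; the default factors carry the process-specific information.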
Partial spectral multipliers and partial Riesz transforms for degenerate operators
ter Elst, A F M
2012-01-01
We consider degenerate differential operators $A = \sum_{k,j=1}^d \partial_k (a_{kj} \partial_j)$ on $L^2(\mathbb{R}^d)$ with real symmetric bounded measurable coefficients. Let $\chi \in C_b^\infty(\mathbb{R}^d)$ (respectively, let $\Omega$ be a bounded Lipschitz domain) and suppose that $(a_{kj}) \ge \mu > 0$ a.e. on $\supp \chi$ (resp., a.e. on $\Omega$). We prove a spectral multiplier type result: if $F\colon [0, \infty) \to \mathbb{C}$ is such that $\sup_{t > 0} \| \varphi(\cdot) F(t\,\cdot) \|_{C^s} < \infty$ for some $s > d/2$, then $M_\chi F(I+A) M_\chi$ is of weak type $(1,1)$ (resp. $P_\Omega F(I+A) P_\Omega$ is of weak type $(1,1)$). We also prove boundedness on $L^p$ for all $p \in (1,2]$ of the partial Riesz transforms $M_\chi \partial_k (I+A)^{-1/2}$.
Suborbital Soft X-Ray Spectroscopy with Gaseous Electron Multipliers
Rogers, Thomas D.
This thesis consists of the design, fabrication, and launch of a sounding rocket payload to observe the spectrum of the soft X-ray emission (0.1-1 keV) from the Cygnus Loop supernova remnant. This instrument, designated the Off-plane Grating Rocket for Extended Source Spectroscopy (OGRESS), was launched from White Sands Missile Range on May 2nd, 2015. The X-ray spectrograph incorporated a wire-grid focuser feeding an array of gratings in the extreme off-plane mount which dispersed the spectrum onto Gaseous Electron Multiplier (GEM) detectors. The gain characteristics of OGRESS's GEM detectors were fully characterized with respect to applied voltage and internal gas pressure, allowing operational settings to be optimized. The GEMs were optimized to operate below laboratory atmospheric pressure, allowing lower applied voltages, thus reducing the risk of both electrical arcing and tearing of the thin detector windows. The instrument recorded 388 seconds of data and found highly uniform count distributions over both detector faces, in sharp contrast to the expected thermal line spectrum. This signal is attributed to X-ray fluorescence lines generated inside the spectrograph. The radiation is produced when thermal ionospheric particles are accelerated into the interior walls of the spectrograph by the high voltages of the detector windows. A fluorescence model was found to fit the flight data better than modeled supernova spectra. Post-flight testing and analysis revealed that electrons produce distinct signal on the detectors which can also be successfully modeled as fluorescence emission.
Yan-Xia Lin
2015-12-01
Lin (2014) developed a framework for the method of the sample-moment-based density approximant for estimating the probability density function of microdata based on noise-multiplied data. Theoretically, it provides a promising way for data users to generate synthetic versions of the original data without accessing the original data; however, technical issues can cause problems implementing the method. In this paper, we describe a software package called MaskDensity14, written in the R language, that uses a computational approach to solve the technical issues and makes the method of the sample-moment-based density approximant feasible. MaskDensity14 has applications in many areas, such as sharing clinical trial data and survey data without releasing the original data.
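The principle behind noise-multiplied masking is that moments of the confidential variable remain recoverable because the noise distribution is published: with independent noise, E[z^k] = E[x^k]·E[c^k]. The sketch below shows only that moment identity, not MaskDensity14's actual algorithm (which goes further and reconstructs the full density from sample moments); the distributions chosen here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.5, size=100_000)   # confidential microdata
noise = rng.uniform(0.5, 1.5, size=x.size)          # multiplicative mask, distribution public
z = x * noise                                       # released noise-multiplied data

# Because x and noise are independent, E[z^k] = E[x^k] * E[noise^k],
# so a data user can recover moments of x from z alone.
k = 2
# E[U^k] for U ~ Uniform(a, b) is (b^(k+1) - a^(k+1)) / ((k+1)(b - a))
noise_moment = (1.5 ** (k + 1) - 0.5 ** (k + 1)) / ((k + 1) * (1.5 - 0.5))
x_moment_est = np.mean(z ** k) / noise_moment
print(round(x_moment_est, 2), "vs true", round(float(np.mean(x ** k)), 2))
```

Stacking such recovered moments for k = 1, 2, 3, ... is what feeds a moment-based density approximant of the unobserved x.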
Design and Performance Analysis of Reversible Logic Four Quadrant Multiplier Using CSLA and CLAA
Mr. P. Dileep Kumar Reddy
2014-03-01
Multiplication is a fundamental operation in most signal processing algorithms. Multipliers have large area, long latency and consume considerable power; therefore, low-power multiplier design has been an important part of low-power VLSI system design. There has been extensive work on low-power multipliers at the technology, physical, circuit and logic levels. A system's performance is generally determined by the performance of the multiplier, because the multiplier is generally the slowest element in the system; furthermore, it is generally the most area-consuming. Hence, optimizing the speed and area of the multiplier is a major design issue. However, area and speed are usually conflicting constraints, so that improving speed results mostly in larger areas. As a result, a whole spectrum of multipliers with different area-speed constraints has been designed with reversible logic gates. Reversible logic has promising applications in emerging computing paradigms such as quantum computing, quantum-dot cellular automata and optical computing. In reversible logic gates there is a unique one-to-one mapping between the inputs and outputs.
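The closing claim, that reversible gates realize a unique one-to-one mapping between inputs and outputs, can be checked exhaustively for any small gate. This sketch uses the standard Toffoli (CCNOT) gate as an example; it is not the paper's multiplier circuit, just an illustration of the reversibility property:

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips the target bit c iff both controls a, b are 1."""
    return (a, b, c ^ (a & b))

inputs = list(product((0, 1), repeat=3))
outputs = [toffoli(*bits) for bits in inputs]

# The gate is reversible iff the mapping is a bijection on {0,1}^3,
# i.e. all 8 outputs are distinct.
print(len(set(outputs)) == len(inputs))   # True
```

The same bijection test applies to any candidate reversible building block (Fredkin, Peres, etc.) before it is composed into a larger reversible multiplier.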
Dyadic Bivariate Fourier Multipliers for Multi-Wavelets in L2(R2)
Zhongyan Li∗; Xiaodi Xu
2015-01-01
The single 2-dilation orthogonal wavelet multipliers in the one-dimensional case and the single A-dilation (where A is any expansive matrix with integer entries and |det A| = 2) wavelet multipliers in the high-dimensional case were completely characterized by the Wutam Consortium (1998) and Z. Y. Li, et al. (2010). But there exist no further results on orthogonal multivariate wavelet matrix multipliers corresponding to an integer expansive dilation matrix with absolute value of determinant not equal to 2 in L2(R2). In this paper, we choose 2I2 as the dilation matrix and consider the 2I2-dilation orthogonal multivariate wavelet Y = {y1, y2, y3} (which is called a dyadic bivariate wavelet) and its multipliers. We call the 3×3 matrix-valued function A(s) = [f_{i,j}(s)]_{3×3}, where the f_{i,j} are measurable functions, a dyadic bivariate matrix Fourier wavelet multiplier if the inverse Fourier transform of A(s)(ŷ1(s), ŷ2(s), ŷ3(s))ᵀ = (ĝ1(s), ĝ2(s), ĝ3(s))ᵀ, namely (g1, g2, g3), is a dyadic bivariate wavelet whenever (y1, y2, y3) is any dyadic bivariate wavelet. We give some conditions for dyadic matrix bivariate wavelet multipliers. The results extend those of Z. Y. Li and X. L. Shi (2011). As an application, we construct some useful dyadic bivariate wavelets by using dyadic Fourier matrix wavelet multipliers and apply them to image denoising.
Puett Robin C
2009-10-01
Abstract Background There is increasing interest in the study of place effects on health, facilitated in part by geographic information systems. Incomplete or missing address information reduces geocoding success. Several geographic imputation methods have been suggested to overcome this limitation. Accuracy evaluation of these methods can be focused at the level of individuals and at higher group levels (e.g., spatial distribution). Methods We evaluated the accuracy of eight geo-imputation methods for address allocation from ZIP codes to census tracts at the individual and group level. The spatial apportioning approaches underlying the imputation methods included four fixed (deterministic) and four random (stochastic) allocation methods using land area, total population, population under age 20, and race/ethnicity as weighting factors. Data included more than 2,000 geocoded cases of diabetes mellitus among youth aged 0-19 in four U.S. regions. The imputed distribution of cases across tracts was compared to the true distribution using a chi-squared statistic. Results At the individual level, population-weighted (total or under age 20) fixed allocation showed the greatest level of accuracy, with correct census tract assignments averaging 30.01% across all regions, followed by the race/ethnicity-weighted random method (23.83%). The true distribution of cases across census tracts was that 58.2% of tracts exhibited no cases, 26.2% had one case, 9.5% had two cases, and less than 3% had three or more. This distribution was best captured by the random allocation methods, with no significant differences (p-value > 0.90). However, significant differences in distributions based on fixed allocation methods were found. Conclusion Fixed imputation methods seemed to yield the greatest accuracy at the individual level, suggesting use for studies on area-level environmental exposures. Fixed methods result in artificial clusters in single census tracts. For studies
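The fixed versus random allocation schemes compared above can be sketched in a few lines. Tract names and weights here are hypothetical; the study's actual weights came from land area, population counts and race/ethnicity:

```python
import random

# Hypothetical share of a ZIP code's under-20 population in each census tract.
tract_weights = {"tract_A": 0.55, "tract_B": 0.30, "tract_C": 0.15}

def fixed_allocation(weights):
    """Deterministic: assign every case to the highest-weight tract."""
    return max(weights, key=weights.get)

def random_allocation(weights, rng):
    """Stochastic: sample a tract with probability proportional to its weight."""
    tracts, w = zip(*weights.items())
    return rng.choices(tracts, weights=w, k=1)[0]

rng = random.Random(0)
print(fixed_allocation(tract_weights))                        # tract_A
print([random_allocation(tract_weights, rng) for _ in range(3)])
```

The sketch makes the paper's trade-off visible: fixed allocation sends every case to the same tract (highest per-case accuracy, but artificial clusters), while random allocation spreads cases in proportion to the weights and so better reproduces the group-level distribution.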
Imputation of the Rare HOXB13 G84E Mutation and Cancer Risk in a Large Population-Based Cohort
Hoffmann, Thomas J.; Sakoda, Lori C.; Shen, Ling; Jorgenson, Eric; Habel, Laurel A.; Liu, Jinghua; Kvale, Mark N.; Asgari, Maryam M.; Banda, Yambazi; Corley, Douglas; Kushi, Lawrence H.; Quesenberry, Charles P.; Schaefer, Catherine; Van Den Eeden, Stephen K.; Risch, Neil; Witte, John S.
2015-01-01
An efficient approach to characterizing the disease burden of rare genetic variants is to impute them into large well-phenotyped cohorts with existing genome-wide genotype data using large sequenced reference panels. The success of this approach hinges on the accuracy of rare variant imputation, which remains controversial. For example, a recent study suggested that one cannot adequately impute the HOXB13 G84E mutation associated with prostate cancer risk (carrier frequency of 0.0034 in European ancestry participants in the 1000 Genomes Project). We show that by utilizing the 1000 Genomes Project data plus an enriched reference panel of mutation carriers we were able to accurately impute the G84E mutation into a large cohort of 83,285 non-Hispanic White participants from the Kaiser Permanente Research Program on Genes, Environment and Health Genetic Epidemiology Research on Adult Health and Aging cohort. Imputation authenticity was confirmed via a novel classification and regression tree method, and then empirically validated analyzing a subset of these subjects plus an additional 1,789 men from Kaiser specifically genotyped for the G84E mutation (r2 = 0.57, 95% CI = 0.37−0.77). We then show the value of this approach by using the imputed data to investigate the impact of the G84E mutation on age-specific prostate cancer risk and on risk of fourteen other cancers in the cohort. The age-specific risk of prostate cancer among G84E mutation carriers was higher than among non-carriers. Risk estimates from Kaplan-Meier curves were 36.7% versus 13.6% by age 72, and 64.2% versus 24.2% by age 80, for G84E mutation carriers and non-carriers, respectively (p = 3.4×10−12). The G84E mutation was also associated with an increase in risk for the fourteen other most common cancers considered collectively (p = 5.8×10−4) and more so in cases diagnosed with multiple cancer types, both those including and not including prostate cancer, strongly suggesting pleiotropic effects.
Configurable multiplier modules for an adaptive computing system
O. A. Pfänder
2006-01-01
Full Text Available The importance of reconfigurable hardware is increasing steadily. For example, the primary approach of using adaptive systems based on programmable gate arrays and configurable routing resources has gone mainstream and high-performance programmable logic devices are rivaling traditional application-specific hardwired integrated circuits. Also, the idea of moving from the 2-D domain into a 3-D design which stacks several active layers above each other is gaining momentum in research and industry, to cope with the demand for smaller devices with a higher scale of integration. However, optimized arithmetic blocks in coarse-grain reconfigurable arrays as well as field-programmable architectures still play an important role. In countless digital systems and signal processing applications, the multiplication is one of the critical challenges, where in many cases a trade-off between area usage and data throughput has to be made. But the a priori choice of word-length and number representation can also be replaced by a dynamic choice at run-time, in order to improve flexibility, area efficiency and the level of parallelism in computation. In this contribution, we look at an adaptive computing system called 3-D-SoftChip to point out what parameters are crucial to implement flexible multiplier blocks into optimized elements for accelerated processing. The 3-D-SoftChip architecture uses a novel approach to 3-dimensional integration based on flip-chip bonding with indium bumps. The modular construction, the introduction of interfaces to realize the exchange of intermediate data, and the reconfigurable sign handling approach will be explained, as well as a beneficial way to handle and distribute the numerous required control signals.
Computation of Floquet Multipliers Using an Iterative Method for Variational Equations
Nureki, Yu; Murashige, Sunao
This paper proposes a new method to numerically obtain Floquet multipliers, which characterize the stability of periodic orbits of ordinary differential equations. For sufficiently smooth periodic orbits, we can compute Floquet multipliers using some standard numerical methods with enough accuracy. However, it has been reported that these methods may produce incorrect results under some conditions. In this work, we propose a new iterative method to compute Floquet multipliers using eigenvectors of matrix solutions of the variational equations. Numerical examples show the effectiveness of the proposed method.
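For background, the standard (non-iterative) computation that such work builds on derives the Floquet multipliers as eigenvalues of the monodromy matrix, obtained by integrating the variational equation over one period. A minimal sketch of that baseline (not the paper's iterative method):

```python
import numpy as np

def floquet_multipliers(A, T, steps=2000):
    """Floquet multipliers of x' = A(t) x with period T.

    Integrates the variational (matrix) equation Phi' = A(t) Phi with RK4
    over one period; the eigenvalues of the monodromy matrix Phi(T) are
    the Floquet multipliers.
    """
    n = A(0.0).shape[0]
    Phi = np.eye(n)
    h = T / steps
    for k in range(steps):
        t = k * h
        k1 = A(t) @ Phi
        k2 = A(t + h / 2) @ (Phi + h / 2 * k1)
        k3 = A(t + h / 2) @ (Phi + h / 2 * k2)
        k4 = A(t + h) @ (Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.linalg.eigvals(Phi)

# Harmonic oscillator x'' = -x over its period T = 2*pi: the monodromy
# matrix is the identity, so both multipliers equal 1.
A = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])
mults = floquet_multipliers(A, 2 * np.pi)
```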
Parametric Model for the Response of a Photo-multiplier Tube
Aguilar, M.; Alcaraz, J.; Berdugo, J.; Casaus, J.; Delgado, C.; Diaz, C.; Lanciotti, E.; Mana, C.; Marin, J.; Martinez, G.; Molla, M.; Palomares, C.; Rodriguez, J.; Sanchez, E.; Sevilla, A.; Torrento, A.
2005-07-01
When a photon impinges upon a photo-multiplier tube, an electron is emitted with a certain probability and, after several amplification stages, an electron shower is collected at the anode. However, when the first electron is emitted from one of the amplification dynodes or the photo-multiplier is operated under untoward conditions (external magnetic fields...), smaller showers are collected. In this paper, we present a bi-parametric model which describes the response of a photo-multiplier tube over a wide range of circumstances. (Author)
Improved Faddeev-Jackiw quantization of the electromagnetic field and Lagrange multiplier fields
YANG Jin-Long; HUANG Yong-Chang
2008-01-01
We use the improved Faddeev-Jackiw quantization method to quantize the electromagnetic field and its Lagrange multiplier fields. A comparison of the method with the usual Faddeev-Jackiw method and the Dirac method is given. We show that this method is equivalent to the Dirac method and also retains all the merits of the usual Faddeev-Jackiw method. Moreover, it is simpler than the usual one if one needs to obtain new secondary constraints. Therefore, the improved Faddeev-Jackiw method is essential. Meanwhile, we find the new meaning of the Lagrange multipliers and explain the Faddeev-Jackiw generalized brackets concerning the Lagrange multipliers.
Duma, M
2013-09-01
Full Text Available We propose a hybrid missing data imputation technique using positive selection and correlation-based feature selection for insurance data. The hybrid is used to help supervised learning methods improve their classification accuracy and resilience...
Eulenburg, Christine; Suling, Anna; Neuser, Petra; Reuss, Alexander; Canzler, Ulrich; Fehm, Tanja; Luyten, Alexander; Hellriegel, Martin; Woelber, Linn; Mahner, Sven
2016-01-01
Propensity scoring (PS) is an established tool to account for measured confounding in non-randomized studies. These methods are sensitive to missing values, which are a common problem in observational data. The combination of multiple imputation of missing values and different propensity scoring techniques is addressed in this work. For a sample of lymph node-positive vulvar cancer patients, we re-analyze associations between the application of radiotherapy and disease-related and non-related survival. Inverse-probability-of-treatment-weighting (IPTW) and PS stratification are applied after multiple imputation by chained equation (MICE). Methodological issues are described in detail. Interpretation of the results and methodological limitations are discussed.
Nearest neighbor imputation using spatial-temporal correlations in wireless sensor networks.
Li, YuanYuan; Parker, Lynne E
2014-01-01
Missing data is common in Wireless Sensor Networks (WSNs), especially with multi-hop communications. There are many reasons for this phenomenon, such as unstable wireless communications, synchronization issues, and unreliable sensors. Unfortunately, missing data creates a number of problems for WSNs. First, since most sensor nodes in the network are battery-powered, it is too expensive to have the nodes retransmit missing data across the network. Data re-transmission may also cause time delays when detecting abnormal changes in an environment. Furthermore, localized reasoning techniques on sensor nodes (such as machine learning algorithms to classify states of the environment) are generally not robust enough to handle missing data. Since sensor data collected by a WSN is generally correlated in time and space, we illustrate how replacing missing sensor values with spatially and temporally correlated sensor values can significantly improve the network's performance. However, our studies show that it is important to determine which nodes are spatially and temporally correlated with each other. Simple techniques based on Euclidean distance are not sufficient for complex environmental deployments. Thus, we have developed a novel Nearest Neighbor (NN) imputation method that estimates missing data in WSNs by learning spatial and temporal correlations between sensor nodes. To improve the search time, we utilize a kd-tree data structure, which is a non-parametric, data-driven binary search tree. Instead of using traditional mean and variance of each dimension for kd-tree construction, and Euclidean distance for kd-tree search, we use weighted variances and weighted Euclidean distances based on measured percentages of missing data. We have evaluated this approach through experiments on sensor data from a volcano dataset collected by a network of Crossbow motes, as well as experiments using sensor data from a highway traffic monitoring application. Our experimental results
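The weighted-distance nearest-neighbor idea can be sketched as follows (a simplified, single-neighbor illustration; the sensor readings and weights are invented, and in the paper's method the weights are derived from measured missing-data percentages and the search uses a kd-tree rather than a linear scan):

```python
import math

def weighted_distance(a, b, weights):
    """Euclidean distance over dimensions observed in both readings,
    scaled by per-dimension weights (None marks a missing value)."""
    terms = [w * (x - y) ** 2 for x, y, w in zip(a, b, weights)
             if x is not None and y is not None]
    return math.sqrt(sum(terms)) if terms else float("inf")

def nn_impute(target, neighbors, weights):
    """Fill each missing dimension of `target` from its nearest neighbor
    that observed that dimension."""
    filled = list(target)
    for i, v in enumerate(target):
        if v is None:
            candidates = [n for n in neighbors if n[i] is not None]
            best = min(candidates,
                       key=lambda n: weighted_distance(target, n, weights))
            filled[i] = best[i]
    return filled

# Hypothetical readings: [temperature, humidity, gas concentration]
readings = [[20.0, 55.0, 1.1], [30.0, 40.0, 2.0], [21.0, 54.0, 1.2]]
print(nn_impute([20.5, None, 1.15], readings, [1.0, 1.0, 1.0]))
```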
Sepúlveda, Nuno; Manjurano, Alphaxard; Drakeley, Chris; Clark, Taane G
2014-07-01
Multiple imputation based on chained equations (MICE) is an alternative missing genotype method that can use genetic and nongenetic auxiliary data to inform the imputation process. Previously, MICE was successfully tested on strongly linked genetic data. We have now tested it on data of the HBA2 gene which, by the experimental design used in a malaria association study in Tanzania, shows a high missing data percentage and is weakly linked with the remaining genetic markers in the data set. We constructed different imputation models and studied their performance under different missing data conditions. Overall, MICE failed to accurately predict the true genotypes. However, using the best imputation model for the data, we obtained unbiased estimates for the genetic effects, and association signals of the HBA2 gene on malaria positivity. When the whole data set was analyzed with the same imputation model, the association signal increased from 0.80 to 2.70 before and after imputation, respectively. Conversely, postimputation estimates for the genetic effects remained the same in relation to the complete case analysis but showed increased precision. We argue that these postimputation estimates are reasonably unbiased, as a result of a good study design based on matching key socio-environmental factors.
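For readers unfamiliar with chained equations, a bare-bones sketch of the iteration follows (continuous data with least-squares conditional models, purely for illustration; real MICE draws from posterior predictive distributions, produces several completed datasets, and would treat genotypes as categorical):

```python
import numpy as np

def mice_sketch(X, n_iter=10):
    """Minimal chained-equations loop: each column with missing values is
    repeatedly regressed (least squares) on the other columns, and its
    missing entries are replaced by the predictions."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):          # initialize with column means
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(X, j, axis=1)
            A = np.c_[np.ones(len(X)), others]
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

rng = np.random.default_rng(1)
z = rng.normal(size=200)
data = np.c_[z, 2 * z + rng.normal(scale=0.1, size=200)]
data[::5, 1] = np.nan                    # 20% missing in the second column
completed = mice_sketch(data)
```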
Liu, Benmei; Yu, Mandi; Graubard, Barry I; Troiano, Richard P; Schenker, Nathaniel
2016-12-10
The Physical Activity Monitor component was introduced into the 2003-2004 National Health and Nutrition Examination Survey (NHANES) to collect objective information on physical activity including both movement intensity counts and ambulatory steps. Because of an error in the accelerometer device initialization process, the steps data were missing for all participants in several primary sampling units, typically a single county or group of contiguous counties, who had intensity count data from their accelerometers. To avoid potential bias and loss in efficiency in estimation and inference involving the steps data, we considered methods to accurately impute the missing values for steps collected in the 2003-2004 NHANES. The objective was to come up with an efficient imputation method that minimized model-based assumptions. We adopted a multiple imputation approach based on additive regression, bootstrapping and predictive mean matching methods. This method fits alternative conditional expectation (ace) models, which use an automated procedure to estimate optimal transformations for both the predictor and response variables. This paper describes the approaches used in this imputation and evaluates the methods by comparing the distributions of the original and the imputed data. A simulation study using the observed data is also conducted as part of the model diagnostics. Finally, some real data analyses are performed to compare the before and after imputation results. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
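The predictive mean matching step can be sketched as follows (a simplified single-predictor illustration with simulated data; the additive-regression, bootstrap, and ace-transformation machinery of the actual procedure is omitted):

```python
import numpy as np

def pmm_impute(x, y, k=5, rng=np.random.default_rng(0)):
    """Predictive mean matching sketch: fit y ~ x on complete cases, then
    for each missing y pick a random donor among the k observed cases with
    the closest predicted values and copy the donor's observed y, so every
    imputation is a plausible, actually observed value."""
    obs = ~np.isnan(y)
    A = np.c_[np.ones(len(x)), x]
    beta, *_ = np.linalg.lstsq(A[obs], y[obs], rcond=None)
    pred = A @ beta
    y = y.copy()
    donors = np.flatnonzero(obs)
    for i in np.flatnonzero(~obs):
        nearest = donors[np.argsort(np.abs(pred[donors] - pred[i]))[:k]]
        y[i] = y[rng.choice(nearest)]
    return y

# Hypothetical accelerometer data: steps roughly proportional to counts.
counts = np.linspace(0, 10, 60)
steps = 100 * counts + np.random.default_rng(1).normal(scale=20, size=60)
steps_missing = steps.copy()
steps_missing[::6] = np.nan
imputed = pmm_impute(counts, steps_missing)
```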
Galina A. Manokhina
2012-11-01
Full Text Available The article highlights the main questions concerning the possible consequences of replacing the currently operating system of a single tax on imputed income with the patent system of taxation. The main advantages and drawbacks of the new system of taxation are shown, including the opinion that what is more effective is not the replacement of one special tax regime with another, but the introduction of the patent taxation system as an auxiliary system.
Design of Pipeline Multiplier Based on Modified Booth's Algorithm and Wallace Tree
Yao, Aihong; Li, Ling; Sun, Mengzhe
A design of a 32*32 bit pipelined multiplier is presented in this paper. The proposed multiplier is based on the modified Booth algorithm and the Wallace tree structure. In order to improve the throughput rate of the multiplier, pipeline architecture is introduced to the Wallace tree. A Carry Select Adder is deployed to reduce the propagation delay of the carry signal for the final-level 64-bit adder. The multiplier is fully implemented in Verilog HDL and synthesized successfully with Quartus II. The experimental results show that the resource consumption and power consumption are reduced to 2560 LEs and 120 mW, and the operating frequency is improved from 136.21 MHz to 165.07 MHz.
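The modified (radix-4) Booth recoding that halves the number of partial products can be checked behaviorally in software (a sketch of the recoding only, not the pipelined Wallace-tree hardware; `bits` is assumed even):

```python
def booth_radix4_multiply(a, b, bits=32):
    """Radix-4 (modified) Booth multiplication sketch: the multiplier b is
    recoded into digits in {-2,-1,0,1,2} from overlapping bit triplets
    b[i+1] b[i] b[i-1], so only bits/2 partial products are generated
    (these partial products are what a Wallace tree would then sum).
    b is interpreted as a two's-complement `bits`-bit value."""
    mask = (1 << bits) - 1
    b_u = b & mask
    acc = 0
    prev = 0                                # implicit bit to the right of b
    for i in range(0, bits, 2):
        triplet = ((b_u >> i) & 0b11) << 1 | prev
        digit = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
                 0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}[triplet]
        acc += (digit * a) << i             # partial product with weight 4^(i/2)
        prev = (b_u >> (i + 1)) & 1
    return acc

print(booth_radix4_multiply(7, -3))         # → -21
```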
Chen, Shaobo; Chen, Pingxiuqi; Shao, Qiliang; Basha Shaik, Nazeem; Xie, Jiafeng
2017-05-01
Elliptic curve cryptography (ECC) provides much stronger security per bit compared to traditional cryptosystems, and hence it plays an ideal role in secure communication in the smart grid. On the other hand, secure implementation of finite field multiplication over GF(2^m) is considered the bottleneck of ECC. In this paper, we present a novel obfuscation strategy for secure implementation of a systolic field multiplier for ECC in the smart grid. First, we propose, for the first time, a novel obfuscation technique to derive an obfuscated systolic finite field multiplier for ECC implementation. Then, we employ the DNA cryptography coding strategy to obfuscate the field multiplier further. Finally, we obtain the area-time-power complexity of the proposed field multiplier to confirm the efficiency of the proposed design. The proposed design is highly obfuscated with low overhead, suitable for secure cryptosystems in the smart grid.
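The field-arithmetic recurrence that a systolic GF(2^m) multiplier unrolls in hardware can be sketched bit-serially (a behavioral illustration only; the AES polynomial x^8+x^4+x^3+x+1 is used purely as an example modulus, and the paper's obfuscation and DNA-coding layers are not modeled):

```python
def gf2m_bitserial(a, b, poly=0x11B, m=8):
    """MSB-first bit-serial GF(2^m) multiply-and-reduce: each iteration
    multiplies the accumulator by x, reduces when the degree reaches m,
    and conditionally adds a -- the per-cycle step a systolic array maps
    onto its processing elements."""
    acc = 0
    for i in range(m - 1, -1, -1):       # one "cycle" per multiplier bit
        acc <<= 1                        # multiply accumulator by x
        if acc >> m & 1:                 # reduce when degree reaches m
            acc ^= poly
        if b >> i & 1:
            acc ^= a
    return acc

# In the AES field, {53} and {CA} are multiplicative inverses.
print(hex(gf2m_bitserial(0x53, 0xCA)))   # → 0x1
```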
VHDL IMPLEMENTATION AND COMPARISON OF COMPLEX MUL-TIPLIER USING BOOTH’S AND VEDIC ALGORITHM
Rajashri K. Bhongade
2015-11-01
Full Text Available For the design of a complex number multiplier, the basic idea is adopted from the design of a multiplier. The ancient Indian mathematics of the "Vedas" is used for designing the multiplier unit. There are 16 sutras in the Vedas, from which the Urdhva Tiryakbhyam sutra (method) was selected for implementing complex multiplication; the Urdhva Tiryakbhyam sutra is basically applicable to all cases of multiplication. Any multi-bit multiplication can be reduced down to single-bit multiplications and additions by using the Urdhva Tiryakbhyam sutra, which is performed vertically and crosswise. The partial products and sums are generated in a single step, which reduces the carry propagation from LSB to MSB when using these formulas. In this paper, simulation results for 4-bit complex number multiplication using Booth's algorithm and using the Vedic sutra are illustrated. The implementation of the Vedic mathematics and its application to the complex multiplier was checked for parameters like propagation delay.
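The vertically-and-crosswise pattern can be sketched on decimal digits (a behavioral illustration of the sutra, not the hardware design): every result column is formed in one step from crosswise digit products, and carries are propagated in a single pass.

```python
def urdhva_multiply(x, y):
    """Urdhva Tiryakbhyam ("vertically and crosswise") sketch: partial
    products for each result column are accumulated in one step from
    crosswise digit pairs, then carries are propagated once."""
    a = [int(d) for d in str(x)][::-1]        # least-significant digit first
    b = [int(d) for d in str(y)][::-1]
    cols = [0] * (len(a) + len(b) - 1)
    for i, da in enumerate(a):                # crosswise products per column
        for j, db in enumerate(b):
            cols[i + j] += da * db
    digits, carry = [], 0
    for c in cols:                            # single carry-propagation pass
        carry, d = divmod(c + carry, 10)
        digits.append(d)
    while carry:
        carry, d = divmod(carry, 10)
        digits.append(d)
    return int("".join(map(str, digits[::-1])))

print(urdhva_multiply(1234, 5678))            # → 7006652
```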
Karatsuba-Ofman Multiplier with Integrated Modular Reduction for GF(2^m)
CUEVAS-FARFAN, E.
2013-05-01
Full Text Available In this paper a novel GF(2^m) multiplier based on the Karatsuba-Ofman Algorithm is presented. A binary field multiplication in polynomial basis is typically viewed as a two-step process: a polynomial multiplication followed by a modular reduction step. This research proposes a modification to the original Karatsuba-Ofman Algorithm in order to integrate the modular reduction inside the polynomial multiplication step. Modular reduction is achieved by using parallel linear feedback registers. The new algorithm is described in detail and results from a hardware implementation on FPGA technology are discussed. The hardware architecture is described in VHDL and synthesized for a Virtex-6 device. Although the proposed field multiplier can be implemented for arbitrary finite fields, the targeted finite fields are those recommended for Elliptic Curve Cryptography. Compared with other KOA multipliers, our proposed multiplier uses 36% fewer area resources and improves the maximum delay by 10%.
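The idea of combining a Karatsuba-Ofman split with reduction by the field polynomial can be sketched in software (an assumed GF(2^8) example with the AES polynomial, not the paper's ECC-sized fields; the parallel linear feedback registers are replaced here by a sequential reduction loop):

```python
def clmul(a, b):
    """Schoolbook carry-less (GF(2)[x]) multiplication, used as base case."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a, b, n=8):
    """Karatsuba-Ofman split for n-bit GF(2) polynomials:
    a*b = aH*bH*x^n ^ ((aH^aL)*(bH^bL) ^ aH*bH ^ aL*bL)*x^(n/2) ^ aL*bL,
    needing three half-size products instead of four."""
    if n <= 4:
        return clmul(a, b)
    h = n // 2
    aL, aH = a & ((1 << h) - 1), a >> h
    bL, bH = b & ((1 << h) - 1), b >> h
    lo = karatsuba_gf2(aL, bL, h)
    hi = karatsuba_gf2(aH, bH, h)
    mid = karatsuba_gf2(aL ^ aH, bL ^ bH, h) ^ lo ^ hi
    return (hi << n) ^ (mid << h) ^ lo

def gf2m_reduce(p, poly=0x11B, m=8):
    """Reduction by an irreducible polynomial (here the AES polynomial
    x^8+x^4+x^3+x+1, an assumption for illustration)."""
    for i in range(p.bit_length() - 1, m - 1, -1):
        if p >> i & 1:
            p ^= poly << (i - m)
    return p

def gf2m_mult(a, b):
    return gf2m_reduce(karatsuba_gf2(a, b))

print(hex(gf2m_mult(0x53, 0xCA)))   # → 0x1 (inverse pair in the AES field)
```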
Design of High speed Low Power Reversible Vedic multiplier and Reversible Divider
Srikanth G Department of Electronics & Communication Engineerig, Indur Institute of Engineering & Technology, Siddipet, Medak, JNTUH University, Telangana, India.
2014-09-01
Full Text Available This paper brings out a 32×32 bit reversible Vedic multiplier using the "Urdhva Tiryakabhayam" sutra, meaning vertical and crosswise, designed using reversible logic gates, which is the first of its kind. Also in this paper we propose a new reversible unsigned division circuit. This circuit is designed using reversible components like a reversible parallel adder, reversible left-shift register, reversible multiplexer, and reversible n-bit register with a parallel load line. The reversible Vedic multiplier and reversible divider modules have been written in Verilog HDL and then synthesized and simulated using Xilinx ISE 9.2i. The results show that this reversible Vedic multiplier has less delay and lower power consumption compared with an array multiplier.
Gazel Ser
2015-12-01
Full Text Available The purpose of this study was to evaluate the performance of the multiple imputation method, from the perspective of the general linear mixed model, when the missing observation structure is missing at random or missing completely at random. The application data of the study consisted of a total of 77 head of Norduz ram lambs at 7 months of age. After slaughtering, pH values measured at five different time points were determined as the dependent variable. In addition, hot carcass weight, muscle glycogen level and fasting duration were included as independent variables in the model. In the dependent variable without missing observations, two missing observation structures, Missing Completely at Random (MCAR) and Missing at Random (MAR), were created by deleting observations at certain rates (10% and 25%). After that, complete data sets were obtained from the data sets with missing observations using multiple imputation (MI). The results obtained by applying the general linear mixed model to the data sets completed using the MI method were compared to the results for the complete data. In the mixed models applied to the complete data and the MI data sets, the covariance structures were the same, and the parameter estimates and standard errors were rather close to those from the complete data. As a result, this study shows that reliable information can be obtained from the mixed model when MI is chosen as the imputation method for these missing observation structures.
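Once the m completed data sets are analyzed, results are typically combined by Rubin's rules; a minimal sketch of pooling one coefficient (the estimates and variances below are invented for illustration):

```python
import math

def rubins_rules(estimates, variances):
    """Pool one coefficient across m imputed datasets by Rubin's rules:
    pooled estimate = mean of the m estimates; total variance =
    mean within-imputation variance + (1 + 1/m) * between-imputation
    variance. Returns (pooled estimate, pooled standard error)."""
    m = len(estimates)
    q_bar = sum(estimates) / m
    w_bar = sum(variances) / m                               # within
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)   # between
    t = w_bar + (1 + 1 / m) * b
    return q_bar, math.sqrt(t)

# Hypothetical coefficient estimates and variances from m = 3 imputations:
est, se = rubins_rules([1.10, 1.05, 1.20], [0.04, 0.05, 0.045])
```

The (1 + 1/m) factor is what inflates the pooled standard error to reflect imputation uncertainty, which is why the MI standard errors can track the complete-data ones closely when the imputation model is good.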
Gottfredson, Nisha C; Sterba, Sonya K; Jackson, Kristina M
2017-01-01
Random coefficient-dependent (RCD) missingness is a non-ignorable mechanism through which missing data can arise in longitudinal designs. RCD, for which we cannot test, is a problematic form of missingness that occurs if subject-specific random effects correlate with propensity for missingness or dropout. Particularly when covariate missingness is a problem, investigators typically handle missing longitudinal data by using single-level multiple imputation procedures implemented with long-format data, which ignores within-person dependency entirely, or implemented with wide-format (i.e., multivariate) data, which ignores some aspects of within-person dependency. When either of these standard approaches to handling missing longitudinal data is used, RCD missingness leads to parameter bias and incorrect inference. We explain why multilevel multiple imputation (MMI) should alleviate bias induced by a RCD missing data mechanism under conditions that contribute to stronger determinacy of random coefficients. We evaluate our hypothesis with a simulation study. Three design factors are considered: intraclass correlation (ICC; ranging from .25 to .75), number of waves (ranging from 4 to 8), and percent of missing data (ranging from 20 to 50%). We find that MMI greatly outperforms the single-level wide-format (multivariate) method for imputation under a RCD mechanism. For the MMI analyses, bias was most alleviated when the ICC is high, there were more waves of data, and when there was less missing data. Practical recommendations for handling longitudinal missing data are suggested.
Multiply Surface-Functionalized Nanoporous Carbon for Vehicular Hydrogen Storage
Pfeifer, Peter [Univ. of Missouri, Columbia, MO (United States). Dept. of Physics; Gillespie, Andrew [Univ. of Missouri, Columbia, MO (United States). Dept. of Physics; Stalla, David [Univ. of Missouri, Columbia, MO (United States). Dept. of Physics; Dohnke, Elmar [Univ. of Missouri, Columbia, MO (United States). Dept. of Physics
2017-02-20
The purpose of the project “Multiply Surface-Functionalized Nanoporous Carbon for Vehicular Hydrogen Storage” is the development of materials that store hydrogen (H_{2}) by adsorption in quantities and at conditions that outperform current compressed-gas H_{2} storage systems for electric power generation from hydrogen fuel cells (HFCs). Prominent areas of interest for HFCs are light-duty vehicles (“hydrogen cars”) and replacement of batteries with HFC systems in a wide spectrum of applications, ranging from forklifts to unmanned aerial vehicles to portable power sources. State-of-the-art compressed H_{2} tanks operate at pressures between 350 and 700 bar at ambient temperature and store 3-4 percent of H_{2} by weight (wt%) and less than 25 grams of H_{2} per liter (g/L) of tank volume. Thus, the purpose of the project is to engineer adsorbents that achieve storage capacities better than compressed H_{2} at pressures less than 350 bar. Adsorption holds H_{2} molecules as a high-density film on the surface of a solid at low pressure, by virtue of attractive surface-gas interactions. At a given pressure, the stronger the binding of the molecules to the surface (the higher the binding energy), the higher the density of the adsorbed film. Thus, critical for high storage capacities are high surface areas, high binding energies, and low void fractions (high void fractions, such as in interstitial space between adsorbent particles, “waste” storage volume by holding hydrogen as non-adsorbed gas). Coexistence of high surface area and low void fraction makes the ideal adsorbent a nanoporous monolith, with pores wide enough to hold high-density hydrogen films, narrow enough to minimize storage as non-adsorbed gas, and thin walls between pores to minimize the volume occupied by solid instead of hydrogen. A monolith can be machined to fit into a rectangular tank (low pressure, conformable tank), cylindrical tank
Low Power Floating Point Computation Sharing Multiplier for Signal Processing Applications
Sivanantham S; Jagannadha Naidu K; Balamurugan S; Bhuvana Phaneendra D
2013-01-01
Design of low-power, higher-performance digital signal processing elements is a major requirement in ultra deep sub-micron technology. This paper presents an IEEE-754 standard compatible single precision Floating-point Computation SHaring Multiplier (FCSHM) scheme suitable for low-power and high-speed signal processing applications. The floating-point multiplier used at the filter taps effectively uses the computation re-use concept. Experimental results on a 10-tap programmable FIR filter...
A 260-340 GHz Dual Chip Frequency Tripler for THz Frequency Multiplier Chains
Maestrini, Alain; Tripon-Canseliet, Charlotte; Ward, John S.; Gill, John J.; Mehdi, Imran
2006-01-01
We designed and fabricated a fixed-tuned balanced frequency tripler working in the 260-340 GHz band to be the first stage of a x3x3x3 multiplier chain to 2.7 THz. The design of a dual-chip version of this multiplier, featuring an input splitter / output combiner as part of the input / output matching networks of both chips (with no degradation of the expected bandwidth and efficiency), will be presented.
Thingholm, Tine E; Jensen, Ole N; Robinson, Phillip J
2008-01-01
… spectrometric analysis, such as immobilized metal affinity chromatography or titanium dioxide, the coverage of the phosphoproteome of a given sample is limited. Here we report a simple and rapid strategy - SIMAC - for sequential separation of mono-phosphorylated peptides and multiply phosphorylated peptides from … and an optimized titanium dioxide chromatographic method. More than double the total number of identified phosphorylation sites was obtained with SIMAC, primarily from a three-fold increase in the recovery of multiply phosphorylated peptides.
Investigation of the Decelerating Field of an Electron Multiplier under Negative Ion Impact
Larsen, Elfinn; Kjeldgaard, K.
1973-01-01
The effect of the decelerating field of an electron multiplier towards negative ions was investigated under standard mass spectrometric conditions. Diminishing this decelerating field by changing the potential of the electron multiplier increased the overall sensitivity to negative ions … by a factor of 100. The secondary electron emission coefficients for the negative halogen ions relative to I− were measured on CuBe at kinetic energies of 1.0 and 1.5 keV.
A Review on 32×32 Bit Multiprecision Dynamic Voltage Scaling Multiplier with Operands Scheduler
Mrs. S. N. Rawat
2016-02-01
Full Text Available In this paper, we present a Multiprecision (MP) reconfigurable multiplier that incorporates variable precision, parallel processing (PP), razor-based dynamic voltage scaling (DVS), and dedicated MP operands scheduling to provide optimum performance for a variety of operating conditions. All of the building blocks of the proposed reconfigurable multiplier can either work as independent smaller-precision multipliers or work in parallel to perform higher-precision multiplications. Given the user’s requirements (e.g., throughput), a dynamic voltage/frequency scaling management unit configures the multiplier to operate at the proper precision and frequency. Adapting to the run-time workload of the targeted application, razor flip-flops together with a dithering voltage unit then configure the multiplier to achieve the lowest power consumption. The single-switch dithering voltage unit and razor flip-flops help to reduce the voltage safety margins and overhead typically associated with DVS to the lowest level. The large silicon area and power overhead typically associated with reconfigurability features are removed. Finally, the proposed novel MP multiplier can further benefit from an operands scheduler that rearranges the input data to determine the optimum voltage and frequency operating conditions for minimum power consumption. This low-power MP multiplier is fabricated in AMIS 0.35-μm technology. Experimental results show that the proposed MP design features a 28.2% and 15.8% reduction in circuit area and power consumption compared with a conventional fixed-width multiplier. When combining this MP design with error-tolerant razor-based DVS, PP, and the proposed novel operands scheduler, 77.7%–86.3% total power reduction is achieved with a total silicon area overhead as low as 11.1%. This paper successfully demonstrates that a MP architecture can allow more aggressive frequency/supply voltage scaling for improved power efficiency.
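The core idea of assembling a higher-precision product from smaller-precision multiplier blocks can be sketched as follows (a behavioral illustration only; the voltage scaling, razor flip-flops, and operand scheduling are not modeled):

```python
def mp_multiply(a, b, half=16):
    """Multiprecision sketch: a 32-bit product assembled from four 16-bit
    partial products, mirroring an MP array whose blocks can either
    combine for one wide multiplication or serve four independent
    16-bit multiplications."""
    mask = (1 << half) - 1
    aL, aH = a & mask, a >> half
    bL, bH = b & mask, b >> half
    # four smaller-precision partial products
    ll, lh, hl, hh = aL * bL, aL * bH, aH * bL, aH * bH
    # recombine with the appropriate shifts
    return (hh << (2 * half)) + ((lh + hl) << half) + ll

print(hex(mp_multiply(0xDEADBEEF, 0x12345678)))
```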
Solution of second order linear fuzzy difference equation by Lagrange's multiplier method
Sankar Prasad Mondal
2016-06-01
Full Text Available In this paper we execute the solution procedure for a second order linear fuzzy difference equation by Lagrange's multiplier method. In the crisp sense the difference equation is easy to solve, but when taken in the fuzzy sense it forms a system of difference equations which is not so easy to solve. With the help of Lagrange's multipliers we can solve it easily. The results are illustrated by two different numerical examples and followed by two applications.
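To illustrate why the fuzzy version becomes a system: for nonnegative coefficients, the lower and upper alpha-cut endpoints of the fuzzy solution each satisfy a crisp second-order recurrence (a simplified sketch, not the paper's Lagrange-multiplier procedure; the coefficients and fuzzy initial intervals below are invented):

```python
def fuzzy_recurrence(p, q, x0, x1, n):
    """For nonnegative p, q, the alpha-cut endpoints of a fuzzy solution of
    x_{n+2} = p*x_{n+1} + q*x_n evolve as a (here decoupled) system of
    crisp difference equations; x0 and x1 are (lower, upper) interval
    pairs, and the pair for x_n is returned."""
    (a_lo, a_hi), (b_lo, b_hi) = x0, x1
    for _ in range(n):
        a_lo, b_lo = b_lo, p * b_lo + q * a_lo   # lower-endpoint recurrence
        a_hi, b_hi = b_hi, p * b_hi + q * a_hi   # upper-endpoint recurrence
    return (a_lo, a_hi)

# Fibonacci-type example with interval initial values:
print(fuzzy_recurrence(1, 1, (0.0, 0.5), (1.0, 1.5), 10))  # → (55.0, 99.5)
```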
GATE REPLACEMENT TECHNIQUE FOR REDUCING LEAKAGE CURRENT IN WALLACE TREE MULTIPLIER
Naveen Raman
2013-01-01
Full Text Available Leakage power has become more significant in the power dissipation of today's CMOS circuits. This directly affects portable battery-operated devices. Multipliers are the main key for designing an energy-efficient processor, where the multiplier design decides the digital signal processor's efficiency. In this study a gate replacement technique is used to reduce the leakage power in a 4×4 Wallace tree multiplier architecture which has been designed by using one-bit full adders. This technique replaces the gate which is in the worst leakage state by a library gate. In this technique the actual output logic state is maintained in active mode. The main objective of our study is to calculate the leakage power in a 4×4 Wallace tree multiplier with the gate replacement technique applied, and to compare it with a 4×4 Wallace tree full adder multiplier. The proposed method reduces the leakage power of the 4×4 Wallace tree multiplier by 43%.
VLSI Implementation of Fault Tolerance Multiplier based on Reversible Logic Gate
Ahmad, Nabihah; Hakimi Mokhtar, Ahmad; Othman, Nurmiza binti; Fhong Soon, Chin; Rahman, Ab Al Hadi Ab
2017-08-01
The multiplier is one of the essential components in the digital world, such as in digital signal processing, microprocessors and quantum computing, and is widely used in arithmetic units. Due to the complexity of the multiplier, the tendency for errors is very high. This paper aimed to design a 2×2 bit Fault Tolerance Multiplier based on reversible logic gates with low power consumption and high performance. This design has been implemented using 90nm Complementary Metal Oxide Semiconductor (CMOS) technology in Synopsys Electronic Design Automation (EDA) Tools. The multiplier architecture is implemented using reversible logic gates. The fault tolerance multiplier uses a combination of three reversible logic gates, the Double Feynman gate (F2G), the New Fault Tolerance (NFT) gate and the Islam Gate (IG), with an area of 160 μm × 420.3 μm (about 0.067 mm²). This design achieved a low power consumption of 122.85 μW and a propagation delay of 16.99 ns. The proposed fault tolerance multiplier achieved low power consumption and high performance, with fault tolerance capabilities that make it suitable for modern computing applications.
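The defining property of the gates named above is that their truth tables are bijections (no information is lost, so in principle no energy need be dissipated). A quick software check using the Feynman, Double Feynman (F2G), and Toffoli gates as standard examples (the NFT and Islam gates follow the same pattern but are not reproduced here):

```python
from itertools import product

def feynman(a, b):                 # CNOT: (a, b) -> (a, a XOR b)
    return a, a ^ b

def double_feynman(a, b, c):       # F2G: (a, b, c) -> (a, a XOR b, a XOR c)
    return a, a ^ b, a ^ c

def toffoli(a, b, c):              # CCNOT: (a, b, c) -> (a, b, c XOR (a AND b))
    return a, b, c ^ (a & b)

def is_reversible(gate, n_inputs):
    """A gate is reversible iff its truth table is a bijection:
    every output pattern occurs exactly once."""
    outputs = {gate(*bits) for bits in product((0, 1), repeat=n_inputs)}
    return len(outputs) == 2 ** n_inputs

print(is_reversible(feynman, 2),
      is_reversible(double_feynman, 3),
      is_reversible(toffoli, 3))   # → True True True
```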
FPGA Implementation of 16-bit Multipliers based upon Vedic Mathematic Approach
Zulhelmi
2014-03-01
Full Text Available This paper proposes the design and implementation of a 16-bit multiplier based upon a Vedic mathematics approach, where the design has been targeted to a Xilinx Field Programmable Gate Array (FPGA) board, device XC5VLX30. The approach is different from a number of approaches that have been used to realize multipliers. It has been reported that previous algorithms such as Booth, Modified Booth, and Carry Save multipliers are only suitable for improving speed or decreasing area utilization; therefore, those algorithms are not appropriate for designing multipliers that are used for digital signal processing (DSP) applications. Moreover, they are not flexible to be implemented on FPGAs or on a single chip using application-specific integrated circuits (ASICs). The Vedic approach, on the other hand, can be used to design multipliers with optimum speed and less area utilization. In addition, it is reliable to be implemented on FPGAs or on a single chip. Behavioral and post-route simulation results prove that the proposed multiplier shows better performance in terms of speed compared to the other reported multipliers when being implemented on the FPGA. In terms of area utilization, better results are also obtained.
Imputation-based analysis of association studies: candidate regions and quantitative traits.
Bertrand Servin
2007-07-01
Full Text Available We introduce a new framework for the analysis of association studies, designed to allow untyped variants to be more effectively and directly tested for association with a phenotype. The idea is to combine knowledge on patterns of correlation among SNPs (e.g., from the International HapMap project or resequencing data) in a candidate region of interest with genotype data at tag SNPs collected on a phenotyped study sample, to estimate ("impute") unmeasured genotypes, and then assess association between the phenotype and these estimated genotypes. Compared with standard single-SNP tests, this approach results in increased power to detect association, even in cases in which the causal variant is typed, with the greatest gain occurring when multiple causal variants are present. It also provides more interpretable explanations for observed associations, including assessing, for each SNP, the strength of the evidence that it (rather than another correlated SNP) is causal. Although we focus on association studies with quantitative phenotype and a relatively restricted region (e.g., a candidate gene), the framework is applicable and computationally practical for whole genome association studies. Methods described here are implemented in a software package, Bim-Bam, available from the Stephens Lab website http://stephenslab.uchicago.edu/software.html.
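The core estimation idea can be sketched with a toy example (not the actual Bim-Bam model, which uses full multi-SNP haplotype patterns rather than a single tag SNP): reference haplotypes give the conditional distribution of the untyped allele given a tag allele, from which expected dosages are imputed for study individuals typed only at the tag. All data below are invented.

```python
# Toy genotype imputation: learn P(untyped allele | tag allele) from
# reference haplotypes, then convert tag genotypes into expected
# ("imputed") dosages at the untyped SNP, which can be tested for
# association in place of measured genotypes.
ref = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 1)]  # (tag, untyped) haplotypes

def p_untyped_given_tag(ref, tag_allele):
    match = [u for t, u in ref if t == tag_allele]
    return sum(match) / len(match)

p = {a: p_untyped_given_tag(ref, a) for a in (0, 1)}

def imputed_dosage(tag_genotype):
    """Diploid tag genotype -> expected untyped dosage (sum over haplotypes)."""
    a1, a2 = tag_genotype
    return p[a1] + p[a2]

assert p[1] == 1.0                                   # allele 1 always co-occurs here
assert abs(imputed_dosage((0, 1)) - (1/3 + 1.0)) < 1e-9
```

The real framework averages over phase and haplotype uncertainty rather than conditioning on one tag allele at a time.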
Meta-analysis and imputation refines the association of 15q25 with smoking quantity
Liu, Jason Z.; Tozzi, Federica; Waterworth, Dawn M.; Pillai, Sreekumar G.; Muglia, Pierandrea; Middleton, Lefkos; Berrettini, Wade; Knouff, Christopher W.; Yuan, Xin; Waeber, Gérard; Vollenweider, Peter; Preisig, Martin; Wareham, Nicholas J; Zhao, Jing Hua; Loos, Ruth J.F.; Barroso, Inês; Khaw, Kay-Tee; Grundy, Scott; Barter, Philip; Mahley, Robert; Kesaniemi, Antero; McPherson, Ruth; Vincent, John B.; Strauss, John; Kennedy, James L.; Farmer, Anne; McGuffin, Peter; Day, Richard; Matthews, Keith; Bakke, Per; Gulsvik, Amund; Lucae, Susanne; Ising, Marcus; Brueckl, Tanja; Horstmann, Sonja; Wichmann, H.-Erich; Rawal, Rajesh; Dahmen, Norbert; Lamina, Claudia; Polasek, Ozren; Zgaga, Lina; Huffman, Jennifer; Campbell, Susan; Kooner, Jaspal; Chambers, John C; Burnett, Mary Susan; Devaney, Joseph M.; Pichard, Augusto D.; Kent, Kenneth M.; Satler, Lowell; Lindsay, Joseph M.; Waksman, Ron; Epstein, Stephen; Wilson, James F.; Wild, Sarah H.; Campbell, Harry; Vitart, Veronique; Reilly, Muredach P.; Li, Mingyao; Qu, Liming; Wilensky, Robert; Matthai, William; Hakonarson, Hakon H.; Rader, Daniel J.; Franke, Andre; Wittig, Michael; Schäfer, Arne; Uda, Manuela; Terracciano, Antonio; Xiao, Xiangjun; Busonero, Fabio; Scheet, Paul; Schlessinger, David; St Clair, David; Rujescu, Dan; Abecasis, Gonçalo R.; Grabe, Hans Jörgen; Teumer, Alexander; Völzke, Henry; Petersmann, Astrid; John, Ulrich; Rudan, Igor; Hayward, Caroline; Wright, Alan F.; Kolcic, Ivana; Wright, Benjamin J; Thompson, John R; Balmforth, Anthony J.; Hall, Alistair S.; Samani, Nilesh J.; Anderson, Carl A.; Ahmad, Tariq; Mathew, Christopher G.; Parkes, Miles; Satsangi, Jack; Caulfield, Mark; Munroe, Patricia B.; Farrall, Martin; Dominiczak, Anna; Worthington, Jane; Thomson, Wendy; Eyre, Steve; Barton, Anne; Mooser, Vincent; Francks, Clyde; Marchini, Jonathan
2013-01-01
Smoking is a leading global cause of disease and mortality. We performed a genome-wide meta-analytic association study of smoking-related behavioral traits in a total sample of 41,150 individuals drawn from 20 disease, population, and control cohorts. Our analysis confirmed an effect on smoking quantity (SQ) at a locus on 15q25 (P=9.45e-19) that includes three genes encoding neuronal nicotinic acetylcholine receptor subunits (CHRNA5, CHRNA3, CHRNB4). We used data from the 1000 Genomes project to investigate the region using imputation, which allowed analysis of virtually all common variants in the region and offered a five-fold increase in coverage over the HapMap. This increased the spectrum of potentially causal single nucleotide polymorphisms (SNPs), which included a novel SNP that showed the highest significance, rs55853698, located within the promoter region of CHRNA5. Conditional analysis also identified a secondary locus (rs6495308) in CHRNA3. PMID:20418889
The search for stable prognostic models in multiple imputed data sets
de Vet Henrica CW
2010-09-01
Full Text Available Abstract Background In prognostic studies, model instability and missing data can be troubling factors. Proposed methods for handling these situations are bootstrapping (B) and multiple imputation (MI). The authors examined the influence of these methods on model composition. Methods Models were constructed using a cohort of 587 patients consulting between January 2001 and January 2003 with a shoulder problem in general practice in the Netherlands (the Dutch Shoulder Study). Outcome measures were persistent shoulder disability and persistent shoulder pain. Potential predictors included socio-demographic variables, characteristics of the pain problem, physical activity and psychosocial factors. Model composition and performance (calibration and discrimination) were assessed for models using a complete case analysis, MI, bootstrapping, or both MI and bootstrapping. Results Results showed that model composition varied between models as a result of how missing data were handled and that bootstrapping provided additional information on the stability of the selected prognostic model. Conclusion In prognostic modeling, missing data need to be handled by MI, and bootstrap model selection is advised in order to provide information on model stability.
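The bootstrap stability check described in the Results can be sketched as follows. The data, the correlation-threshold selection rule, and all cutoffs here are invented for illustration and are far simpler than the prognostic modelling actually used in the study.

```python
# Hypothetical sketch of bootstrap model-stability assessment: resample the
# data with replacement, rerun a (deliberately simple) selection rule, and
# report how often each predictor is selected.
import random

random.seed(1)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]          # true predictor
x2 = [random.gauss(0, 1) for _ in range(n)]          # noise predictor
y  = [a + random.gauss(0, 1) for a in x1]

def pearson(xs, ys):
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxy = sum((x - mx) * (v - my) for x, v in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((v - my) ** 2 for v in ys) ** 0.5
    return sxy / (sx * sy)

def select(rows):
    """Toy selection rule: keep predictors with |r| above a 0.3 cutoff."""
    ys = [r[0] for r in rows]
    return [name for j, name in ((1, "x1"), (2, "x2"))
            if abs(pearson([r[j] for r in rows], ys)) > 0.3]

rows = list(zip(y, x1, x2))
B = 200
counts = {"x1": 0, "x2": 0}
for _ in range(B):
    boot = random.choices(rows, k=n)                 # resample with replacement
    for name in select(boot):
        counts[name] += 1

freq = {k: v / B for k, v in counts.items()}         # bootstrap inclusion frequencies
assert freq["x1"] > freq["x2"]                       # the real predictor is more stable
```

High inclusion frequency across bootstrap replicates is the kind of stability evidence the abstract argues a single model fit cannot provide.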
Mercer, Theresa G; Frostick, Lynne E; Walmsley, Anthony D
2011-10-15
This paper presents a statistical technique that can be applied to environmental chemistry data where missing values and limit-of-detection levels prevent the application of standard statistics. A working example is taken from an environmental leaching study that was set up to determine if there were significant differences in levels of leached arsenic (As), chromium (Cr) and copper (Cu) between lysimeters containing preservative-treated wood waste and those containing untreated wood. Fourteen lysimeters were set up and left in natural conditions for 21 weeks. The resultant leachate was analysed by ICP-OES to determine the As, Cr and Cu concentrations. However, due to the variation inherent in each lysimeter combined with the limits of detection offered by ICP-OES, the collected quantitative data were somewhat incomplete. Initial data analysis was hampered by the number of 'missing values' in the data. To recover the dataset, the statistical tool of Statistical Multiple Imputation (SMI) was applied, and the data were re-analysed successfully. It was demonstrated that using SMI did not affect the variance in the data, but facilitated analysis of the complete dataset.
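A minimal sketch of the general idea, assuming a simple uniform-below-LOD imputation model and Rubin's rules for pooling; the study's actual SMI procedure is richer, and all values below are invented.

```python
# Multiple imputation for left-censored ("<LOD") values: draw each censored
# value from Uniform(0, LOD), analyse each completed data set, and pool the
# estimates with Rubin's rules.
import random
from statistics import mean, variance

random.seed(7)
LOD = 0.5
observed = [0.8, 1.2, None, 0.9, None, 1.5, 0.7]    # None marks "<LOD"

M = 20                                              # number of imputations
estimates, within = [], []
for _ in range(M):
    completed = [v if v is not None else random.uniform(0.0, LOD)
                 for v in observed]
    estimates.append(mean(completed))
    within.append(variance(completed) / len(completed))  # variance of the mean

q_bar = mean(estimates)                             # pooled point estimate
b = variance(estimates)                             # between-imputation variance
t = mean(within) + (1 + 1 / M) * b                  # Rubin's total variance

assert t > mean(within)   # MI inflates the variance to reflect imputation uncertainty
```

The between-imputation term is exactly what single substitution (e.g., LOD/2) throws away, which is why substitution tends to understate uncertainty.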
Analysis of Case-Control Association Studies: SNPs, Imputation and Haplotypes
Chatterjee, Nilanjan
2009-11-01
Although prospective logistic regression is the standard method of analysis for case-control data, it has been recently noted that in genetic epidemiologic studies one can use the "retrospective" likelihood to gain major power by incorporating various population genetics model assumptions such as Hardy-Weinberg Equilibrium (HWE), gene-gene and gene-environment independence. In this article we review these modern methods and contrast them with the more classical approaches through two types of applications: (i) association tests for typed and untyped single nucleotide polymorphisms (SNPs) and (ii) estimation of haplotype effects and haplotype-environment interactions in the presence of haplotype-phase ambiguity. We provide novel insights into existing methods by construction of various score tests and pseudo-likelihoods. In addition, we describe a novel two-stage method for analysis of untyped SNPs that can use any flexible external algorithm for genotype imputation followed by a powerful association test based on the retrospective likelihood. We illustrate applications of the methods using simulated and real data. © Institute of Mathematical Statistics, 2009.
Machine Learning Data Imputation and Classification in a Multicohort Hypertension Clinical Study.
Seffens, William; Evans, Chad; Taylor, Herman
2015-01-01
Health-care initiatives are pushing the development and utilization of clinical data for medical discovery and translational research studies. Machine learning tools implemented for Big Data have been applied to detect patterns in complex diseases. This study focuses on hypertension and examines phenotype data across a major clinical study called Minority Health Genomics and Translational Research Repository Database composed of self-reported African American (AA) participants combined with related cohorts. Prior genome-wide association studies for hypertension in AAs presumed that an increase of disease burden in susceptible populations is due to rare variants. But genomic analysis of hypertension, even those designed to focus on rare variants, has yielded marginal genome-wide results over many studies. Machine learning and other nonparametric statistical methods have recently been shown to uncover relationships in complex phenotypes, genotypes, and clinical data. We trained neural networks with phenotype data for missing-data imputation to increase the usable size of a clinical data set. Validity was established by showing performance effects using the expanded data set for the association of phenotype variables with case/control status of patients. Data mining classification tools were used to generate association rules.
Multiple imputation to evaluate the impact of an assay change in national surveys.
Sternberg, Maya
2017-07-30
National health surveys, such as the National Health and Nutrition Examination Survey, are used to monitor trends of nutritional biomarkers. These surveys try to maintain the same biomarker assay over time, but there are a variety of reasons why the assay may change. In these cases, it is important to evaluate the potential impact of a change so that any observed fluctuations in concentrations over time are not confounded by changes in the assay. To this end, a subset of stored specimens previously analyzed with the old assay is retested using the new assay. These paired data are used to estimate an adjustment equation, which is then used to 'adjust' all the old assay results and convert them into 'equivalent' units of the new assay. In this paper, we present a new way of approaching this problem using modern statistical methods designed for missing data. Using simulations, we compare the proposed multiple imputation approach with the adjustment equation approach currently in use. We also compare these approaches using real National Health and Nutrition Examination Survey data for 25-hydroxyvitamin D. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
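The adjustment-equation approach that the proposed MI method is compared against can be sketched with made-up paired data; the MI approach would instead draw multiple plausible new-assay values with residual noise, so that the uncertainty of the conversion is propagated into downstream trend estimates.

```python
# Toy sketch of assay bridging (all values invented): a subset of stored
# specimens measured on both assays yields a least-squares adjustment
# equation new = a + b * old, used to convert old-only results.
old_sub = [20.0, 30.0, 40.0, 50.0]       # paired subset, old assay
new_sub = [24.5, 34.0, 45.5, 54.0]       # same specimens, new assay

n = len(old_sub)
mx, my = sum(old_sub) / n, sum(new_sub) / n
b = sum((x - mx) * (y - my) for x, y in zip(old_sub, new_sub)) \
    / sum((x - mx) ** 2 for x in old_sub)
a = my - b * mx                           # least-squares adjustment equation

adjusted = [a + b * x for x in [25.0, 45.0]]   # convert old-only results
assert abs(b - 1.0) < 1e-9 and abs(a - 4.5) < 1e-9   # for these invented data
```

Treating the never-retested old results as missing new-assay values, as the paper proposes, replaces this single deterministic conversion with repeated stochastic draws.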
Imputation-based meta-analysis of severe malaria in three African populations.
Gavin Band
2013-05-01
Full Text Available Combining data from genome-wide association studies (GWAS) conducted at different locations, using genotype imputation and fixed-effects meta-analysis, has been a powerful approach for dissecting complex disease genetics in populations of European ancestry. Here we investigate the feasibility of applying the same approach in Africa, where genetic diversity, both within and between populations, is far more extensive. We analyse genome-wide data from approximately 5,000 individuals with severe malaria and 7,000 population controls from three different locations in Africa. Our results show that the standard approach is well powered to detect known malaria susceptibility loci when sample sizes are large, and that modern methods for association analysis can control the potential confounding effects of population structure. We show that the pattern of association around the haemoglobin S allele differs substantially across populations due to differences in haplotype structure. Motivated by these observations we consider new approaches to association analysis that might prove valuable for multicentre GWAS in Africa: we relax the assumptions of SNP-based fixed effect analysis; we apply Bayesian approaches to allow for heterogeneity in the effect of an allele on risk across studies; and we introduce a region-based test to allow for heterogeneity in the location of causal alleles.
Imputation-Based Meta-Analysis of Severe Malaria in Three African Populations
Band, Gavin; Le, Quang Si; Jostins, Luke; Pirinen, Matti; Kivinen, Katja; Jallow, Muminatou; Sisay-Joof, Fatoumatta; Bojang, Kalifa; Pinder, Margaret; Sirugo, Giorgio; Conway, David J.; Nyirongo, Vysaul; Kachala, David; Molyneux, Malcolm; Taylor, Terrie; Ndila, Carolyne; Peshu, Norbert; Marsh, Kevin; Williams, Thomas N.; Alcock, Daniel; Andrews, Robert; Edkins, Sarah; Gray, Emma; Hubbart, Christina; Jeffreys, Anna; Rowlands, Kate; Schuldt, Kathrin; Clark, Taane G.; Small, Kerrin S.; Teo, Yik Ying; Kwiatkowski, Dominic P.; Rockett, Kirk A.; Barrett, Jeffrey C.; Spencer, Chris C. A.
2013-01-01
Combining data from genome-wide association studies (GWAS) conducted at different locations, using genotype imputation and fixed-effects meta-analysis, has been a powerful approach for dissecting complex disease genetics in populations of European ancestry. Here we investigate the feasibility of applying the same approach in Africa, where genetic diversity, both within and between populations, is far more extensive. We analyse genome-wide data from approximately 5,000 individuals with severe malaria and 7,000 population controls from three different locations in Africa. Our results show that the standard approach is well powered to detect known malaria susceptibility loci when sample sizes are large, and that modern methods for association analysis can control the potential confounding effects of population structure. We show that the pattern of association around the haemoglobin S allele differs substantially across populations due to differences in haplotype structure. Motivated by these observations we consider new approaches to association analysis that might prove valuable for multicentre GWAS in Africa: we relax the assumptions of SNP-based fixed effect analysis; we apply Bayesian approaches to allow for heterogeneity in the effect of an allele on risk across studies; and we introduce a region-based test to allow for heterogeneity in the location of causal alleles. PMID:23717212
Iotchkova, Valentina; Huang, Jie; Morris, John A; Jain, Deepti; Barbieri, Caterina; Walter, Klaudia; Min, Josine L; Chen, Lu; Astle, William; Cocca, Massimilian; Deelen, Patrick; Elding, Heather; Farmaki, Aliki-Eleni; Franklin, Christopher S; Franberg, Mattias; Gaunt, Tom R; Hofman, Albert; Jiang, Tao; Kleber, Marcus E; Lachance, Genevieve; Luan, Jian'an; Malerba, Giovanni; Matchan, Angela; Mead, Daniel; Memari, Yasin; Ntalla, Ioanna; Panoutsopoulou, Kalliope; Pazoki, Raha; Perry, John R B; Rivadeneira, Fernando; Sabater-Lleal, Maria; Sennblad, Bengt; Shin, So-Youn; Southam, Lorraine; Traglia, Michela; van Dijk, Freerk; van Leeuwen, Elisabeth M; Zaza, Gianluigi; Zhang, Weihua; Amin, Najaf; Butterworth, Adam; Chambers, John C; Dedoussis, George; Dehghan, Abbas; Franco, Oscar H; Franke, Lude; Frontini, Mattia; Gambaro, Giovanni; Gasparini, Paolo; Hamsten, Anders; Issacs, Aaron; Kooner, Jaspal S; Kooperberg, Charles; Langenberg, Claudia; Marz, Winfried; Scott, Robert A; Swertz, Morris A; Toniolo, Daniela; Uitterlinden, Andre G; van Duijn, Cornelia M; Watkins, Hugh; Zeggini, Eleftheria; Maurano, Mathew T; Timpson, Nicholas J; Reiner, Alexander P; Auer, Paul L; Soranzo, Nicole
2016-11-01
Large-scale whole-genome sequence data sets offer novel opportunities to identify genetic variation underlying human traits. Here we apply genotype imputation based on whole-genome sequence data from the UK10K and 1000 Genomes Project into 35,981 study participants of European ancestry, followed by association analysis with 20 quantitative cardiometabolic and hematological traits. We describe 17 new associations, including 6 rare (minor allele frequency (MAF) < 1%) or low-frequency (1% < MAF < 5%) variants with platelet count (PLT), red blood cell indices (MCH and MCV) and HDL cholesterol. Applying fine-mapping analysis to 233 known and new loci associated with the 20 traits, we resolve the associations of 59 loci to credible sets of 20 or fewer variants and describe trait enrichments within regions of predicted regulatory function. These findings improve understanding of the allelic architecture of risk factors for cardiometabolic and hematological diseases and provide additional functional insights with the identification of potentially novel biological targets.
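The fine-mapping step above, resolving loci to credible sets of 20 or fewer variants, can be illustrated with invented per-variant posterior probabilities: rank variants by posterior and keep the smallest set reaching the desired coverage.

```python
# Sketch of a fine-mapping credible set: given each variant's posterior
# probability of being the causal one (invented numbers), take the smallest
# set whose cumulative posterior reaches the target coverage.
posteriors = {"rs1": 0.55, "rs2": 0.30, "rs3": 0.09, "rs4": 0.04, "rs5": 0.02}

def credible_set(post, coverage=0.95):
    total, chosen = 0.0, []
    for snp, p in sorted(post.items(), key=lambda kv: -kv[1]):
        chosen.append(snp)
        total += p
        if total >= coverage:
            break
    return chosen

# rs1 + rs2 + rs3 = 0.94 < 0.95, so rs4 is needed as well
assert credible_set(posteriors) == ["rs1", "rs2", "rs3", "rs4"]
```

How the posteriors themselves are computed (e.g., from approximate Bayes factors under an assumed prior) is the substantive modelling step and is not shown here.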
Rutledge John
2011-05-01
Full Text Available Abstract Background Standard mean imputation for missing values in the Western Ontario and McMaster (WOMAC) Osteoarthritis Index limits the use of collected data and may lead to bias. Probability model-based imputation methods overcome such limitations but were never before applied to the WOMAC. In this study, we compare imputation results for the expectation maximization (EM) method and the mean imputation method for the WOMAC in a cohort of total hip replacement patients. Methods WOMAC data on a consecutive cohort of 2062 patients scheduled for surgery were analyzed. Rates of missing values in each of the WOMAC items from this large cohort were used to create missing patterns in the subset of patients with complete data. EM and the WOMAC's method of imputation were then applied to fill the missing values. Summary score statistics for both methods are then described through box-plots and contrasted with the complete case (CC) analysis and the true score (TS). This process was repeated using a smaller sample size of 200 randomly drawn patients with a higher missing rate (5 times the rates of missing values observed in the 2062 patients, capped at 45%). Results The rate of missing values per item ranged from 2.9% to 14.5%, and 1339 patients had complete data. The probability model-based EM method imputed a score for all subjects while the WOMAC's imputation method did not. Mean subscale scores were very similar for both imputation methods and were similar to the true score; however, the EM method results were more consistent with the TS after simulation. This difference became more pronounced as the number of items in a subscale increased and the sample size decreased. Conclusions The EM method provides a better alternative to the WOMAC imputation method. The EM method is more accurate and imputes data to create a complete data set. These features are very valuable for patient-reported outcomes research in which resources are limited and the WOMAC score is used in a multivariate
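For context, here is a sketch of a WOMAC-style person-mean imputation rule (score the subscale only when at most half of the items are missing); the exact missingness thresholds vary across WOMAC user guide versions, so treat the cutoff as illustrative. This is the method the abstract contrasts with probability-model-based EM imputation.

```python
# WOMAC-style subscale scoring with person-mean substitution: if too many
# items are missing the subscale is not scored at all, which is why the
# abstract notes this method does not impute a score for every subject.
def womac_subscale(items):
    """items: list of item scores (0-4), None for missing."""
    answered = [v for v in items if v is not None]
    if len(answered) < len(items) / 2.0:   # too many missing: no score
        return None
    person_mean = sum(answered) / len(answered)
    filled = [v if v is not None else person_mean for v in items]
    return sum(filled)

assert womac_subscale([2, 3, None, 1, 2]) == 10.0   # missing item gets the mean 2.0
assert womac_subscale([None, None, None, 1, 2]) is None
```

EM instead models the joint distribution of the items, so it can produce a score even in the heavily missing case where this rule gives up.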
Implementation of High Performance Fir Filter Using Low Power Multiplier and Adder
Sweety Kashyap,
2014-01-01
Full Text Available The ever-increasing growth of laptop and portable systems in cellular networks has intensified research efforts in low-power microelectronics. Nowadays, many portable applications require lower power and higher throughput than ever before, so low-power system design has become a significant performance goal. Such designs face multiple constraints: high speed, high throughput, and, at the same time, power consumption that is as low as possible. The Finite Impulse Response (FIR) filter is an important component in designing an efficient digital signal processing system. In this paper, a FIR filter is constructed that is efficient not only in terms of power and speed but also in terms of delay. Considering the elementary structure of an FIR filter, it is a combination of multipliers and delays, which in turn are combinations of adders. This paper presents an efficient implementation and a performance evaluation of multipliers and adders to minimize energy consumption during multiplication and addition, comparing different types of multipliers and adders. Based on the power comparison results, a low-power adder and multiplier are chosen for the implementation of a high-performance FIR filter.
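The elementary FIR structure mentioned above, multipliers feeding a delay line and adder tree, computes y[n] = Σ_k b[k]·x[n−k]; a direct-form software sketch:

```python
# Direct-form FIR filter: one multiplier per tap, delayed inputs summed.
def fir(b, x):
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(b):
            if n - k >= 0:               # the delay line supplies x[n-k]
                acc += coeff * x[n - k]  # one multiply-accumulate per tap
        y.append(acc)
    return y

# a 2-tap averager: each output is the mean of the current and previous sample
out = fir([0.5, 0.5], [2, 4, 6])
assert all(abs(a - e) < 1e-9 for a, e in zip(out, [1.0, 3.0, 5.0]))
```

In hardware every `coeff * x[n-k]` is a physical multiplier and the accumulation an adder chain, which is why the paper's power comparison of multipliers and adders translates directly into filter power.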
Laserspray ionization imaging of multiply charged ions using a commercial vacuum MALDI ion source.
Inutan, Ellen D; Wager-Miller, James; Mackie, Ken; Trimpin, Sarah
2012-11-06
This is the first report of imaging mass spectrometry (MS) from multiply charged ions at vacuum. Laserspray ionization (LSI) was recently extended to applications at vacuum producing electrospray ionization-like multiply charged ions directly from surfaces using a commercial intermediate pressure matrix-assisted laser desorption/ionization ion mobility spectrometry (IMS) MS instrument. Here, we developed a strategy to image multiply charged peptide ions. This is achieved by the use of 2-nitrophloroglucinol as matrix for spray deposition onto the tissue section and implementation of "soft" acquisition conditions including lower laser power and ion accelerating voltages similar to electrospray ionization-like conditions. Sufficient ion abundance is generated by the vacuum LSI method to employ IMS separation in imaging multiply charged ions obtained on a commercial mass spectrometer ion source without physical instrument modifications using the laser in the commercially available reflection geometry alignment. IMS gas-phase separation reduces the complexity of the ion signal from the tissue, especially for multiply charged relative to abundant singly charged ions from tissue lipids. We show examples of LSI tissue imaging from charge state +2 of three endogenous peptides consisting of between 1 and 16 amino acid residues from the acetylated N-terminal end of myelin basic protein: mass-to-charge (m/z) 795.81 (+2) molecular weight (MW) 1589.6, m/z 831.35 (+2) MW 1660.7, and m/z 917.40 (+2) MW 1832.8.
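The charge-state arithmetic behind the reported values: for a protonated peptide observed at m/z with charge z, the neutral mass is approximately z·(m/z) − z·1.00728 Da (the proton mass). This reproduces the MW figures quoted in the abstract.

```python
# Convert an observed m/z and charge state to neutral molecular weight,
# assuming protonation (as for the +2 peptide ions in the abstract).
PROTON = 1.00728  # proton mass in Da

def neutral_mass(mz, z):
    return z * mz - z * PROTON

# the three +2 ions reported in the abstract
assert round(neutral_mass(795.81, 2), 1) == 1589.6
assert round(neutral_mass(831.35, 2), 1) == 1660.7
assert round(neutral_mass(917.40, 2), 1) == 1832.8
```

Multiple charging is what keeps these ~1.6-1.8 kDa peptides within a modest m/z window, one practical benefit of the electrospray-like ions LSI produces.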
FPGA Implementation of Complex Multiplier Using Urdhva Tiryakbham Sutra of Vedic Mathematics
Rupa A. Tomaskar
2014-05-01
Full Text Available In this work, a VHDL implementation of a complex number multiplier using ancient Vedic mathematics is presented, and the FPGA implementation of a 4-bit complex multiplier using a Vedic sutra is done on a SPARTAN 3 FPGA kit. The idea for designing the multiplier unit is adopted from the ancient Indian mathematics "Vedas". The Urdhva Tiryakbhyam sutra (method) was selected for implementation since it is applicable to all cases of multiplication. The feature of this method is that any multi-bit multiplication can be reduced down to single-bit multiplication and addition. On account of these formulas, the partial products and sums are generated in one step, which reduces the carry propagation from LSB to MSB. The implementation of the Vedic mathematics and its application to the complex multiplier ensure substantial reduction of propagation delay. The simulation results for 4-bit, 8-bit, 16-bit and 32-bit complex number multiplication using the Vedic sutra are illustrated. The results show that the Urdhva Tiryakbhyam sutra with a smaller number of bits may be used to implement multipliers efficiently in signal processing algorithms.
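The "vertically and crosswise" idea can be sketched in software: every crosswise pair of bits feeds one partial-product column, all columns are formed in one step, and a single carry ripple then produces the result. This models the arithmetic only, not the VHDL datapath.

```python
# Urdhva Tiryakbhyam ("vertically and crosswise") for two equal-length
# bit vectors: bit pair (i, j) contributes to column i + j.
def urdhva_multiply(a_bits, b_bits):
    """a_bits, b_bits: equal-length LSB-first 0/1 lists; returns LSB-first product."""
    n = len(a_bits)
    cols = [0] * (2 * n - 1)
    for i in range(n):                 # crosswise: all i+j pairs in one pass
        for j in range(n):
            cols[i + j] += a_bits[i] * b_bits[j]
    out, carry = [], 0
    for c in cols:                     # ripple the column sums once
        s = c + carry
        out.append(s & 1)
        carry = s >> 1
    while carry:
        out.append(carry & 1)
        carry >>= 1
    return out

def to_int(bits):
    return sum(b << k for k, b in enumerate(bits))

# 13 * 11 = 143 with 4-bit operands (LSB first)
assert to_int(urdhva_multiply([1, 0, 1, 1], [1, 1, 0, 1])) == 143
```

In hardware the column sums are formed concurrently, which is the source of the reduced carry-propagation delay claimed in the abstract.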
Shades of gray: releasing the cognitive binds that blind us
2016-01-01
Approved for public release; distribution is unlimited The United States Intelligence Community is tasked with providing the intelligence necessary to protect the homeland and U.S. interests abroad. Technology acts as a force multiplier for intelligence analysts, but that advantage also comes with substantial risk. The risk lies in our reliance on technology and processes, and the tradecraft of intelligence analysis and critical thinking appears to be losing relevance. During the intellige...
Incomplete Big Data Imputation Algorithm Based on Deep Learning
卜范玉; 陈志奎; 张清辰
2014-01-01
This paper presents an imputation algorithm based on deep learning for incomplete big data. The algorithm first builds an imputation auto-encoder on the basis of the standard auto-encoder. On this foundation, a deep imputation network model is constructed to analyze the deep features of incomplete big data, with network parameters calculated by layer-wise training and the back-propagation algorithm. Finally, the deep imputation network is used to restore the incomplete big data by filling in the missing values. Experimental results show that the proposed algorithm can effectively improve the imputation accuracy for incomplete big data.
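A minimal numpy sketch of the imputation auto-encoder idea (one hidden layer rather than a deep stack; layer sizes, learning rate and iteration count are arbitrary choices, not the paper's): initialise missing cells with column means, train with the reconstruction loss restricted to observed cells, then fill missing cells from the reconstruction.

```python
# One-hidden-layer auto-encoder trained with a masked reconstruction loss;
# missing entries are imputed from the trained network's reconstruction.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=100)     # a learnable dependency
mask = rng.random(X.shape) < 0.1                   # ~10% missing
X_obs = np.where(mask, np.nan, X)

col_means = np.nanmean(X_obs, axis=0)
Xf = np.where(mask, col_means, X_obs)              # mean-initialised fill
obs = ~mask

W1 = rng.normal(0, 0.1, (6, 4)); b1 = np.zeros(4)  # encoder
W2 = rng.normal(0, 0.1, (4, 6)); b2 = np.zeros(6)  # decoder
lr = 0.01
for _ in range(500):
    H = np.tanh(Xf @ W1 + b1)
    R = H @ W2 + b2                                # reconstruction
    G = np.where(obs, R - Xf, 0.0) / len(Xf)       # gradient of masked MSE wrt R
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)                 # backprop through tanh
    gW1 = Xf.T @ GH; gb1 = GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

R = np.tanh(Xf @ W1 + b1) @ W2 + b2
X_imp = np.where(mask, R, X_obs)                   # impute missing cells only
assert np.isfinite(X_imp).all()
assert np.allclose(X_imp[obs], X[obs])             # observed cells are untouched
```

The paper's layer-wise pre-training of a deep stack is replaced here by plain end-to-end training of a single layer, which is sufficient to show the masked-loss mechanism.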
Khor, S-S; Yang, W; Kawashima, M; Kamitsuji, S; Zheng, X; Nishida, N; Sawai, H; Toyoda, H; Miyagawa, T; Honda, M; Kamatani, N; Tokunaga, K
2015-12-01
Statistical imputation of classical human leukocyte antigen (HLA) alleles is becoming an indispensable tool for fine-mapping of disease association signals from case-control genome-wide association studies. However, most currently available HLA imputation tools are based on European reference populations and are not suitable for direct application to non-European populations. Among the HLA imputation tools, the HIBAG R package is a flexible HLA imputation tool that is equipped with a wide range of population-based classifiers; moreover, HIBAG enables individual researchers to build custom classifiers. Here, two data sets of different sample sizes, each comprising data from healthy Japanese individuals, were used to build custom classifiers. HLA imputation accuracy at five HLA loci (HLA-A, HLA-B, HLA-DRB1, HLA-DQB1 and HLA-DPB1) increased from the 82.5-98.8% obtained with the original HIBAG references to 95.2-99.5% with our custom classifiers. A call threshold (CT) of 0.4 is recommended for our Japanese classifiers; in contrast, the HIBAG references recommend a CT of 0.5. Finally, our classifiers could be used to identify the risk haplotypes for Japanese narcolepsy with cataplexy, HLA-DRB1*15:01 and HLA-DQB1*06:02, with 100% and 99.7% accuracy, respectively; therefore, these classifiers can be used to supplement the current lack of HLA genotyping data in widely available genome-wide association study data sets.
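The call threshold (CT) rule discussed above can be stated in a few lines; the allele names and posterior values below are invented for illustration.

```python
# Call-threshold rule for imputed HLA alleles: accept the best-supported
# allele call only if its posterior probability reaches the CT
# (0.4 recommended for the Japanese classifiers, 0.5 for HIBAG defaults).
def call_allele(posteriors, ct=0.4):
    """posteriors: dict mapping allele name -> posterior probability."""
    best = max(posteriors, key=posteriors.get)
    return best if posteriors[best] >= ct else None

calls = {"HLA-DRB1*15:01": 0.45, "HLA-DRB1*04:05": 0.35, "HLA-DRB1*09:01": 0.20}
assert call_allele(calls, ct=0.4) == "HLA-DRB1*15:01"
assert call_allele(calls, ct=0.5) is None   # same data, stricter threshold: no call
```

A lower CT trades a higher call rate against a higher risk of wrong calls, which is why the recommended value is reference-panel specific.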
Shah, Jasmit S; Rai, Shesh N; DeFilippis, Andrew P; Hill, Bradford G; Bhatnagar, Aruni; Brock, Guy N
2017-02-20
High throughput metabolomics makes it possible to measure the relative abundances of numerous metabolites in biological samples, which is useful to many areas of biomedical research. However, missing values (MVs) in metabolomics datasets are common and can arise due to both technical and biological reasons. Typically, such MVs are substituted by a minimum value, which may lead to different results in downstream analyses. Here we present a modified version of the K-nearest neighbor (KNN) approach which accounts for truncation at the minimum value, i.e., KNN truncation (KNN-TN). We compare imputation results based on KNN-TN with results from other KNN approaches such as KNN based on correlation (KNN-CR) and KNN based on Euclidean distance (KNN-EU). Our approach assumes that the data follow a truncated normal distribution with the truncation point at the detection limit (LOD). The effectiveness of each approach was analyzed by the root mean square error (RMSE) measure as well as the metabolite list concordance index (MLCI) for influence on downstream statistical testing. Through extensive simulation studies and application to three real data sets, we show that KNN-TN has lower RMSE values compared to the other two KNN procedures as well as simpler imputation methods based on substituting missing values with the metabolite mean, zero values, or the LOD. MLCI values between KNN-TN and KNN-EU were roughly equivalent, and superior to the other four methods in most cases. Our findings demonstrate that KNN-TN generally has improved performance in imputing the missing values of the different datasets compared to KNN-CR and KNN-EU when there is missingness due to missing at random combined with an LOD. The results shown in this study are in the field of metabolomics, but this method could be applicable to any high throughput technology with missingness due to an LOD.
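A sketch of the KNN-EU baseline the authors compare against: neighbours are found by Euclidean distance over jointly observed features, and the missing value is the mean of the k nearest donors. KNN-TN would additionally estimate truncated-normal means (truncation at the LOD) before computing distances, which is not modelled here; the data are invented.

```python
# Euclidean-distance KNN imputation (the KNN-EU baseline): for each missing
# cell, average that feature over the k nearest rows that observed it.
import math

def knn_impute(rows, k=2):
    """rows: list of lists, None = missing. Returns imputed copies."""
    def dist(a, b):
        common = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        if not common:
            return math.inf
        return math.sqrt(sum((x - y) ** 2 for x, y in common) / len(common))

    out = [list(r) for r in rows]
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            if v is None:
                donors = sorted(
                    (dist(row, other), other[j])
                    for h, other in enumerate(rows)
                    if h != i and other[j] is not None
                )[:k]
                out[i][j] = sum(val for _, val in donors) / len(donors)
    return out

data = [[1.0, 2.0, None],
        [1.1, 2.1, 3.0],
        [0.9, 1.9, 3.2],
        [5.0, 6.0, 9.0]]
imputed = knn_impute(data, k=2)
assert 3.0 <= imputed[0][2] <= 3.2   # donors are the two similar rows, not the outlier
```

The paper's point is that when missingness is LOD-driven, these naive donor averages are biased upward, motivating the truncated-normal correction.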
Ratcliffe, B; El-Dien, O G; Klápště, J; Porth, I; Chen, C; Jaquish, B; El-Kassaby, Y A
2015-12-01
Genomic selection (GS) potentially offers an unparalleled advantage over traditional pedigree-based selection (TS) methods by reducing the time commitment required to carry out a single cycle of tree improvement. This quality is particularly appealing to tree breeders, where lengthy improvement cycles are the norm. We explored the prospect of implementing GS for interior spruce (Picea engelmannii × glauca) utilizing a genotyped population of 769 trees belonging to 25 open-pollinated families. A series of repeated tree height measurements through ages 3-40 years permitted the testing of GS methods temporally. The genotyping-by-sequencing (GBS) platform was used for single nucleotide polymorphism (SNP) discovery in conjunction with three unordered imputation methods applied to a data set with 60% missing information. Further, three diverse GS models were evaluated based on predictive accuracy (PA), and their marker effects. Moderate levels of PA (0.31-0.55) were observed and were of sufficient capacity to deliver improved selection response over TS. Additionally, PA varied substantially through time accordingly with spatial competition among trees. As expected, temporal PA was well correlated with age-age genetic correlation (r=0.99), and decreased substantially with increasing difference in age between the training and validation populations (0.04-0.47). Moreover, our imputation comparisons indicate that k-nearest neighbor and singular value decomposition yielded a greater number of SNPs and gave higher predictive accuracies than imputing with the mean. Furthermore, the ridge regression (rrBLUP) and BayesCπ (BCπ) models both yielded equal, and better PA than the generalized ridge regression heteroscedastic effect model for the traits evaluated.
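The rrBLUP model referred to above shrinks all marker effects equally; a hedged numpy sketch on simulated genotypes (marker counts, effect sizes and the ridge parameter are arbitrary, and real rrBLUP estimates the penalty from variance components rather than fixing it):

```python
# Ridge-regression marker effects: with genotype matrix Z (individuals x
# markers, 0/1/2 allele counts) and phenotypes y, solve
# beta = (Z'Z + lambda*I)^{-1} Z'y; genomic values are then Z @ beta.
import numpy as np

rng = np.random.default_rng(42)
n, p = 60, 100                                      # more markers than individuals
Z = rng.integers(0, 3, size=(n, p)).astype(float)
true_beta = np.zeros(p); true_beta[:5] = 0.5        # a few simulated causal markers
y = Z @ true_beta + rng.normal(0, 1, n)

def rr_effects(Z, y, lam):
    p = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

b_small = rr_effects(Z, y, lam=1.0)
b_large = rr_effects(Z, y, lam=100.0)
assert np.linalg.norm(b_large) < np.linalg.norm(b_small)  # larger lambda, more shrinkage

gebv = Z @ b_small                                  # genomic estimated breeding values
```

Predictive accuracy in the study is the correlation between such genomic values and phenotypes in a held-out validation set, not the training fit shown here.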
Miecznikowski, Jeffrey C; Damodaran, Senthilkumar; Sellers, Kimberly F; Rabin, Richard A
2010-12-15
Numerous gel-based softwares exist to detect protein changes potentially associated with disease. The data, however, are abundant with technical and structural complexities, making statistical analysis a difficult task. A particularly important topic is how the various softwares handle missing data. To date, no one has extensively studied the impact that interpolating missing data has on subsequent analysis of protein spots. This work highlights the existing algorithms for handling missing data in two-dimensional gel analysis and performs a thorough comparison of the various algorithms and statistical tests on simulated and real datasets. For imputation methods, the best results in terms of root mean squared error are obtained using the least squares method of imputation along with the expectation maximization (EM) algorithm approach to estimate missing values with an array covariance structure. The bootstrapped versions of the statistical tests offer the most liberal option for determining protein spot significance while the generalized family wise error rate (gFWER) should be considered for controlling the multiple testing error. In summary, we advocate for a three-step statistical analysis of two-dimensional gel electrophoresis (2-DE) data with a data imputation step, choice of statistical test, and lastly an error control method in light of multiple testing. When determining the choice of statistical test, it is worth considering whether the protein spots will be subjected to mass spectrometry. If this is the case a more liberal test such as the percentile-based bootstrap t can be employed. For error control in electrophoresis experiments, we advocate that gFWER be controlled for multiple testing rather than the false discovery rate.
2014-01-01
Background Identification of recombination events and of which chromosomal segments contributed to an individual is useful for a number of applications in genomic analyses, including haplotyping, imputation, signatures of selection, and improved estimates of relationship and probability of identity by descent. Genotypic data on half-sib family groups are widely available in livestock genomics. This structure makes it possible to identify recombination events accurately even with only a few individuals, and it lends itself well to a range of applications such as parentage assignment and pedigree verification. Results Here we present hsphase, an R package that exploits the genetic structure found in half-sib livestock data to identify and count recombination events, impute and phase un-genotyped sires, and phase their offspring. The package also allows reconstruction of family groups (pedigree inference), identification of pedigree errors, and parentage assignment. Additional functions in the package allow identification of genomic mapping errors, imputation of paternal high density genotypes from low density genotypes, and evaluation of phasing results, either from hsphase or from other phasing programs. Various diagnostic plotting functions permit rapid visual inspection of results and evaluation of datasets. Conclusion The hsphase package provides a suite of functions for analysis and visualization of genomic structures in half-sib family groups, implemented in the widely used R programming environment. Low level functions were implemented in C++ and parallelized to improve performance. hsphase was primarily designed for use with high density SNP array data but is fast enough to run directly on sequence data once they become more widely available. The package is available (GPL 3) from the Comprehensive R Archive Network (CRAN) or from http://www-personal.une.edu.au/~cgondro2/hsphase.htm. PMID:24906803
Low Voltage Floating Gate MOS Transistor Based Four-Quadrant Multiplier
R. Srivastava
2014-12-01
This paper presents a four-quadrant multiplier based on the square-law characteristic of the floating gate MOSFET (FGMOS) in the saturation region. The proposed circuit uses the square-difference identity and the differential voltage squarer proposed by Gupta et al. to implement the multiplication function. The proposed multiplier employs only eight FGMOS transistors and two resistors. The FGMOS implementation of the multiplier allows low voltage operation, reduced power consumption and a minimum transistor count. Second order effects caused by mobility degradation, component mismatch and temperature variations are discussed. Performance of the proposed circuit is verified at ±0.75 V in TSMC 0.18 µm CMOS, BSIM3, Level 49 technology using the Cadence Spectre simulator.
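The square-difference identity behind the squarer-based design is (v1 + v2)² − (v1 − v2)² = 4·v1·v2, so two squaring circuits and a subtractor suffice for four-quadrant multiplication. A one-line numerical check (illustrative only, not a circuit model):

```python
def four_quadrant_multiply(v1, v2):
    # Square-difference identity used by squarer-based multipliers:
    # (v1 + v2)^2 - (v1 - v2)^2 = 4 * v1 * v2, valid in all four
    # sign quadrants of (v1, v2).
    return ((v1 + v2) ** 2 - (v1 - v2) ** 2) / 4.0
```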
FPGA Implementation of Double Precision Floating Point Multiplier using Xilinx Coregen Tool
Sukhvir Kaur
2013-06-01
Floating point arithmetic is widely used in many areas, especially scientific computation and signal processing. The main applications of floating point today are in medical imaging, biometrics, motion capture and audio processing. The IEEE floating point standard defines both single precision and double precision formats. Multiplication is a core operation in many signal processing computations, so efficient implementation of floating point multipliers is an important concern. Most prior work has implemented the lower precision floating point formats; this work considers the implementation of a 64-bit double precision multiplier. This paper presents the FPGA implementation of a double precision floating point multiplier using the Xilinx Coregen tool.
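The double precision format targeted here packs a sign bit, an 11-bit biased exponent and a 52-bit fraction into 64 bits; a hardware multiplier XORs the signs, adds the unbiased exponents, and multiplies the 53-bit significands (implicit leading 1 restored). A minimal sketch of that field arithmetic (Python used purely for illustration of the datapath, not as an FPGA design):

```python
import struct

def fields(x):
    """Unpack an IEEE 754 double into (sign, biased exponent, fraction)."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

# Multiplier datapath for normal operands:
s1, e1, f1 = fields(3.0)
s2, e2, f2 = fields(-0.5)
sign = s1 ^ s2                            # XOR of signs
exponent = (e1 - 1023) + (e2 - 1023)      # sum of unbiased exponents
sig = (f1 | 1 << 52) * (f2 | 1 << 52)     # 106-bit significand product
# 3.0 * -0.5 = -1.5: sign 1, exponent 0, significand 1.5 * 2^104.
```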
Shuli Gao
2009-01-01
Modern FPGAs contain embedded DSP blocks, which can be configured as multipliers of more than one possible size. FPGA-based designs using these multigranular embedded blocks become more challenging when high speed and reduced area utilization are required. This paper proposes an efficient design methodology for implementing large signed multipliers using multigranular small embedded blocks. The proposed approach has been implemented and tested targeting Altera's Stratix II FPGAs with the aid of the Quartus II software tool. The implementations of the multipliers have been carried out for operands with sizes ranging from 40 to 256 bits. Experimental results demonstrate that our design approach outperforms the standard scheme used by the Quartus II tool in terms of speed and area. On average, the delay reduction is about 20.7% and the area saving, in terms of ALUTs, is about 67.6%.
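The tiling idea, building a wide multiplier from small embedded blocks, follows the schoolbook split of each operand into high and low halves, yielding four narrow partial products that are shifted and summed. A minimal sketch (illustrative only; the paper's multigranular signed scheme is more elaborate):

```python
def split_mul(a, b, k):
    """Multiply two unsigned integers via four half-width partial products,
    mirroring how a large multiplier is tiled from k-bit embedded blocks."""
    mask = (1 << k) - 1
    a_hi, a_lo = a >> k, a & mask
    b_hi, b_lo = b >> k, b & mask
    # Shift-and-add recombination of the four partial products.
    return ((a_hi * b_hi) << (2 * k)) \
         + ((a_hi * b_lo + a_lo * b_hi) << k) \
         + a_lo * b_lo
```

Recursing on each partial product gives the multi-level decompositions needed for the 40- to 256-bit operand sizes studied above.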
A HIGHLY TIME-EFFICIENT DIGITAL MULTIPLIER BASED ON THE A2 BINARY REPRESENTATION
Hatem BOUKADIDA
2011-05-01
A comparative study of different types of digital multipliers based on the A2 redundant binary representation is presented in this paper. Several techniques have been proposed and implemented using different ALTERA Stratix FPGA platforms. The principle is to reduce the number of partial product terms to be summed with addition trees. These techniques are based on exploiting the associative and commutative properties of the addition operation. The multiplication was achieved using four schemes: the trivial scheme, the BRAUN scheme, the BOOTH scheme and the Carry-Save Wallace scheme. Two input A2-Natural transcoders and one output Natural-A2 transcoder are deployed to translate between the classical and the new A2 redundant binary representation. Synthesis results show that the A2-BRAUN multiplier requires less area than the conventional one. It was also noticed that the A2-Wallace multiplier offers better speed performance than the other schemes.
Design of Low Power Multiplier with Energy Efficient Full Adder Using DPTAAL
A. Kishore Kumar
2013-01-01
Asynchronous adiabatic logic (AAL) is a novel low-power design technique which combines the energy saving benefits of asynchronous systems with adiabatic benefits. In this paper, an energy efficient full adder using double pass transistor with asynchronous adiabatic logic (DPTAAL) is used to design a low power multiplier. Asynchronous adiabatic circuits are very low power circuits that preserve energy for reuse, which reduces the amount of energy drawn directly from the power supply. In this work, an 8×8 multiplier using DPTAAL is designed and simulated, which exhibits low power and reliable logical operation. To improve circuit performance at reduced voltage levels, double pass transistor logic (DPL) is introduced. The power results of the proposed multiplier design are compared with a conventional CMOS implementation. Simulation results show significant improvement in power for clock rates ranging from 100 MHz to 300 MHz.
Novel Design of a Nano-metric Fast 4*4 Reversible unsigned Wallace Multiplier Circuit
Ehsan PourAliAkbar
2015-12-01
One of the most promising technologies for designing low-power circuits is reversible computing. It is used in nanotechnology, quantum computing, quantum dot cellular automata (QCA), DNA computing, optical computing and CMOS low-power design. Since reversible logic is subject to certain restrictions (e.g., fan-out and feedback are not allowed), traditional synthesis methods are not applicable and specific methods have been developed. In this paper, we offer a 4*4 reversible Wallace multiplier circuit with higher speed and lower complexity than other multiplier circuits. The circuit also outperforms comparable designs in the number of gates, garbage outputs and constant inputs. The Peres gate is used as a half adder (HA) and the HNG gate as a full adder (FA). We offer the best method to multiply two 4-bit numbers. These nano-metric circuits can be used in very complex systems.
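The Peres gate's use as a half adder can be checked directly from its truth function: it maps (A, B, C) to (A, A⊕B, AB⊕C), so fixing C = 0 yields the sum and carry bits. A small sketch of that logic (illustrative only, not a reversible-hardware implementation):

```python
def peres(a, b, c):
    """Peres gate: maps (A, B, C) -> (A, A xor B, (A and B) xor C)."""
    return a, a ^ b, (a & b) ^ c

def half_adder(a, b):
    # Fixing the third (constant) input at 0 turns the Peres gate into a
    # half adder: second output = sum bit, third output = carry bit.
    _, s, c = peres(a, b, 0)
    return s, c
```

The first output (A passed through) is the "garbage output" counted in the cost metrics above.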
Design of a High Linearity Four-Quadrant Analog Multiplier in Wideband Frequency Range
Abdul kareem Mokif Obais
2017-05-01
In this paper, a voltage mode four quadrant analog multiplier in the wideband frequency range is designed using a wideband operational amplifier (OPAMP) and squaring circuits. The wideband OPAMP is designed using 10 identical NMOS transistors and operated with supply voltages of ±12 V. Two NMOS transistors and two wideband OPAMPs are utilized in the design of the proposed squaring circuit. All the NMOS transistors are based on 0.35 µm NMOS technology. The multiplier has input and output voltage ranges of ±10 V, a high range of linearity from -10 V to +10 V, and a cutoff frequency of about 5 GHz. The proposed multiplier is designed in PSpice in OrCAD 16.6.
G-Frame Representation and Invertibility of G-Bessel Multipliers
A. Abdollahi; E. Rahimi
2013-01-01
In this paper we show that every g-frame for an infinite dimensional Hilbert space H can be written as a sum of three g-orthonormal bases for H. We also prove that every g-frame can be represented as a linear combination of two g-orthonormal bases if and only if it is a g-Riesz basis. Further, we show that each g-Bessel multiplier is a Bessel multiplier and investigate the inversion of g-frame multipliers. Finally, we introduce the concepts of controlled g-frames and weighted g-frames and show that the sequence induced by each controlled g-frame (resp., weighted g-frame) is a controlled frame (resp., weighted frame).
Gao, Zhe; Dong, Mei; Wang, Guizhen; Sheng, Pei; Wu, Zhiwei; Yang, Huimin; Zhang, Bin; Wang, Guofu; Wang, Jianguo; Qin, Yong
2015-07-27
To design highly efficient catalysts, new concepts for optimizing the metal-support interactions are desirable. Here we introduce a facile and general template approach assisted by atomic layer deposition (ALD), to fabricate a multiply confined Ni-based nanocatalyst. The Ni nanoparticles are not only confined in Al2O3 nanotubes, but also embedded in the cavities of the Al2O3 interior wall. The cavities create more Ni-Al2O3 interfacial sites, which facilitate hydrogenation reactions. The nanotubes inhibit the leaching and detachment of Ni nanoparticles. Compared with the Ni-based catalyst supported on the outer surface of Al2O3 nanotubes, the multiply confined catalyst shows a striking improvement of catalytic activity and stability in hydrogenation reactions. Our ALD-assisted template method is general and can be extended to other multiply confined nanoreactors, which may have potential applications in many heterogeneous reactions.
R. P. Meenaakshi Sundari
2014-01-01
In this study, using the modified Wallace tree multiplier, an error compensated adder tree is constructed in order to round off truncation errors and to obtain a high throughput discrete cosine transform design. The Peak Signal to Noise Ratio (PSNR) requirement is met efficiently, since the modified Wallace tree method is an efficient, hardware-implementable digital circuit that multiplies two integers, yielding an output with reduced delay and error. Nearly 6% of delays and around 1% of gate counts are reduced. The number of look-up tables consumed is 2% lower than that of previous multipliers. Thus an area efficient discrete cosine transform is built to achieve high throughput with minimum gate count and delay for the required Peak Signal to Noise Ratio when compared with existing DCTs.
High frequency capacitor-diode voltage multiplier dc-dc converter development
Kisch, J. J.; Martinelli, R. M.
1977-01-01
A power conditioner was developed which used a capacitor diode voltage multiplier to provide a high voltage without the use of a step-up transformer. The power conditioner delivered 1200 Vdc at 100 watts and was operated from a 120 Vdc line. The efficiency was in excess of 90 percent. The component weight was 197 grams. A modified boost-add circuit was used for the regulation. A short circuit protection circuit was used which turns off the drive circuit upon a fault condition, and recovers within 5 ms after removal of the short. High energy density polysulfone capacitors and high speed diodes were used in the multiplier circuit.
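As an idealized illustration of the sizing of such a ladder (not the paper's modified boost-add design, and ignoring the >90% efficiency and regulation details reported above), a half-wave capacitor-diode (Cockcroft-Walton) multiplier driven by a square wave of amplitude Vin adds roughly 2·Vin per stage, so the 120 V-to-1200 V step-up corresponds to about five ideal stages:

```python
import math

def ideal_stages(v_in, v_out):
    """Ideal half-wave capacitor-diode (Cockcroft-Walton) ladder:
    each stage adds ~2 * v_in to the output, so the stage count is
    ceil(v_out / (2 * v_in)). Losses and ripple are ignored."""
    return math.ceil(v_out / (2 * v_in))
```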
A Low Power High Bandwidth Four Quadrant Analog Multiplier in 32 NM CNFET Technology
Vitrag Sheth
2012-05-01
The Carbon Nanotube Field Effect Transistor (CNFET) is a promising new technology that overcomes several limitations of traditional silicon integrated circuit technology. In recent years, the potential of CNFETs for analog circuit applications has been explored. This paper proposes a novel four quadrant analog multiplier design using CNFETs. Simulation based on 32 nm CNFET technology shows that the proposed multiplier has very low harmonic distortion (<0.45%), large input range (±400 mV), large bandwidth (~50 GHz) and low power consumption (~247 µW), while operating at a supply voltage of ±0.9 V.
Coefficient multipliers of H^p spaces over bounded symmetric domains in C
肖建斌
1995-01-01
One way to give information about the Taylor coefficients of H^p functions is to describe the multipliers of H^p into various spaces. In the case of one complex variable, Duren and Shields described the multipliers of H^p into l^q (0
MULTIPLIERS AND TENSOR PRODUCTS OF L(p, q) LORENTZ SPACES
Anonymous
2007-01-01
Let G be a locally compact abelian group. The main purpose of this article is to find the space of multipliers from the Lorentz space L(p1, q1)(G) to L(p2', q2')(G). For this reason, the authors define the space A^{p2,q2}_{p1,q1}(G), discuss its properties, and prove that the space of multipliers from L(p1, q1)(G) to L(p2', q2')(G) is isometrically isomorphic to the dual of A^{p2,q2}_{p1,q1}(G).
Limitations in THz Power Generation with Schottky Diode Varactor Frequency Multipliers
Krozer, Viktor; Loata, G.; Grajal, J.
2002-01-01
We discuss the limitations in power generation with Schottky diode and HBV (heterostructure barrier varactor) diode frequency multipliers. It is shown that at lower frequencies the experimental results achieved so far approach the theoretical limit of operation for the employed devices. However, at increasing frequencies the power drops with f^-3 instead of the f^-2 predicted by theory. In this contribution we provide an overview of state-of-the-art results. A comparison with theoretically achievable multiplier performance reveals that the devices employed at higher frequencies are operating
CMOS DESIGN OF A MULTI_INPUT ANALOG MULTIPLIER AND DIVIDER CIRCUIT
2014-01-01
This paper proposes a CMOS current-mode multi_input analog multiplier and divider circuit based on a new method. Exponential and logarithmic functions are employed to realize the circuit, which is used in neural network and fuzzy integrated systems. The major advantages of this multiplier are its ability to handle multiple input signals and its low Total Harmonic Distortion (THD). The circuit is designed and simulated using MATLAB software and the HSPICE simulator with Level 49 parameters (BSIM3v3) in 0.35 µm ...
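The exponential/logarithmic method amounts to computing products and quotients as the exponential of summed and subtracted logarithms, the translinear principle behind log/antilog analog circuits. A minimal numerical sketch (positive inputs only; the helper function is hypothetical, not the paper's circuit):

```python
import math

def log_multiply(inputs, divisors=()):
    """Multi-input multiply/divide via exp(sum of logs - sum of logs):
    prod(inputs) / prod(divisors), computed the way a log/antilog
    analog circuit would. Valid for positive signals only."""
    s = sum(math.log(x) for x in inputs) - sum(math.log(d) for d in divisors)
    return math.exp(s)
```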