Multiply-Imputed Synthetic Data: Advice to the Imputer
Directory of Open Access Journals (Sweden)
Loong Bronwyn
2017-12-01
Full Text Available Several statistical agencies have started to use multiply-imputed synthetic microdata to create public-use data in major surveys. The purpose of doing this is to protect the confidentiality of respondents’ identities and sensitive attributes, while allowing standard complete-data analyses of microdata. A key challenge faced by advocates of synthetic data is demonstrating that valid statistical inferences can be obtained from such synthetic data for non-confidential questions. Large discrepancies between observed-data and synthetic-data analytic results for such questions may arise because of uncongeniality; that is, differences in the types of inputs available to the imputer, who has access to the actual data, and to the analyst, who has access only to the synthetic data. Here, we discuss a simple, but possibly canonical, example of uncongeniality when using multiple imputation to create synthetic data, which specifically addresses the choices made by the imputer. An initial, unanticipated but not surprising, conclusion is that non-confidential design information used to impute synthetic data should be released with the synthetic data to allow users of synthetic data to avoid possible grossly conservative inferences.
Differential network analysis with multiply imputed lipidomic data.
Directory of Open Access Journals (Sweden)
Maiju Kujala
Full Text Available The importance of lipids for cell function and health has been widely recognized; for example, disorders in the lipid composition of cells have been related to atherosclerosis-caused cardiovascular disease (CVD). Lipidomics analyses are characterized by a large, though not huge, number of mutually correlated measured variables, whose associations with outcomes are potentially of a complex nature. Differential network analysis provides a formal statistical method, capable of inferential analysis, for examining differences in the network structures of lipids under two biological conditions. It also guides us to identify potential relationships requiring further biological investigation. We provide a recipe for conducting a permutation test on association scores resulting from partial least squares regression with multiply imputed lipidomic data from the LUdwigshafen RIsk and Cardiovascular Health (LURIC) study, paying particular attention to the left-censored missing values typical for a wide range of data sets in the life sciences. Left-censored missing values are low-level concentrations that are known to exist somewhere between zero and a lower limit of quantification. To make full use of the LURIC data with the missing values, we utilize state-of-the-art multiple imputation techniques and propose solutions to the challenges that incomplete data sets bring to differential network analysis. The customized network analysis helps us to understand the complexities of the underlying biological processes by identifying lipids and lipid classes that interact with each other, and by recognizing the most important differentially expressed lipids between two subgroups of coronary artery disease (CAD) patients: those who had a fatal CVD event and those who remained stable during a two-year follow-up.
Analyzing the changing gender wage gap based on multiply imputed right censored wages
Gartner, Hermann; Rässler, Susanne
2005-01-01
"In order to analyze the gender wage gap with the German IAB-employment register we have to solve the problem of censored wages at the upper limit of the social security system. We treat this problem as a missing data problem. We regard the missingness mechanism as not missing at random (NMAR, according to Little and Rubin, 1987, 2002) as well as missing by design. The censored wages are multiply imputed by draws of a random variable from a truncated distribution. The multiple imputation is b...
Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.
2017-06-01
The recent popularity of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods for generating and analyzing such data, but these almost always rely on asymptotic distributions and are consequently not adequate for small-sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may be used even with small-sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with those obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.
Chaurasia, Ashok; Harel, Ofer
2015-02-10
Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
Synthetic Multiple-Imputation Procedure for Multistage Complex Samples
Directory of Open Access Journals (Sweden)
Zhou Hanzhi
2016-03-01
Full Text Available Multiple imputation (MI) is commonly used when item-level missing data are present. However, MI requires that survey design information be built into the imputation models. For multistage stratified clustered designs, this requires dummy variables to represent strata as well as primary sampling units (PSUs) nested within each stratum in the imputation model. Such a modeling strategy is not only operationally burdensome but also inferentially inefficient when there are many strata in the sample design. Complexity only increases when sampling weights need to be modeled. This article develops a general-purpose analytic strategy for population inference from complex sample designs with item-level missingness. In a simulation study, the proposed procedures demonstrate efficient estimation and good coverage properties. We also consider an application to accommodate missing body mass index (BMI) data in the analysis of BMI percentiles using National Health and Nutrition Examination Survey (NHANES III) data. We argue that the proposed methods offer an easy-to-implement solution to problems that are not well-handled by current MI techniques. Note that, while the proposed method borrows from the MI framework to develop its inferential methods, it is not designed as an alternative strategy to release multiply imputed datasets for complex sample design data, but rather as an analytic strategy in and of itself.
Missing data imputation: focusing on single imputation.
Zhang, Zhongheng
2016-01-01
Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias, and useful information is omitted from the analysis. Therefore, many imputation methods have been developed to fill the gap. The present article focuses on single imputation. Imputation with the mean, median, or mode is simple but, like complete case analysis, can bias estimates of the mean and standard deviation. Furthermore, these methods ignore relationships with other variables. Regression imputation can preserve the relationship between missing values and other variables. Many sophisticated methods exist to handle missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations.
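The contrast drawn above can be shown with a minimal example (the article itself uses R; this is an equivalent Python sketch with made-up numbers): mean imputation ignores the x-y relationship entirely, while regression imputation predicts the missing value from it.

```python
# Toy data where y is roughly 2*x; the last y is missing.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, None]

obs = [(x, y) for x, y in zip(xs, ys) if y is not None]

# Mean imputation: fill with the observed mean, ignoring x.
mean_y = sum(y for _, y in obs) / len(obs)

# Regression imputation: fit y = a + b*x on complete cases,
# then predict the missing y from its x value.
n = len(obs)
mx = sum(x for x, _ in obs) / n
my = sum(y for _, y in obs) / n
b = sum((x - mx) * (y - my) for x, y in obs) / sum((x - mx) ** 2 for x, _ in obs)
a = my - b * mx
reg_y = a + b * 5.0

print(round(mean_y, 2))  # 5.05 -- pulled toward the middle of the data
print(round(reg_y, 2))   # 10.05 -- preserves the linear trend
```

Note that both are single imputations: neither reflects the uncertainty of the filled-in value, which is the motivation for multiple imputation elsewhere in this collection.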
Directory of Open Access Journals (Sweden)
Garry Jacobs
2013-05-01
Full Text Available This article is not a comprehensive factual history of money as an economic instrument. It aims rather to present an essential psychological history of the power of money as a social organization or social technology. It explores the catalytic role of money in the development of society and its ever-increasing capacity for accomplishment in both economic and non-economic fields. This perspective focuses attention on the unutilized potential for harnessing the social power of money for promoting full employment, global development and human welfare. The title ‘multiplying money’ is intended to convey the idea that this untapped potential is exponential in nature. In order to recognize it, some fundamental misconceptions about the nature of money, how it is created and on what it is based need to be examined. This is the second article in a series.
2013-01-01
A few weeks ago, I had a vague notion of what TED was, and how it worked, but now I’m a confirmed fan. It was my privilege to host CERN’s first TEDx event last Friday, and I can honestly say that I can’t remember a time when I was exposed to so much brilliance in such a short time. TEDxCERN was designed to give a platform to science. That’s why we called it Multiplying Dimensions – a nod towards the work we do here, while pointing to the broader importance of science in society. We had talks ranging from the most subtle pondering on the nature of consciousness to an eighteen-year-old researcher urging us to be patient, and to learn from our mistakes. We had musical interludes that included encounters between the choirs of local schools and will.i.am, between an Israeli pianist and an Iranian percussionist, and between Grand Opera and high humour. And although I opened the event by announcing it as a day off from physics, we had a quite brill...
Public Undertakings and Imputability
DEFF Research Database (Denmark)
Ølykke, Grith Skovgaard
2013-01-01
In this article, the issue of imputability to the State of public undertakings’ decision-making is analysed and discussed in the context of the DSBFirst case. DSBFirst is owned by the independent public undertaking DSB and the private undertaking FirstGroup plc and won the contracts in the 2008...... Oeresund tender for the provision of passenger transport by railway. From the start, the services were provided at a loss, and in the end a part of DSBFirst was wound up. In order to frame the problems illustrated by this case, the jurisprudence-based imputability requirement in the definition of State aid...... in Article 107(1) TFEU is analysed. It is concluded that where the public undertaking transgresses the control system put in place by the State, the conditions for imputability are not fulfilled, and it is argued that in the current state of the law, there is no conditional link between the level of control...
Performance of genotype imputation for low frequency and rare variants from the 1000 genomes.
Zheng, Hou-Feng; Rong, Jing-Jing; Liu, Ming; Han, Fang; Zhang, Xing-Wei; Richards, J Brent; Wang, Li
2015-01-01
Genotype imputation is now routinely applied in genome-wide association studies (GWAS) and meta-analyses. However, most imputations have been run using HapMap samples as the reference, and imputation of low frequency and rare variants (those with a low minor allele frequency, MAF) has been less thoroughly assessed; larger reference panels, such as the 1000 Genomes panel, are now available to facilitate imputation of these variants. Therefore, in order to estimate the performance of low frequency and rare variant imputation, we imputed 153 individuals, each of whom had genotype array data at three densities (317k, 610k and 1 million SNPs), to three different reference panels: the 1000 Genomes pilot March 2010 release (1KGpilot), the 1000 Genomes interim August 2010 release (1KGinterim), and the 1000 Genomes phase1 November 2010 and May 2011 release (1KGphase1), using IMPUTE version 2. The differences between these three releases of the 1000 Genomes data are the sample size, ancestry diversity, number of variants and their frequency spectrum. We found that both the reference panel and the GWAS chip density affect the imputation of low frequency and rare variants. 1KGphase1 outperformed the other two panels, with a higher concordance rate, a higher proportion of well-imputed variants (info>0.4) and a higher mean info score in each MAF bin. Similarly, the 1M chip array outperformed the 610K and 317K arrays. However, for very rare variants (MAF ≤ 0.3%), only 0-1% of the variants were well imputed. We conclude that the imputation of low frequency and rare variants improves with larger reference panels and higher density of genome-wide genotyping arrays. Yet, despite a large reference panel size and dense genotyping density, very rare variants remain difficult to impute.
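A masked-SNP concordance check of the kind used above can be sketched as follows (the genotypes are hypothetical, not the study's data). It also shows why raw concordance flatters rare variants, which motivates reporting the info score within each MAF bin.

```python
def concordance(true_g, imputed_g):
    """Fraction of best-guess imputed genotype calls (coded 0/1/2 for
    the number of minor alleles) matching the true, masked genotypes."""
    matches = sum(t == i for t, i in zip(true_g, imputed_g))
    return matches / len(true_g)

# Hypothetical masked genotypes at one rare variant across 10 samples.
true_g    = [0, 0, 0, 1, 0, 0, 0, 0, 2, 0]
imputed_g = [0, 0, 0, 0, 0, 0, 0, 0, 2, 0]   # the heterozygote is missed
print(concordance(true_g, imputed_g))  # 0.9
```

Calling every sample homozygous-reference would already score 0.8 on this variant, so a high concordance rate at low MAF says little by itself; that is why quality metrics stratified by MAF bin are reported above.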
Multiple imputation and its application
Carpenter, James
2013-01-01
A practical guide to analysing partially observed data. Collecting, analysing and drawing inferences from data is central to research in the medical and social sciences. Unfortunately, it is rarely possible to collect all the intended data. The literature on inference from the resulting incomplete data is now huge, and continues to grow both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enable researchers to apply these methods. This book focuses on a particular statistical method for analysing and drawing inferences from incomplete data, called Multiple Imputation (MI). MI is attractive because it is both practical and widely applicable. The authors' aim is to clarify the issues raised by missing data, describing the rationale for MI, the relationship between the various imputation models and associated algorithms and its application to increasingly complex data structures. Multiple Imputation and its Application: Discusses the issues ...
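MI's combining step, usually credited to Rubin, pools the m completed-data analyses into a single estimate and variance. A minimal sketch (the estimates and variances below are made up for illustration):

```python
def pool(estimates, variances):
    """Rubin's rules for combining m completed-data analyses:
    pooled point estimate, plus a total variance combining the
    within-imputation (W) and between-imputation (B) components."""
    m = len(estimates)
    qbar = sum(estimates) / m
    w = sum(variances) / m                                  # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)   # between-imputation
    t = w + (1 + 1 / m) * b                                 # total variance
    return qbar, t

# Five imputed datasets, each yielding an estimate and its variance.
qbar, t = pool([1.10, 1.05, 1.20, 0.95, 1.15],
               [0.04, 0.05, 0.04, 0.06, 0.05])
print(round(qbar, 3))  # 1.09
print(round(t, 4))     # 0.0591
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations; as m grows, the total variance approaches W + B.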
Flexible Imputation of Missing Data
van Buuren, Stef
2012-01-01
Missing data form a problem in every scientific discipline, yet the techniques required to handle them are complicated and often lacking. One of the great ideas in statistical science--multiple imputation--fills gaps in the data with plausible values, the uncertainty of which is coded in the data itself. It also solves other problems, many of which are missing data problems in disguise. Flexible Imputation of Missing Data is supported by many examples using real data taken from the author's vast experience of collaborative research, and presents a practical guide for handling missing data unde
R package imputeTestbench to compare imputation methods for univariate time series
Bokde, Neeraj; Kulat, Kishore; Beck, Marcus W; Asencio-Cortés, Gualberto
2016-01-01
This paper describes the R package imputeTestbench that provides a testbench for comparing imputation methods for missing data in univariate time series. The imputeTestbench package can be used to simulate the amount and type of missing data in a complete dataset and compare filled data using different imputation methods. The user has the option to simulate missing data by removing observations completely at random or in blocks of different sizes. Several default imputation methods are includ...
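The testbench workflow described above (simulate missingness in a complete series, fill it with several methods, compare the errors) can be sketched in Python. Note this is not the imputeTestbench R API, just an illustration of the idea, with mean imputation and last-observation-carried-forward as stand-in methods.

```python
import math
import random

def simulate_mcar(series, frac, seed=0):
    """Remove a fraction of observations completely at random."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(series)), int(frac * len(series)))
    out = list(series)
    for i in idx:
        out[i] = None
    return out

def impute_mean(series):
    """Fill every gap with the mean of the observed values."""
    obs = [v for v in series if v is not None]
    m = sum(obs) / len(obs)
    return [v if v is not None else m for v in series]

def impute_locf(series):
    """Last observation carried forward (0.0 if the series starts missing)."""
    out, last = [], 0.0
    for v in series:
        last = v if v is not None else last
        out.append(last)
    return out

def rmse(truth, filled):
    """Root mean squared error of the filled series against the truth."""
    return math.sqrt(sum((t - f) ** 2 for t, f in zip(truth, filled))
                     / len(truth))

truth = [math.sin(i / 5) for i in range(100)]
gappy = simulate_mcar(truth, frac=0.2)       # 20% missing, at random
for name, method in [("mean", impute_mean), ("locf", impute_locf)]:
    print(name, round(rmse(truth, method(gappy)), 3))
```

Because the missing values are deleted from a known complete series, the error of each method can be scored exactly; the package additionally supports block-wise (non-random) missingness patterns.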
International Nuclear Information System (INIS)
Seidman, A.; Avrahami, Z.; Sheinfux, B.; Grinberg, J.
1976-01-01
A channel electron multiplier is described having a tubular wall coated with a secondary-electron emitting material and including an electric field for accelerating the electrons, the electric field comprising a plurality of low-resistive conductive rings each alternating with a high-resistive insulating ring. The thickness of the low-resistive rings is many times larger than that of the high-resistive rings, being in the order of tens of microns for the low-resistive rings and at least one order of magnitude lower for the high-resistive rings; and the diameter of the channel tubular walls is also many times larger than the thickness of the high-resistive rings. Both single-channel and multiple-channel electron multipliers are described. A very important advantage, particularly in making multiple-channel multipliers, is the simplicity of the procedure that may be used in constructing such multipliers. Other operational advantages are described
International Nuclear Information System (INIS)
Comby, G.
1996-01-01
The Ceramic Electron Multiplier (CEM) is a compact, robust, linear and fast multi-channel electron multiplier. The Multi Layer Ceramic Technique (MLCT) allows metallic dynodes to be built inside a compact ceramic block. The activation of the metallic dynodes enhances their secondary electron emission (SEE). The CEM can be used in multi-channel photomultipliers, multi-channel light intensifiers, ion detection, spectroscopy, analysis of time-of-flight events, particle detection or Cherenkov imaging detectors. (auth)
The Multiply Handicapped Child.
Wolf, James M., Ed.; Anderson, Robert M., Ed.
Articles presented in the area of the medical and educational challenge of the multiply handicapped child are an overview of the problem, the increasing challenge, congenital malformations, children whose mothers had rubella, prematurity and deafness, the epidemiology of reproductive casualty, and new education for old problems. Discussions of…
Microchannel electron multiplier
International Nuclear Information System (INIS)
Beranek, I.; Janousek, L.; Vitovsky, O.
1981-01-01
A microchannel electron multiplier is described for detecting low levels of alpha, beta, soft X-ray and UV radiations. It consists of a glass tube or a system of tubes of various shapes made of common technological glass. The inner tube surface is provided with an active coat with photoemitter and secondary emitter properties. (B.S.)
Flowers, William L., Jr.; Harris, John B.
1981-01-01
The multiplier effect is discussed as it applies to the field of continuing education. The authors' main point is that one grant or contract can, and should, be used as the basis for building organizational competencies and capabilities that will secure other funds. (Author/CT)
GEM the gas electron multiplier
Sauli, Fabio
1997-01-01
We describe the basic structure and operation of a new device, the Gas Electron Multiplier. Consisting of a polymer foil, metal-clad on both sides and perforated by a high density of holes, the GEM mesh allows charges released in the gas to be pre-amplified with good uniformity and energy resolution. Coupled to a micro-strip plate, the pre-amplification element preserves high rate capability and resolution at considerably lower operating voltages, thus completely eliminating discharges and instabilities. Several GEM grids can be operated in cascade; charge gains are large enough to allow detection of signals in the ionization mode on the last element, permitting the use of a simple printed circuit as the read-out electrode. Two-dimensional read-out can then be easily implemented. A new generation of simple, reliable and cheap fast position-sensitive detectors seems at hand.
Pritha Mitra; Tigran Poghosyan
2015-01-01
Amid renewed crisis, falling tax revenues, and rising debt, Ukraine faces serious fiscal consolidation needs. Durable fiscal adjustment can support economic confidence and rebuild buffers but what is its overall impact on growth? How effective are revenue versus spending instruments? Does current or capital spending have a larger impact? Applying a structural vector autoregressive model, this paper finds that Ukraine’s near-term revenue and spending multipliers are well below one. In the medi...
Nelson, Jane Bray
2012-01-01
As a new physics teacher, I was explaining how to find the weight of an object sitting on a table near the surface of the Earth. It bothered me when a student asked, "The object is not accelerating so why do you multiply the mass of the object by the acceleration due to gravity?" I answered something like, "That's true, but if the table were not…
Data driven estimation of imputation error-a strategy for imputation with a reject option
DEFF Research Database (Denmark)
Bak, Nikolaj; Hansen, Lars Kai
2016-01-01
Missing data is a common problem in many research fields and is a challenge that always needs careful considerations. One approach is to impute the missing values, i.e., replace missing values with estimates. When imputation is applied, it is typically applied to all records with missing values i...
Improving accuracy of rare variant imputation with a two-step imputation approach
DEFF Research Database (Denmark)
Kreiner-Møller, Eskil; Medina-Gomez, Carolina; Uitterlinden, André G
2015-01-01
Genotype imputation has been the pillar of the success of genome-wide association studies (GWAS) for identifying common variants associated with common diseases. However, most GWAS have been run using only 60 HapMap samples as reference for imputation, meaning less frequent and rare variants were not being comprehensively scrutinized. Next-generation arrays ensuring sufficient coverage together with new reference panels, such as the 1000 Genomes panel, are emerging to facilitate imputation of low-frequency single-nucleotide polymorphisms (minor allele frequency (MAF) ...). ... reference sample genotyped on a dense array and hereafter to the 1000 Genomes reference panel. We show that mean imputation quality, measured by the r(2) using this approach, increases by 28% for variants with a MAF between 1 and 5% as compared with direct imputation to the 1000 Genomes reference. Similarly...
UWB delay and multiply receiver
Energy Technology Data Exchange (ETDEWEB)
Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.
2013-09-10
An ultra-wideband (UWB) delay and multiply receiver is formed of a receive antenna; a variable gain attenuator connected to the receive antenna; a signal splitter connected to the variable gain attenuator; a multiplier having one input connected to an undelayed signal from the signal splitter and another input connected to a delayed signal from the signal splitter, the delay between the splitter signals being equal to the spacing between pulses from a transmitter whose pulses are being received by the receive antenna; a peak detection circuit connected to the output of the multiplier and connected to the variable gain attenuator to control the variable gain attenuator to maintain a constant amplitude output from the multiplier; and a digital output circuit connected to the output of the multiplier.
Directory of Open Access Journals (Sweden)
McElwee Joshua
2009-06-01
Full Text Available Abstract Background Although high-throughput genotyping arrays have made whole-genome association studies (WGAS) feasible, only a small proportion of SNPs in the human genome are actually surveyed in such studies. In addition, various SNP arrays assay different sets of SNPs, which leads to challenges in comparing results and merging data for meta-analyses. Genome-wide imputation of untyped markers allows us to address these issues in a direct fashion. Methods 384 Caucasian American liver donors were genotyped using Illumina 650Y (Ilmn650Y) arrays, from which we also derived genotypes from the Ilmn317K array. On these data, we compared two imputation methods: MACH and BEAGLE. We imputed 2.5 million HapMap Release22 SNPs, and conducted GWAS on ~40,000 liver mRNA expression traits (eQTL analysis). In addition, 200 Caucasian American and 200 African American subjects were genotyped using the Affymetrix 500K array plus a custom 164K fill-in chip. We then imputed the HapMap SNPs and quantified the accuracy by randomly masking observed SNPs. Results MACH and BEAGLE perform similarly with respect to imputation accuracy. The Ilmn650Y results in excellent imputation performance, and it outperforms the Affx500K or Ilmn317K sets. For Caucasian Americans, 90% of the HapMap SNPs were imputed at 98% accuracy. As expected, imputation of poorly tagged SNPs (untyped SNPs in weak LD with typed markers) was not as successful. It was more challenging to impute genotypes in the African American population, given (1) shorter LD blocks and (2) admixture with Caucasian populations in this population. To address issue (2), we pooled HapMap CEU and YRI data as an imputation reference set, which greatly improved overall performance. The approximate 40,000 phenotypes scored in these populations provide a path to determine empirically how the power to detect associations is affected by the imputation procedures. That is, at a fixed false discovery rate, the number of cis
Estimating the accuracy of geographical imputation
Directory of Open Access Journals (Sweden)
Boscoe Francis P
2008-01-01
Full Text Available Abstract Background To reduce the number of non-geocoded cases, researchers and organizations sometimes include cases geocoded to postal code centroids along with cases geocoded with the greater precision of a full street address. Some analysts then use the postal code to assign information to the cases from finer-level geographies such as a census tract. Assignment is commonly completed using either a postal centroid or a geographical imputation method which assigns a location by using both the demographic characteristics of the case and the population characteristics of the postal delivery area. To date, no systematic evaluation of geographical imputation methods ("geo-imputation") has been completed. The objective of this study was to determine the accuracy of census tract assignment using geo-imputation. Methods Using a large dataset of breast, prostate and colorectal cancer cases reported to the New Jersey Cancer Registry, we determined how often cases were assigned to the correct census tract using alternate strategies of demographic-based geo-imputation, and using assignments obtained from postal code centroids. Assignment accuracy was measured by comparing the tract assigned with the tract originally identified from the full street address. Results Assigning cases to census tracts using the race/ethnicity population distribution within a postal code resulted in more correctly assigned cases than using postal code centroids. The addition of age characteristics increased the match rates even further. Match rates were highly dependent on both the geographic distribution of race/ethnicity groups and population density. Conclusion Geo-imputation appears to offer some advantages and no serious drawbacks as compared with the alternative of assigning cases to census tracts based on postal code centroids. For a specific analysis, researchers will still need to consider the potential impact of geocoding quality on their results and evaluate
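Demographic-based geo-imputation of the kind evaluated above can be sketched as follows. The tract names and population counts are invented for illustration; a real implementation would use census counts for each tract within the postal delivery area.

```python
import random

# Hypothetical postal code spanning three census tracts; counts are the
# number of residents of each demographic group in each tract (made up).
tract_pops = {
    "tract_A": {"group1": 800, "group2": 100},
    "tract_B": {"group1": 150, "group2": 600},
    "tract_C": {"group1": 50,  "group2": 300},
}

def impute_tract(group, rng):
    """Assign a case to a tract with probability proportional to that
    tract's share of the case's demographic group."""
    tracts = list(tract_pops)
    weights = [tract_pops[t].get(group, 0) for t in tracts]
    return rng.choices(tracts, weights=weights)[0]

rng = random.Random(42)
assigned = [impute_tract("group1", rng) for _ in range(1000)]
share_a = assigned.count("tract_A") / 1000
# Roughly 80% of group1 cases should land in tract_A (800 of 1000
# group1 residents live there), versus a fixed centroid assignment.
print(share_a)
```

Accuracy in the study above is then the fraction of cases whose imputed tract matches the tract geocoded from the full street address, which is why match rates depend on how concentrated each group is within the postal code.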
Missing value imputation for epistatic MAPs
LENUS (Irish Health Repository)
Ryan, Colm
2010-04-20
Abstract Background Epistatic miniarray profiling (E-MAPs) is a high-throughput approach capable of quantifying aggravating or alleviating genetic interactions between gene pairs. The datasets resulting from E-MAP experiments typically take the form of a symmetric pairwise matrix of interaction scores. These datasets have a significant number of missing values - up to 35% - that can reduce the effectiveness of some data analysis techniques and prevent the use of others. An effective method for imputing interactions would therefore increase the types of possible analysis, as well as increase the potential to identify novel functional interactions between gene pairs. Several methods have been developed to handle missing values in microarray data, but it is unclear how applicable these methods are to E-MAP data because of their pairwise nature and the significantly larger number of missing values. Here we evaluate four alternative imputation strategies, three local (Nearest neighbor-based) and one global (PCA-based), that have been modified to work with symmetric pairwise data. Results We identify different categories for the missing data based on their underlying cause, and show that values from the largest category can be imputed effectively. We compare local and global imputation approaches across a variety of distinct E-MAP datasets, showing that both are competitive and preferable to filling in with zeros. In addition we show that these methods are effective in an E-MAP from a different species, suggesting that pairwise imputation techniques will be increasingly useful as analogous epistasis mapping techniques are developed in different species. We show that strongly alleviating interactions are significantly more difficult to predict than strongly aggravating interactions. Finally we show that imputed interactions, generated using nearest neighbor methods, are enriched for annotations in the same manner as measured interactions. Therefore our method potentially
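A nearest-neighbor imputation for a symmetric pairwise interaction matrix, in the spirit of the local methods evaluated above, can be sketched as follows. This is an illustrative sketch, not the authors' algorithm, and the interaction scores are invented; `None` marks missing entries.

```python
def similarity(a, b):
    """Negative mean absolute difference over jointly observed entries
    (higher means more similar interaction profiles)."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    if not pairs:
        return float("-inf")
    return -sum(abs(x - y) for x, y in pairs) / len(pairs)

def knn_impute(matrix, k=2):
    """Fill each missing (i, j) with the mean of that column across the
    k rows most similar to row i (excluding rows i and j themselves)."""
    n = len(matrix)
    out = [row[:] for row in matrix]
    for i in range(n):
        for j in range(n):
            if matrix[i][j] is None:
                neighbors = sorted(
                    (r for r in range(n)
                     if r not in (i, j) and matrix[r][j] is not None),
                    key=lambda r: similarity(matrix[i], matrix[r]),
                    reverse=True,
                )[:k]
                if neighbors:
                    out[i][j] = sum(matrix[r][j] for r in neighbors) / len(neighbors)
    return out

# Tiny symmetric interaction matrix with two missing scores.
m = [
    [0.0,  1.2, -0.8, None],
    [1.2,  0.0, -0.7,  2.0],
    [-0.8, -0.7, 0.0,  1.9],
    [None, 2.0,  1.9,  0.0],
]
filled = knn_impute(m)
print(round(filled[0][3], 2))  # 1.95
```

Because (i, j) and (j, i) are imputed from different neighbor sets, the filled matrix is not guaranteed to stay symmetric; averaging the two imputed values is one simple way to restore symmetry for pairwise data like E-MAPs.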
Cost reduction for web-based data imputation
Li, Zhixu; Shang, Shuo; Xie, Qing; Zhang, Xiangliang
2014-01-01
Web-based Data Imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity
Effective switching frequency multiplier inverter
Su, Gui-Jia [Oak Ridge, TN; Peng, Fang Z [Okemos, MI
2007-08-07
A switching frequency multiplier inverter for low-inductance machines that uses parallel-connected switches, each independently controlled according to a pulse width modulation scheme. The effective switching frequency is multiplied by the number of switches connected in parallel, while each individual switch operates within its switching frequency limit. This technique can also be used for other power converters such as DC/DC and AC/DC converters.
Fully conditional specification in multivariate imputation
van Buuren, S.; Brand, J. P.L.; Groothuis-Oudshoorn, C. G.M.; Rubin, D. B.
2006-01-01
The use of the Gibbs sampler with fully conditionally specified models, where the distribution of each variable given the other variables is the starting point, has become a popular method to create imputations in incomplete multivariate data. The theoretical weakness of this approach is that the
Microwave Frequency Multiplier
Velazco, J. E.
2017-02-01
High-power microwave radiation is used in the Deep Space Network (DSN) and Goldstone Solar System Radar (GSSR) for uplink communications with spacecraft and for monitoring asteroids and space debris, respectively. Intense X-band (7.1 to 8.6 GHz) microwave signals are produced for these applications via klystron and traveling-wave microwave vacuum tubes. In order to achieve higher data rate communications with spacecraft, the DSN is planning to gradually furnish several of its deep space stations with uplink systems that employ Ka-band (34-GHz) radiation. Also, the next generation of planetary radar, such as Ka-Band Objects Observation and Monitoring (KaBOOM), is considering frequencies in the Ka-band range (34 to 36 GHz) in order to achieve higher target resolution. Current commercial Ka-band sources are limited to power levels that range from hundreds of watts up to a kilowatt and, at the high-power end, tend to suffer from poor reliability. In either case, there is a clear need for stable Ka-band sources that can produce kilowatts of power with high reliability. In this article, we present a new concept for high-power, high-frequency generation (including Ka-band) that we refer to as the microwave frequency multiplier (MFM). The MFM is a two-cavity vacuum tube concept where low-frequency (2 to 8 GHz) power is fed into the input cavity to modulate and accelerate an electron beam. In the second cavity, the modulated electron beam excites and amplifies high-power microwaves at a frequency that is a multiple integer of the input cavity's frequency. Frequency multiplication factors in the 4 to 10 range are being considered for the current application, although higher multiplication factors are feasible. This novel beam-wave interaction allows the MFM to produce high-power, high-frequency radiation with high efficiency. A key feature of the MFM is that it uses significantly larger cavities than its klystron counterparts, thus greatly reducing power density and arcing
Directory of Open Access Journals (Sweden)
Vinay K Gupta
2016-06-01
Full Text Available Objective: The aim of this study was to assess the trend in mean BMI z-score among private school students from their anthropometric records when there were missing values in the outcome. Methodology: The anthropometric measurements of students from classes 1 to 12 were taken from the records of two private schools in Delhi, India from 2005 to 2010. These records comprise unbalanced longitudinal data; that is, not all students had measurements recorded every year. The trend in mean BMI z-score was estimated through a growth curve model. Prior to that, missing values of BMI z-score were imputed through multiple imputation using the same model. A complete case analysis was also performed, after excluding missing values, to compare the results with those obtained from the analysis of multiply imputed data. Results: The mean BMI z-score among school students significantly decreased over time in the imputed data (β = -0.2030, SE = 0.0889, p = 0.0232) after adjusting for age, gender, class, and school. The complete case analysis also showed a decrease in mean BMI z-score, though it was not statistically significant (β = -0.2861, SE = 0.0987, p = 0.065). Conclusions: The estimates obtained from the multiple imputation analysis were better than those from the complete case analysis in terms of lower standard errors. We showed that anthropometric measurements from school records can be used to monitor the weight status of children and adolescents, and that multiple imputation using a growth curve model can be useful when analyzing such data.
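The combining step behind any such multiple-imputation analysis is Rubin's rules: fit the model to each completed dataset, then pool. A minimal sketch (the function name and toy numbers are ours, not the paper's; the estimates could be, e.g., growth-curve slopes for the BMI z-score trend):

```python
import statistics

def pool_rubin(estimates, variances):
    """Pool m completed-data results (point estimates and their squared
    standard errors) with Rubin's rules."""
    m = len(estimates)
    q_bar = statistics.mean(estimates)       # pooled point estimate
    u_bar = statistics.mean(variances)       # within-imputation variance
    b = statistics.variance(estimates)       # between-imputation variance
    t = u_bar + (1 + 1 / m) * b              # total variance of q_bar
    return q_bar, t

# Three hypothetical completed-data slopes with equal variances:
q, t = pool_rubin([1.0, 1.2, 0.8], [0.1, 0.1, 0.1])
```

The total variance exceeds the average within-imputation variance whenever the imputations disagree, which is how MI propagates imputation uncertainty into the pooled standard error.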
LinkImputeR: user-guided genotype calling and imputation for non-model organisms.
Money, Daniel; Migicovsky, Zoë; Gardner, Kyle; Myles, Sean
2017-07-10
Genomic studies such as genome-wide association and genomic selection require genome-wide genotype data. All existing technologies used to create these data result in missing genotypes, which are often then inferred using genotype imputation software. However, existing imputation methods most often make use only of genotypes that are successfully inferred after having passed a certain read depth threshold. Because of this, any read information for genotypes that did not pass the threshold, and were thus set to missing, is ignored. Most genomic studies also choose read depth thresholds and quality filters without investigating their effects on the size and quality of the resulting genotype data. Moreover, almost all genotype imputation methods require ordered markers and are therefore of limited utility in non-model organisms. Here we introduce LinkImputeR, a software program that exploits the read count information that is normally ignored, and makes use of all available DNA sequence information for the purposes of genotype calling and imputation. It is specifically designed for non-model organisms since it requires neither ordered markers nor a reference panel of genotypes. Using next-generation sequencing (NGS) data from apple, cannabis and grape, we quantify the effect of varying read count and missingness thresholds on the quantity and quality of genotypes generated from LinkImputeR. We demonstrate that LinkImputeR can increase the number of genotype calls by more than an order of magnitude, can improve genotyping accuracy by several percent and can thus improve the power of downstream analyses. Moreover, we show that the effects of quality and read depth filters can differ substantially between data sets and should therefore be investigated on a per-study basis. By exploiting DNA sequence data that is normally ignored during genotype calling and imputation, LinkImputeR can significantly improve both the quantity and quality of genotype data generated from
Hopke, P K; Liu, C; Rubin, D B
2001-03-01
Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
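For the below-detection-limit case, one common imputation step is to draw replacements from a fitted distribution truncated to the censoring interval. A minimal sketch under stated assumptions (the normal parameters are taken as given from a model fit elsewhere; rejection sampling is our simplification, not the paper's algorithm):

```python
import random

def impute_below_detection(values, lod, mu, sigma, seed=0):
    """Fill below-detection entries (None) with draws from a normal(mu, sigma)
    truncated to (0, lod), via simple rejection sampling. Calling this
    repeatedly with different seeds yields the multiple imputations."""
    rng = random.Random(seed)
    out = []
    for v in values:
        if v is not None:
            out.append(v)
            continue
        while True:                      # reject draws outside (0, lod)
            d = rng.gauss(mu, sigma)
            if 0.0 < d < lod:
                out.append(d)
                break
    return out

# Two measured concentrations and two censored ones below lod = 0.5:
res = impute_below_detection([1.2, None, 0.9, None], lod=0.5, mu=0.4, sigma=0.3)
```

Every imputed value is guaranteed to lie in the known censoring interval, which is the defining constraint of left-censored ("coarsened") data.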
Clustering with Missing Values: No Imputation Required
Wagstaff, Kiri
2004-01-01
Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.
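The marginalization alternative mentioned above can be as simple as computing distances over jointly observed features only, the standard "partial distance" trick. This sketch illustrates that baseline, not the paper's KSC soft-constraint algorithm:

```python
import math

def partial_distance(x, y):
    """Euclidean distance over jointly observed features (None = missing),
    rescaled by the fraction of features that were usable so that sparser
    comparisons are not systematically smaller."""
    terms = [(a - b) ** 2 for a, b in zip(x, y)
             if a is not None and b is not None]
    if not terms:
        return float("inf")              # nothing jointly observed
    return math.sqrt(sum(terms) * len(x) / len(terms))

# Only dimensions 0 and 2 are jointly observed here:
d = partial_distance([1.0, None, 3.0], [1.0, 5.0, 0.0])
```

Unlike imputation, no fabricated value ever enters the distance, so the result degrades gracefully as missingness grows rather than inheriting the imputer's assumptions.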
BRITS: Bidirectional Recurrent Imputation for Time Series
Cao, Wei; Wang, Dong; Li, Jian; Zhou, Hao; Li, Lei; Li, Yitan
2018-01-01
Time series are widely used as signals in many classification/regression tasks. Missing values are ubiquitous in time series. Given multiple correlated time series, how can we fill in missing values and predict their class labels? Existing imputation methods often impose strong assumptions on the underlying data-generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing va...
Bootstrap inference when using multiple imputation.
Schomaker, Michael; Heumann, Christian
2018-04-16
Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains however unclear how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
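One of the intuitive combinations (bootstrap first, then impute within each resample) can be sketched as follows. The hot-deck-style mean imputation and the percentile interval are our simplifications for illustration; they are not the paper's g-formula analysis:

```python
import random
import statistics

def boot_then_impute_mean(data, m=2, n_boot=200, seed=1):
    """Percentile bootstrap CI for a mean under missing data (None entries):
    resample rows, impute each resample m times by drawing from its own
    observed values, average the m completed-data means, then take the
    2.5% and 97.5% percentiles across bootstrap replicates.
    Assumes each resample contains at least one observed value."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]      # bootstrap resample
        observed = [v for v in sample if v is not None]
        completed = []
        for _ in range(m):                             # m random imputations
            filled = [v if v is not None else rng.choice(observed)
                      for v in sample]
            completed.append(statistics.mean(filled))
        means.append(statistics.mean(completed))
    means.sort()
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

lo, hi = boot_then_impute_mean([1, 2, 3, None, 4, 5, None, 6, 2, 3])
```

The paper's point is that the ordering (bootstrap-then-impute vs. impute-then-bootstrap) and the nesting of m within each replicate are exactly the design choices that determine whether the resulting interval is valid.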
Lagrange multipliers and gravitational theory
International Nuclear Information System (INIS)
Elston, F.D.
1977-01-01
The Lagrange multiplier variational method is extended to nonlinear Lagrangians in a Riemann space, where it is shown explicitly for the quadratic Lagrangians that, as expected, this approach is equivalent to the Hilbert variational method. It is not, in general, equivalent to the Palatini variational method. The nonvanishing Lagrange multipliers for the quadratic Lagrangians are explicitly obtained in covariant form. A similar analysis is then carried out in a Riemann–Cartan torsional metric space for the specific Lagrangians $g^{1/2}\tilde{R}$ and $g^{1/2}\tilde{R}_{\mu\nu}\tilde{R}^{\mu\nu}$. The possible relevance of the $R_{\mu\nu}\bar{R}^{\mu\nu}$ invariant to an action-principle formulation of the Rainich–Misner–Wheeler (RMW) already-unified theory is also discussed. It is then pointed out how a different use of the Lagrange multiplier technique in the language of the 3 + 1 canonical formalism developed by Arnowitt, Deser, and Misner (ADM) permits the recasting of the equations of motion for quadratic and general higher-order invariants into the ADM canonical formalism. In general, without this Lagrange multiplier approach, the higher-order ADM problem could not be solved. This is done explicitly for the simplest quadratic Lagrangian $g^{1/2}R^2$ as an example.
Multiplied Environmental Literacy. Final Report.
Buethe, Chris
This booklet presents a pupil-oriented program designed to increase the environmental literacy of teachers and students in Indiana schools through a programmed multiplier effect. Junior and senior high school science teachers were prepared to teach students the meanings of 44 selected environmental terms and related concepts. Those teachers then…
Gaussian mixture clustering and imputation of microarray data.
Ouyang, Ming; Welsh, William J; Georgopoulos, Panos
2004-04-12
In microarray experiments, missing entries arise from blemishes on the chips. In large-scale studies, virtually every chip contains some missing entries and more than 90% of the genes are affected. Many analysis methods require a full set of data. Either those genes with missing entries are excluded, or the missing entries are filled with estimates prior to the analyses. This study compares methods of missing value estimation. Two evaluation metrics of imputation accuracy are employed. First, the root mean squared error measures the difference between the true values and the imputed values. Second, the number of mis-clustered genes measures the difference between clustering with true values and that with imputed values; it examines the bias introduced by imputation to clustering. The Gaussian mixture clustering with model averaging imputation is superior to all other imputation methods, according to both evaluation metrics, on both time-series (correlated) and non-time series (uncorrelated) data sets.
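The first of the two evaluation metrics can be computed as below (a generic hold-out RMSE over deliberately masked entries; the function name is ours, not the paper's code):

```python
import math

def imputation_rmse(truth, imputed, held_out):
    """Root mean squared error between true and imputed values, restricted
    to the entries that were masked for evaluation (held_out flags)."""
    errs = [(t - e) ** 2 for t, e, h in zip(truth, imputed, held_out) if h]
    return math.sqrt(sum(errs) / len(errs))

# Entries 1 and 3 were masked; their imputations were 2.5 and 3:
r = imputation_rmse([1, 2, 3, 4], [1, 2.5, 3, 3], [False, True, False, True])
```

The second metric (mis-clustered genes) requires running the clustering twice and counting disagreements, which is why it captures imputation-induced bias that RMSE alone can miss.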
Directory of Open Access Journals (Sweden)
Kamatani Naoyuki
2011-05-01
Full Text Available Abstract Background Use of missing genotype imputation and haplotype reconstruction is valuable in genome-wide association studies (GWASs). By modeling the patterns of linkage disequilibrium in a reference panel, genotypes not directly measured in the study samples can be imputed and used for GWASs. Since millions of single nucleotide polymorphisms need to be imputed in a GWAS, faster methods for genotype imputation and haplotype reconstruction are required. Results We developed a program package for parallel computation of genotype imputation and haplotype reconstruction. Our program package, ParaHaplo 3.0, is intended for use on workstation clusters using the Intel Message Passing Interface. We compared the performance of ParaHaplo 3.0 on the Japanese in Tokyo, Japan, and Han Chinese in Beijing, China, samples from the HapMap dataset. The parallel version of ParaHaplo 3.0 can conduct genotype imputation 20 times faster than the non-parallel version of ParaHaplo. Conclusions ParaHaplo 3.0 is an invaluable tool for conducting haplotype-based GWASs. The need for faster genotype imputation and haplotype reconstruction using parallel computing will become increasingly important as the data sizes of such projects continue to increase. ParaHaplo executable binaries and program sources are available at http://en.sourceforge.jp/projects/parallelgwas/releases/.
Study on neutron irradiation behavior of beryllium as neutron multiplier
Energy Technology Data Exchange (ETDEWEB)
Ishitsuka, Etsuo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment
1998-03-01
More than 300 tons of beryllium are expected to be used as a neutron multiplier in ITER, and studies on the neutron irradiation behavior of beryllium as the neutron multiplier with the Japan Materials Testing Reactor (JMTR) were performed to obtain engineering data for fusion blanket design. This work began in 1985 as a study of tritium behavior in the beryllium neutron reflector, undertaken to clarify the mechanism of tritium generation in the JMTR primary coolant. These experiences were carried over to beryllium studies for fusion research, and since 1990 comprehensive studies covering the production technology of beryllium pebbles, evaluation of irradiation behavior, and reprocessing technology have been conducted. In this presentation, the study of the neutron irradiation behavior of beryllium as the neutron multiplier with JMTR is reviewed from the viewpoints of tritium release, thermal properties, mechanical properties and reprocessing technology. (author)
Cost reduction for web-based data imputation
Li, Zhixu
2014-01-01
Web-based Data Imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity of these keywords and the complexity of data on the Web, different queries may retrieve different answers for the same absent field value. To decide the most probable correct answer for each absent field value, existing methods issue quite a few imputation queries for each absent value and then vote to decide the most probable answer. As a result, a large number of imputation queries must be issued to fill all absent values in an incomplete data set, which brings a large overhead. In this paper, we work on reducing the cost of Web-based Data Imputation in two aspects: First, we propose a query execution scheme which can secure the most probable correct answer to an absent field value by issuing as few imputation queries as possible. Second, we recognize and prune queries that will probably fail to return any answers a priori. Our extensive experimental evaluation shows that our proposed techniques substantially reduce the cost of Web-based Imputation without hurting its high imputation accuracy. © 2014 Springer International Publishing Switzerland.
Synthesis algorithm of VLSI multipliers for ASIC
Chua, O. H.; Eldin, A. G.
1993-01-01
Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.
Multiple imputation in the presence of non-normal data.
Lee, Katherine J; Carlin, John B
2017-02-20
Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
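Predictive mean matching with type-1 matching, the alternative recommended above, can be sketched as follows. The toy least-squares predictor and all names are ours; real implementations also perturb the regression coefficients between imputations:

```python
import random

def pmm_impute(x_obs, y_obs, x_new, k=3, seed=0):
    """Type-1 PMM sketch: fit a least-squares line on the complete cases,
    rank donors by the distance between their predicted y and the predicted
    y of the incomplete case, then copy the *observed* y of a random donor
    among the k nearest, so imputations are always actually observed values."""
    rng = random.Random(seed)
    n = len(x_obs)
    x_bar = sum(x_obs) / n
    y_bar = sum(y_obs) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(x_obs, y_obs)) \
        / sum((x - x_bar) ** 2 for x in x_obs)
    intercept = y_bar - slope * x_bar
    pred_new = intercept + slope * x_new
    donors = sorted(zip(x_obs, y_obs),
                    key=lambda d: abs(intercept + slope * d[0] - pred_new))[:k]
    return rng.choice(donors)[1]
```

Because the imputed value is drawn from the observed data, PMM never produces impossible values for a skewed or bounded variable, which is precisely why it sidesteps the transformation question.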
Mistler, Stephen A.; Enders, Craig K.
2017-01-01
Multiple imputation methods can generally be divided into two broad frameworks: joint model (JM) imputation and fully conditional specification (FCS) imputation. JM draws missing values simultaneously for all incomplete variables using a multivariate distribution, whereas FCS imputes variables one at a time from a series of univariate conditional…
Multiple Imputation of Predictor Variables Using Generalized Additive Models
de Jong, Roel; van Buuren, Stef; Spiess, Martin
2016-01-01
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The
Comparison of different Methods for Univariate Time Series Imputation in R
Moritz, Steffen; Sardá, Alexis; Bartz-Beielstein, Thomas; Zaefferer, Martin; Stork, Jörg
2015-01-01
Missing values in datasets are a well-known problem and there are quite a lot of R packages offering imputation functions. But while imputation in general is well covered within R, it is hard to find functions for imputation of univariate time series. The problem is that most standard imputation techniques cannot be applied directly. Most algorithms rely on inter-attribute correlations, while univariate time series imputation needs to employ time dependencies. This paper provides an overview of ...
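A typical univariate baseline exploits exactly those time dependencies. Linear interpolation across interior gaps, with nearest-value carry at the series edges, is a minimal sketch of the kind of function such packages provide (the function name is ours):

```python
def interpolate_na(series):
    """Linearly interpolate interior runs of None in a univariate series;
    leading/trailing gaps carry the nearest observed value.
    Assumes at least one observed value exists."""
    out = list(series)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:   # find the end of the gap
                j += 1
            if i == 0 or j == n:              # edge gap: carry nearest value
                fill = out[j] if i == 0 else out[i - 1]
                for k in range(i, j):
                    out[k] = fill
            else:                             # interior gap: linear ramp
                step = (out[j] - out[i - 1]) / (j - i + 1)
                for k in range(i, j):
                    out[k] = out[i - 1] + step * (k - i + 1)
            i = j
        else:
            i += 1
    return out
```

Note that this uses only the series' own temporal ordering, no other attributes, which is what distinguishes univariate time-series imputation from the cross-sectional methods the paper contrasts it with.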
Multiple Improvements of Multiple Imputation Likelihood Ratio Tests
Chan, Kin Wai; Meng, Xiao-Li
2017-01-01
Multiple imputation (MI) inference handles missing data by first properly imputing the missing values $m$ times, and then combining the $m$ analysis results from applying a complete-data procedure to each of the completed datasets. However, the existing method for combining likelihood ratio tests has multiple defects: (i) the combined test statistic can be negative in practice when the reference null distribution is a standard $F$ distribution; (ii) it is not invariant to re-parametrization; ...
Otanps synapse linear relation multiplier circuit
International Nuclear Information System (INIS)
Chible, H.
2008-01-01
In this paper, a four-quadrant VLSI analog multiplier is proposed for use in the implementation of the neuron and synapse modules of artificial neural networks. The main characteristics of this multiplier are its small silicon area, low power consumption, and high weight input voltage. (author)
On compact multipliers of topological algebras
International Nuclear Information System (INIS)
Mohammad, N.
1994-08-01
It is shown that if the maximal ideal space Δ(A) of a semisimple commutative complete metrizable locally convex algebra contains no isolated points, then every compact multiplier is trivial. Particularly, compact multipliers on semisimple commutative Frechet algebras whose maximal ideal space has no isolated points are identically zero. (author). 5 refs
Faster and Energy-Efficient Signed Multipliers
Directory of Open Access Journals (Sweden)
B. Ramkumar
2013-01-01
Full Text Available We demonstrate faster and energy-efficient column compression multiplication with very small area overheads by using a combination of two techniques: partition of the partial products into two parts for independent parallel column compression and acceleration of the final addition using new hybrid adder structures proposed here. Based on the proposed techniques, 8-b, 16-b, 32-b, and 64-b Wallace (W), Dadda (D), and HPM (H) reduction tree based Baugh-Wooley multipliers are developed and compared with the regular W, D, and H based Baugh-Wooley multipliers. The performances of the proposed multipliers are analyzed by evaluating the delay, area, and power, with 65 nm process technologies on interconnect and layout using industry standard design and layout tools. The result analysis shows that the 64-bit proposed multipliers are as much as 29%, 27%, and 21% faster than the regular W, D, and H based Baugh-Wooley multipliers, respectively, with a maximum of only 2.4% power overhead. Also, the power-delay products (energy consumption) of the proposed 16-b, 32-b, and 64-b multipliers are significantly lower than those of the regular Baugh-Wooley multiplier. Applicability of the proposed techniques to the Booth-Encoded multipliers is also discussed.
Spin sensitivity of a channel electron multiplier
International Nuclear Information System (INIS)
Scholten, R.E.; McClelland, J.J.; Kelley, M.H.; Celotta, R.J.
1988-01-01
We report direct measurements of the sensitivity of a channel electron multiplier to electrons with different spin orientations. Four regions of the multiplier cone were examined using polarized electrons at 100-eV incident energy. Pulse counting and analog modes of operation were both investigated, and in each case the observed spin effects were less than 0.5%.
A web-based approach to data imputation
Li, Zhixu
2013-10-24
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, Webput utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, for improving the accuracy and efficiency of WebPut. Moreover, several optimization techniques are also proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple-level and the database-level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques. © 2013 Springer Science+Business Media New York.
The gas electron multiplier (GEM)
Bouclier, Roger; Dominik, Wojciech; Hoch, M; Labbé, J C; Million, Gilbert; Ropelewski, Leszek; Sauli, Fabio; Sharma, A
1996-01-01
We describe operating principles and results obtained with a new detector component: the Gas Electron Multiplier (GEM). Consisting of a thin composite sheet with two metal layers separated by a thin insulator, and pierced by a regular matrix of open channels, the GEM electrode, inserted on the path of electrons in a gas detector, allows the charge to be transferred with an amplification factor approaching ten. Uniform response and high rate capability are demonstrated. Coupled to another device, a multiwire or micro-strip chamber, the GEM electrode permits higher gains or less critical operation; separation of the sensitive (conversion) volume and the detection volume has other advantages, such as a built-in delay (useful for triggering purposes) and the possibility of applying high fields on the photo-cathode of ring imaging detectors to improve efficiency. Multiple GEM grids in the same gas volume allow large amplification factors to be obtained in a succession of steps, leading to the realization of an effective ga...
Gaseous Electron Multiplier (GEM) Detectors
Gnanvo, Kondo
2017-09-01
Gaseous detectors have played a pivotal role as tracking devices in the field of particle physics experiments for the last fifty years. Recent advances in photolithography and micro processing techniques have enabled the transition from Multi Wire Proportional Chambers (MWPCs) and Drift Chambers to a new family of gaseous detectors referred to as Micro Pattern Gaseous Detectors (MPGDs). MPGDs combine the basic gas amplification principle with micro-structure printed circuits to provide detectors with excellent spatial and time resolution, high rate capability, low material budget and high radiation tolerance. The Gas Electron Multiplier (GEM) is a well-established MPGD technology invented by F. Sauli at CERN in 1997 and deployed in various high energy physics (HEP) and nuclear physics (NP) experiments, including the tracking systems of current and future NP experiments. GEM detectors combine an exceptionally high rate capability (1 MHz/mm²) and robustness against harsh radiation environments with excellent position and timing resolution. Breakthroughs over the past decade have made large-area GEMs possible, making them cost-effective, high-performance detector candidates for current and future particle physics experiments. After a brief introduction to the basic principle of GEM technology, I will give a brief overview of the GEM detectors used in particle physics experiments over the past decades, especially in the NP community at Thomas Jefferson National Laboratory (JLab) and Brookhaven National Laboratory (BNL). I will follow with a review of the state of the art of new GEM developments for the next generation of colliders, such as the Electron-Ion Collider (EIC) and the High-Luminosity LHC, and for future nuclear physics experiments. I will conclude with a presentation of the CERN-based RD51 collaboration, established in 2008, and its major achievements regarding technological developments and applications of MPGDs.
Keynesian multiplier versus velocity of money
Wang, Yougui; Xu, Yan; Liu, Li
2010-08-01
In this paper we present the relation between the Keynesian multiplier and the velocity of money circulation in a money exchange model. For this purpose we modify the original exchange model by constructing an interrelation between income and expenditure. The random exchange yields an agent's income, which, along with the amount of money he possesses, determines his expenditure. In this interactive process, both the circulation of money and the Keynesian multiplier effect can be formulated. The equilibrium values of the Keynesian multiplier are demonstrated to be closely related to the velocity of money. Thus the impacts of macroeconomic policies on aggregate income can be understood by concentrating solely on the variations of money circulation.
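In its textbook form, the multiplier referred to here is the geometric series of spending rounds. A toy sketch under the assumption of a fixed marginal propensity to consume c (the functions are illustrative, not the paper's exchange model):

```python
def keynesian_multiplier(c):
    """Closed-form multiplier 1 / (1 - c) for marginal propensity to
    consume c, the limit of the spending-round series 1 + c + c^2 + ..."""
    assert 0 <= c < 1
    return 1 / (1 - c)

def income_after_rounds(injection, c, rounds):
    """Partial sums of the spending rounds:
    injection * (1 + c + ... + c^rounds)."""
    return injection * sum(c ** r for r in range(rounds + 1))
```

After only a few rounds the partial sums are still well below the closed-form limit, which is the interactive, round-by-round process the exchange model makes explicit via money circulation.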
Calculated characteristics of multichannel photoelectron multipliers
International Nuclear Information System (INIS)
Vasil'chenko, V.G.; Dajkovskij, A.G.; Milova, N.V.; Rakhmatov, V.E.; Rykalin, V.I.
1990-01-01
Structural features and main calculated characteristics of some modifications of position-sensitive two-coordinate multichannel photoelectron multipliers (PEM) with plate-type multiplying systems are described. The presented PEM structures are free from direct optical and ion feedbacks, provide coordinate resolution ≅ 1 mm with efficiency of photoelectron detection ≅ 90%. Capabilities for using silicon field-effect photocathodes, providing electron extraction into vacuum, as well as prospects of using multichannel multiplying systems for readout of the data from solid detectors are considered
Assessing accuracy of genotype imputation in American Indians.
Directory of Open Access Journals (Sweden)
Alka Malhotra
Full Text Available Genotype imputation is commonly used in genetic association studies to test untyped variants using information on linkage disequilibrium (LD) with typed markers. Imputing genotypes requires a suitable reference population in which the LD pattern is known, most often one selected from HapMap. However, some populations, such as American Indians, are not represented in HapMap. In the present study, we assessed the accuracy of imputation using HapMap reference populations in a genome-wide association study in Pima Indians. Data from six randomly selected chromosomes were used. Genotypes in the study population were masked (either 1% or 20% of SNPs available for a given chromosome). The masked genotypes were then imputed using the software Markov Chain Haplotyping Algorithm. Using four HapMap reference populations, average genotype error rates ranged from 7.86% for Mexican Americans to 22.30% for Yoruba. In contrast, use of the original Pima Indian data as a reference resulted in an average error rate of 1.73%. Our results suggest that the use of HapMap reference populations results in substantial inaccuracy in the imputation of genotypes in American Indians. A possible solution would be to densely genotype or sequence a reference American Indian population.
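The masking design used for evaluation can be sketched generically. The hypothetical `impute` callable stands in for the imputation software; nothing here reproduces the study's actual pipeline:

```python
import random

def masked_error_rate(genotypes, impute, frac=0.2, seed=0):
    """Hide a fraction of known genotypes, re-impute them with the supplied
    impute(index) callable, and report the discordance rate on the
    held-out positions."""
    rng = random.Random(seed)
    n_mask = max(1, int(frac * len(genotypes)))
    masked = rng.sample(range(len(genotypes)), n_mask)
    wrong = sum(impute(i) != genotypes[i] for i in masked)
    return wrong / n_mask
```

Because the truth at the masked positions is known, the resulting error rate directly compares reference panels, which is how the HapMap panels and the Pima-derived reference were ranked above.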
Directory of Open Access Journals (Sweden)
He Yulei
2016-03-01
Full Text Available Multiple imputation is a popular approach to handling missing data. Although it was originally motivated by survey nonresponse problems, it has been readily applied to other data settings. However, its general behavior still remains unclear when applied to survey data with complex sample designs, including clustering. Recently, Lewis et al. (2014) compared single- and multiple-imputation analyses for certain incomplete variables in the 2008 National Ambulatory Medical Care Survey, which has a nationally representative, multistage, and clustered sampling design. Their study results suggested that the increase in the variance estimate due to multiple imputation compared with single imputation largely disappears for estimates with large design effects. We complement their empirical research by providing some theoretical reasoning. We consider data sampled from an equally weighted, single-stage cluster design and characterize the process using a balanced, one-way normal random-effects model. Assuming that the missingness is completely at random, we derive analytic expressions for the within- and between-multiple-imputation variance estimators for the mean estimator, and thus conveniently reveal the impact of design effects on these variance estimators. We propose approximations for the fraction of missing information in clustered samples, extending previous results for simple random samples. We discuss some generalizations of this research and its practical implications for data release by statistical agencies.
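For equally sized clusters, the design effect driving this argument has a simple closed form (the standard survey-sampling formula, stated here for orientation rather than taken from the article's derivation):

```python
def design_effect(cluster_size, icc):
    """Design effect for an equally weighted, single-stage cluster sample:
    deff = 1 + (n - 1) * rho, where n is the cluster size and rho the
    intraclass correlation. deff > 1 inflates variance relative to simple
    random sampling of the same total size."""
    return 1 + (cluster_size - 1) * icc
```

Even a modest intraclass correlation produces a large design effect when clusters are big, which is the regime where the extra between-imputation variance becomes negligible by comparison.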
Multipliers for continuous frames in Hilbert spaces
International Nuclear Information System (INIS)
Balazs, P; Bayer, D; Rahimi, A
2012-01-01
In this paper, we examine the general theory of continuous frame multipliers in Hilbert space. These operators are a generalization of the widely used notion of (discrete) frame multipliers. Well-known examples include anti-Wick operators, STFT multipliers or Calderón–Toeplitz operators. Due to the possible peculiarities of the underlying measure spaces, continuous frames do not behave quite as their discrete counterparts. Nonetheless, many results similar to the discrete case are proven for continuous frame multipliers as well, for instance compactness and Schatten-class properties. Furthermore, the concepts of controlled and weighted frames are transferred to the continuous setting. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Coherent states: mathematical and physical aspects’. (paper)
Economic Multipliers and Mega-Event Analysis
Victor Matheson
2004-01-01
Critics of economic impact studies that purport to show that mega-events such as the Olympics bring large benefits to the communities “lucky” enough to host them frequently cite the use of inappropriate multipliers as a primary reason why these impact studies overstate the true economic gains to the hosts of these events. This brief paper shows in a numerical example how mega-events may lead to inflated multipliers and exaggerated claims of economic benefits.
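The inflation mechanism can be illustrated with a toy Keynesian spending multiplier in which part of each spending round leaks out of the host economy. The parameter values and the leakage term are illustrative assumptions, not Matheson's own model:

```python
def spending_multiplier(mpc, leakage=0.0):
    """Simple spending multiplier k = 1 / (1 - mpc * (1 - leakage)).
    'leakage' is the share of each spending round that leaves the host
    economy (e.g., wages remitted by temporary non-resident event staff)."""
    return 1.0 / (1.0 - mpc * (1.0 - leakage))

k_naive = spending_multiplier(0.8)               # textbook case: k = 5
k_event = spending_multiplier(0.8, leakage=0.5)  # mega-event case: k is about 1.67
```

Applying the no-leakage multiplier to mega-event spending overstates the local impact by a factor of roughly three in this toy setting.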
The multiple imputation method: a case study involving secondary data analysis.
Walani, Salimah R; Cleland, Charles M
2015-05-01
To illustrate, using a secondary data analysis study as an example, the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. Data were drawn from the 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained-equation method. They used imputation diagnostic procedures and conducted regression analysis of the imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiply imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiply imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained-equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to apply this technique to large datasets. The authors recommend that nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
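The chained-equation idea can be sketched in miniature for two variables: alternately regress each variable on the other and fill its missing entries with fitted values. This is a deliberately stripped-down illustration; the study's actual model imputed many variables and, as in any proper MICE implementation, added random draws rather than deterministic fits:

```python
def chained_impute(x, y, iters=5):
    """Toy two-variable chained-equations pass: fit y ~ x on complete
    pairs, fill missing y (None), then fit x ~ y and fill missing x."""
    def fit(a, b):  # least-squares line predicting b from a, complete pairs only
        pairs = [(ai, bi) for ai, bi in zip(a, b)
                 if ai is not None and bi is not None]
        ma = sum(p[0] for p in pairs) / len(pairs)
        mb = sum(p[1] for p in pairs) / len(pairs)
        slope = (sum((p[0] - ma) * (p[1] - mb) for p in pairs)
                 / sum((p[0] - ma) ** 2 for p in pairs))
        return lambda v: mb + slope * (v - ma)
    x, y = list(x), list(y)
    for _ in range(iters):
        f = fit(x, y)  # impute y from x
        y = [f(xi) if yi is None and xi is not None else yi
             for xi, yi in zip(x, y)]
        g = fit(y, x)  # impute x from y
        x = [g(yi) if xi is None and yi is not None else xi
             for xi, yi in zip(x, y)]
    return x, y

x, y = chained_impute([1, 2, 3, 4, None], [2, 4, 6, None, 10])
```

On this toy data (y = 2x exactly), the missing y is filled with 8 and the missing x with 5.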
TRIP: An interactive retrieving-inferring data imputation approach
Li, Zhixu
2016-06-25
Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to nonquantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing values from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help, formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches high imputation precision and recall but, on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.
TRIP: An interactive retrieving-inferring data imputation approach
Li, Zhixu; Qin, Lu; Cheng, Hong; Zhang, Xiangliang; Zhou, Xiaofang
2016-01-01
Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to nonquantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing values from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help, formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches high imputation precision and recall but, on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.
Missing value imputation: with application to handwriting data
Xu, Zhen; Srihari, Sargur N.
2015-01-01
Missing values make pattern analysis difficult, particularly with limited available data. In longitudinal research, missing values accumulate, thereby aggravating the problem. Here we consider how to deal with temporal data with missing values in handwriting analysis. In the task of studying the development of individuality of handwriting, we encountered the fact that feature values are missing for several individuals at several time instances. Six algorithms, i.e., random imputation, mean imputation, most-likely independent value imputation, and three methods based on Bayesian networks (static Bayesian network, parameter EM, and structural EM), are compared on children's handwriting data. We evaluate the accuracy and robustness of the algorithms under different ratios of missing data and missing values, and useful conclusions are given. Specifically, the static Bayesian network is used for our data, which contain around 5% missing values, as it provides adequate accuracy at low computational cost.
Imputed prices of greenhouse gases and land forests
International Nuclear Information System (INIS)
Uzawa, Hirofumi
1993-01-01
The theory of dynamic optimum formulated by Mäler gives us the basic theoretical framework within which it is possible to analyse the economic and, possibly, political circumstances under which the phenomenon of global warming occurs, and to search for the policy and institutional arrangements whereby it would be effectively arrested. The analysis developed here is an application of Mäler's theory to atmospheric quality. In the analysis a central role is played by the concept of imputed price in the dynamic context. Our determination of the imputed prices of atmospheric carbon dioxide and land forests takes into account differences in the stages of economic development. Indeed, the ratios of the imputed prices of atmospheric carbon dioxide and land forests to the per capita level of real national income are identical for all countries involved. (3 figures, 2 tables) (Author)
Directory of Open Access Journals (Sweden)
Boeschoten Laura
2017-12-01
Full Text Available Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources, while taking into account impossible combinations with scores on other variables. Furthermore, the latent class model, by multiply imputing a new variable, enhances the quality of statistics based on the composite data set. The performance of this method is investigated by a simulation study, which shows that whether or not the method can be applied depends on the entropy R² of the latent class model and the type of analysis a researcher is planning to do. Finally, the method is applied to public data from Statistics Netherlands.
Data imputation analysis for Cosmic Rays time series
Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.
2017-05-01
The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since data are lost to mechanical failure, human error, technical problems, and differing periods of operation of GCR stations. The aim of this study was to perform multiple-dataset imputation in order to reconstruct the observational dataset. The study used the monthly time series of GCR counts from Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10% to 90% missing data, in steps of 10%, relative to the observed ROME series, with 50 replicates each; the CLMX station was then used as a proxy for allocating these scenarios. Three methods of monthly dataset imputation were selected: AMELIA II, which runs a bootstrap Expectation-Maximization algorithm; MICE, which runs an algorithm for Multivariate Imputation by Chained Equations; and MTSDI, an Expectation-Maximization-based method for imputing missing values in multivariate normal time series. The synthetic time series were evaluated against the observed ROME series using several skill measures, such as RMSE, NRMSE, the Agreement Index, R, R², the F-test, and the t-test. For CLMX and ROME, the R² and R statistics were 0.98 and 0.96, respectively. Increasing the number of gaps degrades the quality of the time series. Imputation was most efficient with the MTSDI method, which showed negligible errors and the best skill coefficients. The results suggest a practical limit of about 60% missing data for the imputation of monthly averages. It is noteworthy that the CLMX, ROME, and KIEL stations present no missing data in the target period. This methodology allowed 43 time series to be reconstructed.
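Two of the skill measures used to score the synthetic series against the observed one can be written down directly. These are generic definitions, not necessarily the exact conventions of the study (NRMSE in particular has several variants; range normalisation is assumed here):

```python
def rmse(obs, sim):
    """Root-mean-square error between an observed and an imputed series."""
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def nrmse(obs, sim):
    """RMSE normalised by the observed range (one common convention)."""
    return rmse(obs, sim) / (max(obs) - min(obs))

# toy check: one error of 2 over four points gives RMSE = 1
err = rmse([1, 2, 3, 4], [1, 2, 3, 6])
```

Normalising by the range makes the score comparable across stations with different count levels, which matters when comparing CLMX- and ROME-based reconstructions.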
Using imputation to provide location information for nongeocoded addresses.
Directory of Open Access Journals (Sweden)
Frank C Curriero
2010-02-01
Full Text Available The importance of geography as a source of variation in health research continues to receive sustained attention in the literature. The inclusion of geographic information in such research often begins by adding data to a map, which is predicated by some knowledge of location. A precise level of spatial information is conventionally achieved through geocoding, the geographic information system (GIS) process of translating mailing address information to coordinates on a map. The geocoding process is not without its limitations, though, since there is always a percentage of addresses which cannot be converted successfully (nongeocodable). This raises concerns regarding bias, since traditionally the practice has been to exclude nongeocoded data records from analysis. In this manuscript we develop and evaluate a set of imputation strategies for dealing with missing spatial information from nongeocoded addresses. The strategies are developed assuming a known zip code, with increasing use of collateral information, namely the spatial distribution of the population at risk. Strategies are evaluated using prostate cancer data obtained from the Maryland Cancer Registry. We consider total case enumerations at the Census county, tract, and block group level as the outcome of interest when applying and evaluating the methods. Multiple imputation is used to provide estimated total case counts based on complete data (geocodes plus imputed nongeocodes) with a measure of uncertainty. Results indicate that the imputation strategy based on using available population-based age, gender, and race information performed the best overall at the county, tract, and block group levels. The procedure allows for the potentially biased and likely underreported outcome, case enumerations based on only the geocoded records, to be presented with a statistically adjusted count (imputed count) with a measure of uncertainty that are based on all the case data, the geocodes and imputed
Multiple imputation of missing passenger boarding data in the national census of ferry operators
2008-08-01
This report presents findings from the 2006 National Census of Ferry Operators (NCFO) augmented with imputed values for passengers and passenger miles. Due to the imputation procedures used to calculate missing data, totals in Table 1 may not corresp...
Optical studies of multiply excited states
International Nuclear Information System (INIS)
Mannervik, S.
1989-01-01
Optical studies of multiply-excited states are reviewed with emphasis on emission spectroscopy. From optical measurements, properties such as excitation energies, lifetimes and autoionization widths can be determined with high accuracy, which constitutes a challenge for modern computational methods. This article mainly covers work on two-, three- and four-electron systems, but also sodium-like quartet systems. Furthermore, some comments are given on bound multiply-excited states in negative ions. Fine structure effects on transition wavelengths and lifetimes (autoionization) are discussed. In particular, the most recent experimental and theoretical studies of multiply-excited states are covered. Some remaining problems, which require further attention, are discussed in more detail. (orig.) With 228 refs
Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit
2017-06-01
Genotype imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5%≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations against that achieved with a population-specific, high-coverage (30×) whole-genome sequencing (WGS) based reference panel comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.
Sequence imputation of HPV16 genomes for genetic association studies.
Directory of Open Access Journals (Sweden)
Benjamin Smith
Full Text Available Human Papillomavirus type 16 (HPV16) causes over half of all cervical cancer, and some HPV16 variants are more oncogenic than others. The genetic basis for the extraordinary oncogenic properties of HPV16 compared to other HPVs is unknown. In addition, we neither know which nucleotides vary across and within HPV types and lineages, nor which of the single nucleotide polymorphisms (SNPs) determine oncogenicity. A reference set of 62 HPV16 complete genome sequences was established and used to examine patterns of evolutionary relatedness amongst variants using a pairwise identity heatmap and HPV16 phylogeny. A BLAST-based algorithm was developed to impute complete genome data from partial sequence information using the reference database. To interrogate the oncogenic risk of determined and imputed HPV16 SNPs, odds ratios for each SNP were calculated in a case-control viral genome-wide association study (VWAS) using biopsy-confirmed high-grade cervix neoplasia and self-limited HPV16 infections from Guanacaste, Costa Rica. HPV16 variants display evolutionarily stable lineages that contain conserved diagnostic SNPs. The imputation algorithm indicated that an average of 97.5±1.03% of SNPs could be accurately imputed. The VWAS revealed specific HPV16 viral SNPs associated with variant lineages and elevated odds ratios; however, individual causal SNPs could not be distinguished with certainty due to the nature of HPV evolution. Conserved and lineage-specific SNPs can be imputed with a high degree of accuracy from limited viral polymorphic data due to the lack of recombination and the stochastic mechanism of variation accumulation in the HPV genome. However, determining the role of novel variants or non-lineage-specific SNPs by VWAS will require direct sequence analysis. The investigation of patterns of genetic variation and the identification of diagnostic SNPs for lineages of HPV16 variants provides a valuable resource for future studies of HPV16
Imputing amino acid polymorphisms in human leukocyte antigens.
Directory of Open Access Journals (Sweden)
Xiaoming Jia
Full Text Available DNA sequence variation within human leukocyte antigen (HLA) genes mediates susceptibility to a wide range of human diseases. The complex genetic structure of the major histocompatibility complex (MHC) makes it difficult, however, to collect genotyping data in large cohorts. Long-range linkage disequilibrium between HLA loci and SNP markers across the MHC region offers an alternative approach, through imputation, to interrogate HLA variation in existing GWAS data sets. Here we describe a computational strategy, SNP2HLA, to impute classical alleles and amino acid polymorphisms at class I (HLA-A, -B, -C) and class II (-DPA1, -DPB1, -DQA1, -DQB1, and -DRB1) loci. To characterize the performance of SNP2HLA, we constructed two European-ancestry reference panels, one based on data collected in HapMap-CEPH pedigrees (90 individuals) and another based on data collected by the Type 1 Diabetes Genetics Consortium (T1DGC, 5,225 individuals). We imputed HLA alleles in an independent data set from the British 1958 Birth Cohort (N = 918) with gold-standard four-digit HLA types and SNPs genotyped using the Affymetrix GeneChip 500K and Illumina Immunochip microarrays. We demonstrate that the sample size of the reference panel, rather than the SNP density of the genotyping platform, is critical to achieve high imputation accuracy. Using the larger T1DGC reference panel, the average accuracy at four-digit resolution is 94.7% using the low-density Affymetrix GeneChip 500K, and 96.7% using the high-density Illumina Immunochip. For amino acid polymorphisms within HLA genes, we achieve 98.6% and 99.3% accuracy using the Affymetrix GeneChip 500K and Illumina Immunochip, respectively. Finally, we demonstrate how imputation and association testing at amino acid resolution can facilitate fine-mapping of primary MHC association signals, giving a specific example from type 1 diabetes.
Semigroups of Herz-Schur multipliers
DEFF Research Database (Denmark)
Knudby, Søren
2014-01-01
function (see Theorem 1.2). It is then shown that a (not necessarily proper) generator of a semigroup of Herz–Schur multipliers splits into a positive definite kernel and a conditionally negative definite kernel. We also show that the generator has a particularly pleasant form if and only if the group...
A quantum architecture for multiplying signed integers
International Nuclear Information System (INIS)
Alvarez-Sanchez, J J; Alvarez-Bravo, J V; Nieto, L M
2008-01-01
A new quantum architecture for multiplying signed integers is presented based on Booth's algorithm, which is well known in classical computation. It is shown how a quantum binary chain might be encoded by its flank changes, giving the final product in 2's-complement representation.
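For reference, the classical Booth procedure that the quantum architecture builds on can be sketched in a few lines. This is a sketch of the classical algorithm only, not of the quantum circuit or its flank-change encoding:

```python
def booth_multiply(m, r, bits=8):
    """Multiply two signed 'bits'-wide integers with Booth's algorithm,
    operating on a (2*bits + 1)-bit accumulator in two's complement."""
    mask = (1 << (2 * bits + 1)) - 1
    A = (m << (bits + 1)) & mask            # multiplicand in the high bits
    S = ((-m) << (bits + 1)) & mask         # its negation, same position
    P = ((r & ((1 << bits) - 1)) << 1)      # multiplier, with a 0 guard bit
    for _ in range(bits):
        last_two = P & 0b11
        if last_two == 0b01:                # 0 -> 1 flank: add multiplicand
            P = (P + A) & mask
        elif last_two == 0b10:              # 1 -> 0 flank: subtract it
            P = (P + S) & mask
        sign = P >> (2 * bits)              # arithmetic right shift by one
        P = (P >> 1) | (sign << (2 * bits))
    P >>= 1                                 # drop the guard bit
    if P >= (1 << (2 * bits - 1)):          # reinterpret as signed 2N-bit value
        P -= 1 << (2 * bits)
    return P
```

The flank-change tests (`01`, `10`) are exactly the transitions the abstract refers to when it describes encoding a quantum binary chain by its flank changes.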
An Imputation Model for Dropouts in Unemployment Data
Directory of Open Access Journals (Sweden)
Nilsson Petra
2016-09-01
Full Text Available Incomplete unemployment data is a fundamental problem when evaluating labour market policies in several countries. Many unemployment spells end for unknown reasons; in the Swedish Public Employment Service's register, as many as 20 percent. This leads to ambiguity regarding destination states (employment, unemployment, retired, etc.). According to complete combined administrative data, the employment rate among dropouts was close to 50 percent for the years 1992 to 2006, but from 2007 the employment rate has dropped to 40 percent or less. This article explores an imputation approach. We investigate imputation models estimated both on survey data from 2005/2006 and on complete combined administrative data from 2005/2006 and 2011/2012. The models are evaluated in terms of their ability to make correct predictions. The models have relatively high predictive power.
Towards a more efficient representation of imputation operators in TPOT
Garciarena, Unai; Mendiburu, Alexander; Santana, Roberto
2018-01-01
Automated Machine Learning encompasses a set of meta-algorithms intended to design and apply machine learning techniques (e.g., model selection, hyperparameter tuning, model assessment, etc.). TPOT, a software for optimizing machine learning pipelines based on genetic programming (GP), is a novel example of this kind of applications. Recently we have proposed a way to introduce imputation methods as part of TPOT. While our approach was able to deal with problems with missing data, it can prod...
DTW-APPROACH FOR UNCORRELATED MULTIVARIATE TIME SERIES IMPUTATION
Phan, Thi-Thu-Hong; Poisson Caillault, Emilie; Bigand, André; Lefebvre, Alain
2017-01-01
International audience; Missing data are inevitable in almost all domains of applied sciences. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low/un-correlated multivariate time series under an assumption of...
Which DTW Method Applied to Marine Univariate Time Series Imputation
Phan, Thi-Thu-Hong; Caillault, Émilie; Lefebvre, Alain; Bigand, André
2017-01-01
International audience; Missing data are ubiquitous in all domains of applied sciences. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Therefore, the aim of this paper is to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...
Imputation of missing data in time series for air pollutants
Junger, W. L.; Ponce de Leon, A.
2015-02-01
Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method that is suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete-data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas the validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations yielded valid results, even under missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
A spatial haplotype copying model with applications to genotype imputation.
Yang, Wen-Yun; Hormozdiari, Farhad; Eskin, Eleazar; Pasaniuc, Bogdan
2015-05-01
Ever since its introduction, the haplotype copy model has proven to be one of the most successful approaches for modeling genetic variation in human populations, with applications ranging from ancestry inference to genotype phasing and imputation. Motivated by coalescent theory, this approach assumes that any chromosome (haplotype) can be modeled as a mosaic of segments copied from a set of chromosomes sampled from the same population. At the core of the model is the assumption that any chromosome from the sample is equally likely to contribute a priori to the copying process. Motivated by recent works that model genetic variation in a geographic continuum, we propose a new spatial-aware haplotype copy model that jointly models geography and the haplotype copying process. We extend hidden Markov models of haplotype diversity such that at any given location, haplotypes that are closest in the genetic-geographic continuum map are a priori more likely to contribute to the copying process than distant ones. Through simulations starting from the 1000 Genomes data, we show that our model achieves superior accuracy in genotype imputation over the standard spatial-unaware haplotype copy model. In addition, we show the utility of our model in selecting a small personalized reference panel for imputation that leads to both improved accuracy as well as to a lower computational runtime than the standard approach. Finally, we show our proposed model can be used to localize individuals on the genetic-geographical map on the basis of their genotype data.
Equations for the stochastic cumulative multiplying chain
Energy Technology Data Exchange (ETDEWEB)
Lewins, J D [Cambridge Univ. (UK). Dept. of Engineering
1980-01-01
The forward and backward equations for the conditional probability of the neutron multiplying chain are derived in a new generalization accounting for the chain length and admitting time-dependent properties. These Kolmogorov equations form the basis of a variational and hence complete description of the 'lumped' multiplying system. The equations reduce to the marginal distribution, summed over all chain lengths, and to the simpler equations previously derived for that problem. The method of derivation, direct and in the probability space with a minimum of mathematical manipulation, is perhaps the chief attraction: the equations are also displayed in conventional generating-function form. As such, they appear to apply to a number of problems in areas of social anthropology, polymer chemistry, genetics and cell biology as well as neutron reactor theory and radiation damage.
Equations for the stochastic cumulative multiplying chain
International Nuclear Information System (INIS)
Lewins, J.D.
1980-01-01
The forward and backward equations for the conditional probability of the neutron multiplying chain are derived in a new generalization accounting for the chain length and admitting time-dependent properties. These Kolmogorov equations form the basis of a variational and hence complete description of the 'lumped' multiplying system. The equations reduce to the marginal distribution, summed over all chain lengths, and to the simpler equations previously derived for that problem. The method of derivation, direct and in the probability space with a minimum of mathematical manipulation, is perhaps the chief attraction: the equations are also displayed in conventional generating-function form. As such, they appear to apply to a number of problems in areas of social anthropology, polymer chemistry, genetics and cell biology as well as neutron reactor theory and radiation damage. (author)
Tourism multipliers in the Mexican economy
Directory of Open Access Journals (Sweden)
Antonio Kido-Cruz
2016-12-01
Full Text Available This paper presents an analysis of the multiplier impact generated by the tourism sector in Mexico in the year 2013. The importance of studying this sector lies in its contribution of over 8% to the national GDP and in its promising development, based on service quality and the country's standing as a preferred destination for visitors from developed countries. In addition, we simulate the multiplier impact of two current events: the construction of the new Mexico City International Airport and the increase in investment in Fibras (Mexican real estate investment trusts). The results are quite specific: directing investment to the tourism sector generates a better distribution of its impact, mainly in variables such as value added and remuneration.
Integrated optic vector-matrix multiplier
Watts, Michael R [Albuquerque, NM
2011-09-27
A vector-matrix multiplier is disclosed which uses N different wavelengths of light that are modulated with amplitudes representing elements of an N×1 vector and combined to form an input wavelength-division multiplexed (WDM) light stream. The input WDM light stream is split into N streamlets from which each wavelength of the light is individually coupled out and modulated for a second time using an input signal representing elements of an M×N matrix, and is then coupled into an output waveguide for each streamlet to form an output WDM light stream which is detected to generate a product of the vector and matrix. The vector-matrix multiplier can be formed as an integrated optical circuit using either waveguide amplitude modulators or ring resonator amplitude modulators.
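The optical signal flow reduces to an ordinary vector-matrix product, which can be mimicked in a few lines. This is a conceptual sketch of the signal path only, with made-up numbers, treating modulated amplitudes as ideal real values:

```python
def wdm_vector_matrix(M, x):
    """Mimic the WDM signal path: each wavelength carries one element of
    the N-vector x (first modulation); each output streamlet re-weights
    every wavelength by one matrix element (second modulation); the
    photodetector sums all wavelengths, yielding one element of y = M x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

y = wdm_vector_matrix([[1, 2], [3, 4], [5, 6]], [10, 20])  # 3x2 matrix, 2-vector
```

The per-streamlet sum performed by each photodetector is what makes the summation step of the matrix product effectively free in the optical domain.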
Single electron based binary multipliers with overflow detection ...
African Journals Online (AJOL)
electron based device. Multipliers with overflow detection based on serial and parallel prefix computation algorithm are elaborately discussed analytically and designed. The overflow detection circuits works in parallel with a simplified multiplier to ...
Tax Multipliers: Pitfalls in Measurement and Identification
Daniel Riera-Crichton; Carlos A. Vegh; Guillermo Vuletin
2012-01-01
We contribute to the literature on tax multipliers by analyzing the pitfalls in identification and measurement of tax shocks. Our main focus is on disentangling the discussion regarding the identification of exogenous tax policy shocks (i.e., changes in tax policy that are not the result of policymakers responding to output fluctuations) from the discussion related to the measurement of tax policy (i.e., finding a tax policy variable under the direct control of the policymaker). For this purp...
Electron cyclotron resonance multiply charged ion sources
International Nuclear Information System (INIS)
Geller, R.
1975-01-01
Three ion sources, that deliver multiply charged ion beams are described. All of them are E.C.R. ion sources and are characterized by the fact that the electrons are emitted by the plasma itself and are accelerated to the adequate energy through electron cyclotron resonance (E.C.R.). They can work without interruption during several months in a quasi-continuous regime. (Duty cycle: [fr
Multiplier-free filters for wideband SAR
DEFF Research Database (Denmark)
Dall, Jørgen; Christensen, Erik Lintz
2001-01-01
This paper derives a set of parameters to be optimized when designing filters for digital demodulation and range prefiltering in SAR systems. Aiming at an implementation in field programmable gate arrays (FPGAs), an approach for the design of multiplier-free filters is outlined. Design results are presented in terms of filter complexity and performance. One filter has been coded in VHDL and preliminary results indicate that the filter can meet a 2 GHz input sample rate.
Mining, regional Australia and the economic multiplier
Directory of Open Access Journals (Sweden)
Paul Cleary
2012-12-01
Full Text Available Mining in Australia has traditionally delivered a strong development multiplier for regional communities where most mines are based. This relationship has weakened in recent decades as a result of the introduction of mobile workforces - typically known as fly in, fly out. Political parties have responded with policies known as ‘royalties for regions’, though in designing them they overlooked long established Indigenous arrangements for sharing benefits with areas affected directly by mining.
The Uncertainty Multiplier and Business Cycles
Saijo, Hikaru
2013-01-01
I study a business cycle model where agents learn about the state of the economy by accumulating capital. During recessions, agents invest less, and this generates noisier estimates of macroeconomic conditions and an increase in uncertainty. The endogenous increase in aggregate uncertainty further reduces economic activity, which in turn leads to more uncertainty, and so on. Thus, through changes in uncertainty, learning gives rise to a multiplier effect that amplifies business cycles. I use ...
A nonparametric multiple imputation approach for missing categorical data
Directory of Open Access Journals (Sweden)
Muhan Zhou
2017-06-01
Full Text Available Abstract Background Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. Methods We propose a nearest-neighbour multiple imputation approach to impute a missing-at-random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting the missingness probabilities (the missingness model). A weighting scheme is used to accommodate the contributions of the two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. Results The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme, under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. Conclusions We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with
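The donor-selection mechanics described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the records, the scores `s1`/`s2` (stand-ins for the fitted outcome- and missingness-model predictive scores), and the weight `w` are all hypothetical.

```python
import random

def predictive_distance(rec_a, rec_b, w=0.5):
    """Weighted distance between two records' working-model scores.
    w balances the outcome-model score (s1) against the missingness-model
    score (s2); the paper's actual weighting scheme may differ."""
    return w * abs(rec_a["s1"] - rec_b["s1"]) + (1 - w) * abs(rec_a["s2"] - rec_b["s2"])

def impute_nn(missing_rec, donors, k=3, rng=random.Random(0)):
    """Impute by sampling one of the k nearest complete records (the donor set)."""
    ranked = sorted(donors, key=lambda d: predictive_distance(missing_rec, d))
    return rng.choice(ranked[:k])["y"]

# Hypothetical complete records: scores plus the observed category y.
donors = [
    {"s1": 0.10, "s2": 0.20, "y": "A"},
    {"s1": 0.15, "s2": 0.25, "y": "A"},
    {"s1": 0.80, "s2": 0.90, "y": "C"},
    {"s1": 0.50, "s2": 0.60, "y": "B"},
]
missing = {"s1": 0.12, "s2": 0.22}
print(impute_nn(missing, donors))  # a random draw from the nearest donors
```

With `k=2` the two nearest donors both carry category A, so the draw is deterministic; larger `k` trades closeness of the donors for a more stable donor pool.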
Isometric multipliers of a vector valued Beurling algebra on a ...
Indian Academy of Sciences (India)
Research Article, Proceedings – Mathematical Sciences, Volume 127, Issue 1, February 2017, pp 109- ... Keywords: weighted semigroup; multipliers of a semigroup; Beurling algebra; isometric multipliers.
Accounting for one-channel depletion improves missing value imputation in 2-dye microarray data.
Ritz, Cecilia; Edén, Patrik
2008-01-19
For 2-dye microarray platforms, some missing values may arise from an un-measurably low RNA expression in one channel only. Information on such "one-channel depletion" is so far not included in algorithms for imputation of missing values. Calculating the mean deviation between imputed values and duplicate controls in five datasets, we show that KNN-based imputation gives a systematic bias of the imputed expression values of one-channel depleted spots. Evaluating the correction of this bias by cross-validation showed that the mean square deviation between imputed values and duplicates was reduced by up to 51%, depending on the dataset. By including more information in the imputation step, we more accurately estimate missing expression values.
Highly accurate sequence imputation enables precise QTL mapping in Brown Swiss cattle.
Frischknecht, Mirjam; Pausch, Hubert; Bapst, Beat; Signer-Hasler, Heidi; Flury, Christine; Garrick, Dorian; Stricker, Christian; Fries, Ruedi; Gredler-Grandl, Birgit
2017-12-29
Within the last few years a large amount of genomic information has become available in cattle. Densities of genomic information vary from a few thousand variants up to whole genome sequence information. In order to combine genomic information from different sources and infer genotypes for a common set of variants, genotype imputation is required. In this study we evaluated the accuracy of imputation from high density chips to whole genome sequence data in Brown Swiss cattle. Using four popular imputation programs (Beagle, FImpute, Impute2, Minimac) and various compositions of reference panels, the accuracy of the imputed sequence variant genotypes was high and differences between the programs and scenarios were small. We imputed sequence variant genotypes for more than 1600 Brown Swiss bulls and performed genome-wide association studies for milk fat percentage at two stages of lactation. We found one and three quantitative trait loci for early and late lactation fat content, respectively. Known causal variants that were imputed from the sequenced reference panel were among the most significantly associated variants of the genome-wide association study. Our study demonstrates that whole-genome sequence information can be imputed at high accuracy in cattle populations. Using imputed sequence variant genotypes in genome-wide association studies may facilitate causal variant detection.
The Ability of Different Imputation Methods to Preserve the Significant Genes and Pathways in Cancer
Directory of Open Access Journals (Sweden)
Rosa Aghdam
2017-12-01
Full Text Available Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random for two types of ignorable and non-ignorable missingness mechanisms with various missing rates. Next, 10 well-known imputation methods were applied to the incomplete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlaps between significant genes detected from original data and those detected from the imputed datasets. Additionally, the significant genes were tested for their enrichment in important pathways, using the ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of the various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/.
Aghdam, Rosa; Baghfalaki, Taban; Khosravi, Pegah; Saberi Ansari, Elnaz
2017-12-01
Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random for two types of ignorable and non-ignorable missingness mechanisms with various missing rates. Next, 10 well-known imputation methods were applied to the incomplete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlaps between significant genes detected from original data and those detected from the imputed datasets. Additionally, the significant genes were tested for their enrichment in important pathways, using the ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of the various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/. Copyright © 2017. Production and hosting by Elsevier B.V.
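The overlap comparison described above can be illustrated with a hand-rolled Pearson chi-squared statistic on a 2x2 table (recovered vs. missed significant genes under two imputation methods). All counts below are made up for illustration, not the paper's results.

```python
def chi2_two_proportions(hit1, n1, hit2, n2):
    """Pearson chi-squared statistic (1 df, no continuity correction) for a
    2x2 table comparing two recovery proportions."""
    table = [[hit1, n1 - hit1], [hit2, n2 - hit2]]
    total = n1 + n2
    col = [hit1 + hit2, (n1 - hit1) + (n2 - hit2)]
    stat = 0.0
    for i, row_n in enumerate((n1, n2)):
        for j in range(2):
            expected = row_n * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical: of 200 genes significant in the original data, one imputation
# method recovers 190, a cruder one recovers 170.
print(round(chi2_two_proportions(190, 200, 170, 200), 3))  # → 11.111
```

A statistic this large (1 df) would indicate the two methods differ in how well they preserve the significant-gene set; the paper's finding is that real methods did not differ significantly.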
Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi
2016-06-21
Genotype imputation is an important tool for the prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available and can either employ universal machine learning methods or deploy algorithms dedicated to inferring missing genotypes. In this research the performance of eight machine learning methods (Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost) was compared in terms of imputation accuracy, computation time, and the factors affecting imputation accuracy. The methods were evaluated using real and simulated datasets to impute the untyped SNPs in parent-offspring trios. The tested methods show that imputation of parent-offspring trios can be accurate. Random Forest and Support Vector Machine were more accurate than the other machine learning methods, while TotalBoost performed slightly worse than the others. The running times differed between methods: ELM was always the fastest algorithm, whereas RBF required a long imputation time as the sample size increased. The tested methods can be an alternative for imputation of untyped SNPs at low rates of missing data. However, it is recommended that other machine learning methods also be evaluated for imputation. Copyright © 2016 Elsevier Ltd. All rights reserved.
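As a toy illustration of the nearest-neighbour family of methods compared above (not the paper's implementations), an untyped SNP can be filled by majority vote over the k reference individuals closest in Hamming distance on the typed SNPs. Genotypes are coded 0/1/2 and all data are invented.

```python
from collections import Counter

def hamming(a, b):
    """Number of positions at which two genotype vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def impute_snp(target_typed, reference, snp_index, k=3):
    """reference: list of (typed_genotypes, full_genotypes) pairs.
    Returns the majority genotype at snp_index among the k nearest."""
    nearest = sorted(reference, key=lambda r: hamming(target_typed, r[0]))[:k]
    votes = Counter(r[1][snp_index] for r in nearest)
    return votes.most_common(1)[0][0]

# Invented reference panel: 3 typed SNPs, a 4th SNP known only in the panel.
reference = [
    ([0, 1, 2], [0, 1, 2, 2]),
    ([0, 1, 1], [0, 1, 1, 2]),
    ([2, 2, 0], [2, 2, 0, 0]),
    ([0, 2, 2], [0, 2, 2, 2]),
]
print(impute_snp([0, 1, 2], reference, snp_index=3))  # → 2
```

Real genotype imputation software (Beagle, FImpute, Impute2, Minimac) uses haplotype models rather than raw nearest-neighbour votes; this sketch only conveys the shape of the problem.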
Imputation of genotypes in Danish two-way crossbred pigs using low density panels
DEFF Research Database (Denmark)
Xiang, Tao; Christensen, Ole Fredslund; Legarra, Andres
Genotype imputation is commonly used as an initial step of genomic selection. Studies on humans, plants and ruminants suggest that many factors affect the performance of imputation. However, studies have rarely investigated pigs, especially crossbred pigs. In this study, different scenarios...... of imputation from 5K SNPs to 7K SNPs on Danish Landrace, Yorkshire, and crossbred Landrace-Yorkshire were compared. In conclusion, genotype imputation on crossbreds performs as well as on purebreds when parental breeds are used as the reference panel. When the size of the reference is considerably large...... SNPs. This dataset will be analyzed for genomic selection in a future study...
Imputation and quality control steps for combining multiple genome-wide datasets
Directory of Open Access Journals (Sweden)
Shefali S Verma
2014-12-01
Full Text Available The electronic MEdical Records and GEnomics (eMERGE) network brings together DNA biobanks linked to electronic health records (EHRs) from multiple institutions. Approximately 52,000 DNA samples from distinct individuals have been genotyped using genome-wide SNP arrays across the nine sites of the network. The eMERGE Coordinating Center and the Genomics Workgroup developed a pipeline to impute and merge genomic data across the different SNP arrays to maximize sample size and power to detect associations with a variety of clinical endpoints. The 1000 Genomes cosmopolitan reference panel was used for imputation. Imputation results were evaluated using the following metrics: accuracy of imputation, allelic R² (the estimated correlation between the imputed and true genotypes), and the relationship between allelic R² and minor allele frequency. Computation time and memory resources required by two different software packages (BEAGLE and IMPUTE2) were also evaluated. A number of challenges were encountered due to the complexity of using two different imputation software packages, multiple ancestral populations, and many different genotyping platforms. We present lessons learned and describe the pipeline implemented here to impute and merge genomic data sets. The eMERGE imputed dataset will serve as a valuable resource for discovery, leveraging the clinical data that can be mined from the EHR.
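The allelic R² metric mentioned above is commonly computed as the squared Pearson correlation between imputed dosages and true genotypes. A self-contained sketch with invented numbers (not eMERGE data):

```python
def allelic_r2(imputed, true):
    """Squared Pearson correlation between imputed dosages and true genotypes."""
    n = len(imputed)
    mx = sum(imputed) / n
    my = sum(true) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(imputed, true))
    vx = sum((x - mx) ** 2 for x in imputed)
    vy = sum((y - my) ** 2 for y in true)
    return cov * cov / (vx * vy)

# True genotypes (0/1/2) and imputed dosages for six invented individuals.
true = [0, 1, 2, 0, 1, 2]
dosage = [0.1, 0.9, 1.8, 0.2, 1.2, 1.9]
print(round(allelic_r2(dosage, true), 3))  # close to 1 for accurate imputation
```

Values near 1 indicate well-imputed variants; in practice allelic R² degrades at low minor allele frequency, which is exactly the relationship the pipeline above evaluates.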
Study of heterogeneous multiplying and non-multiplying media by the neutron pulsed source technique
International Nuclear Information System (INIS)
Deniz, V.
1969-01-01
The pulsed neutron technique consists essentially in sending a short neutron pulse into the medium to be studied and in determining the asymptotic decay constant of the generated population. The variation of the decay constant as a function of the size of the medium allows the medium characteristics to be defined. This technique has been developed extensively in recent years and has been applied to moderating as well as multiplying media, in most cases homogeneous ones. We considered it of interest to apply this technique to lattices, to see whether useful information could be collected for lattice calculations. We present here a general theoretical study of the problem, and the results and interpretation of a series of experiments made on graphite lattices. There is good agreement for non-multiplying media. In the case of multiplying media, it is shown that the age value used until now in graphite lattice calculations is overestimated by about 10 per cent.
VIGAN: Missing View Imputation with Generative Adversarial Networks.
Shang, Chao; Palmer, Aaron; Sun, Jiangwen; Chen, Ko-Shin; Lu, Jin; Bi, Jinbo
2017-01-01
In an era when big data are becoming the norm, there is less concern with the quantity of data and more with their quality and completeness. In many disciplines, data are collected from heterogeneous sources, resulting in multi-view or multi-modal datasets. The missing data problem has been challenging to address in multi-view data analysis. In particular, when certain samples miss an entire view of data, this creates the missing view problem. Classic multiple imputation or matrix completion methods are hardly effective here, as there is no information in the missing view on which to base the imputation for such samples. The commonly used simple method of removing samples with a missing view can dramatically reduce sample size, thus diminishing the statistical power of a subsequent analysis. In this paper, we propose a novel approach for view imputation via generative adversarial networks (GANs), which we name VIGAN. This approach first treats each view as a separate domain and identifies domain-to-domain mappings via a GAN using randomly sampled data from each view, and then employs a multi-modal denoising autoencoder (DAE) to reconstruct the missing view from the GAN outputs based on paired data across the views. By optimizing the GAN and DAE jointly, our model enables the integration of knowledge about domain mappings and view correspondences to effectively recover the missing view. Empirical results on benchmark datasets validate the VIGAN approach by comparing against the state of the art. The evaluation of VIGAN in a genetic study of substance use disorders further proves the effectiveness and usability of this approach in life science.
Transient phenomena in bounded fast multiplying assemblies
International Nuclear Information System (INIS)
Kraft, T.E.
1976-01-01
A generalized dispersion formalism is developed in the context of time-, space-, and energy-dependent transport theory. The evolution of the neutron population in a fast multiplying system following an initial burst of neutrons is examined. The generalized dispersion law obtained is an integral equation, in one variable, for the Laplace and Fourier transformed time- and space-dependent sources of fission neutrons. An approximation technique is shown to generate solutions which converge in the L² norm to the exact solution for exact elastic, exact inelastic, Goertzel-Grueling or Wigner scattering kernels, and any reasonable fission spectrum.
Quasiparticle trapping and the quasiparticle multiplier
International Nuclear Information System (INIS)
Booth, N.E.
1987-01-01
Superconductors and in particular superconducting tunnel junctions can be used to detect phonons, electromagnetic radiation, x rays, and nuclear particles by the mechanism of Cooper-pair breaking to produce excess quasiparticles and phonons. We show that the sensitivity can be increased by a factor of 100 or more by trapping the quasiparticles in another superconductor of lower gap in the region of the tunnel junction. Moreover, if the ratio of the gap energies is >3 a multiplication process can occur due to the interaction of the relaxation phonons. This leads to the concept of the quasiparticle multiplier, a device which could have wider applications than the Gray effect transistor or the quiteron
Multipliers on Generalized Mixed Norm Sequence Spaces
Directory of Open Access Journals (Sweden)
Oscar Blasco
2014-01-01
Full Text Available Given 1 ≤ p, q ≤ ∞ and sequences of integers (n_k)_k and (n′_k)_k such that n_k ≤ n′_k ≤ n_{k+1}, the generalized mixed norm space ℓ^I(p,q) is defined as the set of those sequences (a_j)_j such that ((∑_{j∈I_k} |a_j|^p)^{1/p})_k ∈ ℓ^q, where I_k = {j ∈ ℕ_0 s.t. n_k ≤ j …
Effects of tritium on electron multiplier performance
International Nuclear Information System (INIS)
Kerst, R.A.; Malinowski, M.E.
1980-01-01
In developing diagnostic instruments for fusion reactors, it is necessary to measure the effects of tritium contamination on channel electron multipliers (CEMs). A CEM was exposed to T₂ pressures of up to 1.5 × 10⁻¹ Pa, with exposure quantities ranging up to 8800 Pa·s. The counting rate of the CEM is shown to consist of a prompt (Type I) signal caused by gas-phase tritium and a residual (Type II) signal, probably caused by near-surface tritium. The potential for using CEMs for observing the dynamics of tritium adsorption and absorption is discussed.
Evaluation and application of summary statistic imputation to discover new height-associated loci.
Rüeger, Sina; McDaid, Aaron; Kutalik, Zoltán
2018-05-01
As most of the heritability of complex traits is attributed to common and low-frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discovering the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome, as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and its practical utility have not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that genotype imputation boasts a 3- to 5-fold lower root-mean-square error and better distinguishes true associations from null ones: we observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01, and 0.05, using summary statistics imputation yielded a decrease in statistical power by 9, 43 and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci. Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian
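Summary statistics imputation predicts the z-score of an untyped variant from typed z-scores via the LD correlation matrix, z_u = C_ut C_tt⁻¹ z_t. A two-SNP toy example follows; the LD values are invented, and real methods add regularization and the paper's variable-sample-size correction, both omitted here.

```python
def impute_z(c_ut, c_tt, z_t):
    """Conditional-expectation imputation of an untyped variant's z-score.
    c_tt is the 2x2 LD matrix among typed SNPs (inverted directly);
    c_ut holds the LD of the untyped SNP with each typed SNP."""
    (a, b), (c, d) = c_tt
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    w = [sum(c_ut[i] * inv[i][j] for i in range(2)) for j in range(2)]
    return sum(w[j] * z_t[j] for j in range(2))

c_tt = [[1.0, 0.4], [0.4, 1.0]]   # LD among the two typed SNPs
c_ut = [0.8, 0.5]                 # LD of the untyped SNP with each typed SNP
z_t = [4.0, 3.0]                  # observed z-scores at the typed SNPs
print(round(impute_z(c_ut, c_tt, z_t), 3))  # → 3.5
```

Because the imputed z-score is a shrunken linear combination of the typed ones, its effective sample size is lower, which is consistent with the power loss reported above relative to genotype imputation.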
Rhinoplasty for the multiply revised nose.
Foda, Hossam M T
2005-01-01
To evaluate the problems encountered on revising a multiply operated nose and the methods used in correcting such problems. The study included 50 cases presenting for revision rhinoplasty after having had 2 or more previous rhinoplasties. An external rhinoplasty approach was used in all cases. Simultaneous septal surgery was done whenever indicated. All cases were followed for a mean period of 32 months (range, 1.5-8 years). Evaluation of the surgical result depended on clinical examination, comparison of pre- and postoperative photographs, and the degree of patients' satisfaction with their aesthetic and functional outcome. Functionally, 68% suffered nasal obstruction that was mainly caused by septal deviations and nasal valve problems. Aesthetically, the most common deformities of the upper two thirds of the nose included pollybeak (64%), dorsal irregularities (54%), dorsal saddle (44%), and open roof deformity (42%), whereas the deformities of the lower third included depressed tip (68%), tip contour irregularities (60%), and overrotated tip (42%). Nasal grafting was necessary in all cases; usually more than 1 type of graft was used in each case. Postoperatively, 79% of the patients with preoperative nasal obstruction reported improved breathing; 84% were satisfied with their aesthetic result; and only 8 cases (16%) requested further revision to correct minor deformities. Revision of a multiply operated nose is a complex and technically demanding task, yet in a good percentage of cases aesthetic as well as functional improvement is still possible.
Quantum mechanics in a multiply connected region
International Nuclear Information System (INIS)
Miyazawa, H.
1986-01-01
It is usually assumed that wave fields or wave functions are single-valued functions of space-time. However, the phase of a complex field is an unobservable quantity and there is no obvious reason that it must be single-valued. On this point quantum mechanics in multiply connected regions is not well formulated. This ambiguity appears, e.g., in the case of the Bohm-Aharonov effect concerning the observability of the vector potential around a magnetic flux. The author discusses the single- or multiple-valuedness of wave functions and attempts to see whether such an effect really exists. The wave function of a charged particle in a multiply connected region is not necessarily single-valued. The condition that the ground-state energy be a minimum fixes the character of the multiple-valuedness. For a charged particle around a magnetic flux a multiple-valued wave function is preferable and no Bohm-Aharonov effect is observed. The minimum energy principle is proved if one also considers the interaction of the charged particle with external objects. Then, theoretically, the Bohm-Aharonov effect should not be observed. Experiments are not yet conclusive on this point.
Improved Correction of Misclassification Bias With Bootstrap Imputation.
van Walraven, Carl
2018-07-01
Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias and requires only code sensitivity and specificity, but may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. The prevalence of severe renal failure in 86 patient strata and its association with 43 covariates were determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and their association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
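A common QBA correction for prevalence is the Rogan-Gladen estimator, used here as a stand-in for the paper's QBA procedure; it illustrates how invalid (negative) estimates arise when the observed prevalence falls below the false-positive rate. The sensitivity and specificity below match the abstract; the prevalences are illustrative.

```python
def rogan_gladen(p_observed, sensitivity, specificity):
    """Correct an observed (code-based) prevalence for misclassification."""
    return (p_observed + specificity - 1) / (sensitivity + specificity - 1)

# With the code accuracy reported above (Se 0.713, Sp 0.962):
corrected = rogan_gladen(0.12, 0.713, 0.962)
print(round(corrected, 3))  # → 0.121

# Invalid result: observed prevalence below the false-positive rate 1 - Sp
# (0.038) yields a negative "corrected" prevalence.
print(rogan_gladen(0.02, 0.713, 0.962) < 0)  # → True
```

This is one concrete mechanism behind the 9.3% of invalid QBA results reported: in strata where the observed prevalence is small relative to the false-positive rate, the correction leaves the [0, 1] range.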
Outlier Removal in Model-Based Missing Value Imputation for Medical Datasets
Directory of Open Access Journals (Sweden)
Min-Wei Huang
2018-01-01
Full Text Available Many real-world medical datasets contain some proportion of missing (attribute) values. In general, missing value imputation can be performed to solve this problem, which is to provide estimations for the missing values by a reasoning process based on the (complete) observed data. However, if the observed data contain some noisy information or outliers, the estimations of the missing values may not be reliable, or may even be quite different from the real values. The aim of this paper is to examine whether a combination of instance selection from the observed data and missing value imputation offers better performance than performing missing value imputation alone. In particular, three instance selection algorithms, DROP3, GA, and IB3, and three imputation algorithms, KNNI, MLP, and SVM, are used in order to find out the best combination. The experimental results show that performing instance selection can have a positive impact on missing value imputation over the numerical data type of medical datasets, and specific combinations of instance selection and imputation methods can improve the imputation results over the mixed data type of medical datasets. However, instance selection does not have a definitely positive impact on the imputation result for categorical medical datasets.
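The combination studied above can be caricatured in a few lines: remove outlying observed values before using them for imputation. A crude z-score filter stands in for DROP3/GA/IB3 and mean imputation for KNNI/MLP/SVM; the threshold and the data are invented.

```python
def zscore_filter(values, cutoff=1.5):
    """Keep only values within `cutoff` population standard deviations of
    the mean; a crude stand-in for an instance selection algorithm."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if std == 0 or abs(v - mean) / std <= cutoff]

def mean_impute(observed):
    """Fill a missing value with the mean of the observed values."""
    return sum(observed) / len(observed)

observed = [5.1, 4.9, 5.0, 5.2, 50.0]            # 50.0 is an outlier
print(round(mean_impute(observed), 2))            # → 14.04 (distorted)
print(round(mean_impute(zscore_filter(observed)), 2))  # → 5.05
```

The outlier pulls the naive imputation far from the bulk of the data; filtering first restores a sensible fill value, which is the paper's point for numerical attributes.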
Whole-Genome Sequencing Coupled to Imputation Discovers Genetic Signals for Anthropometric Traits
I. Tachmazidou (Ioanna); Süveges, D. (Dániel); J. Min (Josine); G.R.S. Ritchie (Graham R.S.); Steinberg, J. (Julia); K. Walter (Klaudia); V. Iotchkova (Valentina); J.A. Schwartzentruber (Jeremy); J. Huang (Jian); Y. Memari (Yasin); McCarthy, S. (Shane); Crawford, A.A. (Andrew A.); C. Bombieri (Cristina); M. Cocca (Massimiliano); A.-E. Farmaki (Aliki-Eleni); T.R. Gaunt (Tom); P. Jousilahti (Pekka); M.N. Kooijman (Marjolein); Lehne, B. (Benjamin); G. Malerba (Giovanni); S. Männistö (Satu); A. Matchan (Angela); M.C. Medina-Gomez (Carolina); S. Metrustry (Sarah); A. Nag (Abhishek); I. Ntalla (Ioanna); L. Paternoster (Lavinia); N.W. Rayner (Nigel William); C. Sala (Cinzia); W.R. Scott (William R.); H.A. Shihab (Hashem A.); L. Southam (Lorraine); B. St Pourcain (Beate); M. Traglia (Michela); K. Trajanoska (Katerina); Zaza, G. (Gialuigi); W. Zhang (Weihua); M.S. Artigas; Bansal, N. (Narinder); M. Benn (Marianne); Chen, Z. (Zhongsheng); P. Danecek (Petr); Lin, W.-Y. (Wei-Yu); A. Locke (Adam); J. Luan (Jian'An); A.K. Manning (Alisa); Mulas, A. (Antonella); C. Sidore (Carlo); A. Tybjaerg-Hansen; A. Varbo (Anette); M. Zoledziewska (Magdalena); C. Finan (Chris); Hatzikotoulas, K. (Konstantinos); A.E. Hendricks (Audrey E.); J.P. Kemp (John); A. Moayyeri (Alireza); Panoutsopoulou, K. (Kalliope); Szpak, M. (Michal); S.G. Wilson (Scott); M. Boehnke (Michael); F. Cucca (Francesco); Di Angelantonio, E. (Emanuele); C. Langenberg (Claudia); C.M. Lindgren (Cecilia M.); McCarthy, M.I. (Mark I.); A.P. Morris (Andrew); B.G. Nordestgaard (Børge); R.A. Scott (Robert); M.D. Tobin (Martin); N.J. Wareham (Nick); P.R. Burton (Paul); J.C. Chambers (John); Smith, G.D. (George Davey); G.V. Dedoussis (George); J.F. Felix (Janine); O.H. Franco (Oscar); Gambaro, G. (Giovanni); P. Gasparini (Paolo); C.J. Hammond (Christopher J.); A. Hofman (Albert); V.W.V. Jaddoe (Vincent); M.E. Kleber (Marcus); J.S. Kooner (Jaspal S.); M. Perola (Markus); C.L. Relton (Caroline); S.M. Ring (Susan); F. Rivadeneira Ramirez (Fernando); V. Salomaa (Veikko); T.D. Spector (Timothy); O. Stegle (Oliver); D. Toniolo (Daniela); A.G. Uitterlinden (André); I.E. Barroso (Inês); C.M.T. Greenwood (Celia); Perry, J.R.B. (John R.B.); Walker, B.R. (Brian R.); A.S. Butterworth (Adam); Y. Xue (Yali); R. Durbin (Richard); K.S. Small (Kerrin); N. Soranzo (Nicole); N.J. Timpson (Nicholas); E. Zeggini (Eleftheria)
2016-01-01
textabstractDeep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the
Whole-Genome Sequencing Coupled to Imputation Discovers Genetic Signals for Anthropometric Traits
DEFF Research Database (Denmark)
Tachmazidou, Ioanna; Süveges, Dániel; Min, Josine L
2017-01-01
Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader alleli...
48 CFR 1830.7002-4 - Determining imputed cost of money.
2010-10-01
Section 1830.7002-4, Federal Acquisition Regulations System, National Aeronautics and Space Administration: Determining imputed cost of money. (a) Determine the imputed cost of money for an asset under construction, fabrication, or development by applying a cost of money rate (see 1830.7002-2) to the representative...
[Imputing missing data in public health: general concepts and application to dichotomous variables].
Hernández, Gilma; Moriña, David; Navarro, Albert
The presence of missing data in collected variables is common in health surveys, but the subsequent imputation thereof at the time of analysis is not. Working with imputed data may have certain benefits regarding the precision of the estimators and the unbiased identification of associations between variables. The imputation process is probably still little understood by many non-statisticians, who view this process as highly complex and with an uncertain goal. To clarify these questions, this note aims to provide a straightforward, non-exhaustive overview of the imputation process to enable public health researchers to ascertain its strengths. All of this is done in the context of dichotomous variables, which are commonplace in public health. To illustrate these concepts, an example in which missing data are handled by means of simple and multiple imputation is introduced. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
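A minimal contrast between single and multiple imputation of a dichotomous variable, with the m completed-data proportions pooled by averaging (Rubin's rule for point estimates; the variance pooling that motivates multiple imputation is omitted). The data and m are invented.

```python
import random

rng = random.Random(42)
observed = [1] * 30 + [0] * 50   # 80 observed values; 20 are missing
n_missing = 20
n_total = len(observed) + n_missing
p_obs = sum(observed) / len(observed)   # 0.375 among the observed

# Single imputation: fill every missing value with the modal category (0).
# This drags the estimate toward the mode and understates uncertainty.
single = sum(observed) / n_total

# Multiple imputation: draw each missing value as Bernoulli(p_obs), m times,
# then pool the m completed-data proportions by averaging.
m = 50
estimates = []
for _ in range(m):
    fills = [1 if rng.random() < p_obs else 0 for _ in range(n_missing)]
    estimates.append((sum(observed) + sum(fills)) / n_total)
pooled = sum(estimates) / m

print(round(single, 3), round(pooled, 3))
```

Under a missing-completely-at-random assumption the pooled estimate stays near the observed proportion 0.375, while modal single imputation is biased downward to 0.300; this is the kind of benefit the note above describes.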
Imputing data that are missing at high rates using a boosting algorithm
Energy Technology Data Exchange (ETDEWEB)
Cauthen, Katherine Regina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lambert, Gregory [Apple Inc., Cupertino, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lefantzi, Sophia [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2016-09-01
Traditional multiple imputation approaches may perform poorly for datasets with high rates of missingness unless a large number of imputations, m, is used. This paper implements an alternative machine learning-based approach to imputing data that are missing at high rates. Here, we use boosting to create a strong learner from a weak learner fitted to a dataset missing many observations. This approach may be applied to a variety of types of learners (models). The approach is demonstrated by application to a spatiotemporal dataset for predicting dengue outbreaks in India from meteorological covariates. A Bayesian spatiotemporal CAR model is boosted to produce imputations, and the overall RMSE from a k-fold cross-validation is used to assess imputation accuracy.
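The boosting idea, reduced to a toy: a weak stump learner is repeatedly refit to residuals, and the boosted model fills in missing responses from a covariate. This is generic L2 boosting on invented, noiseless data, not the paper's boosted Bayesian spatiotemporal CAR model.

```python
def fit_stump(x, r):
    """Weak learner: best single threshold on x minimizing squared error of r."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((ri - (ml if xi <= t else mr)) ** 2 for xi, ri in zip(x, r))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi: ml if xi <= t else mr

def boost(x, y, rounds=20, shrink=0.5):
    """L2 boosting: accumulate shrunken stumps fit to successive residuals."""
    stumps, resid = [], list(y)
    for _ in range(rounds):
        s = fit_stump(x, resid)
        stumps.append(s)
        resid = [r - shrink * s(xi) for xi, r in zip(x, resid)]
    return lambda xi: sum(shrink * s(xi) for s in stumps)

# Observed rows; y is missing wherever only x was recorded.
x_obs = [1, 2, 3, 4, 5, 6]
y_obs = [2.0, 2.0, 2.0, 6.0, 6.0, 6.0]   # noiseless step so the fit is easy to verify
model = boost(x_obs, y_obs)
print(round(model(2), 1), round(model(5), 1))  # → 2.0 6.0
```

Here the weak learner alone explains little per round, but the boosted sum recovers the step function almost exactly; imputations for records with known x are then just `model(x)`.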
Tritium-caused background currents in electron multipliers
International Nuclear Information System (INIS)
Malinowski, M.E.
1979-05-01
One channel electron multiplier (Galileo No. 4501) and one 14-stage Be/Cu multiplier (Dumont No. SPM3) were exposed to tritium at pressures between approx. 10^-7 Torr and 10^-3 Torr, in amounts from approx. 10^-5 Torr·s to 60 Torr·s, and the β-decay-caused currents in the multipliers were measured. The background currents in both multipliers consisted of two components: (1) a high, reversible current proportional to the tritium exposure pressure; and (2) a lower, irreversible background current which increased with increasing cumulative tritium exposure. The β-decay-caused currents in each multiplier increased in the same way with exposure, suggesting that the detected electrons arose from decaying tritium adsorbed on surfaces external to the multipliers.
Hadamard Multipliers and Abel Dual of Hardy Spaces
Directory of Open Access Journals (Sweden)
Paweł Mleczko
2016-01-01
Full Text Available The paper is devoted to the study of Hadamard multipliers of functions from the abstract Hardy classes generated by rearrangement invariant spaces. In particular, the relation between the existence of such a multiplier and the boundedness of the appropriate convolution operator on spaces of measurable functions is presented. As an application, a description of Hadamard multipliers into H∞ is given and an Abel-type theorem for the mentioned Hardy spaces is proved.
φ-Multipliers on Banach Algebras and Topological Modules
Adib, Marjan
2015-01-01
We prove some results concerning Arens regularity and amenability of the Banach algebra $M_{\phi}(A)$ of all $\phi$-multipliers on a given Banach algebra $A$. We also consider $\phi$-multipliers in the general topological module setting and investigate some of their properties. We discuss the $\phi$-strict and $\phi$-uniform topologies on $M_{\phi}(A)$. A characterization of $\phi$-multipliers on the $L_1(G)$-module $L_p(G)$, where $G$ is a compact group, is given.
Jerez, José M; Molina, Ignacio; García-Laencina, Pedro J; Alba, Emilio; Ribelles, Nuria; Martín, Miguel; Franco, Leonardo
2010-10-01
Missing data imputation is an important task in cases where it is crucial to use all available data and not discard records with missing values. This work evaluates the performance of several statistical and machine learning imputation methods that were used to predict recurrence in patients in an extensive real breast cancer data set. Imputation methods based on statistical techniques, e.g., mean, hot-deck and multiple imputation, and machine learning techniques, e.g., multi-layer perceptron (MLP), self-organising maps (SOM) and k-nearest neighbour (KNN), were applied to data collected through the "El Álamo-I" project, and the results were then compared to those obtained from the listwise deletion (LD) method. The database includes demographic, therapeutic and recurrence-survival information from 3679 women with operable invasive breast cancer diagnosed in 32 different hospitals belonging to the Spanish Breast Cancer Research Group (GEICAM). The accuracy of predictions of early cancer relapse was measured using artificial neural networks (ANNs), in which different ANNs were estimated using the data sets with imputed missing values. The imputation methods based on machine learning algorithms outperformed the statistical imputation methods in the prediction of patient outcome. Friedman's test revealed a significant difference (p=0.0091) in the observed area under the ROC curve (AUC) values, and the pairwise comparison test showed that the AUCs for MLP, KNN and SOM were significantly higher (p=0.0053, p=0.0048 and p=0.0071, respectively) than the AUC from the LD-based prognosis model. The methods based on machine learning techniques were the best suited for the imputation of missing values and led to a significant enhancement of prognosis accuracy compared to imputation methods based on statistical procedures. Copyright © 2010 Elsevier B.V. All rights reserved.
Performance of gas electron multiplier (GEM) detector
International Nuclear Information System (INIS)
Han, S. H.; Moon, B. S.; Kim, Y. K.; Chung, C. E.; Kang, H. D.; Cho, H. S.
2002-01-01
We have investigated in detail the operating properties of Gas Electron Multiplier (GEM) detectors with a double-conical and a cylindrical hole structure over a wide range of external fields and GEM voltages. With the double-conical GEM, the gain gradually increased with time by 10%, whereas this surface charging was eliminated with the cylindrical GEM. Effective gains above 1000 were easily obtained over a wide range of collection field strengths in an Ar/CO2 (70/30) gas mixture. The transparency and electron collection efficiency were found to depend on the ratio of the external field to the applied GEM voltage; the mutual influence of the drift and collection fields was found to be negligible.
Charge transfer in gas electron multipliers
Energy Technology Data Exchange (ETDEWEB)
Ottnad, Jonathan; Ball, Markus; Ketzer, Bernhard; Ratza, Viktor; Razzaghi, Cina [HISKP, Bonn University, Nussallee 14-16, D-53115 Bonn (Germany)
2015-07-01
In order to efficiently employ a Time Projection Chamber (TPC) at interaction rates above ~1 kHz, as foreseen e.g. in the ALICE experiment (CERN) and at CB-ELSA (Bonn), a continuous operation and readout mode is required. A necessary prerequisite is to minimize the space charge coming from the amplification system while maintaining excellent spatial and energy resolution. Unfortunately, these two goals can be in conflict with each other. Gas Electron Multipliers (GEMs) are one candidate to fulfill these requirements. It is necessary to understand the processes within the amplification structure to find optimal operating conditions. To do so, we measure the charge transfer processes in and between GEM foils with different geometries and field configurations, and use an analytical model to describe the results. This model can then be used to predict and optimize the performance. The talk gives the present status of the measurements and describes the model.
Neutron multiplier alternative for fusion reactor blankets
International Nuclear Information System (INIS)
Taczanowski, S.
1980-01-01
A proposal is given to replace the neutron multiplier, needed to enable low lithium and tritium inventories while assuring sufficient production of tritium, by an efficient moderator (7LiH or 7LiD). The advantageous effect of the intensified neutron energy degradation is due to the 1/v character of the main tritium-producing reaction. The slowing-down medium is designed to be the source of moderated neutrons for the surrounding Li (6Li-enriched) region, where most of the tritium is to be produced. The surplus tritium production remains stored in the moderator zone. Some preliminary calculations illustrating the above concept were carried out, and the neutron flux and tritium production distributions are presented. Directions for further studies are also suggested. (author)
Electronic de-multipliers; Demultiplicateurs electroniques
Energy Technology Data Exchange (ETDEWEB)
Ailloud, J
1948-07-01
The counting of a huge number of events, randomly or periodically distributed, requires the use of electronic counters which can work with a flow of up to 500,000 events per second, while mechanical systems have a much lower resolution, which leads to an important percentage of losses (non-counted events). Thus, hybrid systems are generally used which comprise an electronic part with fast counting capabilities but low recording capacity, and a mechanical part for the recording of the successive resets of the electronic part. This report describes the basic elementary circuits of these electronic counters (de-multipliers): dividers by 2 and 5 and flip-flop circuits using triode and pentode valves for the counting of events in the decimal system. (J.S.)
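The divide-by-2 and divide-by-5 stages described above cascade into a decimal (divide-by-10) counter. A small behavioural simulation of that cascade (the class and structure are illustrative, not taken from the report):

```python
class DivideByN:
    """Models one de-multiplier stage: emits a carry pulse every n input pulses."""
    def __init__(self, n):
        self.n, self.count = n, 0

    def pulse(self):
        self.count = (self.count + 1) % self.n
        return self.count == 0          # carry out on wrap-around

# A decimal (divide-by-10) stage built by cascading a divide-by-2 into a
# divide-by-5, as in the counters described above.
div2, div5 = DivideByN(2), DivideByN(5)
carries = 0
for _ in range(1000):                   # 1000 input events
    if div2.pulse() and div5.pulse():   # div5 is clocked only by div2's carries
        carries += 1
print(carries)                          # prints 100: one carry per 10 events
```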
Four-gate transistor analog multiplier circuit
Mojarradi, Mohammad M. (Inventor); Blalock, Benjamin (Inventor); Cristoloveanu, Sorin (Inventor); Chen, Suheng (Inventor); Akarvardar, Kerem (Inventor)
2011-01-01
A differential-output analog multiplier circuit utilizing four G^4-FETs, each source connected to a current source. The four G^4-FETs may be grouped into two pairs of two G^4-FETs each, where one pair has its drains connected to a load and the other pair has its drains connected to another load. The differential output voltage is taken at the two loads. In one embodiment, for each G^4-FET, the first and second junction gates are connected together, a first input voltage is applied to the front gates of each pair, and a second input voltage is applied to the first junction gates of each pair. Other embodiments are described and claimed.
Fabrication and measurement of gas electron multiplier
International Nuclear Information System (INIS)
Zhang Minglong; Xia Yiben; Wang Linjun; Gu Beibei; Wang Lin; Yang Ying
2005-01-01
Gas electron multipliers (GEMs) with special performance have been widely used in the field of radiation detectors. In this work, a GEM film was fabricated from a 50 μm-thick Kapton film by thermal evaporation and laser mask drilling. The GEM film has many uniformly arrayed holes with a diameter of 100 μm and a gap of 223 μm. It was then assembled into a gas-flow detector with an effective area of 3 × 3 cm2; 5.9 keV X-rays from a 55Fe source were used to measure the pulse-height distribution of the GEM operating at various high voltages and gas proportions. The effect of the high voltage and gas proportion on the count rate and the energy resolution is discussed in detail. The results indicate that the GEM has a very high signal-to-noise ratio and a good energy resolution of 18.2%. (authors)
Faster Double-Size Bipartite Multiplication out of Montgomery Multipliers
Yoshino, Masayuki; Okeya, Katsuyuki; Vuillaume, Camille
This paper proposes novel algorithms for computing double-size modular multiplications with few modulus-dependent precomputations. Low-end devices such as smartcards are usually equipped with hardware Montgomery multipliers. However, due to progress in mathematical attacks, security institutions such as NIST have steadily demanded longer bit-lengths for public-key cryptography, making the multipliers quickly obsolete. In an attempt to extend the lifespan of such multipliers, double-size techniques compute modular multiplications with twice the bit-length of the multipliers. Techniques are known for extending the bit-length of classical Euclidean multipliers, of Montgomery multipliers, and of the combination thereof, namely bipartite multipliers. However, unlike classical and bipartite multiplications, Montgomery multiplications involve modulus-dependent precomputations, which amount to a large part of an RSA encryption or signature verification. The proposed double-size technique simulates double-size multiplications based on single-size Montgomery multipliers, and yet precomputations are essentially free: in a 2048-bit RSA encryption or signature verification with public exponent e = 2^16 + 1, the proposal with a 1024-bit Montgomery multiplier is at least 1.5 times faster than previous double-size Montgomery multiplications.
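As background to the single-size primitive these techniques build on, here is a sketch of Montgomery multiplication with its modulus-dependent precomputation (a textbook REDC in Python with toy parameters, not the paper's double-size algorithm; requires Python 3.8+ for `pow(n, -1, r)`):

```python
def montgomery_setup(n, bits):
    """Modulus-dependent precomputation: r = 2^bits and n' = -n^-1 mod r."""
    r = 1 << bits
    n_prime = (-pow(n, -1, r)) % r      # pow(n, -1, r) needs Python 3.8+
    return r, n_prime

def redc(t, n, r, n_prime, bits):
    """Montgomery reduction: returns t * r^-1 mod n, for t < r*n."""
    m = ((t & (r - 1)) * n_prime) & (r - 1)
    u = (t + m * n) >> bits             # exact division by r
    return u - n if u >= n else u

def mont_mul(a, b, n, r, n_prime, bits):
    """The hardware multiplier primitive: computes a * b * r^-1 mod n."""
    return redc(a * b, n, r, n_prime, bits)

# Example: multiply modulo an odd n via the Montgomery domain.
n, bits = 101, 8
r, n_prime = montgomery_setup(n, bits)
a, b = 57, 43
a_bar = (a * r) % n                     # into Montgomery form
b_bar = (b * r) % n
c_bar = mont_mul(a_bar, b_bar, n, r, n_prime, bits)
c = redc(c_bar, n, r, n_prime, bits)    # out of Montgomery form
print(c, (a * b) % n)                   # both print 27
```

The `montgomery_setup` step is exactly the modulus-dependent precomputation that the paper's double-size technique seeks to avoid repeating.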
Efek Multiplier Zakat terhadap Pendapatan di Provinsi DKI Jakarta
Al Arif, M. Nur Rianto
2012-01-01
The aim of this research is to analyse the multiplier effect of zakâh revenue in DKI Jakarta, with a case study at Badan Amil Zakat, Infak, and Sadaqah (BAZIS) DKI Jakarta. The least-squares method is used to analyze the data. The coefficients are used to calculate the multiplier effect of zakâh revenue, which is then compared with the economy without zakâh revenue. The results showed a multiplier effect of 2.522 with zakâh revenue and of 3.561 for economic income without zakâh revenue. Thi...
Evaluating Imputation Algorithms for Low-Depth Genotyping-By-Sequencing (GBS) Data.
Directory of Open Access Journals (Sweden)
Ariel W Chan
Full Text Available Well-powered genomic studies require genome-wide marker coverage across many individuals. For non-model species with few genomic resources, high-throughput sequencing (HTS) methods, such as Genotyping-By-Sequencing (GBS), offer an inexpensive alternative to array-based genotyping. Although affordable, datasets derived from HTS methods suffer from sequencing error, alignment errors, and missing data, all of which introduce noise and uncertainty to variant discovery and genotype calling. Under such circumstances, meaningful analysis of the data is difficult. Our primary interest lies in the issue of how one can accurately infer or impute missing genotypes in HTS-derived datasets. Many of the existing genotype imputation algorithms and software packages were primarily developed by and optimized for the human genetics community, a field where a complete and accurate reference genome has been constructed and SNP arrays have, in large part, been the common genotyping platform. We set out to answer two questions: 1) can we use existing imputation methods developed by the human genetics community to impute missing genotypes in datasets derived from non-human species and 2) are these methods, which were developed and optimized to impute ascertained variants, amenable for imputation of missing genotypes at HTS-derived variants? We selected Beagle v.4, a widely used algorithm within the human genetics community with reportedly high accuracy, to serve as our imputation contender. We performed a series of cross-validation experiments, using GBS data collected from the species Manihot esculenta by the Next Generation (NEXTGEN) Cassava Breeding Project. NEXTGEN currently imputes missing genotypes in their datasets using a LASSO-penalized, linear regression method (denoted 'glmnet'). We selected glmnet to serve as a benchmark imputation method for this reason. We obtained estimates of imputation accuracy by masking a subset of observed genotypes, imputing, and
Evaluating Imputation Algorithms for Low-Depth Genotyping-By-Sequencing (GBS) Data.
Chan, Ariel W; Hamblin, Martha T; Jannink, Jean-Luc
2016-01-01
Well-powered genomic studies require genome-wide marker coverage across many individuals. For non-model species with few genomic resources, high-throughput sequencing (HTS) methods, such as Genotyping-By-Sequencing (GBS), offer an inexpensive alternative to array-based genotyping. Although affordable, datasets derived from HTS methods suffer from sequencing error, alignment errors, and missing data, all of which introduce noise and uncertainty to variant discovery and genotype calling. Under such circumstances, meaningful analysis of the data is difficult. Our primary interest lies in the issue of how one can accurately infer or impute missing genotypes in HTS-derived datasets. Many of the existing genotype imputation algorithms and software packages were primarily developed by and optimized for the human genetics community, a field where a complete and accurate reference genome has been constructed and SNP arrays have, in large part, been the common genotyping platform. We set out to answer two questions: 1) can we use existing imputation methods developed by the human genetics community to impute missing genotypes in datasets derived from non-human species and 2) are these methods, which were developed and optimized to impute ascertained variants, amenable for imputation of missing genotypes at HTS-derived variants? We selected Beagle v.4, a widely used algorithm within the human genetics community with reportedly high accuracy, to serve as our imputation contender. We performed a series of cross-validation experiments, using GBS data collected from the species Manihot esculenta by the Next Generation (NEXTGEN) Cassava Breeding Project. NEXTGEN currently imputes missing genotypes in their datasets using a LASSO-penalized, linear regression method (denoted 'glmnet'). We selected glmnet to serve as a benchmark imputation method for this reason. We obtained estimates of imputation accuracy by masking a subset of observed genotypes, imputing, and calculating the
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
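To illustrate the censoring problem the abstract addresses, the sketch below simulates an AR(1) series, left-censors it at a detection limit, and draws the censored values from a normal truncated above at the limit via rejection sampling. This uses marginal draws only, a much cruder scheme than the paper's Gaussian imputation, and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an AR(1) series and left-censor it at a detection limit.
n, phi = 2000, 0.7
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + rng.normal()
limit = -0.5
censored = z < limit

def autocov(x, lag):
    """Sample autocovariance at the given lag."""
    xc = x - x.mean()
    return np.mean(xc[:-lag] * xc[lag:])

# Naive handling: replace censored values by the detection limit itself.
z_naive = np.where(censored, limit, z)

# Marginal Gaussian imputation: draw each censored value from a normal
# truncated above at the limit (simple rejection sampling).
z_imp = z.copy()
sd = z[~censored].std()
for i in np.where(censored)[0]:
    draw = rng.normal(0, sd)
    while draw >= limit:
        draw = rng.normal(0, sd)
    z_imp[i] = draw

# Compare lag-1 autocovariances of the naive, imputed, and uncensored series.
print(autocov(z_naive, 1), autocov(z_imp, 1), autocov(z, 1))
```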
Traffic Speed Data Imputation Method Based on Tensor Completion
Directory of Open Access Journals (Sweden)
Bin Ran
2015-01-01
Full Text Available Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
Traffic speed data imputation method based on tensor completion.
Ran, Bin; Tan, Huachun; Feng, Jianshuai; Liu, Ying; Wang, Wuhong
2015-01-01
Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
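HaLRTC itself operates on tensors; a simpler member of the same low-rank-completion family is SoftImpute-style matrix completion by iterative singular-value soft-thresholding. The sketch below (synthetic low-rank "speed" data; the threshold `tau` and sizes are illustrative) conveys the core idea:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic low-rank "speed" matrix (e.g. road segments x time intervals).
A = rng.random((40, 3)) @ rng.random((3, 60)) * 30 + 40
mask = rng.random(A.shape) < 0.7          # ~30% of entries missing
X_obs = np.where(mask, A, np.nan)

def soft_impute(X, mask, tau=5.0, iters=200):
    """Iterative SVD soft-thresholding (SoftImpute): alternate a low-rank
    shrinkage step with restoring the observed entries."""
    Z = np.where(mask, X, np.nanmean(X))   # initialise missing entries at the mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Z_low = U @ np.diag(np.maximum(s - tau, 0)) @ Vt   # shrink singular values
        Z = np.where(mask, X, Z_low)       # keep observed entries fixed
    return np.where(mask, X, Z_low)

X_hat = soft_impute(X_obs, mask)
rmse = np.sqrt(np.mean((X_hat[~mask] - A[~mask])**2))
print(rmse)
```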
An Overview and Evaluation of Recent Machine Learning Imputation Methods Using Cardiac Imaging Data.
Liu, Yuzhe; Gopalakrishnan, Vanathi
2017-03-01
Many clinical research datasets have a large percentage of missing values that directly impacts their usefulness in yielding high accuracy classifiers when used for training in supervised machine learning. While missing value imputation methods have been shown to work well with smaller percentages of missing values, their ability to impute sparse clinical research data can be problem specific. We previously attempted to learn quantitative guidelines for ordering cardiac magnetic resonance imaging during the evaluation for pediatric cardiomyopathy, but missing data significantly reduced our usable sample size. In this work, we sought to determine if increasing the usable sample size through imputation would allow us to learn better guidelines. We first review several machine learning methods for estimating missing data. Then, we apply four popular methods (mean imputation, decision tree, k-nearest neighbors, and self-organizing maps) to a clinical research dataset of pediatric patients undergoing evaluation for cardiomyopathy. Using Bayesian Rule Learning (BRL) to learn ruleset models, we compared the performance of imputation-augmented models versus unaugmented models. We found that all four imputation-augmented models performed similarly to unaugmented models. While imputation did not improve performance, it did provide evidence for the robustness of our learned models.
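The gap between mean imputation and neighbour-based imputation reported above is easy to reproduce on synthetic correlated features. A minimal hand-rolled sketch (not the paper's clinical pipeline; feature names and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two correlated "clinical" features; the second has missing values.
n = 300
f1 = rng.normal(0, 1, n)
f2 = 2 * f1 + rng.normal(0, 0.3, n)
miss = rng.random(n) < 0.3
f2_obs = np.where(miss, np.nan, f2)

# Mean imputation: every missing value gets the observed mean.
f2_mean = np.where(miss, np.nanmean(f2_obs), f2_obs)

# k-nearest-neighbour imputation: average f2 over the k complete rows
# whose f1 value is closest.
def knn_impute(f1, f2_obs, miss, k=5):
    out = f2_obs.copy()
    donors = np.where(~miss)[0]
    for i in np.where(miss)[0]:
        nearest = donors[np.argsort(np.abs(f1[donors] - f1[i]))[:k]]
        out[i] = f2_obs[nearest].mean()
    return out

f2_knn = knn_impute(f1, f2_obs, miss)

rmse = lambda est: np.sqrt(np.mean((est[miss] - f2[miss])**2))
print(rmse(f2_mean), rmse(f2_knn))   # KNN tracks the truth far better
```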
Genotype Imputation for Latinos Using the HapMap and 1000 Genomes Project Reference Panels
Directory of Open Access Journals (Sweden)
Xiaoyi Gao
2012-06-01
Full Text Available Genotype imputation is a vital tool in genome-wide association studies (GWAS) and meta-analyses of multiple GWAS results. Imputation enables researchers to increase genomic coverage and to pool data generated using different genotyping platforms. HapMap samples are often employed as the reference panel. More recently, the 1000 Genomes Project resource is becoming the primary source for reference panels. Multiple GWAS and meta-analyses are targeting Latinos, the most populous and fastest growing minority group in the US. However, genotype imputation resources for Latinos are rather limited compared to individuals of European ancestry at present, largely because of the lack of good reference data. One choice of reference panel for Latinos is one derived from the population of Mexican individuals in Los Angeles contained in the HapMap Phase 3 project and the 1000 Genomes Project. However, a detailed evaluation of the quality of the imputed genotypes derived from the public reference panels has not yet been reported. Using simulation studies, the Illumina OmniExpress GWAS data from the Los Angeles Latino Eye Study and the MACH software package, we evaluated the accuracy of genotype imputation in Latinos. Our results show that the 1000 Genomes Project AMR+CEU+YRI reference panel provides the highest imputation accuracy for Latinos, and that also including Asian samples in the panel can reduce imputation accuracy. We also provide the imputation accuracy for each autosomal chromosome using the 1000 Genomes Project panel for Latinos. Our results serve as a guide to future imputation-based analysis in Latinos.
Multipliers for the Absolute Euler Summability of Fourier Series
Indian Academy of Sciences (India)
In this paper, the author has investigated necessary and sufficient conditions for the absolute Euler summability of the Fourier series with multipliers. These conditions are weaker than those obtained earlier by some workers. It is further shown that the multipliers are best possible in certain sense.
Multiplier convergent series and uniform convergence of mapping ...
Indian Academy of Sciences (India)
MS received 14 April 2011; revised 17 November 2012. Abstract. In this paper, we introduce the frame property of complex sequence sets and study the uniform convergence of nonlinear mapping series in the β-dual of spaces consisting of multiplier convergent series. Keywords. Multiplier convergent series; mapping series.
Dimension of the c-nilpotent multiplier of Lie algebras
Indian Academy of Sciences (India)
Abstract. The purpose of this paper is to derive some inequalities for dimension of the c-nilpotent multiplier of finite dimensional Lie algebras and their factor Lie algebras. We further obtain an inequality between dimensions of c-nilpotent multiplier of Lie algebra L and tensor product of a central ideal by its abelianized factor ...
DEFF Research Database (Denmark)
Ma, Peipei; Lund, Mogens Sandø; Ding, X
2015-01-01
This study investigated the effect of including Nordic Holsteins in the reference population on the imputation accuracy and prediction accuracy for Chinese Holsteins. The data used in this study include 85 Chinese Holstein bulls genotyped with both 54K chip and 777K (HD) chip, 2862 Chinese cows...... was improved slightly when using the marker data imputed based on the combined HD reference data, compared with using the marker data imputed based on the Chinese HD reference data only. On the other hand, when using the combined reference population including 4398 Nordic Holstein bulls, the accuracy...... to increase reference population rather than increasing marker density...
Optimizing strassen matrix multiply on GPUs
ul Hasan Khan, Ayaz; Al-Mouhamed, Mayez; Fatayer, Allam
2015-01-01
© 2015 IEEE. Many-core systems are basically designed for applications having large data parallelism. Strassen Matrix Multiply (MM) can be formulated as a depth-first (DFS) traversal of a recursion tree where all cores work in parallel on computing each of the NxN sub-matrices, which reduces storage at the detriment of large data motion to gather and aggregate the results. We propose Strassen and Winograd algorithms (S-MM and W-MM) based on three optimizations: a set of basic algebra functions to reduce overhead, invoking the efficient CUBLAS 5.5 library, and parameter-tuning of parametric kernels to improve resource occupancy. On GPUs, W-MM and S-MM with one recursion level outperform the CUBLAS 5.5 library, running up to twice as fast for large arrays satisfying N>=2048 and N>=3072, respectively. Compared to the NVIDIA SDK library, S-MM and W-MM achieved speedups between 20x and 80x for the above arrays. The proposed approach can be used to enhance the performance of the CUBLAS and MKL libraries.
Optimizing strassen matrix multiply on GPUs
ul Hasan Khan, Ayaz
2015-06-01
© 2015 IEEE. Many-core systems are basically designed for applications having large data parallelism. Strassen Matrix Multiply (MM) can be formulated as a depth-first (DFS) traversal of a recursion tree where all cores work in parallel on computing each of the NxN sub-matrices, which reduces storage at the detriment of large data motion to gather and aggregate the results. We propose Strassen and Winograd algorithms (S-MM and W-MM) based on three optimizations: a set of basic algebra functions to reduce overhead, invoking the efficient CUBLAS 5.5 library, and parameter-tuning of parametric kernels to improve resource occupancy. On GPUs, W-MM and S-MM with one recursion level outperform the CUBLAS 5.5 library, running up to twice as fast for large arrays satisfying N>=2048 and N>=3072, respectively. Compared to the NVIDIA SDK library, S-MM and W-MM achieved speedups between 20x and 80x for the above arrays. The proposed approach can be used to enhance the performance of the CUBLAS and MKL libraries.
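One recursion level of Strassen, as used by S-MM and W-MM above, replaces 8 block multiplications with 7. A NumPy sketch of that single level (the papers' GPU kernels and tuning are not reproduced here):

```python
import numpy as np

def strassen_once(A, B):
    """One recursion level of Strassen: 7 sub-multiplications instead of 8.
    Assumes square matrices with even dimension."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty_like(A)
    C[:n, :n] = M1 + M4 - M5 + M7
    C[:n, n:] = M3 + M5
    C[n:, :n] = M2 + M4
    C[n:, n:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
assert np.allclose(strassen_once(A, B), A @ B)
```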
RIDDLE: Race and ethnicity Imputation from Disease history with Deep LEarning
Kim, Ji-Sung; Gao, Xin; Rzhetsky, Andrey
2018-01-01
are predictive of race and ethnicity. We used these characterizations of informative features to perform a systematic comparison of differential disease patterns by race and ethnicity. The fact that clinical histories are informative for imputing race
Imputation methods for filling missing data in urban air pollution data for Malaysia
Directory of Open Access Journals (Sweden)
Nur Afiqah Zakaria
2018-06-01
Full Text Available The air quality measurement data obtained from a continuous ambient air quality monitoring (CAAQM) station usually contain missing data. Missing observations usually occur due to machine failure, routine maintenance and human error. In this study, hourly monitoring data of CO, O3, PM10, SO2, NOx, NO2, ambient temperature and humidity were used to evaluate four imputation methods (Mean Top Bottom, Linear Regression, Multiple Imputation and Nearest Neighbour). The air pollutant observations were simulated at four percentages of missing data, i.e. 5%, 10%, 15% and 20%. Performance measures, namely the Mean Absolute Error, Root Mean Squared Error, Coefficient of Determination and Index of Agreement, were used to describe the goodness of fit of the imputation methods. Based on these performance measures, the Mean Top Bottom method was selected as the most appropriate imputation method for filling in the missing values in air pollutant data.
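The four performance measures named above are standard; a compact sketch of their formulas on illustrative numbers (the data below are made up, not from the study):

```python
import numpy as np

def mae(obs, pred):
    return np.mean(np.abs(obs - pred))

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def r2(obs, pred):
    """Coefficient of determination."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1 - ss_res / ss_tot

def index_of_agreement(obs, pred):
    """Willmott's index of agreement, bounded in [0, 1]."""
    num = np.sum((obs - pred) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1 - num / den

obs  = np.array([12.0, 15.0, 14.0, 18.0, 20.0])   # e.g. true pollutant values
pred = np.array([11.5, 15.5, 13.0, 18.5, 19.0])   # imputed values
print(mae(obs, pred), rmse(obs, pred), r2(obs, pred), index_of_agreement(obs, pred))
```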
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
Design of two easily-testable VLSI array multipliers
Energy Technology Data Exchange (ETDEWEB)
Ferguson, J.; Shen, J.P.
1983-01-01
Array multipliers are well-suited to VLSI implementation because of the regularity of their iterative structure. However, most VLSI circuits are very difficult to test. This paper shows that, with appropriate cell design, array multipliers can be designed to be very easily testable. An array multiplier is called c-testable if all its adder cells can be exhaustively tested while requiring only a constant number of test patterns. The testability of two well-known array multiplier structures is studied. The conventional design of the carry-save array multiplier is shown to be not c-testable. However, a modified design, using a modified adder cell, is generated and shown to be c-testable, requiring only 16 test patterns. Similar results are obtained for the Baugh-Wooley two's complement array multiplier: a modified design is shown to be c-testable, requiring 55 test patterns. The implementation of a practical c-testable 16x16 array multiplier is also presented. 10 references.
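The adder-cell structure whose testability is analysed above can be emulated in software: AND-gate partial products summed by full-adder cells. A small unsigned behavioural sketch (not the modified c-testable cell design):

```python
def full_adder(a, b, cin):
    """One adder cell of the array: returns (sum, carry)."""
    s = a ^ b ^ cin
    carry = (a & b) | (a & cin) | (b & cin)
    return s, carry

def ripple_add(a_bits, b_bits):
    """Ripple-carry addition of two equal-length bit lists (LSB first)."""
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

def array_multiply(x, y, bits=4):
    """Array multiplication: AND-gate partial products summed by adder cells."""
    acc = [0] * (2 * bits)
    for row in range(bits):
        yb = (y >> row) & 1
        # Partial-product row, shifted left by the row index.
        pp = [0] * row + [((x >> i) & 1) & yb for i in range(bits)] + [0] * (bits - row)
        acc = ripple_add(acc, pp)[:2 * bits]
    return sum(b << i for i, b in enumerate(acc))

# Exhaustive check over all 4-bit operands (note: c-testability is about
# constant-size test sets for the cells, not exhaustive operand testing).
assert all(array_multiply(x, y) == x * y for x in range(16) for y in range(16))
```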
Directory of Open Access Journals (Sweden)
Abbas Mikhchi
2016-01-01
Full Text Available Abstract Background Genotype imputation is an important process for predicting unknown genotypes, in which a reference population with dense genotypes is used to predict missing genotypes for both human and animal genetic variation at low cost. Machine learning methods, especially boosting methods, have been used in genetic studies to explore the underlying genetic profile of disease and to build models capable of predicting missing values of a marker. Methods In this study, strategies and factors affecting the imputation accuracy of parent-offspring trios were compared from lower-density SNP panels (5K) to a high-density (10K) SNP panel using three different boosting methods, namely TotalBoost (TB), LogitBoost (LB) and AdaBoost (AB). The methods were employed on simulated data to impute the un-typed SNPs in parent-offspring trios. Four datasets were simulated: G1 (100 trios with 5K SNPs), G2 (100 trios with 10K SNPs), G3 (500 trios with 5K SNPs), and G4 (500 trios with 10K SNPs). In all four datasets, all parents were genotyped completely, and offspring were genotyped with a lower-density panel. Results Comparison of the three imputation methods showed that LB outperformed AB and TB in imputation accuracy. Computation times differed between methods; AB was the fastest algorithm. Higher SNP densities increased the accuracy of imputation, and larger numbers of trios (i.e., 500) improved the performance of LB and TB. Conclusions All three methods do well in terms of imputation accuracy, and the denser chip is recommended for imputation of parent-offspring trios.
Simple nuclear norm based algorithms for imputing missing data and forecasting in time series
Butcher, Holly Louise; Gillard, Jonathan William
2017-01-01
There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.
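The alternating-projections idea can be sketched in a few lines: project onto low-rank matrices, then restore the observed entries, and repeat. The abstract does not give the authors' exact algorithm, so a rank-1 power-iteration step stands in for the nuclear-norm step here; all names and data are illustrative.

```python
def rank1_approx(A, iters=50):
    """Best rank-1 approximation of matrix A via power iteration."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
    # u is unit-norm, v carries the singular value, so u v^T is the projection
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]

def impute(A, mask, sweeps=100):
    """Alternate between a low-rank projection and restoring observed entries."""
    X = [row[:] for row in A]
    for _ in range(sweeps):
        L = rank1_approx(X)
        X = [[A[i][j] if mask[i][j] else L[i][j]
              for j in range(len(A[0]))] for i in range(len(A))]
    return X

# Rank-1 matrix outer([1,2,3],[4,5]) with one entry missing (mask False)
A = [[4.0, 5.0], [8.0, 0.0], [12.0, 15.0]]
mask = [[True, True], [True, False], [True, True]]
X = impute(A, mask)
# X[1][1] should recover a value close to the true 10.0
```

The same project-and-restore loop extends to higher target ranks by keeping more singular vectors; the nuclear-norm variant instead soft-thresholds all singular values.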
Missing value imputation for microarray gene expression data using histone acetylation information
Directory of Open Access Journals (Sweden)
Feng Jihua
2008-05-01
Full Text Available Abstract Background Accurately estimating missing values in microarray data is an important pre-processing step, because complete datasets are required by numerous expression profile analyses in bioinformatics. Although several methods have been suggested, their performance is not satisfactory for datasets with high missing percentages. Results This paper explores the feasibility of imputing missing values with the help of gene regulatory mechanisms. An imputation framework called the histone acetylation information aided imputation method (HAIimpute) is presented. It incorporates histone acetylation information into the conventional KNN (k-nearest neighbor) and LLS (local least squares) imputation algorithms for final prediction of the missing values. The experimental results indicate that the use of acetylation information can provide significant improvements in microarray imputation accuracy. The HAIimpute methods consistently improve on the widely used KNN and LLS methods in terms of normalized root mean squared error (NRMSE). Meanwhile, the genes imputed by the HAIimpute methods are more correlated with the original complete genes in terms of Pearson correlation coefficients. Furthermore, the proposed methods also outperform GOimpute, one of the existing related methods that uses functional similarity as the external information. Conclusion We demonstrated that the use of histone acetylation information can greatly improve imputation performance, especially at high missing percentages. This idea can be generalized to various imputation methods to improve their performance. Moreover, as more knowledge accumulates on gene regulatory mechanisms beyond histone acetylation, the performance of our approach can be further improved and verified.
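The KNN step that HAIimpute builds on can be sketched in plain form; the acetylation weighting itself is not specified in enough detail here to reproduce, so this shows only plain KNN imputation over expression rows, with illustrative toy data.

```python
def knn_impute(data, row, col, k=2):
    """Impute data[row][col] as the average of that column over the k rows
    nearest to `row` in Euclidean distance, computed on the other columns."""
    target = data[row]
    dists = []
    for i, r in enumerate(data):
        if i == row or r[col] is None:
            continue
        d = sum((r[j] - target[j]) ** 2
                for j in range(len(r))
                if j != col and target[j] is not None) ** 0.5
        dists.append((d, i))
    nearest = sorted(dists)[:k]
    return sum(data[i][col] for _, i in nearest) / len(nearest)

# Toy expression matrix: rows are genes, columns are samples; one value missing.
genes = [
    [1.0, 2.0, 3.0],
    [1.1, 2.1, 3.1],
    [9.0, 9.0, 9.0],
    [1.0, 2.0, None],   # impute this entry
]
value = knn_impute(genes, row=3, col=2, k=2)
print(round(value, 3))  # 3.05, the mean of the two most similar genes
```

HAIimpute's refinement amounts to changing the distance (or the neighbour weights) so that genes with similar histone acetylation profiles count as closer.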
Production processes of multiply charged ions by electron impact
International Nuclear Information System (INIS)
Oda, Nobuo
1980-02-01
First, the foil or gas strippers and the ion sources utilizing electron-atom ionizing collisions, which are in practical use or under development for producing multiply charged ions, are compared. A review is made of the fundamental physical parameters, such as successive ionization potentials and the various ionization cross sections for electron impact, as well as of the primary processes in multiply charged ion production. Multiply charged ion production processes are then described for the different existing ion source types, such as high-temperature plasma, ion-trapping, and discharge sources. (author)
Thomas, A M; Cook, L J; Dean, J M; Olson, L M
2014-01-01
To compare results from high probability matched sets versus imputed matched sets across differing levels of linkage information. A series of linkages with varying amounts of available information were performed on two simulated datasets derived from multiyear motor vehicle crash (MVC) and hospital databases, where true matches were known. Distributions of high probability and imputed matched sets were compared against the true match population for occupant age, MVC county, and MVC hour. Regression models were fit to simulated log hospital charges and hospitalization status. High probability and imputed matched sets did not differ significantly from the true match population in occupant age, MVC county, and MVC hour in high information settings (p > 0.999). In low information settings, high probability matched sets differed significantly in occupant age and MVC county, whereas imputed matched sets did not (p > 0.493). High information settings saw no significant differences in inference for simulated log hospital charges and hospitalization status between the two methods. Both methods produced estimates significantly different from the true outcomes in low information settings; however, imputed matched sets were more robust. The level of information available to a linkage is an important consideration. High probability matched sets are suitable for high to moderate information settings and for situations involving case-specific analysis. Conversely, imputed matched sets are preferable for low information settings when conducting population-based analyses.
Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things
Directory of Open Access Journals (Sweden)
Xiaobo Yan
2015-01-01
Full Text Available This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely across a variety of domains, such as transportation, logistics, and healthcare. However, missing values are very common in the IoT for a variety of reasons, leaving experimental data incomplete; as a result, some work that depends on IoT data cannot be carried out normally, and the accuracy and reliability of data analysis results are reduced. Based on the characteristics of the data itself and the features of missing data in the IoT, this paper divides the missing data into three types and defines three corresponding missing value imputation problems. We then propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on a Gaussian mixture model (MGI). Experimental results showed that the three models can greatly and effectively improve the accuracy, reliability, and stability of missing value imputation.
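The abstract does not spell out MCL, but a plausible minimal reading of "context and linear mean" for a single sensor stream is linear interpolation between the nearest observed neighbours; the sketch below is that reading, with illustrative data.

```python
def impute_context_linear(series):
    """Fill None gaps by linear interpolation between the nearest observed
    neighbours; gaps at either edge take the nearest observed value."""
    out = list(series)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is not None:
            i += 1
            continue
        j = i
        while j < n and out[j] is None:   # find the end of this gap
            j += 1
        left = out[i - 1] if i > 0 else None
        right = out[j] if j < n else None
        for k in range(i, j):
            if left is None:
                out[k] = right
            elif right is None:
                out[k] = left
            else:
                t = (k - i + 1) / (j - i + 1)
                out[k] = left + t * (right - left)
        i = j
    return out

# e.g. temperature samples from an IoT sensor with dropped readings
readings = [20.0, None, None, 23.0, None]
print(impute_context_linear(readings))  # interior gap filled linearly, tail held
```

This handles the "context" part cheaply; the paper's MBS and MGI models target cases where such local structure is absent.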
Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M
2018-06-01
A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.
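The masked-value evaluation above boils down to a root-mean-square error over the masked entries, normalized so that analytes on different scales are comparable. One common normalization is by the observed range; the abstract does not state which normalization the authors used, so the version below is an assumption, with toy data.

```python
def nrmse(predicted, measured):
    """RMSE over masked entries, normalized by the range of measured values."""
    n = len(measured)
    rmse = (sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n) ** 0.5
    return rmse / (max(measured) - min(measured))

# Toy masked lab values: imputed predictions vs the held-out measurements
measured  = [1.0, 2.0, 3.0, 4.0]
predicted = [1.0, 2.0, 3.0, 5.0]
print(round(nrmse(predicted, measured), 4))  # 0.1667
```

Lower is better: on this scale the reported 0.342 for 3D-MICE versus 0.373 for MICE is a modest but consistent improvement.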
Photoionization of multiply charged ions at the advanced light source
International Nuclear Information System (INIS)
Schlachter, A.S.; Kilcoyne, A.L.D.; Aguilar, A.; Gharaibeh, M.F.; Emmons, E.D.; Scully, S.W.J.; Phaneuf, R.A.; Muller, A.; Schippers, S.; Alvarez, I.; Cisneros, C.; Hinojosa, G.; McLaughlin, B.M.
2004-01-01
Photoionization of multiply charged ions is studied using the merged-beams technique at the Advanced Light Source. Absolute photoionization cross sections have been measured for a variety of ions along both isoelectronic and isonuclear sequences.
Cavallo's multiplier for in situ generation of high voltage
Clayton, S. M.; Ito, T. M.; Ramsey, J. C.; Wei, W.; Blatnik, M. A.; Filippone, B. W.; Seidel, G. M.
2018-05-01
A classic electrostatic induction machine, Cavallo's multiplier, is suggested for in situ production of very high voltage in cryogenic environments. The device is suitable for generating a large electrostatic field under conditions of very small load current. Operation of the Cavallo multiplier is analyzed, with quantitative description in terms of mutual capacitances between electrodes in the system. A demonstration apparatus was constructed, and measured voltages are compared to predictions based on measured capacitances in the system. The simplicity of the Cavallo multiplier makes it amenable to electrostatic analysis using finite element software, and electrode shapes can be optimized to take advantage of a high dielectric strength medium such as liquid helium. A design study is presented for a Cavallo multiplier in a large-scale, cryogenic experiment to measure the neutron electric dipole moment.
Sociophysics of sexism: normal and anomalous petrie multipliers
Eliazar, Iddo
2015-07-01
A recent mathematical model by Karen Petrie explains how sexism towards women can arise in organizations where men and women are equally sexist. Indeed, the Petrie model predicts that such sexism will emerge whenever there is a male majority, and quantifies this majority bias by the ‘Petrie multiplier’: the square of the male/female ratio. In this paper—emulating the shift from ‘normal’ to ‘anomalous’ diffusion—we generalize the Petrie model to a stochastic Poisson model that accommodates heterogeneously sexist men and women, and that extends the ‘normal’ quadratic Petrie multiplier to ‘anomalous’ non-quadratic multipliers. The Petrie multipliers span a full spectrum of behaviors which we classify into four universal types. A variation of the stochastic Poisson model and its Petrie multipliers is further applied to the context of cyber warfare.
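The 'normal' Petrie multiplier is a one-line formula, and a small deterministic check (illustrative, not from the paper) reproduces it: if every person directs the same number of sexist remarks at randomly chosen members of the opposite group, the per-capita rate of remarks received by the minority exceeds that of the majority by exactly the squared ratio.

```python
def petrie_multiplier(n_male, n_female):
    """'Normal' Petrie multiplier: the square of the male/female ratio."""
    return (n_male / n_female) ** 2

def per_capita_ratio(n_male, n_female, remarks_each=1):
    """If each person aims `remarks_each` sexist remarks at the opposite
    group, ratio of remarks received per woman to remarks received per man."""
    per_woman = n_male * remarks_each / n_female
    per_man = n_female * remarks_each / n_male
    return per_woman / per_man

print(petrie_multiplier(40, 10))   # 16.0
print(per_capita_ratio(40, 10))    # 16.0, matching the multiplier
```

The paper's generalization replaces the fixed remark count with heterogeneous Poisson rates, which is what bends the exponent away from 2.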
Atomic collisions in fusion plasmas involving multiply charged ions
International Nuclear Information System (INIS)
Salzborn, E.
1980-01-01
A short survey is given on atomic collisions involving multiply charged ions. The basic features of charge transfer processes in ion-ion and ion-atom collisions relevant to fusion plasmas are discussed. (author)
Efek Multiplier Zakat Terhadap Pendapatan di Propinsi DKI Jakarta
Directory of Open Access Journals (Sweden)
M. Nur Rianto Al Arif
2015-10-01
Full Text Available The aim of this research is to analyze the multiplier effect of zakah revenue in DKI Jakarta, a case study at Badan Amil Zakat, Infak, and Shadaqah (BAZIS) DKI Jakarta. The least squares method is used to analyze the data. The coefficients are used to calculate the multiplier effect of zakah revenue, which is compared with the economy without zakah revenue. The results showed a multiplier effect of 2.522 for zakah revenue and 3.561 for economic income without zakah revenue. This suggests that the management of zakah in BAZIS DKI Jakarta can still have a significant influence on the economy. DOI: 10.15408/aiq.v4i1.2079
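The abstract computes multipliers from estimated regression coefficients without giving the specification; the usual backbone for such income-multiplier calculations is the textbook Keynesian formula k = 1/(1 - MPC), shown here as a hedged sketch with a hypothetical marginal propensity to consume.

```python
def income_multiplier(mpc):
    """Simple Keynesian income multiplier: k = 1 / (1 - MPC), where MPC is
    the marginal propensity to consume (here taken from a regression slope)."""
    return 1.0 / (1.0 - mpc)

# A hypothetical MPC of 0.60: one extra unit of spending ultimately raises
# total income by 2.5 units as it circulates through the economy.
print(round(income_multiplier(0.60), 6))  # 2.5
```

The paper's comparison amounts to estimating two such slopes, one for the economy with zakah flows and one without, and comparing the implied multipliers.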
Multiplier-less high-speed squaring circuit for binary numbers
Sethi, Kabiraj; Panda, Rutuparna
2015-03-01
The squaring operation is important in many applications in signal processing, cryptography, etc. In general, squaring circuits reported in the literature use fast multipliers. A novel idea of a squaring circuit without using multipliers is proposed in this paper. An ancient Indian method used for squaring decimal numbers is extended here to binary numbers. The key to our success is that no multiplier is used; instead, one squaring circuit is used. The hardware architecture of the proposed squaring circuit is presented. The design is coded in VHDL, and synthesised and simulated in Xilinx ISE Design Suite 10.1 (Xilinx Inc., San Jose, CA, USA). It is implemented in a Xilinx Virtex 4vls15sf363-12 device (Xilinx Inc.). The results in terms of time delay and area are compared with both the modified Booth's algorithm and a squaring circuit using Vedic multipliers. Our proposed squaring circuit seems to have better performance in terms of both speed and area.
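The hardware details differ, but the core trick, computing x² with shifts and additions only and no multiplier, can be illustrated in software. This is a generic shift-and-add scheme, not the paper's exact Vedic architecture.

```python
def square_shift_add(x):
    """Compute x*x using only shifts and additions (no multiply operator).
    For each set bit i of x, add x shifted left by i positions."""
    result = 0
    shift = 0
    bits = x
    while bits:
        if bits & 1:
            result += x << shift
        bits >>= 1
        shift += 1
    return result

print(square_shift_add(13))   # 169
print(square_shift_add(255))  # 65025
```

In hardware the same idea becomes an array of AND gates feeding an adder tree; specializing it to squaring (rather than general multiplication) lets roughly half the partial products be merged or dropped, which is where the area and speed savings come from.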
Instructional Computing Project Uses "Multiplier Effect" to Train Florida Teachers.
Roblyer, M. D.; Castine, W. H.
1987-01-01
Reviews the efforts undertaken in the Florida Model Microcomputer Trainer Project (FMMTP) and its statewide impact. Outlines its procedural strategies, trainer curriculum, networking system, and the results of its multiplier effect. (ML)
Evaporator line for special electron tubes, in particular electron multipliers
International Nuclear Information System (INIS)
Richter, M.
1984-01-01
The invention aims at reducing the effort required to prevent short circuits when achieving certain material-dependent effects, e.g. secondary emission, by deposition through evaporation in the production of electron tubes, in particular electron multipliers.
Toghiani, S; Aggrey, S E; Rekaya, R
2016-07-01
Availability of high-density single nucleotide polymorphism (SNP) genotyping platforms has provided unprecedented opportunities to enhance breeding programmes in livestock, poultry and plant species, and to better understand the genetic basis of complex traits. Using this genomic information, genomic breeding values (GEBVs) can be estimated, and these are more accurate than conventional breeding values. The superiority of genomic selection is possible only when high-density SNP panels are used to track genes and QTLs affecting the trait. Unfortunately, even with the continuous decrease in genotyping costs, only a small fraction of the population has been genotyped with these high-density panels. It is often the case that a larger portion of the population is genotyped with low-density, low-cost SNP panels and then imputed to a higher density. Accuracy of SNP genotype imputation tends to be high when minimum requirements are met. Nevertheless, a certain rate of genotype imputation errors is unavoidable. Thus, it is reasonable to assume that the accuracy of GEBVs will be affected by imputation errors, especially their cumulative effects over time. To evaluate the impact of multi-generational selection on the accuracy of SNP genotype imputation and the reliability of resulting GEBVs, a simulation was carried out under varying updating schemes for the reference population, distances between the reference and testing sets, and approaches used for the estimation of GEBVs. Using fixed reference populations, imputation accuracy decayed by about 0.5% per generation; after 25 generations, the accuracy was only 7% lower than in the first generation. When the reference population was updated by either 1% or 5% of the top animals in the previous generations, the decay of imputation accuracy was substantially reduced. These results indicate that low-density panels are useful, especially when the generational interval between reference and testing population is small. As the generational interval
The generalization of the Schur multipliers of Bieberbach groups
Masri, Rohaidah; Hassim, Hazzirah Izzati Mat; Sarmin, Nor Haniza; Ali, Nor Muhainiah Mohd; Idrus, Nor'ashiqin Mohd
2014-12-01
The Schur multiplier is the second homology group of a group. It has been found to be isomorphic to the kernel of a homomorphism which maps the elements in the exterior square of the group to the elements in its derived subgroup. Meanwhile, a Bieberbach group is a space group which is a discrete cocompact group of isometries of oriented Euclidean space. In this research, the Schur multipliers of Bieberbach groups with cyclic point group of order two of finite dimension are computed.
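Spelled out, the isomorphism described above reads:

```latex
M(G) \;\cong\; H_2(G,\mathbb{Z}) \;\cong\; \ker\bigl(\kappa : G \wedge G \longrightarrow [G,G]\bigr),
\qquad \kappa(g \wedge h) = [g,h],
```

where G ∧ G is the exterior square of G and [G, G] is its derived subgroup, so the Schur multiplier measures exactly which exterior-square elements collapse to the identity commutator.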
Physics of subcritical multiplying regions and experimental validation
International Nuclear Information System (INIS)
Salvatores, M.
1996-01-01
The coupling of a particle accelerator with a spallation target and a subcritical multiplying region was proposed in the fifties; such a system is called here a hybrid system. This article gives some ideas about the energy balance of such a system. The possibilities of experimental validation of some properties of a subcritical multiplying region using the MASURCA facility at CEA Cadarache are examined. The results of a preliminary experiment called MUSE are presented. (A.C.)
Isometric multipliers of a vector valued Beurling algebra on a ...
Indian Academy of Sciences (India)
Throughout, let S be a nonunital faithful abelian semigroup, and let A be a commutative Banach algebra. A map σ : S → S is a multiplier [1, 4] if σ(xy) = xσ(y) = σ(x)y, x, y ∈ S. Let M(S) be the set of all multipliers of S. Then M(S) is a unital abelian semigroup under composition. Since S is faithful, S can be embedded as an ...
DEFF Research Database (Denmark)
Ma, Peipei; Brøndum, Rasmus Froberg; Qin, Zahng
2013-01-01
This study investigated the imputation accuracy of different methods, considering both the minor allele frequency and relatedness between individuals in the reference and test data sets. Two data sets from the combined population of Swedish and Finnish Red Cattle were used to test the influence...... coefficient was lower when the minor allele frequency was lower. The results indicate that Beagle and IMPUTE2 provide the most robust and accurate imputation accuracies, but considering computing time and memory usage, FImpute is another alternative method....
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods to complete the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria including accuracy, robustness, precision, and efficiency for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas multilayer perceptron type neural network and multiple imputation strategy adopted by Monte Carlo Markov Chain based on expectation-maximization (EM-MCMC) are computationally intensive ones. In addition, we propose a modification on the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis which takes spatio-temporal dependencies into account for evaluating imputation performances. Depending on the detailed graphical and quantitative analysis, it can be said that although computational methods, particularly EM-MCMC method, are computationally inefficient, they seem favorable for imputation of meteorological time series with respect to different missingness periods considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will definitely decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation particularly with computational methods since it gives more precise results in meteorological time series.
Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.
Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker
2015-01-01
The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random data (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations to the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.
Dealing with missing data in a multi-question depression scale: a comparison of imputation methods
Directory of Open Access Journals (Sweden)
Stuart Heather
2006-12-01
Full Text Available Abstract Background Missing data present a challenge to many research projects. The problem is often pronounced in studies utilizing self-report scales, and literature addressing strategies for dealing with missing data in such circumstances is scarce. The objective of this study was to compare six different imputation techniques for dealing with missing data in the Zung Self-reported Depression Scale (SDS). Methods 1580 participants from a surgical outcomes study completed the SDS, a 20-question scale that respondents complete by circling a value of 1 to 4 for each question. The sum of the responses is calculated, and respondents are classified as exhibiting depressive symptoms when their total score is over 40. Missing values were simulated by randomly selecting questions whose values were then deleted (a missing completely at random simulation). Additionally, missing at random and missing not at random simulations were completed. Six imputation methods were then considered: (1) multiple imputation, (2) single regression, (3) individual mean, (4) overall mean, (5) participant's preceding response, and (6) random selection of a value from 1 to 4. For each method, the imputed mean SDS score and standard deviation were compared to the population statistics. The Spearman correlation coefficient, percent misclassified, and the Kappa statistic were also calculated. Results When 10% of values are missing, all the imputation methods except random selection produce Kappa statistics greater than 0.80, indicating 'near perfect' agreement. MI produces the most valid imputed values with a high Kappa statistic (0.89), although both single regression and individual mean imputation also produced favorable results. As the percent of missing information increased to 30%, or when unbalanced missing data were introduced, MI maintained a high Kappa statistic. The individual mean and single regression methods produced Kappas in the 'substantial agreement' range
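Of the six methods listed, the individual-mean technique (one of the better simple performers here) is easy to state concretely. The sketch below follows the scoring rule given in the abstract; the answer values themselves are illustrative.

```python
def impute_individual_mean(responses):
    """Replace a respondent's missing items (None) with the mean of that
    same respondent's observed items."""
    observed = [r for r in responses if r is not None]
    mean = sum(observed) / len(observed)
    return [mean if r is None else r for r in responses]

def depressive_symptoms(responses):
    """SDS classification rule: a total score over 40 indicates depressive
    symptoms (20 items, each scored 1 to 4)."""
    return sum(impute_individual_mean(responses)) > 40

# 20 items with two missing answers
answers = [3, 2, 3, None, 2, 3, 2, 2, 3, 2, 2, 3, None, 2, 3, 2, 2, 3, 2, 2]
print(depressive_symptoms(answers))
```

Multiple imputation improves on this by drawing several plausible values per gap and pooling the resulting scores, which is why it stays robust at 30% missingness where single-value methods start to degrade.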
PRIMAL: Fast and accurate pedigree-based imputation from sequence data in a founder population.
Directory of Open Access Journals (Sweden)
Oren E Livne
2015-03-01
Full Text Available Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy for Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques, we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed dataset will enable us to better study the relative contribution of rare and common variants to human phenotypes, as well as parental origin effects of disease risk alleles, in >1,000 individuals at minimal cost.
Resche-Rigon, Matthieu; White, Ian R
2018-06-01
In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2016-01-01
Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
Tables of compound-discount interest rate multipliers for evaluating forestry investments.
Allen L. Lundgren
1971-01-01
Tables, prepared by computer, are presented for 10 selected compound-discount interest rate multipliers commonly used in financial analyses of forestry investments. Two sets of tables are given for each of the 10 multipliers. The first set gives multipliers for each year from 1 to 40 years; the second set gives multipliers at 5-year intervals from 5 to 160 years.
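The ten multipliers themselves are not listed in this abstract, but the most common compound-interest multipliers in forestry finance include the standard ones below; treat this as a hypothetical reconstruction rather than the table's exact contents.

```python
def future_value(i, n):
    """Multiplier growing 1 unit for n years at annual rate i: (1+i)^n."""
    return (1 + i) ** n

def present_value(i, n):
    """Discount multiplier: value today of 1 unit due in n years."""
    return 1 / (1 + i) ** n

def annuity_future_value(i, n):
    """Future value of 1 unit deposited at the end of each year for n years."""
    return ((1 + i) ** n - 1) / i

def annuity_present_value(i, n):
    """Present value of 1 unit received at the end of each year for n years."""
    return (1 - (1 + i) ** -n) / i

# e.g. a 40-year rotation at 5%:
print(round(future_value(0.05, 40), 4))   # 7.04
print(round(present_value(0.05, 40), 4))  # 0.142
```

Long rotations make these tables indispensable: revenue 40 years out at 5% is worth only about a seventh of its face value today, which is exactly the kind of lookup the 160-year tables support.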
Johnson, Eric O; Hancock, Dana B; Levy, Joshua L; Gaddis, Nathan C; Saccone, Nancy L; Bierut, Laura J; Page, Grier P
2013-05-01
A great promise of publicly sharing genome-wide association data is the potential to create composite sets of controls. However, studies often use different genotyping arrays, and imputation to a common set of SNPs has shown substantial bias: a problem which has no broadly applicable solution. Based on the idea that using differing genotyped SNP sets as inputs creates differential imputation errors and thus bias in the composite set of controls, we examined the degree to which each of the following occurs: (1) imputation based on the union of genotyped SNPs (i.e., SNPs available on one or more arrays) results in bias, as evidenced by spurious associations (type 1 error) between imputed genotypes and arbitrarily assigned case/control status; (2) imputation based on the intersection of genotyped SNPs (i.e., SNPs available on all arrays) does not evidence such bias; and (3) imputation quality varies by the size of the intersection of genotyped SNP sets. Imputations were conducted in European Americans and African Americans with reference to HapMap phase II and III data. Imputation based on the union of genotyped SNPs across the Illumina 1M and 550v3 arrays showed spurious associations for 0.2 % of SNPs: ~2,000 false positives per million SNPs imputed. Biases remained problematic for very similar arrays (550v1 vs. 550v3) and were substantial for dissimilar arrays (Illumina 1M vs. Affymetrix 6.0). In all instances, imputing based on the intersection of genotyped SNPs (as few as 30 % of the total SNPs genotyped) eliminated such bias while still achieving good imputation quality.
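The recommended fix is mechanical: impute only from SNPs genotyped on every array. In set terms, with illustrative SNP IDs:

```python
# Hypothetical genotyped-SNP sets for two different arrays
array_a = {"rs1", "rs2", "rs3", "rs4"}
array_b = {"rs2", "rs3", "rs5"}

union = array_a | array_b          # imputing from this input showed bias above
intersection = array_a & array_b   # imputing from this input eliminated it

print(sorted(intersection))  # ['rs2', 'rs3']
```

The key empirical point is that even when the intersection is as little as 30% of the total genotyped SNPs, it still supports good imputation quality while removing the spurious associations.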
A New Missing Data Imputation Algorithm Applied to Electrical Data Loggers
Directory of Open Access Journals (Sweden)
Concepción Crespo Turrado
2015-12-01
Full Text Available Nowadays, data collection is a key process in the study of electrical power networks when searching for harmonics and a lack of balance among phases. In this context, missing data for any of the main electrical variables (phase-to-neutral voltage, phase-to-phase voltage, current in each phase, and power factor) adversely affects any time series study performed. When this occurs, a data imputation process must be carried out to substitute estimated values for the missing data. This paper presents a novel missing data imputation method based on multivariate adaptive regression splines (MARS) and compares it with the well-known technique called multivariate imputation by chained equations (MICE). The results obtained demonstrate how the proposed method outperforms the MICE algorithm.
Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis
Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal
Missing values in time series data are a well-known and important problem that researchers in many fields have studied extensively. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation than other methods.
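SSA-based imputation can be sketched in its standard L2 form: embed the series into a Hankel trajectory matrix, truncate its SVD, diagonal-average back to a series, and iterate over the missing positions. The sketch below uses a synthetic series; the paper's contribution, replacing the SVD step with an outlier-robust L1 decomposition, is not reproduced here.

```python
import numpy as np

def ssa_reconstruct(x, L, r):
    """Rank-r SSA reconstruction (standard L2 version via SVD)."""
    N = len(x)
    K = N - L + 1
    H = np.column_stack([x[i:i + L] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_r = (U[:, :r] * s[:r]) @ Vt[:r]                    # rank-r approximation
    rec = np.zeros(N)                                    # diagonal (Hankel)
    cnt = np.zeros(N)                                    # averaging back
    for j in range(K):
        rec[j:j + L] += H_r[:, j]
        cnt[j:j + L] += 1
    return rec / cnt

def ssa_impute(x, L=24, r=4, n_iter=100):
    """Fill gaps by alternating SSA reconstruction and re-insertion."""
    miss = np.isnan(x)
    y = np.where(miss, np.nanmean(x), x)   # crude initialization
    for _ in range(n_iter):
        y[miss] = ssa_reconstruct(y, L, r)[miss]
    return y

t = np.arange(120)
x = np.sin(2 * np.pi * t / 12) + 0.05 * t  # seasonal series with a trend
x_miss = x.copy()
x_miss[[10, 37, 38, 90]] = np.nan
x_imp = ssa_impute(x_miss)
err = float(np.max(np.abs(x_imp - x)[[10, 37, 38, 90]]))
print(f"max error at imputed points: {err:.4f}")
```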
On multivariate imputation and forecasting of decadal wind speed missing data.
Wesonga, Ronald
2015-01-01
This paper demonstrates the application of multiple imputation by chained equations and time series forecasting to wind speed data. The study was motivated by the high prevalence of missing values in historic wind speed data. Findings based on the fully conditional specification under multiple imputation by chained equations provided reliable imputations of the missing wind speed data. Further, the forecasting model yields a smoothing parameter, alpha (0.014), close to zero, confirming that recent past observations are more suitable for forecasting wind speeds. The maximum decadal wind speed for Entebbe International Airport was estimated to be 17.6 metres per second at the 0.05 level of significance, with a bound on the error of estimation of 10.8 metres per second. The large bound on the error of estimation confirms the dynamic tendencies of wind speed at the airport under study.
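The smoothing parameter can be read through the usual recursion s_t = alpha * x_t + (1 - alpha) * s_{t-1}: under this common convention a small alpha makes the smoothed level adapt slowly, while some software reports alpha under the opposite convention, so the study's 0.014 should be read under its own parameterization. A toy sketch with made-up wind speeds:

```python
def exp_smooth(series, alpha):
    """Simple exponential smoothing: s_t = alpha*x_t + (1 - alpha)*s_{t-1}.
    Returns the final smoothed level (the one-step-ahead forecast)."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

wind = [3.1, 2.8, 3.4, 3.0, 9.0]   # hypothetical speeds (m/s); note the spike
print(exp_smooth(wind, alpha=0.014))  # barely moves from the early level
print(exp_smooth(wind, alpha=0.9))    # chases the latest observation
```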
On centralized power pool auction: a novel multipliers stabilization procedure
International Nuclear Information System (INIS)
Jimenez-Redondo, Noemi
2005-01-01
This paper addresses the Short-Term Hydro-Thermal Coordination (STHTC) problem, a large-scale, combinatorial and nonlinear optimization problem. It is usually solved using a Lagrangian Relaxation (LR) approach, which is based on the solution of the dual of the original problem. The dual problem variables are the Lagrange multipliers, which have an economic meaning: hourly electric energy prices. This paper focuses on an efficient solution of the dual of the STHTC problem. A novel multiplier stabilization technique, which significantly improves the quality of the solution, is presented. The proposed method could serve as the optimization tool used by the Independent System Operator of a centralized power pool. The solution procedure diminishes the conflict of interest in determining energy prices. A realistic large-scale case study illustrates the behavior of the presented approach. (Author)
New design of an RSFQ parallel multiply-accumulate unit
International Nuclear Information System (INIS)
Kataeva, Irina; Engseth, Henrik; Kidiyarova-Shevchenko, Anna
2006-01-01
The multiply-accumulate unit (MAC) is a central component of a successive interference canceller, an advanced receiver for W-CDMA base stations. A 4 x 4 two's complement fixed-point RSFQ MAC with rounding to 5 bits has been simulated using VHDL, and the maximum performance is equal to 24 GMACS (giga-multiply-accumulates per second). The clock distribution network has been re-designed from a linear ripple to a binary tree network in order to eliminate the data dependence of the clock propagation speed and reduce the number of Josephson junctions in clock lines. The 4 x 4 bit MAC has been designed for the HYPRES 4.5 kA cm⁻² process and its components have been experimentally tested at low frequency: the 5-bit combiner, using an exhaustive test pattern, had margins on DC bias voltage of ±18%, and the 4 x 4 parallel multiplier had margins equal to ±2%.
Multiplier Accounting of Indian Mining Industry: The Application
Hussain, Azhar; Karmakar, Netai Chandra
2017-10-01
In the previous paper (Hussain and Karmakar in Inst Eng India Ser, 2014. doi: 10.1007/s40033-014-0058-0), the concepts of input-output transaction matrix and multiplier were explained in detail. Input-output multipliers are indicators used for predicting the total impact on an economy due to changes in its industrial demand and output which is calculated using transaction matrix. The aim of this paper is to present an application of the concepts with respect to the mining industry, showing progress in different sectors of mining with time and explaining different outcomes from the results obtained. The analysis shows that a few mineral industries saw a significant growth in their multiplier values over the years.
Dark energy from modified gravity with Lagrange multipliers
International Nuclear Information System (INIS)
Capozziello, Salvatore; Matsumoto, Jiro; Nojiri, Shin'ichi; Odintsov, Sergei D.
2010-01-01
We study scalar-tensor theory, k-essence and modified gravity with a Lagrange multiplier constraint whose role is to reduce the number of degrees of freedom. Dark energy cosmologies of different types (ΛCDM, unified inflation with DE, a smooth non-phantom/phantom transition epoch) are reconstructed in such models. It is demonstrated that the presence of the Lagrange multiplier simplifies the reconstruction scenario. It is shown that the mathematical equivalence between scalar theory and F(R) gravity is broken by the presence of the constraint. The cosmological evolution is defined by the second function F₂(R), dictated by the constraint. The convenient F(R) gravity sector is relevant for local tests. This opens the possibility of making an originally non-realistic theory viable by adding the corresponding constraint. A general discussion of the role of Lagrange multipliers in making higher-derivative gravity canonical is developed.
Principal parameters of classical multiply charged ion sources
International Nuclear Information System (INIS)
Winter, H.; Wolf, B.H.
1974-01-01
A review is given of the operational principles of classical multiply charged ion sources (operating sources for intense beams of multiply charged ions using discharge plasmas; MCIS). The fractional rates of creation of multiply charged ions in MCIS plasmas cannot be deduced from the discharge parameters in a simple manner; they depend essentially on three principal parameters: the density and energy distribution of the ionizing electrons, and the confinement time of ions in the ionization space. Simple discharge models were used to find relations between the principal parameters, and the results of model calculations are compared to actually measured charge-state density distributions of extracted ions. Details of the processes that determine the energy distribution of ionizing electrons (heating effects), the confinement times of ions (instabilities), and some technical aspects of classical MCIS (cathodes, surface processes, conditioning, lifetime) are discussed.
A suggested approach for imputation of missing dietary data for young children in daycare.
Stevens, June; Ou, Fang-Shu; Truesdale, Kimberly P; Zeng, Donglin; Vaughn, Amber E; Pratt, Charlotte; Ward, Dianne S
2015-01-01
Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare, making accurate proxy reports by parents difficult. The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Data were from children aged 2-5 years in the My Parenting SOS project (n=308; 870 24-h diet recalls). Mixed models were used to simultaneously predict breakfast, dinner, and evening snacks (B+D+ES); lunch; and daytime snacks for all children after adjusting for age, sex, and body mass index (BMI). From these models, we imputed the missing weekday daycare lunches by interpolation using the mean lunch to B+D+ES [L/(B+D+ES)] ratio among non-daycare children on weekdays and the L/(B+D+ES) ratio for all children on weekends. Daytime snack data were used to impute snacks. The reported mean (± standard deviation) weekday intake was lower for daycare children [725 (±324) kcal] compared to non-daycare children [1,048 (±463) kcal]. Weekend intake for all children was 1,173 (±427) kcal. After imputation, weekday caloric intake for daycare children was 1,230 (±409) kcal. Daily intakes that included imputed data were associated with age and sex but not with BMI. This work indicates that imputation is a promising method for improving the precision of daily nutrient data from young children.
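The interpolation step amounts to one line of arithmetic: scale the child's reported non-daycare meals by a lunch-to-other-meals ratio. The numbers below are invented for illustration; the abstract does not report the ratio itself.

```python
# Sketch of the ratio interpolation described above, with hypothetical numbers.
# For a daycare child missing weekday lunch, impute:
#   lunch ~= r_weekday * (B + D + ES)
# where r_weekday is the mean L/(B+D+ES) ratio among non-daycare children
# on weekdays (a hypothetical value here, not taken from the study).
b_d_es = 725.0    # child's reported breakfast + dinner + evening snacks (kcal)
r_weekday = 0.45  # hypothetical mean L/(B+D+ES) ratio
imputed_lunch = r_weekday * b_d_es
print(imputed_lunch)
```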
Directory of Open Access Journals (Sweden)
Ward Judson A
2013-01-01
Full Text Available Abstract Background Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density, but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results Genotyping by Sequencing (GBS) was used to produce highly saturated maps for a R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values, and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation distortion.
A suggested approach for imputation of missing dietary data for young children in daycare
Directory of Open Access Journals (Sweden)
June Stevens
2015-12-01
Full Text Available Background: Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare, making accurate proxy reports by parents difficult. Objective: The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Design: Data were from children aged 2-5 years in the My Parenting SOS project (n=308; 870 24-h diet recalls). Mixed models were used to simultaneously predict breakfast, dinner, and evening snacks (B+D+ES); lunch; and daytime snacks for all children after adjusting for age, sex, and body mass index (BMI). From these models, we imputed the missing weekday daycare lunches by interpolation using the mean lunch to B+D+ES [L/(B+D+ES)] ratio among non-daycare children on weekdays and the L/(B+D+ES) ratio for all children on weekends. Daytime snack data were used to impute snacks. Results: The reported mean (± standard deviation) weekday intake was lower for daycare children [725 (±324) kcal] compared to non-daycare children [1,048 (±463) kcal]. Weekend intake for all children was 1,173 (±427) kcal. After imputation, weekday caloric intake for daycare children was 1,230 (±409) kcal. Daily intakes that included imputed data were associated with age and sex but not with BMI. Conclusion: This work indicates that imputation is a promising method for improving the precision of daily nutrient data from young children.
Välikangas, Tommi; Suomi, Tomi; Elo, Laura L
2017-05-31
Label-free mass spectrometry (MS) has developed into an important tool applied in various fields of the biological and life sciences. Several software packages exist to process the raw MS data into quantified protein abundances, including open-source and commercial solutions. Each includes a set of unique algorithms for the different tasks of the MS data processing workflow. While many of these algorithms have been compared separately, a thorough and systematic evaluation of their overall performance is missing. Moreover, systematic information is lacking about the amount of missing values produced by the different proteomics software and the capabilities of different data imputation methods to account for them. In this study, we evaluated the performance of five popular quantitative label-free proteomics software workflows using four different spike-in data sets. Our extensive testing included the number of proteins quantified and the number of missing values produced by each workflow, the accuracy of detecting differential expression and logarithmic fold change, and the effect of different imputation and filtering methods on the differential expression results. We found that the Progenesis software performed consistently well in the differential expression analysis and produced few missing values. The missing values produced by the other software decreased their performance, but this difference could be mitigated using proper data filtering or imputation methods. Among the imputation methods, we found that local least squares (LLS) regression imputation consistently increased the performance of the software in the differential expression analysis, and a combination of both data filtering and local least squares imputation increased performance the most in the tested data sets. © The Author 2017. Published by Oxford University Press.
UniFIeD Univariate Frequency-based Imputation for Time Series Data
Friese, Martina; Stork, Jörg; Ramos Guerra, Ricardo; Bartz-Beielstein, Thomas; Thaker, Soham; Flasch, Oliver; Zaefferer, Martin
2013-01-01
This paper introduces UniFIeD, a new data preprocessing method for time series. UniFIeD can cope with large intervals of missing data. A scalable test function generator, which allows the simulation of time series with different gap sizes, is presented additionally. An experimental study demonstrates that (i) UniFIeD shows a significant better performance than simple imputation methods and (ii) UniFIeD is able to handle situations, where advanced imputation methods fail. The results are indep...
Time efficient signed Vedic multiplier using redundant binary representation
Directory of Open Access Journals (Sweden)
Ranjan Kumar Barik
2017-03-01
Full Text Available This study presents a high-speed signed Vedic multiplier (SVM) architecture using redundant binary (RB) representation in the Urdhva Tiryagbhyam (UT) sutra. This is the first effort to extend Vedic algorithms to signed numbers. The proposed multiplier architecture solves the carry propagation issue in the UT sutra, as carry-free addition is possible in RB representation. The proposed design is coded in VHDL and synthesised in Xilinx ISE 14.4 for various FPGA devices. The proposed SVM architecture shows better speed performance than various state-of-the-art conventional as well as Vedic architectures.
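The Urdhva Tiryagbhyam (vertical-crosswise) pattern generates all partial products for one result column at once and defers the carries, which is why a carry-free redundant binary adder pairs naturally with it. A decimal software sketch of the unsigned pattern follows; the paper's signed, redundant-binary hardware is not reproduced here.

```python
def urdhva_multiply(a, b):
    """Urdhva Tiryagbhyam (vertical-crosswise) multiplication of two
    non-negative integers. Column k of the result collects all cross
    products a_i * b_j with i + j = k; carries are resolved in a final pass."""
    A = [int(d) for d in str(a)][::-1]   # least significant digit first
    B = [int(d) for d in str(b)][::-1]
    cols = [0] * (len(A) + len(B))
    for i, da in enumerate(A):
        for j, db in enumerate(B):
            cols[i + j] += da * db       # the "crosswise" partial products
    carry, digits = 0, []
    for c in cols:                       # deferred carry propagation
        carry, d = divmod(c + carry, 10)
        digits.append(d)
    while carry:
        carry, d = divmod(carry, 10)
        digits.append(d)
    while len(digits) > 1 and digits[-1] == 0:
        digits.pop()                     # strip leading zeros
    return int("".join(map(str, digits[::-1])))

print(urdhva_multiply(123, 456))  # 56088
```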
Radial multipliers on amalgamated free products of II₁-factors
DEFF Research Database (Denmark)
Möller, Sören
2014-01-01
Let ℳi be a family of II₁-factors, containing a common II₁-subfactor 풩, such that [ℳi : 풩] ∈ ℕ0 for all i. Furthermore, let ϕ: ℕ0 → ℂ. We show that if a Hankel matrix related to ϕ is trace-class, then there exists a unique completely bounded map Mϕ on the amalgamated free product of the ℳi with amalgamation over 풩, which acts as a radial multiplier. Hereby, we extend a result of Haagerup and the author for radial multipliers on reduced free products of unital C*- and von Neumann algebras.
Electronic de-multipliers II (ring-shape systems)
International Nuclear Information System (INIS)
Raievski, V.
1948-09-01
This report describes a new type of ring-shaped fast electronic counter (de-multiplier) with a resolving power equivalent to that of the counter built by Regener (Rev. of Scientific Instruments USA 1946, 17, 180-89), but requiring half as many electronic valves. This report follows the general description of electronic de-multipliers by J. Ailloud (CEA--001). The ring comprises 5 flip-flop circuits of two valves each. The different elements of the ring are calculated in enough detail to allow the calculation to be carried over to different valve types. (J.S.)
Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.
2017-01-01
Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
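The Rubin's Rules pooling referred to above combines the m completed-data analyses into one estimate and one total variance. A minimal sketch with hypothetical estimates:

```python
import math

def rubin_pool(estimates, variances):
    """Rubin's Rules: pool m completed-data estimates and their variances.
    Returns the pooled estimate qbar and the total variance
    T = W + (1 + 1/m) * B, where W is the mean within-imputation variance
    and B the between-imputation variance of the estimates."""
    m = len(estimates)
    qbar = sum(estimates) / m
    W = sum(variances) / m
    B = sum((q - qbar) ** 2 for q in estimates) / (m - 1)
    T = W + (1 + 1 / m) * B
    return qbar, T

# Hypothetical log-odds estimates from m = 5 imputed data sets.
est = [0.52, 0.47, 0.55, 0.50, 0.49]
var = [0.010, 0.011, 0.009, 0.010, 0.012]
qbar, T = rubin_pool(est, var)
print(qbar, math.sqrt(T))  # pooled estimate and its standard error
```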
Seaman, Shaun R; Hughes, Rachael A
2018-06-01
Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.
The Gas Electron Multiplier Chamber Exhibition LEPFest 2000
2000-01-01
The Gas Electron Multiplier (GEM) is a novel device introduced in 1996. Large-area detectors based on this technology are in construction for high energy physics detectors. This technology can also be used for high-rate X-ray imaging in medical diagnostics and for monitoring irradiation during cancer treatment.
ANALYSIS OF THE INVESTMENT ARBITRAGE STRATEGY USING FINANCIAL MULTIPLIERS
Directory of Open Access Journals (Sweden)
Dmitry S. Pashkov
2013-01-01
Full Text Available This article describes an algorithm for stock pairs trading using financial multipliers of the underlying companies. The algorithm has been tested on historical data and compared with the classical Bollinger bands strategy. Test results are presented for two financial sectors of the US stock market.
Garbage-free reversible constant multipliers for arbitrary integers
DEFF Research Database (Denmark)
Mogensen, Torben Ægidius
2013-01-01
We present a method for constructing reversible circuitry for multiplying integers by arbitrary integer constants. The method is based on Mealy machines and gives circuits whose size are (in the worst case) linear in the size of the constant. This makes the method unsuitable for large constants...
Smooth bifurcation for variational inequalities based on Lagrange multipliers
Czech Academy of Sciences Publication Activity Database
Eisner, Jan; Kučera, Milan; Recke, L.
2006-01-01
Roč. 19, č. 9 (2006), s. 981-1000 ISSN 0893-4983 R&D Projects: GA AV ČR(CZ) IAA100190506 Institutional research plan: CEZ:AV0Z10190503 Keywords : abstract variational inequality * bifurcation * Lagrange multipliers Subject RIV: BA - General Mathematics
Detection of differential item functioning using Lagrange multiplier tests
Glas, Cornelis A.W.
1998-01-01
Abstract: In the present paper it is shown that differential item functioning can be evaluated using the Lagrange multiplier test or Rao’s efficient score test. The test is presented in the framework of a number of IRT models such as the Rasch model, the OPLM, the 2-parameter logistic model, the
Detection of differential item functioning using Lagrange multiplier tests
Glas, Cornelis A.W.
1996-01-01
In this paper it is shown that differential item functioning can be evaluated using the Lagrange multiplier test or C. R. Rao's efficient score test. The test is presented in the framework of a number of item response theory (IRT) models such as the Rasch model, the one-parameter logistic model, the
Lagrange-multiplier tests for weak exogeneity: a synthesis.
Boswijk, H.P.; Urbain, J.P.
1997-01-01
This paper unifies two seemingly separate approaches to test weak exogeneity in dynamic regression models with Lagrange-multiplier statistics. The first class of tests focuses on the orthogonality between innovations and conditioning variables, and thus is related to the Durbin-Wu-Hausman
Fiscal multipliers over the growth cycle : evidence from Malaysia
Rafiq, Sohrab; Zeufack, Albert
2012-01-01
This paper explores the stabilisation properties of fiscal policy in Malaysia using a model incorporating nonlinearities into the dynamic relationship between fiscal policy and real economic activity over the growth cycle. The paper also investigates how output multipliers for government purchases may alter for different components of government spending. The authors find that fiscal polic...
A database analysis of information on multiply charged ions
International Nuclear Information System (INIS)
Delcroix, J.L.
1989-01-01
A statistical analysis of data related to multiply charged ions is performed in the GAPHYOR data base: overall statistics by ionization degree from q=1 to q=99, 'historical' development from 1975 to 1987, and the distribution (for q ≥ 5) over physical processes (energy levels, charge exchange, ...) and chemical elements.
Multiple images of our galaxy in closed, multiply connected cosmologies
International Nuclear Information System (INIS)
Fagundes, H.V.
1985-01-01
Friedmannian cosmology with multiply connected spatial sections allows multiple images of cosmic sources, in particular of our own galaxy. This is illustrated with a specific example of a closed hyperbolic model and a brief mention of a spherical model. Such images may eventually become observable (or be recognized as such), thus providing a new test of relativistic cosmology. (Author) [pt
A CMOS four-quadrant analog current multiplier
Wiegerink, Remco J.
1991-01-01
A CMOS four-quadrant analog current multiplier is described. The circuit is based on the square-law characteristic of an MOS transistor and is insensitive to temperature and process variations. The circuit is insensitive to the body effect so it is not necessary to place transistors in individual
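The square-law trick such multipliers rely on is the identity (a+b)^2 - (a-b)^2 = 4ab: a device whose output is proportional to the square of its input yields a true four-quadrant product from sums and differences alone. A quick numeric check:

```python
# Algebraic identity behind square-law four-quadrant multipliers:
# (a + b)^2 - (a - b)^2 = 4ab, valid for any signs of a and b.
def square_law_product(a, b):
    return ((a + b) ** 2 - (a - b) ** 2) / 4

# Works in all four quadrants (both operands positive, mixed, both negative).
for a, b in [(3, 5), (-2, 7), (-4, -6), (0.5, 0.25)]:
    assert square_law_product(a, b) == a * b
print("identity holds in all four quadrants")
```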
The evolution of unconditional strategies via the 'multiplier effect'.
McNamara, John M; Dall, Sasha R X
2011-03-01
Ostensibly, it makes sense in a changeable world to condition behaviour and development on information when it is available. Nevertheless, unconditional behavioural and life history strategies are widespread. Here, we show how intergenerational effects can limit the evolutionary value of responding to reliable environmental cues, and thus favour the evolutionary persistence of otherwise paradoxical unconditional strategies. While cue-ignoring genotypes do poorly in the wrong environments, in the right environment they will leave many copies of themselves, which will themselves leave many copies, and so on, leading genotypes to accumulate in habitats in which they do well. We call this 'The Multiplier Effect'. We explore the consequences of the multiplier effect by focussing on the ecologically important phenomenon of natal philopatry. We model the environment as a large number of temporally varying breeding sites connected by natal dispersal between sites. Our aim is to identify which aspects of an environment promote the multiplier effect. We show, if sites remain connected through some background level of 'accidental' dispersal, unconditional natal philopatry can evolve even when there is density dependence (with its accompanying kin competition effects), and cues that are only mildly erroneous. Thus, the multiplier effect may underpin the evolution and maintenance of unconditional strategies such as natal philopatry in many biological systems. © 2011 Blackwell Publishing Ltd/CNRS.
A cascaded three-phase symmetrical multistage voltage multiplier
International Nuclear Information System (INIS)
Iqbal, Shahid; Singh, G K; Besar, R; Muhammad, G
2006-01-01
A cascaded three-phase symmetrical multistage Cockcroft-Walton voltage multiplier (CW-VM) is proposed in this report. It consists of three single-phase symmetrical voltage multipliers, which are connected in series at their smoothing columns like a string of batteries and are driven by a three-phase ac power source. The smoothing column of each voltage multiplier is charged twice every cycle independently by the respective oscillating columns and discharged in series through the load. The charging-discharging process completes six times a cycle, and the output voltage ripple frequency is therefore six times the drive signal frequency. The proposed approach thus eliminates the first five harmonic components of the load-generated voltage ripple, leaving the sixth harmonic as the major ripple component. The proposed cascaded three-phase symmetrical voltage multiplier has less than half the voltage ripple of the conventional single-phase symmetrical CW-VM, and three times its output voltage and output power. Experimental and simulation results from the laboratory prototype are given to show the feasibility of the proposed cascaded three-phase symmetrical CW-VM.
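The headline figures follow from the idealized (no-load) Cockcroft-Walton relations: each symmetrical stage adds about 2*V_peak, series-stacking three single-phase columns triples the output, and with six charge-discharge events per cycle the ripple fundamental sits at six times the drive frequency. The component values below are hypothetical, chosen only to show the arithmetic:

```python
# Idealized (no-load) estimates for the cascaded three-phase symmetrical CW-VM.
v_peak = 10e3      # hypothetical peak drive voltage per phase (V)
n_stages = 4       # hypothetical number of stages per column
f_drive = 50.0     # drive frequency (Hz)

v_single = 2 * n_stages * v_peak   # one symmetrical single-phase column
v_cascaded = 3 * v_single          # three columns stacked in series
ripple_freq = 6 * f_drive          # ripple fundamental: sixth harmonic

print(v_single, v_cascaded, ripple_freq)
```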
Robust formation control of marine surface craft using Lagrange multipliers
DEFF Research Database (Denmark)
Ihle, Ivar-Andre F.; Jouffroy, Jerome; Fossen, Thor I.
2006-01-01
This paper presents a formation modelling scheme based on a set of inter-body constraint functions and Lagrange multipliers. Formation control for a fleet of marine craft is achieved by stabilizing the auxiliary constraints such that the desired formation configuration appears. In the proposed fr...
Familiar Sports and Activities Adapted for Multiply Impaired Persons.
Schilling, Mary Lou, Ed.
1984-01-01
Means of adapting some familiar and popular physical activities for multiply impaired persons are described. Games reviewed are dice baseball, one base baseball, in-house bowling, wheelchair bowling, ramp bowling, swing-ball bowling, table tennis, shuffleboard, beanbag bingo and tic-tac-toe, balloon basketball, circle football, and wheelchair…
Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi
2016-04-01
Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships, and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31,900 km²). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE), at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix
Multiply excited molecules produced by photon and electron interactions
International Nuclear Information System (INIS)
Odagiri, T.; Kouchi, N.
2006-01-01
The photon and electron interactions with molecules resulting in the formation of multiply excited molecules, and the subsequent decay, are subjects of great interest because the independent electron model and the Born-Oppenheimer approximation are much less reliable for the multiply excited states of molecules than for the ground and lower excited electronic states. We have three methods to observe and investigate multiply excited molecules: 1) measurements of the cross sections for the emission of fluorescence by neutral fragments in the photoexcitation of molecules as a function of incident photon energy [1-3]; 2) measurements of the electron energy-loss spectra tagged with the fluorescence photons emitted by neutral fragments [4]; 3) measurements of the cross sections for generating a pair of photons in the absorption of a single photon by a molecule as a function of incident photon energy [5-7]. Multiply excited states are degenerate with ionization continua, which make a large contribution to the cross section curve involving ionization processes. The key point of our methods is hence that we measure cross sections free from ionization; the feature of multiply excited states is noticeable in such a cross section curve. Recently we have measured: i) the cross sections for the emission of Lyman-α fluorescence in the photoexcitation of CH4 as a function of incident photon energy in the range 18-51 eV; ii) the electron energy-loss spectrum of CH4 tagged with the Lyman-α photons at 80 eV incident electron energy and a 10° electron scattering angle, in the range of energy loss 20-45 eV, in order to understand the formation and decay of doubly excited methane in photon and electron interactions [8]. The results are summarized in this paper, and the simultaneous excitation of two electrons by electron interaction is compared with that by photon interaction in terms of the oscillator strength. (authors)
Applying an efficient K-nearest neighbor search to forest attribute imputation
Andrew O. Finley; Ronald E. McRoberts; Alan R. Ek
2006-01-01
This paper explores the utility of an efficient nearest neighbor (NN) search algorithm for applications in multi-source kNN forest attribute imputation. The search algorithm reduces the number of distance calculations between a given target vector and each reference vector, thereby decreasing the time needed to discover the NN subset. Results of five trials show gains...
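The distance-pruning idea behind such a search can be sketched in a few lines: each candidate distance computation is abandoned as soon as its running sum exceeds the current k-th best, cutting the number of full distance calculations. This is a generic illustration of the principle, not the authors' algorithm; the function name and interface are ours:

```python
import heapq
import math

def knn_impute(target, references, k=3):
    """Impute missing attributes (None) in `target` from the k nearest complete
    reference vectors, measuring distance only over the observed dimensions and
    early-abandoning each distance once it exceeds the current k-th best."""
    obs = [i for i, v in enumerate(target) if v is not None]
    best = []  # max-heap of (-dist2, ref) holding the k nearest so far
    for ref in references:
        bound = -best[0][0] if len(best) == k else math.inf
        d2 = 0.0
        for i in obs:
            d2 += (target[i] - ref[i]) ** 2
            if d2 >= bound:            # early abandon: cannot enter the top k
                break
        else:
            heapq.heappush(best, (-d2, ref))
            if len(best) > k:
                heapq.heappop(best)    # drop the current farthest neighbour
    neighbours = [ref for _, ref in best]
    return [v if v is not None
            else sum(r[i] for r in neighbours) / len(neighbours)
            for i, v in enumerate(target)]
```

Missing entries are then filled with the unweighted mean of the retained neighbours; any weighting scheme could be substituted at that step.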
Limitations in Using Multiple Imputation to Harmonize Individual Participant Data for Meta-Analysis.
Siddique, Juned; de Chavez, Peter J; Howe, George; Cruden, Gracelyn; Brown, C Hendricks
2018-02-01
Individual participant data (IPD) meta-analysis is a meta-analysis in which the individual-level data for each study are obtained and used for synthesis. A common challenge in IPD meta-analysis is when variables of interest are measured differently in different studies. The term harmonization has been coined to describe the procedure of placing variables on the same scale in order to permit pooling of data from a large number of studies. Using data from an IPD meta-analysis of 19 adolescent depression trials, we describe a multiple imputation approach for harmonizing 10 depression measures across the 19 trials by treating those depression measures that were not used in a study as missing data. We then apply diagnostics to address the fit of our imputation model. Even after reducing the scale of our application, we were still unable to produce accurate imputations of the missing values. We describe those features of the data that made it difficult to harmonize the depression measures and provide some guidelines for using multiple imputation for harmonization in IPD meta-analysis.
Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2009-01-01
Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
Mapping change of older forest with nearest-neighbor imputation and Landsat time-series
Janet L. Ohmann; Matthew J. Gregory; Heather M. Roberts; Warren B. Cohen; Robert E. Kennedy; Zhiqiang. Yang
2012-01-01
The Northwest Forest Plan (NWFP), which aims to conserve late-successional and old-growth forests (older forests) and associated species, established new policies on federal lands in the Pacific Northwest USA. As part of monitoring for the NWFP, we tested nearest-neighbor imputation for mapping change in older forest, defined by threshold values for forest attributes...
DEFF Research Database (Denmark)
Meseck, Kristin; Jankowska, Marta M; Schipperijn, Jasper
2016-01-01
The main purpose of the present study was to assess the impact of global positioning system (GPS) signal lapse on physical activity analyses, discover any existing associations between missing GPS data and environmental and demographics attributes, and to determine whether imputation is an accurate...
Combining Fourier and lagged k-nearest neighbor imputation for biomedical time series data.
Rahman, Shah Atiqur; Huang, Yuxiao; Claassen, Jan; Heintzman, Nathaniel; Kleinberg, Samantha
2015-12-01
Most clinical and biomedical data contain missing values. A patient's record may be split across multiple institutions, devices may fail, and sensors may not be worn at all times. While these missing values are often ignored, this can lead to bias and error when the data are mined. Further, the data are not simply missing at random. Instead the measurement of a variable such as blood glucose may depend on its prior values as well as that of other variables. These dependencies exist across time as well, but current methods have yet to incorporate these temporal relationships as well as multiple types of missingness. To address this, we propose an imputation method (FLk-NN) that incorporates time lagged correlations both within and across variables by combining two imputation methods, based on an extension to k-NN and the Fourier transform. This enables imputation of missing values even when all data at a time point is missing and when there are different types of missingness both within and across variables. In comparison to other approaches on three biological datasets (simulated and actual Type 1 diabetes datasets, and multi-modality neurological ICU monitoring) the proposed method has the highest imputation accuracy. This was true for up to half the data being missing and when consecutive missing values are a significant fraction of the overall time series length. Copyright © 2015 Elsevier Inc. All rights reserved.
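The Fourier half of such a hybrid can be illustrated with a self-contained sketch (an assumed thresholding scheme, not the published FLk-NN code): missing points are initialised with the observed mean, and the series is then repeatedly projected onto its few dominant frequencies while the observed samples are re-imposed, so periodic structure fills the gaps:

```python
import cmath

def fourier_impute(series, n_coeffs=2, n_iter=50):
    """Iteratively reconstruct missing points (None) from the dominant Fourier
    components: fill gaps with the observed mean, then alternately project the
    series onto its top n_coeffs frequencies and restore the observed values."""
    n = len(series)
    obs = [i for i, v in enumerate(series) if v is not None]
    mean = sum(series[i] for i in obs) / len(obs)
    x = [v if v is not None else mean for v in series]
    for _ in range(n_iter):
        # plain O(n^2) DFT; adequate for short clinical series
        F = [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
             for k in range(n)]
        keep = sorted(range(n), key=lambda k: -abs(F[k]))[:n_coeffs]
        x = [sum(F[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in keep).real / n
             for t in range(n)]
        for i in obs:              # re-impose the observed samples
            x[i] = series[i]
    return x
```

For a signal that is genuinely sparse in the frequency domain, this alternating projection converges to values consistent with the surrounding periodicity; the lagged-kNN component would handle the non-periodic, cross-variable structure.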
Kmetic, Andrew; Joseph, Lawrence; Berger, Claudie; Tenenhouse, Alan
2002-07-01
Nonresponse bias is a concern in any epidemiologic survey in which a subset of selected individuals declines to participate. We reviewed multiple imputation, a widely applicable and easy-to-implement Bayesian methodology to adjust for nonresponse bias. To illustrate the method, we used data from the Canadian Multicentre Osteoporosis Study, a large cohort study of 9423 randomly selected Canadians, designed in part to estimate the prevalence of osteoporosis. Although subjects were randomly selected, only 42% of individuals who were contacted agreed to participate fully in the study. The study design included a brief questionnaire for those invitees who declined further participation, in order to collect information on the major risk factors for osteoporosis. These risk factors (which included age, sex, previous fractures, family history of osteoporosis, and current smoking status) were then used to estimate the missing osteoporosis status for nonparticipants using multiple imputation. Both ignorable and nonignorable imputation models were considered. Our results suggest that selection bias in the study is of concern, but only slightly, among the very elderly (age 80+ years), both women and men. Epidemiologists should consider using multiple imputation more often than is current practice.
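The combining step that this methodology relies on, Rubin's rules for pooling the m completed-data analyses, is compact enough to show directly (the helper name is ours):

```python
def rubin_pool(estimates, variances):
    """Pool m completed-data results by Rubin's rules: the pooled point
    estimate is the mean of the per-imputation estimates, and the total
    variance is within-imputation plus (1 + 1/m) times between-imputation."""
    m = len(estimates)
    qbar = sum(estimates) / m                                # pooled estimate
    ubar = sum(variances) / m                                # within variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)    # between variance
    return qbar, ubar + (1 + 1 / m) * b
```

The inflation factor (1 + 1/m) accounts for using a finite number of imputations; the between-imputation term is what distinguishes multiple imputation's variance estimate from a single-imputation analysis.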
MacNeil Vroomen, Janet; Eekhout, Iris; Dijkgraaf, Marcel G; van Hout, Hein; de Rooij, Sophia E; Heymans, Martijn W; Bosmans, Judith E
2016-01-01
Cost and effect data often have missing data because economic evaluations are frequently added onto clinical studies where cost data are rarely the primary outcome. The objective of this article was to investigate which multiple imputation strategy is most appropriate to use for missing
Learning-Based Adaptive Imputation Method with kNN Algorithm for Missing Power Data
Directory of Open Access Journals (Sweden)
Minkyung Kim
2017-10-01
Full Text Available This paper proposes a learning-based adaptive imputation method (LAI) for imputing missing power data in an energy system. The method estimates missing power data by using patterns that appear in the collected data. To capture patterns from past power data, we construct a feature vector from past data and its variations. The proposed LAI then learns the optimal length of the feature vector and the optimal historical length, which are important hyperparameters of the method, by utilizing intentionally introduced missing data. Based on a weighted distance between feature vectors representing the missing situation and past situations, missing power data are estimated by referring to the k most similar past situations within the optimal historical length. We further extend the proposed LAI to alleviate the effect of unexpected variation in power data and refer to this new approach as the extended LAI method (eLAI). The eLAI selects between linear interpolation (LI) and the proposed LAI to improve accuracy under unexpected variations. Finally, in simulations over various energy consumption profiles, we verify that the proposed eLAI achieves about a 74% reduction of the average imputation error in an energy system compared to existing imputation methods.
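The core estimation step, a distance-weighted average over the k most similar past situations, can be sketched as follows. This is a generic illustration only; the published LAI additionally learns the feature-vector and history lengths, which are fixed inputs here:

```python
def weighted_knn_estimate(query, past, k=3):
    """Estimate a missing reading as the inverse-distance-weighted mean of the
    values attached to the k past feature vectors most similar to `query`.
    `past` is a list of (feature_vector, value) pairs."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    nearest = sorted(past, key=lambda fv_val: dist(query, fv_val[0]))[:k]
    # small epsilon avoids division by zero for an exact feature match
    weights = [1.0 / (dist(query, fv) + 1e-9) for fv, _ in nearest]
    return (sum(w * val for w, (_, val) in zip(weights, nearest))
            / sum(weights))
```

An exact match among the past situations dominates the weighted average, which is the intended behaviour for repeating consumption patterns.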
Missing value imputation in DNA microarrays based on conjugate gradient method.
Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh
2012-02-01
Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbors of the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbors is labeled as the best similar ones. The CG algorithm, with this subset as its input, is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different data sets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average normalized root mean square error (NRMSE) and relative NRMSE in different data sets with various missing rates show that CGimpute outperforms the other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
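The numerical core, a conjugate-gradient solve of the regression system built from the selected neighbour genes, looks like this in outline (our sketch of the standard CG iteration for a symmetric positive-definite system, not the authors' implementation):

```python
def conjugate_gradient(A, b, n_iter=50, tol=1e-10):
    """Solve Ax = b for a symmetric positive-definite matrix A (nested lists)
    by the conjugate gradient method, as used to fit regression weights on
    the most similar neighbour genes."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - Ax with x = 0
    p = r[:]                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(n_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:          # residual small enough: converged
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations, which is why it suits the small dense systems arising from a handful of neighbour genes.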
Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael
2017-01-01
The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
Poyatos, Rafael; Sus, Oliver; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi
2018-05-01
The ubiquity of missing data in plant trait databases may hinder trait-based analyses of ecological patterns and processes. Spatially explicit datasets with information on intraspecific trait variability are rare but offer great promise in improving our understanding of functional biogeography. At the same time, they offer specific challenges in terms of data imputation. Here we compare statistical imputation approaches, using varying levels of environmental information, for five plant traits (leaf biomass to sapwood area ratio, leaf nitrogen content, maximum tree height, leaf mass per area and wood density) in a spatially explicit plant trait dataset of temperate and Mediterranean tree species (Ecological and Forest Inventory of Catalonia, IEFC, dataset for Catalonia, north-east Iberian Peninsula, 31 900 km2). We simulated gaps at different missingness levels (10-80 %) in a complete trait matrix, and we used overall trait means, species means, k nearest neighbours (kNN), ordinary and regression kriging, and multivariate imputation using chained equations (MICE) to impute missing trait values. We assessed these methods in terms of their accuracy and of their ability to preserve trait distributions, multi-trait correlation structure and bivariate trait relationships. The relatively good performance of mean and species mean imputations in terms of accuracy masked a poor representation of trait distributions and multivariate trait structure. Species identity improved MICE imputations for all traits, whereas forest structure and topography improved imputations for some traits. No method performed best consistently for the five studied traits, but, considering all traits and performance metrics, MICE informed by relevant ecological variables gave the best results. However, at higher missingness (> 30 %), species mean imputations and regression kriging tended to outperform MICE for some traits. MICE informed by relevant ecological variables allowed us to fill the gaps in
Jiao, S; Tiezzi, F; Huang, Y; Gray, K A; Maltecca, C
2016-02-01
Obtaining accurate individual feed intake records is the key first step in achieving genetic progress toward more efficient nutrient utilization in pigs. Feed intake records collected by electronic feeding systems contain errors (erroneous and abnormal values exceeding certain cutoff criteria), which are due to feeder malfunction or animal-feeder interaction. In this study, we examined the use of a novel data-editing strategy involving multiple imputation to minimize the impact of errors and missing values on the quality of feed intake data collected by an electronic feeding system. Accuracy of feed intake data adjustment obtained from the conventional linear mixed model (LMM) approach was compared with 2 alternative implementations of multiple imputation by chained equation, denoted as MI (multiple imputation) and MICE (multiple imputation by chained equation). The 3 methods were compared under 3 scenarios, where 5, 10, and 20% feed intake error rates were simulated. Each of the scenarios was replicated 5 times. Accuracy of the alternative error adjustment was measured as the correlation between the true daily feed intake (DFI; daily feed intake in the testing period) or true ADFI (the mean DFI across testing period) and the adjusted DFI or adjusted ADFI. In the editing process, error cutoff criteria are used to define if a feed intake visit contains errors. To investigate the possibility that the error cutoff criteria may affect any of the 3 methods, the simulation was repeated with 2 alternative error cutoff values. Multiple imputation methods outperformed the LMM approach in all scenarios with mean accuracies of 96.7, 93.5, and 90.2% obtained with MI and 96.8, 94.4, and 90.1% obtained with MICE compared with 91.0, 82.6, and 68.7% using LMM for DFI. Similar results were obtained for ADFI. Furthermore, multiple imputation methods consistently performed better than LMM regardless of the cutoff criteria applied to define errors. In conclusion, multiple imputation
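The chained-equations idea can be reduced to a toy two-variable loop: initialise gaps with column means, then cycle, re-fitting a regression of each variable on the other and re-predicting its missing entries. Proper MICE would also add residual noise draws and repeat the whole process several times to produce multiple completed datasets; both are omitted here to keep the sketch deterministic, and the function is ours, not the study's code:

```python
def chained_imputation(x, y, n_iter=20):
    """Deterministic toy version of imputation by chained equations for two
    variables with missing entries (None): cycle, regressing each variable
    on the other and re-predicting its missing values."""
    x, y = list(x), list(y)
    ox = [i for i, v in enumerate(x) if v is not None]
    oy = [i for i, v in enumerate(y) if v is not None]
    mx = [i for i in range(len(x)) if i not in ox]
    my = [i for i in range(len(y)) if i not in oy]

    def fit(u, v, idx):
        # least-squares line v ≈ a + b*u over the rows in idx
        mu = sum(u[i] for i in idx) / len(idx)
        mv = sum(v[i] for i in idx) / len(idx)
        b = (sum((u[i] - mu) * (v[i] - mv) for i in idx)
             / sum((u[i] - mu) ** 2 for i in idx))
        return mv - b * mu, b

    for i in mx:                       # start from column means
        x[i] = sum(x[j] for j in ox) / len(ox)
    for i in my:
        y[i] = sum(y[j] for j in oy) / len(oy)
    for _ in range(n_iter):
        a, b = fit(y, x, ox)           # regress x on y where x is observed
        for i in mx:
            x[i] = a + b * y[i]
        a, b = fit(x, y, oy)           # regress y on x where y is observed
        for i in my:
            y[i] = a + b * x[i]
    return x, y
```

On strongly related variables the cycle converges to mutually consistent imputations, which is why chained equations outperformed the single linear mixed model adjustment in the scenarios above.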
THEORY AND PRACTICE IN THE VISUAL-GRAPHIC EXPRESSION WORKSHOP FOR MULTIPLIERS - DAC/UFSC.
Directory of Open Access Journals (Sweden)
Richard Perassi Sousa
2010-12-01
Full Text Available This paper presents the theoretical rationale and the procedures that structure practices of free artistic expression in work with adolescents and adults. The goal is to identify and justify the theory, the pedagogical methods and the practical activities developed with "multipliers" for the release of graphic and plastic expression, as an exercise in personal expression within a social context. The method adopted provides a theoretical justification for, and the development of, activities of visual-graphic artistic expression as a field of meaningful interaction between subjects and their inner universe. These activities are motivated by the need for expression that is inherent in human beings, and they require planning and organization on the part of the facilitator. Beyond self-expression, the proposed activities therefore also serve as an organizing principle for the subject's social and working life. The relevant theoretical references discussed here are "Free Expression" and "Education through Art." The procedures described were developed with teachers, artists and other participants of the "Workshop of visual-graphic expression for multipliers" held at the Departamento de Arte e Cultura (DAC/UFSC) within the Projeto Arte na Escola in 2008 and 2009.
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different
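The DTW distance at the heart of this similarity measure is the classic dynamic-programming recurrence, shown here as a minimal standalone function (absolute difference as the local cost; the published algorithms build their candidate selection on top of this):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between sequences a and b: the minimum
    cumulative |a_i - b_j| cost over all monotone alignments of the two."""
    n, m = len(a), len(b)
    inf = float("inf")
    # D[i][j] = best cost of aligning a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Unlike Euclidean distance, DTW tolerates local stretching of the time axis, which is why two expression profiles with the same shape but shifted timing still score as similar.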
Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S
2005-05-15
Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate them as accurately as possible before such algorithms are applied. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with the other methods, for both types of data and at the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE
Factors associated with low birth weight in Nepal using multiple imputation
Directory of Open Access Journals (Sweden)
Usha Singh
2017-02-01
Full Text Available Abstract Background Survey data on birth weight from low-income countries usually pose a persistent problem. Studies on birth weight have acknowledged missing data on birth weight, but those records are not included in the analysis. Furthermore, other missing data on the determinants of birth weight are not addressed. This study therefore tries to identify determinants associated with low birth weight (LBW), using multiple imputation to handle missing data on birth weight and its determinants. Methods The child dataset from the Nepal Demographic and Health Survey (NDHS, 2011) was utilized in this study. A total of 5,240 children were born between 2006 and 2011, of whom 87% had at least one measured variable missing and 21% had no recorded birth weight. All analyses were carried out in R version 3.1.3. The transform-then-impute method was applied to allow for interactions between explanatory variables when imputing missing data. The survey package was applied to each imputed dataset to account for the survey design and sampling method. Survey logistic regression was applied to identify the determinants associated with LBW. Results The prevalence of LBW was 15.4% after imputation. Women with the highest autonomy over their own health were less likely to give birth to LBW infants than women whose health decisions involved the husband or others (adjusted odds ratio (OR) 1.87, 95% confidence interval (CI) 1.31, 2.67) or the husband and woman together (adjusted OR 1.57, 95% CI 1.05, 2.35). Mothers using highly polluting cooking fuels (adjusted OR 1.49, 95% CI 1.03, 2.22) were more likely to give birth to LBW infants than mothers using non-polluting cooking fuels. Conclusion The findings of this study suggest that estimating the prevalence of LBW from only the sample with measured birth weight, ignoring missing data, results in underestimation.
Ahmad, Meraj; Sinha, Anubhav; Ghosh, Sreya; Kumar, Vikrant; Davila, Sonia; Yajnik, Chittaranjan S; Chandak, Giriraj R
2017-07-27
Imputation is a computational method based on the principle of haplotype sharing that allows enrichment of genome-wide association study datasets. It depends on the haplotype structure of the population and the density of the genotype data. The 1000 Genomes Project led to the generation of imputation reference panels which have been used globally. However, recent studies have shown that population-specific panels provide better enrichment of genome-wide variants. We compared the imputation accuracy using the 1000 Genomes phase 3 reference panel and a panel generated from genome-wide data on 407 individuals from Western India (WIP). The concordance of imputed variants was cross-checked with next-generation re-sequencing data on a subset of genomic regions. Further, using genome-wide data from 1880 individuals, we demonstrate that WIP works better than the 1000 Genomes phase 3 panel and, when merged with it, significantly improves the imputation accuracy throughout the minor allele frequency range. We also show that imputation using only the South Asian component of the 1000 Genomes phase 3 panel works as well as the merged panel, making it a computationally less intensive job. Thus, our study stresses that imputation accuracy using the 1000 Genomes phase 3 panel can be further improved by including population-specific reference panels from South Asia.
Directory of Open Access Journals (Sweden)
Stanley Xu
2014-05-01
Full Text Available In studies that use electronic health record data, imputation of important data elements such as glycated hemoglobin (A1c) has become common. However, few studies have systematically examined the validity of various imputation strategies for missing A1c values. We derived a complete dataset using an incident diabetes population that has no missing values in A1c, fasting and random plasma glucose (FPG and RPG), age, and gender. We then created missing A1c values under two assumptions: missing completely at random (MCAR) and missing at random (MAR). We then imputed the A1c values, compared the imputed values to the true A1c values, and used these data to assess the impact of A1c on initiation of antihyperglycemic therapy. Under MCAR, imputation of A1c based on FPG (1) estimated a continuous A1c within ±1.88% of the true A1c 68.3% of the time, and (2) estimated a categorical A1c within ± one category of the true A1c about 50% of the time. Including RPG in the imputation slightly improved the precision but did not improve the accuracy. Under MAR, including gender and age in addition to FPG improved the accuracy of the imputed continuous A1c but not the categorical A1c. Moreover, imputation of up to 33% of missing A1c values did not change the accuracy and precision and did not alter the impact of A1c on initiation of antihyperglycemic therapy. When using A1c values as a predictor variable, a simple imputation algorithm based only on age, sex, and fasting plasma glucose gave acceptable results.
DEFF Research Database (Denmark)
Jørgensen, Anders W.; Lundstrøm, Lars H; Wetterslev, Jørn
2014-01-01
BACKGROUND: In randomised trials of medical interventions, the most reliable analysis follows the intention-to-treat (ITT) principle. However, the ITT analysis requires that missing outcome data have to be imputed. Different imputation techniques may give different results and some may lead to bias … of handling missing data in a 60-week placebo-controlled anti-obesity drug trial on topiramate. METHODS: We compared an analysis of complete cases with datasets where missing body weight measurements had been replaced using three different imputation methods: LOCF, baseline carried forward (BOCF) and MI …
Liu, Siwei; Molenaar, Peter C M
2014-12-01
This article introduces iVAR, an R program for imputing missing data in multivariate time series on the basis of vector autoregressive (VAR) models. We conducted a simulation study to compare iVAR with three methods for handling missing data: listwise deletion, imputation with sample means and variances, and multiple imputation ignoring time dependency. The results showed that iVAR produces better estimates for the cross-lagged coefficients than do the other three methods. We demonstrate the use of iVAR with an empirical example of time series electrodermal activity data and discuss the advantages and limitations of the program.
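A univariate AR(1) special case conveys the idea (iVAR itself fits full multivariate VAR models and handles general missingness patterns; this hypothetical helper only fills points whose predecessor is observed):

```python
def ar1_impute(series):
    """Impute missing points (None) in a series from an AR(1) model fitted by
    least squares on the observed consecutive pairs: x_t ≈ c + phi * x_{t-1}."""
    pairs = [(series[t - 1], series[t]) for t in range(1, len(series))
             if series[t - 1] is not None and series[t] is not None]
    mx = sum(p for p, _ in pairs) / len(pairs)
    my = sum(c for _, c in pairs) / len(pairs)
    phi = (sum((p - mx) * (c - my) for p, c in pairs)
           / sum((p - mx) ** 2 for p, _ in pairs))
    c0 = my - phi * mx
    out = list(series)
    for t in range(1, len(out)):
        if out[t] is None and out[t - 1] is not None:
            out[t] = c0 + phi * out[t - 1]   # one-step-ahead prediction
    return out
```

Because the prediction uses the fitted time dependency rather than a sample mean, cross-lagged structure is preserved, which is the property the simulation study above credits to iVAR.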
Californium Multiplier. Part I. Design for neutron radiography
International Nuclear Information System (INIS)
Crosbie, K.L.; Preskitt, C.A.; John, J.; Hastings, J.D.
1982-01-01
The Californium Multiplier (CFX) is a subcritical assembly of enriched uranium surrounding a californium-252 neutron source. The function of the CFX is to multiply the neutrons emitted by the source to a number sufficient for neutron radiography. The CFX is designed to provide a collimated beam of thermal neutrons from which the gamma radiation is filtered, and the scattered neutrons are reduced to make it suitable for high resolution radiography. The entire system has inherent safety features, which provide for system and personnel safety, and it operates at moderate cost. In Part I, the CFX and the theory of its operation are described in detail. Part II covers the performance of the Mound Facility CFX
Generation of fast multiply charged ions in conical targets
International Nuclear Information System (INIS)
Demchenko, V.V.; Chukbar, K.V.
1990-01-01
So-called conical targets, in which the thermonuclear fuel is compressed and heated in a conical cavity in a heavy material (lead, gold, etc.) with the help of a spherical segment that is accelerated by a laser pulse or a beam of charged particles, are often employed in experimental studies of inertial-confinement fusion. In spite of the obvious advantages of such a scheme, one of which is a significant reduction of the required energy input compared with a complete spherical target, it also introduces additional effects into the process of cumulation of energy. In this paper the authors call attention to an effect observed in numerical calculations: the hydrodynamic heating of a small group of multiply charged heavy ions from the walls of the conical cavity up to high energies (T_i ≳ 100 keV). This effect ultimately occurs as a result of the high radiation losses of a multiply charged plasma
Inverse mass matrix via the method of localized lagrange multipliers
Czech Academy of Sciences Publication Activity Database
González, José A.; Kolman, Radek; Cho, S.S.; Felippa, C.A.; Park, K.C.
2018-01-01
Roč. 113, č. 2 (2018), s. 277-295 ISSN 0029-5981 R&D Projects: GA MŠk(CZ) EF15_003/0000493; GA ČR GA17-22615S Institutional support: RVO:61388998 Keywords : explicit time integration * inverse mass matrix * localized Lagrange multipliers * partitioned analysis Subject RIV: BI - Acoustics OBOR OECD: Applied mechanics Impact factor: 2.162, year: 2016 https://onlinelibrary.wiley.com/doi/10.1002/nme.5613
Multiply-negatively charged aluminium clusters and fullerenes
Energy Technology Data Exchange (ETDEWEB)
Walsh, Noelle
2008-07-15
Multiply negatively charged aluminium clusters and fullerenes were generated in a Penning trap using the 'electron-bath' technique. Aluminium monoanions were generated using a laser vaporisation source. After this, two-, three- and four-times negatively charged aluminium clusters were generated for the first time. This research marks the first observation of tetra-anionic metal clusters in the gas phase. Additionally, doubly-negatively charged fullerenes were generated. The smallest fullerene dianion observed contained 70 atoms. (orig.)
On Lagrange Multipliers in Work with Quality and Reliability Assurance
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui; Becker, P.
1986-01-01
In optimizing some property of a system, reliability say, a designer usually has to accept certain constraints regarding cost, completion time, volume, weight, etc. The solution of optimization problems with boundary constraints can be helped substantially by the use of Lagrange multipliers … in the areas of sales promotion and teaching. These maps illuminate the logic structure of solution sequences. One such map is shown, illustrating the application of LMT in one of the examples.
Characterization of a prototype matrix of Silicon PhotoMultipliers
Energy Technology Data Exchange (ETDEWEB)
Dinu, N. [Laboratory of Linear Accelerator (LAL), IN2P3-CNRS, 91898 Orsay (France)], E-mail: dinu@lal.in2p3.fr; Barrillon, P.; Bazin, C. [Laboratory of Linear Accelerator (LAL), IN2P3-CNRS, 91898 Orsay (France); Belcari, N.; Bisogni, M.G. [Universita di Pisa, Dipartimento di Fisica ' E. Fermi' , 56127 Pisa (Italy); INFN, Sezione di Pisa, 56127 Pisa (Italy); Bondil-Blin, S. [Laboratory of Linear Accelerator (LAL), IN2P3-CNRS, 91898 Orsay (France); Boscardin, M. [Fondazione Bruno Kessler (FBK-irst), 38050 Trento (Italy); Chaumat, V. [Laboratory of Linear Accelerator (LAL), IN2P3-CNRS, 91898 Orsay (France); Collazuol, G. [Scuola Normale Superiore (SNS), 56127 Pisa (Italy); INFN, Sezione di Pisa, 56127 Pisa (Italy); De La Taille, C. [Laboratory of Linear Accelerator (LAL), IN2P3-CNRS, 91898 Orsay (France); Del Guerra, A. [Universita di Pisa, Dipartimento di Fisica ' E. Fermi' , 56127 Pisa (Italy); INFN, Sezione di Pisa, 56127 Pisa (Italy); Llosa, G. [Universita di Pisa, Dipartimento di Fisica ' E. Fermi' , 56127 Pisa (Italy); Marcatili, S. [Universita di Pisa, Dipartimento di Fisica ' E. Fermi' , 56127 Pisa (Italy); INFN, Sezione di Pisa, 56127 Pisa (Italy); Melchiorri, M.; Piemonte, C. [Fondazione Bruno Kessler (FBK-irst), 38050 Trento (Italy); Puill, V. [Laboratory of Linear Accelerator (LAL), IN2P3-CNRS, 91898 Orsay (France); Tarolli, A. [Fondazione Bruno Kessler (FBK-irst), 38050 Trento (Italy); Vagnucci, J.F. [Laboratory of Linear Accelerator (LAL), IN2P3-CNRS, 91898 Orsay (France); Zorzi, N. [Fondazione Bruno Kessler (FBK-irst), 38050 Trento (Italy)
2009-10-21
This work reports on the electrical as well as the optical characterizations of a prototype matrix of Silicon PhotoMultipliers (SiPM). The electrical test consists of the measurement of the static (breakdown voltage, quenching resistance, post-breakdown dark current) as well as the dynamic characteristics (gain, dark count rate). The optical test consists of the estimation of the photon detection efficiency as a function of wavelength as well as operation voltage.
Characterization of a prototype matrix of Silicon PhotoMultipliers
International Nuclear Information System (INIS)
Dinu, N.; Barrillon, P.; Bazin, C.; Belcari, N.; Bisogni, M.G.; Bondil-Blin, S.; Boscardin, M.; Chaumat, V.; Collazuol, G.; De La Taille, C.; Del Guerra, A.; Llosa, G.; Marcatili, S.; Melchiorri, M.; Piemonte, C.; Puill, V.; Tarolli, A.; Vagnucci, J.F.; Zorzi, N.
2009-01-01
This work reports on the electrical as well as the optical characterizations of a prototype matrix of Silicon PhotoMultipliers (SiPM). The electrical test consists of the measurement of the static (breakdown voltage, quenching resistance, post-breakdown dark current) as well as the dynamic characteristics (gain, dark count rate). The optical test consists of the estimation of the photon detection efficiency as a function of wavelength as well as operation voltage.
Radial multipliers on reduced free products of operator algebras
DEFF Research Database (Denmark)
Haagerup, Uffe; Møller, Søren
2012-01-01
Let (A_i) be a family of unital C*-algebras (respectively, of von Neumann algebras) and φ: ℕ₀ → ℂ. We show that if a Hankel matrix related to φ is of trace class, then there exists a unique completely bounded map M_φ on the reduced free product of the A_i, which acts as a radial multiplier...
Study of the electric field inside microchannel plate multipliers
International Nuclear Information System (INIS)
Gatti, E.; Oba, K.; Rehak, P.
1982-01-01
The electric field inside high-gain microchannel plate multipliers was studied. The calculations were based directly on the solution of the Maxwell equations applied to the microchannel plate (MCP), rather than on the conventional lumped RC model. The results are important for explaining the performance of MCPs (1) under a pulsed bias voltage and (2) at high count rates. The results were tested experimentally, and a new method of MCP operation free from positive-ion feedback was demonstrated.
Neutralization of H-- in energetic collisions with multiply charged ions
International Nuclear Information System (INIS)
Melchert, F.; Benner, M.; Kruedener, S.; Schulze, R.; Meuser, S.; Huber, K.; Salzborn, E.; Uskov, D.B.; Presnyakov, L.P.
1995-01-01
Employing the crossed-beam technique, we have measured absolute cross sections for neutralization of H⁻ ions in collisions with multiply charged ions Ne^q+ (q ≤ 4) and Ar^q+, Xe^q+ (q ≤ 8) at center-of-mass energies ranging from 20 to 200 keV. It is found that the cross sections are independent of the target ion species. The data are in excellent agreement with quantum calculations. A universal scaling law for the neutralization cross section is given.
Estimates for Unimodular Multipliers on Modulation Hardy Spaces
Directory of Open Access Journals (Sweden)
Jiecheng Chen
2013-01-01
Full Text Available It is known that the unimodular Fourier multipliers e^{it|Δ|^{α/2}}, α > 0, are bounded on all modulation spaces M^s_{p,q} for 1 ≤ p,q ≤ ∞. We extend this boundedness to the case of all 0 < p,q ≤ ∞ and obtain local well-posedness for the Cauchy problem of some nonlinear partial differential equations with fundamental semigroup e^{it|Δ|^{α/2}}.
Safety analysis report for the Neutron Multiplier Facility, 329 Building
International Nuclear Information System (INIS)
Rieck, H.G.
1978-09-01
Neutron multiplication is a process wherein the flux of a neutron source such as ²⁵²Cf is enhanced by fission reactions that occur in a subcritical assemblage of fissile material. The multiplication factor of the device depends upon the consequences of neutron reactions with matter and is independent of the initial number of neutrons present. Safe utilization of such a device demands that the fissile material assemblage be maintained in a subcritical state throughout all normal and credibly abnormal conditions. Examples of things that can alter the multiplication factor (and degree of subcriticality) are temperature fluctuations, changes in moderator material such as voiding or composition, addition of fissile materials, and changes in assembly configuration. The Neutron Multiplier Facility (NMF) utilizes a multiplier-²⁵²Cf assembly to produce neutrons for activation analysis of organic and inorganic environmental samples and for on-line mass spectrometry analysis of fission products which diffuse from a stationary fissile target (≤ 4 g fissile material) located in the Neutron Multiplier. The NMF annex to the 329 Building provides close proximity to related counting equipment, and delay between sample irradiation and counting is minimized.
Neutron fluctuations in a multiplying medium randomly varying in time
Energy Technology Data Exchange (ETDEWEB)
Pal, L. [KFKI Atomic Energy Research Inst., Budapest (Hungary); Pazsit, I. [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Nuclear Engineering
2006-07-15
The master equation approach, which has traditionally been used for the calculation of neutron fluctuations in multiplying systems with constant parameters, is extended to a case when the parameters of the system change randomly in time. A forward-type master equation is considered for the case of a multiplying system whose properties jump randomly between two discrete states, both with and without a stationary external source. The first two factorial moments are calculated, including the covariance. This model can be considered as the unification of stochastic methods that were used either in a constant multiplying medium via the master equation technique, or in a fluctuating medium via the Langevin technique. The results obtained show a much richer characteristic of the zero power noise than that in constant systems. The results are relevant in medium-power subcritical nuclear systems where the zero power noise is still significant, but they also have a bearing on all types of branching processes, such as the evolution of biological systems, the spreading of epidemics, etc., which are set in a time-varying environment.
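As a rough illustration of why zero-power noise in a randomly switching medium is richer than in a constant one, the following toy Monte Carlo (all parameter values are invented, and the discrete-time branching model is far cruder than the paper's master-equation treatment) simulates a subcritical chain whose multiplication factor jumps between two states while a steady source injects neutrons, and reports the variance-to-mean ratio of the terminal population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete-time caricature (not the paper's master-equation model):
# the multiplication factor k jumps between two assumed subcritical states
# while a steady source injects `source` neutrons per step; terminal
# population statistics over many runs exhibit super-Poissonian noise.
k_states = (0.80, 0.95)   # hypothetical multiplication factors
switch_p = 0.1            # per-step probability that the medium jumps state
source, steps, runs = 10, 300, 500

counts = np.empty(runs)
for r in range(runs):
    n, s = 0, 0
    for _ in range(steps):
        if rng.random() < switch_p:
            s = 1 - s
        # total offspring of n neutrons approximated as Poisson(k * n)
        n = rng.poisson(k_states[s] * n) + source
    counts[r] = n

mean, var = counts.mean(), counts.var()
print(mean, var / mean)  # variance-to-mean ratio well above 1
```

Because the stationary population tracks source/(1 − k), the random jumps in k inflate the variance far beyond what branching alone produces in a constant medium, which is the qualitative point of the paper.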
Neutron fluctuations in a multiplying medium randomly varying in time
International Nuclear Information System (INIS)
Pal, L.; Pazsit, I.
2006-01-01
The master equation approach, which has traditionally been used for the calculation of neutron fluctuations in multiplying systems with constant parameters, is extended to a case when the parameters of the system change randomly in time. A forward type master equation is considered for the case of a multiplying system whose properties jump randomly between two discrete states, both with and without a stationary external source. The first two factorial moments are calculated, including the covariance. This model can be considered as the unification of stochastic methods that were used either in a constant multiplying medium via the master equation technique, or in a fluctuating medium via the Langevin technique. The results obtained show a much richer characteristic of the zero power noise than that in constant systems. The results are relevant in medium power subcritical nuclear systems where the zero power noise is still significant, but they also have a bearing on all types of branching processes, such as evolution of biological systems, spreading of epidemics etc, which are set in a time-varying environment
Charge amplification and transfer processes in the gas electron multiplier
International Nuclear Information System (INIS)
Bachmann, S.; Bressan, A.; Ropelewski, L.; Sauli, F.; Sharma, A.; Moermann, D.
1999-01-01
We report the results of systematic investigations on the operating properties of detectors based on the gas electron multiplier (GEM). The dependence of gain and charge collection efficiency on the external fields has been studied in a range of values for the hole diameter and pitch. The collection efficiency of ionization electrons into the multiplier, after an initial increase, reaches a plateau extending to higher values of drift field the larger the GEM voltage and its optical transparency. The effective gain, fraction of electrons collected by an electrode following the multiplier, increases almost linearly with the collection field, until entering a steeper parallel plate multiplication regime. The maximum effective gain attainable increases with the reduction in the hole diameter, stabilizing to a constant value at a diameter approximately corresponding to the foil thickness. Charge transfer properties appear to depend only on ratios of fields outside and within the channels, with no interaction between the external fields. With proper design, GEM detectors can be optimized to satisfy a wide range of experimental requirements: tracking of minimum ionizing particles, good electron collection with small distortions in high magnetic fields, improved multi-track resolution and strong ion feedback suppression in large volume and time-projection chambers
Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Cui, Jonathan J; Basques, Bryce A; Albert, Todd J; Grauer, Jonathan N
2018-04-09
The presence of missing data is a limitation of large datasets, including the National Surgical Quality Improvement Program (NSQIP). In addressing this issue, most studies use complete case analysis, which excludes cases with missing data, thus potentially introducing selection bias. Multiple imputation, a statistically rigorous approach that approximates missing data and preserves sample size, may be an improvement over complete case analysis. The present study aims to evaluate the impact of using multiple imputation in comparison with complete case analysis for assessing the associations between preoperative laboratory values and adverse outcomes following anterior cervical discectomy and fusion (ACDF) procedures. This is a retrospective review of prospectively collected data. Patients undergoing one-level ACDF were identified in NSQIP 2012-2015. Perioperative adverse outcome variables assessed included the occurrence of any adverse event, severe adverse events, and hospital readmission. Missing preoperative albumin and hematocrit values were handled using complete case analysis and multiple imputation. These preoperative laboratory levels were then tested for associations with 30-day postoperative outcomes using logistic regression. A total of 11,999 patients were included. Of this cohort, 63.5% of patients had missing preoperative albumin and 9.9% had missing preoperative hematocrit. When using complete case analysis, only 4,311 patients were studied. The removed patients were significantly younger, healthier, of a common body mass index, and male. Logistic regression analysis failed to identify either preoperative hypoalbuminemia or preoperative anemia as significantly associated with adverse outcomes. When employing multiple imputation, all 11,999 patients were included. Preoperative hypoalbuminemia was significantly associated with the occurrence of any adverse event and severe adverse events. Preoperative anemia was significantly associated with the
Imputing forest carbon stock estimates from inventory plots to a nationally continuous coverage
Directory of Open Access Journals (Sweden)
Wilson Barry Tyler
2013-01-01
Full Text Available Abstract The U.S. has been providing national-scale estimates of forest carbon (C) stocks and stock change to meet United Nations Framework Convention on Climate Change (UNFCCC) reporting requirements for years. Although these currently are provided as national estimates by pool and year to meet greenhouse gas monitoring requirements, there is growing need to disaggregate these estimates to finer scales to enable strategic forest management and monitoring activities focused on various ecosystem services such as C storage enhancement. Through application of a nearest-neighbor imputation approach, spatially extant estimates of forest C density were developed for the conterminous U.S. using the U.S.’s annual forest inventory. Results suggest that an existing forest inventory plot imputation approach can be readily modified to provide raster maps of C density across a range of pools (e.g., live tree to soil organic carbon) and spatial scales (e.g., sub-county to biome). Comparisons among imputed maps indicate strong regional differences across C pools. The C density of pools closely related to detrital input (e.g., dead wood) is often highest in forests suffering from recent mortality events, such as those in the northern Rocky Mountains (e.g., beetle infestations). In contrast, live-tree carbon density is often highest on the highest-quality forest sites, such as those found in the Pacific Northwest. Validation results suggest strong agreement between the estimates produced from the forest inventory plots and those from the imputed maps, particularly when the C pool is closely associated with the imputation model (e.g., aboveground live biomass and live tree basal area), with weaker agreement for detrital pools (e.g., standing dead trees). Forest inventory imputed plot maps provide an efficient and flexible approach to monitoring diverse C pools at national (e.g., UNFCCC) and regional scales (e.g., Reducing Emissions from Deforestation and Forest
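A bare-bones sketch of the nearest-neighbour imputation idea follows. The covariates, the linear carbon model, and all sizes are made up for illustration (the actual study imputes full plot attribute vectors from much richer predictors): each map pixel receives the mean carbon density of the k most similar inventory plots in covariate space.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: inventory plots carry standardized environmental
# covariates (say, elevation and mean temperature) plus a measured carbon
# density; map pixels carry covariates only.  k-nearest-neighbour
# imputation assigns each pixel the average of its k most similar plots.
n_plots, n_pixels, k = 500, 1000, 5
X_plots = rng.uniform(0, 1, (n_plots, 2))
c_density = 50 + 100 * X_plots[:, 0] + rng.normal(0, 5, n_plots)
X_pixels = rng.uniform(0, 1, (n_pixels, 2))

# Euclidean distance from every pixel to every plot in covariate space,
# then the mean carbon density of the k nearest plots per pixel
d = np.linalg.norm(X_pixels[:, None, :] - X_plots[None, :, :], axis=2)
nearest = np.argsort(d, axis=1)[:, :k]
c_imputed = c_density[nearest].mean(axis=1)
print(c_imputed.mean())
```

The same scheme scales to national rasters by replacing the brute-force distance matrix with a spatial index, and to multi-attribute imputation by copying whole plot records rather than a single response.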
Wood, Andrew R; Perry, John R B; Tanaka, Toshiko; Hernandez, Dena G; Zheng, Hou-Feng; Melzer, David; Gibbs, J Raphael; Nalls, Michael A; Weedon, Michael N; Spector, Tim D; Richards, J Brent; Bandinelli, Stefania; Ferrucci, Luigi; Singleton, Andrew B; Frayling, Timothy M
2013-01-01
Genome-wide association (GWA) studies have been limited by the reliance on common variants present on microarrays or imputable from the HapMap Project data. More recently, the completion of the 1000 Genomes Project has provided variant and haplotype information for several million variants derived from sequencing over 1,000 individuals. To help understand the extent to which more variants (including low frequency (1% ≤ MAF 1000 Genomes imputation, respectively, and 9 and 11 that reached a stricter, likely conservative, threshold of P1000 Genomes genotype data modestly improved the strength of known associations. Of 20 associations detected at P1000 Genomes imputed data and one was nominally more strongly associated in HapMap imputed data. We also detected an association between a low frequency variant and phenotype that was previously missed by HapMap-based imputation approaches. An association between rs112635299 and alpha-1 globulin near the SERPINA gene represented the known association between rs28929474 (MAF = 0.007) and alpha1-antitrypsin that predisposes to emphysema (P = 2.5×10⁻¹²). Our data provide important proof of principle that 1000 Genomes imputation will detect novel, low-frequency, large-effect associations.
DEFF Research Database (Denmark)
Dassonneville, R; Brøndum, Rasmus Froberg; Druet, T
2011-01-01
The purpose of this study was to investigate the imputation error and loss of reliability of direct genomic values (DGV) or genomically enhanced breeding values (GEBV) when using genotypes imputed from a 3,000-marker single nucleotide polymorphism (SNP) panel to a 50,000-marker SNP panel. Data...... of missing markers and prediction of breeding values were performed using 2 different reference populations in each country: either a national reference population or a combined EuroGenomics reference population. Validation for accuracy of imputation and genomic prediction was done based on national test...... with a national reference data set gave an absolute loss of 0.05 in mean reliability of GEBV in the French study, whereas a loss of 0.03 was obtained for reliability of DGV in the Nordic study. When genotypes were imputed using the EuroGenomics reference, a loss of 0.02 in mean reliability of GEBV was detected...
Moiseeva, A.; Jessurun, A.J.; Timmermans, H.J.P.; Stopher, P.
2016-01-01
Anastasia Moiseeva, Joran Jessurun and Harry Timmermans (2010), ‘Semiautomatic Imputation of Activity Travel Diaries: Use of Global Positioning System Traces, Prompted Recall, and Context-Sensitive Learning Algorithms’, Transportation Research Record: Journal of the Transportation Research Board,
Directory of Open Access Journals (Sweden)
Danai Jattawa
2016-04-01
Full Text Available The objective of this study was to investigate the accuracy of imputation from low density (LDC) to moderate density SNP chips (MDC) in a Thai Holstein-Other multibreed dairy cattle population. Dairy cattle with complete pedigree information (n = 1,244) from 145 dairy farms were genotyped with GeneSeek GGP20K (n = 570), GGP26K (n = 540) and GGP80K (n = 134) chips. After checking for single nucleotide polymorphism (SNP) quality, 17,779 SNP markers in common between the GGP20K, GGP26K, and GGP80K were used to represent MDC. Animals were divided into two groups, a reference group (n = 912) and a test group (n = 332). The SNP markers chosen for the test group were those located in positions corresponding to GeneSeek GGP9K (n = 7,652). The LDC to MDC genotype imputation was carried out using three different software packages, namely Beagle 3.3 (population-based algorithm), FImpute 2.2 (combined family- and population-based algorithms) and Findhap 4 (combined family- and population-based algorithms). Imputation accuracies within and across chromosomes were calculated as ratios of correctly imputed SNP markers to overall imputed SNP markers. Imputation accuracy for the three software packages ranged from 76.79% to 93.94%. FImpute had higher imputation accuracy (93.94%) than Findhap (84.64%) and Beagle (76.79%). Imputation accuracies were similar and consistent across chromosomes for FImpute, but not for Findhap and Beagle. Most chromosomes that showed either high (73% or low (80% imputation accuracies were the same chromosomes that had above and below average linkage disequilibrium (LD; defined here as the correlation between pairs of adjacent SNP within chromosomes less than or equal to 1 Mb apart). Results indicated that FImpute was more suitable than Findhap and Beagle for genotype imputation in this Thai multibreed population. Perhaps additional increments in imputation accuracy could be achieved by increasing the completeness of pedigree information.
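The accuracy measure used above, the ratio of correctly imputed SNP markers to all imputed markers, is simple to compute. The sketch below uses fabricated 0/1/2 genotypes and a toy "imputer" that corrupts roughly 10% of the held-out entries; it is only meant to pin down the definition, not to reproduce any of the packages compared in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def imputation_accuracy(true_geno, imputed_geno, mask):
    """Share of masked genotypes that were imputed exactly right."""
    return np.mean(true_geno[mask] == imputed_geno[mask])

# Fabricated genotypes coded 0/1/2 (10 animals x 100 SNP) and a toy
# "imputer" that corrupts roughly 10% of the masked entries.
true_geno = rng.integers(0, 3, size=(10, 100))
mask = rng.random(true_geno.shape) < 0.3     # 30% of entries held out
imputed = true_geno.copy()
errors = mask & (rng.random(true_geno.shape) < 0.1)
imputed[errors] = (imputed[errors] + 1) % 3  # always wrong where it errs

acc = imputation_accuracy(true_geno, imputed, mask)
print(acc)  # close to 0.9 by construction
```

Per-chromosome accuracy, as reported in the study, is the same ratio computed after restricting `mask` to the markers of one chromosome.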
DEFF Research Database (Denmark)
Andersen, Andreas; Rieckmann, Andreas
2016-01-01
In this article, we illustrate how to use mi impute chained with intreg to fit an analysis of covariance of censored and nondetectable immunological concentrations measured in a randomized pretest–posttest design.
2012-01-01
Background Multiple imputation as usually implemented assumes that data are Missing At Random (MAR), meaning that the underlying missing-data mechanism, given the observed data, is independent of the unobserved data. To explore the sensitivity of the inferences to departures from the MAR assumption, we applied the method proposed by Carpenter et al. (2007). This approach aims to approximate inferences under a Missing Not At Random (MNAR) mechanism by reweighting estimates obtained after multiple imputation, where the weights depend on the assumed degree of departure from the MAR assumption. Methods The method is illustrated with epidemiological data from a surveillance system of hepatitis C virus (HCV) infection in France during the 2001–2007 period. The subpopulation studied included 4343 HCV-infected patients who reported drug use. Risk factors for severe liver disease were assessed. After performing complete-case and multiple imputation analyses, we applied the sensitivity analysis to 3 risk factors of severe liver disease: past excessive alcohol consumption, HIV co-infection, and infection with HCV genotype 3. Results In these data, the association between severe liver disease and HIV was underestimated if, given the observed data, the chance of observing HIV status is higher when this is positive. Inferences for the two other risk factors were robust to plausible local departures from the MAR assumption. Conclusions We have demonstrated the practical utility of, and advocate, a pragmatic widely applicable approach to exploring plausible departures from the MAR assumption post multiple imputation. We have developed guidelines for applying this approach to epidemiological studies. PMID:22681630
Imputing historical statistics, soils information, and other land-use data to crop area
Perry, C. R., Jr.; Willis, R. W.; Lautenschlager, L.
1982-01-01
In foreign crop condition monitoring, satellite-acquired imagery is routinely used. To facilitate interpretation of this imagery, it is advantageous to have estimates of the crop types and their extent for small area units, i.e., grid cells on a map representing, at 60 deg latitude, an area nominally 25 by 25 nautical miles in size. The feasibility of imputing historical crop statistics, soils information, and other ancillary data to crop area for a province in Argentina is studied.
Kim, Kwangwoo; Bang, So-Young; Lee, Hye-Soon; Bae, Sang-Cheol
2014-01-01
Genetic variations of human leukocyte antigen (HLA) genes within the major histocompatibility complex (MHC) locus are strongly associated with disease susceptibility and prognosis for many diseases, including many autoimmune diseases. In this study, we developed a Korean HLA reference panel for imputing classical alleles and amino acid residues of several HLA genes. An HLA reference panel has potential for use in identifying and fine-mapping disease associations with the MHC locus in East Asian populations, including Koreans. A total of 413 unrelated Korean subjects were analyzed for single nucleotide polymorphisms (SNPs) at the MHC locus and six HLA genes, including HLA-A, -B, -C, -DRB1, -DPB1, and -DQB1. The HLA reference panel was constructed by phasing the 5,858 MHC SNPs, 233 classical HLA alleles, and 1,387 amino acid residue markers from 1,025 amino acid positions as binary variables. The imputation accuracy of the HLA reference panel was assessed by measuring concordance rates between imputed and genotyped alleles of the HLA genes from a subset of the study subjects and East Asian HapMap individuals. Average concordance rates were 95.6% and 91.1% at 2-digit and 4-digit allele resolutions, respectively. The imputation accuracy was minimally affected by SNP density of a test dataset for imputation. In conclusion, the Korean HLA reference panel we developed was highly suitable for imputing HLA alleles and amino acids from MHC SNPs in East Asians, including Koreans.
Directory of Open Access Journals (Sweden)
Kwangwoo Kim
Full Text Available Genetic variations of human leukocyte antigen (HLA) genes within the major histocompatibility complex (MHC) locus are strongly associated with disease susceptibility and prognosis for many diseases, including many autoimmune diseases. In this study, we developed a Korean HLA reference panel for imputing classical alleles and amino acid residues of several HLA genes. An HLA reference panel has potential for use in identifying and fine-mapping disease associations with the MHC locus in East Asian populations, including Koreans. A total of 413 unrelated Korean subjects were analyzed for single nucleotide polymorphisms (SNPs) at the MHC locus and six HLA genes, including HLA-A, -B, -C, -DRB1, -DPB1, and -DQB1. The HLA reference panel was constructed by phasing the 5,858 MHC SNPs, 233 classical HLA alleles, and 1,387 amino acid residue markers from 1,025 amino acid positions as binary variables. The imputation accuracy of the HLA reference panel was assessed by measuring concordance rates between imputed and genotyped alleles of the HLA genes from a subset of the study subjects and East Asian HapMap individuals. Average concordance rates were 95.6% and 91.1% at 2-digit and 4-digit allele resolutions, respectively. The imputation accuracy was minimally affected by SNP density of a test dataset for imputation. In conclusion, the Korean HLA reference panel we developed was highly suitable for imputing HLA alleles and amino acids from MHC SNPs in East Asians, including Koreans.
Design of a bovine low-density SNP array optimized for imputation.
Directory of Open Access Journals (Sweden)
Didier Boichard
Full Text Available The Illumina BovineLD BeadChip was designed to support imputation to higher density genotypes in dairy and beef breeds by including single-nucleotide polymorphisms (SNPs) that had a high minor allele frequency as well as uniform spacing across the genome, except at the ends of the chromosomes, where densities were increased. The chip also includes SNPs on the Y chromosome and mitochondrial DNA loci that are useful for determining subspecies classification and certain paternal and maternal breed lineages. The total number of SNPs was 6,909. Accuracy of imputation to Illumina BovineSNP50 genotypes using the BovineLD chip was over 97% for most dairy and beef populations. The BovineLD imputations were about 3 percentage points more accurate than those from the Illumina GoldenGate Bovine3K BeadChip across multiple populations. The improvement was greatest when neither parent was genotyped. The minor allele frequencies were similar across taurine beef and dairy breeds, as was the proportion of SNPs that were polymorphic. The new BovineLD chip should facilitate low-cost genomic selection in taurine beef and dairy cattle.
Imputation of microsatellite alleles from dense SNP genotypes for parental verification
Directory of Open Access Journals (Sweden)
Matthew eMcclure
2012-08-01
Full Text Available Microsatellite (MS) markers have recently been used for parental verification and are still the international standard despite higher cost, error rate, and turnaround time compared with single nucleotide polymorphism (SNP)-based assays. Despite domestic and international interest from producers and research communities, no viable means currently exist to verify parentage for an individual unless all familial connections were analyzed using the same DNA marker type (MS or SNP). A simple and cost-effective method was devised to impute MS alleles from SNP haplotypes within breeds. For some MS, imputation results may allow inference across breeds. A total of 347 dairy cattle representing 4 dairy breeds (Brown Swiss, Guernsey, Holstein, and Jersey) were used to generate reference haplotypes. This approach has been verified (>98% accurate) for imputing the International Society of Animal Genetics (ISAG) recommended panel of 12 MS for cattle parentage verification across a validation set of 1,307 dairy animals. Implementation of this method will allow producers and breed associations to transition to SNP-based parentage verification utilizing MS genotypes from historical data on parents where SNP genotypes are missing. This approach may be applicable to additional cattle breeds and other species that wish to migrate from MS- to SNP-based parental verification.
Missing Data Imputation of Solar Radiation Data under Different Atmospheric Conditions
Turrado, Concepción Crespo; López, María del Carmen Meizoso; Lasheras, Fernando Sánchez; Gómez, Benigno Antonio Rodríguez; Rollé, José Luis Calvo; de Cos Juez, Francisco Javier
2014-01-01
Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37% while that for the MLR it was 28.19%, and 31.68% for the IDW. PMID:25356644
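The chained-equations idea behind MICE can be sketched in a few lines of Python. This is a deterministic, regression-only caricature with synthetic "sensor" data (three correlated series standing in for neighbouring stations); the MICE method proper, as used in the paper, also draws imputations from predictive distributions and produces multiple completed datasets rather than one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified chained-equations imputation: cycle through the variables,
# regress each on the others, and replace its missing entries with the
# regression predictions.  (Real MICE adds stochastic draws; omitted.)
def chained_imputation(X, n_cycles=10):
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):          # initial fill: column means
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_cycles):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

# Three correlated "stations" measuring a shared signal, 15% holes
signal = rng.normal(0, 1, 300)
data = np.column_stack([signal + rng.normal(0, 0.2, 300) for _ in range(3)])
holes = rng.random(data.shape) < 0.15
holes[holes.all(axis=1), 0] = False      # keep one observation per row
data_missing = data.copy()
data_missing[holes] = np.nan

filled = chained_imputation(data_missing)
rmse = np.sqrt(np.mean((filled[holes] - data[holes]) ** 2))
rmse_mean = np.sqrt(np.mean((np.nanmean(data_missing, 0) - data)[holes] ** 2))
print(rmse, rmse_mean)  # chained regression beats plain mean imputation
```

The gap to mean imputation here mirrors, qualitatively, the RMSE advantage the paper reports for MICE over IDW and MLR: neighbouring sensors carry most of the information about a missing reading.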
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.
Allen, Genevera I; Tibshirani, Robert
2010-06-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, the columns, or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
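For a Gaussian working model, the imputation step of an EM-type algorithm is a conditional mean. The stripped-down sketch below uses a plain multivariate normal on synthetic one-factor data, with only a small ridge for stability; it omits the transposable (row-and-column) structure and the inverse-covariance penalties that are the substance of the paper, but shows the core update E[x_miss | x_obs].

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_impute(X, n_iter=20):
    """Iteratively replace NaNs by their conditional mean under a fitted
    multivariate normal: E[x_m | x_o] = mu_m + S_mo S_oo^-1 (x_o - mu_o)."""
    X = X.copy()
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[miss] = col_mean[np.where(miss)[1]]        # crude initial fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # tiny ridge
        for i in range(X.shape[0]):
            m, o = miss[i], ~miss[i]
            if m.any() and o.any():
                X[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                    S[np.ix_(o, o)], X[i, o] - mu[o])
    return X

# Synthetic one-factor data with fixed loadings, 20% of entries removed
factor = rng.normal(size=(200, 1))
data = factor @ np.array([[2.0, -1.5, 1.0, 0.5]]) \
       + 0.1 * rng.normal(size=(200, 4))
holes = rng.random(data.shape) < 0.2
holes[holes.all(axis=1), 0] = False          # keep one observation per row
X_obs = np.where(holes, np.nan, data)

X_hat = gaussian_impute(X_obs)
err = np.sqrt(np.mean((X_hat[holes] - data[holes]) ** 2))
err_mean = np.sqrt(np.mean((np.nanmean(X_obs, 0)[np.where(holes)[1]]
                            - data[holes]) ** 2))
print(err, err_mean)  # conditional-mean imputation beats column-mean fill
```

The paper's contribution is, in effect, to make the covariance estimates inside this loop well-behaved in high dimensions (regularized, and shared across both matrix orientations), which is what lets the EM-type scheme scale to microarray- and Netflix-sized matrices.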
Missing Data Imputation of Solar Radiation Data under Different Atmospheric Conditions
Directory of Open Access Journals (Sweden)
Concepción Crespo Turrado
2014-10-01
Full Text Available Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field, such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37%, while that for MLR was 28.19% and that for IDW was 31.68%.
Missing data imputation of solar radiation data under different atmospheric conditions.
Turrado, Concepción Crespo; López, María Del Carmen Meizoso; Lasheras, Fernando Sánchez; Gómez, Benigno Antonio Rodríguez; Rollé, José Luis Calvo; Juez, Francisco Javier de Cos
2014-10-29
Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field, such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37%, while that for MLR was 28.19% and that for IDW was 31.68%.
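As a sketch of the chained-equations idea described above, each sensor's missing readings can be regressed on the remaining sensors and the cycle repeated until the imputations stabilize. Below, scikit-learn's IterativeImputer (a MICE-style implementation, not MeteoGalicia's actual pipeline) fills simulated dropouts in a toy nine-station network; the station values are invented for illustration:

```python
# MICE-style imputation for a multi-station sensor network, using
# scikit-learn's IterativeImputer (an implementation of chained equations).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Simulated 10-minute irradiance readings (W/m^2) from 9 correlated stations.
n_obs, n_stations = 500, 9
base = rng.uniform(100, 900, size=(n_obs, 1))
readings = base + rng.normal(0, 30, size=(n_obs, n_stations))

# Knock out ~5% of values to mimic sensor dropouts.
mask = rng.random(readings.shape) < 0.05
incomplete = readings.copy()
incomplete[mask] = np.nan

# Each station is regressed on the others, cycling until convergence.
imputer = IterativeImputer(max_iter=10, random_state=0)
completed = imputer.fit_transform(incomplete)

rmse = np.sqrt(np.mean((completed[mask] - readings[mask]) ** 2))
print(f"RMSE on imputed cells: {rmse:.1f} W/m^2")
```

Because the stations are mutually correlated, the regression-based fill recovers the dropped cells far more accurately than a per-station mean would.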
Data Editing and Imputation in Business Surveys Using “R”
Directory of Open Access Journals (Sweden)
Elena Romascanu
2014-06-01
Full Text Available Purpose – Missing data are a recurring problem that can cause bias or lead to inefficient analyses. The objective of this paper is a direct comparison of two statistical software packages, R and SPSS, in order to take full advantage of existing automated methods for data editing and imputation in business surveys (with a proper design of consistency rules as a partial alternative to manual editing of data). Approach – Different methods for editing survey data are compared in R using the ‘editrules’ and ‘survey’ packages, which implement transformations commonly used in official statistics, together with visualization of missing-value patterns using the ‘Amelia’ and ‘VIM’ packages and imputation approaches for longitudinal data using ‘VIMGUI’; the performance of SPSS on the same tasks is then compared. Findings – Data on business statistics received by NISs (National Institutes of Statistics) are not ready for direct analysis due to in-record inconsistencies, errors and missing values in the collected data sets. The appropriate automated methods from R packages offer the ability to flag the erroneous fields in edit-violating records and to verify the results after the imputation of missing values, providing users a flexible, less time-consuming and more easily automated approach in R than with SPSS macro syntax, although macros can be very handy.
Auditing the multiply-related concepts within the UMLS.
Mougin, Fleur; Grabar, Natalia
2014-10-01
This work focuses on multiply-related Unified Medical Language System (UMLS) concepts, that is, concepts associated through multiple relations. The relations involved in such situations are audited to determine whether they are provided by source vocabularies or result from the integration of these vocabularies within the UMLS. We study the compatibility of the multiple relations which associate the concepts under investigation and try to explain the reason why they co-occur. Towards this end, we analyze the relations both at the concept and term levels. In addition, we randomly select 288 concepts associated through contradictory relations and manually analyze them. At the UMLS scale, only 0.7% of combinations of relations are contradictory, while homogeneous combinations are observed in one-third of situations. At the scale of source vocabularies, one-third do not contain more than one relation between the concepts under investigation. Among the remaining source vocabularies, seven of them mainly present multiple non-homogeneous relations between terms. These results are available at: http://www.isped.u-bordeaux2.fr/ArticleJAMIA/results_multiply_related_concepts.aspx. Manual analysis was useful to explain the conceptualization difference in relations between terms across source vocabularies. The exploitation of source relations was helpful for understanding why some source vocabularies describe multiple relations between a given pair of terms. Published by the BMJ Publishing Group Limited.
Fission multipliers for D-D/D-T neutron generators
International Nuclear Information System (INIS)
Lou, T.P.; Vujic, J.L.; Koivunoro, H.; Reijonen, J.; Leung, K.-N.
2003-01-01
A compact D-D/D-T fusion based neutron generator is being designed at the Lawrence Berkeley National Laboratory to have a potential yield of 10^12 D-D n/s and 10^14 D-T n/s. Because of its high neutron yield and compact size (∼20 cm in diameter by 4 cm long), this neutron generator design will be suitable for many applications. However, some applications require the higher flux available from nuclear reactors and spallation neutron sources operated with GeV proton beams. In this study, a subcritical fission multiplier with k_eff of 0.98 is coupled with the compact neutron generators in order to increase the neutron flux output. We have chosen two applications to show the gain in flux due to the use of fission multipliers: in-core irradiation and out-of-core irradiation. For the in-core irradiation, we have shown that a gain of ∼25 can be achieved in a positron production system using the D-T generator. For the out-of-core irradiation, a gain of ∼17 is obtained in Boron Neutron Capture Therapy (BNCT) using a D-D neutron generator. The total number of fission neutrons generated by a source neutron in a fission multiplier with k_eff of 0.98 is ∼50. For the out-of-core irradiation, the theoretical maximum net multiplication is ∼30 due to the absorption of neutrons in the fuel. A discussion of the achievable multiplication and the theoretical multiplication is presented in this paper.
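The ∼50 figure quoted above follows from the standard source-multiplication relation for a subcritical assembly: successive fission generations form a geometric series in k_eff. A quick numerical check (the series formula is standard reactor physics, not taken from the paper):

```python
# Back-of-envelope check of the ~50x figure quoted for k_eff = 0.98:
# in a subcritical assembly each source neutron spawns, on average,
# k, k^2, k^3, ... fission neutrons per generation -- a geometric series.
k_eff = 0.98
total_neutrons = 1.0 / (1.0 - k_eff)      # source neutron plus all progeny
fission_progeny = k_eff / (1.0 - k_eff)   # fission neutrons only
print(round(total_neutrons), round(fission_progeny))
```

With k_eff = 0.98 the series sums to about 50 neutrons per source neutron, matching the abstract; absorption in the fuel is what lowers the usable out-of-core multiplication to ∼30.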
Spot Pricing When Lagrange Multipliers Are Not Unique
DEFF Research Database (Denmark)
Feng, Donghan; Xu, Zhao; Zhong, Jin
2012-01-01
Classical spot pricing theory is based on multipliers of the primal problem of an optimal market dispatch, i.e., the solution of the dual problem. However, the dual problem of market dispatch may yield multiple solutions. In these circumstances, spot pricing or any standard pricing practice based...... on a strict extension of the principles of spot pricing and surplus allocation, we propose a new pricing methodology that can yield unique, impartial, and robust solution. The new method has been analyzed and compared with other pricing approaches in accordance with spot pricing theory. Case studies support...
Monte Carlo technique for local perturbations in multiplying systems
International Nuclear Information System (INIS)
Bernnat, W.
1974-01-01
The use of the Monte Carlo method for the calculation of reactivity perturbations in multiplying systems due to changes in geometry or composition requires a correlated sampling technique to make such calculations economical or in the case of very small perturbations even feasible. The technique discussed here is suitable for local perturbations. Very small perturbation regions will be treated by an adjoint mode. The perturbation of the source distribution due to the changed system and its reaction on the reactivity worth or other values of interest is taken into account by a fission matrix method. The formulation of the method and its application are discussed. 10 references. (U.S.)
Practical model for the calculation of multiply scattered lidar returns
International Nuclear Information System (INIS)
Eloranta, E.W.
1998-01-01
An equation to predict the intensity of the multiply scattered lidar return is presented. Both the scattering cross section and the scattering phase function can be specified as a function of range. This equation applies when the cloud particles are larger than the lidar wavelength. This approximation considers photon trajectories with multiple small-angle forward-scattering events and one large-angle scattering that directs the photon back toward the receiver. Comparisons with Monte Carlo simulations, exact double-scatter calculations, and lidar data demonstrate that this model provides accurate results. copyright 1998 Optical Society of America
Statistics of electron multiplication in multiplier phototube: iterative method
International Nuclear Information System (INIS)
Grau Malonda, A.; Ortiz Sanchez, J.F.
1985-01-01
An iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered that correspond to the following ways of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of stages between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (author)
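The cascade analyzed above can also be approximated by Monte Carlo: treat each dynode as emitting a random number of secondaries per incident electron. The Poisson offspring assumption below is ours (the paper's iterative method propagates exact dynode response distributions instead of sampling), but it reproduces the expected mean gain of g^n after n stages:

```python
# Monte Carlo sketch of electron multiplication in a photomultiplier:
# each electron striking a dynode releases a Poisson-distributed number
# of secondaries (Poisson is our simplifying assumption).
import numpy as np

rng = np.random.default_rng(1)

def cascade(n_stages, gain, trials=20000):
    electrons = np.ones(trials, dtype=np.int64)   # one photoelectron each
    for _ in range(n_stages):
        # each electron independently yields Poisson(gain) secondaries;
        # the sum of k iid Poisson(gain) draws is Poisson(gain * k)
        electrons = rng.poisson(gain * electrons)
    return electrons

for stages in (1, 3, 5):
    out = cascade(stages, 2.5)
    print(stages, out.mean(), 2.5 ** stages)  # empirical mean ~ gain^stages
```

The spread of the simulated counts also grows faster than the mean, which is why the paper tracks variance, skewness and excess kurtosis stage by stage.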
Science with multiply-charged ions at Brookhaven National Laboratory
International Nuclear Information System (INIS)
Jones, K.W.; Johnson, B.M.; Meron, M.; Thieberger, P.
1987-01-01
The production of multiply-charged heavy ions at Brookhaven National Laboratory and their use in different types of experiments are discussed. The main facilities that are used are the Double MP Tandem Van de Graaff and the National Synchrotron Light Source. The capabilities of a versatile Atomic Physics Facility based on a combination of the two facilities and a possible new heavy-ion storage ring are summarized. It is emphasized that the production of heavy ions and the relevant science necessitate very flexible and diverse apparatus.
Statistics of electron multiplication in a multiplier phototube; Iterative method
International Nuclear Information System (INIS)
Ortiz, J. F.; Grau, A.
1985-01-01
In the present paper an iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered that correspond to the following ways of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of stages between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs
Chromatographic analysis and purification of multiply tritium-labelled eicosanoids
International Nuclear Information System (INIS)
Shevchenko, V.P.; Nagaev, I.Yu.; Myasoedov, N.F.
1988-01-01
A comparative study of different chromatographic techniques (gas-liquid (GLC), thin-layer (TLC), liquid (LC), high-pressure liquid (HPLC) chromatography) is presented. They were applied to the analysis and preparative purification of tritium-labelled eicosanoids with a molar radioactivity of 1.8-8.8 TBq/mmol, obtained by selective hydrogenation and by chemical or enzymic methods. The possibility of analyzing reaction mixtures and isolating individual multiply labelled eicosanoids with a chemical and radiochemical purity of 95-98% was demonstrated. Special features of HPLC for high molar radioactivity eicosanoids are considered. (author) 9 refs.; 6 tabs
Charge-transfer properties in the gas electron multiplier
International Nuclear Information System (INIS)
Han, Sanghyo; Kim, Yongkyun; Cho, Hyosung
2004-01-01
The charge transfer properties of a gas electron multiplier (GEM) were systematically investigated over a broad range of electric field configurations. The electron collection efficiency and the charge sharing were found to depend on the external fields, as well as on the GEM voltage. The electron collection efficiency increased with the collection field up to 90%, but was essentially independent of the drift field strength. A double conical GEM showed a 10% gain increase with time due to surface charging by avalanche ions, whereas this effect was eliminated with the cylindrical GEM. The positive-ion feedback is also estimated. (author)
233U breeding and neutron multiplying blankets for fusion reactors
International Nuclear Information System (INIS)
Cook, A.G.; Maniscalco, J.A.
1975-01-01
In this work, along with a previous paper, three possible uses of 14-MeV deuterium-tritium fusion neutrons are investigated: energy production, neutron multiplication, and fissile-fuel breeding. The results presented include neutronic studies of fissioning and nonfissioning thorium systems, tritium breeding systems, various fuel options (UO₂, UC, UC₂, etc.), and uranium as well as refractory metal first-wall neutron-multiplying regions. A brief energy balance and an estimate of potential revenues for fusion devices are given to help illustrate the potentials of these designs.
Determination of Ultimate Torque for Multiply Connected Cross Section Rod
Directory of Open Access Journals (Sweden)
V. L. Danilov
2015-01-01
Full Text Available The aim of this work is to determine the load-carrying capability of a rod of multiply connected cross section. The calculation is based on the model of ideal plasticity of the material, so that the desired ultimate torque is the torque at which the entire cross section goes into the plastic state. The article discusses a cylindrical rod of multiply connected cross section. To satisfy the equilibrium equation and the condition of plasticity simultaneously, two stress functions Ф and φ are introduced. It is proved by mathematical transformations that Ф is constant along each contour, and a formula to find its values on the contours is obtained. The paper also presents the rationale for the lines of stress discontinuity and derives relationships that yield the equations of the break lines for simple interactions of neighboring contours, such as two straight lines, a straight line and a circle, and circles of differing sign of curvature. After substitution of the stress function Ф into the boundary condition at the end, and further mathematical transformations, a formula is obtained to determine the ultimate torque for the rod of multiply connected cross section. The application of the ultimate-torque formula is studied using doubly connected and triply connected cross-section rods as examples. For the doubly connected cross-section rod, the paper offers a formula for the torque as a function of the radius of the rod, the radius of the hole, and the distance between their centers. It also clearly demonstrates the dependence of the torque both on the ratio of the radii and on the displacement of the hole, and shows that the torque is influenced more by the displacement of the hole than by the ratio of the radii. For the triply connected cross-section rod, the paper notes an integration feature consisting in the choice of the coordinate system. As an example, the ultimate torque is found by two methods: analytically and by 3D modeling. The method of 3D modeling is based on the Nadai
Quick, “Imputation-free” meta-analysis with proxy-SNPs
Directory of Open Access Journals (Sweden)
Meesters Christian
2012-09-01
Full Text Available Abstract Background Meta-analysis (MA) is widely used to pool genome-wide association studies (GWASes) in order to (a) increase the power to detect strong or weak genotype effects or (b) serve as a method of result verification. As a consequence of differing SNP panels among genotyping chips, imputation is the method of choice within GWAS consortia to avoid losing too many SNPs in a MA. YAMAS (Yet Another Meta Analysis Software), however, enables cross-GWAS conclusions prior to finished and polished imputation runs, which are time-consuming. Results Here we present a fast method to avoid forfeiting SNPs present in only a subset of studies, without relying on imputation. This is accomplished by using reference linkage disequilibrium data from the 1,000 Genomes/HapMap projects to find proxy-SNPs, together with in-phase alleles, for SNPs missing in at least one study. MA is conducted by combining association effect estimates of a SNP and those of its proxy-SNPs. Our algorithm is implemented in the MA software YAMAS. Association results from GWAS analysis applications can be used as input files for MA, tremendously speeding up MA compared to the conventional imputation approach. We show that our proxy algorithm is well-powered and yields valuable ad hoc results, possibly providing an incentive for follow-up studies. We propose our method as a quick screening step prior to imputation-based MA, as well as an additional main approach for studies without available reference data matching the ethnicities of study participants. As a proof of principle, we analyzed six dbGaP Type II Diabetes GWAS and found that the proxy algorithm clearly outperforms naïve MA on the p-value level: for 17 out of 23 we observe an improvement on the p-value level by a factor of more than two, and a maximum improvement by a factor of 2127. Conclusions YAMAS is an efficient and fast meta-analysis program which offers various methods, including conventional MA as well as inserting proxy
Auger transitions in singly and multiply ionized atoms
International Nuclear Information System (INIS)
Mehlhorn, W.
1978-01-01
Some recent progress in Auger and autoionizing electron spectrometry of free metal atoms and of multiply ionized atoms is reviewed. The differences which arise between the spectra of atoms in the gaseous and the solid state are due to solid state effects. This will be shown for Cd as an example. The super Coster-Kronig transitions 3p-3d² (hole notation) and Coster-Kronig transitions 3p-3d4s have been measured and compared with free-atom calculations for free Zn atoms. The experimental width Γ(3p) = (2.1±0.2) eV found for the free atom agrees with the value obtained for solid Zn but is considerably smaller than the theoretical value for the free atom. Autoionizing spectra of Na following an L-shell excitation or ionization by different particles are compared and discussed. The nonisotropic angular distribution of electrons from the transition 2p⁵3s² ²P₃/₂ → 2p⁶ + e⁻ is compared with theoretical calculations. Two examples of Auger spectrometry of multiply ionized atoms are given: (1) excitation of neon target atoms by light and heavy ions, and (2) excitation of projectile ions Be⁺ and B⁺ in single gas collisions with CH₄. A strong alignment of the excited atoms has also been found here.
A High-Speed Design of Montgomery Multiplier
Fan, Yibo; Ikenaga, Takeshi; Goto, Satoshi
With the increase of key lengths used in public-key cryptographic algorithms such as RSA and ECC, the speed of Montgomery multiplication becomes a bottleneck. This paper proposes a high-speed design of a Montgomery multiplier. Firstly, a modified scalable high-radix Montgomery algorithm is proposed to reduce the critical path. Secondly, a high-radix clock-saving dataflow is proposed to support high-radix operation with a one-clock-cycle delay in the dataflow. Finally, a hardware-reused architecture is proposed to reduce the hardware cost, and a parallel radix-16 design of the data path is proposed to accelerate the speed. Using the HHNEC 0.25 μm standard cell library, the implementation results show that the total cost of the Montgomery multiplier is 130 kGates, the clock frequency is 180 MHz, and the throughput of 1024-bit RSA encryption is 352 kbps. This design is suitable for high-speed RSA or ECC encryption/decryption. As a scalable design, it supports encryption/decryption of any key length up to the size of the on-chip memory.
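As context for the algorithm the multiplier implements, here is a minimal software sketch of Montgomery multiplication (REDC). It illustrates only the underlying arithmetic; the paper's contribution is the high-radix hardware architecture, and the modulus and operands below are invented for illustration:

```python
# Minimal software sketch of Montgomery multiplication (REDC).
def montgomery_setup(n, r_bits):
    r = 1 << r_bits
    # n' with n * n' ≡ -1 (mod R), via Python's built-in modular inverse
    n_prime = (-pow(n, -1, r)) % r
    return r, n_prime

def mont_mul(a_bar, b_bar, n, r_bits, n_prime):
    """REDC(a_bar * b_bar): returns a*b*R^-1 mod n for Montgomery residues."""
    r_mask = (1 << r_bits) - 1
    t = a_bar * b_bar
    m = ((t & r_mask) * n_prime) & r_mask     # m = t * n' mod R
    u = (t + m * n) >> r_bits                 # exact division by R
    return u - n if u >= n else u

n, r_bits = 2339, 12                          # odd toy modulus, R = 2^12 > n
r, n_prime = montgomery_setup(n, r_bits)
a, b = 1234, 5678 % n
a_bar, b_bar = (a * r) % n, (b * r) % n       # into the Montgomery domain
c_bar = mont_mul(a_bar, b_bar, n, r_bits, n_prime)
c = mont_mul(c_bar, 1, n, r_bits, n_prime)    # back out of the domain
print(c == (a * b) % n)                       # True
```

The division by R is a bit shift rather than a trial division by n, which is what makes the operation attractive for hardware; high-radix designs like the paper's process several of these digits per clock cycle.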
Imaging moving objects from multiply scattered waves and multiple sensors
International Nuclear Information System (INIS)
Miranda, Analee; Cheney, Margaret
2013-01-01
In this paper, we develop a linearized imaging theory that combines the spatial, temporal and spectral components of multiply scattered waves as they scatter from moving objects. In particular, we consider the case of multiple fixed sensors transmitting and receiving information from multiply scattered waves. We use a priori information about the multipath background. We use a simple model for multiple scattering, namely scattering from a fixed, perfectly reflecting (mirror) plane. We base our image reconstruction and velocity estimation technique on a modification of a filtered backprojection method that produces a phase-space image. We plot examples of point-spread functions for different geometries and waveforms, and from these plots, we estimate the resolution in space and velocity. Through this analysis, we are able to identify how the imaging system depends on parameters such as bandwidth and number of sensors. We ultimately show that enhanced phase-space resolution for a distribution of moving and stationary targets in a multipath environment may be achieved using multiple sensors. (paper)
Multiplied effect of heat and radiation in chemical stress relaxation
International Nuclear Information System (INIS)
Ito, Masayuki
1981-01-01
Useful knowledge about the deterioration of rubber due to radiation can be obtained by measuring chemical stress relaxation. As an example, the rubber coating of cables in a reactor containment vessel is estimated to be irradiated by weak radiation at temperatures between 60 and 90 deg C for about 40 years. In such cases, it is desirable to establish an accelerated test method for the deterioration. The author previously showed that the law of time-dose-rate conversion holds in the case of radiation. In this study, the chemical stress relaxation of rubber was measured under the simultaneous application of heat and radiation, and a multiplied effect of heat and radiation on the stress relaxation rate was found. A factor of multiplication of heat and radiation was therefore proposed to describe quantitatively the degree of the multiplied effect. The chloroprene rubber used was supplied by Hitachi Cable Co., Ltd. The experimental method and the results are reported. The multiplied effect of heat and radiation is not caused by the direct scission of molecular chains by radiation; rather, it arises from the temperature dependence of the rates of the various reactions by which activated species lead to chain scission through a complex reaction mechanism, and from the temperature dependence of the diffusion rate of oxygen in rubber. (Kako, I.)
Image restorations constrained by a multiply exposed picture
International Nuclear Information System (INIS)
Breedlove, J.R. Jr.; Kruger, R.P.; Trussell, H.J.; Hunt, B.R.
1977-01-01
There are a number of possible industrial and scientific applications of nanosecond cineradiographs. While the technology exists to produce closely spaced pulses of x rays for this application, the quality of the time-resolved radiographs is severely limited. The limitations arise from the necessity of using a fluorescent screen to convert the transmitted x rays to light and then using electro-optical imaging systems to gate and to record the images with conventional high-speed cameras. It has been proposed that in addition to the time-resolved images, a conventional multiply-exposed radiograph be obtained. Simulations are used to demonstrate that the additional information supplied by the multiply-exposed radiograph can be used to improve the quality of digital image restorations of the time-resolved pictures over what could be achieved with the degraded images alone. Because of the need for image registration and rubber sheet transformations, this problem is one which can best be solved on a digital, as opposed to an optical, computer
Directory of Open Access Journals (Sweden)
Katya L Masconi
Full Text Available Imputation techniques used to handle missing data are based on the principle of replacement. It is widely advocated that multiple imputation is superior to other imputation methods; however, studies have suggested that simple methods for filling missing data can be just as accurate as complex methods. The objective of this study was to implement a number of simple and more complex imputation methods, and assess the effect of these techniques on the performance of undiagnosed diabetes risk prediction models during external validation. Data from the Cape Town Bellville-South cohort served as the basis for this study. Imputation methods and models were identified via recent systematic reviews. Models' discrimination was assessed and compared using the C-statistic and non-parametric methods, before and after recalibration through simple intercept adjustment. The study sample consisted of 1256 individuals, of whom 173 were excluded due to previously diagnosed diabetes. Of the final 1083 individuals, 329 (30.4%) had missing data. Family history had the highest proportion of missing data (25%). Imputation of the outcome, undiagnosed diabetes, was highest in stochastic regression imputation (163 individuals). Overall, deletion resulted in the lowest model performances, while simple imputation yielded the highest C-statistic for the Cambridge Diabetes Risk model, Kuwaiti Risk model, Omani Diabetes Risk model and Rotterdam Predictive model. Multiple imputation only yielded the highest C-statistic for the Rotterdam Predictive model, which was matched by simpler imputation methods. Deletion was confirmed as a poor technique for handling missing data. However, despite the emphasized disadvantages of simpler imputation methods, this study showed that implementing these methods results in similar predictive utility for undiagnosed diabetes when compared to multiple imputation.
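Model discrimination in this study is summarized by the C-statistic; for a binary outcome it equals the probability that a randomly chosen case is ranked above a randomly chosen non-case. A minimal reference implementation on toy scores (not the study's data):

```python
# Minimal C-statistic (area under the ROC curve) for a binary outcome.
def c_statistic(y_true, scores):
    """Probability that a random case outscores a random non-case,
    counting ties as 1/2."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0, 0]          # 1 = undiagnosed diabetes (toy labels)
risk = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
print(c_statistic(y, risk))
```

A value of 0.5 is no better than chance and 1.0 is perfect ranking, which is why the study compares imputation methods by the C-statistic each one yields for a given risk model.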
Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.
2013-01-01
Summary Background and objectives Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply an assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m² (eGFR 75). Design, setting, participants, & measurements From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict the likelihood of missing BCr. Propensity scoring identified 6502 patients with the highest likelihood of missing BCr among the 13,003 patients with known BCr to simulate a “missing” data scenario while preserving the actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI was compared with that of eGFR 75. Results All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001), as was misclassification of AKI staging (full multiple imputation + serum creatinine) (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreasing sensitivity relative to eGFR 75. Conclusions Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
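The eGFR 75 surrogate above back-solves a baseline creatinine from an assumed eGFR of 75 ml/min per 1.73 m². A sketch of that back-calculation using the four-variable MDRD equation; the constants are the commonly quoted ones and are an assumption here, to be checked against the original reference:

```python
# Back-calculating a surrogate baseline creatinine from an assumed
# eGFR of 75 ml/min per 1.73 m^2, via the four-variable MDRD equation
# (constants as commonly quoted; verify against the original source).
def mdrd_egfr(scr, age, female=False, black=False):
    egfr = 175.0 * scr ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def back_calculated_bcr(age, female=False, black=False, target_egfr=75.0):
    # invert the MDRD equation for serum creatinine at the assumed eGFR
    k = 175.0 * age ** -0.203
    if female:
        k *= 0.742
    if black:
        k *= 1.212
    return (k / target_egfr) ** (1.0 / 1.154)

bcr = back_calculated_bcr(age=60, female=True)
print(round(bcr, 2), round(mdrd_egfr(bcr, 60, female=True), 1))
```

The study's point is that this single deterministic value ignores patient-level information that multiple imputation can exploit, which is why the imputed estimates misclassify AKI less often.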
He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L
2018-04-01
SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies (MAF), and SNP-trait associations either for single traits independently or for all three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.
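Imputation accuracy of the kind reported above (96.2-98.2%) is typically computed as the concordance between imputed and observed genotype calls. A toy illustration with invented 0/1/2 allele-dosage matrices (in practice only the cells hidden before imputation are scored):

```python
# Genotype imputation accuracy as call concordance (0/1/2 dosage coding).
import numpy as np

observed = np.array([[0, 1, 2, 1],
                     [2, 2, 0, 1],
                     [1, 0, 1, 2]])
imputed = np.array([[0, 1, 2, 1],
                    [2, 1, 0, 1],   # one miscalled genotype
                    [1, 0, 1, 2]])

concordance = (observed == imputed).mean()
print(f"{concordance:.1%}")  # 11 of 12 calls agree
```

The study's key observation is that a high overall concordance is not sufficient: errors concentrated on trait-associated SNPs degrade genomic prediction more than the same error rate spread over neutral markers.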
Tritium transport and release from lithium ceramic breeder materials
International Nuclear Information System (INIS)
Johnson, C.E.; Kopasz, J.P.; Tam, S.W.
1994-01-01
In an operating fusion reactor, the tritium breeding blanket will reach a condition in which the tritium release rate equals the production rate. The tritium release rate must be fast enough that the tritium inventory in the blanket does not become excessive. Slow tritium release will result in a large tritium inventory, which is unacceptable from both economic and safety viewpoints. As a consequence, considerable effort has been devoted to understanding the tritium release mechanism from ceramic breeders and beryllium neutron multipliers through theoretical, laboratory, and in-reactor studies. This information is being applied to the development of models for predicting tritium release for various blanket operating conditions.
Subanti, S.; Hakim, A. R.; Hakim, I. M.
2018-03-01
This study conducts a multiplier analysis of the mining sector in Indonesia. The mining sector comprises coal and metal ores; crude oil, natural gas, and geothermal; and other mining and quarrying. The multiplier analysis is based on input-output analysis and covers both income and output multipliers. The results show that (1) the Indonesian mining sector ranks 6th, contributing 6.81% of national total output; (2) by total gross value added, the sector contributes 12.13%, ranking 4th; and (3) the income multiplier is 0.7062 and the output multiplier is 1.2426.
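The output and income multipliers described here come from the Leontief inverse of the input-output table. A minimal sketch with a hypothetical three-sector coefficient matrix (illustrative numbers, not the Indonesian table):

```python
import numpy as np

# Illustrative 3-sector technical-coefficient matrix A (hypothetical values):
# A[i, j] = input from sector i required per unit of gross output of sector j.
A = np.array([
    [0.10, 0.05, 0.02],
    [0.20, 0.15, 0.10],
    [0.05, 0.10, 0.08],
])
# Household-income (wage) coefficients per unit of gross output.
v = np.array([0.30, 0.25, 0.40])

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1
output_mult = L.sum(axis=0)        # simple output multipliers: column sums
income_mult = v @ L                # income generated per unit of final demand
```

Each output multiplier exceeds 1 because a unit of final demand induces the unit itself plus all upstream intermediate production.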
Implementation gap between the theory and practice of biodiversity offset multipliers
DEFF Research Database (Denmark)
Bull, Joseph William; Lloyd, Samuel P.; Strange, Niels
2017-01-01
literature on multipliers. Then, we collate data on multipliers implemented in practice, representing the most complete such assessment to date. Finally, we explore remaining design gaps relating to social, ethical, and governance considerations. Multiplier values should theoretically be tens or hundreds… when considering, for example, ecological uncertainties. We propose even larger multipliers required to satisfy previously ignored considerations – including prospect theory, taboo trades, and power relationships. Conversely, our data analyses show that multipliers are smaller in practice, regularly… for the implementation gap we have identified. At the same time, there is a need to explore when and where the social, ethical, and governance requirements for no net loss (NNL) reviewed here can be met through approaches other than multipliers…
Directory of Open Access Journals (Sweden)
Nawar Shara
Full Text Available Kidney and cardiovascular disease are widespread among populations with high prevalence of diabetes, such as American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989-1991), 2 (1993-1995), and 3 (1998-1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing at random models and one non-missing at random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.
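The gap between listwise deletion and an adjacent-value method under non-random missingness can be sketched with simulated longitudinal data. The dropout rule, scales, and three-exam structure below are hypothetical stand-ins, not SHS measurements:

```python
import numpy as np

# Toy data in the spirit of three exams: a latent subject level drives all
# exams, and exam-3 values go missing for subjects who looked worse at
# exam 2 (not missing at random). All values are simulated.
rng = np.random.default_rng(7)
n = 500
base = rng.normal(60.0, 5.0, n)
exam2 = base - 2.0 + rng.normal(0.0, 2.0, n)
exam3 = base - 5.0 + rng.normal(0.0, 2.0, n)

missing = exam2 < 56.0                      # non-random dropout rule
exam3_obs = np.where(missing, np.nan, exam3)

# Listwise deletion: analyse complete cases only.
mean_deletion = np.nanmean(exam3_obs)

# Adjacent-value imputation: carry each subject's exam-2 value forward.
exam3_adjacent = np.where(missing, exam2, exam3)
mean_adjacent = exam3_adjacent.mean()
```

Because dropouts are the sicker subjects, deletion overstates the exam-3 mean; the adjacent-value fill is still biased (exam-2 values run higher than exam-3) but less so, which is the kind of contrast the five-method comparison formalizes.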
Lazar, Cosmin; Gatto, Laurent; Ferro, Myriam; Bruley, Christophe; Burger, Thomas
2016-04-01
Missing values are a genuine issue in label-free quantitative proteomics. Recent works have surveyed the different statistical methods to conduct imputation, compared them on real or simulated data sets, and recommended a list of missing value imputation methods for proteomics applications. Although insightful, these comparisons do not account for two important facts: (i) depending on the proteomics data set, the missingness mechanism may be of different natures and (ii) each imputation method is devoted to a specific type of missingness mechanism. As a result, we believe that the question at stake is not to find the most accurate imputation method in general but instead the most appropriate one. We describe a series of comparisons that support our views: for instance, we show that a supposedly "under-performing" method (i.e., giving baseline average results), if applied at the "appropriate" time in the data-processing pipeline (before or after peptide aggregation) on a data set with the "appropriate" nature of missing values, can outperform a blindly applied, supposedly "better-performing" method (i.e., the reference method from the state of the art). This leads us to formulate a few practical guidelines regarding the choice and the application of an imputation method in a proteomics context.
Missing data in clinical trials: control-based mean imputation and sensitivity analysis.
Mehrotra, Devan V; Liu, Fang; Permutt, Thomas
2017-09-01
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between-treatment difference in population means of a clinical endpoint that is free from the confounding effects of "rescue" medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post-rescue data. We caution that the commonly used mixed-effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for "dropouts" (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient-level imputation is not required. A supplemental "dropout = failure" analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between-treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations. Copyright © 2017 John Wiley & Sons, Ltd.
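The core of the proposed control-based mean imputation, replacing the missing drug-arm mean with the estimated placebo mean rather than imputing patient-level values, can be sketched on simulated endpoint data. Arm means, dropout rate, and distributions below are hypothetical:

```python
import numpy as np

# Hypothetical endpoint changes (e.g. HbA1c change from baseline):
# placebo arm, and the drug arm with some dropouts whose endpoint is missing.
rng = np.random.default_rng(1)
n = 200
placebo = rng.normal(-0.2, 1.0, n)
drug = rng.normal(-1.0, 1.0, n)
dropout = rng.random(n) < 0.25            # drug-arm dropouts

mu_placebo = placebo.mean()

# Control-based mean imputation: drug-arm dropouts are assigned the
# estimated placebo mean (no patient-level imputation needed).
drug_mean_cbi = np.where(dropout, mu_placebo, drug).mean()
effect_cbi = drug_mean_cbi - mu_placebo

# For contrast, an MAR-style analysis that effectively uses the
# completers' mean for everyone in the drug arm.
effect_mar = drug[~dropout].mean() - mu_placebo
```

The control-based estimate is the completers' effect scaled by the completion rate, so it is pulled toward zero relative to the MAR-style estimate, which is the conservatism the authors advocate; the tipping-point sensitivity analysis would replace `mu_placebo` with a sequence of progressively worse means.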
Ruel, Isabelle; Aljenedil, Sumayah; Sadri, Iman; de Varennes, Émilie; Hegele, Robert A; Couture, Patrick; Bergeron, Jean; Wanneh, Eric; Baass, Alexis; Dufour, Robert; Gaudet, Daniel; Brisson, Diane; Brunham, Liam R; Francis, Gordon A; Cermakova, Lubomira; Brophy, James M; Ryomoto, Arnold; Mancini, G B John; Genest, Jacques
2018-02-01
Familial hypercholesterolemia (FH) is the most frequent genetic disorder seen clinically and is characterized by increased LDL cholesterol (LDL-C) (>95th percentile), family history of increased LDL-C, premature atherosclerotic cardiovascular disease (ASCVD) in the patient or in first-degree relatives, presence of tendinous xanthomas or premature corneal arcus, or presence of a pathogenic mutation in the LDLR, PCSK9, or APOB genes. A diagnosis of FH has important clinical implications with respect to lifelong risk of ASCVD and requirement for intensive pharmacological therapy. The concentration of baseline LDL-C (untreated) is essential for the diagnosis of FH but is often not available because the individual is already on statin therapy. To validate a new algorithm to impute baseline LDL-C, we examined 1297 patients. The baseline LDL-C was compared with the imputed baseline obtained within 18 months of the initiation of therapy. We compared the percent reduction in LDL-C on treatment from baseline with the published percent reductions. After eliminating individuals with missing data, nonstandard doses of statins, or medications other than statins or ezetimibe, we provide data on 951 patients. The mean ± SE baseline LDL-C was 243.0 (2.2) mg/dL [6.28 (0.06) mmol/L], and the mean ± SE imputed baseline LDL-C was 244.2 (2.6) mg/dL [6.31 (0.07) mmol/L] (P = 0.48). There was no difference in response according to the patient's sex or in percent reduction between observed and expected for individual doses or types of statin or ezetimibe. We provide a validated estimation of baseline LDL-C for patients with FH that may help clinicians in making a diagnosis. © 2017 American Association for Clinical Chemistry.
Directory of Open Access Journals (Sweden)
Jiangxiu Zhou
2014-09-01
Full Text Available The purpose of this study is to demonstrate a way of dealing with missing data in clustered randomized trials by doing multiple imputation (MI) with the PAN package in R through SAS. The procedure for doing MI with PAN through SAS is demonstrated in detail in order for researchers to be able to use this procedure with their own data. An illustration of the technique with empirical data is also included. In this illustration the PAN results were compared with pairwise deletion and three types of MI: (1) Normal Model (NM-MI) ignoring the cluster structure; (2) NM-MI with dummy-coded cluster variables (fixed cluster structure); and (3) a hybrid NM-MI which imputes half the time ignoring the cluster structure, and the other half including the dummy-coded cluster variables. The empirical analysis showed that using PAN and the other strategies produced comparable parameter estimates. However, the dummy-coded MI overestimated the intraclass correlation, whereas MI ignoring the cluster structure and the hybrid MI underestimated the intraclass correlation. When compared with PAN, the p-value and standard error for the treatment effect were higher with dummy-coded MI, and lower with MI ignoring the cluster structure, the hybrid MI approach, and pairwise deletion. Previous studies have shown that NM-MI is not appropriate for handling missing data in clustered randomized trials. This approach, in addition to the pairwise deletion approach, leads to a biased intraclass correlation and faulty statistical conclusions. Imputation in clustered randomized trials should be performed with PAN. We have demonstrated an easy way of using PAN through SAS.
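The direction of the intraclass correlation (ICC) biases reported above can be illustrated on simulated two-level data with two crude single-imputation schemes, a bare-bones stand-in for the NM-MI variants, not the PAN procedure itself. Cluster counts, variances, and the missingness rate are hypothetical:

```python
import numpy as np

# Simulated two-level data: 20 clusters x 20 members, true ICC about 0.5.
rng = np.random.default_rng(11)
k, m = 20, 20
cluster_eff = rng.normal(0.0, 2.0, k)                     # between-cluster sd 2
y = cluster_eff[:, None] + rng.normal(0.0, 2.0, (k, m))   # within-cluster sd 2
obs = rng.random((k, m)) > 0.3                            # ~30% missing at random

def icc(data):
    """One-way ANOVA estimator of the intraclass correlation."""
    kk, mm = data.shape
    grand = data.mean()
    msb = mm * ((data.mean(axis=1) - grand) ** 2).sum() / (kk - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (kk * (mm - 1))
    return (msb - msw) / (msb + (mm - 1) * msw)

# Impute with the grand mean (ignores the cluster structure) ...
grand_imp = np.where(obs, y, y[obs].mean())
# ... versus each cluster's own observed mean (fixed cluster structure).
cl_means = np.array([row[o].mean() for row, o in zip(y, obs)])
cluster_imp = np.where(obs, y, cl_means[:, None])
```

Grand-mean filling shrinks cluster means together and deflates the ICC; cluster-mean filling removes within-cluster spread and inflates it, mirroring the under- and over-estimation the abstract reports for the cluster-ignorant and dummy-coded MI models.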
Private Debt Overhang and the Government Spending Multiplier: Evidence for the United States
Bernardini, Marco; Peersman, Gert
2015-01-01
Using state-dependent local projection methods and historical U.S. data, we find that government spending multipliers are considerably larger in periods of private debt overhang. In particular, we find significant crowding-out of personal consumption and investment in low-debt states, resulting in multipliers that are significantly below one. Conversely, in periods of private debt overhang, there is a strong crowding-in effect, while multipliers are much larger than one. In high-debt states, ...
Deflation Expectation, Financial System, and Decline in Money Multiplier (in Japanese)
IIDA Yasuyuki
2005-01-01
The money multiplier has been in a continuous downward trend since the bubble burst, and the trend has accelerated after 2000. It is said that monetary policy is difficult because the money multiplier has declined. To think about monetary policy for the future, we should consider the cause of the decline of the money multiplier. I want to verify two typical hypotheses, the "Deflation Expectation Hypothesis" and the "Financial System Hypothesis", for the decision of the money mult...
Using the Superpopulation Model for Imputations and Variance Computation in Survey Sampling
Directory of Open Access Journals (Sweden)
Petr Novák
2012-03-01
Full Text Available This study is aimed at variance computation techniques for estimates of population characteristics based on survey sampling and imputation. We use the superpopulation regression model, which means that the target variable values for each statistical unit are treated as random realizations of a linear regression model with weighted variance. We focus on regression models with one auxiliary variable and no intercept, which have many applications and a straightforward interpretation in business statistics. Furthermore, we deal with cases where the estimates are not independent and thus the covariance must be computed. We also consider chained regression models with auxiliary variables as random variables instead of constants.
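Under the stated model, one auxiliary variable, no intercept, and variance proportional to the auxiliary, weighted least squares reduces to the classical ratio estimator, which is the natural imputation engine. A sketch with simulated data (the slope beta = 2, sample size, and nonresponse rate are hypothetical):

```python
import numpy as np

# Superpopulation model: y_i = beta * x_i + e_i with Var(e_i) proportional
# to x_i (one auxiliary variable, no intercept). All data are simulated.
rng = np.random.default_rng(5)
n = 500
x = rng.uniform(1.0, 10.0, n)
y = 2.0 * x + rng.normal(0.0, np.sqrt(x))   # heteroscedastic errors
resp = rng.random(n) > 0.2                  # ~20% nonresponse on y

# WLS with weights 1/x_i for this model reduces to the ratio estimator.
beta_hat = y[resp].sum() / x[resp].sum()

# Impute nonrespondents from the fitted model and estimate the total.
y_imp = np.where(resp, y, beta_hat * x)
total_hat = y_imp.sum()
```

The variance of `total_hat` must account for both the sampling design and the model-based imputation, which is exactly the covariance bookkeeping the study develops for dependent estimates and chained models.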
Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things
Yan, Xiaobo; Xiong, Weiqing; Hu, Liang; Wang, Feng; Zhao, Kuo
2015-01-01
This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely across a variety of domains, such as transportation and logistics and healthcare. However, missing values are very common in the IoT for a variety of reasons, so the experimental data are incomplete. As a result, some work relating to IoT data cannot be carried out normally. And it leads to the red...
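One common form of Gaussian-model imputation is the conditional mean of the missing channel given the observed ones. The sketch below uses a single Gaussian component on simulated two-channel sensor data; a full GMM would apply the same conditional-mean formula per component, weighted by component responsibilities. Channel meanings and parameters are hypothetical.

```python
import numpy as np

# Toy IoT stream: two correlated sensor channels; channel 2 drops out
# for some records. All values are simulated.
rng = np.random.default_rng(2)
n = 400
s1 = rng.normal(20.0, 3.0, n)              # e.g. a temperature channel
s2 = 1.5 * s1 + rng.normal(0.0, 1.0, n)    # a correlated second channel
miss = rng.random(n) < 0.2                 # channel-2 dropouts

X = np.column_stack([s1, s2])
complete = X[~miss]
mu = complete.mean(axis=0)
cov = np.cov(complete, rowvar=False)

# Conditional mean under a bivariate Gaussian:
# E[s2 | s1] = mu2 + cov21 / cov11 * (s1 - mu1)
cond = mu[1] + cov[1, 0] / cov[0, 0] * (s1[miss] - mu[0])

s2_imp = X[:, 1].copy()
s2_imp[miss] = cond
```

Exploiting the cross-channel correlation makes the fill far more accurate than substituting the column mean, which is the motivation for mixture-model imputation in heterogeneous IoT data.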
DEFF Research Database (Denmark)
Goode, Ellen L; Fridley, Brooke L; Vierkant, Robert A
2009-01-01
Polymorphisms in genes critical to cell cycle control are outstanding candidates for association with ovarian cancer risk; numerous genes have been interrogated by multiple research groups using differing tagging single-nucleotide polymorphism (SNP) sets. To maximize information gleaned from…, and rs3212891; CDK2 rs2069391, rs2069414, and rs17528736; and CCNE1 rs3218036. These results exemplify the utility of imputation in candidate gene studies and lend evidence to a role of cell cycle genes in ovarian cancer etiology, and suggest a reduced set of SNPs to target in additional cases and controls…
Non-imputability, criminal dangerousness and curative safety measures: myths and realities
Directory of Open Access Journals (Sweden)
Frank Harbottle Quirós
2017-04-01
Full Text Available Curative safety measures are imposed in criminal proceedings on non-imputable persons, provided that a prognosis affirms their criminal dangerousness. Although this statement seems elementary, in judicial practice several myths persist around these legal institutes, in versions that may vary, to a greater or lesser extent, between the different countries of the world. In this context, the present article formulates ten myths based on the experience of Costa Rica and provides an explanation that seeks to weaken or knock them down, inviting the reader to reflect on them.
A suggested approach for imputation of missing dietary data for young children in daycare
Stevens, June; Ou, Fang-Shu; Truesdale, Kimberly P.; Zeng, Donglin; Vaughn, Amber E.; Pratt, Charlotte; Ward, Dianne S.
2015-01-01
Background: Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare, making accurate proxy reports by parents difficult. Objective: The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Design: Data were from children aged 2-5 years in the My Parenting...
Directory of Open Access Journals (Sweden)
Domingues M. O.
2013-12-01
Full Text Available We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution.
Hydrogen retention behavior of beryllides as advanced neutron multipliers
Directory of Open Access Journals (Sweden)
Y. Fujii
2016-12-01
Full Text Available Beryllium intermetallic compounds (beryllides are the most promising candidate materials for use as advanced neutron multipliers in future fusion reactors because of their low swelling and high stability at high temperatures. Recently, beryllium–titanium beryllide pebbles such as Be12Ti have been successfully fabricated using a novel granulation process. In this study, the fundamental aspects of the behavior of hydrogen isotopes in Be12Ti pebbles were investigated via thermal desorption spectroscopy and transmission electron microscopy. In addition, atomistic calculations using first principles electronic-structure methods were applied to determine the solution energy of hydrogen in Be12Ti. The results showed simpler and weaker hydrogen-trapping efficiency for Be12Ti than for pure Be.
Development of a thick gas electron multiplier for microdosimetry
International Nuclear Information System (INIS)
Orchard, G.M.; Chin, K.; Prestwich, W.V.; Waker, A.J.; Byun, S.H.
2011-01-01
A new tissue-equivalent proportional counter based on a thick gas electron multiplier (THGEM) was developed and tested for microdosimetry. A systematic test was conducted at the McMaster Accelerator Laboratory to investigate the overall performance of the prototype detector. A mixed neutron-gamma-ray radiation field was generated using the 7Li(p,n) reaction. The detector was operated at low voltage initially to test the stability and then the relative multiplication gain was measured as a function of the operating high voltage. A drift potential of 100 V and a THGEM bias of 727 V generated a multiplication gain sufficient for the detection of both neutron and gamma-ray radiation. A consistent microdosimetric pattern was observed between the THGEM detector and a standard TEPC for microdosimetry.
Hardware matrix multiplier/accumulator for lattice gauge theory calculations
International Nuclear Information System (INIS)
Christ, N.H.; Terrano, A.E.
1984-01-01
The design and operating characteristics of a special-purpose matrix multiplier/accumulator are described. The device is connected through a standard interface to a host PDP11 computer. It provides a set of high-speed, matrix-oriented instructions which can be called from a program running on the host. The resulting operations accelerate the complex matrix arithmetic required for a class of Monte Carlo calculations currently of interest in high energy particle physics. A working version of the device is presently being used to carry out a pure SU(3) lattice gauge theory calculation using a PDP11/23 with a performance twice that obtainable on a VAX11/780. (orig.)
Research on nonlinearity effect of secondary electron multiplier
International Nuclear Information System (INIS)
Wei Xingjian; Liao Junsheng; Deng Dachao; Yu Chunrong; Yuan Li
2007-01-01
The nonlinearity of the secondary electron multiplier (SEM) of a thermal ionization mass spectrometer was investigated using the UTB-500 uranium isotope reference material and a multi-collection technique. The results show that the nonlinearity effect of the SEM exists over the whole ion counting range, and there is an extreme point of the nonlinearity when the ion counting rate is about 20000 cps. The deviation between the measured value at the extreme point and the reference value of the reference sample can be up to 3%, and the nonlinearity obeys a logarithmic-linear law on both sides of the extreme point. A mathematical model for nonlinearity calibration is put forward. Using this model, the nonlinearity of the SEM of a TIMS can be calibrated. (authors)
Charge-exchange collisions of multiply charged ions with atoms
International Nuclear Information System (INIS)
Grozdanov, T.P.; Janev, R.K.
1978-01-01
The problem of electron transfer between neutral atoms and multiply charged ions is considered at low and medium energies. It is assumed that a large number of final states are available for the electron transition so that the electron-capture process is treated as a tunnel effect caused by the strong attractive Coulomb field of the multicharged ions. The electron transition probability is obtained in a closed form using the modified-comparison-equation method to solve the Schroedinger equation. An approximately linear dependence of the one-electron transfer cross section on the charge of multicharged ion is found. Cross-section calculations of a number of charge-exchange reactions are performed
Multiplier ideal sheaves and analytic methods in algebraic geometry
International Nuclear Information System (INIS)
Demailly, J.-P.
2001-01-01
Our main purpose here is to describe a few analytic tools which are useful to study questions such as linear series and vanishing theorems for algebraic vector bundles. One of the early successes of analytic methods in this context is Kodaira's use of the Bochner technique in relation with the theory of harmonic forms, during the decade 1950-60. The idea is to represent cohomology classes by harmonic forms and to prove vanishing theorems by means of suitable a priori curvature estimates. We pursue the study of L2 estimates, in relation with the Nullstellensatz and with the extension problem. We show how subadditivity can be used to derive an approximation theorem for (almost) plurisubharmonic functions: any such function can be approximated by a sequence of (almost) plurisubharmonic functions which are smooth outside an analytic set, and which define the same multiplier ideal sheaves. From this, we derive a generalized version of the hard Lefschetz theorem for cohomology with values in a pseudo-effective line bundle; namely, the Lefschetz map is surjective when the cohomology groups are twisted by the relevant multiplier ideal sheaves. These notes are essentially written with the idea of serving as an analytic toolbox for algebraic geometers. Although efficient algebraic techniques exist, our feeling is that the analytic techniques are very flexible and offer a large variety of guidelines for more algebraic questions (including applications to number theory which are not discussed here). We made a special effort to use as few prerequisites as possible and to be as self-contained as possible; hence the rather long preliminary sections dealing with basic facts of complex differential geometry.
Multiply charged ions from solid substances with the mVINIS Ion Source
International Nuclear Information System (INIS)
Draganić, I.; Nedeljković, T.; Jovović, J.; Šiljegović, M.; Dobrosavljević, A.
2007-01-01
We have used the well known metal-ions-from-volatile-compounds (MIVOC) method at the mVINIS Ion Source to produce multiply charged ion beams from solid substances. Based on this method, very intense and stable multiply charged ion beams of several solid substances with high melting points were extracted. The ion yields and the spectra of multiply charged ion beams obtained from solid materials such as Fe and Hf will be presented. We have utilized the multiply charged ion beams from solid substances to irradiate polymers, fullerenes, and glassy carbon at the low-energy channel for modification of materials.
Gottlieb, Assaf; Daneshjou, Roxana; DeGorter, Marianne; Bourgeois, Stephane; Svensson, Peter J; Wadelius, Mia; Deloukas, Panos; Montgomery, Stephen B; Altman, Russ B
2017-11-24
Genome-wide association studies are useful for discovering genotype-phenotype associations but are limited because they require large cohorts to identify a signal, which can be population-specific. Mapping genetic variation to genes improves power and allows the effects of both protein-coding variation as well as variation in expression to be combined into "gene level" effects. Previous work has shown that warfarin dose can be predicted using information from genetic variation that affects protein-coding regions. Here, we introduce a method that improves dose prediction by integrating tissue-specific gene expression. In particular, we use drug pathways and expression quantitative trait loci knowledge to impute gene expression-on the assumption that differential expression of key pathway genes may impact dose requirement. We focus on 116 genes from the pharmacokinetic and pharmacodynamic pathways of warfarin within training and validation sets comprising both European and African-descent individuals. We build gene-tissue signatures associated with warfarin dose in a cohort-specific manner and identify a signature of 11 gene-tissue pairs that significantly augments the International Warfarin Pharmacogenetics Consortium dosage-prediction algorithm in both populations. Our results demonstrate that imputed expression can improve dose prediction and bridge population-specific compositions. MATLAB code is available at https://github.com/assafgo/warfarin-cohort.
Multiple imputation to account for measurement error in marginal structural models
Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.
2015-01-01
Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338
Combining item response theory with multiple imputation to equate health assessment questionnaires.
Gu, Chenyang; Gutman, Roee
2017-09-01
The assessment of patients' functional status across the continuum of care requires a common patient assessment tool. However, assessment tools that are used in various health care settings differ and cannot be easily contrasted. For example, the Functional Independence Measure (FIM) is used to evaluate the functional status of patients who stay in inpatient rehabilitation facilities, the Minimum Data Set (MDS) is collected for all patients who stay in skilled nursing facilities, and the Outcome and Assessment Information Set (OASIS) is collected if they choose home health care provided by home health agencies. All three instruments or questionnaires include functional status items, but the specific items, rating scales, and instructions for scoring different activities vary between the different settings. We consider equating different health assessment questionnaires as a missing data problem, and propose a variant of predictive mean matching method that relies on Item Response Theory (IRT) models to impute unmeasured item responses. Using real data sets, we simulated missing measurements and compared our proposed approach to existing methods for missing data imputation. We show that, for all of the estimands considered, and in most of the experimental conditions that were examined, the proposed approach provides valid inferences, and generally has better coverages, relatively smaller biases, and shorter interval estimates. The proposed method is further illustrated using a real data set. © 2016, The International Biometric Society.
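A bare-bones version of the predictive mean matching step, without the IRT model or the parameter draws the proposed variant uses, might look like this on simulated data (the linear model and sample sizes are illustrative assumptions):

```python
import numpy as np

# Minimal predictive mean matching (PMM): fit a model on observed responses,
# then fill each missing response with the *observed* value of the donor
# whose predicted mean is closest. All data are simulated.
rng = np.random.default_rng(9)
n = 300
x = rng.normal(0.0, 1.0, n)           # e.g. a common functional-status score
y = 2.0 * x + rng.normal(0.0, 0.5, n) # the unmeasured instrument's item
miss = rng.random(n) < 0.25           # responses missing for one setting

slope, intercept = np.polyfit(x[~miss], y[~miss], 1)
pred = slope * x + intercept          # predicted means for everyone

donor_pred, donor_y = pred[~miss], y[~miss]
y_imp = y.copy()
for i in np.where(miss)[0]:
    j = np.argmin(np.abs(donor_pred - pred[i]))   # nearest predicted mean
    y_imp[i] = donor_y[j]                         # borrow an observed value
```

Because every imputed value is an actually observed response, PMM respects the discrete rating scales of instruments like the FIM, MDS, and OASIS, which is why it is a natural base for the IRT-driven variant described here.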
Multiple Imputation to Account for Measurement Error in Marginal Structural Models.
Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J
2015-09-01
Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR]: 1.2 [95% confidence interval [CI] = 0.6, 2.3]). The HR for current smoking and therapy [0.4 (95% CI = 0.2, 0.7)] was similar to the HR for no smoking and therapy (0.4; 95% CI = 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
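Whatever the imputation model for the misclassified exposure, results from the m completed data sets are pooled with Rubin's rules; a minimal sketch of that combining step (the estimates and variances below are made-up numbers, not from the study):

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Combine point estimates and variances from m imputed data sets
    using Rubin's rules: pooled estimate, total variance, and degrees
    of freedom."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()                      # pooled point estimate
    w = u.mean()                         # within-imputation variance
    b = q.var(ddof=1)                    # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b          # total variance
    df = (m - 1) * (1.0 + w / ((1.0 + 1.0 / m) * b)) ** 2
    return qbar, t, df

# Hypothetical log-hazard-ratio estimates from m = 5 imputations.
qbar, t, df = rubin_pool([0.18, 0.22, 0.20, 0.19, 0.21], [0.01] * 5)
```

The between-imputation term is what propagates the uncertainty about the true smoking status into the final confidence interval, rather than treating the imputed exposure as known.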
Directory of Open Access Journals (Sweden)
Assaf Gottlieb
2017-11-01
Full Text Available Abstract Background Genome-wide association studies are useful for discovering genotype–phenotype associations but are limited because they require large cohorts to identify a signal, which can be population-specific. Mapping genetic variation to genes improves power and allows the effects of both protein-coding variation as well as variation in expression to be combined into “gene level” effects. Methods Previous work has shown that warfarin dose can be predicted using information from genetic variation that affects protein-coding regions. Here, we introduce a method that improves dose prediction by integrating tissue-specific gene expression. In particular, we use drug pathways and expression quantitative trait loci knowledge to impute gene expression—on the assumption that differential expression of key pathway genes may impact dose requirement. We focus on 116 genes from the pharmacokinetic and pharmacodynamic pathways of warfarin within training and validation sets comprising both European and African-descent individuals. Results We build gene-tissue signatures associated with warfarin dose in a cohort-specific manner and identify a signature of 11 gene-tissue pairs that significantly augments the International Warfarin Pharmacogenetics Consortium dosage-prediction algorithm in both populations. Conclusions Our results demonstrate that imputed expression can improve dose prediction and bridge population-specific compositions. MATLAB code is available at https://github.com/assafgo/warfarin-cohort
FCMPSO: An Imputation for Missing Data Features in Heart Disease Classification
Salleh, Mohd Najib Mohd; Ashikin Samat, Nurul
2017-08-01
The application of data mining and machine learning to uncover hidden knowledge in clinical research is becoming highly influential in medicine. Heart disease is a leading cause of death worldwide, and early prevention through efficient methods can help to reduce mortality. Medical data may contain many uncertainties, as they are fuzzy and vague in nature. Imprecise feature data, such as absent or missing values, can degrade the quality of classification results, even though the remaining complete features still carry useful information. Therefore, an imputation approach based on Fuzzy C-Means and Particle Swarm Optimization (FCMPSO) is developed in the preprocessing stage to fill in the missing values. The completed dataset is then used to train a classification algorithm, Decision Tree. The experiment uses the Heart Disease dataset, and performance is analysed using accuracy, precision, and ROC values. Results show that the performance of the Decision Tree improves after applying FCMPSO for imputation.
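The clustering-based imputation idea can be sketched as follows. This illustrates only the fuzzy c-means half of FCMPSO: missing cells are filled with cluster centres weighted by fuzzy membership, iterating between clustering and re-imputation. The PSO step that tunes the clustering is omitted, and the function name and defaults are ours.

```python
import numpy as np

def fcm_impute(X, n_clusters=2, m=2.0, n_iter=50, seed=None):
    """Fill NaN entries with fuzzy c-means cluster centres weighted by
    cluster membership -- a sketch of the FCM half of FCMPSO only."""
    rng = np.random.default_rng(seed)
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    # Initialise missing cells with column means.
    col_means = np.nanmean(X, axis=0)
    X[missing] = col_means[np.where(missing)[1]]
    # Random fuzzy membership matrix, rows summing to one.
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        # Membership-weighted cluster centres.
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
        # Standard FCM membership update.
        U = dist ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        # Re-impute missing cells from membership-weighted centres.
        X[missing] = (U @ centres)[missing]
    return X
```

Observed entries are never modified; only the NaN cells are iteratively refined as the cluster structure sharpens.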
Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Bobko, Philip
2018-06-01
Meta-analysis has become a well-accepted method for synthesizing empirical research about a given phenomenon. Many meta-analyses focus on synthesizing correlations across primary studies, but some primary studies do not report correlations. Peterson and Brown (2005) suggested that researchers could use standardized regression weights (i.e., beta coefficients) to impute missing correlations. Indeed, their beta estimation procedures (BEPs) have been used in meta-analyses in a wide variety of fields. In this study, the authors evaluated the accuracy of BEPs in meta-analysis. We first examined how use of BEPs might affect results from a published meta-analysis. We then developed a series of Monte Carlo simulations that systematically compared the use of existing correlations (that were not missing) to data sets that incorporated BEPs (that impute missing correlations from corresponding beta coefficients). These simulations estimated ρ̄ (mean population correlation) and SDρ (true standard deviation) across a variety of meta-analytic conditions. Results from both the existing meta-analysis and the Monte Carlo simulations revealed that BEPs were associated with potentially large biases when estimating ρ̄ and even larger biases when estimating SDρ. Using only existing correlations often substantially outperformed use of BEPs and virtually never performed worse than BEPs. Overall, the authors urge a return to the standard practice of using only existing correlations in meta-analysis.
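For reference, the beta-to-correlation imputation being evaluated is commonly stated as r ≈ β + .05λ, with λ = 1 for nonnegative β and 0 otherwise. The sketch below encodes that commonly cited form (we hedge that this is our reading of the published approximation, not code from the study):

```python
def beta_to_r(beta):
    """Peterson and Brown's (2005) approximation, as commonly stated,
    for imputing a missing correlation from a standardized regression
    weight: r ~ beta + .05 * lambda, lambda = 1 if beta >= 0 else 0.
    The abstract above reports this imputation can bias meta-analyses."""
    lam = 1.0 if beta >= 0 else 0.0
    return beta + 0.05 * lam
```

The simulations summarized above suggest dropping such imputed values and meta-analyzing only the correlations that primary studies actually report.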
Directory of Open Access Journals (Sweden)
Lotz Meredith J
2008-01-01
Full Text Available Abstract Background Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Conclusion Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity.
Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C
2008-01-10
Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity.
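An entropy-style complexity measure of the kind described above can be sketched from the singular-value spectrum of the expression matrix. This is our illustration in the spirit of the EBS scheme, not the authors' exact formula: a matrix that maps easily onto a low-dimensional subspace concentrates its spectrum and scores near 0, while a noisy high-complexity matrix scores near 1.

```python
import numpy as np

def expression_complexity(X):
    """Normalised entropy of the squared singular-value spectrum of a
    column-centred matrix: ~0 for low-rank (simple) data, ~1 for data
    with no dominant low-dimensional structure. A sketch in the spirit
    of the entropy measure above, not the paper's exact definition."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    p = s**2 / np.sum(s**2)          # spectrum as a probability vector
    p = p[p > 1e-15]                 # drop numerically zero components
    return float(-np.sum(p * np.log(p)) / np.log(len(s)))
```

A rank-one expression matrix therefore scores much lower than pure noise of the same shape, matching the intuition that global low-rank imputation methods should do well on the former.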
Directory of Open Access Journals (Sweden)
Peter K Joshi
Full Text Available The analysis of less common variants in genome-wide association studies promises to elucidate complex trait genetics but is hampered by low power to reliably detect association. We show that addition of population-specific exome sequence data to global reference data allows more accurate imputation, particularly of less common SNPs (minor allele frequency 1–10%), in two very different European populations. The imputation improvement corresponds to an increase in effective sample size of 28–38% for SNPs with a minor allele frequency in the range 1–3%.
Golino, Hudson F.; Gomes, Cristiano M. A.
2016-01-01
This paper presents a non-parametric imputation technique, named random forest, from the machine learning field. The random forest procedure has two main tuning parameters: the number of trees grown in the prediction and the number of predictors used. Fifty experimental conditions were created in the imputation procedure, with different…
Energy Technology Data Exchange (ETDEWEB)
Fiedler, H. [UNEP Chemicals, Chatelaine (Switzerland)
2004-09-15
The Stockholm Convention on Persistent Organic Pollutants (POPs) entered into force on 17 May 2004 with 50 Parties. In May 2004, 59 countries had ratified or acceded to the Convention. The objective of the Convention is "to protect human health and the environment from persistent organic pollutants". For intentionally produced POPs, e.g., pesticides and industrial chemicals such as hexachlorobenzene and polychlorinated biphenyls, this will be achieved by stopping production and use. For unintentionally generated POPs, such as polychlorinated dibenzo-p-dioxins (PCDD) and polychlorinated dibenzofurans (PCDF), measures have to be taken to "reduce the total releases derived from anthropogenic sources"; the final goal is ultimate elimination, where feasible. Under the Convention, Parties have to establish and maintain release inventories to prove the continuous release reduction. Since many countries do not have the technical and financial capacity to measure all releases from all potential PCDD/PCDF sources, UNEP Chemicals has developed the "Standardized Toolkit for the Identification and Quantification of Dioxin and Furan Releases" ("Toolkit" for short), a methodology to estimate annual releases from a number of sources. With this methodology, annual releases can be estimated by multiplying process-specific default emission factors provided in the Toolkit with national activity data. At the seventh session of the Intergovernmental Negotiating Committee, the Toolkit was recommended for use by countries when reporting national release data to the Conference of the Parties. The Toolkit is especially used by developing countries and countries with economies in transition where no measured data are available. Results from Uruguay, Thailand, Jordan, Philippines, and Brunei Darussalam have been published.
Jacobi's last multiplier and symmetries for the Kepler problem plus a lineal story
International Nuclear Information System (INIS)
Nucci, M C; Leach, P G L
2004-01-01
We calculate the first integrals of the Kepler problem by the method of Jacobi's last multiplier using the symmetries of the equations of motion. We also provide another example showing that Jacobi's last multiplier together with Lie symmetries unveils many first integrals, neither necessarily algebraic nor rational, whereas other published methods may yield just one.
The long-run relationship between the Japanese credit and money multipliers
Mototsugu Fukushige
2013-01-01
The standard argument is that while money creation and credit creation have different channels, they provide the same theoretical size of multipliers. However, there is usually some difference in practice. Consequently, in this paper we investigate the long-run relationship between the credit and money multipliers in Japan.
A microchannel plate X-ray multiplier with rising-time less than 170 ps
International Nuclear Information System (INIS)
Zhao Shicheng; Ouyang Bin
1987-01-01
The time response of a microchannel plate X-ray multiplier has been improved considerably by using a coupling construction of coaxial tapers. The experimental calibration results with a laser plasma X-ray source show that the rising-time of the multiplier is less than 170 ps.
International Nuclear Information System (INIS)
Hahn, S.F.; Burch, J.L.
1980-01-01
A series of data on high count rate channel electron multipliers revealed an initial drop and subsequent recovery of gains in exponential fashion. The FWHM of the pulse height distribution at the initial stage of testing can be used as a good criterion for the selection of operating bias voltage of the channel electron multiplier
Investigation of the Decelerating Field of an Electron Multiplier under Negative Ion Impact
DEFF Research Database (Denmark)
Larsen, Elfinn; Kjeldgaard, K.
1973-01-01
The effect of the decelerating field of an electron multiplier towards negative ions was investigated under standard mass spectrometric conditions. Diminishing of this decelerating field by changing of the potential of the electron multiplier increased the overall sensitivity to negative ions...
Kabisch, Maria; Hamann, Ute; Lorenzo Bermejo, Justo
2017-10-17
Genotypes not directly measured in genetic studies are often imputed to improve statistical power and to increase mapping resolution. The accuracy of standard imputation techniques strongly depends on the similarity of linkage disequilibrium (LD) patterns in the study and reference populations. Here we develop a novel approach for genotype imputation in low-recombination regions that relies on the coalescent and makes it possible to explicitly account for population demographic factors. To test the new method, study and reference haplotypes were simulated and gene trees were inferred under the basic coalescent and also considering population growth and structure. The reference haplotypes that first coalesced with study haplotypes were used as templates for genotype imputation. Computer simulations were complemented with the analysis of real data. Genotype concordance rates were used to compare the accuracies of coalescent-based and standard (IMPUTE2) imputation. Simulations revealed that, in LD-blocks, imputation accuracy relying on the basic coalescent was higher and less variable than with IMPUTE2. Explicit consideration of population growth and structure, even if present, did not practically improve accuracy. The advantage of coalescent-based over standard imputation increased with the minor allele frequency and decreased with population stratification. Results based on real data indicated that, even in low-recombination regions, further research is needed to incorporate recombination in coalescence inference, in particular for studies with genetically diverse and admixed individuals. To exploit the full potential of coalescent-based methods for the imputation of missing genotypes in genetic studies, further methodological research is needed to reduce computer time, to take into account recombination, and to implement these methods in user-friendly computer programs. Here we provide reproducible code which takes advantage of publicly available software to facilitate
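The accuracy measure used in the comparison above is straightforward to compute; a minimal sketch (assuming genotypes coded as 0/1/2 minor-allele counts):

```python
import numpy as np

def concordance_rate(imputed, truth):
    """Genotype concordance: the proportion of imputed genotypes that
    match the true (masked) genotypes, the accuracy measure used to
    compare coalescent-based and IMPUTE2 imputation above."""
    imputed, truth = np.asarray(imputed), np.asarray(truth)
    return float(np.mean(imputed == truth))
```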
LENUS (Irish Health Repository)
Hardouin, Jean-Benoit
2011-07-14
Abstract Background Nowadays, more and more clinical scales consisting of responses given by patients to a set of items (Patient Reported Outcomes - PRO) are validated with models based on Item Response Theory, and more specifically with a Rasch model. In the validation sample, missing data are frequent. The aim of this paper is to compare sixteen methods for handling missing data (mainly based on simple imputation) in the context of psychometric validation of PRO by a Rasch model. The main indexes used for validation by a Rasch model are compared. Methods A simulation study was performed covering several cases, notably the possibility for the missing values to be informative or not, and the rate of missing data. Results Several imputation methods produce bias on psychometric indexes (generally, the imputation methods artificially improve the psychometric qualities of the scale). In particular, this is the case with the method based on the Personal Mean Score (PMS), which is the most commonly used imputation method in practice. Conclusions Several imputation methods should be avoided, in particular PMS imputation. From a general point of view, it is important to use an imputation method that considers both the ability of the patient (measured, for example, by his/her score) and the difficulty of the item (measured, for example, by its rate of favourable responses). Another recommendation is to always include a random process in the imputation method, because such a process reduces the bias. Last, analysis without imputation of the missing data (available-case analysis) is an interesting alternative to simple imputation in this context.
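The two recommendations in the conclusions (use both person and item information, and include a random process) can be sketched in one deliberately simple imputation rule. This is our illustration, not one of the sixteen methods compared in the paper:

```python
import numpy as np

def impute_item_responses(R, seed=None):
    """Impute missing binary item responses (NaN) using both person
    ability (row proportion correct) and item easiness (column
    proportion correct), combined with a random Bernoulli draw --
    a sketch of the paper's two recommendations, not its methods."""
    rng = np.random.default_rng(seed)
    R = np.array(R, dtype=float)
    missing = np.isnan(R)
    ability = np.nanmean(R, axis=1)   # person-level proportion correct
    easiness = np.nanmean(R, axis=0)  # item-level proportion correct
    for i, j in zip(*np.where(missing)):
        # Blend person and item information, then draw at random so the
        # imputation does not artificially sharpen the scale.
        p = 0.5 * (ability[i] + easiness[j])
        R[i, j] = float(rng.random() < p)
    return R
```

Contrast this with pure PMS imputation, which uses only the row mean and no random draw, the combination the paper identifies as most biased.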
Area-efficient radix 4² 64-point pipeline FFT architecture using modified CSD multiplier
International Nuclear Information System (INIS)
Siddiq, F.; Muhammad, T.; Iqbal, M.
2014-01-01
A modified Fast Fourier Transform (FFT) based radix 4² algorithm for Orthogonal Frequency Division Multiplexing (OFDM) systems is presented. When compared with similar schemes such as the Canonic Signed Digit (CSD) constant multiplier, the modified CSD multiplier provides an improvement of more than 36% in terms of multiplicative complexity. In terms of area, the number of full adders is reduced by 32% and the number of half adders by 42%. The modified CSD multiplier scheme is implemented in Xilinx ISE 10.1 using a Spartan-III XC3S1000 FPGA as the target device. The synthesis results of the modified CSD multiplier on Xilinx show an efficient twiddle-factor ROM design and effective area reduction in comparison to the CSD constant multiplier. (author)
Temperature Insensitive Current-Mode Four Quadrant Multiplier Using Single CFCTA
Directory of Open Access Journals (Sweden)
Tuntrakool Sunti
2017-01-01
Full Text Available A four-quadrant multiplier of two current input signals using an active building block, namely the current follower cascaded transconductance amplifier (CFCTA), is presented in this paper. The proposed multiplier consists of only a single CFCTA without the use of any passive element. The presented circuit has low impedance at the current input node and high impedance at the current output node, which is convenient for cascading in current-mode circuits without the need for current buffer circuits. The output current can multiply two input currents with temperature insensitivity. Moreover, the magnitude of the output current can be controlled electronically via a DC bias current. With only a single active building block, the presented multiplier is suitable for integrated circuit implementation for analog signal processing. Simulation results from a PSpice program are presented in order to demonstrate the multiplier proposed here.
Time-division-multiplex control scheme for voltage multiplier rectifiers
Directory of Open Access Journals (Sweden)
Bin-Han Liu
2017-03-01
Full Text Available A voltage multiplier rectifier with a novel time-division-multiplexing (TDM) control scheme for high step-up converters is proposed in this study. In the proposed TDM control scheme, two full-wave voltage doubler rectifiers can be combined to realise a voltage quadrupler rectifier. The proposed voltage quadrupler rectifier can reduce transformer turn ratio and transformer size for high step-up converters and also reduce voltage stress for the output capacitors and rectifier diodes. An N-times voltage rectifier can be straightforwardly produced by extending the concepts from the proposed TDM control scheme. A phase-shift full-bridge (PSFB) converter is adopted in the primary side of the proposed voltage quadrupler rectifier to construct a PSFB quadrupler converter. Experimental results for the PSFB quadrupler converter demonstrate the performance of the proposed TDM control scheme for voltage quadrupler rectifiers. An 8-times voltage rectifier is simulated to determine the validity of extending the proposed TDM control scheme to realise an N-times voltage rectifier. Experimental and simulation results show that the proposed TDM control scheme has great potential to be used in high step-up converters.
A Novel and Efficient Hardware Implementation of Scalar Point Multiplier
Directory of Open Access Journals (Sweden)
M. Masoumi
2012-12-01
Full Text Available A new and highly efficient architecture for elliptic curve scalar point multiplication is presented. To achieve the maximum architectural and timing improvements we have reorganized and reordered the critical path of the Lopez-Dahab scalar point multiplication architecture such that logic structures are implemented in parallel and operations in the critical path are diverted to noncritical paths. The results we obtained show that with G = 55 our proposed design is able to compute GF(2^163) elliptic curve scalar multiplication in 9.6 μs with the maximum achievable frequency of 250 MHz on Xilinx Virtex-4 (XC4VLX200), where G is the digit size of the underlying digit-serial finite field multiplier. Another implementation variant for less resource consumption is also proposed. With G = 33, the design performs the same operation in 11.6 μs at 263 MHz on the same platform. The results of synthesis show that in the first implementation 17929 slices or 20% of the chip area is occupied, which makes it suitable for speed-critical cryptographic applications, while in the second implementation 14203 slices or 16% of the chip area is utilized, which makes it suitable for applications that may require a speed-area trade-off. The new design shows superior performance compared to the previously reported designs.
DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro
2016-10-01
This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
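The update structure that DADMM and DQM refine can be seen in the classical consensus ADMM on a toy problem. The sketch below (our illustration, not the paper's algorithm) minimizes a sum of scalar quadratics, whose solution is the mean of the local targets; the z-update plays the role of neighbour communication.

```python
import numpy as np

def consensus_admm(a, rho=1.0, n_iter=200):
    """Consensus ADMM sketch on: minimize sum_i 0.5 * (x - a_i)^2.
    Each node i keeps a private x_i and dual u_i and agrees on a shared
    variable z; the minimizer of the global objective is mean(a)."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)
    u = np.zeros_like(a)
    z = 0.0
    for _ in range(n_iter):
        # x-update: closed form for the quadratic local objectives
        # (DQM replaces a costly x-update with a quadratic approximation).
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: averaging stands in for neighbour communication.
        z = np.mean(x + u)
        # Dual ascent on the consensus constraint x_i = z.
        u = u + x - z
    return z
```

For these quadratic objectives the iterates converge linearly, mirroring the linear rate the abstract guarantees for DADMM and DQM.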
The objective of this study is to investigate single nucleotide polymorphism (SNP) genotypes imputation of Hereford cattle. Purebred Herefords were from two sources, Line 1 Hereford (N=240) and representatives of Industry Herefords (N=311). Using different reference panels of 62 and 494 males with 1...
2010-04-01
... 21 Food and Drugs 9 2010-04-01 2010-04-01 false May the Office of National Drug Control Policy impute conduct of one person to another? 1404.630 Section 1404.630 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) General Principles Relating to Suspension and Debarment Actions § 1404.630...
Minica, C.C.; Dolan, C.V.; Willemsen, G.; Vink, J.M.; Boomsma, D.I.
2013-01-01
When phenotypic, but no genotypic data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost the statistical power when included in such studies. Here, using simulations, we compared the performance of
Kenneth B. Pierce; Janet L. Ohmann; Michael C. Wimberly; Matthew J. Gregory; Jeremy S. Fried
2009-01-01
Land managers need consistent information about the geographic distribution of wildland fuels and forest structure over large areas to evaluate fire risk and plan fuel treatments. We compared spatial predictions for 12 fuel and forest structure variables across three regions in the western United States using gradient nearest neighbor (GNN) imputation, linear models (...
Directory of Open Access Journals (Sweden)
Hardt Jochen
2012-12-01
Full Text Available Abstract Background Multiple imputation is becoming increasingly popular. Theoretical considerations as well as simulation studies have shown that the inclusion of auxiliary variables is generally of benefit. Methods A simulation study of a linear regression with a response Y and two predictors X1 and X2 was performed on data with n = 50, 100 and 200 using complete cases or multiple imputation with 0, 10, 20, 40 and 80 auxiliary variables. Mechanisms of missingness were either 100% MCAR or 50% MAR + 50% MCAR. Auxiliary variables had low (r = .10) vs. moderate (r = .50) correlations with the X's and Y. Results The inclusion of auxiliary variables can improve a multiple imputation model. However, inclusion of too many variables leads to downward bias of regression coefficients and decreases precision. When the correlations are low, inclusion of auxiliary variables is not useful. Conclusion More research on auxiliary variables in multiple imputation should be performed. A preliminary rule of thumb could be that the ratio of variables to cases with complete data should not go below 1:3.
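The preliminary 1:3 rule of thumb above translates into a one-line budget check. The helper below is our illustrative encoding of that rule (name and interface are ours):

```python
def max_auxiliary_variables(n_complete_cases, n_analysis_vars, ratio=3):
    """Budget of auxiliary variables under the abstract's preliminary
    rule of thumb: the ratio of variables to cases with complete data
    should not go below 1:ratio. Illustrative helper, not from the paper."""
    return max(0, n_complete_cases // ratio - n_analysis_vars)
```

For example, with 150 complete cases and 3 analysis variables, the rule would cap the imputation model at roughly 47 auxiliary variables.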
Improved imputation of low-frequency and rare variants using the UK10K haplotype reference panel
DEFF Research Database (Denmark)
Huang, Jie; Howie, Bryan; Mccarthy, Shane
2015-01-01
Imputing genotypes from reference panels created by whole-genome sequencing (WGS) provides a cost-effective strategy for augmenting the single-nucleotide polymorphism (SNP) content of genome-wide arrays. The UK10K Cohorts project has generated a data set of 3,781 whole genomes sequenced at low de...
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false May the Federal Mediation and Conciliation Service impute...) FEDERAL MEDIATION AND CONCILIATION SERVICE GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) General Principles Relating to Suspension and Debarment Actions § 1471.630 May the Federal Mediation and...
Rosner, Bernard; Colditz, Graham A.
2011-01-01
Purpose Age at menopause, a major marker in the reproductive life, may bias results for evaluation of breast cancer risk after menopause. Methods We follow 38,948 premenopausal women in 1980 and identify 2,586 who reported hysterectomy without bilateral oophorectomy, and 31,626 who reported natural menopause during 22 years of follow-up. We evaluate risk factors for natural menopause, impute age at natural menopause for women reporting hysterectomy without bilateral oophorectomy and estimate the hazard of reaching natural menopause in the next 2 years. We apply this imputed age at menopause to both increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased sample size substantially and although some risk factors after menopause were weaker in the expanded model (height, and alcohol use), use of hormone therapy is less biased. Conclusions Imputing age at menopause increases sample size, broadens generalizability making it applicable to women with hysterectomy, and reduces bias. PMID:21441037
Improved imputation of low-frequency and rare variants using the UK10K haplotype reference panel
J. Huang (Jie); B. Howie (Bryan); S. McCarthy (Shane); Y. Memari (Yasin); K. Walter (Klaudia); J.L. Min (Josine L.); P. Danecek (Petr); G. Malerba (Giovanni); E. Trabetti (Elisabetta); H.-F. Zheng (Hou-Feng); G. Gambaro (Giovanni); J.B. Richards (Brent); R. Durbin (Richard); N.J. Timpson (Nicholas); J. Marchini (Jonathan); N. Soranzo (Nicole); S.H. Al Turki (Saeed); A. Amuzu (Antoinette); C. Anderson (Carl); R. Anney (Richard); D. Antony (Dinu); M.S. Artigas; M. Ayub (Muhammad); S. Bala (Senduran); J.C. Barrett (Jeffrey); I.E. Barroso (Inês); P.L. Beales (Philip); M. Benn (Marianne); J. Bentham (Jamie); S. Bhattacharya (Shoumo); E. Birney (Ewan); D.H.R. Blackwood (Douglas); M. Bobrow (Martin); E. Bochukova (Elena); P.F. Bolton (Patrick F.); R. Bounds (Rebecca); C. Boustred (Chris); G. Breen (Gerome); M. Calissano (Mattia); K. Carss (Keren); J.P. Casas (Juan Pablo); J.C. Chambers (John C.); R. Charlton (Ruth); K. Chatterjee (Krishna); L. Chen (Lu); A. Ciampi (Antonio); S. Cirak (Sebahattin); P. Clapham (Peter); G. Clement (Gail); G. Coates (Guy); M. Cocca (Massimiliano); D.A. Collier (David); C. Cosgrove (Catherine); T. Cox (Tony); N.J. Craddock (Nick); L. Crooks (Lucy); S. Curran (Sarah); D. Curtis (David); A. Daly (Allan); I.N.M. Day (Ian N.M.); A.G. Day-Williams (Aaron); G.V. Dedoussis (George); T. Down (Thomas); Y. Du (Yuanping); C.M. van Duijn (Cornelia); I. Dunham (Ian); T. Edkins (Ted); R. Ekong (Rosemary); P. Ellis (Peter); D.M. Evans (David); I.S. Farooqi (I. Sadaf); D.R. Fitzpatrick (David R.); P. Flicek (Paul); J. Floyd (James); A.R. Foley (A. Reghan); C.S. Franklin (Christopher S.); M. Futema (Marta); L. Gallagher (Louise); P. Gasparini (Paolo); T.R. Gaunt (Tom); M. Geihs (Matthias); D. Geschwind (Daniel); C.M.T. Greenwood (Celia); H. Griffin (Heather); D. Grozeva (Detelina); X. Guo (Xiaosen); X. Guo (Xueqin); H. Gurling (Hugh); D. Hart (Deborah); A.E. Hendricks (Audrey E.); P.A. Holmans (Peter A.); L. Huang (Liren); T. Hubbard (Tim); S.E. 
Humphries (Steve E.); M.E. Hurles (Matthew); P.G. Hysi (Pirro); V. Iotchkova (Valentina); A. Isaacs (Aaron); D.K. Jackson (David K.); Y. Jamshidi (Yalda); J. Johnson (Jon); C. Joyce (Chris); K.J. Karczewski (Konrad); J. Kaye (Jane); T. Keane (Thomas); J.P. Kemp (John); K. Kennedy (Karen); A. Kent (Alastair); J. Keogh (Julia); F. Khawaja (Farrah); M.E. Kleber (Marcus); M. Van Kogelenberg (Margriet); A. Kolb-Kokocinski (Anja); J.S. Kooner (Jaspal S.); G. Lachance (Genevieve); C. Langenberg (Claudia); C. Langford (Cordelia); D. Lawson (Daniel); I. Lee (Irene); E.M. van Leeuwen (Elisa); M. Lek (Monkol); R. Li (Rui); Y. Li (Yingrui); J. Liang (Jieqin); H. Lin (Hong); R. Liu (Ryan); J. Lönnqvist (Jouko); L.R. Lopes (Luis R.); M.C. Lopes (Margarida); J. Luan; D.G. MacArthur (Daniel G.); M. Mangino (Massimo); G. Marenne (Gaëlle); W. März (Winfried); J. Maslen (John); A. Matchan (Angela); I. Mathieson (Iain); P. McGuffin (Peter); A.M. McIntosh (Andrew); A.G. McKechanie (Andrew G.); A. McQuillin (Andrew); S. Metrustry (Sarah); N. Migone (Nicola); H.M. Mitchison (Hannah M.); A. Moayyeri (Alireza); J. Morris (James); R. Morris (Richard); D. Muddyman (Dawn); F. Muntoni; B.G. Nordestgaard (Børge G.); K. Northstone (Kate); M.C. O'donovan (Michael); S. O'Rahilly (Stephen); A. Onoufriadis (Alexandros); K. Oualkacha (Karim); M.J. Owen (Michael J.); A. Palotie (Aarno); K. Panoutsopoulou (Kalliope); V. Parker (Victoria); J.R. Parr (Jeremy R.); L. Paternoster (Lavinia); T. Paunio (Tiina); F. Payne (Felicity); S.J. Payne (Stewart J.); J.R.B. Perry (John); O.P.H. Pietiläinen (Olli); V. Plagnol (Vincent); R.C. Pollitt (Rebecca C.); S. Povey (Sue); M.A. Quail (Michael A.); L. Quaye (Lydia); L. Raymond (Lucy); K. Rehnström (Karola); C.K. Ridout (Cheryl K.); S.M. Ring (Susan); G.R.S. Ritchie (Graham R.S.); N. Roberts (Nicola); R.L. Robinson (Rachel L.); D.B. Savage (David); P.J. Scambler (Peter); S. Schiffels (Stephan); M. Schmidts (Miriam); N. Schoenmakers (Nadia); R.H. 
Scott (Richard H.); R.A. Scott (Robert); R.K. Semple (Robert K.); E. Serra (Eva); S.I. Sharp (Sally I.); A.C. Shaw (Adam C.); H.A. Shihab (Hashem A.); S.-Y. Shin (So-Youn); D. Skuse (David); K.S. Small (Kerrin); C. Smee (Carol); G.D. Smith; L. Southam (Lorraine); O. Spasic-Boskovic (Olivera); T.D. Spector (Timothy); D. St. Clair (David); B. St Pourcain (Beate); J. Stalker (Jim); E. Stevens (Elizabeth); J. Sun (Jianping); G. Surdulescu (Gabriela); J. Suvisaari (Jaana); P. Syrris (Petros); I. Tachmazidou (Ioanna); R. Taylor (Rohan); J. Tian (Jing); M.D. Tobin (Martin); D. Toniolo (Daniela); M. Traglia (Michela); A. Tybjaerg-Hansen; A.M. Valdes; A.M. Vandersteen (Anthony M.); A. Varbo (Anette); P. Vijayarangakannan (Parthiban); P.M. Visscher (Peter); L.V. Wain (Louise); J.T. Walters (James); G. Wang (Guangbiao); J. Wang (Jun); Y. Wang (Yu); K. Ward (Kirsten); E. Wheeler (Eleanor); P.H. Whincup (Peter); T. Whyte (Tamieka); H.J. Williams (Hywel J.); K.A. Williamson (Kathleen); C. Wilson (Crispian); S.G. Wilson (Scott); K. Wong (Kim); C. Xu (Changjiang); J. Yang (Jian); G. Zaza (Gianluigi); E. Zeggini (Eleftheria); F. Zhang (Feng); P. Zhang (Pingbo); W. Zhang (Weihua)
2015-01-01
Imputing genotypes from reference panels created by whole-genome sequencing (WGS) provides a cost-effective strategy for augmenting the single-nucleotide polymorphism (SNP) content of genome-wide arrays. The UK10K Cohorts project has generated a data set of 3,781 whole genomes sequenced
van Leeuwen, E.M.; Karssen, L.C.; Deelen, J.; Isaacs, A.; Medina-Gomez, C.; Mbarek, H.; Kanterakis, A.; Trompet, S.; Postmus, I.; Verweij, N.; van Enckevort, D.; Huffman, J.E.; White, C.C.; Feitosa, M.F.; Bartz, T.M.; Manichaikul, A.; Joshi, P.K.; Peloso, G.M.; Deelen, P.; Dijk, F.; Willemsen, G.; de Geus, E.J.C.; Milaneschi, Y.; Penninx, B.W.J.H.; Francioli, L.C.; Menelaou, A.; Pulit, S.L.; Rivadeneira, F.; Hofman, A.; Oostra, B.A.; Franco, O.H.; Mateo Leach, I.; Beekman, M.; de Craen, A.J.; Uh, H.W.; Trochet, H.; Hocking, L.J.; Porteous, D.J.; Sattar, N.; Packard, C.J.; Buckley, B.M.; Brody, J.A.; Bis, J.C.; Rotter, J.I.; Mychaleckyj, J.C.; Campbell, H.; Duan, Q.; Lange, L.A.; Wilson, J.F.; Hayward, C.; Polasek, O.; Vitart, V.; Rudan, I.; Wright, A.F.; Rich, S.S.; Psaty, B.M.; Borecki, I.B.; Kearney, P.M.; Stott, D.J.; Cupples, L.A.; Jukema, J.W.; van der Harst, P.; Sijbrands, E.J.; Hottenga, J.J.; Uitterlinden, A.G.; Swertz, M.A.; van Ommen, G.J.B; Bakker, P.I.W.; Slagboom, P.E.; Boomsma, D.I.; Wijmenga, C.; van Duijn, C.M.
2015-01-01
Variants associated with blood lipid levels may be population-specific. To identify low-frequency variants associated with this phenotype, population-specific reference panels may be used. Here we impute nine large Dutch biobanks (∼35,000 samples) with the population-specific reference panel created
31 CFR 19.630 - May the Department of the Treasury impute conduct of one person to another?
2010-07-01
... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false May the Department of the Treasury impute conduct of one person to another? 19.630 Section 19.630 Money and Finance: Treasury Office of the Secretary of the Treasury GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) General Principles...
A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets
Carrig, Madeline M.; Manrique-Vallier, Daniel; Ranby, Krista W.; Reiter, Jerome P.; Hoyle, Rick H.
2015-01-01
Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437
Impute DC link (IDCL) cell based power converters and control thereof
Divan, Deepakraj M.; Prasai, Anish; Hernendez, Jorge; Moghe, Rohit; Iyer, Amrit; Kandula, Rajendra Prasad
2016-04-26
Power flow controllers based on Imputed DC Link (IDCL) cells are provided. The IDCL cell is a self-contained power electronic building block (PEBB). The IDCL cell may be stacked in series and parallel to achieve power flow control at higher voltage and current levels. Each IDCL cell may comprise a gate drive, a voltage sharing module, and a thermal management component in order to facilitate easy integration of the cell into a variety of applications. By providing direct AC conversion, the IDCL cell based AC/AC converters reduce device count, eliminate the use of electrolytic capacitors that have life and reliability issues, and improve system efficiency compared with similarly rated back-to-back inverter system.
Sulovari, Arvis; Li, Dawei
2014-07-19
Genome-wide association studies (GWAS) have successfully identified genes associated with complex human diseases. Although much of the heritability remains unexplained, combining single nucleotide polymorphism (SNP) genotypes from multiple studies for meta-analysis will increase the statistical power to identify new disease-associated variants. Meta-analysis requires the same allele definition (nomenclature) and genome build among individual studies. Similarly, imputation, commonly used prior to meta-analysis, requires the same consistency. However, the genotypes from various GWAS are generated using different genotyping platforms, arrays or SNP-calling approaches, resulting in the use of different genome builds and allele definitions. Incorrect assumptions of identical allele definitions among combined GWAS lead to a large portion of discarded genotypes or incorrect association findings. There is no published tool that predicts and converts among all major allele definitions. In this study, we have developed a tool, GACT, which stands for Genome build and Allele definition Conversion Tool, that predicts and inter-converts between any of the common SNP allele definitions and between the major genome builds. In addition, we assessed several factors that may affect imputation quality, and our results indicated that inclusion of singletons in the reference had detrimental effects while ambiguous SNPs had no measurable effect. Unexpectedly, exclusion of genotypes with missing rate > 0.001 (40% of study SNPs) showed no significant decrease of imputation quality (even significantly higher when compared to the imputation with singletons in the reference), especially for rare SNPs. GACT is a new, powerful, and user-friendly tool with both command-line and interactive online versions that can accurately predict and convert between any of the common allele definitions and between genome builds for genome-wide meta-analysis and imputation of genotypes from SNP-arrays or deep
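The strand-consistency problem a tool like GACT addresses can be illustrated with a small sketch (hypothetical helper names, not GACT's actual code): alleles reported on opposite strands must be complemented before studies are merged, and A/T or C/G ("ambiguous") SNPs cannot be oriented from the alleles alone.

```python
# Illustrative strand-consistency helpers (hypothetical names, not GACT code).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def flip_strand(allele1, allele2):
    """Map an allele pair to the opposite DNA strand."""
    return COMPLEMENT[allele1], COMPLEMENT[allele2]

def is_ambiguous(allele1, allele2):
    """A/T and C/G SNPs read the same on both strands, so strand
    orientation cannot be inferred from the alleles alone."""
    return COMPLEMENT[allele1] == allele2

print(flip_strand("A", "G"))   # ('T', 'C')
print(is_ambiguous("A", "T"))  # True: palindromic SNP, orientation unknown
print(is_ambiguous("A", "G"))  # False
```

Ambiguous SNPs are the reason the abstract can report them separately: no allele-based check can catch a strand flip for them.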
Configurable multiplier modules for an adaptive computing system
Directory of Open Access Journals (Sweden)
O. A. Pfänder
2006-01-01
Full Text Available The importance of reconfigurable hardware is increasing steadily. For example, the primary approach of using adaptive systems based on programmable gate arrays and configurable routing resources has gone mainstream and high-performance programmable logic devices are rivaling traditional application-specific hardwired integrated circuits. Also, the idea of moving from the 2-D domain into a 3-D design which stacks several active layers above each other is gaining momentum in research and industry, to cope with the demand for smaller devices with a higher scale of integration. However, optimized arithmetic blocks in coarse-grain reconfigurable arrays as well as field-programmable architectures still play an important role. In countless digital systems and signal processing applications, multiplication is one of the critical challenges, where in many cases a trade-off between area usage and data throughput has to be made. But the a priori choice of word-length and number representation can also be replaced by a dynamic choice at run-time, in order to improve flexibility, area efficiency and the level of parallelism in computation. In this contribution, we look at an adaptive computing system called 3-D-SoftChip to point out what parameters are crucial to implement flexible multiplier blocks as optimized elements for accelerated processing. The 3-D-SoftChip architecture uses a novel approach to 3-dimensional integration based on flip-chip bonding with indium bumps. The modular construction, the introduction of interfaces to realize the exchange of intermediate data, and the reconfigurable sign handling approach will be explained, as well as a beneficial way to handle and distribute the numerous required control signals.
Confinement of multiply charged ions in an ECRH mirror plasma
International Nuclear Information System (INIS)
Petty, C.C.
1989-06-01
This thesis is an experimental study of multiply charged ions in the Constance B mirror experiment. By measuring the ion densities, end-loss fluxes and ion temperatures, the parallel confinement times for the first five charge states of oxygen and neon plasmas are determined. The parallel ion confinement times increase with charge state and peak on axis, both indications of an ion-confining potential dip created by the hot electrons. The radial profile of ion end loss is usually hollow due to large ion radial transport (τ∥,i ∼ τ⊥,i), with the peak fluxes occurring at the edge of the electron cyclotron resonance zone. Several attempts are made to increase the end loss of selected ion species. Using minority ICRH, the end-loss flux of resonant ions increases by 20% in cases where radial transport induced by ICRH is not too severe. A large antenna voltage can also extinguish the plasma. By adding helium to an oxygen plasma, the end loss of O⁶⁺ increases by 80% due to decreased ion radial transport. An ion model is developed to predict the ion densities, end-loss fluxes and confinement times in the plasma center using the ion particle balance equations, the quasineutrality condition and theoretical confinement-time formulas. The model generally agrees with the experimental data for oxygen and neon plasmas to within experimental error. Under certain conditions spatial diffusion appears to determine the parallel ion confinement time of the highest charge states. For oxygen plasmas during ICRH, the measured parallel confinement time of the resonant ions is much shorter than the theoretical value, probably due to rf diffusion of the ions into the loss cone. 58 refs., 101 figs., 16 tabs
Theory of Pulsed Neutron Experiments in Highly Heterogeneous Multiplying Media
International Nuclear Information System (INIS)
Corno, S.E.
1965-01-01
In this work we investigate the time and space dependence of the neutron flux within a highly heterogeneous assembly into which pulsed or sinusoidally modulated neutrons are injected. We consider, for the sake of simplicity, a device consisting of a cylindrical block of heavy moderator, along the axis of which a line-shaped region of fissionable material is located. The driving neutron source is assumed to be located on one of the end faces of the cylinder. The extent of the fissionable region allows us to treat it as an absorbing and multiplying singularity of the neutron field. As our attention is mostly concentrated on the space and time variation of the neutron flux, rather crude approximations are adopted as far as the energy dependence of the neutron population is concerned. Within the limits of age-diffusion theory, the response of the device to any neutron excitation may be found in closed form. For a sinusoidally modulated source of given frequency, it may easily be shown that, if the axial singularity were purely absorbing, the neutron waves propagated along the device would possess a phase shift, a wavelength and an attenuation constant depending on the absorbing properties of the singularity. This picture becomes more complicated when neutron multiplication occurs. For this general case the solution derived in our paper turns out to depend on both the absorption and multiplication properties of the singularity. This circumstance suggests, among others, the idea of using a device of the type described above for testing fuel elements of heterogeneous reactors. (author) [fr
Permeability criteria for effective function of passive countercurrent multiplier.
Layton, H E; Knepper, M A; Chou, C L
1996-01-01
The urine concentrating effect of the mammalian renal inner medulla has been attributed to countercurrent multiplication of a transepithelial osmotic difference arising from passive absorption of NaCl from thin ascending limbs of long loops of Henle. This study assesses, both mathematically and experimentally, whether the permeability criteria for effective function of this passive hypothesis are consistent with transport properties measured in long loops of Henle of chinchilla. Mathematical simulations incorporating loop of Henle transepithelial permeabilities idealized for the passive hypothesis generated a steep inner medullary osmotic gradient, confirming the fundamental feasibility of the passive hypothesis. However, when permeabilities measured in chinchilla were used, no inner medullary gradient was generated. A key parameter in the apparent failure of the passive hypothesis is the long-loop descending limb (LDL) urea permeability, which must be small to prevent significant transepithelial urea flux into inner medullary LDL. Consequently, experiments in isolated perfused thin LDL were conducted to determine whether the urea permeability may be lower under conditions more nearly resembling those in the inner medulla. LDL segments were dissected from 30-70% of the distance along the inner medullary axis of the chinchilla kidney. The factors tested were NaCl concentration (125-400 mM in perfusate and bath), urea concentration (5-500 mM in perfusate and bath), calcium concentration (2-8 mM in perfusate and bath), and protamine concentration (300 micrograms/ml in perfusate). None of these factors significantly altered the measured urea permeability, which exceeded 20 × 10⁻⁵ cm/s for all conditions. Simulation results show that this moderately high urea permeability in LDL is an order of magnitude too high for effective operation of the passive countercurrent multiplier.
Effect of the equity multiplier indicator in companies according the sectors
Directory of Open Access Journals (Sweden)
Lenka Lízalová
2013-01-01
Full Text Available Managers carry out the owners' demand to maximise the profitability of invested capital with regard to the risk taken. The tool that evaluates whether it is advantageous to take on debt in order to reach higher profitability is the equity multiplier indicator. An analysis of the multiplier was carried out on 10 years of data from 456 Czech companies. Based on the data from these companies, the influence of two components of the multiplier, which characterise the influence of indebtedness on the return on equity, was analysed. These components are "financial leverage" and "interest burden", which have antagonistic effects. The low variability of the equity multiplier is apparent in companies of the administrative and support service sector and it is also relatively low in companies of the agriculture, forestry and fishing sector; on the contrary, in for example professional, scientific and technical activities and the sector of water, sewage and waste there are companies with higher variability of the equity multiplier. The paper identifies companies (in view of their sector specialization) inclined to greater use of debt to increase the return on equity. The largest equity multiplier is reached in companies of the construction sector; the lowest effect of the multiplier is found in companies of the agriculture sector. The resulting value of the multiplier is to a large extent determined by the financial leverage indicator, and to a lower extent, and negatively, by the interest burden indicator.
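The two multiplier components discussed above fit the standard five-factor DuPont identity, in which ROE is the product of tax burden, interest burden, operating margin, asset turnover, and the equity multiplier. A minimal sketch with illustrative figures (not data from the 456 Czech companies):

```python
def dupont_roe(net_income, ebt, ebit, sales, assets, equity):
    """Five-factor DuPont decomposition of return on equity."""
    tax_burden = net_income / ebt        # share of pre-tax profit kept after tax
    interest_burden = ebt / ebit         # share of EBIT left after interest
    operating_margin = ebit / sales
    asset_turnover = sales / assets
    equity_multiplier = assets / equity  # the leverage component
    return (tax_burden * interest_burden * operating_margin
            * asset_turnover * equity_multiplier)

# Illustrative firm: the product collapses back to net_income / equity.
roe = dupont_roe(net_income=6.0, ebt=8.0, ebit=10.0,
                 sales=100.0, assets=80.0, equity=40.0)
print(round(roe, 4))  # 0.15, identical to 6.0 / 40.0
```

The antagonism noted in the abstract is visible in the identity: more debt raises assets/equity but lowers EBT/EBIT through higher interest expense.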
Baker, Jannah; White, Nicole; Mengersen, Kerrie
2014-11-20
Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using, as a case study, type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. The best choice of imputation method depends upon the application and is not necessarily the most complex method. Mean imputation was selected as the most accurate method in this application. Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease with more confidence in the results to inform public policy decision-making.
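The cross-validation idea can be sketched as follows; the paper's multivariate-normal and conditional-autoregressive imputations are replaced here by two simple stand-ins (mean and median) and a deterministic hold-out, so this illustrates only the selection loop, not the Bayesian spatial models.

```python
import statistics

def cv_select(values, holdout_step=2):
    """Hold out every holdout_step-th value, impute it with each candidate
    method, and return the name of the method with the lowest RMSE."""
    held_out = set(range(0, len(values), holdout_step))
    observed = [v for i, v in enumerate(values) if i not in held_out]
    candidates = {
        "mean": statistics.mean(observed),
        "median": statistics.median(observed),
    }
    def rmse(fill):
        return (sum((values[i] - fill) ** 2 for i in held_out)
                / len(held_out)) ** 0.5
    return min(candidates, key=lambda name: rmse(candidates[name]))

# Invented survey covariate with one outlying observation.
readings = [2.1, 2.4, 1.9, 2.2, 8.0, 2.3, 2.0, 2.2, 2.1, 2.5]
print(cv_select(readings))  # "mean" wins on this toy split
```

As in the paper, the winner depends on the data at hand; the point is that the choice is made by held-out reconstruction error, not by model complexity.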
Effect of the channel electron multiplier connection diagram on its parameters
International Nuclear Information System (INIS)
Ajnbund, M.R.
1976-01-01
Basic alternatives of connection of a channel electron multiplier are described. A dependence of a gain factor and amplitude resolution of the channel electron multiplier upon its connection diagram is studied. The studies have shown that the maximum gain factor is typical of an open-output circuit where the signal is recorded from the anode of the channel electron multiplier at a potential with respect to the channel outlet. The highest amplitude resolution is inherent in a separate-anode circuit where the loading resistance is connected directly to the channel outlet
Multiply Surface-Functionalized Nanoporous Carbon for Vehicular Hydrogen Storage
Energy Technology Data Exchange (ETDEWEB)
Pfeifer, Peter [Univ. of Missouri, Columbia, MO (United States). Dept. of Physics; Gillespie, Andrew [Univ. of Missouri, Columbia, MO (United States). Dept. of Physics; Stalla, David [Univ. of Missouri, Columbia, MO (United States). Dept. of Physics; Dohnke, Elmar [Univ. of Missouri, Columbia, MO (United States). Dept. of Physics
2017-02-20
The purpose of the project “Multiply Surface-Functionalized Nanoporous Carbon for Vehicular Hydrogen Storage” is the development of materials that store hydrogen (H2) by adsorption in quantities and at conditions that outperform current compressed-gas H2 storage systems for electric power generation from hydrogen fuel cells (HFCs). Prominent areas of interest for HFCs are light-duty vehicles (“hydrogen cars”) and replacement of batteries with HFC systems in a wide spectrum of applications, ranging from forklifts to unmanned aerial vehicles to portable power sources. State-of-the-art compressed H2 tanks operate at pressures between 350 and 700 bar at ambient temperature and store 3-4 percent of H2 by weight (wt%) and less than 25 grams of H2 per liter (g/L) of tank volume. Thus, the purpose of the project is to engineer adsorbents that achieve storage capacities better than compressed H2 at pressures less than 350 bar. Adsorption holds H2 molecules as a high-density film on the surface of a solid at low pressure, by virtue of attractive surface-gas interactions. At a given pressure, the stronger the binding of the molecules to the surface (the higher the binding energy), the higher the density of the adsorbed film. Thus, critical for high storage capacities are high surface areas, high binding energies, and low void fractions (high void fractions, such as the interstitial space between adsorbent particles, “waste” storage volume by holding hydrogen as non-adsorbed gas). The coexistence of high surface area and low void fraction makes the ideal adsorbent a nanoporous monolith, with pores wide enough to hold high-density hydrogen films, narrow enough to minimize storage as non-adsorbed gas, and thin walls between pores to minimize the volume occupied by solid instead of hydrogen. A monolith can be machined to fit into a rectangular tank (low pressure, conformable tank), cylindrical tank
International Nuclear Information System (INIS)
Kostic, Lj.
2003-01-01
The influence of a stochastically pulsed Poisson source on the statistical properties of a subcritical multiplying system is analyzed in the paper. A strong dependence on the pulse period and pulse width of the source is shown (author)
Polarization of X rays of multiply charged ions in dense high-temperature plasma
Baronova, EO; Dolgov, AN; Yakubovskii, LK
2004-01-01
The development of a method for studying the features of X-ray emission by multiply charged ions in a dense hot plasma is considered. These features are determined by the radiation polarization phenomenon.
Karatsuba-Ofman Multiplier with Integrated Modular Reduction for GF(2^m)
Directory of Open Access Journals (Sweden)
CUEVAS-FARFAN, E.
2013-05-01
Full Text Available In this paper a novel GF(2^m) multiplier based on the Karatsuba-Ofman Algorithm is presented. A binary field multiplication in polynomial basis is typically viewed as a two-step process: a polynomial multiplication followed by a modular reduction step. This research proposes a modification to the original Karatsuba-Ofman Algorithm in order to integrate the modular reduction inside the polynomial multiplication step. Modular reduction is achieved by using parallel linear feedback registers. The new algorithm is described in detail and results from a hardware implementation on FPGA technology are discussed. The hardware architecture is described in VHDL and synthesized for a Virtex-6 device. Although the proposed field multiplier can be implemented for arbitrary finite fields, the targeted finite fields are those recommended for Elliptic Curve Cryptography. Compared with other KOA multipliers, our proposed multiplier uses 36% fewer area resources and improves the maximum delay by 10%.
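The two-step view described above can be sketched in a few lines. This toy works in GF(2^4) with the irreducible polynomial x^4 + x + 1 (an assumption for illustration; ECC fields are much larger) and replaces the paper's parallel linear-feedback-register reduction with a plain shift-and-XOR loop.

```python
def clmul(a, b):
    """Schoolbook carry-less multiplication in GF(2)[x]; ints encode
    bit-polynomials (bit i = coefficient of x**i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a, b, half=2):
    """One Karatsuba-Ofman split: three half-size products instead of four."""
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half
    b0, b1 = b & mask, b >> half
    lo = clmul(a0, b0)
    hi = clmul(a1, b1)
    mid = clmul(a0 ^ a1, b0 ^ b1) ^ lo ^ hi
    return (hi << (2 * half)) ^ (mid << half) ^ lo

def reduce_mod(p, poly=0b10011, m=4):
    """Reduce modulo x^4 + x + 1 by repeated shift-and-XOR."""
    while p.bit_length() > m:
        p ^= poly << (p.bit_length() - m - 1)
    return p

# (x^3 + x) * (x^2 + 1) = x^5 + x, which reduces to x^2 in GF(2^4).
prod = karatsuba_gf2(0b1010, 0b0101)
assert prod == clmul(0b1010, 0b0101)   # Karatsuba matches schoolbook
print(bin(reduce_mod(prod)))           # 0b100
```

The paper's contribution is to interleave the reduction into the multiplication step in hardware; here the two steps are kept separate for clarity.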
Nuclear retention of multiply spliced HIV-1 RNA in resting CD4+ T cells.
Directory of Open Access Journals (Sweden)
Kara G Lassen
2006-07-01
Full Text Available HIV-1 latency in resting CD4+ T cells represents a major barrier to virus eradication in patients on highly active antiretroviral therapy (HAART). We describe here a novel post-transcriptional block in HIV-1 gene expression in resting CD4+ T cells from patients on HAART. This block involves the aberrant localization of multiply spliced (MS) HIV-1 RNAs encoding the critical positive regulators Tat and Rev. Although these RNAs had no previously described export defect, we show that they exhibit strict nuclear localization in resting CD4+ T cells from patients on HAART. Overexpression of the transcriptional activator Tat from non-HIV vectors allowed virus production in these cells. Thus, the nuclear retention of MS HIV-1 RNA interrupts a positive feedback loop and contributes to the non-productive nature of infection of resting CD4+ T cells. To define the mechanism of nuclear retention, proteomic analysis was used to identify proteins that bind MS HIV-1 RNA. Polypyrimidine tract binding protein (PTB) was identified as an HIV-1 RNA-binding protein differentially expressed in resting and activated CD4+ T cells. Overexpression of PTB in resting CD4+ T cells from patients on HAART allowed cytoplasmic accumulation of HIV-1 RNAs. PTB overexpression also induced virus production by resting CD4+ T cells. Virus culture experiments showed that overexpression of PTB in resting CD4+ T cells from patients on HAART allowed release of replication-competent virus, while preserving a resting cellular phenotype. Whether through effects on RNA export or another mechanism, the ability of PTB to reverse latency without inducing cellular activation is a result with therapeutic implications.
Directory of Open Access Journals (Sweden)
Puett Robin C
2009-10-01
Full Text Available Abstract Background: There is increasing interest in the study of place effects on health, facilitated in part by geographic information systems. Incomplete or missing address information reduces geocoding success. Several geographic imputation methods have been suggested to overcome this limitation. Accuracy evaluation of these methods can be focused at the level of individuals and at higher group levels (e.g., spatial distribution). Methods: We evaluated the accuracy of eight geo-imputation methods for address allocation from ZIP codes to census tracts at the individual and group level. The spatial apportioning approaches underlying the imputation methods included four fixed (deterministic) and four random (stochastic) allocation methods using land area, total population, population under age 20, and race/ethnicity as weighting factors. Data included more than 2,000 geocoded cases of diabetes mellitus among youth aged 0-19 in four U.S. regions. The imputed distribution of cases across tracts was compared to the true distribution using a chi-squared statistic. Results: At the individual level, population-weighted (total or under age 20) fixed allocation showed the greatest level of accuracy, with correct census tract assignments averaging 30.01% across all regions, followed by the race/ethnicity-weighted random method (23.83%). The true distribution of cases across census tracts was that 58.2% of tracts exhibited no cases, 26.2% had one case, 9.5% had two cases, and less than 3% had three or more. This distribution was best captured by random allocation methods, with no significant differences (p-value > 0.90). However, significant differences in distributions based on fixed allocation methods were found (p-value …). Conclusion: Fixed imputation methods seemed to yield the greatest accuracy at the individual level, suggesting use for studies on area-level environmental exposures. Fixed methods result in artificial clusters in single census tracts. For studies
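The fixed versus random apportioning contrast above can be sketched with hypothetical tract weights: fixed allocation always picks the highest-weight tract (producing the single-tract clusters noted in the conclusion), while random allocation draws tracts in proportion to their weights.

```python
import random

# Hypothetical tract populations for one ZIP code.
tract_population = {"tract_A": 6000, "tract_B": 3000, "tract_C": 1000}

def fixed_allocate(weights):
    """Deterministic: every case goes to the highest-weight tract."""
    return max(weights, key=weights.get)

def random_allocate(weights, rng):
    """Stochastic: draw a tract with probability proportional to its weight."""
    tracts, pops = zip(*weights.items())
    return rng.choices(tracts, weights=pops, k=1)[0]

print(fixed_allocate(tract_population))   # always tract_A, for every case

rng = random.Random(42)
counts = {t: 0 for t in tract_population}
for _ in range(10000):
    counts[random_allocate(tract_population, rng)] += 1
print(counts)   # roughly a 6000/3000/1000 split across the three tracts
```

This is why the fixed methods win at the individual level (they bet on the modal tract) while the random methods better reproduce the case distribution across tracts.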
Imputation of the rare HOXB13 G84E mutation and cancer risk in a large population-based cohort.
Directory of Open Access Journals (Sweden)
Thomas J Hoffmann
2015-01-01
Full Text Available An efficient approach to characterizing the disease burden of rare genetic variants is to impute them into large well-phenotyped cohorts with existing genome-wide genotype data using large sequenced reference panels. The success of this approach hinges on the accuracy of rare variant imputation, which remains controversial. For example, a recent study suggested that one cannot adequately impute the HOXB13 G84E mutation associated with prostate cancer risk (carrier frequency of 0.0034 in European ancestry participants in the 1000 Genomes Project). We show that by utilizing the 1000 Genomes Project data plus an enriched reference panel of mutation carriers we were able to accurately impute the G84E mutation into a large cohort of 83,285 non-Hispanic White participants from the Kaiser Permanente Research Program on Genes, Environment and Health Genetic Epidemiology Research on Adult Health and Aging cohort. Imputation authenticity was confirmed via a novel classification and regression tree method, and then empirically validated by analyzing a subset of these subjects plus an additional 1,789 men from Kaiser specifically genotyped for the G84E mutation (r² = 0.57, 95% CI = 0.37–0.77). We then show the value of this approach by using the imputed data to investigate the impact of the G84E mutation on age-specific prostate cancer risk and on risk of fourteen other cancers in the cohort. The age-specific risk of prostate cancer among G84E mutation carriers was higher than among non-carriers. Risk estimates from Kaplan-Meier curves were 36.7% versus 13.6% by age 72, and 64.2% versus 24.2% by age 80, for G84E mutation carriers and non-carriers, respectively (p = 3.4×10⁻¹²). The G84E mutation was also associated with an increase in risk for the fourteen other most common cancers considered collectively (p = 5.8×10⁻⁴) and more so in cases diagnosed with multiple cancer types, both those including and not including prostate cancer, strongly suggesting
Beyond the static money multiplier: in search of a dynamic theory of money
Berardi, Michele
2007-01-01
In this paper, we analyze the process of money creation in a credit economy. We start from the consideration that the traditional money multiplier is a poor description of this process and present an alternative and dynamic approach that takes into account the heterogeneity of agents in the economy and their interactions. We show that this heterogeneity can account for the instability of the multiplier and that it can make the system path-dependent. By using concepts and techniques borrowed f...
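The gap between the static multiplier and a dynamic, heterogeneous-agent view can be illustrated with a toy re-lending loop (parameters illustrative; this is not the paper's model): with one uniform reserve ratio the loop reproduces the textbook 1/r result, while heterogeneous reserve behavior along the lending chain changes the aggregate outcome.

```python
def money_created(initial_deposit, reserve_ratios, rounds=1000):
    """Total deposits after repeatedly re-lending a deposit through a cycle
    of banks, each keeping its own reserve ratio."""
    total, flow = 0.0, initial_deposit
    for k in range(rounds):
        total += flow
        flow *= 1.0 - reserve_ratios[k % len(reserve_ratios)]
    return total

# One homogeneous reserve ratio reproduces the static textbook result 1/r.
print(round(money_created(100.0, [0.10]), 2))        # 1000.0 = 100 / 0.10
# Two heterogeneous banks alternating: the aggregate differs from 1/mean(r).
print(round(money_created(100.0, [0.05, 0.20]), 2))  # 812.5, not 100 / 0.125 = 800.0
```

The order in which deposits pass through heterogeneous banks matters in this toy, which is a simple instance of the path-dependence the abstract points to.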
Atlantic Richfield Hanford Company californium multiplier/delayed neutron counter safety analysis
International Nuclear Information System (INIS)
Zimmer, W.H.
1976-08-01
The Californium Multiplier (CFX) is a subcritical assembly of uranium surrounding ²⁵²Cf spontaneously fissioning neutron sources; its function is to multiply the neutron flux to a level useful for activation analysis. This document summarizes the safety analysis aspects of the CFX, DNC, pneumatic transfer system, and instrumentation, and details all aspects of the total facility as a starting point for the ARHCO Safety Analysis Review. Recognized hazards and steps already taken to neutralize them are itemized.
Utilization of a channel electron multiplier for counting-measurement on condensed molecular jet
International Nuclear Information System (INIS)
Le Bihan, A.M.; Bottiglioni, F.; Coutant, J.; Fois, M.; CEA Centre d'Etudes Nucleaires de Fontenay-aux-Roses, 92
1974-01-01
A channel electron multiplier has been used for counting ionized clusters containing up to a few thousand molecules; clusters are accelerated towards a negative (approximately -220 V) copper target; a larger negative bias (approximately -3000 V) is applied to the multiplier entrance so as to collect positive secondary ions and/or reflected cluster fragments; in the present application this gives a better signal-to-noise ratio than detecting clusters directly or by secondary electron emission on the target [fr
Solution of second order linear fuzzy difference equation by Lagrange's multiplier method
Directory of Open Access Journals (Sweden)
Sankar Prasad Mondal
2016-06-01
Full Text Available In this paper we present the solution procedure for a second order linear fuzzy difference equation by Lagrange's multiplier method. In the crisp sense such difference equations are easy to solve, but taken in the fuzzy sense they form a system of difference equations which is not so easy to solve. With the help of Lagrange's multipliers the system can be solved easily. The results are illustrated by two different numerical examples and followed by two applications.
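As a crisp baseline for the fuzzy case treated above, a second-order linear difference equation can be solved by iteration and checked against the closed form from its characteristic equation (this sketch does not reproduce the paper's Lagrange-multiplier construction; the coefficients are illustrative).

```python
def iterate(a, b, x0, x1, n):
    """Directly iterate x_{k+2} = a*x_{k+1} + b*x_k up to index n."""
    xs = [x0, x1]
    for _ in range(n - 1):
        xs.append(a * xs[-1] + b * xs[-2])
    return xs[n]

# x_{n+2} = x_{n+1} + 6*x_n has characteristic roots r = 3 and r = -2;
# with x0 = 2, x1 = 1 the closed form is x_n = 3**n + (-2)**n.
closed = lambda n: 3**n + (-2)**n
for n in range(2, 8):
    assert iterate(1, 6, 2, 1, n) == closed(n)
print(iterate(1, 6, 2, 1, 7))  # 2059 = 3**7 + (-2)**7
```

In the fuzzy setting each x_n becomes an interval-valued (alpha-cut) quantity, which is what turns the single recurrence into the coupled system the paper solves.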
DEFF Research Database (Denmark)
Thingholm, Tine E; Jensen, Ole N; Robinson, Phillip J
2008-01-01
…spectrometric analysis, such as immobilized metal affinity chromatography or titanium dioxide, the coverage of the phosphoproteome of a given sample is limited. Here we report a simple and rapid strategy - SIMAC - for sequential separation of mono-phosphorylated peptides and multiply phosphorylated peptides from … and an optimized titanium dioxide chromatographic method. More than double the total number of identified phosphorylation sites was obtained with SIMAC, primarily from a three-fold increase in recovery of multiply phosphorylated peptides.
International Nuclear Information System (INIS)
Griffith, Candice D.; Mahadevan, Sankaran
2015-01-01
This paper develops a probabilistic approach that could use empirical data to derive values of performance shaping factor (PSF) multipliers for use in quantitative human reliability analysis (HRA). The proposed approach is illustrated with data on sleep deprivation effects on performance. A review of existing HRA methods reveals that sleep deprivation is not explicitly included at present, and expert opinion is frequently used to inform HRA model multipliers. In this paper, quantitative data from empirical studies regarding the effect of continuous hours of wakefulness on performance measures (reaction time, accuracy, and number of lapses) are used to develop a method to derive PSF multiplier values for sleep deprivation, in the context of the SPAR-H model. Data is extracted from the identified studies according to the meta-analysis research synthesis method and used to investigate performance trends and error probabilities. The error probabilities in test and control conditions are compared, and the resulting probability ratios are suggested for use in informing the selection of PSF multipliers in HRA methods. Although illustrated for sleep deprivation, the proposed methodology is general, and can be applied to other performance shaping factors. - Highlights: • Method proposed to derive performance shaping factor multipliers from empirical data. • Studies reporting the effect of sleep deprivation on performance are analyzed. • Test data using psychomotor vigilance tasks are analyzed. • Error probability multipliers computed for reaction time, lapses, and accuracy measures.
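The paper's core quantity is a ratio of error probabilities between test and control conditions, proposed as the PSF multiplier. A minimal sketch with illustrative numbers (not values from the cited studies):

```python
# A PSF multiplier taken as the error probability under the degraded
# condition (e.g., sleep-deprived) divided by the error probability
# under the control (rested) condition.
def psf_multiplier(p_error_test, p_error_control):
    if p_error_control <= 0:
        raise ValueError("control error probability must be positive")
    return p_error_test / p_error_control

# Illustrative: errors triple under sleep deprivation
m = psf_multiplier(p_error_test=0.06, p_error_control=0.02)  # -> 3.0
```

In a SPAR-H style analysis, such ratios would inform the choice among the predefined multiplier levels rather than be used raw.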
Nearest neighbor imputation using spatial-temporal correlations in wireless sensor networks.
Li, YuanYuan; Parker, Lynne E
2014-01-01
Missing data is common in Wireless Sensor Networks (WSNs), especially with multi-hop communications. There are many reasons for this phenomenon, such as unstable wireless communications, synchronization issues, and unreliable sensors. Unfortunately, missing data creates a number of problems for WSNs. First, since most sensor nodes in the network are battery-powered, it is too expensive to have the nodes retransmit missing data across the network. Data re-transmission may also cause time delays when detecting abnormal changes in an environment. Furthermore, localized reasoning techniques on sensor nodes (such as machine learning algorithms to classify states of the environment) are generally not robust enough to handle missing data. Since sensor data collected by a WSN is generally correlated in time and space, we illustrate how replacing missing sensor values with spatially and temporally correlated sensor values can significantly improve the network's performance. However, our studies show that it is important to determine which nodes are spatially and temporally correlated with each other. Simple techniques based on Euclidean distance are not sufficient for complex environmental deployments. Thus, we have developed a novel Nearest Neighbor (NN) imputation method that estimates missing data in WSNs by learning spatial and temporal correlations between sensor nodes. To improve the search time, we utilize a k-d tree data structure, which is a non-parametric, data-driven binary search tree. Instead of using traditional mean and variance of each dimension for k-d tree construction, and Euclidean distance for k-d tree search, we use weighted variances and weighted Euclidean distances based on measured percentages of missing data. We have evaluated this approach through experiments on sensor data from a volcano dataset collected by a network of Crossbow motes, as well as experiments using sensor data from a highway traffic monitoring application. Our experimental
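The weighted nearest-neighbor idea can be sketched without the k-d tree: use a brute-force search under a weighted Euclidean distance and fill gaps from the closest past reading. All data and weights below are synthetic stand-ins:

```python
import numpy as np

# Impute a node's missing reading from its nearest neighbor under a
# weighted Euclidean distance; missing dimensions are excluded from the
# distance. (The paper accelerates this search with a k-d tree.)
def weighted_nn_impute(history, query, weights):
    """history: (n, d) complete past readings; query: length-d with np.nan gaps."""
    miss = np.isnan(query)
    w = weights * ~miss                            # drop missing dims from distance
    d2 = ((history - np.where(miss, 0.0, query))**2 * w).sum(axis=1)
    nn = history[np.argmin(d2)]
    return np.where(miss, nn, query)               # fill gaps from the neighbor

hist = np.array([[20.0, 1.0], [30.0, 5.0], [21.0, 1.2]])   # e.g. temp, wind
filled = weighted_nn_impute(hist, np.array([20.5, np.nan]),
                            weights=np.array([1.0, 0.5]))
# -> [20.5, 1.0]: the gap is filled from the closest temperature match
```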
Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Su, Edwin P; Grauer, Jonathan N
2018-03-01
Despite the advantages of large, national datasets, one continuing concern is missing data values. Complete case analysis, where only cases with complete data are analyzed, is commonly used rather than more statistically rigorous approaches such as multiple imputation. This study characterizes the potential selection bias introduced using complete case analysis and compares the results of common regressions using both techniques following unicompartmental knee arthroplasty. Patients undergoing unicompartmental knee arthroplasty were extracted from the 2005 to 2015 National Surgical Quality Improvement Program. As examples, the demographics of patients with and without missing preoperative albumin and hematocrit values were compared. Missing data were then treated with both complete case analysis and multiple imputation (an approach that reproduces the variation and associations that would have been present in a full dataset) and the conclusions of common regressions for adverse outcomes were compared. A total of 6117 patients were included, of which 56.7% were missing at least one value. Younger, female, and healthier patients were more likely to have missing preoperative albumin and hematocrit values. The use of complete case analysis removed 3467 patients from the study in comparison with multiple imputation which included all 6117 patients. The 2 methods of handling missing values led to differing associations of low preoperative laboratory values with commonly studied adverse outcomes. The use of complete case analysis can introduce selection bias and may lead to different conclusions in comparison with the statistically rigorous multiple imputation approach. Joint surgeons should consider the methods of handling missing values when interpreting arthroplasty research. Copyright © 2017 Elsevier Inc. All rights reserved.
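The row-count consequence of the two strategies is easy to make concrete. The sketch below uses synthetic data (not NSQIP) and a single mean fill in place of the paper's full multiple imputation, purely to show that complete case analysis discards rows while imputation retains them:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"albumin": rng.normal(4.0, 0.5, 10)})
df.loc[[1, 4, 7], "albumin"] = np.nan        # inject missingness in 3 of 10 rows

complete_case = df.dropna()                  # only rows with observed albumin
imputed = df.fillna(df["albumin"].mean())    # all rows retained

# complete case keeps 7 rows; imputation keeps all 10
```

If missingness is related to patient characteristics, as the paper found, the dropped rows are not a random subsample, which is the selection bias at issue.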
Michael Weber; Michaela Denk
2011-01-01
International organizations collect data from national authorities to create multivariate cross-sectional time series for their analyses. As data from countries with not yet well-established statistical systems may be incomplete, the bridging of data gaps is a crucial challenge. This paper investigates data structures and missing data patterns in the cross-sectional time series framework, reviews missing value imputation techniques used for micro data in official statistics, and discusses the...
Helms, Ronald W; Reece, Laura Helms; Helms, Russell W; Helms, Mary W
2011-03-01
Missing not at random (MNAR) post-dropout missing data from a longitudinal clinical trial result in the collection of "biased data," which leads to biased estimators and tests of corrupted hypotheses. In a full rank linear model analysis the model equation, E[Y] = Xβ, leads to the definition of the primary parameter β = (X'X)(-1)X'E[Y], and the definition of linear secondary parameters of the form θ = Lβ = L(X'X)(-1)X'E[Y], including, for example, a parameter representing a "treatment effect." These parameters depend explicitly on E[Y], which raises the questions: What is E[Y] when some elements of the incomplete random vector Y are not observed and MNAR, or when such a Y is "completed" via imputation? We develop a rigorous, readily interpretable definition of E[Y] in this context that leads directly to definitions of β, Bias(β) = E[β] - β, Bias(θ) = E[θ] - Lβ, and the extent of hypothesis corruption. These definitions provide a basis for evaluating, comparing, and removing biases induced by various linear imputation methods for MNAR incomplete data from longitudinal clinical trials. Linear imputation methods use earlier data from a subject to impute values for post-dropout missing values and include "Last Observation Carried Forward" (LOCF) and "Baseline Observation Carried Forward" (BOCF), among others. We illustrate the methods of evaluating, comparing, and removing biases and the effects of testing corresponding corrupted hypotheses via a hypothetical but very realistic longitudinal analgesic clinical trial.
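"Last Observation Carried Forward" is simple enough to state in a few lines; the sketch below (illustrative values) is the imputation mechanism whose induced bias Bias(β̂) = E[β̂] - β the paper formalizes:

```python
# LOCF: after a subject drops out, every later visit is filled with the
# last observed value. Under MNAR dropout this systematically distorts
# E[Y] and hence the estimated treatment effect.
def locf(visits):
    """visits: per-visit outcomes with None after dropout; returns completed list."""
    out, last = [], None
    for v in visits:
        last = v if v is not None else last
        out.append(last)
    return out

completed = locf([5.0, 4.2, None, None])   # -> [5.0, 4.2, 4.2, 4.2]
```

BOCF is the same loop carrying forward the baseline value instead of the most recent one.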
Directory of Open Access Journals (Sweden)
Galina A. Manokhina
2012-11-01
Full Text Available The article highlights the main questions concerning the possible consequences of replacing the currently operating single tax on imputed income with the patent system of taxation. The main advantages and drawbacks of the new system of taxation are shown, including the view that it would be more effective not to replace one special tax regime with another, but to introduce the patent taxation system as an auxiliary system.
Energy Technology Data Exchange (ETDEWEB)
Riggi, S., E-mail: sriggi@oact.inaf.it [INAF - Osservatorio Astrofisico di Catania (Italy); Riggi, D. [Keras Strategy - Milano (Italy); Riggi, F. [Dipartimento di Fisica e Astronomia - Università di Catania (Italy); INFN, Sezione di Catania (Italy)
2015-04-21
Identification of charged particles in a multilayer detector by the energy loss technique may also be achieved by the use of a neural network. The performance of the network becomes worse when a large fraction of information is missing, for instance due to detector inefficiencies. Algorithms which provide a way to impute missing information have been developed over the past years. Among the various approaches, we focused on normal mixture models in comparison with standard mean imputation and multiple imputation methods. Further, to account for the intrinsic asymmetry of the energy loss data, we considered skew-normal mixture models and provided a closed form implementation in the Expectation-Maximization (EM) algorithm framework to handle missing patterns. The method has been applied to a test case where the energy losses of pions, kaons and protons in a six-layer silicon detector are considered as input neurons to a neural network. Results are given in terms of reconstruction efficiency and purity of the various species in different momentum bins.
RIDDLE: Race and ethnicity Imputation from Disease history with Deep LEarning.
Directory of Open Access Journals (Sweden)
Ji-Sung Kim
2018-04-01
Full Text Available Anonymized electronic medical records are an increasingly popular source of research data. However, these datasets often lack race and ethnicity information. This creates problems for researchers modeling human disease, as race and ethnicity are powerful confounders for many health exposures and treatment outcomes; race and ethnicity are closely linked to population-specific genetic variation. We showed that deep neural networks generate more accurate estimates for missing racial and ethnic information than competing methods (e.g., logistic regression, random forest, support vector machines, and gradient-boosted decision trees). RIDDLE yielded significantly better classification performance across all metrics that were considered: accuracy, cross-entropy loss (error), precision, recall, and area under the curve for receiver operating characteristic plots (all p < 10^-9). We made specific efforts to interpret the trained neural network models to identify, quantify, and visualize medical features which are predictive of race and ethnicity. We used these characterizations of informative features to perform a systematic comparison of differential disease patterns by race and ethnicity. The fact that clinical histories are informative for imputing race and ethnicity could reflect (1) a skewed distribution of blue- and white-collar professions across racial and ethnic groups, (2) uneven accessibility and subjective importance of prophylactic health, (3) possible variation in lifestyle, such as dietary habits, and (4) differences in background genetic variation which predispose to diseases.
Imputation-based analysis of association studies: candidate regions and quantitative traits.
Directory of Open Access Journals (Sweden)
Bertrand Servin
2007-07-01
Full Text Available We introduce a new framework for the analysis of association studies, designed to allow untyped variants to be more effectively and directly tested for association with a phenotype. The idea is to combine knowledge on patterns of correlation among SNPs (e.g., from the International HapMap project or resequencing data) in a candidate region of interest with genotype data at tag SNPs collected on a phenotyped study sample, to estimate ("impute") unmeasured genotypes, and then assess association between the phenotype and these estimated genotypes. Compared with standard single-SNP tests, this approach results in increased power to detect association, even in cases in which the causal variant is typed, with the greatest gain occurring when multiple causal variants are present. It also provides more interpretable explanations for observed associations, including assessing, for each SNP, the strength of the evidence that it (rather than another correlated SNP) is causal. Although we focus on association studies with a quantitative phenotype and a relatively restricted region (e.g., a candidate gene), the framework is applicable and computationally practical for whole genome association studies. Methods described here are implemented in a software package, Bim-Bam, available from the Stephens Lab website http://stephenslab.uchicago.edu/software.html.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to handle the missing values instead of directly deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method
Directory of Open Access Journals (Sweden)
Jun-He Yang
2017-01-01
Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to handle the missing values instead of directly deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
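The final modeling step can be sketched on synthetic data (not the Shimen Reservoir series); the imputation and factor-analysis-based variable selection stages are omitted, leaving only the lagged-predictor Random Forest forecast:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(0, 1, 200)) + 100.0      # synthetic daily water level

# Two lagged values of the series serve as illustrative predictors
X = np.column_stack([level[:-2], level[1:-1]])
y = level[2:]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
forecast = model.predict(np.array([[level[-2], level[-1]]]))  # one step ahead
```

In the paper the predictor set would instead be the atmospheric variables surviving the variable selection step.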
Multiple imputation of rainfall missing data in the Iberian Mediterranean context
Miró, Juan Javier; Caselles, Vicente; Estrela, María José
2017-11-01
Given the increasing need for complete rainfall data networks, diverse methods for filling gaps in observed precipitation series have been proposed in recent years, progressively more advanced than traditional approaches. The present study validates 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling missing data in multiple incomplete series at the same time within a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (east Iberian Peninsula), an area characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and a quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods, highlighting that the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among the non-linear approaches. Among linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (hybrid approach) yielded the best results.
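The quantile-mapping post-processing step admits a compact empirical sketch: map each imputed value to the observed series' value at the same empirical quantile, so the filled series inherits the observed distribution. Data below are synthetic:

```python
import numpy as np

# Empirical quantile mapping: rank each imputed value within the imputed
# sample, then read off the observed distribution at that quantile.
def quantile_map(imputed, observed):
    ranks = np.searchsorted(np.sort(imputed), imputed, side="right") / len(imputed)
    return np.quantile(observed, np.clip(ranks, 0.0, 1.0))

obs = np.array([0.0, 1.0, 2.0, 5.0, 20.0])   # observed daily rainfall (mm)
imp = np.array([0.1, 0.9, 1.8, 4.0, 15.0])   # raw imputed values
adjusted = quantile_map(imp, obs)            # pushed onto the observed distribution
```

This corrects the tendency of regression-type imputation to compress extremes, which matters for heavy-tailed Mediterranean rainfall.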
Multiple imputation for estimating the risk of developing dementia and its impact on survival.
Yu, Binbing; Saczynski, Jane S; Launer, Lenore
2010-10-01
Dementia, Alzheimer's disease in particular, is one of the major causes of disability and decreased quality of life among the elderly and a leading obstacle to successful aging. Given the profound impact on public health, much research has focused on the age-specific risk of developing dementia and the impact on survival. Early work has discussed various methods of estimating age-specific incidence of dementia, among which the illness-death model is popular for modeling disease progression. In this article we use multiple imputation to fit multi-state models for survival data with interval censoring and left truncation. This approach allows semi-Markov models in which survival after dementia depends on onset age. Such models can be used to estimate the cumulative risk of developing dementia in the presence of the competing risk of dementia-free death. Simulations are carried out to examine the performance of the proposed method. Data from the Honolulu Asia Aging Study are analyzed to estimate the age-specific and cumulative risks of dementia and to examine the effect of major risk factors on dementia onset and death.
Analysis of Case-Control Association Studies: SNPs, Imputation and Haplotypes
Chatterjee, Nilanjan
2009-11-01
Although prospective logistic regression is the standard method of analysis for case-control data, it has been recently noted that in genetic epidemiologic studies one can use the "retrospective" likelihood to gain major power by incorporating various population genetics model assumptions such as Hardy-Weinberg-Equilibrium (HWE), gene-gene and gene-environment independence. In this article we review these modern methods and contrast them with the more classical approaches through two types of applications (i) association tests for typed and untyped single nucleotide polymorphisms (SNPs) and (ii) estimation of haplotype effects and haplotype-environment interactions in the presence of haplotype-phase ambiguity. We provide novel insights to existing methods by construction of various score-tests and pseudo-likelihoods. In addition, we describe a novel two-stage method for analysis of untyped SNPs that can use any flexible external algorithm for genotype imputation followed by a powerful association test based on the retrospective likelihood. We illustrate applications of the methods using simulated and real data. © Institute of Mathematical Statistics, 2009.
Analysis of Case-Control Association Studies: SNPs, Imputation and Haplotypes
Chatterjee, Nilanjan; Chen, Yi-Hau; Luo, Sheng; Carroll, Raymond J.
2009-01-01
Although prospective logistic regression is the standard method of analysis for case-control data, it has been recently noted that in genetic epidemiologic studies one can use the "retrospective" likelihood to gain major power by incorporating various population genetics model assumptions such as Hardy-Weinberg-Equilibrium (HWE), gene-gene and gene-environment independence. In this article we review these modern methods and contrast them with the more classical approaches through two types of applications (i) association tests for typed and untyped single nucleotide polymorphisms (SNPs) and (ii) estimation of haplotype effects and haplotype-environment interactions in the presence of haplotype-phase ambiguity. We provide novel insights to existing methods by construction of various score-tests and pseudo-likelihoods. In addition, we describe a novel two-stage method for analysis of untyped SNPs that can use any flexible external algorithm for genotype imputation followed by a powerful association test based on the retrospective likelihood. We illustrate applications of the methods using simulated and real data. © Institute of Mathematical Statistics, 2009.
RIDDLE: Race and ethnicity Imputation from Disease history with Deep LEarning
Kim, Ji-Sung
2018-04-26
Anonymized electronic medical records are an increasingly popular source of research data. However, these datasets often lack race and ethnicity information. This creates problems for researchers modeling human disease, as race and ethnicity are powerful confounders for many health exposures and treatment outcomes; race and ethnicity are closely linked to population-specific genetic variation. We showed that deep neural networks generate more accurate estimates for missing racial and ethnic information than competing methods (e.g., logistic regression, random forest, support vector machines, and gradient-boosted decision trees). RIDDLE yielded significantly better classification performance across all metrics that were considered: accuracy, cross-entropy loss (error), precision, recall, and area under the curve for receiver operating characteristic plots (all p < 10^-9). We made specific efforts to interpret the trained neural network models to identify, quantify, and visualize medical features which are predictive of race and ethnicity. We used these characterizations of informative features to perform a systematic comparison of differential disease patterns by race and ethnicity. The fact that clinical histories are informative for imputing race and ethnicity could reflect (1) a skewed distribution of blue- and white-collar professions across racial and ethnic groups, (2) uneven accessibility and subjective importance of prophylactic health, (3) possible variation in lifestyle, such as dietary habits, and (4) differences in background genetic variation which predispose to diseases.
Determination of stress multipliers for thin perforated plates with square array of holes
International Nuclear Information System (INIS)
Bhattacharya, A.; Murli, B.; Kushwaha, H.S.
1991-01-01
The peak stress multipliers are required to determine the maximum stresses in perforated plates for the realistic evaluation of their fatigue life. Section III of the ASME Boiler and Pressure Vessel Code does not provide any information about such multipliers to be used in thin perforated plates with a square penetration pattern. Although such multipliers for membrane loadings are available in the literature, they were obtained either by classical analysis or by photoelastic experiments, and there is no significant finite element analysis in this area. It has also been a common practice among designers to apply the same multipliers for loads producing bending type stress. The stress multipliers in bending are lower than those in membrane, so a reduction of the resultant peak stress occurs if proper stress multipliers are used for bending. The present paper is aimed at developing a finite element technique which can be used for determining the peak stress multipliers in thin plates for membrane as well as bending loads. A quarter symmetric part of a 3 x 3 square array was chosen for the analysis. The results were obtained by the computer programs PAFEC and COSMOS/M using 2-D plane stress elements for the membrane part and a degenerated 3-D shell element for the bending part. The results for the membrane part are compared with Bailey, Hicks and Hulbert, and with Meijers' finite element results for the bending part. A study was made at the initial stage by analysing a 6 x 6 square array to see the effect of holes beyond one pitch, which were left out by the 3 x 3 array; the effect of the additional holes was found to be negligible. It was therefore decided to carry out further analysis with the 3 x 3 square array. Photoelastic experiments were also performed to validate the results obtained by theoretical analysis. (author)
International Nuclear Information System (INIS)
Seifert, M.
1999-01-01
The Swiss Gas Industry has carried out a systematic, technical estimate of methane release from the complete supply chain from production to consumption for the years 1992/1993. The result of this survey provided a conservative value, amounting to 0.9% of the Swiss domestic output. A continuation of the study taking into account new findings with regard to emission factors and the effect of the climate is now available, which provides a value of 0.8% for the target year of 1996. These results show that the renovation of the network has brought about lower losses in the local gas supplies, particularly for the grey cast iron pipelines. (author)
Lagrangian relaxation technique in power systems operation planning: Multipliers updating problem
Energy Technology Data Exchange (ETDEWEB)
Ruzic, S. [Electric Power Utility of Serbia, Belgrade (Yugoslavia)
1995-11-01
All Lagrangian relaxation based approaches to power systems operation planning share an important common part: the Lagrangian multipliers correction procedure, which is the subject of this paper. Different approaches presented in the literature are discussed and an original method for updating the Lagrangian multipliers is proposed. The basic idea of this new method is to update the Lagrangian multipliers so as to satisfy the Kuhn-Tucker optimality conditions. Instead of maximizing the dual function, a 'distance of optimality' function is defined and minimized. If the Kuhn-Tucker optimality conditions are satisfied the value of this function lies in the range (-1,0); otherwise the function takes a large positive value. This method, called 'the distance of optimality method', takes into account future changes in planned generation due to the Lagrangian multipliers updating. The influence of changes in a multiplier associated with one system constraint on the satisfaction of other system requirements is also considered. The numerical efficiency of the proposed method is analyzed and compared with results obtained using the sub-gradient technique. 20 refs, 2 tabs
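The sub-gradient baseline the paper compares against is a one-line update: move each multiplier along the violation of its relaxed constraint and project onto the nonnegative orthant. Numbers below are illustrative:

```python
# Standard sub-gradient multiplier update for an inequality constraint:
# lambda <- max(0, lambda + step * g), where g is the constraint
# violation (e.g., demand minus scheduled generation in unit commitment).
def subgradient_update(lmbda, violation, step):
    return max(0.0, lmbda + step * violation)

lam = 10.0
lam = subgradient_update(lam, violation=50.0, step=0.02)
# under-generation (positive violation) raises the "price": lam -> 11.0
```

The paper's "distance of optimality" method differs by choosing the update to drive the Kuhn-Tucker conditions toward satisfaction rather than by maximizing the dual function directly.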
Rais, Muhammad H.
2010-06-01
This paper presents a Field Programmable Gate Array (FPGA) implementation of standard and truncated multipliers using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The truncated multiplier is a good candidate for digital signal processing (DSP) applications such as finite impulse response (FIR) filtering and the discrete cosine transform (DCT). Remarkable reductions in FPGA resources, delay, and power can be achieved by using truncated multipliers instead of standard parallel multipliers when the full precision of the standard multiplier is not required. The truncated multipliers show significant improvement compared to standard multipliers. Results show that the average connection and maximum pin delay anomalies observed in the Spartan-3AN device are efficiently reduced in the Virtex-4 device.
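The precision trade-off of truncation can be illustrated numerically. A real truncated hardware multiplier omits the low partial-product columns and adds a correction constant; the sketch below only models the resulting loss of the low product bits:

```python
# Fixed-point view of truncation: keep only the upper bits of the
# 2n-bit product, clearing the low n bits a truncated multiplier
# would never compute.
def truncated_mult(a, b, n_bits):
    full = a * b                        # full 2n-bit product
    return (full >> n_bits) << n_bits   # discard the low n bits

exact = 200 * 150                       # 30000
approx = truncated_mult(200, 150, 8)    # low 8 bits cleared -> 29952
err = exact - approx                    # bounded by 2**8 - 1
```

When downstream DSP stages (FIR, DCT) quantize anyway, this bounded error is often acceptable in exchange for the area and power savings reported in the paper.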
A high reliability automatic multiplier for a mass spectrometer ion detector circuit
International Nuclear Information System (INIS)
Hoshino, Kiichi; Satooka, Sakae
1978-01-01
An automatic multiplier of an ion detector circuit for measurement of isotopic abundance ratio of heavy hydrogen to be used with a single collector has been constructed. This multiplier works at 1/1, 1/5, 1/20, 1/100, 1/500, 1/2000 and infinity, and the input voltage which is required to change the range from 1/1 to 1/5 is 10 mV and that from 1/2000 to infinity is 20 V. As the amplifier preceding the automatic multiplier, a vibrating reed electrometer which generates maximum output of 30 V is used. On measurement, marks which indicate the magnifications are recorded on the chart of electronic recorder. It is possible to set the minimum magnification at 1/1, 1/5, or 1/20 by a switch for setting the minimum magnification. (author)
Design of Low Power Multiplier with Energy Efficient Full Adder Using DPTAAL
Directory of Open Access Journals (Sweden)
A. Kishore Kumar
2013-01-01
Full Text Available Asynchronous adiabatic logic (AAL) is a novel low-power design technique which combines the energy saving benefits of asynchronous systems with adiabatic benefits. In this paper, an energy efficient full adder using double pass transistor with asynchronous adiabatic logic (DPTAAL) is used to design a low power multiplier. Asynchronous adiabatic circuits are very low power circuits that preserve energy for reuse, which reduces the amount of energy drawn directly from the power supply. In this work, an 8×8 multiplier using DPTAAL is designed and simulated, which exhibits low power and reliable logical operation. To improve the circuit performance at reduced voltage levels, double pass transistor logic (DPL) is introduced. The power results of the proposed multiplier design are compared with the conventional CMOS implementation. Simulation results show significant improvement in power for clock rates ranging from 100 MHz to 300 MHz.
Design of a High Linearity Four-Quadrant Analog Multiplier in Wideband Frequency Range
Directory of Open Access Journals (Sweden)
Abdul kareem Mokif Obais
2017-05-01
Full Text Available In this paper, a voltage mode four quadrant analog multiplier in the wideband frequency range is designed using a wideband operational amplifier (OPAMP) and squaring circuits. The wideband OPAMP is designed using 10 identical NMOS transistors and operated with supply voltages of ±12 V. Two NMOS transistors and two wideband OPAMPs are utilized in the design of the proposed squaring circuit. All the NMOS transistors are based on 0.35 µm NMOS technology. The multiplier has input and output voltage ranges of ±10 V, a high range of linearity from -10 V to +10 V, and a cutoff frequency of about 5 GHz. The proposed multiplier is designed in PSpice in OrCAD 16.6.
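Building a multiplier from squaring circuits rests on the quarter-square identity x·y = ((x + y)² - (x - y)²)/4: two squarers, a summer, and a subtractor suffice. A numeric check of the identity:

```python
# Quarter-square multiplication: the algebraic identity behind
# squaring-circuit analog multipliers.
def quarter_square_mult(x, y):
    return ((x + y)**2 - (x - y)**2) / 4.0

product = quarter_square_mult(3.5, -2.0)   # -> -7.0, matching 3.5 * -2.0
```

Because the identity holds for all sign combinations, the resulting multiplier is inherently four-quadrant.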
Proposal for electro-optic multiplier based on dual transverse electro-optic Kerr effect.
Li, Changsheng
2008-10-20
A novel electro-optic multiplier is proposed, which can perform voltage multiplication operation by use of the Kerr medium exhibiting dual transverse electro-optic Kerr effect. In this kind of Kerr medium, electro-optic phase retardation is proportional to the square of its applied electric field, and orientations of the field-induced birefringent axes are only related to the direction of the field. Based on this effect, we can design an electro-optic multiplier by selecting the crystals of 6/mmm, 432, and m3m classes and isotropic Kerr media such as glass. Simple calculation demonstrates that a kind of glass-ceramic material with a large Kerr constant can be used for the design of the proposed electro-optic multiplier.
Economic Multipliers and Sectoral Linkages: Ghana and the New Oil Sector
Directory of Open Access Journals (Sweden)
Dennis Nchor
2016-01-01
Full Text Available The study assesses the structure of the economy of Ghana in terms of changes in the economic structure before and after the production of oil in commercial quantities, viewed with regard to economic multipliers, sectoral interdependence, and trade concentration. The results show that changes occurred in both multipliers and sectoral interdependence. The output multipliers of most sectors have declined. The results also show that the agricultural sector experienced an initial decline in its growth while industry experienced an increase. The performance of the services sector was relatively stable over the period covered by the study. There is a decline in the level of trade concentration, though on the whole the concentration index is still high. The study employed input-output modeling techniques, and the data were obtained from the Ghana Statistical Service and the World Development Indicators.
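Output multipliers of the kind discussed above are conventionally computed as column sums of the Leontief inverse of an input-output table. A minimal sketch with a hypothetical 3-sector technical-coefficient matrix (the values are invented for illustration, not Ghana's actual table):

```python
import numpy as np

# Hypothetical technical coefficients A[i, j]: input from sector i
# needed per unit of output of sector j (agriculture, industry, services).
A = np.array([[0.15, 0.10, 0.05],
              [0.20, 0.25, 0.10],
              [0.10, 0.15, 0.20]])

# Leontief inverse L = (I - A)^(-1); the output multiplier of sector j
# is the j-th column sum of L (total output generated per unit of
# final demand for sector j).
L = np.linalg.inv(np.eye(3) - A)
output_multipliers = L.sum(axis=0)
```

Since A is nonnegative with column sums below one, every multiplier exceeds 1: each unit of final demand induces additional indirect output in the linked sectors.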
Wang, Chaolong; Zhan, Xiaowei; Liang, Liming; Abecasis, Gonçalo R.; Lin, Xihong
2015-01-01
Accurate estimation of individual ancestry is important in genetic association studies, especially when a large number of samples are collected from multiple sources. However, existing approaches developed for genome-wide SNP data do not work well with modest amounts of genetic data, such as in targeted sequencing or exome chip genotyping experiments. We propose a statistical framework to estimate individual ancestry in a principal component ancestry map generated by a reference set of individuals. This framework extends and improves upon our previous method for estimating ancestry using low-coverage sequence reads (LASER 1.0) to analyze either genotyping or sequencing data. In particular, we introduce a projection Procrustes analysis approach that uses high-dimensional principal components to estimate ancestry in a low-dimensional reference space. Using extensive simulations and empirical data examples, we show that our new method (LASER 2.0), combined with genotype imputation on the reference individuals, can substantially outperform LASER 1.0 in estimating fine-scale genetic ancestry. Specifically, LASER 2.0 can accurately estimate fine-scale ancestry within Europe using either exome chip genotypes or targeted sequencing data with off-target coverage as low as 0.05×. Under the framework of LASER 2.0, we can estimate individual ancestry in a shared reference space for samples assayed at different loci or by different techniques. Therefore, our ancestry estimation method will accelerate discovery in disease association studies not only by helping model ancestry within individual studies but also by facilitating combined analysis of genetic data from multiple sources. PMID:26027497
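The projection Procrustes idea can be sketched as follows: find an orthonormal map (plus an isotropic scale) that best aligns the reference individuals' high-dimensional PC coordinates with their low-dimensional reference-map coordinates, then apply the same map to a study sample. This is a simplified sketch of the general approach, not LASER 2.0's actual implementation; all array names are hypothetical.

```python
import numpy as np

def procrustes_project(X_ref_hi, X_ref_lo, x_sample_hi):
    """Map a sample's K-dim PC coordinates into a k-dim reference map.

    X_ref_hi : (n, K) reference individuals in the joint PCA
    X_ref_lo : (n, k) the same individuals in the reference ancestry map
    x_sample_hi : (K,) the study sample in the joint PCA
    """
    # Center both configurations on the reference individuals.
    mu_hi, mu_lo = X_ref_hi.mean(0), X_ref_lo.mean(0)
    A, B = X_ref_hi - mu_hi, X_ref_lo - mu_lo
    # Orthonormal map W (K -> k) maximizing trace(W^T A^T B).
    U, _, Vt = np.linalg.svd(A.T @ B, full_matrices=False)
    W = U @ Vt
    # Optimal isotropic scale for  s * A W  ≈  B.
    s = np.trace(W.T @ (A.T @ B)) / np.trace(W.T @ (A.T @ A) @ W)
    return s * (x_sample_hi - mu_hi) @ W + mu_lo
```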
Radiation techniques in crop and plant breeding. Multiplying the benefits
International Nuclear Information System (INIS)
Ahloowalia, B.S.
1998-01-01
World food production is based on growing a wide variety of fruits, vegetables, and crops developed through advances in science. Plant breeders have produced multiple varieties that grow well in various types of soils and under diverse climates in different regions of the world. Conventionally, this is done by sexual hybridization, which involves transferring pollen from one parent plant to another to obtain hybrids. The subsequent generations of these hybrids are grown to select plants which combine the desired characters of the parents. However, another method exists by which the genetic make-up of a given plant variety can be changed without crossing with another variety. With this method, a variety retains all its original attributes but is upgraded in one or two changed characteristics. This method is based on radiation-induced genetic changes, and is referred to as ''induced mutations''. During the past thirty years, more than 1800 mutant varieties of plants have been released, many of which were induced with radiation. Plant tissue and cell culture (also called in vitro culture) in combination with radiation is a powerful technique to induce mutations, particularly for the improvement of vegetatively propagated crops. These crops include cassava, garlic, potato, sweet potato, yams, sugarcane, ornamentals such as chrysanthemum, carnation, roses, tulips, and daffodil, and many fruits (e.g. apple, banana, plantain, citrus, date palm, grape, papaya, passion fruit, and kiwi fruit). In some of these plants, either there is no seed set (e.g. banana) or the seed progeny produces plants which do not have the right combination of the desired characteristics. These techniques are also useful in the improvement of forest trees, which have a long lifespan before they produce fruit and seed. This article briefly reviews advances in plant breeding techniques, with a view towards improving the transfer of these technologies to more countries.
Energy Technology Data Exchange (ETDEWEB)
Raievski, V
1948-09-01
This report describes a new type of ring-shaped fast electronic counter (de-multiplier) with a resolving power equivalent to that of the counter built by Regener (Rev. of Scientific Instruments USA 1946, 17, 180-89) but requiring half as many electronic valves. This report follows the general description of electronic de-multipliers given by J. Ailloud (CEA--001). The ring comprises 5 flip-flop circuits with two valves each. The different elements of the ring are calculated in enough detail to allow the transfer of this calculation to different valve types. (J.S.)
Spectral multipliers on spaces of distributions associated with non-negative self-adjoint operators
DEFF Research Database (Denmark)
Georgiadis, Athanasios; Nielsen, Morten
2018-01-01
We consider spaces of homogeneous type associated with a non-negative self-adjoint operator whose heat kernel satisfies certain upper Gaussian bounds. Spectral multipliers are introduced and studied on distributions associated with this operator. The boundedness of spectral multipliers on Besov and Triebel–Lizorkin spaces with the full range of indices is established too. As an application, we obtain equivalent norm characterizations for the spaces mentioned above. Non-classical spaces as well as Lebesgue, Hardy, (generalized) Sobolev and Lipschitz spaces are also covered by our approach.
Angular distribution of 662 keV multiply-Compton scattered gamma rays in copper
International Nuclear Information System (INIS)
Singh, Manpreet; Singh, Gurvinderjit; Sandhu, B.S.; Singh, Bhajan
2007-01-01
The angular distribution of multiple Compton scattering of 662 keV gamma photons from a six-curie 137Cs source, incident on copper scatterers of varying thickness, is studied experimentally in both the forward and backward hemispheres. The scattered photons are detected by a 51 mm × 51 mm NaI(Tl) scintillation detector. The full-energy peak corresponding to singly scattered events is reconstructed analytically. We observe that the number of multiply scattered events having the same energy as the singly scattered distribution first increases with increasing target thickness and then saturates. The optimum thickness at which the multiply scattered events saturate is determined at different scattering angles.
Song, Minsun; Wheeler, William; Caporaso, Neil E; Landi, Maria Teresa; Chatterjee, Nilanjan
2018-03-01
Genome-wide association studies (GWAS) are now routinely imputed for untyped single nucleotide polymorphisms (SNPs) based on various powerful statistical algorithms for imputation trained on reference datasets. The use of predicted allele counts for imputed SNPs as the dosage variable is known to produce a valid score test for genetic association. In this paper, we investigate how to best handle imputed SNPs in various modern complex tests for genetic associations incorporating gene-environment interactions. We focus on case-control association studies where inference for an underlying logistic regression model can be performed using alternative methods that rely to varying degrees on an assumption of gene-environment independence in the underlying population. As increasingly large-scale GWAS are being performed through consortium efforts, where it is preferable to share only summary-level information across studies, we also describe simple mechanisms for implementing score tests based on standard meta-analysis of "one-step" maximum-likelihood estimates across studies. Applications of the methods in simulation studies and a dataset from a GWAS of lung cancer illustrate the ability of the proposed methods to maintain type-I error rates for the underlying testing procedures. For analysis of imputed SNPs, as for typed SNPs, the retrospective methods can lead to considerable efficiency gains for modeling of gene-environment interactions under the assumption of gene-environment independence. The methods are made available for public use through the CGEN R software package. © 2017 WILEY PERIODICALS, INC.
Lopes, F B; Wu, X-L; Li, H; Xu, J; Perkins, T; Genho, J; Ferretti, R; Tait, R G; Bauck, S; Rosa, G J M
2018-02-01
Reliable genomic prediction of breeding values for quantitative traits requires the availability of a sufficient number of animals with genotypes and phenotypes in the training set. As of 31 October 2016, there were 3,797 Brangus animals with genotypes and phenotypes. These Brangus animals were genotyped using different commercial SNP chips. Of them, the largest group consisted of 1,535 animals genotyped by the GGP-LDV4 SNP chip. The remaining 2,262 genotypes were imputed to the SNP content of the GGP-LDV4 chip, so that the number of animals available for training the genomic prediction models was more than doubled. The present study showed that pooling animals with either original or imputed 40K SNP genotypes substantially increased genomic prediction accuracies on the ten traits. By supplementing imputed genotypes, the relative gains in genomic prediction accuracies on estimated breeding values (EBV) were from 12.60% to 31.27%, and the relative gains in genomic prediction accuracies on de-regressed EBV were slightly smaller (0.87%-18.75%). The present study also compared the performance of five genomic prediction models and two cross-validation methods. The five genomic models predicted EBV and de-regressed EBV of the ten traits similarly well. Of the two cross-validation methods, leave-one-out cross-validation maximized the number of animals available for training in genomic prediction. Genomic prediction accuracy (GPA) on the ten quantitative traits was validated in 1,106 newly genotyped Brangus animals based on the SNP effects estimated in the previous set of 3,797 Brangus animals, and the accuracies were slightly lower than GPA in the original data. The present study was the first to leverage currently available genotype and phenotype resources in order to harness genomic prediction in Brangus beef cattle. © 2018 Blackwell Verlag GmbH.
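Genomic prediction of the kind described above is commonly done with ridge-regression BLUP: shrink all marker effects jointly, then predict breeding values for newly genotyped animals from their genotype matrix. The sketch below is a generic illustration, not the study's model; the shrinkage parameter `lam` stands in for the variance-component ratio a real rrBLUP analysis would estimate.

```python
import numpy as np

def rrblup_predict(Z_train, y_train, Z_new, lam=1.0):
    """Ridge-regression BLUP sketch.

    Z_train : (n, p) genotype matrix (e.g. 0/1/2 allele counts)
    y_train : (n,)   phenotypes or de-regressed EBV
    Z_new   : (m, p) genotypes of newly genotyped animals
    lam     : ridge penalty (stand-in for sigma_e^2 / sigma_u^2)
    """
    p = Z_train.shape[1]
    # Solve (Z'Z + lam I) u = Z'y for the shrunken marker effects u.
    u = np.linalg.solve(Z_train.T @ Z_train + lam * np.eye(p),
                        Z_train.T @ y_train)
    return Z_new @ u          # predicted breeding values
```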
Ratcliffe, B; El-Dien, O G; Klápště, J; Porth, I; Chen, C; Jaquish, B; El-Kassaby, Y A
2015-12-01
Genomic selection (GS) potentially offers an unparalleled advantage over traditional pedigree-based selection (TS) methods by reducing the time commitment required to carry out a single cycle of tree improvement. This quality is particularly appealing to tree breeders, for whom lengthy improvement cycles are the norm. We explored the prospect of implementing GS for interior spruce (Picea engelmannii × glauca) utilizing a genotyped population of 769 trees belonging to 25 open-pollinated families. A series of repeated tree height measurements through ages 3-40 years permitted the testing of GS methods temporally. The genotyping-by-sequencing (GBS) platform was used for single nucleotide polymorphism (SNP) discovery in conjunction with three unordered imputation methods applied to a data set with 60% missing information. Further, three diverse GS models were evaluated based on predictive accuracy (PA) and their marker effects. Moderate levels of PA (0.31-0.55) were observed and were of sufficient capacity to deliver improved selection response over TS. Additionally, PA varied substantially through time, in accordance with spatial competition among trees. As expected, temporal PA was well correlated with age-age genetic correlation (r=0.99), and decreased substantially with increasing difference in age between the training and validation populations (0.04-0.47). Moreover, our imputation comparisons indicate that k-nearest neighbor and singular value decomposition yielded a greater number of SNPs and gave higher predictive accuracies than imputing with the mean. Furthermore, the ridge regression (rrBLUP) and BayesCπ (BCπ) models both yielded equal, and better, PA than the generalized ridge regression heteroscedastic effect model for the traits evaluated.
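Two of the imputation strategies compared above, mean imputation and singular value decomposition, can be contrasted in a few lines. These are hedged, generic sketches (mean fill and iterative low-rank "hard" imputation), illustrative rather than the exact algorithms used in the study; `G` is a hypothetical genotype matrix with `np.nan` marking missing calls.

```python
import numpy as np

def mean_impute(G):
    """Fill missing genotypes (np.nan) with each SNP's column mean."""
    G = G.astype(float).copy()
    col_means = np.nanmean(G, axis=0)
    idx = np.where(np.isnan(G))
    G[idx] = np.take(col_means, idx[1])
    return G

def svd_impute(G, rank=2, iters=20):
    """Iterative low-rank (SVD) imputation: alternate between a rank-r
    reconstruction and re-filling only the missing cells."""
    mask = np.isnan(G)
    X = mean_impute(G)                  # start from the mean fill
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = low_rank[mask]        # observed entries stay fixed
    return X
```

When the genotype matrix has low-rank structure (e.g. from relatedness among families), the SVD fill exploits it and typically beats the column mean, consistent with the study's comparison.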
International Nuclear Information System (INIS)
Carlen, E.A.; Loffredo, M.I.
1989-01-01
We show how to obtain a complete correspondence between stochastic and quantum mechanics on multiply connected spaces. We do this by introducing a stochastic mechanical analog of the hydrodynamical circulation, relating it to the topological properties of the configuration space, and using it to constrain the stochastic mechanical variational principles. (orig.)
An Exploration of Social Media Use among Multiply Minoritized LGBTQ Youth
Lucero, Alfie Leanna
2013-01-01
This study responds to a need for research in a fast-growing and significant area of study, that of exploring, understanding, and documenting the numerous ways that multiply marginalized LGBTQ youth between the ages of 14 and 17 use social media. The primary research question examined whether social media provide safe spaces for multiply…
New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case
Directory of Open Access Journals (Sweden)
Monika Pinchas
2016-02-01
Full Text Available Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolved output), where the output and input probability density functions (pdfs) of the deconvolutional process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to have improved equalization performance compared to the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable to the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the newly obtained blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable over the whole range of SNR down to 7 dB. In addition, we also obtain new closed-form approximated expressions for the conditional expectation and mean square error (MSE).
Czech Academy of Sciences Publication Activity Database
Herman, Zdeněk
2015-01-01
Roč. 378, FEB 2015 (2015), s. 113-126 ISSN 1387-3806 Institutional support: RVO:61388955 Keywords : Multiply-charged ions * Dynamics of chemical reactions * Beam scattering Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.183, year: 2015
The impact of founder events on chromosomal variability in multiply mating species
DEFF Research Database (Denmark)
Pool, John E; Nielsen, Rasmus
2008-01-01
size reductions and recent bottlenecks leading to decreased X/A diversity ratios. Here we use theory and simulation to investigate a separate demographic effect-that of founder events involving multiply mated females-and find that it leads to much stronger reductions in X/A diversity ratios than...
Strength of the reversible, garbage-free 2^k ± 1 multiplier
DEFF Research Database (Denmark)
Rotenberg, Eva; Cranch, James; Thomsen, Michael Kirkedal
2013-01-01
Recently, a reversible garbage-free 2^k ± 1 constant-multiplier circuit was presented by Axelsen and Thomsen. This was the first construction of a garbage-free, reversible circuit for multiplication by non-trivial constants. At the time, the strength, that is, the range of constants obtainable...
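In conventional (irreversible) arithmetic, multiplying by a constant of the form 2^k ± 1 reduces to a single shift plus one add or subtract, which is what makes these constants attractive; the cited work constructs reversible, garbage-free circuits for the same function. A plain sketch of the irreversible version:

```python
def mul_pow2k_plus1(x: int, k: int) -> int:
    # x * (2^k + 1): shift left by k, then add x
    return (x << k) + x

def mul_pow2k_minus1(x: int, k: int) -> int:
    # x * (2^k - 1): shift left by k, then subtract x
    return (x << k) - x
```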
Studies of collision mechanisms in electron capture by slow multiply charged ions
International Nuclear Information System (INIS)
Gilbody, H B; McCullough, R W
2004-01-01
We review measurements based on translational energy spectroscopy which are being used to identify and assess the relative importance of the various collision mechanisms involved in one-electron capture by slow multiply charged ions in collisions with simple atoms and molecules
Wu, Zhenkai; Ding, Jing; Zhao, Dahang; Zhao, Li; Li, Hai; Liu, Jianlin
2017-07-10
The multiplier method was introduced by Paley to calculate the timing for temporary hemiepiphysiodesis. However, this method has not been verified in terms of clinical outcome measures. We aimed to (1) predict the rate of angular correction per year (ACPY) at the various corresponding ages by means of the multiplier method and verify its reliability based on data from published studies, and (2) screen for risk factors associated with deviation of prediction. A comprehensive search was performed in the following electronic databases: Cochrane, PubMed, and EMBASE™. A total of 22 studies met the inclusion criteria. If the actual value of ACPY from the collected data fell outside the range of the value predicted by the multiplier method, it was considered a deviation of prediction (DOP). The associations of patient characteristics with DOP were assessed with the use of univariate logistic regression. Only one article was evaluated as moderate evidence; the remaining articles were evaluated as poor quality. The rate of DOP was 31.82%. In the detailed individual data of the included studies, the rate of DOP was 55.44%. The multiplier method is not reliable in predicting the timing for temporary hemiepiphysiodesis, although it tends to be more reliable for younger patients with idiopathic coronal genu deformity.
High dynamic range isotope ratio measurements using an analog electron multiplier
Czech Academy of Sciences Publication Activity Database
Williams, P.; Lorinčík, Jan; Franzreb, K.; Herwig, R.
2013-01-01
Roč. 45, č. 1 (2013), s. 549-552 ISSN 0142-2421 R&D Projects: GA MŠk ME 894 Institutional support: RVO:67985882 Keywords : Isotope ratios * electron multiplier * dynamic range Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.393, year: 2013
Evans, C. J.; Johnson, C. J.
1988-01-01
A blind multiply handicapped preschooler was taught to respond appropriately to two adjacency pair types ("where question-answer" and "comment-acknowledgement"). The two alternative language acquisition strategies available to blind children were encouraged: echolalia to maintain communicative interactions and manual searching…
Pointwise Multipliers on Spaces of Homogeneous Type in the Sense of Coifman and Weiss
Directory of Open Access Journals (Sweden)
Yanchang Han
2014-01-01
homogeneous type in the sense of Coifman and Weiss, pointwise multipliers of inhomogeneous Besov and Triebel-Lizorkin spaces are obtained. We make no additional assumptions on the quasi-metric or the doubling measure. Hence, the results of this paper extend earlier related results to a more general setting.
Best approximation of the Dunkl Multiplier Operators Tk,ℓ,m
Directory of Open Access Journals (Sweden)
Fethi Soltani
2015-03-01
Full Text Available We study a class of Dunkl multiplier operators Tk,ℓ,m, and give for them an application of the theory of reproducing kernels to Tikhonov regularization, which yields the best approximation of the operators Tk,ℓ,m on Hilbert spaces Hsk,ℓ.
Schweinzer, J; Brandenburg, R; Bray; Hoekstra, R; Aumayr, F; Janev, RK; Winter, HP
New experimental and theoretical cross-section data for inelastic collision processes of Li atoms in the ground state and excited states (up to n = 4) with electrons, protons, and multiply charged ions have been reported since the database assembled by Wutte et al. [ATOMIC DATA AND NUCLEAR DATA
Composite Field Multiplier based on Look-Up Table for Elliptic Curve Cryptography Implementation
Directory of Open Access Journals (Sweden)
Marisa W. Paryasto
2013-09-01
Full Text Available Implementing a secure cryptosystem requires operations involving hundreds of bits. One of the most recommended algorithms is Elliptic Curve Cryptography (ECC). The complexity of elliptic curve algorithms and parameters with hundreds of bits requires a specific design and implementation strategy. The design architecture must be customized according to the security requirement, available resources, and parameter choices. In this work we propose the use of composite fields to implement finite field multiplication for ECC implementation. We use a 299-bit key length represented in GF((2^13)^23) instead of in GF(2^299). A composite field multiplier can be implemented using different multipliers for the ground field and for the extension field. In this paper, a LUT is used for multiplication in the ground field and a classic multiplier is used for the extension field multiplication. A generic architecture for the multiplier is presented. Implementation is done in VHDL with the target device Altera DE2. The work in this paper uses the simplest algorithm to confirm the idea that splitting the field into a composite field and using different multipliers for the ground and extension fields gives a better time-area trade-off. This work is the beginning of our more advanced further research that implements composite fields using Mastrovito Hybrid, KOA, and LUT.
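The ground-field LUT idea can be illustrated at toy scale: precompute log/antilog tables for a small binary field and multiply through them. The sketch below uses GF(2^4) with the primitive polynomial x^4 + x + 1 purely for illustration; the paper's ground field is GF(2^13) (since 299 = 13 × 23), where the same table approach applies at larger size.

```python
# Log/antilog-table multiplier for GF(2^4), reduction polynomial
# x^4 + x + 1 (0b10011), for which x is a generator of the
# multiplicative group of 15 nonzero elements.
POLY, M = 0b10011, 4

def _build_tables():
    exp, log = [0] * 15, [0] * 16
    a = 1
    for i in range(15):          # successive powers of the generator x
        exp[i] = a
        log[a] = i
        a <<= 1                  # multiply by x
        if a & (1 << M):
            a ^= POLY            # reduce modulo the field polynomial
    return exp, log

EXP, LOG = _build_tables()

def gf16_mul(a: int, b: int) -> int:
    """Multiply in GF(2^4) via tables: a*b = g^(log a + log b mod 15)."""
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 15]
```

In hardware, the same tables become ROMs; the extension-field multiplier then combines 23 such ground-field multiplications per coefficient.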
Structural brain network analysis in families multiply affected with bipolar I disorder
Forde, Natalie J.; O'Donoghue, Stefani; Scanlon, Cathy; Emsell, Louise; Chaddock, Chris; Leemans, Alexander; Jeurissen, Ben; Barker, Gareth J.; Cannon, Dara M.; Murray, Robin M.; McDonald, Colm
2015-01-01
Disrupted structural connectivity is associated with psychiatric illnesses including bipolar disorder (BP). Here we use structural brain network analysis to investigate connectivity abnormalities in multiply affected BP type I families, to assess the utility of dysconnectivity as a biomarker and its
Discrete Green’s function diakoptics for stable FDTD interaction between multiply-connected domains
Hon, de B.P.; Arnold, J.M.; Graglia, R.D.
2007-01-01
We have developed FDTD boundary conditions based on discrete Green's function diakoptics for arbitrary multiply-connected 2D domains. The associated Z-domain boundary operator is symmetric, with an imaginary part that can be proved to be positive semi-definite on the upper half of the unit circle in
Time-area efficient multiplier-free recursive filter architectures for FPGA implementation
DEFF Research Database (Denmark)
Shajaan, Mohammad; Sørensen, John Aasted
1996-01-01
Simultaneous design of multiplier-free recursive filters (IIR filters) and their hardware implementation in Xilinx field programmable gate array (XC4000) is presented. The hardware design methodology leads to high performance recursive filters with sampling frequencies in the interval 15-21 MHz (...
Time-area efficient multiplier-free filter architectures for FPGA implementation
DEFF Research Database (Denmark)
Shajaan, Mohammad; Nielsen, Karsten; Sørensen, John Aasted
1995-01-01
Simultaneous design of multiplier-free filters and their hardware implementation in Xilinx field programmable gate array (XC4000) is presented. The filter synthesis method is a new approach based on cascade coupling of low order sections. The complexity of the design algorithm is 𝒪 (filter o...
Testing of money multiplier model for Pakistan: does monetary base carry any information?
Directory of Open Access Journals (Sweden)
Muhammad Arshad Khan
2010-02-01
Full Text Available This paper tests the constancy and stationarity of the mechanical version of the money multiplier model for Pakistan using monthly data over the period 1972M1-2009M2. We split the data into pre-liberalization (1972M1-1990M12) and post-liberalization (1991M1-2009M2) periods to examine the impact of financial sector reforms. We first examine the constancy and stationarity of the money multiplier; the results suggest the money multiplier remains non-stationary for the entire sample period and the sub-periods. We then test for cointegration between money supply and monetary base and find evidence of cointegration between the two variables for the entire period and the two sub-periods. The coefficient restrictions are satisfied only for the post-liberalization period. Two-way long-run causality between money supply and monetary base is found for the entire period and the post-liberalization period. For the post-liberalization period, evidence of short-run causality running from monetary base to money supply is also identified. On the whole, the results suggest that the money multiplier model can serve as a framework for conducting short-run monetary policy in Pakistan. However, the monetary authority should consider the co-movements between money supply and reserve money when conducting monetary policy.
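The mechanical money multiplier tested above is the identity m = M/B; in behavioral form it is often written in terms of the currency/deposit ratio c and the reserve/deposit ratio r. A sketch with invented figures (not Pakistani data):

```python
# Mechanical identity: money supply M = m * monetary base B, so m = M / B.
def money_multiplier(money_supply: float, monetary_base: float) -> float:
    return money_supply / monetary_base

# Behavioral form: m = (1 + c) / (c + r), where c is the currency/deposit
# ratio and r the reserve/deposit ratio.
def multiplier_from_ratios(c: float, r: float) -> float:
    return (1 + c) / (c + r)
```

Stationarity of the multiplier then amounts to asking whether the series m_t = M_t / B_t has a stable mean over the sample, which the paper tests formally.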
Implementation of neutron activation analysis in the neutron multiplier CS-ISCTN (first part)
International Nuclear Information System (INIS)
Contreras, R.; Ixquiac, M.; Hernandez, O.; Herrera, E.F.; Diaz, O.; Lopez, R.; Alvarez, I.; Manso, M.V.; Padron, G.; D Alessandro, K.
1997-01-01
The detection limits of 32 elements are determined after experimental evaluation of the neutron flux components in the irradiation position of the neutron multiplier CS-ISCTN. The control of the thermal flux was carried out by comparing the experimental results obtained through three conventionally used determinations of the reaction rate with the theoretical values obtained previously.
Counting efficiency for liquid scintillator systems with a single multiplier phototube
International Nuclear Information System (INIS)
Grau Malonda, A.; Garcia-Torano, E.
1984-01-01
In this paper, counting efficiency has been computed as a function of a free parameter (the figure of merit). The results are applicable to liquid scintillator systems with a single multiplier phototube. Tables of counting efficiency for 62 pure beta emitters are given for figures of merit in the range 0.25 to 50. (Author) 16 refs
Fourier Multipliers on Decomposition Spaces of Modulation and Triebel–Lizorkin Type
DEFF Research Database (Denmark)
Cleanthous, G.; Georgiadis, Athanasios; Nielsen, Morten
2018-01-01
spaces in both the isotropic and an anisotropic setting. We derive a boundedness result for Fourier multipliers on anisotropic decomposition spaces of modulation and Triebel–Lizorkin type. As an application, we obtain equivalent quasi-norm characterizations for this class of decomposition spaces....
Evaluation of multiplier effect of housing investments in the city economy
Ovsiannikova, T.; Rabtsevich, O.; Yugova, I.
2017-01-01
This study evaluates the role and significance of housing investments in providing stable social and economic development of a city, and demonstrates the multiplier impact of investments in housing construction on all sectors of the urban economy. Growth of housing investments generates a multiplier effect that triggers the development of other interrelated sectors. The paper proposes an approach developed by the authors to evaluate the level of city development. It involves defining gross city product on the basis of an integral criterion of gross value added across the types of economic activity in the city economy. The algorithm of gross value added generation in the urban economy is presented as a result of the multiplier effect of housing investments. The evaluation of this effect is illustrated using the case of the city of Tomsk (Russia). The study has revealed that the multiplier effect yields four rubles of added value in the city economy for each ruble of housing investment. Methods used in the present study include those of the System of National Accounts, as well as methods of statistical and structural analysis. It has been argued that priority investment in housing construction is a key factor for stable social and economic development of the city. The developed approach is intended for the justification of priority directions in municipal and regional investment policy, and may be applied by city and regional governing bodies and potential investors.
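The four-rubles-per-ruble result can be related to the textbook multiplier identity, in which each ruble of investment is partially re-spent in successive rounds. This geometric-series sketch is purely illustrative and is not the value-added accounting method the authors actually use; the re-spending share below is chosen so the series reproduces a multiplier of 4.

```python
# Keynesian-style multiplier as a geometric series of spending rounds:
# total added value = investment * sum(s^n) = investment / (1 - s),
# where s is the share of each round re-spent locally.
def total_added_value(investment: float, respend_share: float,
                      rounds: int = 1000) -> float:
    return sum(investment * respend_share ** n for n in range(rounds))

# With s = 0.75, one ruble of investment yields four rubles in total,
# matching a multiplier of 4.
```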
Ultra-low-power, class-AB, CMOS four-quadrant current multiplier
Sawigun, C.; Serdijn, W.A.
2009-01-01
A class-AB four-quadrant current multiplier constituted by a class-AB current amplifier and a current splitter which can handle input signals in excess of ten times the bias current is presented. The proposed circuit operation is based on the exponential characteristic of BJTs or subthreshold
Mixed Analog/Digital Matrix-Vector Multiplier for Neural Network Synapses
DEFF Research Database (Denmark)
Lehmann, Torsten; Bruun, Erik; Dietrich, Casper
1996-01-01
In this work we present a hardware efficient matrix-vector multiplier architecture for artificial neural networks with digitally stored synapse strengths. We present a novel technique for manipulating bipolar inputs based on an analog two's complements method and an accurate current rectifier...
Radiation-hardened I2L 8×8 multiplier circuit
International Nuclear Information System (INIS)
Doyle, B.R.; Kreps, S.A.; Van Vonno, N.W.; Lake, G.W.
1979-01-01
Development of improved Substrate-Fed I2L (SFL) processing has been combined with geometry and fanout constraints to design a radiation-hardened LSI 8×8 multiplier. This study describes details of the process and circuit design and gives the resulting electrical and radiation test performance.
Full inelastic cross section, effective stopping and ranges of fast multiply charged ions
International Nuclear Information System (INIS)
Alimov, R.A.; Arslanbekov, T.U.; Matveev, B.I.; Rakhmatov, A.S.
1994-01-01
Inelastic processes taking place in collisions of fast multiply charged ions with atoms are considered on the basis of the sudden momentum transfer mechanism. Simple estimates are proposed for the full inelastic cross sections, effective stopping, and ion ranges in gaseous media. (author). 10 refs
The Multiplier Effect: A Strategy for the Continuing Education of School Psychologists
Lesiak, Walter; And Others
1975-01-01
Twenty-two school psychologists participated in a year-long institute designed to test the use of a multiplier effect in the continuing professional development of school psychologists in Michigan. Results indicated that 550 school psychologists attended two in-service meetings, with generally favorable reactions. (Author)
The net multiplier is a new key sector indicator : Reply to De Mesnard's comment
Oosterhaven, Jan
Most of de Mesnard's comment applies to a causal interpretation of the net multiplier that is applied to economically impossible exogenous (changes in) total output. This reply shows that this interpretation is incorrect and that his further argumentation is based on a time-inconsistent
Electron and X-ray emission in collisions of multiply charged ions and atoms
International Nuclear Information System (INIS)
Woerlee, P.H.
1979-01-01
The author presents experimental results of electron and X-ray emission following slow collisions of multiply charged ions and atoms. The aim of the investigation was to study the mechanisms which are responsible for the emission. (G.T.H.)
Neidert, Pamela L.; Iwata, Brian A.; Dozier, Claudia L.
2005-01-01
We describe the assessment and treatment of 2 children with autism spectrum disorder whose problem behaviors (self-injury, aggression, and disruption) were multiply controlled. Results of functional analyses indicated that the children's problem behaviors were maintained by both positive reinforcement (attention) and negative reinforcement (escape…
Colliding beam studies of electron detachment from H- by multiply-charged ions
International Nuclear Information System (INIS)
Melchert, F.; Benner, M.; Kruedener, S.; Schulze, R.; Meuser, S.; Pfaff, S.; Petri, S.; Huber, K.; Salzborn, E.; Presnyakov, L.P.; Uskov, D.B.
1993-01-01
Employing the crossed-beams technique, we have investigated electron-detachment processes from H⁻ in collisions with multiply-charged noble gas ions A^q+. Absolute cross sections for single- and double-electron removal have been measured at center-of-mass energies from 50 keV to 200 keV and charge states q up to 8
Improved 64-bit Radix-16 Booth Multiplier Based on Partial Product Array Height Reduction
DEFF Research Database (Denmark)
Antelo, Elisardo; Montuschi, Paolo; Nannarelli, Alberto
2016-01-01
, a reduction of one unit in the maximum height is achieved. This reduction may add flexibility during the design of the pipelined multiplier to meet the design goals, it may allow further optimizations of the partial product array reduction stage in terms of area/delay/power and/or may allow additional addends...
KAJIAN EFEK MULTIPLIER PRODUK UNGGULAN BERBASIS KLUSTER UKM PENGOLAHAN IKAN ASAP
Directory of Open Access Journals (Sweden)
Yusmar Ardhi Hidayat
2015-05-01
Full Text Available The purpose of this research is to analyze the scale of production of leading commodities and the multiplier effect of catfish cultivation and smoked-fish processing in Wonosari Village, Bonang, Demak Regency. The research applies a census method, collecting data from every business unit identified as producing a leading commodity in the village. From the survey conducted, 18 catfish breeders and 49 smoked-fish small businesses were used as respondents. The primary data used are the rate of production of the basis goods, land area, capital, raw materials, manpower, and the income multiplier. To support the empirical discussion, the tools of analysis used are descriptive statistics and the income-multiplier index. The results show that the leading commodities of Wonosari Village are smoked fish and fresh catfish. Total production of smoked fish reaches 6.4 tons per day, covering types such as river catfish, tongkol, stingray, catfish, and other river fish, while total production of catfish breeding reaches 105 tons in the first harvest after 2-3 months. Based on these figures, the smoked-fish business promises a higher profit than catfish breeding.
DO PUBLIC AND PRIVATE DEBT LEVELS AFFECT THE SIZE OF FISCAL MULTIPLIERS?
Directory of Open Access Journals (Sweden)
Chairul Adi
2017-09-01
Full Text Available This paper investigates the effectiveness of fiscal policies – as measured by the impact and cumulative multipliers – and how they interact with public and private debt. Harnessing the moderated panel regression approach, based on a yearly data set of several economies during the period from 1996 to 2012, the analysis focuses on the impact of spending- and revenue-based fiscal policies on economic growth and how these fiscal instruments interact with public and private indebtedness. The result for spending stimuli supports basic Keynesian theory. An increase in public expenditures contemporaneously generates a positive multiplier, of around 0.29 – 0.44, and of around 0.45 – 0.58 over two years. Decomposing the expenditures into their elements, this paper documents a stronger impact from public investment than from government purchases. On the other hand, the revenue stimuli seem to follow the Ricardian Equivalence Hypothesis (REH), which argues that current tax cuts are inconsequential. The impact and cumulative multipliers for this fiscal instrument have mixed results, ranging from -0.21 to 0.05 and from -0.26 to 0.06, respectively. Moreover, no robust evidence is found to support the argument that government debt moderates the effectiveness of fiscal policies. The size of the multipliers for both spending and revenue policies remains constant with the level of public debt. On the other hand, private debt appears to show a statistically significant moderating effect on spending stimuli. Its impact on spending multipliers, however, is economically insignificant. The moderation effect of private debt on the revenue stimuli does not seem to exist. Finally, this paper documents that both public and private debt exhibit a negative and statistically significant estimate for economic output.
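The impact and cumulative multipliers referred to above are conventionally computed as the contemporaneous ratio of output change to the fiscal impulse, and as the ratio of cumulated responses over a horizon. A minimal sketch with made-up (hypothetical) yearly changes, not the paper's data:

```python
# Hypothetical yearly changes in output (dY) and government spending (dG);
# the series below are invented for illustration, not the paper's estimates.
dY = [0.35, 0.20, 0.10]   # output response in years 0, 1, 2
dG = [1.00, 0.30, 0.10]   # spending impulse in years 0, 1, 2

impact_multiplier = dY[0] / dG[0]          # contemporaneous effect
cumulative_multiplier = sum(dY) / sum(dG)  # effect over the whole horizon
```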
Directory of Open Access Journals (Sweden)
Aleksandr Alekseev
2015-07-01
Full Text Available We establish necessary and sufficient conditions for existence of an integrating multiplier of a special form for systems of two cubic differential equations of the first order. We further study bifurcations of such systems with the change of parameters of their integrating multipliers.
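For readers unfamiliar with the term, the generic definition of an integrating multiplier for a planar system can be recalled (this is the standard textbook definition, not the authors' special form):

```latex
\text{For the planar system } \dot x = P(x,y),\ \dot y = Q(x,y),
\text{ a function } \mu(x,y) \text{ is an integrating multiplier if }
\frac{\partial(\mu P)}{\partial x} + \frac{\partial(\mu Q)}{\partial y} = 0,
\text{ in which case } \mu\,(Q\,dx - P\,dy)
\text{ is an exact differential and integrates to a first integral of the system.}
```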
DEFF Research Database (Denmark)
Goswami, Kavita; Pandey, Bishwajeet; Hussain, Dil muhammed Akbar
2016-01-01
A multiplier is used for the multiplication of a signal by a constant in digital signal processing (DSP). A 28 nm technology-based Vedic multiplier is implemented using VHDL, Xilinx ISE, a Kintex-7 FPGA, and XPower Analyzer. The Vedic multiplier gains speed by parallelizing the generation...... Programmable Gate Array (FPGA) in order to reduce the development cost. The development cost for Application-Specific Integrated Circuits (ASICs) is high compared to FPGAs. Selection of the most energy-efficient IO standards, in place of signal gating, is the main design methodology for the design of an energy...... efficient Vedic multiplier. There is a 68.51%, 69.86%, 74.65%, and 78.39% reduction in the total power of the Vedic multiplier on the 28 nm Kintex-7 FPGA when HSTL_II is used in place of HSTL_II_DCI_18 at 56.7 °C, 53.5 °C, 40 °C, and 21 °C, respectively....
Guo, Wei-Li; Huang, De-Shuang
2017-08-22
Transcription factors (TFs) are DNA-binding proteins that have a central role in regulating gene expression. Identification of DNA-binding sites of TFs is a key task in understanding transcriptional regulation, cellular processes and disease. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) enables genome-wide identification of in vivo TF binding sites. However, it is still difficult to map every TF in every cell line owing to cost and biological material availability, which poses an enormous obstacle for integrated analysis of gene regulation. To address this problem, we propose a novel computational approach, TFBSImpute, for predicting additional TF binding profiles by leveraging information from available ChIP-seq TF binding data. TFBSImpute fuses the dataset to a 3-mode tensor and imputes missing TF binding signals via simultaneous completion of multiple TF binding matrices with positional consistency. We show that signals predicted by our method achieve overall similarity with experimental data and that TFBSImpute significantly outperforms baseline approaches, by assessing the performance of imputation methods against observed ChIP-seq TF binding profiles. In addition, motif analysis shows that TFBSImpute performs better in capturing binding motifs enriched in observed data compared with baselines, indicating that the higher performance of TFBSImpute is not simply due to averaging related samples. We anticipate that our approach will constitute a useful complement to experimental mapping of TF binding, which is beneficial for further study of regulation mechanisms and disease.
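TFBSImpute itself completes multiple TF binding matrices arranged as a 3-mode tensor with positional consistency. As a generic, much-simplified illustration of imputing missing assay signals, the sketch below performs low-rank matrix completion by iterative rank truncation, assuming the rank (here 3) is known; it is not the published algorithm, and all sizes are invented.

```python
import numpy as np

# Generic low-rank matrix completion by iterative rank truncation -- an
# illustration of imputing missing assay signals, not the TFBSImpute
# algorithm (which completes a 3-mode tensor of TF binding matrices).
rng = np.random.default_rng(0)
rank = 3
truth = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 30))  # rank-3 signal
observed = rng.random(truth.shape) < 0.75                    # 75% measured

Z = np.where(observed, truth, 0.0)         # start with missing entries at 0
for _ in range(300):
    u, s, vt = np.linalg.svd(Z, full_matrices=False)
    Z = (u[:, :rank] * s[:rank]) @ vt[:rank]  # project onto rank-3 matrices
    Z = np.where(observed, truth, Z)          # re-impose observed entries

err = np.abs(Z - truth)[~observed].mean()     # error on the imputed entries
```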
DEFF Research Database (Denmark)
Pryce, J E; Johnston, J; Hayes, B J
2014-01-01
detection in genome-wide association studies and the accuracy of genomic selection may increase when the low-density genotypes are imputed to higher density. Genotype data were available from 10 research herds: 5 from Europe [Denmark, Germany, Ireland, the Netherlands, and the United Kingdom (UK)], 2 from...... reference populations. Although it was not possible to use a combined reference population, which would probably result in the highest accuracies of imputation, differences arising from using 2 high-density reference populations on imputing 50,000-marker genotypes of 583 animals (from the UK) were...... information exploited. The UK animals were also included in the North American data set (n = 1,579) that was imputed to high density using a reference population of 2,018 bulls. After editing, 591,213 genotypes on 5,999 animals from 10 research herds remained. The correlation between imputed allele...
Enhancing shelf life of minimally processed multiplier onion using silicone membrane.
Naik, Ravindra; Ambrose, Dawn C P; Raghavan, G S Vijaya; Annamalai, S J K
2014-12-01
The aim of storing a minimally processed product is to increase its shelf life and thereby extend the period of availability of the produce. The silicone membrane makes use of the ability of the polymer to permit selective passage of gases at different rates according to their physical and chemical properties: the stored product maintains its own atmosphere through the combined effects of the respiration of the commodity and the diffusion rate through the membrane. A study was undertaken to enhance the shelf life of minimally processed multiplier onion with a silicone membrane. The respiration activity was recorded at temperatures of 30 ± 2 °C (RH = 60 %) and 5 ± 1 °C (RH = 90 %). The respiration was found to be 23.4, 15.6 and 10 mg CO₂ kg⁻¹ h⁻¹ at 5 ± 1 °C, and 140, 110 and 60 mg CO₂ kg⁻¹ h⁻¹ at 30 ± 2 °C, for the peeled, sliced and diced multiplier onion, respectively. The respiration rate for fresh multiplier onion was 5 and 10 mg CO₂ kg⁻¹ h⁻¹ at 5 ± 1 °C and 30 ± 1 °C, respectively. Based on the shelf-life studies and on sensory evaluation, it was found that only the peeled multiplier onion could be stored; the sliced and diced multiplier onion did not have the required shelf life. The shelf life of the peeled multiplier onion could be increased from 4-5 days to 14 days by the combined effect of the silicone membrane (6 cm²/kg) and low temperature (5 ± 1 °C).
International Nuclear Information System (INIS)
Bedilov, R.M.; Bedilov, M.R.; Sabitov, M.M.; Matnazarov, A.; Niyozov, B.
2004-01-01
Full text: It is known that under the interaction of laser radiation with a solid surface at power densities q > 0.01 W/cm², destruction of the solid and emission of electrons, ions, neutrals, neutrons, plasma, and radiation over a wide spectral range are observed. Despite the large number of works devoted to these interaction processes, the features of the destruction of solids by a laser beam during the formation of multiply charged ions remain insufficiently investigated; such results are presented in this work. In our experiments we used a mass spectrometer with single-channel laser radiation. The laser installation had the following parameters: power density q = (0.1-50) GW/cm², with an angle of incidence a = 18° to the target surface of Al (W). The dynamics of the destruction morphology, as well as the mass-charge and energy spectra of the multiply charged ions formed under the interaction of laser radiation with Al (W), were obtained experimentally in the intensity range q = (0.1-50) GW/cm². These studies showed the features of the destruction of Al (W) by laser radiation: the mass evaporated from the surface of the solid remains essentially unchanged as the laser intensity q increases, while the vapor temperature rises in accordance with the increasing flux density of the laser radiation. The increase in vapor temperature leads to the formation of multiply charged plasma. Characteristically, as the laser q increases, the maximum charge number of the ions in the laser plasma increases considerably and their energy spectra extend toward higher energies. For example, at q = 0.1 GW/cm² and 50 GW/cm², the maximum charge numbers of Al (W) ions are Z_max = 1 and 7, respectively. From the experimental data obtained, we can conclude that the multiply charged plasma formed almost completely absorbs the laser radiation and 'shields' the target surface for various metals at power densities
Time-resolved PHERMEX image restorations constrained with an additional multiply-exposed image
International Nuclear Information System (INIS)
Kruger, R.P.; Breedlove, J.R. Jr.; Trussell, H.J.
1978-06-01
There are a number of possible industrial and scientific applications of nanosecond cineradiographs. Although the technology exists to produce closely spaced pulses of x rays for this application, the quality of the time-resolved radiographs is severely limited. The limitations arise from the necessity of using a fluorescent screen to convert the transmitted x rays to light and then using electro-optical imaging systems to gate and to record the images with conventional high-speed cameras. It has been proposed that, in addition to the time-resolved images, a conventional multiply exposed radiograph be obtained. This report uses both PHERMEX and conventional photographic simulations to demonstrate that the additional information supplied by the multiply exposed radiograph can be used to improve the quality of digital image restorations of the time-resolved pictures over what could be achieved with the degraded images alone
Enhanced fuel production in thorium/lithium hybrid blankets utilizing uranium multipliers
Energy Technology Data Exchange (ETDEWEB)
Pitulski, R.H.
1979-10-01
A consistent neutronics analysis is performed to determine the effectiveness of uranium-bearing neutron-multiplier zones in increasing the production of U-233 in thorium/lithium blankets for use in a tokamak fusion-fission hybrid reactor. The nuclear performance of these blankets is evaluated as a function of zone thicknesses and exposure using the coupled transport-burnup code ANISN-CINDER-HIC. Various parameters, such as the U-233, Pu-239, and H-3 production rates, the blanket energy multiplication, the isotopic composition of the fuels, and the neutron leakages into the various zones, are evaluated during a 5-year (6 MW·y·m⁻²) exposure period. Although the results of this study were obtained for a tokamak magnetic fusion device, the qualitative behavior associated with the use of the uranium-bearing neutron multiplier should be applicable to all fusion-fission hybrids.
Lasher, Mark E.; Henderson, Thomas B.; Drake, Barry L.; Bocker, Richard P.
1986-09-01
The modified signed-digit (MSD) number representation offers full parallel, carry-free addition. A MSD adder has been described by the authors. This paper describes how the adder can be used in a tree structure to implement an optical multiply algorithm. Three different optical schemes, involving position, polarization, and intensity encoding, are proposed for realizing the trinary logic system. When configured in the generic multiplier architecture, these schemes yield the combinatorial logic necessary to carry out the multiplication algorithm. The optical systems are essentially three dimensional arrangements composed of modular units. Of course, this modularity is important for design considerations, while the parallelism and noninterfering communication channels of optical systems are important from the standpoint of reduced complexity. The authors have also designed electronic hardware to demonstrate and model the combinatorial logic required to carry out the algorithm. The electronic and proposed optical systems will be compared in terms of complexity and speed.
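The carry-free MSD addition and adder-tree multiplication described above can be sketched in software. This is a minimal illustrative sketch using a standard textbook digit-set rule for digits {-1, 0, 1}; the optical position/polarization/intensity encodings of the paper are not modeled, and `to_msd`, `msd_add`, and `msd_mul` are hypothetical helper names, not the authors' design.

```python
# Sketch of modified signed-digit (MSD) arithmetic with digits {-1, 0, 1},
# stored least-significant digit first.

def value(digits):
    """Integer value of an MSD digit list (LSB first)."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

def to_msd(n, width):
    """A trivially valid MSD encoding: binary digits, negated for n < 0."""
    m, sign = abs(n), (1 if n >= 0 else -1)
    return [sign * ((m >> i) & 1) for i in range(width)]

def msd_add(x, y):
    """Carry-free addition: each output digit depends only on a fixed-size
    window of input digits, so in hardware all positions run in parallel."""
    n = max(len(x), len(y)) + 1
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    t = [0] * (n + 1)            # transfer digits
    w = [0] * n                  # interim sums: x_i + y_i = 2*t_{i+1} + w_i
    for i in range(n):
        s = x[i] + y[i]
        lower = x[i - 1] + y[i - 1] if i > 0 else 0
        if s == 2:
            t[i + 1], w[i] = 1, 0
        elif s == -2:
            t[i + 1], w[i] = -1, 0
        elif s == 1:             # choose t, w so w_i + t_i never overflows
            t[i + 1], w[i] = (1, -1) if lower >= 0 else (0, 1)
        elif s == -1:
            t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
    return [w[i] + t[i] for i in range(n)] + [t[n]]

def msd_mul(x, y):
    """Multiply by reducing shifted partial products with an adder tree."""
    partials = [[0] * i + [d * xi for xi in x] for i, d in enumerate(y)]
    while len(partials) > 1:     # pairwise tree reduction
        it = iter(partials)
        pairs = [msd_add(a, b) for a, b in zip(it, it)]
        partials = pairs + (partials[-1:] if len(partials) % 2 else [])
    return partials[0]
```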
Preconditioned alternating direction method of multipliers for inverse problems with constraints
International Nuclear Information System (INIS)
Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie
2017-01-01
We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method. (paper)
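As a concrete finite-dimensional illustration of ADMM with a non-smooth convex term, the sketch below solves an l1-regularized least-squares problem (minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z). It is a toy stand-in, not the paper's preconditioned Hilbert-space method; the problem sizes and the parameters lam and rho are invented.

```python
import numpy as np

# Toy ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
# Illustrative only; the paper's preconditioning, which avoids solving
# large linear systems, is not reproduced here.
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]                  # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=60)

lam, rho = 0.5, 1.0
z, u = np.zeros(20), np.zeros(20)
AtA, Atb = A.T @ A, A.T @ b
inv = np.linalg.inv(AtA + rho * np.eye(20))    # cached once, reused each step
for _ in range(300):
    x = inv @ (Atb + rho * (z - u))            # quadratic subproblem
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # soft threshold
    u = u + x - z                              # multiplier (dual) update

primal_residual = np.linalg.norm(x - z)
```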
Directory of Open Access Journals (Sweden)
Lina Hamouche
2017-02-01
Full Text Available Bacteria adopt social behavior to expand into new territory, led by specialized swarmers, before forming a biofilm. Such mass migration of Bacillus subtilis on a synthetic medium produces hyperbranching dendrites that transiently (equivalent to 4 to 5 generations of growth maintain a cellular monolayer over long distances, greatly facilitating single-cell gene expression analysis. Paradoxically, while cells in the dendrites (nonswarmers might be expected to grow exponentially, the rate of swarm expansion is constant, suggesting that some cells are not multiplying. Little attention has been paid to which cells in a swarm are actually multiplying and contributing to the overall biomass. Here, we show in situ that DNA replication, protein translation and peptidoglycan synthesis are primarily restricted to the swarmer cells at dendrite tips. Thus, these specialized cells not only lead the population forward but are apparently the source of all cells in the stems of early dendrites. We developed a simple mathematical model that supports this conclusion.
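The paradox resolved above — constant swarm expansion despite growing cell numbers — can be illustrated with a toy calculation (not the authors' published model): if only a fixed cohort of tip swarmers divides, colony size grows linearly, whereas growth would be exponential if every cell divided.

```python
# Toy version of the argument, with invented parameters: a fixed cohort of
# n_tip swarmer cells at the dendrite tip divides at rate r and deposits
# daughters into the stem, so colony size grows linearly; if every cell
# divided, growth would be exponential.
r, n_tip, steps = 0.5, 100, 10

tip_only = [float(n_tip)]
all_cells = [float(n_tip)]
for _ in range(steps):
    tip_only.append(tip_only[-1] + r * n_tip)   # constant production rate
    all_cells.append(all_cells[-1] * (1 + r))   # every cell divides

tip_increments = [b - a for a, b in zip(tip_only, tip_only[1:])]
```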
On the design of a radix-10 online floating-point multiplier
McIlhenny, Robert D.; Ercegovac, Milos D.
2009-08-01
This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.
International Nuclear Information System (INIS)
Singh, Manpreet; Singh, Gurvinderjit; Singh, Bhajan; Sandhu, B.S.
2008-01-01
Gamma photons continue to soften in energy as the number of scatterings increases in a target of finite depth and lateral dimensions. The number of multiply scattered photons increases with target thickness and saturates at a particular thickness known as the saturation thickness (depth). The present measurements study the energy dependence of the saturation thickness of multiply scattered gamma photons for targets of various thicknesses. The scattered photons are detected by a properly shielded NaI(Tl) gamma-ray detector placed at 90 deg. to the incident beam. We observe that the saturation thickness increases with increasing incident gamma photon energy. Monte Carlo calculations, based on the package developed by Bauer and Pattison [Compton scattering experiments at the HMI (1981), HMI-B 364, pp. 1-106], support the present experimental results
Effect of cooling on the efficiency of Schottky varactor frequency multipliers at millimeter waves
Louhi, Jyrki; Raiesanen, Antti; Erickson, Neal
1992-01-01
The efficiency of a Schottky diode multiplier can be increased by cooling the diode to 77 K. The main reason for the better efficiency is the increased mobility of the free carriers: the series resistance decreases, and a few dB higher efficiency can be expected at low input power levels. At high output frequencies and at high power levels, current saturation decreases the efficiency of the multiplication. When the diode is cooled, its maximum current increases and much more output power can be expected. There are also slight changes in the I-V characteristic and in the diode junction capacitance, but these have a negligible effect on the efficiency of the multiplier.
Effects of layer-multiplying and interface on the content of β-transcrystallization in PP
International Nuclear Information System (INIS)
Lei, Fan; Li, Jiang; Guo, Shaoyun
2015-01-01
Alternating multilayered polypropylene (PP layer)/β-nucleating-agent-filled polypropylene (β-PP layer) samples were prepared through layer-multiplying extrusion combined with an assembly of layer-multiplying elements (LMEs). The β-crystal content was first evaluated by differential scanning calorimetry (DSC), which indicated that the relative amount of β-crystal increased from 38.67% to 81.22% as the number of layers increased from 2 to 128, in good agreement with the X-ray diffraction (XRD) results. Morphology observation of the β-crystal by polarizing microscope (POM) revealed that the closely packed nuclei at the interface could induce numerous β-transcrystallites in the pure PP layer due to the confinement effect. Non-isothermal crystallization kinetic analysis via Mozhishen's method showed that the crystallization rate was greatly enhanced by the increase in layered interfaces
Synthesis of highly faceted multiply twinned gold nanocrystals stabilized by polyoxometalates
International Nuclear Information System (INIS)
Yuan Junhua; Chen Yuanxian; Han Dongxue; Zhang Yuanjian; Shen Yanfei; Wang Zhijuan; Niu Li
2006-01-01
A novel and facile chemical synthesis of highly faceted multiply twinned gold nanocrystals is reported. The gold nanocrystals are hexagonal in transmission electron microscopy and icosahedral in scanning electron microscopy. Phosphotungstic acid (PTA), which was previously reduced, serves as a reductant and stabilizer for the synthesis of gold nanocrystals. The PTA-gold nanocomposites are quite stable in aqueous solutions, and electrochemically active towards the hydrogen evolution reaction
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2017-01-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to a low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the incapabilities of DAS, providing a higher image quality. However, the resolution improvement is not well enough compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra...
Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Adabi, Saba; Nasiriavanaki, Mohammadreza
2018-01-01
Photoacoustic imaging (PAI) is an emerging medical imaging modality capable of providing high spatial resolution of Ultrasound (US) imaging and high contrast of optical imaging. Delay-and-Sum (DAS) is the most common beamforming algorithm in PAI. However, using DAS beamformer leads to low resolution images and considerable contribution of off-axis signals. A new paradigm namely Delay-Multiply-and-Sum (DMAS), which was originally used as a reconstruction algorithm in confocal microwave imaging...
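The DAS and DMAS combination rules named above can be sketched on synthetic, already delay-compensated channel data. This is an illustrative sketch only: the array geometry and signals are invented, and the EIBMV combinations of these papers are not reproduced.

```python
import numpy as np

# DAS vs. DMAS on pre-aligned (delay-compensated) channel data; the echo
# location and noise level below are invented for illustration.
rng = np.random.default_rng(2)
n_ch, n_s = 8, 200
aligned = np.zeros((n_ch, n_s))
aligned[:, 90:110] = np.sin(np.linspace(0, 4 * np.pi, 20))  # common echo
aligned += 0.05 * rng.normal(size=aligned.shape)            # channel noise

def das(x):
    """Delay-and-sum: simply sum the delay-compensated channels."""
    return x.sum(axis=0)

def dmas(x):
    """Delay-multiply-and-sum: signed square roots of pairwise products,
    which suppresses uncorrelated (off-axis) contributions."""
    m, n = x.shape
    y = np.zeros(n)
    for i in range(m - 1):
        for j in range(i + 1, m):
            p = x[i] * x[j]
            y += np.sign(p) * np.sqrt(np.abs(p))
    return y

y_das, y_dmas = das(aligned), dmas(aligned)
```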
Ionizing device comprising a microchannel electron multiplier with secondary electron emission
International Nuclear Information System (INIS)
Chalmeton, Vincent.
1974-01-01
The present invention relates to an ionizing device comprising a microchannel electron multiplier that uses secondary electron emission as the means of ionization. A system of electrodes is used to accelerate the electrons, ionize the gas, and extract the ions from the plasma thus created. The ionizer is suitable for bombarding the target in neutron sources (targets of the nickel-molybdenum type coated with tritiated titanium or with a tritium-deuterium mixture) [fr
New holographic limit of AdS5 × S5
International Nuclear Information System (INIS)
Hatsuda, Machiko; Siegel, Warren
2003-01-01
We reexamine the projective light cone limit of the gauge-invariant Green-Schwarz action on AdS5 × S5 (five-dimensional anti-de Sitter space times the five-sphere). It implies the usual holography for AdS5, but also (a complex) one for S5. The result is N=4 projective superspace, which unlike N=4 harmonic superspace can describe N=4 super Yang-Mills theory off shell
Controlled G-Frames and Their G-Multipliers in Hilbert spaces
Rahimi, Asghar; Fereydooni, Abolhassan
2012-01-01
Multipliers have been recently introduced by P. Balazs as operators for Bessel sequences and frames in Hilbert spaces. These are operators that combine (frame-like) analysis, multiplication with a fixed sequence (called the symbol), and synthesis. Weighted and controlled frames have been introduced to improve the numerical efficiency of iterative algorithms for inverting the frame operator. Also, g-frames are the most popular generalization of frames that include almost all of the frame extens...
Silicon Photo-Multiplier Radiation Hardness Tests with a White Neutron Beam
International Nuclear Information System (INIS)
Montanari, A.; Tosi, N.; Pietropaolo, A.; Andreotti, M.; Baldini, W.; Calabrese, R.; Cibinetto, G.; Luppi, E.; Cotta Ramusino, A.; Malaguti, R.; Santoro, V.; Tellarini, G.; Tomassetti, L.; De Donato, C.; Reali, E.
2013-06-01
We report radiation hardness tests performed, with a white neutron beam, at the Geel Electron Linear Accelerator in Belgium on silicon photo-multipliers. These are semiconductor photon detectors made of a square matrix of Geiger-mode avalanche photo-diodes on a silicon substrate. Several samples from different manufacturers have been irradiated, integrating up to about 6.2 × 10⁹ 1-MeV-equivalent neutrons per cm². (authors)
Limitations in THz Power Generation with Schottky Diode Varactor Frequency Multipliers
DEFF Research Database (Denmark)
Krozer, Viktor; Loata, G.; Grajal, J.
2002-01-01
, at increasing frequencies the power drops with f⁻³ instead of the f⁻² predicted by theory. In this contribution we provide an overview of state-of-the-art results. A comparison with theoretically achievable multiplier performance reveals that the devices employed at higher frequencies are operating...... inefficiently and the design and fabrication capabilities have not reached the maturity encountered at lower THz frequencies....
Ad Hoc Microphone Array Beamforming Using the Primal-Dual Method of Multipliers
DEFF Research Database (Denmark)
Tavakoli, Vincent Mohammad; Jensen, Jesper Rindom; Heusdens, Richard
2016-01-01
In recent years, there has been an increasing amount of research aiming at optimal beamforming with ad hoc microphone arrays, mostly with fusion-based schemes. However, the huge computational complexity and communication overhead impede many of these algorithms from being useful in prac...... the distributed linearly-constrained minimum variance beamformer using the state-of-the-art primal-dual method of multipliers. We study the proposed algorithm with an experiment.
Outer-shell transitions in collisions between multiply charged ions and atoms
International Nuclear Information System (INIS)
Bloemen, E.W.P.
1980-01-01
The study of collisions between multiply charged ions and atoms (molecules) is of importance in different areas of research. Usually, the most important process is capture of an electron from the target atom into the projectile ion. In most cases the electron goes to an excited state of the projectile ion. These electron capture processes are studied. The author also studied direct excitation of the target atom and of the projectile ion. (Auth.)
The Growth Points of Regional Economy and Regression Estimation for Branch Investment Multipliers
Directory of Open Access Journals (Sweden)
Nina Pavlovna Goridko
2018-03-01
Full Text Available The article develops a methodology for using investment multipliers to identify growth points for a regional economy. The paper discusses various options for assessing the multiplicative effects caused by investments in certain sectors of the economy. All calculations are carried out on the example of the economy of the Republic of Tatarstan for the period 2005–2015. Regression modeling using the method of least squares makes it possible to estimate sectoral and cross-sectoral investment multipliers in the economy of the Republic of Tatarstan. Moreover, this method makes it possible to assess the elasticity of the gross output of the regional economy and its individual sectors with respect to investment in various sectors of the economy. The calculation results allowed us to identify three growth points of the economy of the Republic of Tatarstan: the mining industry, the manufacturing industry, and construction. The success of a particular industry or sub-industry in a country or a region should be measured not only by its share in the macro-system's gross output or value added, but also by the multiplicative effect that investments in the industry have on the development of other industries, on employment, and on the general national or regional product. In recent years, the growth of the Russian economy was close to zero, so it is crucial to understand the structural consequences of increasing investments in various sectors of the Russian economy. In this regard, the problems solved in the article are relevant for a number of countries and regions with a similar economic situation. The obtained results can be applied to similar estimations of investment multipliers, as well as multipliers of government spending and other components of aggregate demand, in various countries and regions to identify growth points. Investments in these growth points will induce the greatest and most evident increase in the outcome of the macro-system's economic activities.
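The regression estimation of investment multipliers mentioned above can be sketched with ordinary least squares on synthetic data; the series below and the true multiplier of 4 are invented, not the Tatarstan data.

```python
import numpy as np

# OLS estimate of an investment multiplier: regress output on investment.
# Hypothetical synthetic series with a known true multiplier of 4.
rng = np.random.default_rng(3)
investment = rng.uniform(10, 100, size=40)
output = 50 + 4.0 * investment + rng.normal(0, 5, size=40)

X = np.column_stack([np.ones_like(investment), investment])  # add intercept
beta, *_ = np.linalg.lstsq(X, output, rcond=None)
multiplier_hat = beta[1]    # estimated marginal effect of investment
```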
Relations between bilinear multipliers on Rn, Tn and Zn
Indian Academy of Sciences (India)
Since then the study of bilinear multiplier operators which commute with simultaneous translations has attracted a great deal of ... Unlike in the linear case, the boundedness of the symbol ψC is not known. In this article we will be dealing with ... For the converse, let ψ ∈ M^{p3}_{p1,p2}(Z). For f, g ∈ C_c^∞(R), we have.
ROLE OF AGRO-INDUSTRY IN BANGLADESH ECONOMY: AN EMPIRICAL ANALYSIS OF LINKAGES AND MULTIPLIERS
Quddus, Md. Abdul
2009-01-01
The study was undertaken to evaluate the contribution of agro-industry to the Bangladesh economy. The latest two input-output tables, for the years 1993-94 and 2001-2002, in Bangladesh were used to calculate inter-industry linkage indices and multiplier effects. Agro-industry contributes a significant portion of national income, and the prospects for employment generation are increasing to a greater extent for the sectors food processing, tanning and leather finishing, leather industry, saw milling a...
Timing characteristics of the VEhU-6 microchannel electron multipliers
International Nuclear Information System (INIS)
Bakhtizin, R.Z.; Yumaguzin, Yu.M.
1982-01-01
The VEhU-6 channel electron multiplier timing characteristics are experimentally studied. The dependence of the monoelectron pulse duration at the VEhU-6 output on the channel supply voltage is investigated, and the VEhU-6 delay time is measured. The delay time increased from 10 to 30 ns with the increase of the channel supply voltage from 2.8 to 3.2 kV (at approximately 10^5 pulse/s loading). The delay time increases as the loading decreases.
The validity of PPP: evidence from Lagrange multiplier unit root tests for ASEAN countries
Alper ASLAN
2010-01-01
The univariate and panel Lagrange Multiplier (LM) unit root tests with one and two structural breaks proposed by Lee and Strazicich (2003, 2004), which are considerably more powerful than traditional tests, are employed to investigate whether the purchasing power parity (PPP) theory holds for ASEAN countries, using both black market and official exchange rates. We find strong evidence in favour of long-run PPP for six ASEAN countries, namely Indonesia, Malaysia, Myanmar, Philippines, Sin...
Understanding the size of the government spending multiplier: It's in the sign
Barnichon, Régis; Matthes, Christian
2016-01-01
Despite intense scrutiny, estimates of the government spending multiplier remain highly uncertain, with values ranging from 0.5 to 2. While an increase in government spending is generally assumed to have the same (mirror-image) effect as a decrease in government spending, we show that relaxing this assumption is important to understand the effects of fiscal policy. Regardless of whether we identify government spending shocks from (i) a narrative approach, or (ii) a timing restr...
Enhanced fuel production in thorium fusion hybrid blankets utilizing uranium multipliers
International Nuclear Information System (INIS)
Pitulski, R.H.; Chapin, D.L.; Klevans, E.
1979-01-01
The multiplication of 14 MeV D-T fusion neutrons via (n,2n), (n,3n), and fission reactions by 238U is well known and established. This study consistently evaluates the effectiveness of a depleted (tails) UO2 multiplier on increasing the production of 233U and tritium in a thorium/lithium fusion-fission hybrid blanket. Nuclear performance is evaluated as a function of exposure and zone thickness.
On the fast response of channel electron multipliers in counting mode operation
International Nuclear Information System (INIS)
Belyaevskij, O.A.; Gladyshev, I.L.; Korobochko, Yu.S.; Mineev, V.I.
1983-01-01
Dependences of the amplitude distribution of pulses at the outlet of channel electron multipliers (CEM) and of the monitoring efficiency on the counting rate at different supply voltages are determined. It is shown that the maximum counting rate of a CEM reaches 6×10^5 s^-1 in short-term and 10^5 s^-1 in long-term operation, using monitoring equipment with an operation threshold of 2.5 mV.
Charge exchange and ionization in atom-multiply-charged ion collisions
International Nuclear Information System (INIS)
Presnyakov, L.P.; Uskov, D.B.
1988-01-01
This study investigates one-electron transitions to the continuous and discrete spectra induced by a collision of atom A with a multiply-charged ion B^+Z with nuclear charge Z > 3. An analytical method is developed for the charge-exchange reaction; this method is a generalization of the decay model and of the approximation of nonadiabatic coupling of two states, which are used as limiting cases in the proposed approach.
Influence of capture to excited states of multiply charged ion beams colliding with small molecules
International Nuclear Information System (INIS)
Montenegro, P; Monti, J M; Fojón, O A; Hanssen, J; Rivarola, R D
2015-01-01
Electron capture by multiply charged ions impacting on small molecules is theoretically investigated. Particular attention is paid to the case of biological targets. The interest is focused on the importance of transitions to excited final states, which can play a dominant role in the total capture cross sections. Projectiles at intermediate and high collision energies are considered. Comparison with existing experimental data is shown. (paper)
Multiplier effects and government assistance to energy megaprojects: An application to Hibernia
International Nuclear Information System (INIS)
Feehan, J.P.; Locke, L.W.
1993-01-01
Energy megaprojects typically require several years to construct and entail substantial costs. These costs, in the forms of employment, capital equipment and material inputs, are sometimes viewed as benefits. Moreover, the expenditures on these inputs can induce further increases in employment and income. On the basis of these project-specific and induced effects, government assistance is sometimes sought. The very limiting circumstances under which such government aid is justified are described. Multiplier effects only become relevant if private expenditure would not otherwise occur in some form in the economy. There are contractionary multiplier effects associated with the imposition of taxes to finance the project, and so the two opposing forces may be largely offsetting. Government assistance can only be justified in the presence of unemployment, and where the multiplier effects are large. When these criteria are applied to the Hibernia project, it is found that the project does not generate employment and income effects that are large relative to the total expenditure, or even relative to the level of federal government assistance. The job creation argument for the justification of government assistance to the Hibernia project is very weak. 18 refs., 1 tab
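The offsetting expansionary and contractionary effects mentioned above can be illustrated with textbook Keynesian multiplier arithmetic (a generic sketch with an assumed marginal propensity to consume, not the model used in the Hibernia assessment):

```python
# Textbook Keynesian multiplier arithmetic (generic illustration with an
# assumed marginal propensity to consume; not the Hibernia study's model).
mpc = 0.6

spending_multiplier = 1 / (1 - mpc)        # expansionary effect of project spending
tax_multiplier = -mpc / (1 - mpc)          # contractionary effect of financing taxes
net_effect = spending_multiplier + tax_multiplier  # balanced-budget result: 1.0
```

With an assumed MPC of 0.6, the spending multiplier of 2.5 is largely offset by the tax multiplier of -1.5, leaving a net effect of 1.0, which is the "largely offsetting" point made in the abstract.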
Inference on the reliability of Weibull distribution with multiply Type-I censored data
International Nuclear Information System (INIS)
Jia, Xiang; Wang, Dong; Jiang, Ping; Guo, Bo
2016-01-01
In this paper, we focus on the reliability of the Weibull distribution under multiply Type-I censoring, which is a general form of Type-I censoring. In multiply Type-I censoring in this study, all units in the life testing experiment are terminated at different times. Reliability estimation with the maximum likelihood estimate of the Weibull parameters is conducted. With the delta method and Fisher information, we propose a confidence interval for reliability and compare it with the bias-corrected and accelerated bootstrap confidence interval. Furthermore, a scenario involving a few expert judgments of reliability is considered. A method is developed to generate extended estimations of reliability according to the original judgments and transform them to estimations of the Weibull parameters. With Bayes theory and the Markov chain Monte Carlo method, a posterior sample is obtained to compute the Bayes estimate and credible interval for reliability. Monte Carlo simulation demonstrates that the proposed confidence interval outperforms the bootstrap one. The Bayes estimate and credible interval for reliability are both satisfactory. Finally, a real example is analyzed to illustrate the application of the proposed methods. - Highlights: • We focus on reliability of Weibull distribution under multiply Type-I censoring. • The proposed confidence interval for the reliability is superior after comparison. • The Bayes estimates with a few expert judgements on reliability are satisfactory. • We specify the cases where the MLEs do not exist and present methods to remedy it. • The distribution of estimate of reliability should be used for accurate estimate.
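As a rough sketch of the maximum likelihood step under Type-I censoring, the following fits Weibull parameters to a small made-up sample in which units are censored at different times, then evaluates the reliability R(t) = exp(-(t/λ)^k). It is illustrative only and not the authors' code.

```python
# Sketch: Weibull MLE under multiply Type-I censoring (made-up data, not the
# paper's code). Failures contribute the density, censored units the survival
# function. Parameters are optimized on the log scale to stay positive.
import numpy as np
from scipy.optimize import minimize

t = np.array([20., 35., 50., 52., 68., 80., 95., 100.])  # event/censoring times
d = np.array([1, 1, 1, 0, 1, 0, 1, 0])                   # 1 = failure, 0 = censored

def neg_log_lik(log_params):
    k, lam = np.exp(log_params)           # shape k, scale lam
    z = (t / lam) ** k
    log_f = np.log(k / lam) + (k - 1) * np.log(t / lam) - z  # log density
    log_S = -z                                               # log survival
    return -(d * log_f + (1 - d) * log_S).sum()

res = minimize(neg_log_lik, x0=np.log([1.0, 80.0]), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
reliability_50 = np.exp(-(50.0 / lam_hat) ** k_hat)  # estimated R(t = 50)
```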
Design, implementation and performance comparison of multiplier topologies in power-delay space
Directory of Open Access Journals (Sweden)
Mansi Jhamb
2016-03-01
Full Text Available With the advancements in the semiconductor industry, designing a high performance processor is a prime concern. The multiplier is one of the most crucial parts of almost every digital signal processing application. This paper addresses the implementation of an 8-bit multiplier design employing a CMOS full adder, a full adder using Double Pass-transistor Logic (DPL), and multioutput carry lookahead (CLA) logic. The DPL adder avoids the noise margin problem and speed degradation at low supply voltages associated with complementary pass transistor (CPL) logic circuits. The multioutput carry lookahead adder leads to a significant improvement in the speed of the overall circuitry. The investigation is carried out with simulation runs in the HSPICE environment using 90 nm process technology at 25 °C. Finally, design guidelines are derived to select the most suitable topology for the desired applications. The investigation reveals that the multiplier design using the multioutput carry lookahead adder proves to be more speed efficient in comparison with the other two considered design strategies.
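At the behavioral level, the multiplication that such adder cells implement is the classic shift-and-add scheme, sketched below for an 8-bit unsigned case (an illustration of the arithmetic only, not the paper's transistor-level design):

```python
# Behavioral shift-and-add view of an 8-bit unsigned multiplier: each set bit
# of b selects a shifted copy of a, and each addition corresponds to one adder
# row in hardware (arithmetic illustration, not the transistor-level design).
def mul8(a: int, b: int) -> int:
    assert 0 <= a < 256 and 0 <= b < 256
    product = 0
    for bit in range(8):
        if (b >> bit) & 1:
            product += a << bit   # add the partial product for this bit
    return product & 0xFFFF       # 16-bit result

assert mul8(200, 150) == 30000
```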
Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.
Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo
2018-01-01
The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
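The alternating direction method of multipliers named above is a general splitting scheme. The following minimal numpy sketch applies it to a generic l1-regularized least-squares problem with synthetic data, not to the fingerprinting reconstruction itself, to show the characteristic x-update, proximal z-update and multiplier update:

```python
# Minimal ADMM sketch on a generic problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# with synthetic data (not the MR fingerprinting reconstruction): the x-update
# solves a quadratic subproblem, the z-update is an l1 proximal step
# (soft-thresholding), and u accumulates the scaled Lagrange multipliers.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true

lam, rho = 0.1, 1.0
z = np.zeros(10)
u = np.zeros(10)
AtA, Atb = A.T @ A, A.T @ b
C = np.linalg.cholesky(AtA + rho * np.eye(10))  # factor once, reuse every iteration
for _ in range(200):
    x = np.linalg.solve(C.T, np.linalg.solve(C, Atb + rho * (z - u)))  # x-update
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)            # z-update
    u = u + x - z                                                      # dual update
```

Caching the Cholesky factor mirrors the efficiency concern in the abstract: the expensive quadratic subproblem is factored once and reused across all iterations.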
Modeling Photo-multiplier Gain and Regenerating Pulse Height Data for Application Development
Aspinall, Michael D.; Jones, Ashley R.
2018-01-01
Systems that adopt organic scintillation detector arrays often require a calibration process prior to the intended measurement campaign to correct for significant performance variances between detectors within the array. These differences exist because of the low tolerances associated with photo-multiplier tube technology and environmental influences. Differences in detector response can be corrected for by adjusting the supplied photo-multiplier tube voltage to control its gain, using the effect that this has on the pulse height spectra from a gamma-only calibration source with a defined photo-peak. Automated methods that analyze these spectra and adjust the photo-multiplier tube bias accordingly are emerging for hardware that integrates acquisition electronics and high-voltage control. However, development of such algorithms requires access to the hardware, multiple detectors and a calibration source for prolonged periods, all with associated constraints and risks. In this work, we report on a software function and related models developed to rescale and regenerate pulse height data acquired from a single scintillation detector. Such a function could be used to generate significant and varied pulse height data for integration-testing algorithms that automatically response-match multiple detectors using pulse height spectra analysis. Furthermore, a function of this sort removes the dependence on multiple detectors, digital analyzers and a calibration source. Results show a good match between the real and regenerated pulse height data. The function has also been used successfully to develop auto-calibration algorithms.
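A minimal sketch of the rescaling idea (hypothetical function and synthetic pulse heights, not the authors' software): photo-multiplier gain grows roughly as a power law of supply voltage, so pulse heights recorded at one voltage can be scaled by the implied gain ratio to emulate a detector at another voltage.

```python
# Hypothetical sketch of rescaling recorded pulse heights to emulate a PMT gain
# change (not the authors' function). PMT gain varies roughly as a power law of
# supply voltage, G ∝ V^(k*n) for n dynode stages, so a voltage change maps to
# a multiplicative stretch of the pulse-height axis.
import numpy as np

rng = np.random.default_rng(1)
pulses = rng.gamma(shape=3.0, scale=50.0, size=10000)  # synthetic pulse heights

def rescale(pulses, v_old, v_new, kn=7.0):
    """Scale pulse heights by the gain ratio implied by a supply-voltage change."""
    return pulses * (v_new / v_old) ** kn

matched = rescale(pulses, v_old=900.0, v_new=950.0)  # emulate a higher-gain detector
```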
Lower-Order Compensation Chain Threshold-Reduction Technique for Multi-Stage Voltage Multipliers
Directory of Open Access Journals (Sweden)
Francesco Dell’ Anna
2018-04-01
Full Text Available This paper presents a novel threshold-compensation technique for multi-stage voltage multipliers employed in low power applications such as passive and autonomous wireless sensing nodes (WSNs) powered by energy harvesters. The proposed threshold-reduction technique enables a topological design methodology which, through an optimum control of the trade-off between transistor conductivity and leakage losses, is aimed at maximizing the voltage conversion efficiency (VCE) for a given ac input signal and physical chip area occupation. The conducted simulations positively assert the validity of the proposed design methodology, emphasizing the exploitable design space yielded by the transistor connection scheme in the voltage multiplier chain. An experimental validation and comparison of threshold-compensation techniques was performed, adopting 2N5247 N-channel junction field effect transistors (JFETs) for the realization of the voltage multiplier prototypes. The attained measurements clearly support the effectiveness of the proposed threshold-reduction approach, which can significantly reduce the chip area occupation for a given target output performance and ac input signal.
Formation of molecules in interstellar clouds from singly and multiply ionized atoms
International Nuclear Information System (INIS)
Langer, W.D. (NASA Institute for Space Studies, Goddard Space Flight Center, New York)
1978-01-01
Soft X-rays and cosmic rays produce multiply ionized atoms which may initiate molecule production in interstellar clouds. This molecule production can occur via ion-molecule reactions with H2, either directly from the multiply ionized atom (e.g., C++ + H2 → CH+ + H+), or indirectly from the singly ionized atoms (e.g., N+ + H2 → NH+ + H) that are formed from the recombination or charge transfer of the highly ionized atom (e.g., N++ + e → N+ + hν). We investigate the contribution of these reactions to the abundances of carbon-, nitrogen-, and oxygen-bearing molecules in isobaric models of diffuse clouds. In the presence of the average flux estimated for the diffuse soft X-ray background, multiply ionized atoms contribute only minimally (a few percent) to carbon-bearing molecules such as CH. In the neighborhood of diffuse structures or discrete sources, however, where the X-ray flux is enhanced, multiple ionization is considerably more important for molecule production.
Lower-Order Compensation Chain Threshold-Reduction Technique for Multi-Stage Voltage Multipliers.
Dell' Anna, Francesco; Dong, Tao; Li, Ping; Wen, Yumei; Azadmehr, Mehdi; Casu, Mario; Berg, Yngvar
2018-04-17
This paper presents a novel threshold-compensation technique for multi-stage voltage multipliers employed in low power applications such as passive and autonomous wireless sensing nodes (WSNs) powered by energy harvesters. The proposed threshold-reduction technique enables a topological design methodology which, through an optimum control of the trade-off between transistor conductivity and leakage losses, is aimed at maximizing the voltage conversion efficiency (VCE) for a given ac input signal and physical chip area occupation. The conducted simulations positively assert the validity of the proposed design methodology, emphasizing the exploitable design space yielded by the transistor connection scheme in the voltage multiplier chain. An experimental validation and comparison of threshold-compensation techniques was performed, adopting 2N5247 N-channel junction field effect transistors (JFETs) for the realization of the voltage multiplier prototypes. The attained measurements clearly support the effectiveness of the proposed threshold-reduction approach, which can significantly reduce the chip area occupation for a given target output performance and ac input signal.
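The motivation for threshold compensation can be seen from the idealized textbook output of an N-stage multiplier, in which each stage loses the device threshold (a generic approximation with assumed numbers, not the paper's circuit model):

```python
# Idealized N-stage voltage multiplier output with a per-device threshold drop
# (textbook approximation, not the paper's JFET circuit): every stage delivers
# 2*(Vpeak - Vth), so at low input amplitudes the threshold dominates the loss.
def multiplier_vout(n_stages: int, v_peak: float, v_th: float) -> float:
    return max(0.0, 2 * n_stages * (v_peak - v_th))

uncompensated = multiplier_vout(5, 0.5, 0.30)  # 2.0 V out of an ideal 5.0 V
compensated = multiplier_vout(5, 0.5, 0.05)    # 4.5 V once Vth is mostly cancelled
```

For a 0.5 V input amplitude, cutting the effective threshold from 0.30 V to 0.05 V more than doubles the output in this idealization, which is why threshold reduction pays off most for energy-harvester-level signals.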
Low-order-mode harmonic multiplying gyrotron traveling-wave amplifier in W band
International Nuclear Information System (INIS)
Yeh, Y. S.; Chen, C. H.; Yang, S. J.; Lai, C. H.; Lin, T. Y.; Lo, Y. C.; Hong, J. W.; Hung, C. L.; Chang, T. H.
2012-01-01
Harmonic multiplying gyrotron traveling-wave amplifiers (gyro-TWAs) allow for magnetic field reduction and frequency multiplication. To avoid absolute instabilities, this work proposes a W-band harmonic multiplying gyro-TWA operating at low-order modes. By amplifying a fundamental harmonic TE11 drive wave, the second harmonic component of the beam current initiates a TE21 wave to be amplified. Absolute instabilities in the gyro-TWA are suppressed by shortening the interaction circuit and increasing wall losses. Simulation results reveal that compared with Ka-band gyro-TWTs, the lower wall losses effectively suppress absolute instabilities in the W-band gyro-TWA. However, a global reflective oscillation occurs as the wall losses decrease. Increasing the length or resistivity of the lossy section can reduce the feedback of the oscillation to stabilize the amplifier. The W-band harmonic multiplying gyro-TWA is predicted to yield a peak output power of 111 kW at 98 GHz with an efficiency of 25%, a saturated gain of 26 dB, and a bandwidth of 1.6 GHz for a 60 kV, 7.5 A electron beam with an axial velocity spread of 8%.
Kaplan, David; Su, Dan
2016-01-01
This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…
K. Estrada Gil (Karol); A. Abuseiris (Anis); F.G. Grosveld (Frank); A.G. Uitterlinden (André); T.A. Knoch (Tobias); F. Rivadeneira Ramirez (Fernando)
2009-01-01
textabstractThe current fast growth of genome-wide association studies (GWAS) combined with now common computationally expensive imputation requires the online access of large user groups to high-performance computing resources capable of analyzing rapidly and efficiently millions of genetic
Directory of Open Access Journals (Sweden)
CARLOS ALBERTO SILVA
Full Text Available ABSTRACT Accurate forest inventory is of great economic importance to optimize the entire supply chain management in pulp and paper companies. The aim of this study was to estimate stand dominant and mean heights (HD and HM) and tree density (TD) of Pinus taeda plantations located in South Brazil using in-situ measurements, airborne Light Detection and Ranging (LiDAR) data and non-parametric k-nearest neighbor (k-NN) imputation. Forest inventory attributes and LiDAR-derived metrics were calculated at 53 regular sample plots, and we used imputation models to retrieve the forest attributes at plot and landscape levels. The best LiDAR-derived metrics to predict HD, HM and TD were H99TH, HSD, SKE and HMIN. The imputation model using the selected metrics was more effective for retrieving height than tree density. The model coefficients of determination (adj.R2) and root mean squared differences (RMSD) for HD, HM and TD were 0.90, 0.94 and 0.38, and 6.99, 5.70 and 12.92%, respectively. Our results show that LiDAR and k-NN imputation can be used to predict stand heights with high accuracy in Pinus taeda. However, further studies are needed to improve the prediction accuracy of TD and to evaluate and compare the cost of acquisition and processing of LiDAR data against conventional inventory procedures.
Silva, Carlos Alberto; Klauberg, Carine; Hudak, Andrew T; Vierling, Lee A; Liesenberg, Veraldo; Bernett, Luiz G; Scheraiber, Clewerson F; Schoeninger, Emerson R
2018-01-01
Accurate forest inventory is of great economic importance to optimize the entire supply chain management in pulp and paper companies. The aim of this study was to estimate stand dominant and mean heights (HD and HM) and tree density (TD) of Pinus taeda plantations located in South Brazil using in-situ measurements, airborne Light Detection and Ranging (LiDAR) data and non-parametric k-nearest neighbor (k-NN) imputation. Forest inventory attributes and LiDAR-derived metrics were calculated at 53 regular sample plots, and we used imputation models to retrieve the forest attributes at plot and landscape levels. The best LiDAR-derived metrics to predict HD, HM and TD were H99TH, HSD, SKE and HMIN. The imputation model using the selected metrics was more effective for retrieving height than tree density. The model coefficients of determination (adj.R2) and root mean squared differences (RMSD) for HD, HM and TD were 0.90, 0.94 and 0.38, and 6.99, 5.70 and 12.92%, respectively. Our results show that LiDAR and k-NN imputation can be used to predict stand heights with high accuracy in Pinus taeda. However, further studies are needed to improve the prediction accuracy of TD and to evaluate and compare the cost of acquisition and processing of LiDAR data against conventional inventory procedures.
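The imputation step can be sketched with a plain nearest-neighbor mean; the plots and metrics below are invented for illustration, while the study itself used k-MSN-style imputation on LiDAR metrics such as H99TH and HSD.

```python
# Plain k-NN imputation sketch with synthetic data (hypothetical metrics; the
# study used k-MSN-style imputation on LiDAR metrics such as H99TH and HSD):
# a target cell inherits the mean attribute of its k most similar sample plots.
import numpy as np

rng = np.random.default_rng(2)
X_ref = rng.uniform(5, 30, size=(53, 2))               # 53 plots, two LiDAR metrics
y_ref = 0.9 * X_ref[:, 0] + rng.normal(0, 0.5, 53)     # field-measured height (m)

def knn_impute(x_target, X_ref, y_ref, k=5):
    """Impute y for a target as the mean over its k nearest reference plots."""
    dist = np.linalg.norm(X_ref - x_target, axis=1)    # metric-space distances
    nearest = np.argsort(dist)[:k]
    return y_ref[nearest].mean()

hd_estimate = knn_impute(np.array([20.0, 18.0]), X_ref, y_ref)
```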
Y.J. Kim (Young Jin); J. Lee (Juyoung); B.-J. Kim (Bong-Jo); T. Park (Taesung); G.R. Abecasis (Gonçalo); M.A.A. De Almeida (Marcio); D. Altshuler (David); J.L. Asimit (Jennifer L.); G. Atzmon (Gil); M. Barber (Mathew); A. Barzilai (Ari); N.L. Beer (Nicola L.); G.I. Bell (Graeme I.); J. Below (Jennifer); T. Blackwell (Tom); J. Blangero (John); M. Boehnke (Michael); D.W. Bowden (Donald W.); N.P. Burtt (Noël); J.C. Chambers (John); H. Chen (Han); P. Chen (Ping); P.S. Chines (Peter); S. Choi (Sungkyoung); C. Churchhouse (Claire); P. Cingolani (Pablo); B.K. Cornes (Belinda); N.J. Cox (Nancy); A.G. Day-Williams (Aaron); A. Duggirala (Aparna); J. Dupuis (Josée); T. Dyer (Thomas); S. Feng (Shuang); J. Fernandez-Tajes (Juan); T. Ferreira (Teresa); T.E. Fingerlin (Tasha E.); J. Flannick (Jason); J.C. Florez (Jose); P. Fontanillas (Pierre); T.M. Frayling (Timothy); C. Fuchsberger (Christian); E. Gamazon (Eric); K. Gaulton (Kyle); S. Ghosh (Saurabh); B. Glaser (Benjamin); A.L. Gloyn (Anna); R.L. Grossman (Robert L.); J. Grundstad (Jason); C. Hanis (Craig); A. Heath (Allison); H. Highland (Heather); M. Horikoshi (Momoko); I.-S. Huh (Ik-Soo); J.R. Huyghe (Jeroen R.); M.K. Ikram (Kamran); K.A. Jablonski (Kathleen); Y. Jun (Yang); N. Kato (Norihiro); J. Kim (Jayoun); Y.J. Kim (Young Jin); B.-J. Kim (Bong-Jo); J. Lee (Juyoung); C.R. King (C. Ryan); J.S. Kooner (Jaspal S.); M.-S. Kwon (Min-Seok); H.K. Im (Hae Kyung); M. Laakso (Markku); K.K.-Y. Lam (Kevin Koi-Yau); J. Lee (Jaehoon); S. Lee (Selyeong); S. Lee (Sungyoung); D.M. Lehman (Donna M.); H. Li (Heng); C.M. Lindgren (Cecilia); X. Liu (Xuanyao); O.E. Livne (Oren E.); A.E. Locke (Adam E.); A. Mahajan (Anubha); J.B. Maller (Julian B.); A.K. Manning (Alisa K.); T.J. Maxwell (Taylor J.); A. Mazoure (Alexander); M.I. McCarthy (Mark); J.B. Meigs (James B.); B. Min (Byungju); K.L. Mohlke (Karen); A.P. Morris (Andrew); S. Musani (Solomon); Y. Nagai (Yoshihiko); M.C.Y. Ng (Maggie C.Y.); D. Nicolae (Dan); S. Oh (Sohee); N.D. 
Palmer (Nicholette); T. Park (Taesung); T.I. Pollin (Toni I.); I. Prokopenko (Inga); D. Reich (David); M.A. Rivas (Manuel); L.J. Scott (Laura); M. Seielstad (Mark); Y.S. Cho (Yoon Shin); X. Sim (Xueling); R. Sladek (Rob); P. Smith (Philip); I. Tachmazidou (Ioanna); E.S. Tai (Shyong); Y.Y. Teo (Yik Ying); T.M. Teslovich (Tanya M.); J. Torres (Jason); V. Trubetskoy (Vasily); S.M. Willems (Sara); A.L. Williams (Amy L.); J.G. Wilson (James); S. Wiltshire (Steven); S. Won (Sungho); A.R. Wood (Andrew); W. Xu (Wang); J. Yoon (Joon); M. Zawistowski (Matthew); E. Zeggini (Eleftheria); W. Zhang (Weihua); S. Zöllner (Sebastian)
2015-01-01
textabstractBackground: Rare variants have gathered increasing attention as a possible alternative source of missing heritability. Since next generation sequencing technology is not yet cost-effective for large-scale genomic studies, a widely used alternative approach is imputation. However, the
Office of Personnel Management — A press release, news release, media release, press statement is written communication directed at members of the news media for the purpose of announcing programs...
van Leeuwen, Elisabeth M.; Karssen, Lennart C.; Deelen, Joris; Isaacs, Aaron; Medina-Gomez, Carolina; Mbarek, Hamdi; Kanterakis, Alexandros; Trompet, Stella; Postmus, Iris; Verweij, Niek; van Enckevort, David J.; Huffman, Jennifer E.; White, Charles C.; Feitosa, Mary F.; Bartz, Traci M.; Manichaikul, Ani; Joshi, Peter K.; Peloso, Gina M.; Deelen, Patrick; van Dijk, Freerk; Willemsen, Gonneke; de Geus, Eco J.; Milaneschi, Yuri; Penninx, Brenda W.J.H.; Francioli, Laurent C.; Menelaou, Androniki; Pulit, Sara L.; Rivadeneira, Fernando; Hofman, Albert; Oostra, Ben A.; Franco, Oscar H.; Leach, Irene Mateo; Beekman, Marian; de Craen, Anton J.M.; Uh, Hae-Won; Trochet, Holly; Hocking, Lynne J.; Porteous, David J.; Sattar, Naveed; Packard, Chris J.; Buckley, Brendan M.; Brody, Jennifer A.; Bis, Joshua C.; Rotter, Jerome I.; Mychaleckyj, Josyf C.; Campbell, Harry; Duan, Qing; Lange, Leslie A.; Wilson, James F.; Hayward, Caroline; Polasek, Ozren; Vitart, Veronique; Rudan, Igor; Wright, Alan F.; Rich, Stephen S.; Psaty, Bruce M.; Borecki, Ingrid B.; Kearney, Patricia M.; Stott, David J.; Adrienne Cupples, L.; Neerincx, Pieter B.T.; Elbers, Clara C.; Francesco Palamara, Pier; Pe'er, Itsik; Abdellaoui, Abdel; Kloosterman, Wigard P.; van Oven, Mannis; Vermaat, Martijn; Li, Mingkun; Laros, Jeroen F.J.; Stoneking, Mark; de Knijff, Peter; Kayser, Manfred; Veldink, Jan H.; van den Berg, Leonard H.; Byelas, Heorhiy; den Dunnen, Johan T.; Dijkstra, Martijn; Amin, Najaf; Joeri van der Velde, K.; van Setten, Jessica; Kattenberg, Mathijs; van Schaik, Barbera D.C.; Bot, Jan; Nijman, Isaäc J.; Mei, Hailiang; Koval, Vyacheslav; Ye, Kai; Lameijer, Eric-Wubbo; Moed, Matthijs H.; Hehir-Kwa, Jayne Y.; Handsaker, Robert E.; Sunyaev, Shamil R.; Sohail, Mashaal; Hormozdiari, Fereydoun; Marschall, Tobias; Schönhuth, Alexander; Guryev, Victor; Suchiman, H. 
Eka D.; Wolffenbuttel, Bruce H.; Platteel, Mathieu; Pitts, Steven J.; Potluri, Shobha; Cox, David R.; Li, Qibin; Li, Yingrui; Du, Yuanping; Chen, Ruoyan; Cao, Hongzhi; Li, Ning; Cao, Sujie; Wang, Jun; Bovenberg, Jasper A.; Jukema, J. Wouter; van der Harst, Pim; Sijbrands, Eric J.; Hottenga, Jouke-Jan; Uitterlinden, Andre G.; Swertz, Morris A.; van Ommen, Gert-Jan B.; de Bakker, Paul I.W.; Eline Slagboom, P.; Boomsma, Dorret I.; Wijmenga, Cisca; van Duijn, Cornelia M.
2015-01-01
Variants associated with blood lipid levels may be population-specific. To identify low-frequency variants associated with this phenotype, population-specific reference panels may be used. Here we impute nine large Dutch biobanks (~35,000 samples) with the population-specific reference panel created by the Genome of the Netherlands Project and perform association testing with blood lipid levels. We report the discovery of five novel associations at four loci (P value < 6.61 × 10^-4), including a rare missense variant in ABCA6 (rs77542162, p.Cys1359Arg, frequency 0.034), which is predicted to be deleterious. The frequency of this ABCA6 variant is 3.65-fold increased in the Dutch and its effect (βLDL-C = 0.135, βTC = 0.140) is estimated to be very similar to those observed for single variants in well-known lipid genes, such as LDLR. PMID:25751400
Vastaranta, Mikko; Kankare, Ville; Holopainen, Markus; Yu, Xiaowei; Hyyppä, Juha; Hyyppä, Hannu
2012-01-01
The two main approaches to deriving forest variables from laser-scanning data are the statistical area-based approach (ABA) and individual tree detection (ITD). With ITD it is feasible to acquire single tree information, as in field measurements. Here, ITD was used for measuring training data for the ABA. In addition to automatic ITD (ITD auto), we tested a combination of ITD auto and visual interpretation (ITD visual). ITD visual had two stages: in the first, ITD auto was carried out and in the second, the results of the ITD auto were visually corrected by interpreting three-dimensional laser point clouds. The field data comprised 509 circular plots (r = 10 m) that were divided equally for testing and training. ITD-derived forest variables were used for training the ABA and the accuracies of the k-most similar neighbor (k-MSN) imputations were evaluated and compared with the ABA trained with traditional measurements. The root-mean-squared error (RMSE) in the mean volume was 24.8%, 25.9%, and 27.2% with the ABA trained with field measurements, ITD auto, and ITD visual, respectively. When ITD methods were applied in acquiring training data, the mean volume, basal area, and basal area-weighted mean diameter were underestimated in the ABA by 2.7-9.2%. This project constituted a pilot study for using ITD measurements as training data for the ABA. Further studies are needed to reduce the bias and to determine the accuracy obtained in imputation of species-specific variables. The method could be applied in areas with sparse road networks or when the costs of fieldwork must be minimized.
Directory of Open Access Journals (Sweden)
Jörgen Wallerman
2013-04-01
Full Text Available Individual tree crowns may be delineated from airborne laser scanning (ALS) data by segmentation of surface models or by 3D analysis. Segmentation of surface models benefits from using a priori knowledge about the proportions of tree crowns, which has not yet been utilized for 3D analysis to any great extent. In this study, an existing surface segmentation method was used as a basis for a new tree model 3D clustering method applied to ALS returns in 104 circular field plots with 12 m radius in pine-dominated boreal forest (64°14'N, 19°50'E). For each cluster below the tallest canopy layer, a parabolic surface was fitted to model a tree crown. The tree model clustering identified more trees than segmentation of the surface model, especially smaller trees below the tallest canopy layer. Stem attributes were estimated with k-Most Similar Neighbours (k-MSN) imputation of the clusters based on field-measured trees. The accuracy at plot level from the k-MSN imputation (stem density root mean square error or RMSE 32.7%; stem volume RMSE 28.3%) was similar to the corresponding results from the surface model (stem density RMSE 33.6%; stem volume RMSE 26.1%) with leave-one-out cross-validation for one field plot at a time. Three-dimensional analysis of ALS data should also be evaluated in multi-layered forests since it identified a larger number of small trees below the tallest canopy layer.
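Fitting a parabolic crown surface to a cluster of ALS returns reduces to linear least squares after expanding the paraboloid; the sketch below uses synthetic points and is not the authors' implementation.

```python
# Least-squares fit of a parabolic crown surface to synthetic ALS returns
# (illustrative, not the authors' implementation). Expanding
# z = c0 + c1*x + c2*y + c3*(x^2 + y^2) makes the fit linear in the
# coefficients; a negative c3 gives the downward-opening crown paraboloid.
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.uniform(-2, 2, (2, 200))                        # return positions (m)
z = 15.0 - 0.8 * (x**2 + y**2) + rng.normal(0, 0.1, 200)   # crown-shaped heights

A = np.column_stack([np.ones_like(x), x, y, x**2 + y**2])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
apex_height = coef[0]   # crown apex for a tree centred at the origin
```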
Directory of Open Access Journals (Sweden)
Momoko Horikoshi
2015-07-01
Full Text Available Reference panels from the 1000 Genomes (1000G) Project Consortium provide near complete coverage of common and low-frequency genetic variation with minor allele frequency ≥0.5% across European ancestry populations. Within the European Network for Genetic and Genomic Epidemiology (ENGAGE) Consortium, we have undertaken the first large-scale meta-analysis of genome-wide association studies (GWAS), supplemented by 1000G imputation, for four quantitative glycaemic and obesity-related traits, in up to 87,048 individuals of European ancestry. We identified two loci for body mass index (BMI) at genome-wide significance, and two for fasting glucose (FG), none of which has been previously reported in larger meta-analysis efforts to combine GWAS of European ancestry. Through conditional analysis, we also detected multiple distinct signals of association mapping to established loci for waist-hip ratio adjusted for BMI (RSPO3) and FG (GCK and G6PC2). The index variant for one association signal at the G6PC2 locus is a low-frequency coding allele, H177Y, which has recently been demonstrated to have a functional role in glucose regulation. Fine-mapping analyses revealed that the non-coding variants most likely to drive association signals at established and novel loci were enriched for overlap with enhancer elements, which for FG mapped to promoter and transcription factor binding sites in pancreatic islets, in particular. Our study demonstrates that 1000G imputation and genetic fine-mapping of common and low-frequency variant association signals at GWAS loci, integrated with genomic annotation in relevant tissues, can provide insight into the functional and regulatory mechanisms through which their effects on glycaemic and obesity-related traits are mediated.
Horikoshi, Momoko; Mӓgi, Reedik; van de Bunt, Martijn; Surakka, Ida; Sarin, Antti-Pekka; Mahajan, Anubha; Marullo, Letizia; Thorleifsson, Gudmar; Hӓgg, Sara; Hottenga, Jouke-Jan; Ladenvall, Claes; Ried, Janina S; Winkler, Thomas W; Willems, Sara M; Pervjakova, Natalia; Esko, Tõnu; Beekman, Marian; Nelson, Christopher P; Willenborg, Christina; Wiltshire, Steven; Ferreira, Teresa; Fernandez, Juan; Gaulton, Kyle J; Steinthorsdottir, Valgerdur; Hamsten, Anders; Magnusson, Patrik K E; Willemsen, Gonneke; Milaneschi, Yuri; Robertson, Neil R; Groves, Christopher J; Bennett, Amanda J; Lehtimӓki, Terho; Viikari, Jorma S; Rung, Johan; Lyssenko, Valeriya; Perola, Markus; Heid, Iris M; Herder, Christian; Grallert, Harald; Müller-Nurasyid, Martina; Roden, Michael; Hypponen, Elina; Isaacs, Aaron; van Leeuwen, Elisabeth M; Karssen, Lennart C; Mihailov, Evelin; Houwing-Duistermaat, Jeanine J; de Craen, Anton J M; Deelen, Joris; Havulinna, Aki S; Blades, Matthew; Hengstenberg, Christian; Erdmann, Jeanette; Schunkert, Heribert; Kaprio, Jaakko; Tobin, Martin D; Samani, Nilesh J; Lind, Lars; Salomaa, Veikko; Lindgren, Cecilia M; Slagboom, P Eline; Metspalu, Andres; van Duijn, Cornelia M; Eriksson, Johan G; Peters, Annette; Gieger, Christian; Jula, Antti; Groop, Leif; Raitakari, Olli T; Power, Chris; Penninx, Brenda W J H; de Geus, Eco; Smit, Johannes H; Boomsma, Dorret I; Pedersen, Nancy L; Ingelsson, Erik; Thorsteinsdottir, Unnur; Stefansson, Kari; Ripatti, Samuli; Prokopenko, Inga; McCarthy, Mark I; Morris, Andrew P
2015-07-01
Reference panels from the 1000 Genomes (1000G) Project Consortium provide near complete coverage of common and low-frequency genetic variation with minor allele frequency ≥0.5% across European ancestry populations. Within the European Network for Genetic and Genomic Epidemiology (ENGAGE) Consortium, we have undertaken the first large-scale meta-analysis of genome-wide association studies (GWAS), supplemented by 1000G imputation, for four quantitative glycaemic and obesity-related traits, in up to 87,048 individuals of European ancestry. We identified two loci for body mass index (BMI) at genome-wide significance, and two for fasting glucose (FG), none of which has been previously reported in larger meta-analysis efforts to combine GWAS of European ancestry. Through conditional analysis, we also detected multiple distinct signals of association mapping to established loci for waist-hip ratio adjusted for BMI (RSPO3) and FG (GCK and G6PC2). The index variant for one association signal at the G6PC2 locus is a low-frequency coding allele, H177Y, which has recently been demonstrated to have a functional role in glucose regulation. Fine-mapping analyses revealed that the non-coding variants most likely to drive association signals at established and novel loci were enriched for overlap with enhancer elements, which for FG mapped to promoter and transcription factor binding sites in pancreatic islets, in particular. Our study demonstrates that 1000G imputation and genetic fine-mapping of common and low-frequency variant association signals at GWAS loci, integrated with genomic annotation in relevant tissues, can provide insight into the functional and regulatory mechanisms through which their effects on glycaemic and obesity-related traits are mediated.
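Per-cohort GWAS effect estimates in a consortium like this are conventionally combined by inverse-variance-weighted fixed-effect meta-analysis; a minimal sketch (the effect sizes and standard errors below are illustrative, not from the study):

```python
import numpy as np

def fixed_effect_meta(betas, ses):
    """Inverse-variance-weighted fixed-effect meta-analysis: each cohort's
    effect estimate is weighted by 1/SE^2; returns the pooled effect and
    its standard error."""
    w = 1.0 / np.asarray(ses) ** 2
    beta = np.sum(w * betas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return beta, se

# Three hypothetical cohorts contributing to one variant's signal.
beta, se = fixed_effect_meta([0.10, 0.14, 0.08], [0.05, 0.07, 0.04])
print(beta, se)
```

The pooled standard error is always smaller than the smallest per-cohort one, which is what makes large meta-analyses sensitive to the low-frequency signals discussed above.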
International Nuclear Information System (INIS)
Singh, Manpreet; Singh, Gurvinderjit; Singh, Bhajan; Sandhu, B.S.
2007-01-01
An inverse response matrix converts the observed pulse-height distribution of a NaI(Tl) scintillation detector to a photon spectrum. This also results in extraction of the intensity distribution of multiply scattered events originating from interactions of 0.279 MeV photons with thick targets of soldering material. The observed pulse-height distributions are a composite of singly and multiply scattered events in addition to bremsstrahlung- and Rayleigh-scattered events. To evaluate the contribution of multiply scattered events, the spectrum of singly scattered events contributing to the inelastic Compton peak is reconstructed analytically. The optimum thickness (saturation depth), at which the number of multiply scattered events saturates, has been measured. Monte Carlo calculations also support the present results.
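The unfolding step amounts to applying the inverse response matrix to the measured pulse-height distribution: if m = R·s relates the true spectrum s to the measured channels m, then s = R⁻¹·m. A toy sketch with an invented 3-bin response (illustrative numbers, not a real NaI(Tl) response matrix):

```python
import numpy as np

# Columns = true photon energy bins, rows = pulse-height channels.
# Full-energy peak efficiency on the diagonal; partial-deposition
# (Compton) events spill into lower channels.
R = np.array([[0.80, 0.15, 0.10],
              [0.00, 0.80, 0.15],
              [0.00, 0.00, 0.85]])

true_spectrum = np.array([1000.0, 500.0, 200.0])  # counts per energy bin
measured = R @ true_spectrum                      # what the detector records
unfolded = np.linalg.solve(R, measured)           # apply R^-1: recover spectrum
print(unfolded)
```

In practice the measured distribution is noisy and the unfolding is regularized; the exact solve here works only because the toy data are noise-free.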
Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.
1987-01-01
A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.
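The two-multiplication trick rests on the fact that -1 is a quadratic residue modulo a Fermat prime, so complex arithmetic splits into two independent residue channels. A sketch using F4 = 65537 with j = 256 (since 256² ≡ -1 mod 65537); the article's exact modulus and hardware details are not given here:

```python
P = 2**16 + 1  # Fermat number F4 = 65537 (prime)
J = 256        # 256**2 = 65536 ≡ -1 (mod P), so J plays the role of i

def to_qrns(a, b):
    """Map the complex integer a + bi to the residue pair (a + Jb, a - Jb) mod P."""
    return ((a + J * b) % P, (a - J * b) % P)

def from_qrns(z1, z2):
    """Map a residue pair back to (a, b) via division by 2 and 2J mod P."""
    a = (z1 + z2) * pow(2, -1, P) % P
    b = (z1 - z2) * pow(2 * J, -1, P) % P
    signed = lambda v: v - P if v > P // 2 else v  # recentre to signed range
    return signed(a), signed(b)

def complex_mul(a, b, c, d):
    """(a+bi)(c+di) using only TWO integer multiplications mod P,
    versus four (or three) in the direct method."""
    z1, z2 = to_qrns(a, b)
    w1, w2 = to_qrns(c, d)
    return from_qrns(z1 * w1 % P, z2 * w2 % P)

print(complex_mul(3, 4, 1, 2))  # (3+4i)(1+2i) = -5 + 10i → (-5, 10)
```

In hardware, multiplication modulo a Fermat number is itself cheap (shifts and adds), which is what makes the single-chip VLSI implementation attractive.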
International Nuclear Information System (INIS)
Zavodnik, L.B.; Buko, V.U.
2009-01-01
The aim of this work was to study the effect of multiple low doses of gamma-irradiation, at total doses of 1 and 2 Gy, on lipid peroxidation and xenobiotic metabolism in rat liver. Multiple irradiation was shown to cause pronounced activation of lipid peroxidation, indicated by increased levels of TBARS and diene conjugates. The microsomal oxidation system was disrupted at the same time. (authors)
Kumar Kailasa, Suresh; Hasan, Nazim; Wu, Hui-Fen
2012-08-15
The development of liquid nitrogen assisted spray ionization mass spectrometry (LNASI MS) for the analysis of multiply charged proteins (insulin, ubiquitin, cytochrome c, α-lactalbumin, myoglobin and BSA), peptides (glutathione, HW6, angiotensin-II and valinomycin) and amino acid (arginine) clusters is described. The charged droplets are formed by liquid nitrogen assisted sample spray through a stainless steel nebulizer and transported into the mass analyzer for identification of multiply charged protein ions. The effects of acids and modifier volumes on the efficient ionization of the above analytes in LNASI MS were carefully investigated. Multiply charged proteins and amino acid clusters were effectively identified by LNASI MS. The present approach can detect the multiply charged states of cytochrome c at 400 nM. A comparison of LNASI with the ESI, CSI, SSI and V-EASI methods, in terms of instrumental conditions, applied temperature and observed charge states for multiply charged proteins, shows that the LNASI method produces good-quality spectra of amino acid clusters at ambient conditions without applying any electric field or heat. We believe the LNASI method is to date the simplest and lowest-cost alternative paradigm for the production of multiply charged ions: it yields ESI-like ions with no applied electric field and can be operated at low temperature to generate highly charged protein/peptide ions. Copyright © 2012 Elsevier B.V. All rights reserved.
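Charge states in such spectra are conventionally assigned from pairs of adjacent multiply charged peaks, since consecutive charges z and z+1 of the same neutral mass pin down both unknowns. A small sketch of that standard deconvolution (the ~16951 Da mass below is a typical value for horse myoglobin, used purely as an illustration):

```python
PROTON = 1.00728  # proton mass in Da

def mz(mass, z):
    """m/z of a protein of neutral mass `mass` carrying z protons."""
    return (mass + z * PROTON) / z

def charge_and_mass(m1, m2):
    """Deconvolute two ADJACENT multiply charged peaks (m1 > m2, charges
    z and z+1) into the charge z of peak m1 and the neutral mass."""
    z = round((m2 - PROTON) / (m1 - m2))
    return z, z * (m1 - PROTON)

# Simulate the 20+ and 21+ peaks of a ~16951 Da protein, then invert.
m1, m2 = mz(16951.0, 20), mz(16951.0, 21)
z, mass = charge_and_mass(m1, m2)
print(z, round(mass, 1))  # 20 16951.0
```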
Toxic chemical considerations for tank farm releases. Revision 1
Energy Technology Data Exchange (ETDEWEB)
Van Keuren, J.C.
1995-11-01
This document provides a method of determining the toxicological consequences of accidental releases from the Hanford Tank Farms. A determination was made of the most restrictive toxic chemicals expected to be present in the tanks. Concentrations were estimated based on the maximum sample data for each analyte in all the tanks in the composite. Composites evaluated were liquids and solids from single-shell tanks, double-shell tanks, and flammable gas watch list tanks, as well as all solids, all liquids, head space gases, and 241-C-106 solids. A sum of fractions of the health effects was computed for each composite for unit releases based on emergency response planning guidelines (ERPGs). Where ERPGs were not available for chemical compounds of interest, surrogate guidelines were established. The calculation method in this report can be applied to actual release scenarios by multiplying the sum of fractions by the release rate for continuous releases, or by the release amount for puff releases. Risk guidelines are met if the product is less than or equal to one.
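The sum-of-fractions screening rule can be sketched directly; all concentrations and ERPG limits below are invented placeholders, not Hanford values:

```python
def sum_of_fractions(concentrations, erpgs):
    """ERPG sum-of-fractions for a unit release: sum_i C_i / ERPG_i.
    For an actual scenario, multiply by the release rate (continuous
    release) or release amount (puff release); the risk guideline is
    met if the product is <= 1."""
    return sum(c / g for c, g in zip(concentrations, erpgs))

# Hypothetical composite: per-unit-release concentrations (mg/m3) and
# ERPG limits (mg/m3) for three analytes (illustrative numbers only).
sof = sum_of_fractions([0.5, 0.2, 1.0], [10.0, 2.0, 25.0])
release_amount = 3.0  # hypothetical puff release, in unit-release multiples
print(sof * release_amount <= 1.0)
```

Summing the fractions treats the chemicals' health effects as additive, which is the conservative convention for mixtures when no interaction data are available.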