Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library
International Nuclear Information System (INIS)
Oblozinsky, P.; Mattoon, C.M.; Herman, M.; Mughabghab, S.F.; Pigni, M.T.; Talou, P.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Young, P.G.
2009-01-01
Improved neutron cross section covariances were produced for 110 materials, including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. The improved covariances were organized into the AFCI-1.2 covariance library in 33 energy groups, from 10^-5 eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: 23Na and 55Mn, where more detailed evaluations were done; improvements for the major structural materials 52Cr, 56Fe and 58Ni; improved estimates for the remaining structural materials and fission products; improved covariances for 14 minor actinides; and estimates of mubar covariances for 23Na and 56Fe. LANL contributed improved covariance data for 235U and 239Pu, including prompt fission neutron spectra, and a completely new evaluation for 240Pu. A new R-matrix evaluation for 16O, including mubar covariances, is nearing completion. BNL assembled the library and performed basic testing using improved procedures, including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.
AFCI-2.0 Neutron Cross Section Covariance Library
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Young, P.G.
2011-03-01
The cross section covariance library has been under development through a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the materials most important for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality - hence the name low-fidelity - for a virtually complete set of the materials included in ENDF/B-VII.0. The present project focuses on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release, the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA), version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which focuses on the needs of fast advanced burner reactors. BNL's responsibility was defined as developing covariances for structural materials and fission products, management of the library, and coordination of the work; LANL's responsibility was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over a period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the ENDF/B-VII.0 central values of the latest official release of the US files of evaluated neutron cross sections. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides.
ORACLE: an adjusted cross-section and covariance library for fast-reactor analysis
International Nuclear Information System (INIS)
Yeivin, Y.; Marable, J.H.; Weisbin, C.R.; Wagschal, J.J.
1980-01-01
Benchmark integral-experiment values from six fast critical-reactor assemblies and two standard neutron fields are combined with corresponding calculations, using group cross sections based on ENDF/B-V, in a least-squares data adjustment using evaluated covariances from ENDF/B-V and supporting covariance evaluations. The purpose is to produce an adjusted cross-section and covariance library which is based on well-documented data and methods and which is suitable for fast-reactor design. By use of such a library, data- and methods-related biases of calculated performance parameters should be reduced and the uncertainties of the calculated values minimized. Consistency of the extensive data base is analyzed using the chi-square test. The adjusted library, ORACLE, will be available shortly.
COVFILS: 30-group covariance library based on ENDF/B-V
International Nuclear Information System (INIS)
Muir, D.W.; LaBauve, R.J.
1981-03-01
A library of 30-group cross sections and covariances called COVFILS has been prepared from ENDF/B-V data using the NJOY code system. COVFILS includes data on the total cross section, scattering cross sections, and the most important absorption cross sections for 1H, 10B, C, 16O, Cr, Fe, Ni, Cu, and Pb. This report contains detailed descriptions of various features of the library, a listing of a FORTRAN retrieval program, and 143 plots of the multigroup cross-section uncertainties and their correlations.
International Nuclear Information System (INIS)
Kodeli, Ivan-Alexander
2005-01-01
The new cross-section covariance matrix library ZZ-VITAMIN-J/COVA/EFF3, intended to simplify and encourage sensitivity and uncertainty analysis, was prepared and is available from the NEA Data Bank. The library is organised in a ready-to-use form including both the covariance matrix data and processing tools: cross-section covariance matrices from the EFF-3 evaluation for five materials (9Be, 28Si, 56Fe, 58Ni and 60Ni), with other data to be included when available; the FORTRAN program ANGELO-2, to extrapolate/interpolate the covariance matrices to a user-defined energy group structure; and the FORTRAN program LAMBDA, to verify the mathematical properties of the covariance matrices, such as symmetry and positive definiteness. The preparation, testing and use of the covariance matrix library are presented. The uncertainties based on the cross-section covariance data were compared with those based on other evaluations, such as ENDF/B-VI. The collapsing procedure used in the ANGELO-2 code was compared with and validated against the one used in the NJOY system.
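The kinds of mathematical checks a tool like LAMBDA performs on a covariance matrix can be sketched in a few lines. The following is an illustrative re-implementation of the checks (symmetry, non-negative variances, positive semi-definiteness), not the LAMBDA code itself:

```python
import numpy as np

def check_covariance(cov, tol=1e-10):
    """Verify basic mathematical properties of a covariance matrix:
    symmetry, non-negative variances, and positive semi-definiteness."""
    cov = np.asarray(cov, dtype=float)
    symmetric = np.allclose(cov, cov.T, atol=tol)
    nonneg_var = bool(np.all(np.diag(cov) >= -tol))
    # Eigenvalues of a symmetric matrix are real; PSD <=> all >= 0.
    eigvals = np.linalg.eigvalsh(0.5 * (cov + cov.T))
    psd = bool(np.all(eigvals >= -tol))
    return {"symmetric": symmetric,
            "nonnegative_variances": nonneg_var,
            "positive_semidefinite": psd}

good = np.array([[4.0, 1.0], [1.0, 2.0]])
bad = np.array([[1.0, 2.0], [2.0, 1.0]])   # implied correlation > 1: indefinite
print(check_covariance(good))
print(check_covariance(bad))
```

A matrix like `bad`, with an off-diagonal element larger than the geometric mean of the variances, fails the positive semi-definiteness test even though it is symmetric.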
Development of web-based user interface for evaluated covariance data files
International Nuclear Information System (INIS)
Togashi, Tomoaki; Kato, Kiyoshi; Suzuki, Ryusuke; Otuka, Naohiko
2010-01-01
We have developed a web-based interface which visualizes cross sections and their covariances compiled in the ENDF format, in order to support users of evaluated covariance data who have no experience with NJOY calculations. The package of programs was constructed without the aid of any existing program libraries. (author)
ZZ COVFILS, 30-Group Covariance Library from ENDF/B-5 for Sensitivity Studies
International Nuclear Information System (INIS)
Muir, D.W.
1997-01-01
1 - Description of program or function: Format: ENDF/B; Number of groups: 30-group covariance library; Nuclides: H-1, B-10, C, O-16, Cr, Fe, Ni, Cu, Pb. Origin: ENDF/B-V. COVFILS is a 30-group covariance library. It contains neutron cross sections and their uncertainties and correlations in multigroup form. These data can be used, in conjunction with sensitivity information, to estimate the data-related uncertainty in calculated integral quantities such as radiation damage or heating. 2 - Method of solution: COVFILS was obtained by processing evaluations from ENDF/B-V with the ERRORR module of the NJOY nuclear data processing system (LA-9303-M, Vol. 1). The group structure is the Los Alamos 30-group structure, which is listed in 'File 1' of each multigroup data set in the library.
Covariance and sensitivity data generation at ORNL
International Nuclear Information System (INIS)
Leal, L. C.; Derrien, H.; Larson, N. M.; Alpan, A.
2005-01-01
Covariance data are required to assess uncertainties in design parameters in several nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the US Evaluated Nuclear Data Library, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. In this paper we address the generation of covariance data in the resonance region, done with the computer code SAMMY. SAMMY is used in the evaluation of experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on the generalised least-squares formalism (Bayes' theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data and also provides the resonance parameter covariances. For resonance parameter evaluations where no resonance parameter covariance data are available, the alternative is to use an approach called 'retroactive' resonance parameter covariance generation. In this paper, we describe the application of the retroactive covariance generation approach to the gadolinium isotopes. (authors)
International Nuclear Information System (INIS)
LaBauve, R.J.; Muir, D.W.
1978-01-01
A library of 30-group multigroup covariance data was prepared from preliminary ENDF/B-V data with the NJOY code. Data for Fe, Cr, Ni, 10B, C, Cu, H, and Pb are included in this library. Reactions include total cross sections, elastic and inelastic scattering cross sections, and the most important absorption cross sections. Typical data from the file are shown. 3 tables
International Nuclear Information System (INIS)
Kawano, Toshihiko; Shibata, Keiichi.
1997-09-01
A covariance evaluation system for the evaluated nuclear data library was established. The parameter estimation method and the least-squares method with a spline function are used to generate the covariance data. Uncertainties of nuclear reaction model parameters are estimated from experimental data uncertainties, and the covariance of the evaluated cross sections is then calculated by means of error propagation. The computer programs ELIESE-3, EGNASH4, ECIS, and CASTHY are used. Covariances of 238U reaction cross sections were calculated with this system. (author)
International Nuclear Information System (INIS)
Plevnik, Lucijan; Žerovnik, Gašper
2016-01-01
Highlights: • Methods for random sampling of correlated parameters. • Link to open-source code for sampling of resonance parameters in ENDF-6 format. • Validation of the code on realistic and artificial data. • Validation of covariances in three major contemporary nuclear data libraries. - Abstract: Methods for random sampling of correlated parameters are presented. The methods are implemented for sampling of resonance parameters in ENDF-6 format and a link to the open-source code ENDSAM is given. The code has been validated on realistic data. Additionally, consistency of covariances of resonance parameters of three major contemporary nuclear data libraries (JEFF-3.2, ENDF/B-VII.1 and JENDL-4.0u2) has been checked.
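The core idea behind random sampling of correlated parameters, as implemented in a code like ENDSAM, can be illustrated with a minimal sketch. The parameter values below are made up for illustration; ENDSAM itself operates on resonance parameters stored in ENDF-6 format:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (made-up) mean parameters and their covariance matrix.
mean = np.array([1.0, 0.5, 2.0])
cov = np.array([[0.04, 0.01, 0.000],
                [0.01, 0.02, 0.005],
                [0.00, 0.005, 0.090]])

# Correlated sampling via the Cholesky factor: x = mean + L z, z ~ N(0, I),
# so that cov(x) = L L^T = cov.
L = np.linalg.cholesky(cov)
samples = mean + rng.standard_normal((100_000, 3)) @ L.T

# Validation step in the spirit of the paper: the sample covariance of the
# generated set should reproduce the input covariance.
sample_cov = np.cov(samples, rowvar=False)
print(np.round(sample_cov, 3))
```

With enough samples, the empirical covariance converges to the input matrix, which is the kind of consistency check used to validate the sampling on realistic and artificial data.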
Error estimation for ADS nuclear properties by using nuclear data covariances
International Nuclear Information System (INIS)
Tsujimoto, Kazufumi
2005-01-01
The error in nuclear properties of an accelerator-driven subcritical system due to uncertainties in nuclear data was estimated. The uncertainty analysis was done using sensitivity coefficients based on generalized perturbation theory together with the covariance matrix data. For major actinides and structural materials, the covariance data in the JENDL-3.3 library were used. For the minor actinides (MA), newly evaluated covariance data were used, since no reliable data were available in any of the libraries. (author)
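The propagation step combining sensitivity coefficients with a covariance matrix is the "sandwich rule", var(R) = S^T M S. A minimal numerical sketch with made-up numbers (not actual JENDL-3.3 data):

```python
import numpy as np

# Sandwich rule: var(R) = S^T M S, where S holds the sensitivity
# coefficients dR/R per unit relative change of each group cross section,
# and M is the relative covariance matrix of those cross sections.
# All numbers below are illustrative only.
S = np.array([0.8, -0.3, 0.1])            # sensitivities for 3 energy groups
M = np.array([[0.0025, 0.0010, 0.0000],
              [0.0010, 0.0016, 0.0000],
              [0.0000, 0.0000, 0.0036]])  # relative covariance matrix

variance = S @ M @ S
print(f"relative uncertainty of the response: {np.sqrt(variance):.4f}")
```

Note that the off-diagonal (correlation) terms of M contribute directly: with anticorrelated sensitivities, positive covariance between groups can partly cancel, which is why full covariance data, not just variances, are needed.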
Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z
2015-11-01
Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting state BOLD time series reflects functional processes in addition to structural connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
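The shrink-then-invert route to partial correlation can be sketched as follows. A fixed shrinkage weight toward a scaled identity is used here for simplicity, whereas the Ledoit-Wolf estimator chooses the weight analytically to minimize expected squared error; this is an illustrative sketch on simulated data, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient case: more regions (p) than time points (n), as in fMRI.
p, n = 20, 10
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)               # singular: rank <= n - 1 < p

# Shrinkage toward a scaled identity makes the matrix invertible.
# (Fixed weight alpha for illustration; Ledoit-Wolf computes it from the data.)
alpha = 0.2
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - alpha) * S + alpha * target

precision = np.linalg.inv(S_shrunk)

# Partial correlation between regions i and j from the precision matrix:
# rho_ij = -P_ij / sqrt(P_ii P_jj).
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(partial_corr.shape)
```

The sample covariance S cannot be inverted directly here, which mirrors the rank-deficiency problem described above; the identity component guarantees all eigenvalues of the shrunken matrix are strictly positive.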
A New Approach for Nuclear Data Covariance and Sensitivity Generation
International Nuclear Information System (INIS)
Leal, L.C.; Larson, N.M.; Derrien, H.; Kawano, T.; Chadwick, M.B.
2005-01-01
Covariance data are required to correctly assess uncertainties in design parameters in nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the U.S. Evaluated Nuclear Data File, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. The computer code SAMMY is used in the analysis of the experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on the generalized least-squares formalism (Bayes' theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance-parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data, and, in addition, it also provides the resonance-parameter covariances. For existing resonance-parameter evaluations where no resonance-parameter covariance data are available, the alternative is to use an approach called the 'retroactive' resonance-parameter covariance generation. In the high-energy region the methodology for generating covariance data consists of least-squares fitting and model parameter adjustment. The least-squares fitting method calculates covariances directly from experimental data. The parameter adjustment method employs a nuclear model calculation such as the optical model and the Hauser-Feshbach model, and estimates a covariance for the nuclear model parameters. In this paper we describe the application of the retroactive method and the parameter adjustment method to generate covariance data for the gadolinium isotopes.
Parametric Covariance Model for Horizon-Based Optical Navigation
Hikes, Jacob; Liounis, Andrew J.; Christian, John A.
2016-01-01
This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.
Preparation of covariance data for the fast reactor. 2
International Nuclear Information System (INIS)
Shibata, Keiichi; Hasegawa, Akira
1998-03-01
For some isotopes important for core analysis of fast reactors, covariances of the neutron nuclear data in the evaluated nuclear data library (JENDL-3.2) were estimated and compiled into files. The isotopes considered were 10B, 11B, 55Mn, 240Pu and 241Pu. The quantities for which covariances were estimated are cross sections, resolved and unresolved resonance parameters, and the first-order Legendre coefficient of the elastic scattering angular distribution. The covariance estimation followed the evaluation method of JENDL-3.2 as far as possible: where an evaluated value was based on experimental data, the error of the experimental value was calculated, and where it was based on calculation, the error of the calculated value was estimated. The results were compiled in ENDF-6 format. (G.K.)
Updated Covariance Processing Capabilities in the AMPX Code System
International Nuclear Information System (INIS)
Wiarda, Dorothea; Dunn, Michael E.
2007-01-01
A concerted effort is in progress within the nuclear data community to provide new cross-section covariance data evaluations to support sensitivity/uncertainty analyses of fissionable systems. The objective of this work is to update processing capabilities of the AMPX library to process the latest Evaluated Nuclear Data File (ENDF)/B formats to generate covariance data libraries for radiation transport software such as SCALE. The module PUFF-IV was updated to allow processing of new ENDF covariance formats in the resolved resonance region. In the resolved resonance region, covariance matrices are given in terms of resonance parameters, which need to be processed into covariance matrices with respect to the group-averaged cross-section data. The parameter covariance matrix can be quite large if the evaluation has many resonances. The PUFF-IV code has recently been used to process an evaluation of 235U, which was prepared in collaboration between Oak Ridge National Laboratory and Los Alamos National Laboratory.
Heteroscedasticity resistant robust covariance matrix estimator
Czech Academy of Sciences Publication Activity Database
Víšek, Jan Ámos
2010-01-01
Vol. 17, No. 27 (2010), pp. 33-49. ISSN 1212-074X. Grant - others: GA UK(CZ) GA402/09/0557. Institutional research plan: CEZ:AV0Z10750506. Keywords: Regression * Covariance matrix * Heteroscedasticity * Resistant. Subject RIV: BB - Applied Statistics, Operational Research. http://library.utia.cas.cz/separaty/2011/SI/visek-heteroscedasticity resistant robust covariance matrix estimator.pdf
International Nuclear Information System (INIS)
Fiorito, L.; Diez, C.J.; Cabellos, O.; Stankovskiy, A.; Van den Eynde, G.; Labeau, P.E.
2014-01-01
Highlights: • Fission yield data and uncertainty comparison between major nuclear data libraries. • Fission yield covariance generation through a Bayesian technique. • Study of the effect of fission yield correlations on decay heat calculations. • Covariance information contributes to reducing fission pulse decay heat uncertainty. - Abstract: Fission product yields are fundamental parameters in burnup/activation calculations, and the impact of their uncertainties was widely studied in the past. Evaluations of these uncertainties were released, still without covariance data. Therefore, the nuclear community expressed the need for full fission yield covariance matrices to be able to produce inventory calculation results that take the complete uncertainty data into account. State-of-the-art fission yield data and methodologies for fission yield covariance generation were investigated in this work. Covariance matrices were generated and compared to the original data stored in the library. We then focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different libraries and codes (ACAB and ALEPH-2) after introducing the new covariance values. Results were compared with those obtained with the uncertainty data currently provided by the libraries. The uncertainty quantification was performed first with Monte Carlo sampling and then compared with linear perturbation. Indeed, correlations between fission yields strongly affect the uncertainty of decay heat. Finally, a sensitivity analysis of fission product yields for fission pulse decay heat was performed in order to provide a full set of the most sensitive nuclides for such a calculation.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2004-01-01
This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular, we provide confidence intervals for each of these quantities.
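The realized covariation estimator itself is simple: it is the sum of outer products of the high-frequency return vectors within the fixed interval. A minimal simulated sketch (illustrative data, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Realized covariation: sum_i r_i r_i^T over the M intraday return vectors
# of one fixed interval (e.g., one day). As M grows, it converges to the
# integrated covariance of the underlying process.
true_cov = np.array([[1.0, 0.3],
                     [0.3, 0.5]]) * 1e-4   # per-day covariance of 2 assets
M = 10_000                                 # number of intraday returns
L = np.linalg.cholesky(true_cov / M)       # each return carries cov/M
returns = rng.standard_normal((M, 2)) @ L.T

realized_cov = returns.T @ returns         # sum of outer products
print(np.round(realized_cov / 1e-4, 2))
```

The estimator is computed afresh for each interval, which is what allows correlations and regressions built from it to be tracked through time, as the abstract describes.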
International Nuclear Information System (INIS)
Weisbin, C.R.; Marable, J.H.; Collins, P.J.; Cowan, C.L.; Peelle, R.W.; Salvatores, M.
1979-06-01
The present work proposes a specific plan of cross section library adjustment for fast reactor core physics analysis using information from fast reactor and dosimetry integral experiments and from differential data evaluations. This detailed exposition of the proposed approach is intended mainly to elicit review and criticism from scientists and engineers in the research, development, and design fields. This major attempt to develop useful adjusted libraries is based on the established benchmark integral data, accurate and well documented analysis techniques, sensitivities, and quantified uncertainties for nuclear data, integral experiment measurements, and calculational methodology. The adjustments to be obtained using these specifications are intended to produce an overall improvement, in the least-squares sense, in the quality of the data libraries, so that calculations of other similar systems using the adjusted data base with any credible method will produce results without much data-related bias. The adjustments obtained should provide specific recommendations to the data evaluation program, to be weighed in the light of newer measurements, and also a vehicle for observing how the evaluation process is converging. This report specifies the calculational methodology to be used, the integral experiments to be employed initially, and the methods and integral experiment biases and uncertainties to be used. The sources of sensitivity coefficients, as well as the cross sections to be adjusted, are detailed. The formulae for sensitivity coefficients for fission spectral parameters are developed. A mathematical formulation of the least-squares adjustment problem is given, including biases and uncertainties in methods.
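The least-squares adjustment described above can be sketched, in a toy two-group form, as a generalized least-squares (Bayesian) update of prior cross sections against integral measurements; all numbers below are illustrative, not from the report:

```python
import numpy as np

# Toy generalized least-squares adjustment:
# prior data x with covariance C, integral measurements m ~ G x with
# covariance V, and sensitivity matrix G relating the two.
x = np.array([1.00, 2.00])                # prior group cross sections
C = np.diag([0.04, 0.09])                 # prior (differential) covariance
G = np.array([[0.5, 0.5],
              [1.0, -0.2]])               # sensitivities of 2 integral responses
V = np.diag([0.01, 0.01])                 # integral-experiment covariance
m = np.array([1.60, 0.70])                # measured integral responses

K = C @ G.T @ np.linalg.inv(G @ C @ G.T + V)   # gain matrix
x_adj = x + K @ (m - G @ x)                    # adjusted cross sections
C_adj = C - K @ G @ C                          # adjusted (reduced) covariance

print("adjusted values:", x_adj)
print("adjusted variances:", np.diag(C_adj))
```

The adjusted covariance C_adj has variances no larger than the prior, reflecting the information gained from the integral experiments; this is the "overall improvement in the least-squares sense" the abstract refers to.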
Covariance data evaluation of some experimental data for n + 65,63,NatCu
International Nuclear Information System (INIS)
Jia Min; Liu Jianfeng; Liu Tingjin
2003-01-01
The evaluation of covariance data for 63Cu, 65Cu and natural Cu in the energy range from 99.5 keV to 20 MeV was carried out using the EXPCOV and SPC codes, based on the available experimental data. The data can serve as part of the covariance file (File 33) in an ENDF/B-6 format evaluated library for the corresponding nuclides, and can also be used as a basis for related theoretical calculations. (authors)
ERRORJ. Covariance processing code system for JENDL. Version 2
International Nuclear Information System (INIS)
Chiba, Gou
2003-09-01
ERRORJ is a covariance processing code system for the Japanese Evaluated Nuclear Data Library (JENDL) that can produce group-averaged covariance data for use in uncertainty analyses of nuclear characteristics. ERRORJ can treat covariance data for cross sections, including resonance parameters, as well as for angular distributions and energy distributions of secondary neutrons, which could not be handled by earlier covariance processing codes. In addition, ERRORJ can treat various forms of multi-group cross sections and produce multi-group covariance files in various formats. This document describes an outline of ERRORJ and how to use it. (author)
Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances
Energy Technology Data Exchange (ETDEWEB)
Sigeti, David Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, D. Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-18
Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multivariate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
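One common linear-algebra repair of this kind, making an experimental covariance matrix positive semi-definite by clipping negative eigenvalues before sampling, can be sketched as follows. This is a simple illustrative variant, not the report's algorithm, which minimizes the changes subject to the full set of constraints:

```python
import numpy as np

def nearest_psd(cov):
    """Project a symmetric matrix onto the positive semi-definite cone
    by clipping negative eigenvalues to zero."""
    sym = 0.5 * (cov + cov.T)
    w, v = np.linalg.eigh(sym)
    return (v * np.clip(w, 0.0, None)) @ v.T

# Illustrative "experimental" covariance that is slightly indefinite.
raw = np.array([[0.04, 0.03],
                [0.03, 0.02]])
fixed = nearest_psd(raw)

# With a valid covariance, multivariate-normal sampling of varied
# cross-section sets is straightforward.
rng = np.random.default_rng(1)
mean = np.array([1.0, 1.0])
samples = rng.multivariate_normal(mean, fixed, size=10_000)
print(samples.shape)
```

Eigenvalue clipping changes the matrix as little as possible in the Frobenius-norm sense, which is the same spirit as the minimal-modification algorithm described in the abstract.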
Impact of the 235U Covariance Data in Benchmark Calculations
International Nuclear Information System (INIS)
Leal, Luiz C.; Mueller, D.; Arbanas, G.; Wiarda, D.; Derrien, H.
2008-01-01
The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameter evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes' method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems.
On the Methodology to Calculate the Covariance of Estimated Resonance Parameters
International Nuclear Information System (INIS)
Becker, B.; Kopecky, S.; Schillebeeckx, P.
2015-01-01
Principles for determining resonance parameters and their covariance from experimental data are discussed. Different methods to propagate the covariance of experimental parameters are compared. A full Bayesian statistical analysis reveals that the degree to which the initial uncertainty of the experimental parameters propagates depends strongly on the experimental conditions. For high-precision data, the initial uncertainties of experimental parameters, such as a normalization factor, have almost no impact on the covariance of the parameters in the case of thick-sample measurements, whether conventional uncertainty propagation or a full Bayesian analysis is used. The covariances derived from a full Bayesian analysis and from a least-squares fit are obtained under the condition that the model describing the experimental observables is perfect. When the quality of the model cannot be verified, a more conservative method based on a renormalization of the covariance matrix is recommended to fully propagate the uncertainty of experimental systematic effects. Finally, neutron resonance transmission analysis is proposed as an accurate method to validate evaluated data libraries in the resolved resonance region.
Propagation of nuclear data uncertainty: Exact or with covariances
Directory of Open Access Journals (Sweden)
van Veen D.
2010-10-01
Full Text Available Two distinct methods of propagating basic nuclear data uncertainties to large-scale systems are presented and compared. The "Total Monte Carlo" method uses a statistical ensemble of nuclear data libraries randomly generated by means of a Monte Carlo approach with the TALYS system. These libraries are then used directly in a large number of reactor calculations (for instance with MCNP), after which the exact probability distribution for the reactor parameter is obtained. The second method makes use of available covariance files and can be carried out in a single reactor calculation (using the perturbation method). In this exercise, both methods use consistent sets of data files, which implies that the covariance files used in the second method are obtained directly from the randomly generated nuclear data libraries of the first method. This is a unique and straightforward comparison that allows the advantages and drawbacks of each method to be assessed directly. Comparisons for different reactions and criticality-safety benchmarks from 19F to actinides are presented. We can thus conclude whether current methods for using covariance data are adequate.
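For a linear response, the covariance-based ("sandwich rule") route and the Monte Carlo route described above agree; a minimal sketch with an entirely synthetic three-group covariance and sensitivity vector (not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-group cross-section covariance (symmetric, positive semi-definite)
A = rng.normal(size=(3, 3))
cov = A @ A.T * 1e-4

# Sensitivity of a reactor parameter (e.g. k-eff) to each group cross section,
# as a single perturbation calculation would provide
S = np.array([0.2, 0.5, 0.3])

# Covariance route (sandwich rule): var(k) = S^T C S
var_k = S @ cov @ S

# "Total Monte Carlo" analogue: sample cross-section sets from the same
# covariance and propagate each one through the (here linear) model
sigma0 = np.array([1.0, 1.0, 1.0])
samples = rng.multivariate_normal(sigma0, cov, size=20000)
k_samples = samples @ S
var_mc = k_samples.var()
# for a linear model the two variances agree up to sampling noise
```

The Monte Carlo route additionally yields the full distribution of `k_samples`, which is its advantage for nonlinear responses.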
Neutron cross section and covariance data evaluation of experimental data for 27Al
International Nuclear Information System (INIS)
Li Chunjuan; Liu Jianfeng; Liu Tingjin
2006-01-01
The evaluation of neutron cross section and covariance data for 27Al in the energy range from 210 keV to 20 MeV was carried out on the basis of experimental data taken mainly from the EXFOR library. After the experimental data and their errors were analyzed, selected and corrected, the SPCC code was used to fit the data and merge the covariance matrix. The evaluated neutron cross section data and covariance matrix for 27Al can be adopted for the evaluated library and can also serve as a basis for related theoretical calculations. (authors)
ANGELO-LAMBDA, Covariance matrix interpolation and mathematical verification
International Nuclear Information System (INIS)
Kodeli, Ivo
2007-01-01
1 - Description of program or function: The codes ANGELO-2.3 and LAMBDA-2.3 are used for the interpolation of cross section covariance data from the original to a user-defined energy group structure, and for mathematical tests of the matrices, respectively. The LAMBDA-2.3 code calculates the eigenvalues of the matrices (both the original and the converted) and classifies them accordingly into positive and negative matrices. This verification is strongly recommended before using any covariance matrices. These versions of the two codes are extended versions of the previous codes available in the package NEA-1264 - ZZ-VITAMIN-J/COVA. They were specifically developed for the purposes of the OECD LWR UAM benchmark, in particular for the processing of the ZZ-SCALE5.1/COVA-44G cross section covariance matrix library retrieved from the SCALE-5.1 package. Either the original SCALE-5.1 libraries or the libraries separated into several files by nuclide can (in principle) be processed by the ANGELO/LAMBDA codes, but the use of the one-nuclide data is strongly recommended. Due to large deviations of the correlation matrix terms from unity observed in some SCALE-5.1 covariance matrices, the previously more severe acceptance condition in the ANGELO-2.3 code was relaxed. In case the correlation coefficients exceed 1.0, only a warning message is issued, and the coefficients are replaced by 1.0. 2 - Methods: ANGELO-2.3 interpolates the covariance matrices to a union grid using flat weighting. The LAMBDA-2.3 code includes the mathematical routines to calculate the eigenvalues of the covariance matrices. 3 - Restrictions on the complexity of the problem: The algorithm used in ANGELO is relatively simple; therefore, interpolations involving an energy group structure very different from the original (e.g., a large difference in the number of energy groups) may not be accurate. In particular, in the case of the MT=1018 data (fission spectra covariances) the algorithm may not be
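The two checks described above, eigenvalue inspection and clipping of correlation coefficients that exceed 1.0, can be sketched as follows; the 2x2 matrix is a deliberately inconsistent toy example, not from any SCALE library:

```python
import numpy as np

def check_and_fix_correlations(cov, tol=1e-10):
    """Report negative eigenvalues and clip correlation coefficients whose
    magnitude exceeds 1.0, issuing only a warning (as ANGELO-2.3 does)."""
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    bad = np.abs(corr) > 1.0 + tol
    if bad.any():
        print(f"warning: {bad.sum()} correlation terms exceed 1.0; clipping")
        corr = np.clip(corr, -1.0, 1.0)
    eigvals = np.linalg.eigvalsh(cov)     # LAMBDA-style eigenvalue test
    n_neg = int((eigvals < -tol).sum())
    return corr, eigvals, n_neg

# A deliberately inconsistent 2x2 "covariance" with correlation 1.5
cov = np.array([[1.0, 1.5],
                [1.5, 1.0]])
corr, eigvals, n_neg = check_and_fix_correlations(cov)
# the off-diagonal correlation is clipped to 1.0, and the eigenvalue test
# flags one negative eigenvalue (-0.5), i.e. the matrix is not valid
```

A covariance matrix with any negative eigenvalue is not positive semi-definite, which is why the abstract recommends this verification before use.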
Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates
DEFF Research Database (Denmark)
Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast-varying signals. ... The method is based on an assumption of the desired signal being harmonic, which is used for estimating the noise covariance matrix from the covariance matrix of the observed signal. The noise covariance estimate is used in the linearly constrained minimum variance (LCMV) filter and compared...
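Given a noise covariance estimate (from IAA in the paper), the LCMV filter has a closed form; a sketch of the single-distortionless-constraint special case (MVDR), with an invented covariance in place of an IAA estimate:

```python
import numpy as np

M = 4                                    # number of channels
R = np.eye(M) + 0.1 * np.ones((M, M))    # noise covariance estimate
                                         # (a stand-in for an IAA estimate)
c = np.ones((M, 1))                      # constraint (steering) vector

# LCMV with one distortionless constraint:
#   minimize w^H R w  subject to  c^H w = 1
# whose closed-form solution is w = R^{-1} c / (c^H R^{-1} c)
Rinv = np.linalg.inv(R)
w = Rinv @ c / (c.T @ Rinv @ c)

# the desired-signal direction passes with exactly unit gain
gain = (c.T @ w).item()
```

The general LCMV filter replaces `c` with a constraint matrix and the scalar 1 with a response vector, but the structure of the solution is the same.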
Are your covariates under control? How normalization can re-introduce covariate effects.
Pain, Oliver; Dudbridge, Frank; Ronald, Angelica
2018-04-30
Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
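The recommended ordering (transform first, then adjust for covariates) can be illustrated with a small sketch; the Blom rank offset and the simulated skewed outcome are illustrative choices, not taken from the study:

```python
import numpy as np
from scipy.stats import rankdata, norm, pearsonr

def rank_int(x, c=3.0 / 8):
    """Rank-based inverse normal transformation (Blom offset)."""
    r = rankdata(x)
    return norm.ppf((r - c) / (len(x) - 2 * c + 1))

rng = np.random.default_rng(1)
n = 5000
covariate = rng.normal(size=n)
y = 0.5 * covariate + rng.exponential(size=n)   # skewed outcome

# Recommended order: apply INT to the outcome first, then regress out
# the covariate; the residuals stay normally distributed and are
# linearly uncorrelated with the covariate by construction
y_int = rank_int(y)
slope, intercept = np.polyfit(covariate, y_int, 1)
resid = y_int - (slope * covariate + intercept)
r_after, _ = pearsonr(resid, covariate)
# r_after is zero to machine precision: OLS residuals are orthogonal
# to the regressor
```

Running the steps in the opposite order (regress, then INT the residuals) is exactly what the study shows can re-introduce a covariate correlation, because the rank transform is nonlinear.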
Cross-covariance functions for multivariate random fields based on latent dimensions
Apanasovich, T. V.
2010-02-16
The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable and computationally feasible classes of cross-covariance functions in closed form. We focus on spatio-temporal cross-covariance functions that can be nonseparable, asymmetric and can have different covariance structures, for instance different smoothness parameters, in each component. We discuss estimation of these models and perform a small simulation study to demonstrate our approach. We illustrate our methodology on a trivariate spatio-temporal pollution dataset from California and demonstrate that our cross-covariance performs better than other competing models. © 2010 Biometrika Trust.
Cross-covariance based global dynamic sensitivity analysis
Shi, Yan; Lu, Zhenzhou; Li, Zhao; Wu, Mengmeng
2018-02-01
To identify the cross-covariance source of the dynamic output at each time instant for structural systems involving both input random variables and stochastic processes, a global dynamic sensitivity (GDS) technique is proposed. The GDS considers the effect of time-history inputs on the dynamic output. In the GDS, a cross-covariance decomposition is first developed to measure the contribution of the inputs to the output at different time instants, and an integration of the cross-covariance change over a specific time interval is employed to measure the whole contribution of the input to the cross-covariance of the output. The GDS main effect indices and the GDS total effect indices can then be easily defined after the integration; they are effective in identifying the important inputs and the non-influential inputs on the cross-covariance of the output at each time instant, respectively. The established GDS analysis model has the same form as the classical ANOVA when it degenerates to the static case. After degeneration, the first-order partial effect reflects the individual effects of inputs on the output variance, and the second-order partial effect reflects the interaction effects on the output variance, which illustrates the consistency of the proposed GDS indices with the classical variance-based sensitivity indices. The Monte Carlo simulation (MCS) procedure and the Kriging surrogate method are developed to solve the proposed GDS indices. Several examples are introduced to illustrate the significance of the proposed GDS analysis technique and the effectiveness of the proposed solution.
Impacts of data covariances on the calculated breeding ratio for CRBRP
International Nuclear Information System (INIS)
Liaw, J.R.; Collins, P.J.; Henryson, H. II; Shenter, R.E.
1983-01-01
In order to establish confidence in the data adjustment methodology as applied to LMFBR design, and to estimate the importance of data correlations in that respect, an investigation was initiated into the impacts of data covariances on calculated reactor performance parameters. This paper summarizes the results and findings of that effort, using the calculation of the breeding ratio for CRBRP as an illustration. Thirty-nine integral parameters and their covariances, including k-eff and various capture and fission reaction rate ratios, from the ZEBRA-8 series and four ZPR physics benchmark assemblies were used in the least-squares fitting processes. Multigroup differential data and the sensitivity coefficients of those 39 integral parameters were generated by standard 2-D diffusion theory neutronic calculation modules at ANL. Three differential data covariance libraries, all based on ENDF/B-V evaluations, were tested in this study.
Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C
2017-08-01
One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.
Video based object representation and classification using multiple covariance matrices.
Zhang, Yurong; Liu, Quan
2017-01-01
Video-based object recognition and classification has been widely studied in the computer vision and image processing areas. One main issue in this task is developing an effective representation for video, a problem that can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. First, we use the Nonnegative Matrix Factorization (NMF) method to cluster the images within each image set, and then apply Covariance Discriminative Learning to each cluster (subset) of images. Finally, we adopt KLDA and a nearest neighbor classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
Matérn-based nonstationary cross-covariance models for global processes
Jun, Mikyoung
2014-07-01
Many spatial processes in environmental applications, such as climate variables and climate model errors on a global scale, exhibit complex nonstationary dependence structure, in not only their marginal covariance but also their cross-covariance. Flexible cross-covariance models for processes on a global scale are critical for an accurate description of each spatial process as well as the cross-dependences between them and also for improved predictions. We propose various ways to produce cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters vary over space, coupled with a differential operators approach for modeling large-scale nonstationarity. We compare their performance to the performance of some existing models in terms of the AIC and spatial predictions in two applications: joint modeling of surface temperature and precipitation, and joint modeling of errors in climate model ensembles. © 2014 Elsevier Inc.
Covariance matrices for nuclear cross sections derived from nuclear model calculations
International Nuclear Information System (INIS)
Smith, D. L.
2005-01-01
The growing need for covariance information to accompany the evaluated cross section data libraries utilized in contemporary nuclear applications is spurring the development of new methods to provide this information. Many of the current general-purpose libraries of evaluated nuclear data used in applications are derived either almost entirely from nuclear model calculations or from nuclear model calculations benchmarked by available experimental data. Consequently, a consistent method for generating covariance information under these circumstances is required. This report discusses a new approach to producing covariance matrices for cross sections calculated using nuclear models. The present method involves establishing uncertainty information for the underlying parameters of the nuclear models used in the calculations and then propagating these uncertainties through to the derived cross sections and related nuclear quantities by means of a Monte Carlo technique, rather than the more conventional matrix error propagation approach used in some alternative methods. The formalism to be used in such analyses is discussed in this report, along with various issues and caveats that need to be considered in order to proceed with a practical implementation of the methodology.
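A minimal sketch of the Monte Carlo propagation described above; the two-parameter "model" and the parameter covariance are invented for illustration and stand in for a real nuclear reaction model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a nuclear reaction model: cross sections at three
# energies computed from two model parameters (a, b)
E = np.array([1.0, 5.0, 14.0])                  # MeV

def model(a, b):
    # invented functional form, vectorized over parameter samples
    return a[:, None] * np.exp(-b[:, None] * E[None, :]) + 0.1 * a[:, None]

# Assumed means and covariance of the underlying model parameters
# (the "uncertainty information for the underlying parameters")
p_mean = np.array([2.0, 0.15])
p_cov = np.array([[0.04, 0.001],
                  [0.001, 0.0004]])

# Monte Carlo propagation: sample parameter sets, run the model for each,
# and form the covariance matrix of the calculated cross sections
samples = rng.multivariate_normal(p_mean, p_cov, size=20000)
xs = model(samples[:, 0], samples[:, 1])        # shape (20000, 3)
xs_cov = np.cov(xs, rowvar=False)               # 3x3 cross-section covariance

# diagonal: variances at each energy; off-diagonal: energy-energy covariances
```

Unlike first-order matrix error propagation, this route needs no derivatives of the model and captures nonlinearity in the parameter-to-cross-section mapping automatically.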
Structural covariance in the hallucinating brain: a voxel-based morphometry study
Modinos, Gemma; Vercammen, Ans; Mechelli, Andrea; Knegtering, Henderikus; McGuire, Philip K.; Aleman, André
2009-01-01
Background Neuroimaging studies have indicated that a number of cortical regions express altered patterns of structural covariance in schizophrenia. The relation between these alterations and specific psychotic symptoms is yet to be investigated. We used voxel-based morphometry to examine regional grey matter volumes and structural covariance associated with severity of auditory verbal hallucinations. Methods We applied optimized voxel-based morphometry to volumetric magnetic resonance imaging data from 26 patients with medication-resistant auditory verbal hallucinations (AVHs); statistical inferences were made at p < 0.05 after correction for multiple comparisons. Results Grey matter volume in the left inferior frontal gyrus was positively correlated with severity of AVHs. Hallucination severity influenced the pattern of structural covariance between this region and the left superior/middle temporal gyri, the right inferior frontal gyrus and hippocampus, and the insula bilaterally. Limitations The results are based on self-reported severity of auditory hallucinations. Complementing with a clinician-based instrument could have made the findings more compelling. Future studies would benefit from including a measure to control for other symptoms that may covary with AVHs and for the effects of antipsychotic medication. Conclusion The results revealed that overall severity of AVHs modulated cortical intercorrelations between frontotemporal regions involved in language production and verbal monitoring, supporting the critical role of this network in the pathophysiology of hallucinations. PMID:19949723
The Cost of Library Services: Activity-Based Costing in an Australian Academic Library.
Robinson, Peter; Ellis-Newman, Jennifer
1998-01-01
Explains activity-based costing (ABC), discusses the benefits of ABC to library managers, and describes the steps involved in implementing ABC in an Australian academic library. Discusses the budgeting process in universities, and considers benefits to the library. (Author/LRW)
Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis
George, E.; Chan, K.
2012-09-01
Several relationships are developed relating object size, initial covariance and range at closest approach to the probability of collision. These relationships address the following questions:
- Given the objects' initial covariance and combined hard-body size, what is the maximum possible value of the probability of collision (Pc)?
- Given the objects' initial covariance, what is the maximum combined hard-body radius for which the probability of collision does not exceed the tolerance limit?
- Given the objects' initial covariance and the combined hard-body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit?
- Given the objects' initial covariance and the miss distance, what is the maximum combined hard-body radius for which the probability of collision does not exceed the tolerance limit?
The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present-day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable ever to conjunct with a probability of collision exceeding 1x10^-6. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals that the screening volumes for small objects are much larger than needed, while the screening volumes for
Evaluated Nuclear Data Covariances: The Journey From ENDF/B-VII.0 to ENDF/B-VII.1
International Nuclear Information System (INIS)
Smith, Donald L.
2011-01-01
Recent interest from data users in applications that utilize the uncertainties of evaluated nuclear reaction data has stimulated the data evaluation community to focus on producing covariance data to a far greater extent than ever before. Although some uncertainty information has been available in the ENDF/B libraries since the 1970s, this content has been fairly limited in scope, the quality quite variable, and the use of covariance data confined to only a few application areas. Today, covariance data are more widely and extensively utilized than ever before in neutron dosimetry, in advanced fission reactor design studies, in nuclear criticality safety assessments, in national security applications, and even in certain fusion energy applications. The main problem that now faces the ENDF/B evaluator community is that of providing covariances that are adequate both in quantity and quality to meet the requirements of contemporary nuclear data users in a timely manner. In broad terms, the approach pursued during the past several years has been to purge any legacy covariance information contained in ENDF/B-VI.8 that was judged to be subpar, to include in ENDF/B-VII.0 (released in 2006) only those covariance data deemed then to be of reasonable quality for contemporary applications, and to subsequently devote as much effort as the available time and resources allowed to producing additional covariance data of suitable scope and quality for inclusion in ENDF/B-VII.1. Considerable attention has also been devoted during the five years since the release of ENDF/B-VII.0 to examining and improving the methods used to produce covariance data from thermal energies up to the highest energies addressed in the ENDF/B library, to processing these data in a robust fashion so that they can be utilized readily in contemporary nuclear applications, and to developing convenient covariance data visualization capabilities. Other papers included in this issue discuss in considerable
Library-Based Learning in an Information Society.
Breivik, Patricia Senn
1986-01-01
The average academic library has great potential for quality nonclassroom learning benefiting students, faculty, alumni, and the local business community. The major detriments are the limited perceptions about libraries and librarians among campus administrators and faculty. Library-based learning should be planned to be assimilated into overall…
Experimental OAI-Based Digital Library Systems
Nelson, Michael L. (Editor); Maly, Kurt (Editor); Zubair, Mohammad (Editor); Rusch-Feja, Diann (Editor)
2002-01-01
The objective of the Open Archives Initiative (OAI) is to develop a simple, lightweight framework to facilitate the discovery of content in distributed archives (http://www.openarchives.org). The focus of the workshop held at the 5th European Conference on Research and Advanced Technology for Digital Libraries (ECDL 2001) was to bring together researchers in the area of digital libraries who are building OAI-based systems, so that they could share their experiences, the problems they are facing, and the approaches they are taking to address them. The workshop consisted of invited talks from well-established researchers working on building OAI-based digital library systems, along with short paper presentations.
Curriculum-based neurosurgery digital library.
Langevin, Jean-Philippe; Dang, Thai; Kon, David; Sapo, Monica; Batzdorf, Ulrich; Martin, Neil
2010-11-01
Recent work-hour restrictions and the constantly evolving body of knowledge are challenging the current ways of teaching neurosurgery residents. To develop a curriculum-based digital library of multimedia content to face the challenges in neurosurgery education. We used the residency program curriculum developed by the Congress of Neurological Surgeons to structure the library and Microsoft Sharepoint as the user interface. This project led to the creation of a user-friendly and searchable digital library that could be accessed remotely and throughout the hospital, including the operating rooms. The electronic format allows standardization of the content and transformation of the operating room into a classroom. This in turn facilitates the implementation of a curriculum within the training program and improves teaching efficiency. Future work will focus on evaluating the efficacy of the library as a teaching tool for residents.
International Nuclear Information System (INIS)
Roussin, R.W.; Drischler, J.D.; Marable, J.H.
1980-01-01
In recent years, multigroup sensitivity profiles and covariance matrices have been added to the Radiation Shielding Information Center's Data Library Collection (DLC). Sensitivity profiles are available in a single package, DLC-45/SENPRO, and covariance matrices are found in two packages, DLC-44/COVERX and DLC-77/COVERV. The contents of these packages are described and their availability is discussed.
Energy Technology Data Exchange (ETDEWEB)
Zhao Xuejing [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); School of mathematics and statistics, Lanzhou University, Lanzhou 730000 (China); Fouladirad, Mitra, E-mail: mitra.fouladirad@utt.f [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Berenguer, Christophe [Universite de Technologie de Troyes, Institut Charles Delaunay and STMR UMR CNRS 6279, 12 rue Marie Curie, 10010 Troyes (France); Bordes, Laurent [Universite de Pau et des Pays de l' Adour, LMA UMR CNRS 5142, 64013 PAU Cedex (France)
2010-08-15
The aim of this paper is to discuss the problem of modelling and optimising condition-based maintenance policies for a deteriorating system in the presence of covariates. The deterioration is modelled by a non-monotone stochastic process. The covariate process is assumed to be a time-homogeneous Markov chain with finite state space. A model similar to the proportional hazards model is used to show the influence of covariates on the deterioration. In the framework of the system under consideration, an appropriate inspection/replacement policy which minimises the expected average maintenance cost is derived. The average cost under different conditions of covariates and different maintenance policies is analysed through simulation experiments to compare the policies' performances.
Boulton, Stephen; Selvaratnam, Rajeevan; Ahmed, Rashik; Melacini, Giuseppe
2018-01-01
Mapping allosteric sites is emerging as one of the central challenges in physiology, pathology, and pharmacology. Nuclear Magnetic Resonance (NMR) spectroscopy is ideally suited to map allosteric sites, given its ability to sense at atomic resolution the dynamics underlying allostery. Here, we focus specifically on the NMR CHEmical Shift Covariance Analysis (CHESCA), in which allosteric systems are interrogated through a targeted library of perturbations (e.g., mutations and/or analogs of the allosteric effector ligand). The atomic resolution readout for the response to such perturbation library is provided by NMR chemical shifts. These are then subject to statistical correlation and covariance analyses resulting in clusters of allosterically coupled residues that exhibit concerted responses to the common set of perturbations. This chapter provides a description of how each step in the CHESCA is implemented, starting from the selection of the perturbation library and ending with an overview of different clustering options.
Testing power-law cross-correlations: Rescaled covariance test
Czech Academy of Sciences Publication Activity Database
Krištoufek, Ladislav
2013-01-01
Roč. 86, č. 10 (2013), 418-1-418-15 ISSN 1434-6028 R&D Projects: GA ČR GA402/09/0965 Institutional support: RVO:67985556 Keywords : power-law cross-correlations * testing * long-term memory Subject RIV: AH - Economics Impact factor: 1.463, year: 2013 http://library.utia.cas.cz/separaty/2013/E/kristoufek-testing power-law cross-correlations rescaled covariance test.pdf
Managing distance and covariate information with point-based clustering
Directory of Open Access Journals (Sweden)
Peter A. Whigham
2016-09-01
Abstract Background Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach based on Ripley's K, applied to the problem of clustering of deliberate self-harm (DSH), is presented. Methods Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data were derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Results Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH at spatial scales up to 500 m with a one-sided 95 % CI, suggesting that social contagion may be present for this urban cohort. Conclusions Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for ...
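The constrained Monte-Carlo idea above can be sketched in miniature: compute a naive (edge-uncorrected) Ripley's K for the observed cases, then build a null distribution by repeatedly relabelling random subsets of the finite set of candidate locations. Everything here is invented toy data, not the study's implementation, and omits the covariate adjustment and Minkowski metric.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite set of candidate locations (e.g. residential parcels)
# in a 1000 m x 1000 m study area.
parcels = rng.uniform(0, 1000, size=(400, 2))

def ripley_k(points, r, area):
    """Naive Ripley's K (no edge correction): scaled count of pairs within r."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = np.sum((d > 0) & (d <= r))
    lam = n / area                       # estimated intensity
    return pairs / (n * lam)

# "Observed" cases: a deliberately clustered subset of parcels
cases = parcels[np.argsort(np.linalg.norm(parcels - [500, 500], axis=1))[:30]]
k_obs = ripley_k(cases, r=200.0, area=1000.0 * 1000.0)

# Monte-Carlo envelope under random labelling constrained to the parcel set
k_sim = np.array([
    ripley_k(parcels[rng.choice(len(parcels), size=30, replace=False)],
             r=200.0, area=1000.0 * 1000.0)
    for _ in range(199)
])
print(k_obs > np.quantile(k_sim, 0.95))  # one-sided evidence of clustering at 200 m
```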
Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms
Directory of Open Access Journals (Sweden)
Fernando A. Auat Cheein
2013-01-01
Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the computational complexity of estimation without significantly improving estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The criterion is based on determining the most significant measurement from both an estimation-convergence perspective and the covariance matrix associated with the measurement, and it is independent of the nature of the measured variable. The criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
Zero-base budgeting and the library.
Sargent, C W
1978-01-01
This paper describes the application of zero-base budgeting to libraries and the procedures involved in setting up this type of budget. It describes the "decision packages" necessary when this system is employed, as well as how to rank the packages and the problems related to the process. Zero-base budgeting involves the entire staff of a library, and the incentive engendered makes for a better and more realistic budget. The paper concludes with the problems one might encounter in zero-base budgeting and the major benefits of the system. PMID:626795
Litvinenko, Alexander
2017-09-26
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arising from large, dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example in which the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
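As a rough illustration of the likelihood computation that HLIBCov accelerates, the sketch below fits a Matérn covariance with fixed smoothness ν = 1/2 (the exponential kernel) by maximizing the Gaussian log-likelihood with dense linear algebra. The data, starting values and jitter are invented; the $\mathcal{H}$-matrix machinery in HLIBCov replaces the O(n³) dense Cholesky step shown here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def exp_cov(pts, ell, sigma2):
    # Matern covariance with smoothness nu = 1/2 (the exponential kernel),
    # built densely here; HLIBCov approximates such matrices in H-format.
    return sigma2 * np.exp(-cdist(pts, pts) / ell)

def neg_loglik(theta, pts, z):
    ell, sigma2 = np.exp(theta)             # log-parametrisation keeps both positive
    C = exp_cov(pts, ell, sigma2) + 1e-8 * np.eye(len(z))
    L = np.linalg.cholesky(C)               # O(n^3) dense; H-matrices make this cheap
    alpha = np.linalg.solve(L, z)
    return np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha

rng = np.random.default_rng(2)
pts = rng.uniform(size=(200, 2))            # locations on the unit square
z = np.linalg.cholesky(exp_cov(pts, 0.3, 1.0) + 1e-8 * np.eye(200)) \
    @ rng.normal(size=200)                  # synthetic field with known parameters

res = minimize(neg_loglik, x0=np.log([0.1, 0.5]), args=(pts, z),
               method="Nelder-Mead")
ell_hat, sigma2_hat = np.exp(res.x)
print(ell_hat, sigma2_hat)                  # maximum-likelihood estimates
```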
Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu
2017-01-01
This work aims to improve HTGR neutron physics design calculations by applying uncertainty analysis with the use of cross-section covariance information. Methodology and codes for the preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of the SCALE-6 code system were developed. A 69-group library of covariance information in a special format for the main isotopes and elements typical of high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimating uncertainties associated with nuclear data in the analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for the main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model, were performed. These uncertainties were estimated with the developed technology using the WIMS-D code and modules of the SCALE-6 code system, namely TSUNAMI, KENO-VI and SAMS. The eight most important reactions on isotopes for the MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).
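Uncertainty propagation of the kind performed by sensitivity/uncertainty tools such as TSUNAMI/SAMS rests on the first-order "sandwich" rule, var(R) = SᵀCS. Below is a minimal numeric sketch with invented 4-group sensitivities and an invented relative covariance matrix, not data from the benchmark.

```python
import numpy as np

# Hypothetical 4-group sandwich-rule example:
#   var(R) = S^T C S,
# where S holds relative sensitivities of a response R (e.g. k-inf) to the
# group cross sections and C is their relative covariance matrix.
S = np.array([0.10, 0.25, 0.40, 0.15])       # dR/R per dsigma/sigma, per group

std = np.array([0.02, 0.03, 0.05, 0.04])     # relative standard deviations
corr = np.array([[1.0, 0.6, 0.2, 0.0],
                 [0.6, 1.0, 0.5, 0.1],
                 [0.2, 0.5, 1.0, 0.4],
                 [0.0, 0.1, 0.4, 1.0]])
C = np.outer(std, std) * corr                # relative covariance matrix

var_R = S @ C @ S
print(f"relative uncertainty of R: {np.sqrt(var_R):.4%}")
```

Note how the positive off-diagonal correlations enlarge the propagated uncertainty relative to the uncorrelated (diagonal-only) case.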
Chang, Yi-Pin; Chu, Yen-Ho
2014-05-16
The design, synthesis and screening of diversity-oriented peptide libraries using a "libraries from libraries" strategy for the development of inhibitors of α1-antitrypsin deficiency are described. The main buttress of the biochemical approach presented here is the use of the well-established solid-phase split-and-mix method for the generation of mixture-based libraries. The combinatorial technique of iterative deconvolution was employed for library screening. While molecular diversity is the general consideration of combinatorial libraries, exquisite design through systematic screening of small individual libraries is a prerequisite for effective library screening and can avoid potential problems in some cases. This review also illustrates how large peptide libraries were designed, as well as how a conformation-sensitive assay was developed based on the mechanism of the conformational disease. Finally, the combinatorially selected peptide inhibitor capable of blocking abnormal protein aggregation is characterized by biophysical, cellular and computational methods.
Parcellation of the human orbitofrontal cortex based on gray matter volume covariance.
Liu, Huaigui; Qin, Wen; Qi, Haotian; Jiang, Tianzi; Yu, Chunshui
2015-02-01
The human orbitofrontal cortex (OFC) is an enigmatic brain region that cannot be parcellated reliably using diffusion and functional magnetic resonance imaging (fMRI) because of signal dropout resulting from inherent limitations of these imaging techniques. We hypothesise that the OFC can be reliably parcellated into subregions based on gray matter volume (GMV) covariance patterns derived from artefact-free structural images. A total of 321 healthy young subjects were examined by high-resolution structural MRI. The OFC was parcellated into subregions based on GMV covariance patterns; then sex and laterality differences in the GMV covariance pattern of each OFC subregion were compared. The human OFC was parcellated into the anterior (OFCa), medial (OFCm), posterior (OFCp), intermediate (OFCi), and lateral (OFCl) subregions. This parcellation scheme was validated by the same analyses of the left OFC and the bilateral OFCs in male and female subjects. Both visual observation and quantitative comparisons indicated a unique GMV covariance pattern for each OFC subregion. These OFC subregions mainly covaried with the prefrontal and temporal cortices, cingulate cortex and amygdala. In addition, GMV correlations of most OFC subregions were similar across sex and laterality, except for a significant laterality difference in the OFCl: the right OFCl had a stronger GMV correlation with the right inferior frontal cortex. Using high-resolution structural images, we established a reliable parcellation scheme for the human OFC, which may provide an in vivo guide for subregion-level studies of this region and improve our understanding of the human OFC at the subregional level. © 2014 Wiley Periodicals, Inc.
Robust entry guidance using linear covariance-based model predictive control
Directory of Open Access Journals (Sweden)
Jianjun Luo
2017-02-01
For atmospheric entry vehicles, guidance design can be accomplished by solving an optimal control problem using optimal control theory. However, traditional design methods generally focus on nominal performance and do not include considerations of robustness in the design process. This paper proposes a linear covariance-based model predictive control method for robust entry guidance design. First, linear covariance analysis is employed to incorporate robustness directly into the guidance design. The closed-loop covariance with the feedback-updated control command is formulated to provide the expected errors of the nominal state variables in the presence of uncertainties. Then, the closed-loop covariance is used as a component of the cost function to guarantee robustness and reduce sensitivity to uncertainties. After that, model predictive control is used to solve the optimal problem, and the control commands (bank angles) are calculated. Finally, a series of simulations for different missions were completed to demonstrate high precision and robustness with respect to initial perturbations as well as uncertainties in the entry process. The 3σ confidence-region results in the presence of uncertainties show that the robustness of the guidance has been improved, and the errors of the state variables are decreased by approximately 35%.
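The covariance propagation at the heart of linear covariance analysis can be sketched for a generic discrete linear system with state feedback: the closed-loop covariance recursion P ← A_cl P A_clᵀ + Q predicts the expected state dispersions without Monte-Carlo runs. All matrices below are invented, not the paper's entry dynamics or gains.

```python
import numpy as np

# Minimal sketch: closed-loop linear covariance propagation, the quantity
# that linear-covariance guidance design folds into its cost function.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])                  # hypothetical discrete dynamics
B = np.array([[0.005],
              [0.1]])
K = np.array([[1.2, 1.8]])                  # hypothetical feedback gain
Q = 1e-4 * np.eye(2)                        # process-noise covariance

P = np.diag([0.5, 0.1])                     # initial state covariance
Acl = A - B @ K                             # closed-loop transition matrix
for _ in range(100):
    P = Acl @ P @ Acl.T + Q                 # covariance update under feedback

print(np.sqrt(np.diag(P)))                  # predicted 1-sigma dispersions
```

With a stabilizing gain the initial dispersion decays and P settles near the steady-state solution of the discrete Lyapunov equation.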
Information provision in medical libraries: An evidence based ...
African Journals Online (AJOL)
The paper examined information provision in special libraries such as medical libraries. It provides an overview of evidence based practice as a concept for information provision by librarians. It specifically clarifies the meaning of the term evidence as used in evidence based practice and in evidence based medicine, from where ...
Web-Based Instruction: A Guide for Libraries, Third Edition
Smith, Susan Sharpless
2010-01-01
Expanding on the popular, practical how-to guide for public, academic, school, and special libraries, technology expert Susan Sharpless Smith offers library instructors the confidence to take Web-based instruction into their own hands. Smith has thoroughly updated "Web-Based Instruction: A Guide for Libraries" to include new tools and trends,…
Evidence Based Management as a Tool for Special Libraries
Directory of Open Access Journals (Sweden)
Bill Fisher
2007-12-01
Objective ‐ To examine the evidence based management literature, as an example of evidence based practice, and determine how applicable evidence based management might be in the special library environment. Methods ‐ Recent general management literature and the subject-focused literature of evidence based management were reviewed; likewise, recent library/information science management literature and the subject-focused literature of evidence based librarianship were reviewed to identify relevant examples of the introduction and use of evidence based practice in organizations. Searches were conducted in major business/management databases, major library/information science databases, and relevant Web sites, blogs and wikis. Citation searches on key articles and follow-up searches on cited references were also conducted. Analysis of the retrieved literature was conducted to find similarities and/or differences between the management literature and the library/information science literature, especially as it related to special libraries. Results ‐ The barriers to introducing evidence based management into most organizations were found to apply to many special libraries and are similar to issues involved with evidence based practice in librarianship in general. Despite these barriers, a set of resources to assist special librarians in accessing research-based information to help them use principles of evidence based management is identified. Conclusion ‐ While most special librarians are faced with a number of barriers to using evidence based management, resources do exist to help overcome these obstacles.
Robust Covariance Estimators Based on Information Divergences and Riemannian Manifold
Directory of Open Access Journals (Sweden)
Xiaoqiang Hua
2018-03-01
This paper proposes a class of covariance estimators based on information divergences in heterogeneous environments. In particular, the problem of covariance estimation is reformulated on the Riemannian manifold of Hermitian positive-definite (HPD) matrices. The means associated with information divergences are derived and used as the estimators. Without resorting to complete knowledge of the probability distribution of the sample data, the geometry of the Riemannian manifold of HPD matrices is incorporated into the mean estimators. Moreover, the robustness of the mean estimators is analyzed using the influence function. Simulation results indicate the robustness and superiority of an adaptive normalized matched filter with the proposed estimators compared with existing alternatives.
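One concrete manifold-aware mean in this spirit is the log-Euclidean mean, sketched below on synthetic positive-definite matrices. This is an illustration of the idea of averaging on the manifold rather than the paper's exact divergence-based estimators, and the sample matrices are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_spd(n):
    a = rng.normal(size=(n, n))
    return a @ a.T + n * np.eye(n)            # well-conditioned SPD sample

def spd_log(c):                               # matrix log via eigendecomposition
    w, v = np.linalg.eigh(c)
    return v @ np.diag(np.log(w)) @ v.T

def spd_exp(s):                               # matrix exp via eigendecomposition
    w, v = np.linalg.eigh(s)
    return v @ np.diag(np.exp(w)) @ v.T

# Secondary data: a set of covariance-matrix estimates
samples = [random_spd(4) for _ in range(10)]

# Log-Euclidean mean: average on the manifold's log-domain, then map back.
# The plain arithmetic mean would ignore the positive-definite geometry.
log_mean = spd_exp(sum(spd_log(c) for c in samples) / len(samples))
print(np.linalg.eigvalsh(log_mean))           # all eigenvalues positive
```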
Pozsgay, Victor; Hirsch, Flavien; Branciard, Cyril; Brunner, Nicolas
2017-12-01
We introduce Bell inequalities based on covariance, one of the most common measures of correlation. Explicit examples are discussed, and violations in quantum theory are demonstrated. A crucial feature of these covariance Bell inequalities is their nonlinearity; this has nontrivial consequences for the derivation of their local bound, which is not reached by deterministic local correlations. For our simplest inequality, we derive analytically tight bounds for both local and quantum correlations. An interesting application of covariance Bell inequalities is that they can act as "shared randomness witnesses": specifically, the value of the Bell expression gives device-independent lower bounds on both the dimension and the entropy of the shared random variable in a local model.
On patterns of conditional independences and covariance signs among binary variables
Czech Academy of Sciences Publication Activity Database
Matúš, František
Roč. 154, č. 2 (2018), s. 511-524 ISSN 0236-5294 R&D Projects: GA ČR(CZ) GA16-12010S Institutional support: RVO:67985556 Keywords : conditional independence * covariance * correlation Subject RIV: BA - General Mathematics OBOR OECD: Statistics and probability Impact factor: 0.583, year: 2016 http://library.utia.cas.cz/separaty/2018/MTR/matus-0488279.pdf
Analysis of valve failures from the NUCLARR data base
International Nuclear Information System (INIS)
Moore, L.M.
1997-11-01
The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) contains data on component failures with categorical and qualifying information such as component design, normal operating state, system application and safety grade information which is important to the development of risk-based component surveillance testing requirements. This report presents descriptions and results of analyses of valve component failure data and covariate information available in the document Nuclear Computerized Library for Assessing Reactor Reliability Data Manual, Part 3: Hardware Component Failure Data (NUCLARR Data Manual). Although there are substantial records on valve performance, there are many categories of the corresponding descriptors and qualifying information for which specific values are missing. Consequently, this limits the data available for analysis of covariate effects. This report presents cross tabulations by different covariate categories and limited modeling of covariate effects for data subsets with substantive non-missing covariate information
Status of the JEFF data library
International Nuclear Information System (INIS)
Nordborg, C.
2006-01-01
A new improved version of the OECD Nuclear Energy Agency (NEA) co-ordinated Joint Evaluated Fission and Fusion (JEFF) data library, JEFF-3.1, was released in May 2005. It comprises a general purpose library and the following five special purpose libraries: activation; thermal scattering law; radioactive decay; fission yield; and proton library. The objective of the previous version of the library (JEFF-2.2) was to achieve improved performance for existing reactors and fuel cycles. In addition to this objective, the JEFF-3.1 library aims to provide users with data for a wider range of applications. These include innovative reactor concepts, transmutation of radioactive waste, fusion, and various other energy and non-energy related industrial applications. Initial benchmark testing has confirmed the expected very good performance of the JEFF-3.1 library. Additional benchmarking of the libraries is underway, both for the general purpose and for the special purpose libraries. A new three-year mandate to continue developing the JEFF library was recently granted by the NEA. For the next version of the library, JEFF-3.2, it is foreseen to put more effort into fission product and minor actinide evaluations, as well as the inclusion of more covariance data. (authors)
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
Ole E. Barndorff-Nielsen; Neil Shephard
2002-01-01
This paper analyses multivariate high frequency financial data using realised covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis and covariance. It will be based on a fixed interval of time (e.g. a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions and covariances change through time. In particular w...
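Realised covariation itself is simple to compute: it is the sum of outer products of the intraday return vectors over the fixed interval. A synthetic sketch with invented 5-minute returns (the correlation of 0.6 is a simulation choice, not an empirical value):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 5-minute log-returns for two assets over one trading day
# (78 intervals), correlated by construction.
n = 78
chol = np.linalg.cholesky(np.array([[1.0, 0.6],
                                    [0.6, 1.0]]))
returns = (rng.normal(size=(n, 2)) @ chol.T) * 0.001

# Realised covariation: sum of outer products of intraday returns.
# As sampling frequency grows, this converges to the day's integrated covariance.
realised_cov = returns.T @ returns
realised_corr = realised_cov[0, 1] / np.sqrt(realised_cov[0, 0]
                                             * realised_cov[1, 1])
print(realised_corr)  # an estimate of the correlation used in the simulation
```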
SG39 Deliverables. Comments on Covariance Data
International Nuclear Information System (INIS)
Yokoyama, Kenji
2015-01-01
The covariance matrix of a scattered data set, x_i (i=1,n), must be symmetric and positive-definite. As one of the WPEC/SG39 contributions to the SG40/CIELO project, several comments and recommendations on covariance data are given here from the viewpoint of nuclear-data users. To make the comments concrete and useful for nuclear-data evaluators, the covariance data of the latest evaluated nuclear data libraries, JENDL-4.0 and ENDF/B-VII.1, are treated as representative materials. The surveyed nuclides are the five isotopes most important for fast reactor applications. The nuclides, reactions and energy regions dealt with are the following: Pu-239: fission (2.5∼10 keV) and capture (2.5∼10 keV); U-235: fission (500 eV∼10 keV) and capture (500 eV∼30 keV); U-238: fission (1∼10 MeV), capture (below 20 keV, 20∼150 keV), inelastic (above 100 keV) and elastic (above 20 keV); Fe-56: elastic (below 850 keV) and average scattering cosine (above 10 keV); and Na-23: capture (600 eV∼600 keV), inelastic (above 1 MeV) and elastic (around 2 keV)
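The two properties the comment insists on, symmetry and positive-definiteness, are cheap to verify numerically; attempting a Cholesky factorization is the standard positive-definiteness test. A small sketch with invented matrices:

```python
import numpy as np

def check_covariance(C, tol=1e-10):
    """Verify the two properties an evaluated covariance matrix must satisfy."""
    symmetric = np.allclose(C, C.T, atol=tol)
    try:                                     # Cholesky succeeds iff C is
        np.linalg.cholesky(C)                # (numerically) positive-definite
        positive_definite = True
    except np.linalg.LinAlgError:
        positive_definite = False
    return symmetric, positive_definite

good = np.array([[4.0, 1.0], [1.0, 3.0]])
bad = np.array([[1.0, 2.0], [2.0, 1.0]])     # eigenvalues 3 and -1
print(check_covariance(good))  # (True, True)
print(check_covariance(bad))   # (True, False)
```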
The Performance Analysis Based on SAR Sample Covariance Matrix
Directory of Open Access Journals (Sweden)
Esra Erten
2012-03-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present, in general, zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has frequently been used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
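The statistic under study can be reproduced on synthetic data: draw zero-mean circular Gaussian samples, form the sample covariance matrix (complex-Wishart distributed), and take its maximum eigenvalue. A sketch with an invented 3-channel covariance and look count:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 3-channel SAR pixel stack: zero-mean circular Gaussian samples
p, n = 3, 25                                 # channels, number of looks
true_cov = np.array([[2.0, 0.5, 0.1],
                     [0.5, 1.0, 0.3],
                     [0.1, 0.3, 0.5]])
chol = np.linalg.cholesky(true_cov)
z = chol @ (rng.normal(size=(p, n))
            + 1j * rng.normal(size=(p, n))) / np.sqrt(2)

# Sample covariance matrix: complex-Wishart distributed for Gaussian clutter
C_hat = z @ z.conj().T / n

# Maximum eigenvalue: the detection/estimation statistic discussed in the paper
lam_max = np.linalg.eigvalsh(C_hat)[-1]      # eigvalsh returns ascending order
print(lam_max)
```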
Fee-based services in sci-tech libraries
Mount, Ellis
2013-01-01
This timely and important book explores how fee-based services have developed in various types of sci-tech libraries. The authoritative contributors focus on the current changing financial aspects of the sci-tech library operation and clarify for the reader how these changes have brought about conditions in which traditional methods of funding are no longer adequate. What new options are open and how they are best being applied in today's sci-tech libraries is fully and clearly explained and illustrated. Topics explored include cost allocation and cost recovery, fees for computer searching, an
[Progress in the spectral library based protein identification strategy].
Yu, Derui; Ma, Jie; Xie, Zengyan; Bai, Mingze; Zhu, Yunping; Shu, Kunxian
2018-04-25
Mass spectrometry (MS) data have grown exponentially as MS-based proteomics has developed rapidly. It is a great challenge to develop quick, accurate and repeatable methods to identify peptides and proteins. Nowadays, spectral library searching has become a mature strategy for protein identification from tandem mass spectra in proteomics: it searches the experimental spectra against a collection of confidently identified MS/MS spectra that have been observed previously, and fully utilizes peak abundances, peaks from non-canonical fragment ions, and other spectral features. This review provides a comprehensive overview of the spectral library search strategy and its two key steps, spectral library construction and spectral library searching, and discusses the progress and challenges of the strategy.
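The scoring step of spectral library searching is commonly a normalised dot product between binned spectra. A hypothetical sketch follows; the square-root intensity weighting is one common choice, not necessarily the one used by any particular search engine, and the spectra are invented.

```python
import numpy as np

def spectral_match(query, library_entry):
    """Normalised dot product, the core similarity score in library searching."""
    q = np.sqrt(query)                       # sqrt weighting de-emphasises
    r = np.sqrt(library_entry)               # dominant peaks (a common choice)
    return (q @ r) / (np.linalg.norm(q) * np.linalg.norm(r))

# Hypothetical binned fragment-ion intensities for one peptide spectrum
library_spectrum = np.array([0.0, 10.0, 0.0, 55.0, 20.0, 0.0, 90.0, 5.0])
replicate = library_spectrum * 1.1 + np.array([0.5, 0, 0, 2, 1, 0, 3, 0])
unrelated = np.array([30.0, 0.0, 80.0, 0.0, 0.0, 60.0, 0.0, 10.0])

print(spectral_match(replicate, library_spectrum))   # near 1.0
print(spectral_match(unrelated, library_spectrum))   # much lower
```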
AUTOMATION BASED LIBRARY MANAGEMENT IN DEPOK PUBLIC LIBRARY IN THE CONTEXT OF RITUAL PERFORMANCE
Directory of Open Access Journals (Sweden)
Rafiqa Maulidia
2017-06-01
Library management using a manual system is no longer adequate to handle the workload of library routines; librarians must use library automation applications. To deliver good working performance, librarians use strategies, competences and certain habits, referred to here as ritual performance. Ritual performance is the spontaneous demonstration of competence by individuals in dealing with individuals, groups and organizations, and contains elements of personal ritual, work ritual, social ritual, and organizational ritual. The research focuses on automation-based library management in the context of ritual performance. This study used a qualitative approach with the case study method. The findings suggest that personal rituals show the personal habits of librarians in doing their tasks, librarians' work rituals show responsibility towards their duties, social rituals strengthen the emotional connection between librarians and leaders, and organizational rituals suggest the involvement of librarians in contributing to decision making. The conclusions of this study show that the ritual performance of librarians at Depok Public Library gives librarians the skills to implement automation systems in library management, and reflects the values of responsibility, mutual trust, and mutual respect. Key words: Library Management, Library Automation, Ritual Performance, Ritual Performance Value
Quality Quantification of Evaluated Cross Section Covariances
International Nuclear Information System (INIS)
Varet, S.; Dossantos-Uzarralde, P.; Vayatis, N.
2015-01-01
Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can be different according to the method used and according to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without the knowledge of the true covariance matrix. The full approach is illustrated on the 85 Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations
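For zero-mean Gaussians, the Kullback-Leibler criterion has a closed form in the covariance matrices: KL = ½(tr(C_true⁻¹ C_est) − k + ln det C_true − ln det C_est). A small sketch with invented 2×2 matrices (illustrating the criterion itself, not the paper's bootstrap estimation of it):

```python
import numpy as np

def kl_gaussian_zero_mean(C_est, C_true):
    """KL distance between zero-mean Gaussians, as a covariance-quality score."""
    k = C_true.shape[0]
    inv_true = np.linalg.inv(C_true)
    _, logdet_true = np.linalg.slogdet(C_true)
    _, logdet_est = np.linalg.slogdet(C_est)
    return 0.5 * (np.trace(inv_true @ C_est) - k + logdet_true - logdet_est)

C_true = np.array([[1.0, 0.3], [0.3, 2.0]])    # hypothetical "true" covariance
C_close = np.array([[1.1, 0.25], [0.25, 1.9]])  # a good estimate
C_far = np.array([[3.0, -0.5], [-0.5, 0.4]])    # a poor estimate

print(kl_gaussian_zero_mean(C_close, C_true))  # small
print(kl_gaussian_zero_mean(C_far, C_true))    # larger
```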
Adaptive learning with covariate shift-detection for motor imagery-based brain–computer interface
Raza, H; Cecotti, H; Li, Y; Prasad, G
2015-01-01
A common assumption in traditional supervised learning is the similar probability distribution of data between the training phase and the testing/operating phase. When transitioning from the training to testing phase, a shift in the probability distribution of input data is known as a covariate shift. Covariate shifts commonly arise in a wide range of real-world systems such as electroencephalogram-based brain–computer interfaces (BCIs). In such systems, there is a necessity for continuous mo...
NParCov3: A SAS/IML Macro for Nonparametric Randomization-Based Analysis of Covariance
Directory of Open Access Journals (Sweden)
Richard C. Zink
2012-07-01
Analysis of covariance serves two important purposes in a randomized clinical trial. First, there is a reduction of variance for the treatment effect, which provides more powerful statistical tests and more precise confidence intervals. Second, it provides estimates of the treatment effect that are adjusted for random imbalances of covariates between the treatment groups. The nonparametric analysis of covariance method of Koch, Tangen, Jung, and Amara (1998) defines a very general methodology using weighted least squares to generate covariate-adjusted treatment effects with minimal assumptions. This methodology is general in its applicability to a variety of outcomes, whether continuous, binary, ordinal, incidence density or time-to-event. Further, its use has been illustrated in many clinical trial settings, such as multi-center, dose-response and non-inferiority trials. NParCov3 is a SAS/IML macro written to conduct the nonparametric randomization-based covariance analyses of Koch et al. (1998). The software can analyze a variety of outcomes and can account for stratification. Data from multiple clinical trials are used for illustration.
Activity-Based Costing in User Services of an Academic Library.
Ellis-Newman, Jennifer
2003-01-01
The rationale for using Activity-Based Costing (ABC) in a library is to allocate indirect costs to products and services based on the factors that most influence them. This paper discusses the benefits of ABC to library managers and explains the steps involved in implementing ABC in the user services area of an Australian academic library.…
General Galilei Covariant Gaussian Maps
Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo
2017-09-01
We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].
Proofs of Contracted Length Non-covariance
International Nuclear Information System (INIS)
Strel'tsov, V.N.
1994-01-01
Different proofs of the non-covariance of the contracted length are discussed. The approach based on establishing the inconstancy of the interval (its dependence on velocity) seems to be the most convincing one. It is stressed that the known non-covariance of the electromagnetic field energy and momentum of a moving charge ('the 4/3 problem') is a direct consequence of the non-covariance of the contracted length. 8 refs
Dendrimer-based dynamic combinatorial libraries
Chang, T.; Meijer, E.W.
2005-01-01
The aim of this project is to create water-soluble dynamic combinatorial libraries based upon dendrimer-guest complexes. The guest molecules are designed to bind to dendrimers using multiple secondary interactions, such as electrostatics and hydrogen bonding. We have been able to incorporate various guest
ISSUES IN NEUTRON CROSS SECTION COVARIANCES
Energy Technology Data Exchange (ETDEWEB)
Mattoon, C.M.; Oblozinsky,P.
2010-04-30
We review neutron cross section covariances in both the resonance and fast neutron regions, with the goal of identifying existing issues in evaluation methods and their impact on covariances. We also outline ideas for suitable covariance quality assurance procedures. We show that the topic of covariance data remains controversial, that the evaluation methodologies are not fully established, and that covariances produced by different approaches have an unacceptable spread. The main controversy lies between the very low uncertainties generated by rigorous evaluation methods and the much larger uncertainties based on simple estimates from experimental data. Since evaluators tend to trust the former, while users tend to trust the latter, this controversy has considerable practical implications. Dedicated effort is needed to arrive at covariance evaluation methods that would resolve this issue and produce results accepted internationally by both evaluators and users.
Views From the Pacific--Military Base Hospital Libraries in Hawaii and Guam.
Stephenson, Priscilla L; Trafford, Mabel A; Hadley, Alice E
2016-01-01
Hospital libraries serving military bases offer a different perspective on library services. Two libraries located on islands in the Pacific Ocean provide services to active duty service men and women, including those deployed to other regions of the world. In addition, these hospital libraries serve service members' families living on the base, and often citizens from the surrounding communities.
Automation Based Library Management in Depok Public Library In The Context of Ritual Performance
Directory of Open Access Journals (Sweden)
Rafiqa Maulidia
2018-01-01
Library management using a manual system is no longer adequate to handle the workload of library routines; librarians must use library automation applications. To deliver good working performance, librarians rely on strategies, competences and certain habits, which are referred to here as ritual performance. Ritual performance is the spontaneous demonstration of competence by individuals in dealing with other individuals, groups and organizations, and it contains elements of personal, work, social, and organizational rituals. The research focuses on automation-based library management in the context of ritual performance. This study used a qualitative approach with the case study method. The findings suggest that personal rituals reflect the habits librarians bring to their tasks, work rituals show responsibility towards their duties, social rituals strengthen the emotional connection between librarians and leaders, and organizational rituals indicate the involvement of librarians in contributing to decision making. The conclusions of this study show that ritual performance gives librarians at Depok Public Library the skills to implement automation systems in library management, and reflects the values of responsibility, mutual trust, and mutual respect.
Covariant canonical quantization of fields and Bohmian mechanics
International Nuclear Information System (INIS)
Nikolic, H.
2005-01-01
We propose a manifestly covariant canonical method of field quantization based on the classical De Donder-Weyl covariant canonical formulation of field theory. Owing to covariance, the space and time arguments of fields are treated on an equal footing. To achieve both covariance and consistency with standard non-covariant canonical quantization of fields in Minkowski spacetime, it is necessary to adopt a covariant Bohmian formulation of quantum field theory. A preferred foliation of spacetime emerges dynamically owing to a purely quantum effect. The application to a simple time-reparametrization invariant system and quantum gravity is discussed and compared with the conventional non-covariant Wheeler-DeWitt approach. (orig.)
Directory of Open Access Journals (Sweden)
Zahra Naseri
2016-08-01
The current study investigates the status of the user interfaces of non-Iranian digital libraries with respect to social bookmarking capabilities and characteristics, for potential use by Iranian digital libraries. The research examines the characteristics and capabilities of the user interfaces of top digital libraries in the world based on the social bookmarking features used by library users. This capability facilitates producing, identifying, organizing, and sharing content using tags. A survey method with a descriptive-analytical approach was used. The population comprises non-Iranian digital library interfaces; the interfaces of the top ten digital libraries were selected as the sample. A researcher-made checklist was prepared based on a literature review and an examination of four prominent websites (LibraryThing, Delicious, Amazon, and Google Books). Face validity was evaluated using the viewpoints of ten experts, and reliability was calculated at 0.87. The findings of this study are important for two reasons: first, they provide a comprehensive and unambiguous view of the basic capabilities and characteristics of user interfaces based on social bookmarking; second, they can provide a basis for designing digital libraries in Iran. The results showed that the majority of digital libraries around the world had not used web 2.0 characteristics such as producing, identifying, organizing, and sharing content, except for two digital libraries (Google Books and Ibiblio).
Covariate analysis of bivariate survival data
Energy Technology Data Exchange (ETDEWEB)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators, which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values, where the expected values are determined from a specified parametric distribution. The model estimation is based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models are compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Nuclear energy and astrophysics applications of ENDF/B-VII.1 evaluated nuclear library
International Nuclear Information System (INIS)
Pritychenko, B.
2012-01-01
The recently released ENDF/B-VII.1 evaluated nuclear library contains the most up-to-date evaluated neutron cross section and covariance data. These data provide new opportunities for nuclear science and astrophysics application development. The improvements in neutron cross section evaluations and the more extensive utilization of covariance files by the Cross Section Evaluation Working Group (CSEWG) collaboration allow users to produce neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates, and provide additional insights on the currently available neutron-induced reaction data. Nuclear reaction calculations using the ENDF/B-VII.1 library and current computer technologies will be discussed and new results will be presented.
SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data
International Nuclear Information System (INIS)
Williams, Mark L.; Rearden, Bradley T.
2008-01-01
Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
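Response uncertainties in S/U codes of this kind follow the sandwich rule var(R) = S'CS, where S holds relative sensitivity coefficients and C is a relative covariance matrix. A sketch with toy two-group numbers (not TSUNAMI's actual interface):

```python
import numpy as np

def response_uncertainty(sensitivity, covariance):
    """Relative response variance via the sandwich rule: var = S^T C S."""
    s = np.asarray(sensitivity, dtype=float)
    return float(s @ covariance @ s)

# Two-group toy problem: 5% and 3% relative cross-section uncertainties,
# uncorrelated, with relative sensitivities 0.8 and 0.4 (assumed values).
cov = np.diag([0.05 ** 2, 0.03 ** 2])
var = response_uncertainty([0.8, 0.4], cov)
rel_unc = var ** 0.5        # relative uncertainty on the response, ~4.2%
```

Off-diagonal covariance entries (energy-energy or reaction-reaction correlations) enter the same formula unchanged.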
Structural covariance and cortical reorganisation in schizophrenia: a MRI-based morphometric study.
Palaniyappan, Lena; Hodgson, Olha; Balain, Vijender; Iwabuchi, Sarina; Gowland, Penny; Liddle, Peter
2018-05-06
In patients with schizophrenia, distributed abnormalities are observed in grey matter volume. A recent hypothesis posits that these distributed changes are indicative of a plastic reorganisation process occurring in response to a functional defect in neuronal information transmission. We investigated the structural covariance across various brain regions in early-stage schizophrenia to determine if indeed the observed patterns of volumetric loss conform to a coordinated pattern of structural reorganisation. Structural magnetic resonance imaging scans were obtained from 40 healthy adults and 41 age, gender and parental socioeconomic status matched patients with schizophrenia. Volumes of grey matter tissue were estimated at the regional level across 90 atlas-based parcellations. Group-level structural covariance was studied using a graph theoretical framework. Patients had distributed reduction in grey matter volume, with high degree of localised covariance (clustering) compared with controls. Patients with schizophrenia had reduced centrality of anterior cingulate and insula but increased centrality of the fusiform cortex, compared with controls. Simulating targeted removal of highly central nodes resulted in significant loss of the overall covariance patterns in patients compared with controls. Regional volumetric deficits in schizophrenia are not a result of random, mutually independent processes. Our observations support the occurrence of a spatially interconnected reorganisation with the systematic de-escalation of conventional 'hub' regions. This raises the question of whether the morphological architecture in schizophrenia is primed for compensatory functions, albeit with a high risk of inefficiency.
America's Star Libraries, 2010: Top-Rated Libraries
Lyons, Ray; Lance, Keith Curry
2010-01-01
The "LJ" Index of Public Library Service 2010, "Library Journal"'s national rating of public libraries, identifies 258 "star" libraries. Created by Ray Lyons and Keith Curry Lance, and based on 2008 data from the IMLS, it rates 7,407 public libraries. The top libraries in each group get five, four, or three stars. All included libraries, stars or…
Application of Data Mining in Library-Based Personalized Learning
Directory of Open Access Journals (Sweden)
Lin Luo
2017-12-01
This paper describes mining data with the DBSCAN algorithm in order to help teachers and students find the books they want among the library's holdings. First, a model applying the DBSCAN algorithm to library data mining is proposed, followed by an improvement of the DBSCAN algorithm to meet the application's demands. Finally, an experiment is cited to validate the algorithm. The results show that book price and library inventory level have less impact on the resulting clusters than the classification of books and the frequency of borrowings. Library procurers should therefore base purchasing and subscription decisions on the results of the cluster analysis, improving the hierarchy and structural distribution of library resources, making them more scientific and reasonable, and helping to arouse readers' interest in borrowing.
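A minimal pure-Python DBSCAN (a generic sketch of the standard algorithm, not the paper's improved variant) illustrates the clustering step:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a list of cluster labels (-1 = noise)."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1               # provisionally noise
            continue
        cluster += 1                     # i is a core point: start a cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:     # j is also a core point: expand
                queue.extend(more)
    return labels
```

In the library setting, each point would be a borrowing-record feature vector (e.g. classification code, borrowing frequency), with dense clusters revealing popular categories.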
Use of internet library based services by the students of Imo State ...
African Journals Online (AJOL)
Findings show that students utilize internet-based library services in their academic work, for their intellectual development, and in communicating with their lecturers and other relations about their day-to-day information needs. It is recommended that university libraries should provide and offer internet-based library ...
Covariance descriptor fusion for target detection
Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih
2016-05-01
Target detection is one of the most important topics for military and civilian applications. In order to address such detection tasks, hyperspectral imaging sensors provide image data containing both spatial and spectral information. Target detection presents various challenging scenarios for hyperspectral images. To overcome these challenges, the covariance descriptor offers many advantages. The detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusing covariance descriptor results based on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF performs better than the conventional covariance descriptor.
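A region covariance descriptor is simply the covariance matrix of per-pixel feature vectors collected over an image region; a sketch with an assumed (x, y, intensity) feature layout (the paper's hyperspectral feature pool is richer):

```python
import numpy as np

def region_covariance(features):
    """Region covariance descriptor: the d x d covariance matrix of the
    per-pixel feature vectors (one row per pixel) inside a region."""
    f = np.asarray(features, dtype=float)
    centered = f - f.mean(axis=0)
    return centered.T @ centered / (len(f) - 1)

# Toy 2x2 region, one feature vector per pixel: (x, y, intensity).
feats = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 11.0), (1, 1, 13.0)]
C = region_covariance(feats)     # 3x3 symmetric descriptor
```

Fusion then amounts to computing such descriptors per band cluster and combining the per-cluster detection scores.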
Meric, Ilker; Johansen, Geir A.; Holstad, Marie B.; Mattingly, John; Gardner, Robin P.
2012-05-01
Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data could, on the other hand, be performed either through the single peak analysis or the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads to, through the minimization of the chi-square value, a set of linear equations which has to be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix will become ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning will also be unavoidable whenever the sample contains trace amounts of certain elements or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately.
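Truncated singular value decomposition, the regularization method the authors compare against, can be sketched as follows (toy ill-conditioned system; the function name and cutoff `rcond` are assumed, not from the paper):

```python
import numpy as np

def tsvd_solve(A, b, rcond=1e-6):
    """Least-squares solve via truncated SVD: singular values below
    rcond * s_max are discarded to stabilize an ill-conditioned system."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# Two nearly collinear "libraries" make the normal equations ill-conditioned;
# plain inversion would amplify noise, while truncation stays stable.
A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-10], [1.0, 1.0]])
b = np.array([2.0, 2.0, 2.0])
x = tsvd_solve(A, b)             # well-behaved library multipliers
```

Discarding the near-zero singular values is what keeps the recovered library multipliers finite when two libraries are highly correlated.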
Data depth and rank-based tests for covariance and spectral density matrices
Chau, Joris; Ombao, Hernando; Sachs, Rainer von
2017-06-26
In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.
Evaluation of computer-based library services at Kenneth Dike ...
African Journals Online (AJOL)
This study evaluated computer-based library services/routines at Kenneth Dike Library, University of Ibadan. Four research questions were developed and answered. A survey research design was adopted; using questionnaire as the instrument for data collection. A total of 200 respondents randomly selected from 10 ...
Forecasting Covariance Matrices: A Mixed Frequency Approach
DEFF Research Database (Denmark)
Halbleib, Roxana; Voev, Valeri
This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance...
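The mixing step can be sketched as assembling a covariance forecast from separately forecast volatilities and correlations via Sigma = D R D (toy numbers; the paper's dynamic forecasting models for D and R are not reproduced here):

```python
import numpy as np

def combine_forecasts(vol_forecast, corr_forecast):
    """Assemble a covariance forecast Sigma = D R D from separately
    forecast volatilities (high-frequency) and correlations (daily)."""
    D = np.diag(vol_forecast)
    return D @ corr_forecast @ D

vols = np.array([0.02, 0.01])               # per-asset volatility forecasts
R = np.array([[1.0, 0.3], [0.3, 1.0]])      # correlation forecast
Sigma = combine_forecasts(vols, R)
```

Because D and R are modeled separately, each can come from data sampled at a different frequency, which is the point of the mixed-frequency approach.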
Directory of Open Access Journals (Sweden)
Lei Qin
2014-05-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure over the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up to date and is protected from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka
2015-01-01
The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…
Structure-Based Virtual Screening of Commercially Available Compound Libraries.
Kireev, Dmitri
2016-01-01
Virtual screening (VS) is an efficient hit-finding tool. Its distinctive strength is that it allows one to screen compound libraries that are not physically available in the lab. Moreover, structure-based (SB) VS also enables an understanding of how the hit compounds bind the protein target, thus laying the groundwork for rational hit-to-lead progression. SBVS requires a very limited experimental effort and is particularly well suited for academic labs and small biotech companies that, unlike pharmaceutical companies, do not have physical access to quality small-molecule libraries. Here, we describe SBVS of commercial compound libraries for Mer kinase inhibitors. The screening protocol relies on the docking algorithm Glide, complemented by a post-docking filter based on structural protein-ligand interaction fingerprints (SPLIF).
New perspective in covariance evaluation for nuclear data
International Nuclear Information System (INIS)
Kanda, Y.
1992-01-01
Methods of nuclear data evaluation have developed substantially during the past decade, especially since the introduction of the concept of covariance. This makes the question of how to evaluate covariance matrices for nuclear data most important. It can be said that covariance evaluation is, in effect, the nuclear data evaluation itself, because the covariance matrix plays a quantitatively decisive role in current evaluation methods. The covariance primarily represents experimental uncertainties. However, correlations of individual uncertainties between different data must be taken into account, and this cannot be done without detailed physical consideration of the experimental conditions. This procedure depends on the evaluator, and so does the estimated covariance. The mathematical properties of the covariance have been intensively discussed. Its physical properties should be studied in order to apply it to nuclear data evaluation, and they are reviewed in this report to provide the basis for further development of covariance applications. (orig.)
Contributions to Large Covariance and Inverse Covariance Matrices Estimation
Kang, Xiaoning
2016-01-01
Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimat...
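One of the standard approaches in this area is linear shrinkage of the sample covariance toward a structured target; a minimal sketch follows (a fixed shrinkage weight is assumed here, whereas Ledoit-Wolf-type rules estimate it from the data; the function name is illustrative):

```python
import numpy as np

def shrink_covariance(X, alpha=0.2):
    """Linear shrinkage of the sample covariance toward a scaled identity,
    guaranteeing positive definiteness even when p > n."""
    S = np.cov(X, rowvar=False)
    target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    return (1 - alpha) * S + alpha * target

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 10))     # n=5 samples, p=10 variables: S is singular
S_shrunk = shrink_covariance(X)
eigmin = np.linalg.eigvalsh(S_shrunk).min()   # strictly positive after shrinkage
```

Without shrinkage the p > n sample covariance is rank-deficient and cannot be inverted, which is exactly the high-dimensional difficulty the abstract mentions.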
Few group collapsing of covariance matrix data based on a conservation principle
International Nuclear Information System (INIS)
Hiruta, H.; Palmiotti, G.; Salvatores, M.; Arcilla, R. Jr.; Oblozinsky, P.; McKnight, R.D.
2008-01-01
A new algorithm for a rigorous collapsing of covariance data is proposed, derived, implemented, and tested. The method is based on a conservation principle that preserves, in a broad energy group structure, the uncertainty calculated in a fine energy group structure for a specific integral parameter, using the associated sensitivity coefficients as weights.
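The conservation principle can be sketched directly: collapsing with sensitivity-weighted averages preserves the integral-parameter uncertainty computed in the fine group structure (the group structure and numbers below are toy values, not from the paper):

```python
import numpy as np

def collapse_covariance(cov_fine, sens_fine, groups):
    """Collapse a fine-group relative covariance matrix to broad groups,
    weighting by sensitivity coefficients so the broad-group uncertainty
    of the integral parameter is conserved:
        C_IJ = (s_I^T C s_J) / (S_I * S_J),  S_I = sum of s_i in group I.
    `groups` maps each broad group to the list of its fine-group indices."""
    n = len(groups)
    coarse = np.zeros((n, n))
    for I, gi in enumerate(groups):
        for J, gj in enumerate(groups):
            s_i, s_j = sens_fine[gi], sens_fine[gj]
            block = cov_fine[np.ix_(gi, gj)]
            coarse[I, J] = s_i @ block @ s_j / (s_i.sum() * s_j.sum())
    return coarse

# 4 fine groups collapsed to 2 broad groups (assumed toy data).
cov = np.diag([0.04, 0.02, 0.03, 0.05])
sens = np.array([0.5, 0.3, 0.2, 0.4])
broad = collapse_covariance(cov, sens, [[0, 1], [2, 3]])

# Conservation check: fine- and broad-group uncertainties agree.
fine_var = sens @ cov @ sens
broad_sens = np.array([sens[:2].sum(), sens[2:].sum()])
```

By construction, broad_sens' @ broad @ broad_sens reproduces fine_var exactly, which is the conservation property the algorithm is built around.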
Directory of Open Access Journals (Sweden)
Eisuke Chikayama
2016-10-01
Foods from agriculture and fishery products are processed using various technologies. Molecular mixture analysis during food processing has the potential to help us understand the molecular mechanisms involved, thus enabling better cooking of the analyzed foods. To date, there has been no web-based tool focusing on accumulating Nuclear Magnetic Resonance (NMR) spectra from various types of food processing. Therefore, we have developed a novel web-based tool, FoodPro, that includes a food NMR spectrum database and computes covariance and correlation spectra to tasting and hardness. As a result, FoodPro has accumulated 236 aqueous (extracted in D2O) and 131 hydrophobic (extracted in CDCl3) experimental bench-top 60-MHz NMR spectra, 1753 tastings scored by volunteers, and 139 hardness measurements recorded by a penetrometer, all placed into a core database. The database content was roughly classified into fish and vegetable groups from the viewpoint of different spectrum patterns. FoodPro can query a user food NMR spectrum, search for similar NMR spectra with a specified similarity threshold, and then compute estimated tasting and hardness, as well as covariance and correlation spectra to tasting and hardness. Querying fish spectra exemplified specific covariance spectra to tasting and hardness, giving a positive covariance for tasting at 1.31 ppm for lactate and 3.47 ppm for glucose, and a positive covariance for hardness at 3.26 ppm for trimethylamine N-oxide.
Chikayama, Eisuke; Yamashina, Ryo; Komatsu, Keiko; Tsuboi, Yuuri; Sakata, Kenji; Kikuchi, Jun; Sekiyama, Yasuyo
2016-10-19
Foods from agriculture and fishery products are processed using various technologies. Molecular mixture analysis during food processing has the potential to help us understand the molecular mechanisms involved, thus enabling better cooking of the analyzed foods. To date, there has been no web-based tool focusing on accumulating Nuclear Magnetic Resonance (NMR) spectra from various types of food processing. Therefore, we have developed a novel web-based tool, FoodPro, that includes a food NMR spectrum database and computes covariance and correlation spectra to tasting and hardness. As a result, FoodPro has accumulated 236 aqueous (extracted in D₂O) and 131 hydrophobic (extracted in CDCl₃) experimental bench-top 60-MHz NMR spectra, 1753 tastings scored by volunteers, and 139 hardness measurements recorded by a penetrometer, all placed into a core database. The database content was roughly classified into fish and vegetable groups from the viewpoint of different spectrum patterns. FoodPro can query a user food NMR spectrum, search similar NMR spectra with a specified similarity threshold, and then compute estimated tasting and hardness, covariance, and correlation spectra to tasting and hardness. Querying fish spectra exemplified specific covariance spectra to tasting and hardness, giving positive covariance for tasting at 1.31 ppm for lactate and 3.47 ppm for glucose and a positive covariance for hardness at 3.26 ppm for trimethylamine N-oxide.
Kisil, Vladimir V.
2010-01-01
The paper develops the theory of the covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable, or from vacuum vectors which are not admissible. The covariant transform extends the applicability of the popular wavelet construction to classic examples like the Hardy space H_2, Banach spaces, the covariant functional calculus and many others. Keywords: wavelets, coherent states, ...
Earth Observation System Flight Dynamics System Covariance Realism
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
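One common realism test statistic, sketched here under an assumed Gaussian error model (the abstract does not spell out the exact statistic), is the squared Mahalanobis distance of the propagated state from the definitive estimate; if the propagated covariance is realistic, it follows a chi-square distribution with dim(x) degrees of freedom. The 3x3 covariance below is hypothetical:

```python
import numpy as np

def realism_statistic(x_prop, P_prop, x_def):
    """Squared Mahalanobis distance of the propagated state from the
    definitive (orbit-determined) state, scaled by the propagated
    covariance.  For a realistic covariance and Gaussian errors this
    statistic is chi-square distributed with dim(x) degrees of freedom."""
    e = x_prop - x_def
    return float(e @ np.linalg.solve(P_prop, e))

# Monte Carlo sanity check: errors drawn from P itself should yield
# statistics whose sample mean approaches the chi-square(3) mean, i.e. 3.
rng = np.random.default_rng(1)
P = np.diag([4.0, 9.0, 1.0])   # hypothetical 3x3 position covariance
x_def = np.zeros(3)
samples = [realism_statistic(rng.multivariate_normal(x_def, P), P, x_def)
           for _ in range(2000)]
mean_stat = float(np.mean(samples))   # should be close to 3
```

In an assessment step, the statistics collected at each propagation point would be compared against the chi-square reference distribution (e.g. with a goodness-of-fit test).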
Analysis of stock investment selection based on CAPM using covariance and genetic algorithm approach
Sukono; Susanti, D.; Najmia, M.; Lesmana, E.; Napitupulu, H.; Supian, S.; Putra, A. S.
2018-03-01
Investment is one of the drivers of economic growth, especially in Indonesia. Stocks are a liquid form of investment. In making stock investment decisions, investors need to choose stocks that can generate maximum return at a minimum level of risk; they therefore need to know how to allocate capital so as to obtain the optimal benefit. This study discusses stock investment based on the CAPM, estimated using a covariance approach and a genetic algorithm approach. It is assumed that the stocks analyzed follow the CAPM. The beta parameter of the CAPM equation is estimated in two ways: first by the covariance approach, and second by genetic algorithm optimization. As a numerical illustration, ten stocks traded on the Indonesian capital market are analyzed. The results show that the covariance and genetic algorithm estimates of the beta parameter lead to the same decision: six underpriced stocks with a buy decision and four overpriced stocks with a sell decision. Based on the analysis, investors may consider buying the six underpriced stocks and selling the four overpriced ones.
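The covariance estimate of beta follows directly from its definition, beta_i = Cov(r_i, r_m) / Var(r_m), with the security market line giving the CAPM fair return. A minimal sketch; the return series below are simulated, not the ten Indonesian stocks from the paper:

```python
import numpy as np

def capm_beta(stock_returns, market_returns):
    """Covariance estimate of the CAPM beta:
    beta_i = Cov(r_i, r_m) / Var(r_m)."""
    cov = np.cov(stock_returns, market_returns)
    return cov[0, 1] / cov[1, 1]

def capm_fair_return(beta, r_free, r_market_mean):
    """Security market line: E[r_i] = r_f + beta * (E[r_m] - r_f)."""
    return r_free + beta * (r_market_mean - r_free)

# Hypothetical monthly returns for one stock and the market index.
rng = np.random.default_rng(42)
r_m = rng.normal(0.01, 0.04, 60)                    # market returns
r_i = 0.002 + 1.2 * r_m + rng.normal(0, 0.02, 60)   # stock with beta near 1.2
beta = capm_beta(r_i, r_m)
fair = capm_fair_return(beta, r_free=0.003, r_market_mean=r_m.mean())
# Underpriced (realized mean return above the CAPM fair return) -> buy;
# overpriced -> sell.
decision = "buy" if r_i.mean() > fair else "sell"
```

The genetic algorithm variant in the paper replaces the closed-form covariance ratio with an optimization over candidate beta values; the decision rule is the same.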
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions ... The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated ...
Introduction to covariant formulation of superstring (field) theory
International Nuclear Information System (INIS)
Anon.
1987-01-01
The author discusses covariant formulations of superstring theories based on BRS invariance. A new formulation of the superstring was constructed by Green and Schwarz, first in the light-cone gauge, and then a covariant action was discovered. The covariant action has an interesting geometrical interpretation; however, covariant quantization is difficult to perform because of the existence of local supersymmetries. Introducing extra variables into the action, a modified action has been proposed. However, it would be difficult to prescribe constraints to define a physical subspace, or to reproduce the correct physical spectrum. Hence the old formulation, i.e., the Neveu-Schwarz-Ramond (NSR) model, is used for covariant quantization. The author begins by quantizing the NSR model in a covariant way using BRS charges. The author then discusses the field theory of the (free) superstring.
Identification of toxic cyclopeptides based on mass spectral library matching
Directory of Open Access Journals (Sweden)
Boris L. Milman
2014-08-01
To gain perspective on the use of tandem mass spectral libraries for identification of toxic cyclic peptides, a new library was built from 263 mass spectra (mainly MS2 spectra) of 59 compounds of that group, such as microcystins, amatoxins, and some related compounds. Mass spectra were extracted from the literature or specially acquired on ESI-Q-ToF and MALDI-ToF/ToF tandem instruments. ESI-MS2 product-ion mass spectra appeared to be rather close to MALDI-ToF/ToF fragment spectra, which are uncommon for mass spectral libraries. Testing of the library was based on searches in which reference spectra were in turn cross-compared. The percentage of 1st-rank correct identifications (true positives) was 70% in the general case and 88–91% when knowingly defective ('one-dimension') spectra were excluded as test spectra. The percentage of 88–91% is the principal estimate of the overall performance of this library, which can be used in a method of choice for identification of individual cyclopeptides and also for group recognition of individual classes of such peptides. The approach to identification of cyclopeptides based on mass spectral library matching proved to be most effective for abundant toxins. This was confirmed by analysis of extracts from two cyanobacterial strains.
Development of libraries for ORIGEN2 code based on JENDL-3.2
Energy Technology Data Exchange (ETDEWEB)
Suyama, Kenya; Katakura, Jun-ichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishikawa, Makoto; Ohkawachi, Yasushi
1998-03-01
The Working Group of JNDC 'Nuclide Generation Evaluation' has launched a project to make libraries for the ORIGEN2 code based on the latest nuclear data library, JENDL-3.2, for current designs of LWR and FBR fuels. Many of these libraries are under validation. (author)
Directory of Open Access Journals (Sweden)
Mihai DOINEA
2014-01-01
As in any domain that involves the use of software, library information systems take advantage of cloud computing. The paper highlights the main aspects of cloud-based systems, describing some public solutions provided by the most important players on the market. Topics related to content security in cloud-based services are tackled in order to emphasize the requirements that must be met by these types of systems. A cloud-based implementation of an Information Library System is presented, and some adjacent tools that are used together with it to provide digital content and metadata links are described. In a cloud-based Information Library System, security is approached by means of ontologies. Aspects such as content security in terms of digital rights are presented, and a methodology for security optimization is proposed.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied.
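A simplified sketch of the idea: remove the leading principal components as the common-factor part, then threshold the residual (idiosyncratic) covariance. Note the paper's estimator uses entry-adaptive thresholds following Cai and Liu (2011); the universal threshold level below is a simplified stand-in:

```python
import numpy as np

def factor_threshold_cov(X, n_factors, c=0.5):
    """Sketch of a factor-based covariance estimator: PCA extracts the
    low-rank common-factor part, and the residual covariance is
    soft-thresholded.  (Simplified: a single universal threshold level
    rather than the entry-adaptive thresholds used in the paper.)"""
    T, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / T                        # sample covariance
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:n_factors]
    low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T   # factor part
    R = S - low_rank                         # residual covariance
    tau = c * np.sqrt(np.log(p) / T)         # threshold level
    R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
    np.fill_diagonal(R_thr, np.diag(R))      # keep variances intact
    return low_rank + R_thr

# Simulated data: 50 assets driven by 3 common factors plus noise.
rng = np.random.default_rng(7)
B = rng.normal(size=(50, 3))                 # factor loadings
F = rng.normal(size=(500, 3))                # factor realizations
X = F @ B.T + rng.normal(scale=0.5, size=(500, 50))
Sigma = factor_threshold_cov(X, n_factors=3)
```

The thresholding sets small residual covariances to zero, enforcing the assumed sparsity of the error covariance while keeping the cross-sectional correlation that survives the threshold.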
Evaluated 182,183,184,186W Neutron Cross Sections and Covariances in the Resolved Resonance Region
International Nuclear Information System (INIS)
Pigni, Marco T; Leal, Luiz C
2015-01-01
Oak Ridge National Laboratory (ORNL) has recently completed the resonance parameter evaluation of four tungsten isotopes, i.e., 182,183,184,186 W, in the neutron energy range of thermal up to several keV. This nuclear data work was performed with support from the US Nuclear Criticality Safety Program (NCSP) in an effort to provide improved tungsten cross section and covariance data for criticality safety analyses. The evaluation methodology uses the Reich-Moore approximation of the R-matrix formalism of the code SAMMY to fit high-resolution measurements performed in 2010 and 2012 at the Geel linear accelerator facility (GELINA), as well as other experimental data sets on natural tungsten available in the EXFOR library. In the analyzed energy range, this work nearly doubles the resolved resonance region (RRR) present in the latest US nuclear data library ENDF/B-VII.1. In view of the interest in tungsten for distinct types of nuclear applications and the relatively homogeneous distribution of the isotopic tungsten - namely, 182 W(26.5%), 183 W(14.31%), 184 W(30.64%), and 186 W(28.43%) - the completion of these four evaluations represents a significant contribution to the improvement of the ENDF library. This paper presents an overview of the evaluated resonance parameters and related covariances for total and capture cross sections on the four tungsten isotopes.
Development of covariance data for fast reactor cores. 3
International Nuclear Information System (INIS)
Shibata, Keiichi; Hasegawa, Akira
1999-03-01
Covariances have been estimated for nuclear data contained in JENDL-3.2. As for Cr and Ni, the physical quantities for which covariances are deduced are cross sections and the first-order Legendre-polynomial coefficient for the angular distribution of elastically scattered neutrons. The covariances were estimated by using the same methodology that had been used in the JENDL-3.2 evaluation, in order to keep consistency between mean values and their covariances. In cases where evaluated data were based on experimental data, the covariances were estimated from the same experimental data. For cross sections that had been evaluated by nuclear model calculations, the same model was applied to generate the covariances. The covariances obtained were compiled into ENDF-6 format files. The covariances, which had been prepared in the previous fiscal year, were re-examined, and some improvements were made. Parts of the Fe and 235 U covariances were updated. Covariances of nu-p and nu-d for 241 Pu and of fission neutron spectra for 233,235,238 U and 239,240 Pu were newly added to the data files. (author)
Status of the JEFF nuclear data library
International Nuclear Information System (INIS)
Koning, A.J.; Bauge, E.; Dean, C.J.; Dupont, E.; Nordborg, C.; Rugama, Y.; Fischer, U.; Forrest, R.A.; Kellett, M.A.; Jacqmin, R.; Leeb, H.; Mills, R.W.; Pescarini, M.; Rullhusen, P.
2011-01-01
The status of the Joint Evaluated Fission and Fusion file (JEFF) is described. Recently, the JEFF-3.1.1 nuclear data library was released and shortly after adopted by the French nuclear power industry for inclusion in their production and analysis codes. Recent updates include actinide evaluations, materials evaluations that have emerged from various European nuclear data projects, the activation library, the decay data and fission yield sub-libraries, and fusion-related data files from the European F4E project. The revisions were motivated by the availability of new measurements, modelling capabilities and trends from integral experiments. Validations have been performed, mainly for criticality, reactivity temperature coefficients, fuel inventory, decay heat and shielding of thermal and fast systems. The next release of the library, JEFF-3.2, will be discussed. This will contain among others a significant increase of covariance data evaluations, modern evaluations for various structural materials, a larger emphasis on minor actinides and addition of high-quality gamma production data for many fission products. (authors)
ECNJEF1. A JEF1 based 219-group neutron cross-section library: User's manual
International Nuclear Information System (INIS)
Stad, R.C.L. van der; Gruppelaar, H.
1992-07-01
This manual describes the contents of the ECNJEF1 library. The ECNJEF1 library is a JEF1.1 based 219-group AMPX-Master library for reactor calculations with the AMPX/SCALE system, e.g. the PASC-3 system as implemented at the Netherlands Energy Research Foundation in Petten, Netherlands. The group cross-section data were generated with NJOY and NPTXS/XLACS-2 from the AMPX system. The data on the ECNJEF1 library allow resolved-resonance treatment by NITAWL and/or unresolved-resonance self-shielding by BONAMI. These codes are based upon the Nordheim and Bondarenko methods, respectively. (author). 10 refs., 7 tabs
Evidence-based medicine and the development of medical libraries in China.
Huang, Michael Bailou; Cheng, Aijun; Ma, Lu
2009-07-01
This article elaborates on the opportunities and challenges that evidence-based medicine (EBM) has posed to the development of medical libraries and summarizes the research in the field of evidence-based medicine and achievements of EBM practice in Chinese medical libraries. Issues such as building collections of information resources, transformation of information services models, human resources management, and training of medical librarians, clinicians, and EBM users are addressed. In view of problems encountered in EBM research and practice, several suggestions are made about important roles medical libraries can play in the future development of EBM in China.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
On the algebraic structure of covariant anomalies and covariant Schwinger terms
International Nuclear Information System (INIS)
Kelnhofer, G.
1992-01-01
A cohomological characterization of covariant anomalies and covariant Schwinger terms in an anomalous Yang-Mills theory is formulated and geometrically interpreted. The BRS and anti-BRS transformations are defined as purely differential-geometric objects. Finally, the covariant descent equations are formulated within this context. (author)
Yang, Yang; DeGruttola, Victor
2012-06-22
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
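The Bartlett statistic mentioned above has a closed form, and a resampling reference distribution can be built by recomputing it on resampled data. Note that the paper resamples standardized residuals with robust moment estimates; the plain label-permutation scheme below is a simplified stand-in for illustration:

```python
import numpy as np

def bartlett_statistic(groups):
    """Bartlett's test statistic for homogeneity of covariance matrices:
    M = (N - k) ln|S_pooled| - sum_i (n_i - 1) ln|S_i|."""
    k = len(groups)
    ns = [len(g) for g in groups]
    N = sum(ns)
    covs = [np.cov(g, rowvar=False) for g in groups]
    S_pool = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - k)
    _, logdet_pool = np.linalg.slogdet(S_pool)
    M = (N - k) * logdet_pool
    for n, S in zip(ns, covs):
        _, logdet = np.linalg.slogdet(S)
        M -= (n - 1) * logdet
    return M

def permutation_pvalue(groups, n_perm=500, seed=0):
    """Reference distribution by randomly reassigning group labels."""
    rng = np.random.default_rng(seed)
    obs = bartlett_statistic(groups)
    pooled = np.vstack(groups)
    sizes = np.cumsum([len(g) for g in groups])[:-1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += bartlett_statistic(np.split(perm, sizes)) >= obs
    return (count + 1) / (n_perm + 1)

# Two bivariate groups with clearly different covariance scales.
rng = np.random.default_rng(3)
g1 = rng.multivariate_normal([0, 0], np.eye(2), 80)
g2 = rng.multivariate_normal([0, 0], 4 * np.eye(2), 80)
p = permutation_pvalue([g1, g2])
```

Replacing the sample means and covariances in the standardization step with robust estimates, as the paper proposes, addresses the type I error inflation seen with heavy-tailed data.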
A class of covariate-dependent spatiotemporal covariance functions
Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.
2014-01-01
In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way for allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and discuss methods to assess its dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199
PUFF-IV, Code System to Generate Multigroup Covariance Matrices from ENDF/B-VI Uncertainty Files
International Nuclear Information System (INIS)
2007-01-01
1 - Description of program or function: The PUFF-IV code system processes ENDF/B-VI formatted nuclear cross section covariance data into multigroup covariance matrices. PUFF-IV is the newest release in this series of codes used to process ENDF uncertainty information and to generate the desired multi-group correlation matrix for the evaluation of interest. This version includes corrections and enhancements over previous versions. It is written in Fortran 90 and allows for a more modular design, thus facilitating future upgrades. PUFF-IV enhances support for resonance parameter covariance formats described in the ENDF standard and now handles almost all resonance parameter covariance information in the resolved region, with the exception of the long range covariance sub-subsections. PUFF-IV is normally used in conjunction with an AMPX master library containing group averaged cross section data. Two utility modules are included in this package to facilitate the data interface. The module SMILER allows one to use NJOY generated GENDF files containing group averaged cross section data in conjunction with PUFF-IV. The module COVCOMP allows one to compare two files written in COVERX format. 2 - Methods: Cross section and flux values on a 'super energy grid,' consisting of the union of the required energy group structure and the energy data points in the ENDF/B-V file, are interpolated from the input cross sections and fluxes. Covariance matrices are calculated for this grid and then collapsed to the required group structure. 3 - Restrictions on the complexity of the problem: PUFF-IV cannot process covariance information for energy and angular distributions of secondary particles. PUFF-IV does not process covariance information in Files 34 and 35; nor does it process covariance information in File 40. These new formats will be addressed in a future version of PUFF.
Directory of Open Access Journals (Sweden)
B. Franklin
2004-01-01
This paper examines the methodology and results from Web-based surveys of more than 15,000 networked electronic services users in the United States between July 1998 and June 2003 at four academic health sciences libraries and two large main campus libraries serving a variety of disciplines. A statistically valid methodology for administering simultaneous Web-based and print-based surveys using the random moments sampling technique is discussed and implemented. Results from the Web-based surveys showed that at the four academic health sciences libraries, there were approximately four remote networked electronic services users for each in-house user. This ratio was even higher for faculty, staff, and research fellows at the academic health sciences libraries, where more than five remote users for each in-house user were recorded. At the two main libraries, there were approximately 1.3 remote users for each in-house user of electronic information. Sponsored research (grant-funded research) accounted for approximately 32% of the networked electronic services activity at the health sciences libraries and 16% at the main campus libraries. Sponsored researchers at the health sciences libraries appeared to use networked electronic services most intensively from on-campus, but not from in the library. The purpose of use for networked electronic resources by patrons within the library is different from the purpose of use of those resources by patrons using the resources remotely. The implications of these results on how librarians reach decisions about networked electronic resources and services are discussed.
ENDF-6 File 30: Data covariances obtained from parameter covariances and sensitivities
International Nuclear Information System (INIS)
Muir, D.W.
1989-01-01
File 30 is provided as a means of describing the covariances of tabulated cross sections, multiplicities, and energy-angle distributions that result from propagating the covariances of a set of underlying parameters (for example, the input parameters of a nuclear-model code), using an evaluator-supplied set of parameter covariances and sensitivities. Whenever nuclear data are evaluated primarily through the application of nuclear models, the covariances of the resulting data can be described very adequately, and compactly, by specifying the covariance matrix for the underlying nuclear parameters, along with a set of sensitivity coefficients giving the rate of change of each nuclear datum of interest with respect to each of the model parameters. Although motivated primarily by these applications of nuclear theory, use of File 30 is not restricted to any one particular evaluation methodology. It can be used to describe data covariances of any origin, so long as they can be formally separated into a set of parameters with specified covariances and a set of data sensitivities.
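The propagation described here is the usual sandwich rule, Cov(data) = S Cov(params) S^T, where S holds the sensitivity coefficients. A minimal sketch with hypothetical parameter uncertainties and sensitivities (not values from any evaluation):

```python
import numpy as np

def propagate_covariance(S, C_param):
    """Sandwich rule underlying File 30: Cov(data) = S @ Cov(params) @ S.T,
    where S[i, j] = d(datum_i)/d(parameter_j) is a sensitivity coefficient."""
    return S @ C_param @ S.T

# Two uncorrelated model parameters with 3% and 5% relative uncertainty,
# mapped onto three cross-section points via hypothetical sensitivities.
C_param = np.diag([0.03**2, 0.05**2])
S = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.1, 1.0]])
C_data = propagate_covariance(S, C_param)
```

Even with uncorrelated parameters, the off-diagonal elements of C_data are nonzero: the shared parameters induce correlations between the data points, which is exactly the information File 30 encodes compactly.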
Towards Knowledge-Based Digital Libraries
Feng, L.; Jeusfeld, M.A.; Hoppenbrouwers, J.
From the standpoint of satisfying human's information needs, the current digital library (DL) systems suffer from the following two shortcomings: (i) inadequate high-level cognition support; (ii) inadequate knowledge sharing facilities. In this article, we introduce a two-layered digital library
Directory of Open Access Journals (Sweden)
Sysіuk Svitlana V.
2017-05-01
The article is aimed at highlighting the features of the provision of fee-based services by library institutions, identifying problems related to the legal and regulatory framework for their calculation, and the methods to implement it. The objective of the study is to develop recommendations to improve the calculation of fee-based library services. The theoretical foundations have been systematized, and the need to develop a Provision for the procedure of providing fee-based services by library institutions has been substantiated. Such a Provision would protect a library institution from errors in fixing the fee for a paid service and would serve as an information source explaining it. The appropriateness of applying the market pricing law based on demand and supply has been substantiated. The development and improvement of accounting and calculation, taking into consideration both industry-specific and market conditions, would optimize the costs and revenues generated by the provision of fee-based services. In addition, combining calculation leverages with the development of an internal accounting system and the use of its methodology provides another, equally efficient way of improving the efficiency of library institutions' activity.
Directory of Open Access Journals (Sweden)
Suwicha Jirayucharoensak
2014-01-01
Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the utilization of a deep learning network (DLN) to discover unknown feature correlations between input signals that are crucial for the learning task. The DLN is implemented with a stacked autoencoder (SAE) using a hierarchical feature learning approach. Input features of the network are power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA) is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracies of 49.52% and 46.03%, respectively. Principal-component-based covariate shift adaptation enhances the respective classification accuracies by 5.55% and 6.53%. Moreover, the DLN provides better performance compared to SVM and naive Bayes classifiers.
Covariance Function for Nearshore Wave Assimilation Systems
2018-01-30
which is applicable for any spectral wave model. The four dimensional variational (4DVar) assimilation methods are based on the mathematical ... covariance can be modeled by a parameterized Gaussian function; for nearshore wave assimilation applications, the covariance function depends primarily on ...
(Figure 2, top row: statistical analysis of the wave-field properties.)
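A parameterized Gaussian covariance function of the kind referred to in this excerpt can be sketched as follows; the variance and length-scale values below are placeholders, not values from the report:

```python
import numpy as np

def gaussian_covariance(d, sigma2=1.0, L=10.0):
    """Parameterized Gaussian covariance: C(d) = sigma^2 exp(-d^2 / (2 L^2)),
    with variance sigma^2 and decorrelation length scale L -- the two
    quantities such a parameterization leaves to be tuned for the
    nearshore setting."""
    return sigma2 * np.exp(-d**2 / (2.0 * L**2))

# Covariance decays smoothly with separation distance (e.g. in km).
d = np.linspace(0.0, 50.0, 6)
C = gaussian_covariance(d, sigma2=0.25, L=10.0)
```

In a variational scheme this function would populate the background-error covariance matrix entry by entry from pairwise observation/grid-point separations.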
ORIGEN2 libraries based on JENDL-3.2 for LWR-MOX fuels
Energy Technology Data Exchange (ETDEWEB)
Suyama, Kenya; Katakura, Jun-ichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Onoue, Masaaki; Matsumoto, Hideki [Mitsubishi Heavy Industries Ltd., Tokyo (Japan); Sasahara, Akihiro [Central Research Inst. of Electric Power Industry, Tokyo (Japan)
2000-11-01
A set of ORIGEN2 libraries for LWR MOX fuels was developed based on JENDL-3.2. The libraries were compiled with SWAT using the specifications of MOX fuels that will be used in nuclear power reactors in Japan. Verification of the libraries was performed through analyses of post-irradiation examinations of fuels from a European PWR. Through analysis of PIE data from a PWR in the United States, calculated and experimental results were compared for the case in which the parameters used to generate the libraries differ from the irradiation conditions. These new libraries for LWR MOX fuels are packaged in ORLIBJ32, the libraries released in 1999. (author)
Székely, Gábor J.; Rizzo, Maria L.
2010-01-01
Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...
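The sample versions of these measures are computed from double-centered pairwise distance matrices. A sketch following the Székely-Rizzo construction; the quadratic-dependence example is illustrative:

```python
import numpy as np

def distance_covariance(X, Y):
    """Sample distance covariance: double-center the pairwise Euclidean
    distance matrices of X and Y, then average their elementwise product.
    Works for vectors of different (not necessarily equal) dimensions."""
    X = X.reshape(len(X), -1)
    Y = Y.reshape(len(Y), -1)
    def centered(D):
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    a = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    b = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    A, B = centered(a), centered(b)
    return np.sqrt(np.mean(A * B))

def distance_correlation(X, Y):
    """dCor = dCov(X, Y) / sqrt(dCov(X, X) * dCov(Y, Y))."""
    denom = np.sqrt(distance_covariance(X, X) * distance_covariance(Y, Y))
    return distance_covariance(X, Y) / denom if denom > 0 else 0.0

# Dependent but (nearly) uncorrelated example: y = x^2 + noise.
rng = np.random.default_rng(5)
x = rng.normal(size=200)
y = x**2 + 0.1 * rng.normal(size=200)
r = np.corrcoef(x, y)[0, 1]        # Pearson correlation, near zero
dcor = distance_correlation(x, y)  # clearly positive
```

This illustrates the characterization property: Pearson correlation misses the quadratic dependence, while distance correlation detects it.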
Collection evaluation in university libraries (II): Methods based on collection use
Directory of Open Access Journals (Sweden)
Àngels Massísimo i Sánchez de Boado
2004-01-01
Full Text Available This is our second paper devoted to collection evaluation in university libraries. Seven methods based on collection use are described. Their advantages and disadvantages are discussed, as well as their usefulness for a range of library types.
Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices
Lan, Shiwei; Holbrook, Andrew; Fortin, Norbert J.; Ombao, Hernando; Shahbaba, Babak
2017-01-01
Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix
Real-time probabilistic covariance tracking with efficient model update.
Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li
2012-05-01
The recently proposed covariance region descriptor has been proven robust and versatile at a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but within a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
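The Riemannian similarity measure between covariance descriptors mentioned above is commonly the affine-invariant metric. A minimal sketch (the function name and the 2x2 example matrices are mine; real descriptors fuse more features and are larger):

```python
import numpy as np
from scipy.linalg import eigh

def riemannian_dist(C1, C2):
    """Affine-invariant Riemannian distance between SPD covariance
    descriptors: sqrt(sum_i log^2 lambda_i), where the lambda_i are
    the generalized eigenvalues of the pair (C2, C1)."""
    lam = eigh(C2, C1, eigvals_only=True)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

C1 = np.array([[2.0, 0.3], [0.3, 1.0]])
C2 = np.array([[1.0, -0.2], [-0.2, 3.0]])
print(riemannian_dist(C1, C1))   # ~0: a descriptor matches itself
d12, d21 = riemannian_dist(C1, C2), riemannian_dist(C2, C1)
print(abs(d12 - d21) < 1e-9)     # True: the metric is symmetric
```

Because covariance matrices live on the manifold of symmetric positive-definite matrices rather than in a vector space, this metric, not the Euclidean distance, is used to compare a candidate region against the tracked model.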
Alterations in Anatomical Covariance in the Prematurely Born.
Scheinost, Dustin; Kwon, Soo Hyun; Lacadie, Cheryl; Vohr, Betty R; Schneider, Karen C; Papademetris, Xenophon; Constable, R Todd; Ment, Laura R
2017-01-01
Preterm (PT) birth results in long-term alterations in functional and structural connectivity, but the related changes in anatomical covariance are just beginning to be explored. To test the hypothesis that PT birth alters patterns of anatomical covariance, we investigated brain volumes of 25 PTs and 22 terms at young adulthood using magnetic resonance imaging. Using regional volumetrics, seed-based analyses, and whole brain graphs, we show that PT birth is associated with reduced volume in bilateral temporal and inferior frontal lobes, left caudate, left fusiform, and posterior cingulate for prematurely born subjects at young adulthood. Seed-based analyses demonstrate altered patterns of anatomical covariance for PTs compared with terms. PTs exhibit reduced covariance with R Brodmann area (BA) 47, Broca's area, and L BA 21, Wernicke's area, and white matter volume in the left prefrontal lobe, but increased covariance with R BA 47 and left cerebellum. Graph theory analyses demonstrate that measures of network complexity are significantly less robust in PTs compared with term controls. Volumes in regions showing group differences are significantly correlated with phonological awareness, the fundamental basis for reading acquisition, for the PTs. These data suggest both long-lasting and clinically significant alterations in the covariance in the PTs at young adulthood. © The Author 2015. Published by Oxford University Press.
Graphical representation of covariant-contravariant modal formulae
Directory of Open Access Journals (Sweden)
Miguel Palomino
2011-08-01
Full Text Available Covariant-contravariant simulation is a combination of standard (covariant) simulation, its contravariant counterpart, and bisimulation. We have previously studied its logical characterization by means of the covariant-contravariant modal logic. Moreover, we have investigated the relationships between this model and that of modal transition systems, where two kinds of transitions (the so-called may and must transitions) were combined in order to obtain a simple framework to express a notion of refinement over state-transition models. In a classic paper, Boudol and Larsen established a precise connection between the graphical approach, by means of modal transition systems, and the logical approach, based on Hennessy-Milner logic without negation, to system specification. They obtained a (graphical) representation theorem proving that a formula can be represented by a term if, and only if, it is consistent and prime. We show in this paper that the formulae from the covariant-contravariant modal logic that admit a "graphical" representation by means of processes, modulo the covariant-contravariant simulation preorder, are also the consistent and prime ones. In order to obtain the desired graphical representation result, we first restrict ourselves to the case of covariant-contravariant systems without bivariant actions. Bivariant actions can be incorporated later by means of an encoding that splits each bivariant action into its covariant and its contravariant parts.
Cloud-Based DDoS HTTP Attack Detection Using Covariance Matrix Approach
Directory of Open Access Journals (Sweden)
Abdulaziz Aborujilah
2017-01-01
Full Text Available In this era of technology, cloud computing has become an essential part of the IT services used in daily life. In this regard, website hosting services are gradually moving to the cloud. This adds new valued features to cloud-based websites and, at the same time, introduces new threats for such services. The DDoS attack is one such serious threat. A covariance matrix approach is used in this article to detect such attacks. The results were encouraging, according to confusion matrix and ROC descriptors.
The value of Web-based library services at Cedars-Sinai Health System.
Halub, L P
1999-07-01
Cedars-Sinai Medical Library/Information Center has maintained Web-based services since 1995 on the Cedars-Sinai Health System network. In that time, the librarians have found the provision of Web-based services to be a very worthwhile endeavor. Library users value the services that they access from their desktops because the services save time. They also appreciate being able to access services at their convenience, without restriction by the library's hours of operation. The library values its Web site because it brings increased visibility within the health system, and it enables library staff to expand services when budget restrictions have forced reduced hours of operation. In creating and maintaining the information center Web site, the librarians have learned the following lessons: consider the design carefully; offer what services you can, but weigh the advantages of providing the services against the time required to maintain them; make the content as accessible as possible; promote your Web site; and make friends in other departments, especially information services.
Covariant representations of nuclear *-algebras
International Nuclear Information System (INIS)
Moore, S.M.
1978-01-01
Extensions of the C*-algebra theory for covariant representations to nuclear *-algebras are considered. Irreducible covariant representations are essentially unique, an invariant state produces a covariant representation with stable vacuum, and the usual relation between ergodic states and covariant representations holds. There exist construction and decomposition theorems and a possible relation between derivations and covariant representations.
Earth Observing System Covariance Realism
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) goodness-of-fit (GOF) method is employed to determine whether a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
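The core of the GOF test described above can be sketched as follows. This is an illustration of the Mahalanobis-distance-versus-chi-squared(3) comparison, not the paper's full pipeline; the covariance values and sample size are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothesized 3x3 position covariance, and errors actually drawn from
# it, i.e. a covariance that is realistically sized (values illustrative).
P = np.diag([4.0, 1.0, 0.25])
errors = rng.multivariate_normal(np.zeros(3), P, size=500)

# Squared Mahalanobis distance of each error w.r.t. the claimed covariance.
Pinv = np.linalg.inv(P)
m2 = np.einsum('ij,jk,ik->i', errors, Pinv, errors)

# ECDF goodness-of-fit against the hypothesized chi-squared(3) parent.
stat, pvalue = stats.kstest(m2, 'chi2', args=(3,))
print(m2.mean(), pvalue)   # sample mean near 3, the chi-squared(3) mean
```

If the claimed covariance were too small, the Mahalanobis distances would stochastically exceed the chi-squared(3) parent and the test would reject; the compensation scheme in the abstract inflates the covariance with process noise until the test passes.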
Schroedinger covariance states in anisotropic waveguides
International Nuclear Information System (INIS)
Angelow, A.; Trifonov, D.
1995-03-01
In this paper, squeezed and covariance states based on the Schroedinger inequality, and their connection with other nonclassical states, are considered for the particular case of an anisotropic waveguide in LiNbO3. Here, the problem of photon creation and generation of squeezed and Schroedinger covariance states in optical waveguides is solved in two steps: 1. Quantization of the electromagnetic field is carried out in the presence of a dielectric waveguide using normal-mode expansion. The photon creation and annihilation operators are introduced by expanding the solution A(r,t) in a series of the Sturm-Liouville mode functions. 2. In terms of these operators the Hamiltonian of the field in a nonlinear waveguide is derived. For this Hamiltonian we construct the covariance states as stable states (with nonzero covariance) which minimize the Schroedinger uncertainty relation. The evolutions of the three second moments of q̂_j and p̂_j are calculated. For this Hamiltonian all three moments are expressed in terms of one real parameter s only. It is shown how the covariance, via this parameter s, depends on the waveguide profile n(x,y), on the mode distributions u_j(x,y), and on the waveguide phase mismatch Δβ. (author). 37 refs
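For reference, the Schroedinger uncertainty relation that these covariance states minimize can be written, in standard notation (the notation here is conventional, not taken from the paper):

```latex
\sigma_q^2\,\sigma_p^2 \;\ge\; \frac{\hbar^2}{4} + \sigma_{qp}^2,
\qquad
\sigma_{qp} \;=\; \tfrac{1}{2}\,\langle \hat q\hat p + \hat p\hat q \rangle
              - \langle \hat q \rangle \langle \hat p \rangle .
```

States that saturate this inequality with a nonzero correlation term σ_qp are the covariance states; when σ_qp = 0 the relation reduces to the ordinary Heisenberg bound, whose minimizers are the usual squeezed and coherent states.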
International Nuclear Information System (INIS)
Ford, W.E. III; Arwood, J.W.; Greene, N.M.; Moses, D.L.; Petrie, L.M.; Primm, R.T. III; Slater, C.O.; Westfall, R.M.; Wright, R.Q.
1990-09-01
Pseudo-problem-independent, multigroup cross-section libraries were generated to support Advanced Neutron Source (ANS) Reactor design studies. The ANS is a proposed reactor which would be fueled with highly enriched uranium and cooled with heavy water. The libraries, designated ANSL-V (Advanced Neutron Source Cross Section Libraries based on ENDF/B-V), are databases in AMPX master format for subsequent generation of problem-dependent cross sections for use with codes such as KENO, ANISN, XSDRNPM, VENTURE, DOT, DORT, TORT, and MORSE. Included in ANSL-V are 99-group and 39-group neutron, 39-neutron-group 44-gamma-ray-group secondary gamma-ray production (SGRP), 44-group gamma-ray interaction (GRI), and coupled 39-neutron-group 44-gamma-ray-group (CNG) cross-section libraries. The neutron and SGRP libraries were generated primarily from ENDF/B-V data; the GRI library was generated from DLC-99/HUGO data, which is recognized as the ENDF/B-V photon interaction data. Modules from the AMPX and NJOY systems were used to process the multigroup data. Validity of selected data from the fine- and broad-group neutron libraries was satisfactorily tested in performance parameter calculations.
Portfolio management using realized covariances: Evidence from Brazil
Directory of Open Access Journals (Sweden)
João F. Caldeira
2017-09-01
Full Text Available It is often argued that intraday returns can be used to construct covariance estimates that are more accurate than those based on daily returns. However, it is still unclear whether high frequency data provide more precise covariance estimates in markets more contaminated from microstructure noise such as higher bid-ask spreads and lower liquidity. We address this question by investigating the benefits of using high frequency data in the Brazilian equities market to construct optimal minimum variance portfolios. We implement alternative realized covariance estimators based on intraday returns sampled at alternative frequencies and obtain their dynamic versions using a multivariate GARCH framework. Our evidence based on a high-dimensional data set suggests that realized covariance estimators performed significantly better from an economic point of view in comparison to standard estimators based on low-frequency (close-to-close) data as they delivered less risky portfolios.
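The economic evaluation criterion above rests on the global minimum-variance portfolio implied by each covariance estimate. A minimal sketch of that mapping from covariance matrix to portfolio weights (the 3-asset covariance is invented for illustration):

```python
import numpy as np

def gmv_weights(cov):
    """Global minimum-variance weights: w = inv(S) 1 / (1' inv(S) 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve instead of explicit inverse
    return w / w.sum()

# Illustrative 3-asset covariance, e.g. a realized-covariance estimate.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = gmv_weights(cov)
print(w, w.sum())   # fully invested; low-variance assets weigh more
```

A "better" covariance estimator is then the one whose weights deliver lower out-of-sample portfolio variance, which is how the paper ranks the intraday estimators against the close-to-close benchmark.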
Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs
International Nuclear Information System (INIS)
Arbanas, G.; Dunn, M.E.; Wiarda, D.
2011-01-01
Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235 U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs
Energy Technology Data Exchange (ETDEWEB)
Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)
2011-07-01
Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235 U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days, took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
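The pattern both records describe, replacing a hand-written triple-nested loop with a single vendor-optimized GEMM call, can be illustrated in a few lines. This sketch uses NumPy (which dispatches matrix products to whatever BLAS it was built against, e.g. MKL or OpenBLAS) on scaled-down matrices; the 16,000×20,000 dimensions of the actual RPCM computation are far larger:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((400, 500))   # scaled-down stand-ins for the
B = rng.standard_normal((500, 300))   # large covariance-matrix factors

C_blas = A @ B                        # single BLAS GEMM call

# Reference triple-nested loop (what the GEMM replaces), on a tiny
# corner of the result only, since the pure-Python loop is very slow:
n = 5
C_loop = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(B.shape[0]):
            C_loop[i, j] += A[i, k] * B[k, j]

print(np.allclose(C_loop, C_blas[:n, :n]))   # True: identical product
```

The GEMM call computes the same product but exploits blocking, vectorization, and all cores (or the GPU, with a CUDA BLAS), which is the source of the days-to-minutes speed-up reported above.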
Optimization of reference library used in content-based medical image retrieval scheme
International Nuclear Information System (INIS)
Park, Sang Cheol; Sukthankar, Rahul; Mummert, Lily; Satyanarayanan, Mahadev; Zheng Bin
2007-01-01
Building an optimal image reference library is a critical step in developing the interactive computer-aided detection and diagnosis (I-CAD) systems of medical images using content-based image retrieval (CBIR) schemes. In this study, the authors conducted two experiments to investigate (1) the relationship between I-CAD performance and size of reference library and (2) a new reference selection strategy to optimize the library and improve I-CAD performance. The authors assembled a reference library that includes 3153 regions of interest (ROI) depicting either malignant masses (1592) or CAD-cued false-positive regions (1561) and an independent testing data set including 200 masses and 200 false-positive regions. A CBIR scheme using a distance-weighted K-nearest neighbor algorithm is applied to retrieve references that are considered similar to the testing sample from the library. The area under the receiver operating characteristic curve (A_z) is used as an index to evaluate I-CAD performance. In the first experiment, the authors systematically increased reference library size and tested I-CAD performance. The result indicates that scheme performance improves initially from A_z = 0.715 to 0.874 and then plateaus when the library size reaches approximately half of its maximum capacity. In the second experiment, based on the hypothesis that a ROI should be removed if it performs poorly compared to a group of similar ROIs in a large and diverse reference library, the authors applied a new strategy to identify "poorly effective" references. By removing 174 identified ROIs from the reference library, I-CAD performance significantly increases to A_z = 0.914 (p < 0.01). The study demonstrates that increasing reference library size and removing poorly effective references can significantly improve I-CAD performance.
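The distance-weighted K-nearest-neighbor scoring at the heart of the CBIR scheme can be sketched as follows. The function name, feature dimension, and toy clusters are mine; the actual scheme operates on image-derived ROI features:

```python
import numpy as np

def icad_score(query, refs, labels, k=5, eps=1e-9):
    """Distance-weighted k-NN likelihood that `query` depicts a true
    mass. labels: 1 = malignant mass, 0 = CAD-cued false positive."""
    d = np.linalg.norm(refs - query, axis=1)
    idx = np.argsort(d)[:k]              # k most similar references
    w = 1.0 / (d[idx] + eps)             # closer references weigh more
    return float(np.sum(w * labels[idx]) / np.sum(w))

rng = np.random.default_rng(1)
refs = np.vstack([rng.normal(0, 1, (20, 4)),    # cluster of mass ROIs
                  rng.normal(3, 1, (20, 4))])   # cluster of false positives
labels = np.array([1] * 20 + [0] * 20)
print(icad_score(np.zeros(4), refs, labels))    # near 1: query sits in
                                                # the mass-like cluster
```

Under this scoring, a "poorly effective" reference in the second experiment is one whose label disagrees with the weighted vote of its own similar neighbors, so removing it sharpens the scores of future queries.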
JDATAVIEWER – JAVA-Based Charting Library
Kruk, G
2009-01-01
The JDataViewer is a Java-based charting library developed at CERN, with powerful, extensible, and easy-to-use function-editing capabilities. Function editing is heavily used in control system applications, but poorly supported by products available on the market. The JDataViewer enables adding, removing, and modifying function points graphically (using a mouse) or by editing a table of values. Custom editing strategies are supported: a developer can specify an algorithm that reacts to the modification of a given point in the function by automatically adapting all other points. The library provides all typical 2D plotting types (scatter, polyline, area, bar, HiLo, contour), as well as data point annotations and data indicators. It also supports common interactors to zoom and move the visible view, or to select and highlight function segments. A clear API is provided to configure and customize all chart elements (colors, fonts, data ranges ...) programmatically, and to integrate non-standard rendering types, inter...
MT71x: Multi-Temperature Library Based on ENDF/B-VII.1
Energy Technology Data Exchange (ETDEWEB)
Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gray, Mark Girard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lee, Mary Beth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-12-16
The Nuclear Data Team has released a multi-temperature transport library, MT71x, based upon ENDF/B-VII.1 with a few modifications, as well as additional evaluations, for a total of 427 isotope tables. The library was processed using NJOY2012.39 into 23 temperatures. MT71x consists of two sub-libraries: MT71xMG for multigroup-energy-representation data and MT71xCE for continuous-energy-representation data. These sub-libraries are suitable for deterministic transport and Monte Carlo transport applications, respectively. The SZAs used are the same for the two sub-libraries; that is, the same SZA can be used for both libraries. This makes comparisons between the two libraries, and between deterministic and Monte Carlo codes, straightforward. Both the multigroup-energy and continuous-energy libraries were verified and validated with our checking codes checkmg and checkace (multigroup and continuous energy, respectively). An expanded suite of tests was then used for additional verification, and finally the library was validated against an extensive suite of critical benchmark models. We feel that this library is suitable for all calculations and is particularly useful for calculations sensitive to temperature effects.
Covariant field equations in supergravity
Energy Technology Data Exchange (ETDEWEB)
Vanhecke, Bram [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium); Ghent University, Faculty of Physics, Gent (Belgium); Proeyen, Antoine van [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium)
2017-12-15
Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Covariant field equations in supergravity
International Nuclear Information System (INIS)
Vanhecke, Bram; Proeyen, Antoine van
2017-01-01
Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
A problem-based learning curriculum in transition: the emerging role of the library.
Eldredge, J D
1993-07-01
This case study describes library education programs that serve the University of New Mexico School of Medicine, known for its innovative problem-based learning (PBL) curricular track. The paper outlines the specific library instruction techniques that are integrated into the curriculum. The adaptation of library instruction to a PBL mode of medical education, including the use of case studies, is discussed in detail. Also addressed are the planning processes for the new PBL curriculum scheduled for implementation in 1993, including the activities of library faculty and staff and the probable new role of the library in the new curriculum.
Pappas, Marjorie L.
2003-01-01
Virtual library? Electronic library? Digital library? Online information network? These all apply to the growing number of Web-based resource collections managed by consortiums of state library entities. Some, like "INFOhio" and "KYVL" ("Kentucky Virtual Library"), have been available for a few years, but others are just starting. Searching for…
Abnormalities in structural covariance of cortical gyrification in schizophrenia.
Palaniyappan, Lena; Park, Bert; Balain, Vijender; Dangi, Raj; Liddle, Peter
2015-07-01
The highly convoluted shape of the adult human brain results from several well-coordinated maturational events that start from embryonic development and extend through the adult life span. Disturbances in these maturational events can result in various neurological and psychiatric disorders, resulting in abnormal patterns of morphological relationship among cortical structures (structural covariance). Structural covariance can be studied using graph theory-based approaches that evaluate topological properties of brain networks. Covariance-based graph metrics allow cross-sectional study of coordinated maturational relationship among brain regions. Disrupted gyrification of focal brain regions is a consistent feature of schizophrenia. However, it is unclear if these localized disturbances result from a failure of coordinated development of brain regions in schizophrenia. We studied the structural covariance of gyrification in a sample of 41 patients with schizophrenia and 40 healthy controls by constructing gyrification-based networks using a 3-dimensional index. We found that several key regions including anterior insula and dorsolateral prefrontal cortex show increased segregation in schizophrenia, alongside reduced segregation in somato-sensory and occipital regions. Patients also showed a lack of prominence of the distributed covariance (hubness) of cingulate cortex. The abnormal segregated folding pattern in the right peri-sylvian regions (insula and fronto-temporal cortex) was associated with greater severity of illness. The study of structural covariance in cortical folding supports the presence of subtle deviation in the coordinated development of cortical convolutions in schizophrenia. The heterogeneity in the severity of schizophrenia could be explained in part by aberrant trajectories of neurodevelopment.
Bouchoucha, Taha
2017-01-23
In multiple-input multiple-output (MIMO) radar, appropriate correlated waveforms are designed to achieve desired transmit beampatterns. To design such waveforms, conventional MIMO radar methods use two steps. In the first step, the waveform covariance matrix, R, is synthesized to achieve the desired beampattern. In the second step, actual waveforms are designed to realize the synthesized covariance matrix. Most of the existing methods use iterative algorithms to solve these constrained optimization problems. The computational complexity of these algorithms is very high, which makes them difficult to use in practice. In this paper, to achieve the desired beampattern, a low-complexity discrete-Fourier-transform-based closed-form covariance matrix design technique is introduced for MIMO radar. The designed covariance matrix is then exploited to derive a novel closed-form algorithm to directly design finite-alphabet constant-envelope waveforms for the desired beampattern. The proposed technique can be used to design waveforms for large antenna arrays to change the beampattern in real time. It is also shown that the number of transmitted symbols from each antenna depends on the beampattern and is less than the total number of transmit antenna elements.
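The link between the waveform covariance matrix R and the transmit beampattern that both design steps revolve around is P(θ) = a(θ)ᴴ R a(θ). A minimal sketch for a uniform linear array (function name and array size are mine; the paper's DFT-based synthesis of R itself is not reproduced here):

```python
import numpy as np

def beampattern(R, n_angles=181):
    """Transmit beampattern P(theta) = a(theta)^H R a(theta) for a ULA
    with half-wavelength spacing; R is the waveform covariance matrix."""
    N = R.shape[0]
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
    n = np.arange(N)
    A = np.exp(1j * np.pi * np.outer(np.sin(theta), n))  # steering vectors
    P = np.einsum('ij,jk,ik->i', A.conj(), R, A).real
    return theta, P

# Sanity check: with R = I (uncorrelated waveforms) the transmitted
# power is omnidirectional, a^H a = N at every angle.
N = 8
theta, P = beampattern(np.eye(N))
print(np.allclose(P, N))   # True
```

Choosing correlated waveforms (off-diagonal structure in R) shapes P(θ) away from this flat pattern, which is why synthesizing R is the first design step in the abstract.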
Directory of Open Access Journals (Sweden)
Kate-Riin Kont
2011-01-01
Full Text Available Objective – This article provides an overview of how university libraries research and adapt new cost accounting models, such as activity-based costing (ABC) and time-driven activity-based costing (TDABC), focusing on the strengths and weaknesses of both methods to determine which of the two is suitable for application in university libraries. Methods – This paper reviews and summarizes the literature on cost accounting and costing practices of university libraries. A brief overview of the history of cost accounting, costing, and time and motion studies in libraries is also provided. The ABC method and the TDABC method, designed as a revised and easier version of ABC by Kaplan and Anderson (2004) at the beginning of the 21st century, as well as the adoption and adaptation of these methods by university libraries, are described, and their strengths and weaknesses, as well as their suitability for university libraries, are analyzed. Results – Cost accounting and costing studies in libraries have a long history, the first of these dating back to 1877. The development of cost accounting and time and motion studies can be seen as a natural evolution of techniques which were created to solve management problems. The ABC method is the best-known management accounting innovation of the last 20 years, and is already widely used in university libraries around the world. However, setting up an ABC system can be very costly, and the system needs to be regularly updated, which further increases its costs. The TDABC system can not only be implemented more quickly (and thus more cheaply), but can also be updated more easily than traditional ABC, which makes TDABC the more suitable method for university libraries. Conclusion – Both methods are suitable for university libraries. However, the ABC method can only be implemented in collaboration with an accounting department. The TDABC method can be tested and implemented by
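TDABC's appeal, noted above, is that it needs only two estimates per department: a capacity cost rate and a unit time per activity. A minimal worked example with invented figures for a hypothetical lending department:

```python
# Time-driven ABC in two parameters (after Kaplan & Anderson): a capacity
# cost rate and a time estimate per activity unit. All figures invented.
capacity_cost = 56000.0          # quarterly cost of the lending department
practical_minutes = 70000.0      # practical capacity of staff time, minutes
rate = capacity_cost / practical_minutes   # cost per minute of capacity

minutes_per_loan = 2.5           # estimated unit time to process one loan
loans = 18000

cost_of_lending = rate * minutes_per_loan * loans
used_capacity = minutes_per_loan * loans   # 45,000 of 70,000 minutes
print(rate, cost_of_lending)     # 0.8 per minute; 36,000.0 for all loans
```

Updating the model when a process changes means revising a single time estimate, rather than re-interviewing staff about how they split their effort across activities, which is what a traditional ABC update requires.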
Large Covariance Estimation by Thresholding Principal Orthogonal Complements.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2013-09-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
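The POET construction described above (a low-rank principal-component part plus a soft-thresholded residual covariance) can be sketched compactly. This is a minimal illustration, not the authors' implementation; the function name, threshold choice, and data are mine:

```python
import numpy as np

def poet(X, K, tau):
    """Minimal POET sketch: K principal components plus a soft-thresholded
    sparse residual covariance (after Fan, Liao & Mincheva). X is n x p."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    vals, vecs = vals[::-1], vecs[:, ::-1]           # descending order
    low_rank = (vecs[:, :K] * vals[:K]) @ vecs[:, :K].T
    R = S - low_rank                                 # residual covariance
    off = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)  # soft-threshold
    np.fill_diagonal(off, np.diag(R))                # keep diagonal intact
    return low_rank + off

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))
Sigma = poet(X, K=2, tau=0.05)
print(np.allclose(Sigma, Sigma.T))   # True: the estimator is symmetric
```

Setting K = 0 recovers a pure thresholding estimator and tau = 0 recovers the sample covariance, matching the abstract's remark that those estimators are special cases of POET.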
Torres, Efren
2017-01-01
This study assessed the book collection of five selected medical libraries in the Philippines, based on Doody's Essential Purchase List for basic sciences and clinical medicine, to compare the match and non-match titles among libraries, to determine the strong and weak disciplines of each library, and to explore the factors that contributed to the percentage of match and non-match titles. List checking was employed as the method of research. Among the medical libraries, De La Salle Health Sciences Institute and University of Santo Tomas had the highest percentage of match titles, whereas Ateneo School of Medicine and Public Health had the lowest percentage of match titles. University of the Philippines Manila had the highest percentage of near-match titles. De La Salle Health Sciences Institute and University of Santo Tomas had sound medical collections based on Doody's Core Titles. Collectively, the medical libraries shared common collection development priorities, as evidenced by similarities in strong areas. Library budget and the role of the library director in book selection were among the factors that could contribute to a high percentage of match titles.
A special covariance structure for random coefficient models with both between and within covariates
International Nuclear Information System (INIS)
Riedel, K.S.
1990-07-01
We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ₁). We consider random coefficient models where some of the covariates do not vary in any single individual (we denote the between covariates by the vector χ₀). The regression coefficients, β_k, can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
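The final construction, projecting the within covariates onto the orthogonal complement of the between covariates, is a one-line computation; variable names below are illustrative:

```python
import numpy as np

def project_within(X0, X1):
    """Project the within covariates X1 onto the orthogonal complement of
    the between covariates X0 (covariates are columns)."""
    Q, _ = np.linalg.qr(X0)          # orthonormal basis for span(X0)
    return X1 - Q @ (Q.T @ X1)       # (I - P0) X1

rng = np.random.default_rng(1)
X0 = rng.normal(size=(30, 2))        # between covariates
X1 = rng.normal(size=(30, 3))        # within covariates
X1p = project_within(X0, X1)
# projected within covariates are orthogonal to every between covariate
print(np.abs(X0.T @ X1p).max() < 1e-10)   # True
```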
ORIGEN-2 libraries based on JENDL-3.2 for PWR-MOX fuel
Energy Technology Data Exchange (ETDEWEB)
Matsumoto, Hideki; Onoue, Masaaki; Tahara, Yoshihisa [Mitsubishi Heavy Industries Ltd., Tokyo (Japan)
2001-08-01
A set of ORIGEN-2 libraries for PWR MOX fuel was developed based on JENDL-3.2 in the Working Group on Evaluation of Nuclide Production, Japanese Nuclear Data Committee. The calculational model used to generate the ORIGEN-2 libraries for PWR MOX is explained here in detail. ORIGEN-2 calculations with the new MOX library can predict nuclide contents to within 10% for U and Pu isotopes and to within 20% for both minor actinides and the main FPs. (author)
Generation and validation of the WIMS-D5 library based on JENDL-3.2
International Nuclear Information System (INIS)
Gil, Choong-Sup; Kim, Jung-Do
2002-01-01
A WIMS-D5 library based on JENDL-3.2 has been prepared and is being tested against benchmark problems. Several sensitivity calculations for confirming the stability of the library were carried out, addressing the fission spectrum dependency, the self-shielding effects of the elastic scattering cross sections, and the self-shielding effects of the 240Pu and 242Pu capture cross sections below 4.0 eV. The results of benchmark calculations with the libraries based on JENDL-3.2, ENDF/B-VI.5, JEF-2.2, and the 1986 WIMS-D library were intercompared. The multiplication factors for the thermal lattices are slightly underpredicted by all libraries (by up to 1% with ENDF/B-VI.5). The k_eff values with the library based on JENDL-3.2 are slightly higher than those of ENDF/B-VI.5 and JEF-2.2. The spectral indices for the lattices with JENDL-3.2 agree with the measured quantities within the uncertainties of the experiments. The calculated amounts of some isotopes such as 149Sm, 237Np, 238Pu, 242Cm and 243Cm show large differences from the measured or reference values. (author)
Evidence-based Practice in libraries - Principles and discussions
DEFF Research Database (Denmark)
Johannsen, Carl Gustav
2012-01-01
The article examines problems concerning the introduction and future implementation of evidence-based practice (EBP) in libraries. It includes important conceptual distinctions and definitions, and it reviews the more controversial aspects of EBP, primarily based on experiences from Denmark. The purpose of the article is both to qualify existing scepticism and reservations and, perhaps, to clarify misunderstandings and objections through the presentation of arguments and data.
Bergshoeff, E.; Pope, C.N.; Stelle, K.S.
1990-01-01
We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.
An automated procedure for covariation-based detection of RNA structure
International Nuclear Information System (INIS)
Winker, S.; Overbeek, R.; Woese, C.R.; Olsen, G.J.; Pfluger, N.
1989-12-01
This paper summarizes our investigations into the computational detection of secondary and tertiary structure of ribosomal RNA. We have developed a new automated procedure that not only identifies potential bondings of secondary and tertiary structure, but also provides the covariation evidence that supports the proposed bondings, and any counter-evidence that can be detected in the known sequences. A small number of previously unknown bondings have been detected in individual RNA molecules (16S rRNA and 7S RNA) through the use of our automated procedure. Currently, we are systematically studying mitochondrial rRNA. Our goal is to detect tertiary structure within 16S rRNA and quaternary structure between 16S and 23S rRNA. Our ultimate hope is that automated covariation analysis will contribute significantly to a refined picture of ribosome structure. Our colleagues in biology have begun experiments to test certain hypotheses suggested by an examination of our program's output. These experiments involve sequencing key portions of the 23S ribosomal RNA for species in which the known 16S ribosomal RNA exhibits variation (from the dominant pattern) at the site of a proposed bonding. The hope is that the 23S ribosomal RNA of these species will exhibit corresponding complementary variation or generalized covariation. 24 refs
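A minimal version of the covariation signal such a procedure relies on is the mutual information between columns of a sequence alignment: positions that vary in a complementary way score highly and are proposed as paired (bonded). The scoring below is a generic sketch, not the authors' program:

```python
from collections import Counter
from math import log2

def covariation_score(alignment, i, j):
    """Mutual information between alignment columns i and j: a basic
    covariation signal used to propose paired (bonded) positions."""
    n = len(alignment)
    col_i = Counter(seq[i] for seq in alignment)
    col_j = Counter(seq[j] for seq in alignment)
    pairs = Counter((seq[i], seq[j]) for seq in alignment)
    mi = 0.0
    for (a, b), c in pairs.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((col_i[a] / n) * (col_j[b] / n)))
    return mi

# toy alignment: positions 0 and 3 covary (Watson-Crick-like), 1 and 2 do not
aln = ["GAAC", "CAAG", "GUAC", "CUAG", "GCAC", "CCAG"]
print(round(covariation_score(aln, 0, 3), 3))  # 1.0: perfect covariation
print(round(covariation_score(aln, 1, 2), 3))  # 0.0: no covariation signal
```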
Curriculum-based library instruction from cultivating faculty relationships to assessment
Blevins, Amy
2014-01-01
Curriculum-Based Library Instruction: From Cultivating Faculty Relationships to Assessment highlights the movement beyond one-shot instruction sessions, specifically focusing on situations where academic librarians have developed curriculum-based sessions and/or become involved in curriculum committees.
Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2018-05-01
A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.
Bayes Factor Covariance Testing in Item Response Models.
Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip
2017-12-01
Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
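The central algebraic trick, an orthogonal (Helmert) transformation that diagonalizes a compound symmetry covariance so its components can be handled in closed form, can be checked numerically. This is a generic sketch with illustrative values of σ² and ρ, not the paper's full item response model:

```python
import numpy as np

def helmert_matrix(p):
    """Orthonormal Helmert matrix: first row is the scaled mean contrast;
    row k contrasts coordinate k with the mean of the coordinates before it."""
    H = np.zeros((p, p))
    H[0] = 1.0 / np.sqrt(p)
    for k in range(1, p):
        H[k, :k] = 1.0
        H[k, k] = -k
        H[k] /= np.sqrt(k * (k + 1))
    return H

p, sigma2, rho = 5, 1.0, 0.3
CS = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))  # compound symmetry
H = helmert_matrix(p)
D = H @ CS @ H.T   # diagonal: one 'common' and p-1 equal 'contrast' variances
print(np.round(np.diag(D), 6))   # [2.2 0.7 0.7 0.7 0.7]
```

The transformed covariance has one component of size σ²(1 + (p-1)ρ) and p-1 identical components of size σ²(1-ρ), which is what makes inverse-gamma posteriors available for the covariance components.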
Evidence-Based Practice and School Libraries: Interconnections of Evidence, Advocacy, and Actions
Todd, Ross J.
2015-01-01
This author states that a professional focus on evidence-based practice (EBP) for school libraries emerged from the International Association of School Librarianship conference where he presented the concept. He challenged the school library profession to actively engage in professional and reflective practices that chart, measure, document, and…
Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.
Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin
2016-10-10
We address the problem of face video retrieval in TV-series: searching video clips for the presence of a specific character, given a single face track of that character. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, while on the other hand the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated on the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance using an extremely compact code of only 128 bits.
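The first modeling step, summarizing a face track by the sample covariance of its per-frame features, can be sketched as follows. The toy frames and flattened-pixel features are illustrative; the paper's CVC additionally learns a supervised binary encoding of this descriptor:

```python
import numpy as np

def track_covariance(frames):
    """Summarize a face track by the sample covariance of per-frame
    feature vectors (here simply the flattened pixels)."""
    F = np.stack([f.ravel() for f in frames])        # (n_frames, d)
    F = F - F.mean(axis=0)
    return F.T @ F / max(len(frames) - 1, 1)

rng = np.random.default_rng(2)
frames = [rng.random((8, 8)) for _ in range(20)]     # toy 8x8 face crops
C = track_covariance(frames)
print(C.shape)                                       # (64, 64)
```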
Development of the adjusted nuclear cross-section library based on JENDL-3.2 for large FBR
International Nuclear Information System (INIS)
Yokoyama, Kenji; Ishikawa, Makoto; Numata, Kazuyuki
1999-04-01
JNC (and PNC) had developed an adjusted nuclear cross-section library in which the results of the JUPITER experiments were reflected. Using this adjusted library, a distinct improvement in the accuracy of the nuclear design of FBR cores was achieved. In recent research, JNC has been developing a database of other integral data in addition to the JUPITER experiments, aiming at further improvements in accuracy and reliability. In 1991, the adjusted library based on JENDL-2, JFS-3-J2(ADJ91R), was developed, and it has been used in design research for FBRs. As an evaluated nuclear data library, however, JENDL-3.2 is now in use. Therefore, the authors developed an adjusted library based on JENDL-3.2, called JFS-3-J3.2(ADJ98). It is known that the adjusted library based on JENDL-2 overestimated the sodium void reactivity worth by 10-20%; the adjusted library based on JENDL-3.2 is expected to solve this problem. JFS-3-J3.2(ADJ98) was produced with the same method as JFS-3-J2(ADJ91R) but used more integral parameters from the JUPITER experiments. This report also describes the design accuracy estimation for a 600 MWe class FBR with JFS-3-J3.2(ADJ98). Its main nuclear design parameters (multiplication factor, burn-up reactivity loss, breeding ratio, etc.), except the sodium void reactivity worth, are almost the same as those predicted with JFS-3-J2(ADJ91R). As for the sodium void reactivity, JFS-3-J3.2(ADJ98) estimates it about 4% lower than JFS-3-J2(ADJ91R) because of the change of the basic nuclear library from JENDL-2 to JENDL-3.2. (author)
Continuous Covariate Imbalance and Conditional Power for Clinical Trial Interim Analyses
Ciolino, Jody D.; Martin, Renee' H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.
2014-01-01
Oftentimes valid statistical analyses for clinical trials involve adjustment for known influential covariates, regardless of imbalance observed in these covariates at baseline across treatment groups. Thus, it must be the case that valid interim analyses also properly adjust for these covariates. There are situations, however, in which covariate adjustment is not possible, not planned, or simply carries less merit as it makes inferences less generalizable and less intuitive. In this case, covariate imbalance between treatment groups can have a substantial effect on both interim and final primary outcome analyses. This paper illustrates the effect of influential continuous baseline covariate imbalance on unadjusted conditional power (CP), and thus, on trial decisions based on futility stopping bounds. The robustness of the relationship is illustrated for normal, skewed, and bimodal continuous baseline covariates that are related to a normally distributed primary outcome. Results suggest that unadjusted CP calculations in the presence of influential covariate imbalance require careful interpretation and evaluation. PMID:24607294
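As a simplified illustration of conditional power, here unadjusted, for a one-sample two-sided z-type test, with the trial completed under the observed interim trend (the paper's setting involves covariate-adjusted two-arm comparisons, so this is only a sketch of the CP quantity itself):

```python
import numpy as np

def conditional_power(x_interim, n_total, n_sim=5000, seed=0):
    """Simulated conditional power: complete the trial repeatedly under the
    observed interim trend and count how often the final two-sided test
    crosses the 5% critical value."""
    rng = np.random.default_rng(seed)
    n_obs = len(x_interim)
    mu_hat = x_interim.mean()
    sd_hat = x_interim.std(ddof=1)
    z_crit = 1.96
    hits = 0
    for _ in range(n_sim):
        future = rng.normal(mu_hat, sd_hat, n_total - n_obs)
        x = np.concatenate([x_interim, future])
        z = x.mean() / (x.std(ddof=1) / np.sqrt(n_total))
        hits += abs(z) > z_crit
    return hits / n_sim

rng = np.random.default_rng(1)
interim = rng.normal(0.8, 1.0, 100)       # strongly positive interim trend
cp = conditional_power(interim, n_total=200)
print(round(cp, 3))                       # close to 1 for this trend
```

A futility rule would stop the trial when this quantity falls below a pre-specified bound, which is exactly where unadjusted CP becomes sensitive to baseline covariate imbalance.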
A Web-Based Electronic Book (e-book) Library: The netLibrary Model.
Connaway, Lynn Silipigni
2001-01-01
Identifies elements that are important for academic libraries to use in evaluating electronic books, including content; acquisition and collection development; software and hardware standards and protocols; digital rights management; access; archiving; privacy; the market and pricing; and enhancements and ideal features. Describes netLibrary, a…
Evaluation and processing of covariance data
International Nuclear Information System (INIS)
Wagner, M.
1993-01-01
These proceedings of a specialists' meeting on evaluation and processing of covariance data are divided into 4 parts: part 1 - needs for evaluated covariance data (2 papers); part 2 - generation of covariance data (15 papers); part 3 - processing of covariance files (2 papers); part 4 - experience in the use of evaluated covariance data (2 papers).
Multiple feature fusion via covariance matrix for visual tracking
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracker. Within the framework of a quantum genetic algorithm, the region covariance descriptor is used to fuse color, edge, and texture features, and a fast covariance intersection algorithm is used to update the model. The low dimension of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching, and updating process, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur, and so on.
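The covariance intersection update used for model fusion has a simple closed form; a minimal sketch (the "fast" variant in the paper approximates the choice of the weight w, which is fixed here for illustration):

```python
import numpy as np

def covariance_intersection(A, B, w):
    """Covariance intersection fusion of two covariance estimates:
    C^-1 = w A^-1 + (1 - w) B^-1. The fused estimate stays consistent
    even when the cross-correlation between the sources is unknown."""
    return np.linalg.inv(w * np.linalg.inv(A) + (1 - w) * np.linalg.inv(B))

A = np.diag([1.0, 4.0])      # e.g. covariance from the color model
B = np.diag([4.0, 1.0])      # e.g. covariance from the texture model
C = covariance_intersection(A, B, w=0.5)
print(np.round(C, 3))        # diagonal entries 1.6, 1.6
```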
Large Covariance Estimation by Thresholding Principal Orthogonal Complements
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
Covariant Quantization with Extended BRST Symmetry
Geyer, B.; Gitman, D. M.; Lavrov, P. M.
1999-01-01
A short review of covariant quantization methods based on BRST-antiBRST symmetry is given. In particular, problems of the correct definition of the Sp(2)-symmetric quantization scheme known as triplectic quantization are considered.
Covariance data processing code. ERRORJ
International Nuclear Information System (INIS)
Kosako, Kazuaki
2001-01-01
The covariance data processing code ERRORJ was developed to process the covariance data of JENDL-3.2. ERRORJ can process covariance data for cross sections (including resonance parameters), angular distributions, and energy distributions. (author)
Robust adaptive multichannel SAR processing based on covariance matrix reconstruction
Tan, Zhen-ya; He, Feng
2018-04-01
With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems in azimuth show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper proposes a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to obtain the multichannel SAR processing filter. The novel method improves processing performance under nonuniform scattering coefficients and is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
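The first stage, estimating a Capon spatial spectrum from the sample covariance, can be sketched for a uniform linear array (toy simulation; the method described above then reconstructs the interference-plus-noise covariance by integrating P(θ)a(θ)a(θ)^H over the ambiguous directions):

```python
import numpy as np

def capon_spectrum(R, angles, d=0.5):
    """Capon (MVDR) spatial spectrum for a uniform linear array with
    element spacing d in wavelengths: P(theta) = 1 / (a^H R^-1 a)."""
    n = R.shape[0]
    Rinv = np.linalg.inv(R)
    k = np.arange(n)
    P = np.empty(len(angles))
    for m, th in enumerate(angles):
        a = np.exp(2j * np.pi * d * k * np.sin(th))   # steering vector
        P[m] = 1.0 / np.real(a.conj() @ Rinv @ a)
    return P

# toy data: one strong source at +20 degrees plus unit-power noise
rng = np.random.default_rng(3)
n, snaps = 8, 500
theta0 = np.deg2rad(20.0)
a0 = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(theta0))
s = 2.0 * (rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps))
noise = (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps))) / np.sqrt(2)
X = np.outer(a0, s) + noise
R = X @ X.conj().T / snaps                 # sample covariance
angles = np.deg2rad(np.linspace(-90, 90, 181))
P = capon_spectrum(R, angles)
print(np.rad2deg(angles[P.argmax()]))      # peak close to +20
```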
Herdiansyah, Herdis; Satriya Utama, Andre; Safruddin; Hidayat, Heri; Gema Zuliana Irawan, Angga; Immanuel Tjandra Muliawan, R.; Mutia Pratiwi, Diana
2017-10-01
One of the factors that influences the development of science is the existence of libraries, in this case college libraries. A library located in a college environment aims to supply collections of literature to support research as well as educational activities for students of the college. Conceptually, every library now starts to practice environmental principles. For example, library "X", as a central library, claims to be an environmentally friendly library because it practices environmentally friendly management, but it has not yet incorporated satisfaction and service aspects for its users, including whether the environmentally friendly process is actually perceived by library users. Satisfaction can be seen from the comparison between the expectations and the experienced reality of library users. This paper analyzes the level of user satisfaction with library services in the campus area and the gap between the expectations and the reality felt by the library users. The results show a disparity between the aspiration of sustainable, environmentally friendly library management and the reality of how the library is managed, so that the library has not yet satisfied its users. The largest satisfaction gap is in the library collection, with a value of 1.57, while the smallest gap is in equal service to all students, with a value of 0.67.
The impact of computerisation of library operations on library ...
African Journals Online (AJOL)
The use of computer-based systems in libraries and information units is now in vogue. The era of manual systems in library operations is on its way to extinction. Recent developments in the information world tend towards globalized information and communication technology (ICT). The library as a dynamic institution cannot afford ...
Using machine learning to assess covariate balance in matching studies.
Linden, Ariel; Yarnold, Paul R
2016-12-01
In order to assess the effectiveness of matching approaches in observational studies, investigators typically present summary statistics for each observed pre-intervention covariate, with the objective of showing that matching reduces the difference in means (or proportions) between groups to as close to zero as possible. In this paper, we introduce a new approach to distinguish between study groups based on their distributions of the covariates using a machine-learning algorithm called optimal discriminant analysis (ODA). Assessing covariate balance using ODA as compared with the conventional method has several key advantages: the ability to ascertain how individuals self-select based on optimal (maximum-accuracy) cut-points on the covariates; the application to any variable metric and number of groups; its insensitivity to skewed data or outliers; and the use of accuracy measures that can be widely applied to all analyses. Moreover, ODA accepts analytic weights, thereby extending the assessment of covariate balance to any study design where weights are used for covariate adjustment. By comparing the two approaches using empirical data, we are able to demonstrate that using measures of classification accuracy as balance diagnostics produces highly consistent results to those obtained via the conventional approach (in our matched-pairs example, ODA revealed a weak statistically significant relationship not detected by the conventional approach). Thus, investigators should consider ODA as a robust complement, or perhaps alternative, to the conventional approach for assessing covariate balance in matching studies. © 2016 John Wiley & Sons, Ltd.
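The contrast between the conventional diagnostic and an accuracy-based one can be sketched as follows; `best_cutpoint_accuracy` is a simplified stand-in for ODA, which in full also handles weights, multiple groups, and permutation-based significance:

```python
import numpy as np

def standardized_mean_difference(x, g):
    """Conventional balance diagnostic: difference in group means of a
    covariate divided by the pooled standard deviation."""
    a, b = x[g == 0], x[g == 1]
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

def best_cutpoint_accuracy(x, g):
    """ODA-flavored diagnostic: best mean per-class accuracy achievable by
    a single cut-point on the covariate (0.5 means perfectly balanced)."""
    best = 0.5
    for c in np.unique(x):
        rule = x <= c                                   # predict group 0
        acc = (rule[g == 0].mean() + (~rule)[g == 1].mean()) / 2
        best = max(best, acc, 1 - acc)
    return best

rng = np.random.default_rng(4)
g = np.repeat([0, 1], 200)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(0.5, 1.0, 200)])
smd = standardized_mean_difference(x, g)
acc = best_cutpoint_accuracy(x, g)
print(round(smd, 2), round(acc, 2))   # imbalance visible on both scales
```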
Development of covariance capabilities in EMPIRE code
Energy Technology Data Exchange (ETDEWEB)
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on 89Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
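Of the two propagation routes mentioned, the stochastic (Monte Carlo) one is easy to sketch: sample model parameters from their assumed covariance, evaluate the cross-section model on an energy grid, and take the empirical covariance over the samples. The toy model and parameter values below are hypothetical stand-ins for an EMPIRE calculation:

```python
import numpy as np

def mc_cross_section_covariance(model, p_mean, p_cov, energies, n_samples=5000):
    """Monte Carlo propagation of model-parameter uncertainties to a
    cross-section covariance matrix on a fixed energy grid."""
    rng = np.random.default_rng(5)
    samples = rng.multivariate_normal(p_mean, p_cov, size=n_samples)
    sigma = np.array([model(p, energies) for p in samples])  # (n_samples, n_E)
    return np.cov(sigma, rowvar=False)

def toy_model(p, E):
    # stand-in cross section: 1/v-like term plus a flat component
    a, b = p
    return a / np.sqrt(E) + b

E = np.linspace(0.1, 20.0, 10)            # toy energy grid
C = mc_cross_section_covariance(toy_model, [2.0, 0.5],
                                [[0.01, 0.0], [0.0, 0.0025]], E)
print(C.shape)                            # (10, 10)
```

The deterministic (Kalman) route instead linearizes the model around the best parameters and propagates the parameter covariance through the sensitivity matrix; the abstract notes the two give comparable results.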
Bouchoucha, Taha; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2017-01-01
optimization problems. The computational complexity of these algorithms is very high, which makes them difficult to use in practice. In this paper, to achieve the desired beampattern, a low complexity discrete-Fourier-transform based closed-form covariance
Lorentz covariant canonical symplectic algorithms for dynamics of charged particles
Wang, Yulei; Liu, Jian; Qin, Hong
2016-12-01
In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSAs) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of a discrete symplectic structure and the Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which are difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when the Lorentz factor varies.
Pantazes, Robert J; Saraf, Manish C; Maranas, Costas D
2007-08-01
In this paper, we introduce and test two new sequence-based protein scoring systems (i.e., S1, S2) for assessing the likelihood that a given protein hybrid will be functional. By binning together amino acids with similar properties (i.e., volume, hydrophobicity and charge), the scoring systems S1 and S2 allow for the quantification of the severity of mismatched interactions in the hybrids. The S2 scoring system is found to significantly enrich a cytochrome P450 library in functional hybrids compared with other scoring methods. Given this scoring basis, we subsequently constructed two separate optimization formulations (i.e., OPTCOMB and OPTOLIGO) for optimally designing protein combinatorial libraries involving recombination or mutations, respectively. Notably, two separate versions of OPTCOMB are generated (i.e., models M1, M2), with the latter allowing for position-dependent parental fragment skipping. Computational benchmarking results demonstrate the efficacy of models OPTCOMB and OPTOLIGO to generate high scoring libraries of a prespecified size.
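The bin-and-score idea can be illustrated with a small sketch. The property bins, contact list, and scoring rule below are hypothetical simplifications; the actual S1/S2 definitions come from the paper:

```python
# Hypothetical property bins; the real S1/S2 scoring definitions,
# bin boundaries and contact maps come from the paper.
BINS = {}
for aa in "AVLIMFWC": BINS[aa] = "hydrophobic"
for aa in "STNQYGP":  BINS[aa] = "polar"
for aa in "KRH":      BINS[aa] = "positive"
for aa in "DE":       BINS[aa] = "negative"

def mismatch_score(hybrid, parents, contacts):
    """Count contacting residue pairs whose property-bin combination never
    occurs in a parent: a proxy for disruptive mismatched interactions."""
    parent_pairs = {(BINS[p[i]], BINS[p[j]])
                    for p in parents for i, j in contacts}
    return sum((BINS[hybrid[i]], BINS[hybrid[j]]) not in parent_pairs
               for i, j in contacts)

parents = ["AD", "KS"]          # toy two-residue parents
contacts = [(0, 1)]             # the two positions interact
print(mismatch_score("AD", parents, contacts))  # 0: parental pair preserved
print(mismatch_score("AS", parents, contacts))  # 1: mismatched hybrid pair
```

A library design formulation like OPTCOMB would then choose crossover points to minimize such mismatch scores across the whole library.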
DEFF Research Database (Denmark)
2018-01-01
The FRDS.Broker library is a teaching-oriented implementation of the Broker architectural pattern for distributed remote method invocation. It defines the central roles of the pattern and provides implementations of those roles that are not domain/use-case specific. It provides a JSON-based (Gson library) Requestor implementation, and implementations of the ClientRequestHandler and ServerRequestHandler roles in both Java-socket-based and HTTP/URI-tunneling-based variants. The latter is based upon the UniRest and Spark-Java libraries. The Broker pattern and the source code are explained...
Directory of Open Access Journals (Sweden)
Andaine Seguin-Orlando
Full Text Available Ancient DNA extracts consist of a mixture of endogenous molecules and contaminant DNA templates, often originating from environmental microbes. These two populations of templates exhibit different chemical characteristics, with the former showing depurination and cytosine deamination by-products resulting from post-mortem DNA damage. Such chemical modifications can interfere with the molecular tools used for building second-generation DNA libraries, and limit our ability to fully characterize the true complexity of ancient DNA extracts. In this study, we first use fresh DNA extracts to demonstrate that library preparation based on adapter ligation at AT-overhangs is biased against DNA templates starting with thymine residues, in contrast to blunt-end adapter ligation. We observe the same bias on fresh DNA extracts sheared on Bioruptor, Covaris, and nebulizer instruments. This contradicts previous reports suggesting that the bias originates from the methods used for shearing DNA. It also suggests that AT-overhang adapter ligation efficiency is affected in a sequence-dependent manner, resulting in an uneven representation of different genomic contexts. We then show how this bias can affect the base composition of ancient DNA libraries prepared with AT-overhang ligation, mainly by limiting the ability to ligate DNA templates starting with thymines and, therefore, deaminated cytosines. This results in particular nucleotide misincorporation damage patterns deviating from the signature generally expected for authenticating ancient sequence data. Consequently, we show that models adequate for estimating post-mortem DNA damage levels must be robust to the molecular tools used for building ancient DNA libraries.
Batson, Sarah; Score, Robert; Sutton, Alex J
2017-06-01
The aim of the study was to develop the three-dimensional (3D) evidence network plot system, a novel web-based interactive 3D tool to facilitate the visualization and exploration of covariate distributions and imbalances across evidence networks for network meta-analysis (NMA). We developed the 3D evidence network plot system within an AngularJS environment, using a third-party JavaScript library (Three.js) to create the 3D element of the application. Data used to create the 3D element for a particular topic are input via a Microsoft Excel template spreadsheet that has been specifically formatted to hold these data. We display and discuss the findings of applying the tool to two NMA examples considering multiple covariates. These two examples have been previously identified as having potentially important covariate effects, and they allow us to document the various features of the tool while illustrating how it can be used. The 3D evidence network plot system provides an immediate, intuitive, and accessible way to assess the similarity and differences between the values of covariates for individual studies within and between each treatment contrast in an evidence network. In this way, differences between the studies, which may invalidate the usual assumptions of an NMA, can be identified for further scrutiny. Hence, the tool facilitates NMA feasibility/validity assessments and aids in the interpretation of NMA results. The 3D evidence network plot system is the first tool designed specifically to visualize covariate distributions and imbalances across evidence networks in 3D. This will be of primary interest to systematic review and meta-analysis researchers and, more generally, those assessing the validity and robustness of an NMA to inform reimbursement decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
Organization of GC/MS and LC/MS metabolomics data into chemical libraries
Directory of Open Access Journals (Sweden)
DeHaven Corey D
2010-10-01
Full Text Available Abstract Background Metabolomics experiments involve generating and comparing small-molecule (metabolite) profiles from complex mixture samples to identify those metabolites that are modulated in altered states (e.g., disease, drug treatment, toxin exposure). One non-targeted metabolomics approach attempts to identify and interrogate all small molecules in a sample using GC or LC separation followed by MS or MSn detection. Analysis of the resulting large, multifaceted data sets to rapidly and accurately identify the metabolites is a challenging task that relies on the availability of chemical libraries of metabolite spectral signatures. A method for analyzing spectrometry data to identify and Quantify Individual Components in a Sample (QUICS) enables generation of chemical library entries from known standards and, importantly, from unknown metabolites present in experimental samples but without a corresponding library entry. This method accounts for all ions in a sample spectrum, performs library matches, and allows review of the data to quality-check library entries. The QUICS method identifies ions related to any given metabolite by correlating ion data across the complete set of experimental samples, thus revealing subtle spectral trends that may not be evident when viewing individual samples and are likely to be indicative of the presence of one or more otherwise obscured metabolites. Results LC-MS/MS or GC-MS data from 33 liver samples were analyzed simultaneously, which exploited the inherent biological diversity of the samples and the largely non-covariant chemical nature of the metabolites when viewed over multiple samples. Ions were partitioned by both retention time (RT) and covariance, which grouped ions from a single common underlying metabolite. This approach benefitted from using mass, time, and intensity data in aggregate over the entire sample set to reject outliers and noise, thereby producing higher-quality chemical identities. The …
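The partitioning of ions by retention time and covariance that this record describes can be illustrated with a toy sketch (a hypothetical greedy grouping, not the actual QUICS implementation; all names and thresholds here are ours):

```python
import numpy as np

def group_ions(intensity, rt, rt_tol=0.05, corr_min=0.9):
    """Greedily partition ions into putative metabolite groups.

    intensity : (n_ions, n_samples) array of ion intensities across samples.
    rt        : (n_ions,) array of retention times.
    Two ions join the same group when they co-elute (|dRT| <= rt_tol) and
    their intensities are highly correlated across samples (r >= corr_min).
    """
    corr = np.corrcoef(intensity)          # ion-by-ion correlation matrix
    n = len(rt)
    groups, assigned = [], [False] * n
    for i in range(n):
        if assigned[i]:
            continue
        group = [i]
        assigned[i] = True
        for j in range(i + 1, n):
            if (not assigned[j]
                    and abs(rt[i] - rt[j]) <= rt_tol
                    and corr[i, j] >= corr_min):
                group.append(j)
                assigned[j] = True
        groups.append(group)
    return groups

# toy example: two underlying metabolites, each observed as two related ions
rng = np.random.default_rng(0)
base1, base2 = rng.lognormal(size=50), rng.lognormal(size=50)
intensity = np.vstack([base1, 0.4 * base1, base2, 0.7 * base2])
rt = np.array([1.00, 1.01, 2.00, 2.02])
# group_ions(intensity, rt) -> [[0, 1], [2, 3]]
```

Correlating across the whole sample set, rather than within a single run, is what lets co-varying ions of one metabolite stand out from noise, as the record argues.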
Distance covariance for stochastic processes
DEFF Research Database (Denmark)
Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady
2017-01-01
The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
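The empirical distance covariance mentioned in this record has a compact matrix form: double-center the pairwise distance matrices of the two samples and average their elementwise product. A minimal sketch under the standard definition (the function name is ours, not from the paper):

```python
import numpy as np

def dist_cov(x, y):
    """Empirical (squared) distance covariance of two paired samples.

    x, y : arrays with n paired observations (1-D or (n, p) / (n, q)).
    Returns the V-statistic estimate V_n^2(X, Y) >= 0.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)

    def centered(z):
        # pairwise Euclidean distances, then double-centering
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    return (centered(x) * centered(y)).mean()
```

A permutation test on `y` turns this statistic into a test of independence, which is the use case the record describes.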
A problem-based learning curriculum in transition: the emerging role of the library.
Eldredge, J D
1993-01-01
This case study describes library education programs that serve the University of New Mexico School of Medicine, known for its innovative problem-based learning (PBL) curricular track. The paper outlines the specific library instruction techniques that are integrated into the curriculum. The adaptation of library instruction to a PBL mode of medical education, including the use of case studies, is discussed in detail. Also addressed are the planning processes for the new PBL curriculum schedu...
Covariance Manipulation for Conjunction Assessment
Hejduk, M. D.
2016-01-01
The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
Process Fragment Libraries for Easier and Faster Development of Process-based Applications
Directory of Open Access Journals (Sweden)
David Schumm
2011-01-01
Full Text Available The term “process fragment” is recently gaining momentum in business process management research. We understand a process fragment as a connected and reusable process structure, which has relaxed completeness and consistency criteria compared to executable processes. We claim that process fragments allow for an easier and faster development of process-based applications. As evidence to this claim we present a process fragment concept and show a sample collection of concrete, real-world process fragments. We present advanced application scenarios for using such fragments in development of process-based applications. Process fragments are typically managed in a repository, forming a process fragment library. On top of a process fragment library from previous work, we discuss the potential impact of using process fragment libraries in cross-enterprise collaboration and application integration.
MCNP4c JEFF-3.1 Based Libraries. Eccolib-Jeff-3.1 libraries
International Nuclear Information System (INIS)
Sublet, J.Ch.
2006-01-01
Continuous-energy and multi-temperature MCNP ACE-type libraries, derived from the Joint European Fusion-Fission JEFF-3.1 evaluations, have been generated using the NJOY-99.111 processing code system. They include the continuous-energy neutron JEFF-3.1/General Purpose, JEFF-3.1/Activation-Dosimetry, and thermal S(α,β) JEFF-3.1/Thermal libraries and data tables. The processing steps and features are explained, together with the Quality Assurance processes and records linked to the generation of such multipurpose libraries. (author)
Development and verification of a 281-group WIMS-D library based on ENDF/B-VII.1
International Nuclear Information System (INIS)
Dong, Zhengyun; Wu, Jun; Ma, Xubo; Yu, Hui; Chen, Yixue
2016-01-01
Highlights: • A new WIMS-D library based on the SHEM 281-group energy structure is developed. • The method for calculating the lambda factor is illustrated and its parameters are discussed. • The results show the improvements of this library compared with other libraries. - Abstract: The WIMS-D library based on the WIMS 69 or XMAS 172 energy-group structures is widely used in thermal reactor research. However, the resonance overlap effect is not taken into account in these two energy-group structures, which limits the accuracy of resonance treatment. The SHEM 281-group structure was designed in France to avoid the resonance overlap effect. In this study, a new WIMS-D library with the SHEM 281 mesh is developed by using the NJOY nuclear data processing system, based on the latest evaluated nuclear data library, ENDF/B-VII.1. Parameters that depend on the group structure, such as the thermal cut-off energy and the lambda factor, are discussed. The lambda factor is calculated by the Neutron Resonance Spectrum Calculation System, and the effect of this factor is analyzed. The new library is verified through the analysis of various criticality benchmarks using the DRAGON code. The values of the multiplication factor are consistent with the experimental data, and the results are also improved in comparison with other WIMS libraries.
Promotion: A study of the Library of the Department of Library and Information Science and Book Studies
Directory of Open Access Journals (Sweden)
Andreja Nagode
2003-01-01
Full Text Available The contribution presents basic information about academic libraries and their promotion. Librarians should have promotion knowledge since they have to promote and market their libraries. The paper presents the definition of academic libraries, their purpose, objectives and goals. Marketing and promotion in academic libraries are defined. The history of academic libraries and their promotion are described. The contribution presents results and the interpretation of the research, based on the study of users of the Library of the Department of Library and Information Science and Book studies. A new promotion plan for libraries based on the analysis of the academic library environment is introduced.
Speech-Based Information Retrieval for Digital Libraries
National Research Council Canada - National Science Library
Oard, Douglas W
1997-01-01
Libraries and archives collect recorded speech and multimedia objects that contain recorded speech, and such material may comprise a substantial portion of the collection in future digital libraries...
Library Standards: Evidence of Library Effectiveness and Accreditation.
Ebbinghouse, Carol
1999-01-01
Discusses accreditation standards for libraries based on experiences in an academic law library. Highlights include the accreditation process; the impact of distance education and remote technologies on accreditation; and a list of Internet sources of standards and information. (LRW)
Blind Source Separation Based on Covariance Ratio and Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Lei Chen
2014-01-01
Full Text Available The computational cost of blind source separation based on bio-inspired intelligence optimization is high. In order to solve this problem, we propose an effective blind source separation algorithm based on the artificial bee colony algorithm. In the proposed algorithm, the covariance ratio of the signals is utilized as the objective function, and the artificial bee colony algorithm is used to optimize it. The source signal component that is separated out is then removed from the mixtures using the deflation method. All the source signals can be recovered successfully by repeating the separation process. Simulation experiments demonstrate that the proposed algorithm achieves a significant reduction in computational cost and a significant improvement in the quality of signal separation compared to previous algorithms.
Assessing high affinity binding to HLA-DQ2.5 by a novel peptide library based approach
DEFF Research Database (Denmark)
Jüse, Ulrike; Arntzen, Magnus; Højrup, Peter
2011-01-01
Here we report on a novel peptide-library-based method for HLA class II binding motif identification. The approach is based on water-soluble HLA class II molecules and soluble dedicated peptide libraries. A high number of different synthetic peptides compete to interact with a limited amount … library. The eluted sequences fit very well with the previously described HLA-DQ2.5 peptide binding motif. This novel method, limited by library complexity and the sensitivity of mass spectrometry, allows the analysis of several thousand synthetic sequences concomitantly in a simple water-soluble format…
Optimal covariance selection for estimation using graphical models
Vichik, Sergey; Oshman, Yaakov
2011-01-01
We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...
SCHOOL COMMUNITY PERCEPTION OF LIBRARY APPS AGAINST LIBRARY EMPOWERMENT
Directory of Open Access Journals (Sweden)
Achmad Riyadi Alberto
2017-07-01
Full Text Available Abstract. This research is motivated by the rapid development of information and communication technology (ICT) in the library world, which allows present-day libraries to develop their services into digital-based services. This study aims to find out the school community's perception of the library apps developed by Riche Cynthia Johan, Hana Silvana, and Holin Sulistyo, and its influence on library empowerment at the library of SD Laboratorium Percontohan UPI Bandung. The library apps in this research belong to the context of m-libraries: libraries that meet the needs of their users by using mobile platforms such as smartphones, computers, and other mobile devices. Library empowerment is the utilization of all aspects of library operations to the fullest in order to achieve the expected goals. The analysis of the school community's perception of the library apps uses the Technology Acceptance Model (TAM) and includes: ease of use, usefulness, usability, usage trends, and real-use conditions. Library empowerment includes the aspects of: information empowerment, empowerment of learning resources, empowerment of human resources, empowerment of library facilities, and library promotion. The research method used in this research is the descriptive method with a quantitative approach. The population and sample in this research are the school community at SD Laboratorium Percontohan UPI Bandung. Sample criteria were determined using disproportionate stratified random sampling, with 83 respondents. Data analysis uses simple linear regression to measure the influence of the school community's perception of the library apps on library empowerment. The results of the data analysis show that there is an influence of the school community's perception of the library apps on library empowerment at the library of SD Laboratorium Percontohan UPI Bandung, as evidenced by the library acceptance level and the improvement in library empowerment.
The Experience of Evidence-Based Practice in an Australian Public Library: An Ethnography
Gillespie, Ann; Partridge, Helen; Bruce, Christine; Howlett, Alisa
2016-01-01
Introduction: This paper presents the findings from a project that investigated the lived experiences of library and information professionals in relation to evidence-based practice within an Australian public library. Method: The project employed ethnography, which allows holistic description of people's experiences within a particular community…
Affinity-based screening of combinatorial libraries using automated, serial-column chromatography
Energy Technology Data Exchange (ETDEWEB)
Evans, D.M.; Williams, K.P.; McGuinness, B. [PerSeptive Biosystems, Framingham, MA (United States)]; and others
1996-04-01
The authors have developed an automated serial chromatographic technique for screening a library of compounds based upon their relative affinity for a target molecule. A “target” column containing the immobilized target molecule is set in tandem with a reversed-phase column. A combinatorial peptide library is injected onto the target column. The target-bound peptides are eluted from the first column and transferred automatically to the reversed-phase column. The target-specific peptide peaks from the reversed-phase column are identified and sequenced. Using a monoclonal antibody (3E-7) against β-endorphin as a target, we selected a single peptide with sequence YGGFL from approximately 5800 peptides present in a combinatorial library. We demonstrated the applicability of the technology towards selection of peptides with predetermined affinity for bacterial lipopolysaccharide (LPS, endotoxin). We expect that this technology will have broad applications for high-throughput screening of chemical libraries or natural product extracts. 21 refs., 4 figs.
Analyses of criticality and reactivity for TRACY experiments based on JENDL-3.3 data library
International Nuclear Information System (INIS)
Sono, Hiroki; Miyoshi, Yoshinori; Nakajima, Ken
2003-01-01
The parameters on criticality and reactivity employed for computational simulations of the TRACY supercritical experiments were analyzed using a recently revised nuclear data library, JENDL-3.3. The parameters based on the JENDL-3.3 library were compared to those based on two formerly used libraries, JENDL-3.2 and ENDF/B-VI. In the analyses, the computational codes MVP, MCNP version 4C, and TWOTRAN were used. The following conclusions were obtained from the analyses: (1) The computational biases of the effective neutron multiplication factor attributable to the nuclear data libraries and to the computational codes do not depend on the TRACY experimental conditions, such as fuel conditions. (2) The fractional discrepancies in the kinetic parameters and coefficients of reactivity are within ∼5% between the three libraries. By comparison between calculations and measurements of the parameters, the JENDL-3.3 library is expected to give values closer to the measurements than the JENDL-3.2 and ENDF/B-VI libraries. (3) While the reactivity worth of transient rods expressed in the $ unit shows a ∼5% discrepancy between the three libraries, according to their respective β eff values, there is little discrepancy when it is expressed in the Δk/k unit. (author)
Template-based combinatorial enumeration of virtual compound libraries for lipids.
Sud, Manish; Fahy, Eoin; Subramaniam, Shankar
2012-09-25
A variety of software packages are available for the combinatorial enumeration of virtual libraries for small molecules, starting from specifications of core scaffolds with attachment points and lists of R-groups as SMILES or SD files. Although SD files include atomic coordinates for core scaffolds and R-groups, it is not possible to control the two-dimensional (2D) layout of the enumerated structures generated for virtual compound libraries, because different packages generate different 2D representations for the same structure. We have developed a software package called LipidMapsTools for the template-based combinatorial enumeration of virtual compound libraries for lipids. Virtual libraries are enumerated for the specified lipid abbreviations using matching lists of pre-defined templates and chain abbreviations, instead of core scaffolds and lists of R-groups provided by the user. 2D structures of the enumerated lipids are drawn in a specific and consistent fashion, adhering to the framework for representing lipid structures proposed by the LIPID MAPS consortium. LipidMapsTools is lightweight, relatively fast, and contains no external dependencies. It is an open-source package freely available under the terms of the modified BSD license.
A Covariance Generation Methodology for Fission Product Yields
Directory of Open Access Journals (Sweden)
Terranova N.
2016-01-01
Full Text Available Recent safety and economic concerns for modern nuclear reactor applications have fueled an outstanding interest in basic nuclear data evaluation improvement and completion. It has become immediately clear that the accuracy of our predictive simulation models is strongly affected by our knowledge of the input data. Therefore, strong efforts have been made to improve nuclear data and to generate complete and reliable uncertainty information able to yield proper uncertainty propagation on integral reactor parameters. Since in modern nuclear data banks (such as JEFF-3.1.1 and ENDF/B-VII.1) no correlations for fission yields are given, in the present work we propose a covariance generation methodology for fission product yields. The main goal is to reproduce the existing European library and to add covariance information to allow proper uncertainty propagation in depletion and decay heat calculations. To do so, we adopted the Generalized Least Squares Method (GLSM) implemented in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation), developed at CEA-Cadarache. Theoretical values employed in the Bayesian parameter adjustment are delivered thanks to a convolution of different models representing several quantities in fission yield calculations: the Brosa fission modes for pre-neutron mass distribution, a simplified Gaussian model for prompt neutron emission probability, the Wahl systematics for charge distribution, and the Madland-England model for the isomeric ratio. Some results will be presented for the thermal fission of U-235, Pu-239, and Pu-241.
Covariance specification and estimation to improve top-down Green House Gas emission estimates
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.
2015-12-01
The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of greenhouse gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e., the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework with footprints (i.e., sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions under different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability in the prior covariance and spatio-temporal variability in the model-data mismatch covariance, then we can compute more accurate posterior estimates. We discuss a few covariance models that introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matérn covariance class and estimate their parameters with specified mismatches. We find that the best-fitted prior covariances are not always best at recovering the truth. To achieve …
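The Matérn covariance class mentioned in this record has a standard closed form; a minimal evaluation sketch (parameter names are generic, not taken from the study):

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function, 2nd kind

def matern(d, sigma2=1.0, rho=1.0, nu=0.5):
    """Matérn covariance C(d) at distances d >= 0.

    sigma2 : variance (sill), rho : range, nu : smoothness.
    nu = 0.5 reduces to the exponential model sigma2 * exp(-d / rho).
    """
    d = np.asarray(d, dtype=float)
    c = np.full(d.shape, float(sigma2))        # C(0) = sigma2
    pos = d > 0
    s = np.sqrt(2.0 * nu) * d[pos] / rho
    c[pos] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * s ** nu * kv(nu, s)
    return c
```

Fitting `rho` and `nu` to data, as the record describes, is typically done by maximizing a likelihood built from the covariance matrix this function induces over station distances.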
Visualization and assessment of spatio-temporal covariance properties
Huang, Huang
2017-11-23
Spatio-temporal covariances are important for describing the spatio-temporal variability of underlying random fields in geostatistical data. For second-order stationary random fields, there exist subclasses of covariance functions that assume a simpler spatio-temporal dependence structure with separability and full symmetry. However, it is challenging to visualize and assess separability and full symmetry from spatio-temporal observations. In this work, we propose a functional data analysis approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lags summarize the properties of separability or symmetry for the given spatial pairs. We use functional boxplots to visualize the functional median and the variability of the test functions, where the extent of departure from zero at all temporal lags indicates the degree of non-separability or asymmetry. We also develop a rank-based nonparametric testing procedure for assessing the significance of the non-separability or asymmetry. Essentially, the proposed methods only require the analysis of temporal covariance functions. Thus, a major advantage over existing approaches is that there is no need to estimate any covariance matrix for selected spatio-temporal lags. The performance of the proposed methods is examined by simulations with various commonly used spatio-temporal covariance models. To illustrate our methods in practical applications, we apply them to real datasets, including weather station data and climate model outputs.
Early selection in open-pollinated Eucalyptus families based on competition covariates
Directory of Open Access Journals (Sweden)
Bruno Ettore Pavan
2014-06-01
Full Text Available The objective of this work was to evaluate the influence of intergenotypic competition in open-pollinated families of Eucalyptus and its effects on early selection efficiency. Two experiments were carried out, in which the timber volume was evaluated at three ages, in a randomized complete block design. Data from the three years of evaluation (experiment 1 at 2, 4, and 7 years; experiment 2 at 2, 5, and 7 years) were analyzed using mixed models. The following were estimated: variance components, genetic parameters, selection gains, effective number, early selection efficiency, selection gain per unit time, and coincidence of selection with and without the use of competition covariates. The competition effect was nonsignificant for ages under three years, and adjustment using competition covariates was unnecessary. Early selection for families is effective; families that have a late growth spurt are more vulnerable to competition, which markedly impairs ranking at the end of the cycle. Early selection is efficient according to all adopted criteria, and the age of around three years is the most recommended, given the high efficiency and accuracy in the indication of trees and families. The addition of competition covariates at the end of the cycle improves early selection efficiency for almost all studied criteria.
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
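The Gaussian pseudolikelihood criterion described in this record can be sketched for clusters of a common size; this is a simplified illustration under toy assumptions (the data, candidate structures, and function names are ours, not the paper's software):

```python
import numpy as np

def gaussian_pseudolik(resid, R):
    """Gaussian pseudolikelihood of clustered residuals under a working
    correlation matrix R (all clusters share size m).

    resid : (n_clusters, m) standardized residuals. Larger value = better
    fitting working covariance model.
    """
    _, logdet = np.linalg.slogdet(R)
    Rinv = np.linalg.inv(R)
    # sum over clusters of r_i^T R^{-1} r_i
    quad = np.einsum('ij,jk,ik->', resid, Rinv, resid)
    return -0.5 * (resid.shape[0] * logdet + quad)

def exchangeable(m, rho):
    """Exchangeable (compound symmetry) working correlation matrix."""
    return (1.0 - rho) * np.eye(m) + rho * np.ones((m, m))

# toy data: 300 clusters of size 4 with true exchangeable correlation 0.6
rng = np.random.default_rng(1)
m, rho = 4, 0.6
L = np.linalg.cholesky(exchangeable(m, rho))
resid = rng.standard_normal((300, m)) @ L.T

pl_ind = gaussian_pseudolik(resid, np.eye(m))          # independence model
pl_exc = gaussian_pseudolik(resid, exchangeable(m, rho))
```

On data with genuine within-cluster correlation, the exchangeable working model scores higher than independence, which is the kind of data-based selection the record investigates.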
Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang
2018-05-08
When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is reduced by only 4 dB. Beam patterns plotted using experimental data show that the interference suppression capability of the proposed beamformer outperforms that of the other tested beamformers.
A Robust Adaptive Unscented Kalman Filter for Nonlinear Estimation with Uncertain Noise Covariance.
Zheng, Binqi; Fu, Pengcheng; Li, Baoqing; Yuan, Xiaobing
2018-03-07
The unscented Kalman filter (UKF) may suffer from performance degradation, and even divergence, when there is a mismatch between the noise distributions assumed a priori by users and the actual ones in a real nonlinear system. To resolve this problem, this paper proposes a robust adaptive UKF (RAUKF) to improve the accuracy and robustness of state estimation with uncertain noise covariance. More specifically, at each timestep, a standard UKF is implemented first to obtain the state estimations using the newly acquired measurement data. Then an online fault-detection mechanism is adopted to judge whether it is necessary to update the current noise covariance. If necessary, an innovation-based method and a residual-based method are used to calculate the estimates of the current noise covariance of the process and measurement, respectively. By utilizing a weighting factor, the filter combines the last noise covariance matrices with these estimates to form the new noise covariance matrices. Finally, the state estimations are corrected according to the new noise covariance matrices and the previous state estimations. Compared with the standard UKF and other adaptive UKF algorithms, RAUKF converges faster to the actual noise covariance and thus achieves a better performance in terms of robustness, accuracy, and computation for nonlinear estimation with uncertain noise covariance, which is demonstrated by the simulation results.
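The innovation-based estimate referred to in this record rests on the identity E[d dᵀ] = H P Hᵀ + R for the innovations d = z − H x_pred; a simplified sketch for a linear measurement model (function and variable names are ours, not from the paper):

```python
import numpy as np

def estimate_R(innovations, HPHt):
    """Innovation-based estimate of the measurement-noise covariance R.

    innovations : (k, m) window of filter innovations d = z - H @ x_pred.
    HPHt        : (m, m) predicted measurement covariance H @ P @ H.T.
    Uses E[d d^T] = H P H^T + R, so R_hat = sample_cov(d) - H P H^T.
    """
    d = np.atleast_2d(np.asarray(innovations, dtype=float))
    C = d.T @ d / d.shape[0]      # sample second moment of the innovations
    return C - HPHt

# toy check: scalar case with true R = 0.5 and H P H^T = 0.2,
# so the innovations have variance 0.7
rng = np.random.default_rng(0)
d = rng.normal(0.0, np.sqrt(0.7), size=(20000, 1))
R_hat = estimate_R(d, np.array([[0.2]]))
```

In an adaptive filter, such an estimate would be blended with the previous R via a weighting factor, as the record describes, rather than adopted outright.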
Poincare covariance and κ-Minkowski spacetime
International Nuclear Information System (INIS)
Dabrowski, Ludwik; Piacitelli, Gherardo
2011-01-01
A fully Poincare covariant model is constructed as an extension of the κ-Minkowski spacetime. Covariance is implemented by a unitary representation of the Poincare group, and thus complies with the original Wigner approach to quantum symmetries. This provides yet another example (besides the DFR model) where Poincare covariance is realised a la Wigner in the presence of two characteristic dimensionful parameters: the light speed and the Planck length. In other words, a Doubly Special Relativity (DSR) framework may well be realised without deforming the meaning of 'Poincare covariance'. Highlights: • We construct a 4d model of noncommuting coordinates (quantum spacetime). • The coordinates are fully covariant under the undeformed Poincare group. • Covariance a la Wigner holds in the presence of two dimensionful parameters. • Hence we are not forced to deform covariance (e.g., as quantum groups). • The underlying κ-Minkowski model is unphysical; covariantisation does not cure this.
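For reference, the κ-Minkowski coordinate algebra this record extends is conventionally written (standard convention, not quoted from the paper) as:

```latex
[\hat{x}^0, \hat{x}^j] = \frac{i}{\kappa}\,\hat{x}^j, \qquad
[\hat{x}^j, \hat{x}^k] = 0, \qquad j, k = 1, 2, 3,
```

where κ is a mass-like deformation parameter, often identified with the inverse Planck length mentioned in the abstract.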
Growing Competition for Libraries.
Gibbons, Susan
2001-01-01
Describes the Questia subscription-based online academic digital books library. Highlights include weaknesses of the collection; what college students want from a library; importance of marketing; competition for traditional academic libraries that may help improve library services; and the ability of Questia to overcome barriers and…
Abnormalities in structural covariance of cortical gyrification in schizophrenia
Palaniyappan, Lena; Park, Bert; Balain, Vijender; Dangi, Raj; Liddle, Peter
2014-01-01
The highly convoluted shape of the adult human brain results from several well-coordinated maturational events that start from embryonic development and extend through the adult life span. Disturbances in these maturational events can result in various neurological and psychiatric disorders, resulting in abnormal patterns of morphological relationship among cortical structures (structural covariance). Structural covariance can be studied using graph theory-based approaches that evaluate topol...
Directory of Open Access Journals (Sweden)
Ana Vogrinčič Čepič
2013-09-01
Full Text Available Purpose: The article uses sociological concepts to rethink changes in library practices. Contemporary trends are discussed with regard to the changing nature of working habits, referring mostly to new technology and the emergence of the 'third space' phenomenon. The author does not regard libraries only as concrete public-service institutions, but rather as complex cultural forms, taking into consideration the wider social context with a stress on users’ practices in relation to space.
Methodology/approach: The article is based on (self-)observation of public library use, on (discourse) analysis of internal library documents (i.e. annual reports and plans), and on secondary sociological literature. As such, the cultural-form approach represents a classic method of the sociology of culture.
Results: The study of relevant material, in combination with direct personal experience, reveals socio-structural causes for the change in users’ needs and habits, and points to the difficulty of spatially redefining libraries as well as to the power of discourse.
Research limitations: The article is limited to an observation of users’ practices in some of the public libraries in Ljubljana and examines only a small number of annual reports; the findings are then further debated from a sociological perspective.
Originality/practical implications: The article offers sociological insight into current issues in library science and suggests a wider explanation that could answer some of the challenges of contemporary librarianship.
Snyder, Donna L.; Miller, Andrea L.
2009-01-01
What is the relative importance of current and emerging technologies in school library media programs? In order to answer this question, in Fall 2007 the authors administered a survey to 1,053 school library media specialists (SLMSs) throughout the state of Pennsylvania. As a part of the MSLS degree with Library Science K-12 certification, Clarion…
ENDF/B-VII.0: Next Generation Evaluated Nuclear Data Library for Nuclear Science and Technology
Energy Technology Data Exchange (ETDEWEB)
Chadwick, M B; Oblozinsky, P; Herman, M; Greene, N M; McKnight, R D; Smith, D L; Young, P G; MacFarlane, R E; Hale, G M; Haight, R C; Frankle, S; Kahler, A C; Kawano, T; Little, R C; Madland, D G; Moller, P; Mosteller, R; Page, P; Talou, P; Trellue, H; White, M; Wilson, W B; Arcilla, R; Dunford, C L; Mughabghab, S F; Pritychenko, B; Rochman, D; Sonzogni, A A; Lubitz, C; Trumbull, T H; Weinman, J; Brown, D; Cullen, D E; Heinrichs, D; McNabb, D; Derrien, H; Dunn, M; Larson, N M; Leal, L C; Carlson, A D; Block, R C; Briggs, B; Cheng, E; Huria, H; Kozier, K; Courcelle, A; Pronyaev, V; der Marck, S
2006-10-02
We describe the next generation general purpose Evaluated Nuclear Data File, ENDF/B-VII.0, of recommended nuclear data for advanced nuclear science and technology applications. The library, released by the U.S. Cross Section Evaluation Working Group (CSEWG) in December 2006, contains data primarily for reactions with incident neutrons, protons, and photons on almost 400 isotopes. The new evaluations are based on both experimental data and nuclear reaction theory predictions. The principal advances over the previous ENDF/B-VI library are the following: (1) New cross sections for U, Pu, Th, Np and Am actinide isotopes, with improved performance in integral validation criticality and neutron transmission benchmark tests; (2) More precise standard cross sections for neutron reactions on H, {sup 6}Li, {sup 10}B, Au and for {sup 235,238}U fission, developed by a collaboration with the IAEA and the OECD/NEA Working Party on Evaluation Cooperation (WPEC); (3) Improved thermal neutron scattering; (4) An extensive set of neutron cross sections on fission products developed through a WPEC collaboration; (5) A large suite of photonuclear reactions; (6) Extension of many neutron- and proton-induced reactions up to an energy of 150 MeV; (7) Many new light nucleus neutron and proton reactions; (8) Post-fission beta-delayed photon decay spectra; (9) New radioactive decay data; and (10) New methods developed to provide uncertainties and covariances, together with covariance evaluations for some sample cases. The paper provides an overview of this library, consisting of 14 sublibraries in the same ENDF-6 format as the earlier ENDF/B-VI library. We describe each of the 14 sublibraries, focusing on neutron reactions. Extensive validation, using radiation transport codes to simulate measured critical assemblies, shows major improvements: (a) The long-standing underprediction of low enriched U thermal assemblies is removed; (b) The {sup 238}U, {sup 208}Pb, and {sup 9}Be reflector
Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection
Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd
2015-02-01
Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread adoption of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment; hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) measured on just dozens of samples, due to various constraints. As a result, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The shrinkage covariance matrix overcomes the singularity problem by converting an unbiased estimator into an improved, biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques under many of the tested conditions.
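A minimal sketch of the combined idea: shrink the sample covariance toward a diagonal target after centering each gene at a robust trimmed mean. The fixed shrinkage intensity `lam`, the trimming proportion, and the function names are illustrative assumptions, not the paper's estimator (data-driven intensities such as Ledoit-Wolf are common in practice).

```python
import numpy as np

def trimmed_mean(x, prop=0.1):
    """Symmetric trimmed mean along the last axis (robust location)."""
    x = np.sort(x, axis=-1)
    k = int(prop * x.shape[-1])
    sl = slice(k, x.shape[-1] - k) if k > 0 else slice(None)
    return x[..., sl].mean(axis=-1)

def shrinkage_covariance(X, lam=0.2, trim=0.1):
    """Shrink the sample covariance toward a diagonal target, centering
    each variable at its trimmed mean (illustrative, fixed lam).

    X -- (n_samples, p_variables) data matrix.
    """
    mu = trimmed_mean(X.T, prop=trim)            # robust center per variable
    Xc = X - mu                                   # centered data
    S = Xc.T @ Xc / (X.shape[0] - 1)              # sample covariance
    target = np.diag(np.diag(S))                  # diagonal shrinkage target
    return (1 - lam) * S + lam * target
```

Because the diagonal target is positive definite whenever the sample variances are nonzero, the blended matrix is invertible even when variables far outnumber samples, which is what allows Hotelling's T2 to be computed.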
Libraries and Accessibility: Istanbul Public Libraries Case
Directory of Open Access Journals (Sweden)
Gül Yücel
2016-12-01
Full Text Available In this study, an accessibility assessment was conducted in Istanbul public libraries within the scope of public space. Public libraries across Turkey collectively serve more than 20 million users, with more than one thousand branches in city centres and more than one million registered members. Building principles and standards covering subjects such as site selection, the historical and architectural character of the region, distance from population centres, and design that allows disabled people to benefit fully from library services have been set out in regulations for the construction of new libraries. For existing libraries, works such as access for the disabled and fire-safety precautions are carried out within the scope of the related standards. Easy access for everyone is prioritized in public libraries, which play a significant role in lifelong learning. The purpose of the study is to develop solutions for the accessibility problems in public libraries. The study is based on visual inspections and assessments carried out, within the scope of accessibility, in the public libraries attached to the Istanbul Culture and Tourism Provincial Directorate Library and Publications Department within the provincial borders of Istanbul. Arrangements such as reading halls, study areas, and book shelves have been examined within the framework of accessible building standards. Building entrances, ramps and staircases, and the horizontal and vertical circulation of buildings have been taken into consideration within the scope of accessible building standards. Subjects such as reading and studying areas and book-shelf arrangements have been assessed for specific buildings. There are a total of 34 public libraries attached to the Istanbul Culture and Tourism Provincial Directorate, of which 20 are in the
Covariant field theory of closed superstrings
International Nuclear Information System (INIS)
Siopsis, G.
1989-01-01
The authors construct covariant field theories of both type-II and heterotic strings. Toroidal compactification is also considered. The interaction vertices are based on Witten's vertex representing three strings interacting at the mid-point. For closed strings, the authors thus obtain a bilocal interaction
Modeling Covariance Breakdowns in Multivariate GARCH
Jin, Xin; Maheu, John M
2014-01-01
This paper proposes a flexible way of modeling dynamic heterogeneous covariance breakdowns in multivariate GARCH (MGARCH) models. During periods of normal market activity, volatility dynamics are governed by an MGARCH specification. A covariance breakdown is any significant temporary deviation of the conditional covariance matrix from its implied MGARCH dynamics. This is captured through a flexible stochastic component that allows for changes in the conditional variances, covariances and impl...
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances
Directory of Open Access Journals (Sweden)
Manuel Gil
2014-09-01
Full Text Available Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.
Gil, Manuel
2014-01-01
Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.
Islamiyati, A.; Fatmawati; Chamidah, N.
2018-03-01
In longitudinal data with bi-responses, correlation occurs between measurements on the same subject and between the responses, which induces auto-correlated errors; this can be handled using a covariance matrix. In this article, we estimate the covariance matrix based on a penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error than the model without a covariance matrix.
Ritchie, A; Sowter, B
2000-01-01
This article reports on the results of an exploratory survey of the availability and accessibility of evidence-based information resources provided by medical libraries in Australia. Although barriers impede access to evidence-based information for hospital clinicians, the survey revealed that MEDLINE and CINAHL are available in over 90% of facilities. In most cases they are widely accessible via internal networks and the Internet. The Cochrane Library is available in 69% of cases. The Internet is widely accessible, and most libraries provide access to some full-text electronic journals. Strategies for overcoming restrictions and integrating information resources with clinical workflow are being pursued. State, regional and national public and private consortia are developing agreements utilising online technology. These could produce cost savings and more equitable access to a greater range of evidence-based resources.
MOSFET-like CNFET based logic gate library for low-power application: a comparative study
International Nuclear Information System (INIS)
Gowri Sankar, P. A.; Udhayakumar, K.
2014-01-01
The next generation of logic devices is expected to depend on radically new technologies, mainly due to the increasing difficulties and limitations of existing CMOS technology. MOSFET-like CNFETs should ideally be the best devices to work with for high-performance VLSI. This paper presents results of a comprehensive comparative study of a MOSFET-like carbon nanotube field-effect transistor (CNFET) technology based logic gate library, offering higher speed and lower power than conventional bulk CMOS libraries. It compares four promising logic families: complementary CMOS (C-CMOS), transmission gate (TG), complementary pass logic (CPL), and Domino logic (DL) styles. Based on these logic styles, the proposed library of static and dynamic NAND-NOR logic gates, XOR, multiplexer, and full adder functions is implemented efficiently and carefully analyzed with a test bench to measure propagation delay and power dissipation as functions of supply voltage. This analysis informs the choice of logic style for low-power, high-speed applications. The proposed logic gate libraries are simulated using Synopsys HSPICE based on the standard 32 nm CNFET model. The simulation results demonstrate that it is best to use C-CMOS-style gates implemented in CNFET technology, which are superior in performance to the other logic styles because of their low average power-delay product (PDP). The analysis also demonstrates how the optimum supply voltage varies with logic style in ultra-low-power systems. The robustness of the proposed logic gate library is also compared with conventional and state-of-the-art CMOS logic gate libraries. (semiconductor integrated circuits)
Validation of new 240Pu cross section and covariance data via criticality calculation
International Nuclear Information System (INIS)
Kim, Do Heon; Gil, Choong-Sup; Kim, Hyeong Il; Lee, Young-Ouk; Leal, Luiz C.; Dunn, Michael E.
2011-01-01
Recent collaboration between KAERI and ORNL has completed an evaluation of the 240 Pu neutron cross section with covariance data. The new 240 Pu cross section data have been validated through 28 criticality safety benchmark problems taken from the ICSBEP and/or CSEWG specifications with MCNP calculations. The calculation results based on the new evaluation have been compared with those based on recent evaluations such as ENDF/B-VII.0, JEFF-3.1.1, and JENDL-4.0. In addition, the new 240 Pu covariance data have been tested for some criticality benchmarks via the DANTSYS/SUSD3D-based nuclear data sensitivity and uncertainty analysis of k eff . The k eff uncertainty estimates obtained with the new covariance data have been compared with those obtained with JENDL-4.0, JENDL-3.3, and Low-Fidelity covariance data. (author)
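The sensitivity-and-uncertainty step mentioned above rests on the first-order "sandwich rule", which can be written in a few lines; the two-group numbers in the test below are purely illustrative, not values from the benchmark study.

```python
import numpy as np

def keff_uncertainty(sensitivities, covariance):
    """Propagate a relative cross-section covariance matrix to a
    relative k-eff uncertainty with the first-order sandwich rule
    var(dk/k) = s^T C s (standard S/U formula; group structure and
    numbers used with it here are illustrative).
    """
    s = np.asarray(sensitivities, float)
    C = np.asarray(covariance, float)
    return float(np.sqrt(s @ C @ s))
```

With a multigroup library such as the one described here, `s` holds the k-eff sensitivity per group and reaction, and `C` the corresponding block of the covariance matrix.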
New Neutron, Proton, and S(α,β) MCNP Data Libraries Based on ENDF/B-VII
International Nuclear Information System (INIS)
Little, Robert C.; Trellue, Holly R.; MacFarlane, Robert E.; Kahler, A.C.; Lee, Mary Beth; White, Morgan C.
2008-01-01
The general-purpose Evaluated Nuclear Data File ENDF/B-VII.0 was released in December 2006. A number of sub-libraries were included in ENDF/B-VII.0 such that data were provided for incident neutrons, photons, and charged particles. This paper describes the creation of MCNP data libraries at Los Alamos National Laboratory based on three ENDF/B-VII.0 sub-libraries: neutrons, protons, and thermal scattering. An ACE-formatted continuous-energy neutron data library called ENDF70 for MCNP has been produced. This library provides data for 390 materials at five temperatures: 293.6, 600, 900, 1200, and 2500 K. The library was processed primarily with Version 248 of NJOY99. Extensive checking and quality-assurance tests were applied to the data. Improvements to the processing code were made and certain evaluations were modified as a result of these tests. ENDF/B-VII.0 included proton evaluations for 48 target materials. Forty-seven proton evaluations (all except for 13 C) were processed at room temperature and combined into the MCNP library ENDF70PROT. Neutron thermal S(α,β) scattering data exist for twenty different materials in ENDF/B-VII.0. All twenty of these evaluations were processed at all applicable temperatures (these vary for each evaluation), and combined into the MCNP library ENDF70SAB. All of these ENDF/B-VII.0 based MCNP libraries (ENDF70, ENDF70PROT, and ENDF70SAB) are available as part of the MCNP5 1.50 release. (authors)
Denda, Kayo; Smulewitz, Gracemary
2004-01-01
In the contemporary library environment, the presence of the Internet and the infrastructure of the integrated library system suggest an integrated internal organization. The article describes the example of Douglass Rationalization, a team-based collaborative project to refocus the collection of Rutgers' Douglass Library, taking advantage of the…
Gaskins, J T; Daniels, M J
2016-01-02
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When the data consist of multiple groups, it is often assumed that the covariance matrices are either equal across groups or completely distinct. We seek methodology that allows borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior that proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
Covariance fitting of highly-correlated data in lattice QCD
Yoon, Boram; Jang, Yong-Chull; Jung, Chulwoo; Lee, Weonjong
2013-07-01
We address a frequently asked question on the covariance fitting of highly correlated data, such as our B K data based on SU(2) staggered chiral perturbation theory. The essence of the problem is that we do not have a fitting function accurate enough to fit extremely precise data. When eigenvalues of the covariance matrix are small, even a tiny error in the fitting function yields a large chi-square value and spoils the fitting procedure. We have applied a number of prescriptions available in the literature, such as the cut-off method, the modified covariance matrix method, and the Bayesian method. We also propose a brand-new method, the eigenmode shift (ES) method, which allows a full covariance fitting without modifying the covariance matrix at all. We provide a pedagogical example of data analysis in which the cut-off method manifestly fails but the rest work well. In our case of the B K fitting, the diagonal approximation, the cut-off method, the ES method, and the Bayesian method all work reasonably well in an engineering sense; however, in a theoretical sense, the meaning of χ 2 is easier to interpret for the ES method and the Bayesian method. Hence, the ES method can be a useful alternative tool for checking the systematic error caused by the covariance fitting procedure.
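The cut-off method mentioned above can be sketched as a chi-square restricted to the dominant eigenmodes of the covariance matrix. This is a generic illustration, not the paper's B K analysis; the function name and the choice to rank modes by eigenvalue size are assumptions.

```python
import numpy as np

def chi2_with_cutoff(residual, cov, n_keep):
    """Chi-square of a residual vector under a covariance matrix,
    keeping only the n_keep largest eigenmodes (the 'cut-off method';
    a sketch, not the paper's implementation).
    """
    w, V = np.linalg.eigh(cov)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_keep]      # indices of the n_keep largest modes
    proj = V[:, idx].T @ residual           # residual projected onto kept modes
    return float(np.sum(proj**2 / w[idx]))
```

Dropping the small-eigenvalue modes removes the directions in which a slightly inaccurate fitting function produces a huge χ 2, at the cost of discarding some of the data's constraining power.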
Covariant diagrams for one-loop matching
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhengkang [Michigan Center for Theoretical Physics (MCTP), University of Michigan,450 Church Street, Ann Arbor, MI 48109 (United States); Deutsches Elektronen-Synchrotron (DESY),Notkestraße 85, 22607 Hamburg (Germany)
2017-05-30
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariant diagrams for one-loop matching
International Nuclear Information System (INIS)
Zhang, Zhengkang
2017-01-01
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Developing Applications in the Era of Cloud-based SaaS Library Systems
Directory of Open Access Journals (Sweden)
Josh Weisman
2014-10-01
Full Text Available As the move to cloud-based SaaS library systems accelerates, we must consider what it means to develop applications when the core of the system isn't under the library's control. The entire application lifecycle is changing, from development to testing to production. Developing applications for cloud solutions raises new concerns, such as security, multi-tenancy, latency, and analytics. In this article, we review the landscape and suggest a view of how to be successful for the benefit of library staff and end-users in this new reality. We discuss what kinds of APIs and protocols vendors should be supporting, and suggest how best to take advantage of the innovations being introduced.
Fast Computing for Distance Covariance
Huo, Xiaoming; Szekely, Gabor J.
2014-01-01
Distance covariance and distance correlation have been widely adopted for measuring the dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other faster methods. In this paper we show that the computation of distance covariance and distance correlation of real-valued random variables can be...
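The definition-based O(n^2) computation that faster algorithms improve on can be written directly from the double-centered distance matrices; this sketch handles the one-dimensional real-valued case only.

```python
import numpy as np

def distance_covariance(x, y):
    """Sample distance covariance of two 1-d samples, computed directly
    from the definition (double-centered distance matrices).  This is
    the O(n^2) baseline, not a fast algorithm.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])     # pairwise |x_i - x_j|
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return float(np.sqrt(np.mean(A * B)))
```

The sample statistic is nonnegative by construction and vanishes when one sample is constant, which makes it a convenient correctness check for any faster implementation.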
On estimating cosmology-dependent covariance matrices
International Nuclear Information System (INIS)
Morrison, Christopher B.; Schneider, Michael D.
2013-01-01
We describe a statistical model to estimate the covariance matrix of matter tracer two-point correlation functions with cosmological simulations. Assuming a fixed number of cosmological simulation runs, we describe how to build a 'statistical emulator' of the two-point function covariance over a specified range of input cosmological parameters. Because the simulation runs with different cosmological models help to constrain the form of the covariance, we predict that the cosmology-dependent covariance may be estimated with a comparable number of simulations as would be needed to estimate the covariance for fixed cosmology. Our framework is a necessary first step in planning a simulations campaign for analyzing the next generation of cosmological surveys
Feng, Jingjie; Huang, Zhongyi; Zhou, Congcong; Ye, Xuesong
2018-06-01
It is widely recognized that pulse transit time (PTT) can track blood pressure (BP) over short periods of time, and hemodynamic covariates such as heart rate and stiffness index may also contribute to BP monitoring. In this paper, we derived a proportional relationship between BP and PTT -2 and proposed an improved method adopting hemodynamic covariates in addition to PTT for continuous BP estimation. We divided 28 subjects from the Multi-parameter Intelligent Monitoring for Intensive Care database into two groups (with/without cardiovascular diseases) and utilized a machine learning strategy based on regularized linear regression (RLR) to construct BP models with different covariates for the corresponding groups. RLR was performed for individuals as the initial calibration, while a recursive least squares algorithm was employed for re-calibration. The results showed that errors of BP estimation by our method stayed within the Association for the Advancement of Medical Instrumentation limits (- 0.98 ± 6.00 mmHg @ SBP, 0.02 ± 4.98 mmHg @ DBP) when the calibration interval extended to 1200-beat cardiac cycles. In comparison with two other representative studies, Chen's method remained accurate (0.32 ± 6.74 mmHg @ SBP, 0.94 ± 5.37 mmHg @ DBP) using a 400-beat calibration interval, while Poon's failed (- 1.97 ± 10.59 mmHg @ SBP, 0.70 ± 4.10 mmHg @ DBP) when using a 200-beat calibration interval. With additional hemodynamic covariates utilized, our method improved the accuracy of PTT-based BP estimation, decreased the calibration frequency, and showed potential for better continuous BP estimation.
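A generic closed-form sketch of the RLR calibration for a model of the form BP ≈ w0 + w1·PTT -2 + w2·HR. The feature choice, regularization strength, and function names are illustrative assumptions, not the paper's exact model or database fields.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Regularized linear regression (ridge) in closed form:
    w = (X^T X + lam*I)^(-1) X^T y.  A generic sketch of an RLR
    calibration step, with an unpenalized intercept.
    """
    X = np.column_stack([np.ones(len(X)), X])        # prepend intercept column
    I = np.eye(X.shape[1])
    I[0, 0] = 0.0                                    # do not penalize the intercept
    return np.linalg.solve(X.T @ X + lam * I, X.T @ y)

def bp_features(ptt, heart_rate):
    """Hypothetical covariate design: PTT^-2 plus heart rate."""
    return np.column_stack([1.0 / np.asarray(ptt) ** 2, heart_rate])
```

In a deployment following the abstract, `fit_ridge` would supply the initial per-subject calibration, with a recursive least squares pass updating the weights between calibrations.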
Collection-based analysis of selected medical libraries in the Philippines using Doody’s Core Titles
Directory of Open Access Journals (Sweden)
Efren Torres Jr., MLIS
2017-01-01
Conclusion: De La Salle Health Sciences Institute and University of Santo Tomas had sound medical collections based on Doody’s Core Titles. Collectively, the medical libraries shared common collection development priorities, as evidenced by similarities in their strong areas. The library budget and the role of the library director in book selection were among the factors that could have contributed to a high percentage of matched titles.
ERRORJ. Covariance processing code. Version 2.2
International Nuclear Information System (INIS)
Chiba, Go
2004-07-01
ERRORJ is a covariance processing code that can produce covariance data of multi-group cross sections, which are essential for uncertainty analyses of nuclear parameters such as the neutron multiplication factor. The ERRORJ code can process covariance data of cross sections, including resonance parameters and the angular and energy distributions of secondary neutrons; such covariance data cannot be processed by the other covariance processing codes. ERRORJ has been modified, and version 2.2 has been developed. This document describes the modifications and how to use them. The main modifications are as follows: non-diagonal elements of covariance matrices are calculated in the resonance energy region; an option for high-speed calculation is implemented; the perturbation amount is optimized in sensitivity calculations; the effect of resonance self-shielding on the covariance of multi-group cross sections can be considered; and it is possible to read the compact covariance format proposed by N.M. Larson. (author)
Gosho, Masahiko; Hirakawa, Akihiro; Noma, Hisashi; Maruo, Kazushi; Sato, Yasunori
2017-10-01
In longitudinal clinical trials, some subjects drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analyses with an "unstructured" (UN) covariance structure are increasingly common as a primary analysis for group comparisons in these trials. Furthermore, model-based covariance estimators have been routinely used for testing the group difference and estimating confidence intervals of the difference in MMRM analyses using the UN covariance. However, using the MMRM analysis with the UN covariance can lead to convergence problems in the numerical optimization, especially in trials with a small sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in settings with small sample sizes. We investigated the performance of the sandwich covariance estimator and of covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134), fitting simpler covariance structures, through a simulation study. In terms of the type 1 error rate and coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in the scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure. The model-based covariance estimator with the UN structure without adjustment of the degrees of freedom, which is frequently used in applications
Afghanistan Digital Library Initiative: Revitalizing an Integrated Library System
Directory of Open Access Journals (Sweden)
Yan HAN
2007-12-01
Full Text Available This paper describes an Afghanistan digital library initiative of building an integrated library system (ILS) for Afghanistan universities and colleges based on open-source software. As one of the goals of the Afghan eQuality Digital Libraries Alliance, the authors applied a systems analysis approach, evaluated different open-source ILSs, and customized the selected software to accommodate users' needs. Improvements include Arabic and Persian language support, user interface changes, call number label printing, and ISBN-13 support. To our knowledge, this ILS is the first at a large academic library to run on open-source software.
Erickson, Jonathan C; Putney, Joy; Hilbert, Douglas; Paskaranandavadivel, Niranchan; Cheng, Leo K; O'Grady, Greg; Angeli, Timothy R
2016-11-01
The aim of this study was to develop, validate, and apply a fully automated method for reducing large temporally synchronous artifacts present in electrical recordings made from the gastrointestinal (GI) serosa, which are problematic for properly assessing slow wave dynamics. Such artifacts routinely arise in experimental and clinical settings from motion, switching behavior of medical instruments, or electrode array manipulation. A novel iterative Covariance-Based Reduction of Artifacts (COBRA) algorithm sequentially reduced artifact waveforms using an updating across-channel median as a noise template, scaled and subtracted from each channel based on their covariance. Application of COBRA substantially increased the signal-to-artifact ratio (12.8 ± 2.5 dB), while minimally attenuating the energy of the underlying source signal by 7.9% on average ( -11.1 ± 3.9 dB). COBRA was shown to be highly effective for aiding recovery and accurate marking of slow wave events (sensitivity = 0.90 ± 0.04; positive-predictive value = 0.74 ± 0.08) from large segments of in vivo porcine GI electrical mapping data that would otherwise be lost due to a broad range of contaminating artifact waveforms. Strongly reducing artifacts with COBRA ultimately allowed for rapid production of accurate isochronal activation maps detailing the dynamics of slow wave propagation in the porcine intestine. Such mapping studies can help characterize differences between normal and dysrhythmic events, which have been associated with GI abnormalities, such as intestinal ischemia and gastroparesis. The COBRA method may be generally applicable for removing temporally synchronous artifacts in other biosignal processing domains.
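A toy version of the covariance-based template subtraction at the heart of the COBRA algorithm (a simplified sketch of the idea, not the published implementation; the scaling and iteration count are placeholders):

```python
import numpy as np

def cobra_sketch(signals, n_iter=3):
    """Iteratively reduce temporally synchronous artifacts: use the
    across-channel median as a noise template and subtract it from each
    channel, scaled by that channel's covariance with the template."""
    x = np.array(signals, dtype=float)        # shape (channels, samples)
    for _ in range(n_iter):
        template = np.median(x, axis=0)       # updating noise template
        t0 = template - template.mean()
        denom = t0 @ t0
        if denom == 0.0:
            break
        for ch in range(x.shape[0]):
            c0 = x[ch] - x[ch].mean()
            gain = (c0 @ t0) / denom          # covariance-based scaling
            x[ch] -= gain * t0
    return x
```

Synchronous artifacts dominate the across-channel median, so they are subtracted nearly in full, while channel-specific source signals are only mildly attenuated.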
Covariance matrix estimation for stationary time series
Xiao, Han; Wu, Wei Biao
2011-01-01
We obtain a sharp convergence rate for banded covariance matrix estimates of stationary processes. A precise order of magnitude is derived for the spectral radius of sample covariance matrices. We also consider a thresholded covariance matrix estimator that can better characterize sparsity if the true covariance matrix is sparse. As our main tool, we implement Toeplitz's idea [Math. Ann. 70 (1911) 351–376] and relate eigenvalues of covariance matrices to the spectral densities or Fourier transforms...
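The banding and thresholding operations studied above are one-liners; the paper's contribution is the theory for choosing the band width k and threshold tau, which are set ad hoc in this sketch:

```python
import numpy as np

def band_cov(S, k):
    """Banding: keep only entries within k positions of the diagonal."""
    i, j = np.indices(S.shape)
    return np.where(np.abs(i - j) <= k, S, 0.0)

def threshold_cov(S, tau):
    """Thresholding: zero small off-diagonal entries, keep the diagonal."""
    T = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T
```

Banding suits stationary series whose autocovariance decays with lag; thresholding suits general sparse covariance structures.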
Yu, Xu; Yu, Miao; Xu, Li-xun; Yang, Jing; Xie, Zhi-qiang
2015-01-01
The assumption that the training and testing samples are drawn from the same distribution is violated under the covariate shift setting, and most algorithms for covariate shift try to first estimate distributions and then reweight samples based on the estimated distributions. Due to the difficulty of estimating a correct distribution, previous methods cannot achieve good classification performance. In this paper, we first present two types of covariate shift problems. Rather than estim...
Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.
Ma, Yunbei; Zhou, Xiao-Hua
2017-02-01
For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate the finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.
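To illustrate the resampling construction behind the pointwise intervals, here is a sketch using a continuous outcome in place of time-to-event data; the kernel-weighted difference in arm means and all names are stand-ins for the authors' estimators:

```python
import numpy as np

def effect_curve(x, y, trt, grid, h=0.5):
    """Kernel-weighted difference in mean outcome between arms at each
    biomarker value (a crude stand-in for a covariate-specific effect)."""
    est = []
    for g in grid:
        w = np.exp(-0.5 * ((x - g) / h) ** 2)     # Gaussian kernel weights
        m1 = np.average(y[trt == 1], weights=w[trt == 1])
        m0 = np.average(y[trt == 0], weights=w[trt == 0])
        est.append(m1 - m0)
    return np.array(est)

def pointwise_ci(x, y, trt, grid, n_boot=200, alpha=0.05, seed=0):
    """Percentile bootstrap pointwise confidence intervals for the curve."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.empty((n_boot, len(grid)))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)               # resample subjects
        boots[b] = effect_curve(x[idx], y[idx], trt[idx], grid)
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return lo, hi
```

Simultaneous confidence bands over an interval of biomarker values need a stronger resampling adjustment than the pointwise percentiles shown here.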
Sp(2) covariant quantisation of general gauge theories
Energy Technology Data Exchange (ETDEWEB)
Vazquez-Bello, J L
1994-11-01
The Sp(2) covariant quantization of gauge theories is studied. The geometrical interpretation of gauge theories in terms of quasi principal fibre bundles Q(M{sub s}, G{sub s}) is reviewed. The Sp(2) algebra of ordinary Yang-Mills theory is then described. A consistent formulation of covariant Lagrangian quantisation for general gauge theories based on Sp(2) BRST symmetry is established. The original N = 1, ten-dimensional superparticle is considered as an example of infinitely reducible gauge algebras, and its Sp(2) BRST invariant action is given explicitly. (author). 18 refs.
Sp(2) covariant quantisation of general gauge theories
International Nuclear Information System (INIS)
Vazquez-Bello, J.L.
1994-11-01
The Sp(2) covariant quantization of gauge theories is studied. The geometrical interpretation of gauge theories in terms of quasi principal fibre bundles Q(M_s, G_s) is reviewed. The Sp(2) algebra of ordinary Yang-Mills theory is then described. A consistent formulation of covariant Lagrangian quantisation for general gauge theories based on Sp(2) BRST symmetry is established. The original N = 1, ten-dimensional superparticle is considered as an example of infinitely reducible gauge algebras, and its Sp(2) BRST invariant action is given explicitly. (author). 18 refs
Competing risks and time-dependent covariates
DEFF Research Database (Denmark)
Cortese, Giuliana; Andersen, Per K
2010-01-01
Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002] with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates…
Activities of covariance utilization working group
International Nuclear Information System (INIS)
Tsujimoto, Kazufumi
2013-01-01
During the past decade, there has been an interest in the calculational uncertainties induced by nuclear data uncertainties in the neutronics design of advanced nuclear systems. Covariance nuclear data is absolutely essential for uncertainty analysis. In the latest version of JENDL, JENDL-4.0, the covariance data for many nuclides, especially actinide nuclides, was substantially enhanced. The growing interest in uncertainty analysis and covariance data has led to the organisation of the working group for covariance utilization under the JENDL committee. (author)
Library based x-ray scatter correction for dedicated cone beam breast CT
International Nuclear Information System (INIS)
Shi, Linxi; Zhu, Lei; Vedantham, Srinivasan; Karellas, Andrew
2016-01-01
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method's performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, is sufficient to guarantee improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal
Library based x-ray scatter correction for dedicated cone beam breast CT
Energy Technology Data Exchange (ETDEWEB)
Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Vedantham, Srinivasan; Karellas, Andrew [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States)
2016-08-15
Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models of different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method's performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors' method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, is sufficient to guarantee improvements in SNU and CDR. For the 15 clinical datasets, the authors' method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal
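In code, the correction step described in these records reduces to a nearest-neighbor lookup in the precomputed library, keyed on the estimated breast diameter, followed by subtraction from the projection. The library entries below are placeholders; real entries come from GEANT4 Monte Carlo simulation:

```python
import numpy as np

# Hypothetical precomputed library: breast diameter (cm) -> scatter profile
# for one detector row, in normalized units.
scatter_library = {
    10: np.full(64, 0.08),
    12: np.full(64, 0.11),
    14: np.full(64, 0.15),
}

def correct_projection(projection, estimated_diameter):
    """Select the nearest library entry and subtract it from the measured
    projection; clip at zero so the corrected signal stays physical."""
    key = min(scatter_library, key=lambda d: abs(d - estimated_diameter))
    return np.clip(projection - scatter_library[key], 0.0, None)
```

Because the expensive Monte Carlo work happens offline, the online correction is just this lookup and subtraction, which is why the method adds minimal processing time.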
Semiparametric estimation of covariance matrices for longitudinal data.
Fan, Jianqing; Wu, Yichao
2008-12-01
Estimation of longitudinal data covariance structure poses significant challenges because the data are usually collected at irregular time points. A viable semiparametric model for covariance matrices was proposed in Fan, Huang and Li (2007) that allows one to estimate the variance function nonparametrically and to estimate the correlation function parametrically via aggregating information from irregular and sparse data points within each subject. However, the asymptotic properties of their quasi-maximum likelihood estimator (QMLE) of parameters in the covariance model are largely unknown. In the current work, we address this problem in the context of more general models for the conditional mean function including parametric, nonparametric, or semi-parametric. We also consider the possibility of rough mean regression function and introduce the difference-based method to reduce biases in the context of varying-coefficient partially linear mean regression models. This provides a more robust estimator of the covariance function under a wider range of situations. Under some technical conditions, consistency and asymptotic normality are obtained for the QMLE of the parameters in the correlation function. Simulation studies and a real data example are used to illustrate the proposed approach.
Sanchez, P.; Hinojosa, J.; Ruiz, R.
2005-06-01
Recently, neuromodeling methods for microwave devices have been developed. These methods are suitable for generating models of novel devices, and they allow fast and accurate simulations and optimizations. However, developing libraries with these methods is a formidable task, since it requires massive input-output data provided by an electromagnetic simulator or measurements, and repeated artificial neural network (ANN) training. This paper presents a strategy that reduces the cost of library development while retaining the advantages of the neuromodeling methods: high accuracy, a large range of geometrical and material parameters, and reduced CPU time. The library models are developed from a set of base prior knowledge input (PKI) models, which capture the characteristics common to all the models in the library, and high-level ANNs which give the library model outputs from the base PKI models. This technique is illustrated for a microwave multiconductor tunable phase shifter using anisotropic substrates. Closed-form relationships have been developed and are presented in this paper. The results show good agreement with expectations.
Improvement of covariance data for fast reactors
International Nuclear Information System (INIS)
Shibata, Keiichi; Hasegawa, Akira
2000-02-01
Over the past three years, we estimated covariances of the JENDL-3.2 data for the nuclides and reactions needed to analyze fast-reactor cores, and produced covariance files. The present work was undertaken to re-examine the covariance files and to make some improvements. The improved covariances are those for the inelastic scattering cross section of 16 O, the total cross section of 23 Na, the fission cross section of 235 U, the capture cross section of 238 U, and the resolved resonance parameters for 238 U. Moreover, the covariances of 233 U data were newly estimated in the present work. The covariances obtained were compiled in the ENDF-6 format. (author)
Dreano, Denis
2017-04-05
Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
JENDL-4.0: A new library for nuclear science and engineering
International Nuclear Information System (INIS)
Shibata, Keiichi; Iwamoto, Osamu; Nakagawa, Tsuneo; Iwamoto, Nobuyuki; Ichihara, Akira; Kunieda, Satoshi; Chiba, Satoshi; Furutaka, Kazuyoshi; Katakura, Jun-ichi; Otuka, Naohiko; Ohsawa, Takaaki; Murata, Toru; Matsunobu, Hiroyuki; Zukeran, Atsushi; Kamada, So
2011-01-01
The fourth version of the Japanese Evaluated Nuclear Data Library has been produced in cooperation with the Japanese Nuclear Data Committee. In the new library, much emphasis is placed on the improvements of fission product and minor actinoid data. Two nuclear model codes were developed in order to evaluate the cross sections of fission products and minor actinoids. Coupled-channel optical model parameters, which can be applied to wide mass and energy regions, were obtained for nuclear model calculations. Thermal cross sections of actinoids were carefully examined by considering experimental data or by the systematics of neighboring nuclei. Most of the fission cross sections were derived from experimental data. A simultaneous evaluation was performed for the fission cross sections of important uranium and plutonium isotopes above 10 keV. New evaluations were performed for the thirty fission-product nuclides that had not been contained in the previous library JENDL-3.3. The data for light elements and structural materials were partly reevaluated. Moreover, covariances were estimated mainly for actinoids. The new library was released as JENDL-4.0, and the data can be retrieved from the Web site of the JAEA Nuclear Data Center. (author)
Lorentz Covariance of Langevin Equation
International Nuclear Information System (INIS)
Koide, T.; Denicol, G.S.; Kodama, T.
2008-01-01
Relativistic covariance of a Langevin type equation is discussed. The requirement of Lorentz invariance generates an entanglement between the force and noise terms so that the noise itself should not be a covariant quantity. (author)
Incorporating Library School Interns on Academic Library Subject Teams
Sargent, Aloha R.; Becker, Bernd W.; Klingberg, Susan
2011-01-01
This case study analyzes the use of library school interns on subject-based teams for the social sciences, humanities, and sciences in the San Jose State University Library. Interns worked closely with team librarians on reference, collection development/management, and instruction activities. In a structured focus group, interns reported that the…
Dupont, Odile
2014-01-01
This book, based on the experiences of libraries serving interreligious dialogue, presents themes such as library tools serving dialogue between cultures, dialoguing collections, children and young adults dialoguing beyond borders, storytelling as dialogue, and librarians serving interreligious dialogue.
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-01-01
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix
Hu, Zongliang
2017-09-27
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-21
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
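One way to see why determinant estimation is hard in high dimensions: when p ≥ n the sample covariance is singular and its log-determinant degenerates, while a regularized estimate stays finite. A sketch with an ad hoc linear shrinkage weight (none of the eight compared methods is reproduced here):

```python
import numpy as np

def logdet_sample(X):
    """Log-determinant of the plain sample covariance (degenerate when
    the matrix is singular, i.e. whenever p >= n)."""
    sign, logdet = np.linalg.slogdet(np.cov(X, rowvar=False))
    return logdet if sign > 0 else -np.inf

def logdet_shrinkage(X, rho=0.2):
    """Log-determinant after linear shrinkage toward a scaled identity:
    S* = (1 - rho) * S + rho * mean(diag(S)) * I."""
    S = np.cov(X, rowvar=False)
    target = np.mean(np.diag(S)) * np.eye(S.shape[0])
    Sstar = (1 - rho) * S + rho * target
    _, logdet = np.linalg.slogdet(Sstar)
    return logdet
```

The shrunk matrix has all eigenvalues bounded away from zero, so its log-determinant is well defined even in the p ≥ n regime where the plain sample covariance fails.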
Early grey matter changes in structural covariance networks in Huntington's disease.
Coppen, Emma M; van der Grond, Jeroen; Hafkemeijer, Anne; Rombouts, Serge A R B; Roos, Raymund A C
2016-01-01
Progressive subcortical changes are known to occur in Huntington's disease (HD), a hereditary neurodegenerative disorder. Less is known about the occurrence and cohesion of whole-brain grey matter changes in HD. We aimed to detect network integrity changes in grey matter structural covariance networks and examined relationships with clinical assessments. Structural magnetic resonance imaging data of premanifest HD (n = 30), HD patients (n = 30) and controls (n = 30) were used to identify ten structural covariance networks based on a novel technique using the co-variation of grey matter with independent component analysis in FSL. Group differences were studied controlling for age and gender. To explore whether our approach is effective in examining grey matter changes, regional voxel-based analysis was additionally performed. Premanifest HD and HD patients showed decreased network integrity in two networks compared to controls. One network included the caudate nucleus, precuneus and anterior cingulate cortex. Structural covariance might be a sensitive approach to reveal early grey matter changes, especially in premanifest HD.
Endf/B-VII.0 Based Library for Paragon - 313
International Nuclear Information System (INIS)
Huria, H.C.; Kucukboyaci, V.N.; Ouisloumen, M.
2010-01-01
A new 70-group library has been generated for the Westinghouse lattice physics code PARAGON using the ENDF/B-VII.0 nuclear data files. The new library retains the major features of the current library, including the number of energy groups and the reduction in the U-238 resonance integral. The upper bound for the up-scattering effects in the new library, however, has been moved from 2.1 eV to 4.0 eV for better MOX fuel predictions. The new library has been used to analyze standard benchmarks and also to compare the measured and predicted parameters for different types of Westinghouse and Combustion Engineering (CE) type operating reactor cores. Results indicate that the new library will not impact the reactivity, power distribution and temperature coefficient predictions over a wide range of physics design parameters; however, it will improve the MOX core predictions. In other words, ENDF/B-VI.3 and ENDF/B-VII.0 produce similar results for reactor core calculations. (authors)
ENDF/B-VII.0: Next Generation Evaluated Nuclear Data Library for Nuclear Science and Technology
International Nuclear Information System (INIS)
Chadwick, M.B.; Oblozinsky, P.; Herman, M.
2006-01-01
We describe the next generation general purpose Evaluated Nuclear Data File, ENDF/B-VII.0, of recommended nuclear data for advanced nuclear science and technology applications. The library, released by the U.S. Cross Section Evaluation Working Group (CSEWG) in December 2006, contains data primarily for reactions with incident neutrons, protons, and photons on almost 400 isotopes, based on experimental data and theory predictions. The principal advances over the previous ENDF/B-VI library are the following: (1) New cross sections for U, Pu, Th, Np and Am actinide isotopes, with improved performance in integral validation criticality and neutron transmission benchmark tests; (2) More precise standard cross sections for neutron reactions on H, 6 Li, 10 B, Au and for 235,238 U fission, developed by a collaboration with the IAEA and the OECD/NEA Working Party on Evaluation Cooperation (WPEC); (3) Improved thermal neutron scattering; (4) An extensive set of neutron cross sections on fission products developed through a WPEC collaboration; (5) A large suite of photonuclear reactions; (6) Extension of many neutron- and proton-induced evaluations up to 150 MeV; (7) Many new light nucleus neutron and proton reactions; (8) Post-fission beta-delayed photon decay spectra; (9) New radioactive decay data; (10) New methods for uncertainties and covariances, together with covariance evaluations for some sample cases; and (11) New actinide fission energy deposition. The paper provides an overview of this library, consisting of 14 sublibraries in the same ENDF-6 format as the earlier ENDF/B-VI library. We describe each of the 14 sublibraries, focusing on neutron reactions. Extensive validation, using radiation transport codes to simulate measured critical assemblies, shows major improvements: (a) The long-standing underprediction of low enriched uranium thermal assemblies is removed; (b) The 238 U and 208 Pb reflector biases in fast systems are largely removed; (c) ENDF/B-VI.8 good
Covariant electrodynamics in linear media: Optical metric
Thompson, Robert T.
2018-03-01
While the postulate of covariance of Maxwell's equations for all inertial observers led Einstein to special relativity, it was the further demand of general covariance—form invariance under general coordinate transformations, including between accelerating frames—that led to general relativity. Several lines of inquiry over the past two decades, notably the development of metamaterial-based transformation optics, have spurred a greater interest in the role of geometry and space-time covariance for electrodynamics in ponderable media. I develop a generally covariant, coordinate-free framework for electrodynamics in general dielectric media residing in curved background space-times. In particular, I derive a relation for the spatial medium parameters measured by an arbitrary timelike observer. In terms of those medium parameters I derive an explicit expression for the pseudo-Finslerian optical metric of birefringent media and show how it reduces to a pseudo-Riemannian optical metric for nonbirefringent media. This formulation provides a basis for a unified approach to ray and congruence tracing through media in curved space-times that may smoothly vary among positively refracting, negatively refracting, and vacuum.
Bayesian estimation of covariance matrices: Application to market risk management at EDF
International Nuclear Information System (INIS)
Jandrzejewski-Bouriga, M.
2012-01-01
In this thesis, we develop new methods of regularized covariance matrix estimation in a Bayesian setting. The regularization methodology employed is first related to shrinkage. We investigate a new Bayesian model of the covariance matrix, based on a hierarchical inverse-Wishart distribution, and then derive different estimators under standard loss functions. Comparisons between shrunk and empirical estimators are performed in terms of frequentist performance under different losses. This allows us to highlight the critical importance of the definition of the cost function and to show the persistent effect of the shrinkage-type prior on inference. Second, we consider the problem of covariance matrix estimation in Gaussian graphical models. While the issue is well treated in the decomposable case, it is not for non-decomposable graphs. We then describe a Bayesian and operational methodology to carry out the estimation of the covariance matrix of Gaussian graphical models, decomposable or not. This procedure is based on a new and objective method of graphical-model selection, combined with a constrained and regularized estimation of the covariance matrix of the chosen model. The procedures studied effectively manage missing data. These estimation techniques were applied to calculate the covariance matrices involved in market risk management for portfolios of EDF (Électricité de France), in particular for problems of calculating Value-at-Risk or in Asset Liability Management. (author)
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic", and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia
2013-05-30
In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.
Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices
Lan, Shiwei
2017-11-08
Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive $\Delta$-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of the Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
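The starting point of the framework, the decomposition of a covariance matrix into a variance part and a correlation part (Sigma = D R D with D = diag of standard deviations), can be sketched directly; the numbers below are illustrative, not from the paper.

```python
import math

# An illustrative 2x2 covariance matrix.
Sigma = [[4.0, 1.2], [1.2, 1.0]]

# D = diag(sd): standard deviations from the diagonal of Sigma.
sd = [math.sqrt(Sigma[i][i]) for i in range(2)]

# Correlation matrix R: unit diagonal, off-diagonals in [-1, 1].
R = [[Sigma[i][j] / (sd[i] * sd[j]) for j in range(2)] for i in range(2)]

# Reassembling D R D must reproduce Sigma exactly.
recon = [[sd[i] * R[i][j] * sd[j] for j in range(2)] for i in range(2)]
print(R)
print(recon)
```

Modeling sd and R separately is what lets the paper place sphere-based priors on the correlation part alone, independent of the variance scales.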
Development of library documents in BRICEM on network-based environment
International Nuclear Information System (INIS)
Gao Renxi
2010-01-01
With the development of the internet, the transformation from a traditional library to a modern one is essential to the development of the BRICEM (Beijing Research Institute of Chemical Engineering and Metallurgy) Technology Library, as it is to other libraries. Drawing on the realities of BRICEM and its library, this thesis works out a tentative plan, as well as concrete measures and procedures, for digitalising and online-sharing the resources of the BRICEM Technology Library. (author)
Maintainability analysis considering time-dependent and time-independent covariates
International Nuclear Information System (INIS)
Barabadi, Abbas; Barabady, Javad; Markeset, Tore
2011-01-01
Traditional parametric methods for assessing maintainability most often consider only time to repair (TTR) as a single explanatory variable. However, to predict availability more precisely for high-availability systems, a better model is needed to quantify the effect of the operational environment on maintainability. The proportional repair model (PRM), developed on the basis of the proportional hazards model (PHM), may be used to analyze maintainability in the presence of covariates. In the PRM, the effect of covariates is considered to be time-independent; however, this assumption may not be valid in some situations. The aim of this paper is to develop the Cox regression model and its extension in the presence of time-dependent covariates for determining maintainability. A simple case study demonstrates how the model can be applied in a real case.
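The proportional-hazards idea with a time-dependent covariate can be sketched numerically: the repair rate is a baseline rate scaled by exp(beta * z(t)), and the probability that a repair is still unfinished is exp of minus the cumulative hazard. This is an illustrative toy, not the paper's fitted model; the baseline rate, beta, and the covariate switch at t = 5 (e.g. a change in operating environment) are all assumptions.

```python
import math

lam0, beta = 0.2, 0.7                 # assumed baseline repair rate and covariate effect

def z(t):
    # Time-dependent covariate: operating environment changes at t = 5.
    return 1.0 if t >= 5.0 else 0.0

def cumulative_hazard(t, steps=10000):
    # H(t) = integral_0^t lam0 * exp(beta * z(u)) du, by a left Riemann sum.
    dt = t / steps
    return sum(lam0 * math.exp(beta * z(i * dt)) * dt for i in range(steps))

# S(t) = exp(-H(t)): probability the repair is not yet completed at time t.
for t in (2.0, 5.0, 10.0):
    print(t, math.exp(-cumulative_hazard(t)))
```

After t = 5 the hazard jumps by the factor exp(0.7), so the survival (unrepaired) probability falls faster, which is exactly the effect a time-independent PRM cannot capture.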
Spatial implications of covariate adjustment on patterns of risk
DEFF Research Database (Denmark)
Sabel, Clive Eric; Wilson, Jeff Gaines; Kingham, Simon
2007-01-01
Epidemiological studies that examine the relationship between environmental exposures and health often address other determinants of health that may influence the relationship being studied by adjusting for these factors as covariates. While disease surveillance methods routinely control for covariates such as deprivation, there has been limited investigative work on the spatial movement of risk at the intraurban scale due to the adjustment. It is important that the nature of any spatial relocation be well understood, as a relocation to areas of increased risk may also introduce additional localised factors that influence the exposure-response relationship. This paper examines the spatial patterns of relative risk and clusters of hospitalisations based on an illustrative small-area example from Christchurch, New Zealand. A four-stage test of the spatial relocation effects of covariate…
Survival analysis with functional covariates for partial follow-up studies.
Fang, Hong-Bin; Wu, Tong Tong; Rapoport, Aaron P; Tan, Ming
2016-12-01
Predictive or prognostic analysis plays an increasingly important role in the era of personalized medicine to identify subsets of patients whom the treatment may benefit the most. Although various time-dependent covariate models are available, such models require that covariates be followed in the whole follow-up period. This article studies a new class of functional survival models where the covariates are only monitored in a time interval that is shorter than the whole follow-up period. This paper is motivated by the analysis of a longitudinal study on advanced myeloma patients who received stem cell transplants and T cell infusions after the transplants. The absolute lymphocyte cell counts were collected serially during hospitalization. Those patients are still followed up if they are alive after hospitalization, while their absolute lymphocyte cell counts cannot be measured after that. Another complication is that absolute lymphocyte cell counts are sparsely and irregularly measured. The conventional method using Cox model with time-varying covariates is not applicable because of the different lengths of observation periods. Analysis based on each single observation obviously underutilizes available information and, more seriously, may yield misleading results. This so-called partial follow-up study design represents increasingly common predictive modeling problem where we have serial multiple biomarkers up to a certain time point, which is shorter than the total length of follow-up. We therefore propose a solution to the partial follow-up design. The new method combines functional principal components analysis and survival analysis with selection of those functional covariates. It also has the advantage of handling sparse and irregularly measured longitudinal observations of covariates and measurement errors. Our analysis based on functional principal components reveals that it is the patterns of the trajectories of absolute lymphocyte cell counts, instead of
gLibrary/DRI: A grid-based platform to host multiple repositories for digital content
International Nuclear Information System (INIS)
Calanducci, A.; Gonzalez Martin, J. M.; Ramos Pollan, R.; Rubio del Solar, M.; Tcaci, S.
2007-01-01
In this work we present the gLibrary/DRI (Digital Repositories Infrastructure) platform. gLibrary/DRI extends gLibrary, a system with an easy-to-use web front-end designed to save and organize multimedia assets on Grid-based storage resources. The main goal of the extended platform is to reduce the cost, in terms of time and effort, that a repository provider spends to get its repository deployed. This is achieved by providing a common infrastructure and a set of mechanisms (APIs and specifications) that repository providers use to define the data model, the access to the content (by navigation trees and filters) and the storage model. DRI offers a generic way to provide all this functionality; nevertheless, providers can add specific behaviours to the default functions for their repositories. The architecture is Grid-based (VO system, data federation and distribution, computing power, etc.). A working example based on a mammograms repository is also presented. (Author)
On superfield covariant quantization in general coordinates
International Nuclear Information System (INIS)
Gitman, D.M.; Moshin, P. Yu.; Tomazelli, J.L.
2005-01-01
We propose a natural extension of the BRST-antiBRST superfield covariant scheme in general coordinates. Thus, the coordinate dependence of the basic tensor fields and scalar density of the formalism is extended from the base supermanifold to the complete set of superfield variables. (orig.)
On superfield covariant quantization in general coordinates
Energy Technology Data Exchange (ETDEWEB)
Gitman, D.M. [Universidade de Sao Paulo, Instituto de Fisica, Sao Paulo, S.P (Brazil); Moshin, P. Yu. [Universidade de Sao Paulo, Instituto de Fisica, Sao Paulo, S.P (Brazil); Tomsk State Pedagogical University, Tomsk (Russian Federation); Tomazelli, J.L. [UNESP, Departamento de Fisica e Quimica, Campus de Guaratingueta (Brazil)
2005-12-01
We propose a natural extension of the BRST-antiBRST superfield covariant scheme in general coordinates. Thus, the coordinate dependence of the basic tensor fields and scalar density of the formalism is extended from the base supermanifold to the complete set of superfield variables. (orig.)
Teaching Electronic Literacy A Concepts-Based Approach for School Library Media Specialists
Craver, Kathleen W
1997-01-01
School library media specialists will find this concepts-based approach to teaching electronic literacy an indispensable basic tool for instructing students and teachers. It provides step-by-step instruction on how to find and evaluate needed information from electronic databases and the Internet, how to formulate successful electronic search strategies and retrieve relevant results, and how to interpret and critically analyze search results. The chapters contain a suggested lesson plan and sample assignments for the school library media specialist to use in teaching electronic literacy skills
Covariant diagrams for one-loop matching
International Nuclear Information System (INIS)
Zhang, Zhengkang
2016-10-01
We present a diagrammatic formulation of the recently revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariant diagrams for one-loop matching
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2016-10-15
We present a diagrammatic formulation of the recently revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Construction of naïve camelids VHH repertoire in phage display-based library.
Sabir, Jamal S M; Atef, Ahmed; El-Domyati, Fotouh M; Edris, Sherif; Hajrah, Nahid; Alzohairy, Ahmed M; Bahieldin, Ahmed
2014-04-01
Camelids have unique antibodies, namely HCAbs (VHH), commercially named Nanobodies® (Nb), that are composed only of a heavy-chain homodimer. As libraries based on immunized camelids are time-consuming, costly and likely redundant for certain antigens, we describe the construction of a naïve camelid VHH library from the blood serum of non-immunized camelids, with affinity in the subnanomolar range and suitable for standard immune applications. This approach is rapid and recovers a VHH repertoire with the advantages of being more diverse, non-specific and devoid of subpopulations of specific antibodies, which allows the identification of binders for any potential antigen (or pathogen). RNAs from a number of camelids from Saudi Arabia were isolated and cDNAs of the diverse vhh gene were amplified; the resulting amplicons were cloned in the phage display pSEX81 vector. The size of the library was found to be within the required range (10^7) suitable for subsequent applications in disease diagnosis and treatment. Two hundred clones were randomly selected, and the inserted gene library was either estimated for redundancy or sequenced and aligned to the reference camelid vhh gene (acc. No. ADE99145). Results indicated complete non-specificity of this small library, in which no single event of redundancy was detected. These results indicate the efficacy of following this approach to yield a gene library large and diverse enough to secure the presence of the version encoding the required antibodies for any target antigen. This work is a first step towards the construction of phage display-based biosensors useful in disease (e.g., tuberculosis) diagnosis and treatment. Copyright © 2014 Académie des sciences. Published by Elsevier SAS. All rights reserved.
International Nuclear Information System (INIS)
Rahman, Mafizur; Takano, Hideki
2001-01-01
A new 69-group library of multigroup constants for the lattice code WIMS-D/4 has been generated with an improved resonance treatment, processing nuclear data from JENDL-3.2 with NJOY91.108. A parallel ENDF/B-VI-based library has also been constructed for intercomparison of results. Benchmark calculations for a number of thermal reactor critical assemblies of both uranium and plutonium fuels have been performed with the code WIMS-D/4.1 using its three different libraries: the original WIMS library (NEA-0329/10) and the new ENDF/B-VI and JENDL-3.2 based libraries. The results calculated with both the ENDF and JENDL based libraries show a similar tendency and are found to be in better agreement with the experimental values. Benchmark parameters were further calculated with the comprehensive lattice code SRAC95. The results from SRAC95 and WIMS-D/4.1 (both using JENDL-3.2 based libraries) agree well with each other. The new library is also verified for its applicability to mixed-oxide cores of varying plutonium contents.
Generalized Extreme Value model with Cyclic Covariate Structure ...
Indian Academy of Sciences (India)
enhances the estimation of the return period; however, its application is … Cohn T A and Lins H F 2005 Nature's style: Naturally trendy; Geophysical … Final non-stationary GEV models with covariate structures shortlisted based on …
Changing State Digital Libraries
Pappas, Marjorie L.
2006-01-01
Research has shown that state virtual or digital libraries are evolving into websites that are loaded with free resources, subscription databases, and instructional tools. In this article, the author explores these evolving libraries based on the following questions: (1) How user-friendly are the state digital libraries?; (2) How do state digital…
Directory of Open Access Journals (Sweden)
I PUTU EKA IRAWAN
2013-11-01
Principal component regression is a method for overcoming multicollinearity that combines principal component analysis with regression analysis. Classical principal component analysis is computed from the regular covariance matrix. The covariance matrix is optimal if the data originate from a multivariate normal distribution, but it is very sensitive to the presence of outliers. An alternative used to overcome this problem is the Least Median of Squares - Minimum Covariance Determinant (LMS-MCD) method. The purpose of this research is to compare Principal Component Regression (RKU) and the Least Median of Squares - Minimum Covariance Determinant (LMS-MCD) method in dealing with outliers. In this study, the LMS-MCD method has a smaller bias and mean square error (MSE) than the RKU parameter estimates. A test based on the difference of parameter estimators also shows a greater difference for the LMS-MCD method than for the RKU method.
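The robustness idea behind the Least Median of Squares part of LMS-MCD can be shown in a toy example (this is an illustration of the LMS criterion only, not the paper's full LMS-MCD procedure; the data and grid search are assumptions). For a no-intercept line y = b x, ordinary least squares minimizes the mean squared residual and is dragged off by one gross outlier, while minimizing the median squared residual is not.

```python
import statistics

# Synthetic data with true slope b = 2, then one gross outlier (18 -> 60).
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9]
ys = [2.0 * x for x in xs]
ys[-1] = 60.0

def mean_sq(b):
    # OLS criterion: mean of squared residuals.
    return statistics.mean([(y - b * x) ** 2 for x, y in zip(xs, ys)])

def median_sq(b):
    # LMS criterion: median of squared residuals.
    return statistics.median([(y - b * x) ** 2 for x, y in zip(xs, ys)])

# Crude grid search over candidate slopes 1.00 .. 4.00.
grid = [i / 100 for i in range(100, 401)]
ols_b = min(grid, key=mean_sq)
lms_b = min(grid, key=median_sq)
print("OLS slope:", ols_b, " LMS slope:", lms_b)
```

The LMS slope stays at the true value 2 because the median squared residual ignores the single large residual, whereas the OLS slope is pulled above 3 by it.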
A broad-group cross-section library based on ENDF/B-VII.0 for fast neutron dosimetry Applications
Energy Technology Data Exchange (ETDEWEB)
Alpan, F.A. [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States)
2011-07-01
A new ENDF/B-VII.0-based coupled 44-neutron, 20-gamma-ray-group cross-section library was developed to investigate the latest evaluated nuclear data file (ENDF), in comparison to ENDF/B-VI.3 used in BUGLE-96, as well as to generate an objective-specific library. The objectives selected for this work consisted of dosimetry calculations for in-vessel and ex-vessel reactor locations, iron atom displacement calculations for reactor internals and the pressure vessel, and the 58Ni(n,γ) calculation that is important for gas generation in the baffle plate. The new library was generated based on the contribution and point-wise cross-section-driven (CPXSD) methodology and was applied to one of the most widely used benchmarks, the Oak Ridge National Laboratory Pool Critical Assembly benchmark problem. In addition to the new library, BUGLE-96 and a newly generated ENDF/B-VII.0-based coupled 47-neutron, 20-gamma-ray-group cross-section library were used with both SNLRML and IRDF dosimetry cross sections to compute reaction rates. All reaction rates computed by the multigroup libraries are within ±20% of measurement data and meet the U.S. Nuclear Regulatory Commission acceptance criterion for reactor vessel neutron exposure evaluations specified in Regulatory Guide 1.190. (authors)
Covariance Evaluation Methodology for Neutron Cross Sections
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and a Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application with examples including a relatively detailed evaluation of covariances for two individual nuclei and the massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances, and the consistency between resonance parameter uncertainties and thermal cross section uncertainties, are also discussed.
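The Bayesian update at the heart of such Kalman-filter evaluations can be reduced to a scalar sketch: a model-based prior cross section is combined with one measurement, and the posterior uncertainty shrinks below both inputs. All numbers below are purely illustrative, not evaluated data.

```python
# Prior cross section (e.g. from a reaction model) and its variance, in barns.
prior_xs, prior_var = 1.50, 0.10 ** 2
# One measurement of the same quantity and its variance.
meas_xs, meas_var = 1.60, 0.05 ** 2

# Scalar Kalman update.
gain = prior_var / (prior_var + meas_var)
post_xs = prior_xs + gain * (meas_xs - prior_xs)
post_var = (1.0 - gain) * prior_var      # smaller than either input variance

print(post_xs, post_var ** 0.5)
```

In the full methodology the scalars become vectors of group cross sections and the variances become covariance matrices, so the update also generates the off-diagonal correlations that the library stores.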
Covariance matrices of experimental data
International Nuclear Information System (INIS)
Perey, F.G.
1978-01-01
A complete statement of the uncertainties in a set of data is given by its covariance matrix. It is shown how the covariance matrix of data can be generated using the information available to obtain their standard deviations. The determination of resonance energies by the time-of-flight method is used as an example. The procedure for combining data when the covariance matrix is non-diagonal is given. The method is illustrated with examples taken from the recent literature, to obtain an estimate of the energy of the first resonance in carbon and for five resonances of 238 U
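The combination procedure for correlated data amounts to a generalized least-squares average: weight the measurements by the inverse covariance matrix rather than by inverse variances alone. A minimal two-measurement sketch (illustrative numbers, not Perey's resonance data):

```python
# Two correlated measurements of the same quantity.
y = [10.0, 10.4]
# Covariance matrix: variances 0.04 and 0.09, covariance 0.02 (e.g. a shared
# systematic component).
V = [[0.04, 0.02], [0.02, 0.09]]

# Invert the 2x2 matrix by hand.
det = V[0][0] * V[1][1] - V[0][1] ** 2
Vinv = [[ V[1][1] / det, -V[0][1] / det],
        [-V[1][0] / det,  V[0][0] / det]]

# GLS combined value: xhat = (1^T V^-1 y) / (1^T V^-1 1), variance 1/(1^T V^-1 1).
w = [Vinv[0][0] + Vinv[1][0], Vinv[0][1] + Vinv[1][1]]
norm = w[0] + w[1]
xhat = (w[0] * y[0] + w[1] * y[1]) / norm
var = 1.0 / norm
print(xhat, var)
```

With a diagonal V this reduces to the familiar inverse-variance weighted mean; the off-diagonal term shifts the weights and changes the combined uncertainty, which is exactly why the non-diagonal case needs its own procedure.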
Directory of Open Access Journals (Sweden)
A. J. Dolman
2012-12-01
We determine the net land to atmosphere flux of carbon in Russia, including Ukraine, Belarus and Kazakhstan, using inventory-based, eddy covariance, and inversion methods. Our high boundary estimate is −342 Tg C yr−1 from the eddy covariance method, and this is close to the upper bounds of the inventory-based Land Ecosystem Assessment and inverse model estimates. A lower boundary estimate is provided at −1350 Tg C yr−1 from the inversion models. The average of the three methods is −613.5 Tg C yr−1. The methane emission is estimated separately at 41.4 Tg C yr−1.
These three methods agree well within their respective error bounds; there is thus good consistency between bottom-up and top-down methods. The forests of Russia primarily cause the net atmosphere-to-land flux (−692 Tg C yr−1 from the LEA). It remains remarkable, however, that the three methods provide such close estimates (−615, −662, −554 Tg C yr−1) for net biome production (NBP), given the inherent uncertainties in all of the approaches. The lack of recent forest inventories, the few eddy covariance sites with the associated upscaling uncertainty, and the undersampling of concentrations for the inversions are among the prime causes of the uncertainty. The dynamic global vegetation models (DGVMs) suggest a much lower uptake at −91 Tg C yr−1, and we argue that this is caused by a high estimate of heterotrophic respiration compared to other methods.
Kim, Yong-Mi; Abbas, June
2010-01-01
This study investigates the adoption of Library 2.0 functionalities by academic libraries and users through a knowledge management perspective. Based on randomly selected 230 academic library Web sites and 184 users, the authors found RSS and blogs are widely adopted by academic libraries while users widely utilized the bookmark function.…
Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.
Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen
2017-12-01
In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from existing tests that rely heavily on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance in detecting disease-associated gene sets. The proposed methods have been implemented in the R package HDtest and are available on CRAN. © 2017, The International Biometric Society.
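A maximum-type test with a bootstrap critical value can be sketched in a deliberately simplified form. This toy assumes independent coordinates so the null max-statistic can be bootstrapped from standard normals; the paper's actual procedure handles general, unknown covariance structures, which this sketch does not.

```python
import random
import statistics

random.seed(1)
n, p = 40, 5

# Synthetic data under the null H0: mu = 0.
X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
cols = list(zip(*X))
xbar = [statistics.mean(c) for c in cols]
s = [statistics.stdev(c) for c in cols]

# Maximum-type statistic: largest studentized coordinate-wise mean.
T = max(abs(n ** 0.5 * xbar[j] / s[j]) for j in range(p))

# Simplified parametric bootstrap of the null max-statistic (independent N(0,1)).
B = 2000
boot = [max(abs(random.gauss(0.0, 1.0)) for _ in range(p)) for _ in range(B)]
crit = sorted(boot)[int(0.95 * B)]
print("T =", round(T, 3), " critical value =", round(crit, 3), " reject:", T > crit)
```

The point of the bootstrap is that the 95% critical value of the maximum adapts to p automatically, instead of relying on a crude Bonferroni bound.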
International Nuclear Information System (INIS)
Mi Aijun; Li Junjie
2010-01-01
In this paper, multi-group libraries were constructed by processing ENDF/B-VII neutron incident files into a multi-group structure, and the application of the multi-group libraries in pressurized-water reactor (PWR) design was studied. The construction of the multi-group library is realized using the NJOY nuclear data processing system, which can process neutron cross section files from ENDF format to the MATXS format required by SN codes. The two-dimensional discrete ordinates transport code DORT was used to verify the multi-group libraries and the construction method by comparing calculations for some representative benchmarks. We performed PWR shielding calculations using the multi-group libraries and studied the influence of parameters involved in the construction of the libraries, such as group structure, temperatures and weight functions, on the shielding design of the PWR. This work is preparation for the construction of the multi-group library to be used in PWR shielding design in engineering. (authors)
Critical experiments analyses by using 70 energy group library based on ENDF/B-VI
Energy Technology Data Exchange (ETDEWEB)
Tahara, Yoshihisa; Matsumoto, Hideki [Mitsubishi Heavy Industries Ltd., Yokohama (Japan). Nuclear Energy Systems Engineering Center; Huria, H.C.; Ouisloumen, M.
1998-03-01
The newly developed 70-group library has been validated by comparing k-infinity values from the continuous energy Monte-Carlo code MCNP and the two-dimensional spectrum calculation code PHOENIX-CP, which employs the Discrete Angular Flux Method based on collision probabilities. The library has also been validated against a large number of critical experiments and numerical benchmarks for assemblies with MOX and Gd fuels. (author)
Covariant perturbations of Schwarzschild black holes
International Nuclear Information System (INIS)
Clarkson, Chris A; Barrett, Richard K
2003-01-01
We present a new covariant and gauge-invariant perturbation formalism for dealing with spacetimes having spherical symmetry (or some preferred spatial direction) in the background, and apply it to the case of gravitational wave propagation in a Schwarzschild black-hole spacetime. The 1 + 3 covariant approach is extended to a '1 + 1 + 2 covariant sheet' formalism by introducing a radial unit vector in addition to the timelike congruence, and decomposing all covariant quantities with respect to this. The background Schwarzschild solution is discussed and a covariant characterization is given. We give the full first-order system of linearized 1 + 1 + 2 covariant equations, and we show how, by introducing (time and spherical) harmonic functions, these may be reduced to a system of first-order ordinary differential equations and algebraic constraints for the 1 + 1 + 2 variables which may be solved straightforwardly. We show how both odd- and even-parity perturbations may be unified by the discovery of a covariant, frame- and gauge-invariant, transverse-traceless tensor describing gravitational waves, which satisfies a covariant wave equation equivalent to the Regge-Wheeler equation for both even- and odd-parity perturbations. We show how the Zerilli equation may be derived from this tensor, and derive a similar transverse-traceless tensor equation equivalent to this equation. The so-called special quasinormal modes with purely imaginary frequency emerge naturally. The significance of the degrees of freedom in the choice of the two frame vectors is discussed, and we demonstrate that, for a certain frame choice, the underlying dynamics is governed purely by the Regge-Wheeler tensor. The two transverse-traceless Weyl tensors which carry the curvature of gravitational waves are discussed, and we give the closed system of four first-order ordinary differential equations describing their propagation. Finally, we consider the extension of this work to the study of
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
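The attenuation caused by covariate measurement error, and the regression-calibration fix, can be sketched in a linear model (the abstract's setting is a Cox model, but the correction idea is the same: replace the error-prone covariate W = X + U by its calibrated value E[X | W] before fitting). All data and parameter choices below are assumptions for illustration.

```python
import random
import statistics

random.seed(2)
n, beta = 5000, 1.0

# True covariate X, error-prone surrogate W = X + U (error variance 1),
# and outcome Y depending on the TRUE covariate.
X = [random.gauss(0.0, 1.0) for _ in range(n)]
W = [x + random.gauss(0.0, 1.0) for x in X]
Y = [beta * x + random.gauss(0.0, 0.2) for x in X]

def slope(u, v):
    mu, mv = statistics.mean(u), statistics.mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sum((a - mu) ** 2 for a in u)
    return num / den

# Naive fit on W is attenuated toward zero by the reliability ratio.
naive = slope(W, Y)

# Regression calibration: E[X | W] = mean(W) + lam * (W - mean(W)), with
# lam = var(X) / (var(X) + var(U)) = 0.5 here (assumed known, e.g. from replicates).
lam = 1.0 / (1.0 + 1.0)
mW = statistics.mean(W)
W_cal = [mW + lam * (w - mW) for w in W]
corrected = slope(W_cal, Y)

print(round(naive, 3), round(corrected, 3))
```

The naive slope sits near beta times the reliability (about 0.5), while the calibrated fit recovers a slope near the true beta = 1; replicate measurements, as in the depression study, are what make lam estimable in practice.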
MICROX-2 cross section library based on ENDF/B-VII
International Nuclear Information System (INIS)
Hou, J.; Ivanov, K.; Choi, H.
2012-01-01
New cross section libraries for the neutron transport code MICROX-2 have been generated for advanced reactor design and fuel cycle analyses. A total of 386 nuclides were processed, including 10 thermal scattering nuclides, which are available in the ENDF/B-VII release 0 nuclear data. The NJOY system and the MICROR code were used to process the nuclear data and convert them into the MICROX-2 format. The energy group structure of the new library was optimized for both thermal and fast neutron spectrum reactors based on the Contributon and Point-wise Cross Section Driven (CPXSD) method, resulting in a total of 1173 energy groups. A series of lattice-cell-level benchmark calculations have been performed against both experimental measurements and Monte Carlo calculations for the effective/infinite multiplication factor and reaction rate ratios. The results of the MICROX-2 calculations with the new library were consistent with those of 15 reference cases. The average errors of the infinite multiplication factor and reaction rate ratio were 0.31% δk and 1.9%, respectively. The maximum error of the reaction rate ratio was 8%, for 238U-to-235U fission of the ZEBRA lattice against the reference calculation done with MCNP5. (authors)
International Nuclear Information System (INIS)
Hasegawa, Akira
1991-01-01
A common group cross-section library has been developed in JAERI. This system is called the 'JSSTDL-295n-104γ (neutron: 295, gamma: 104) group constants library system', which is composed of a common 295n-104γ group cross-section library based on the JENDL-3 nuclear data file and its utility codes, and is applicable to fast and fusion reactors. In this paper, the outline of group cross-section processing adopted in the PROF-GROUCH-G/B system, a common step for all group cross-section library generation, is first described in detail. Next, the available group cross-section libraries developed in Japan based on JENDL-3 are briefly reviewed. Lastly, the newly developed JSSTDL library system is presented, with special attention to the JENDL-3 data. (author)
Covariation in Natural Causal Induction.
Cheng, Patricia W.; Novick, Laura R.
1991-01-01
Biases and models usually offered by cognitive and social psychology and by philosophy to explain causal induction are evaluated with respect to focal sets (contextually determined sets of events over which covariation is computed). A probabilistic contrast model is proposed as underlying covariation computation in natural causal induction. (SLD)
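The covariation computation the probabilistic contrast model proposes, evaluated over a focal set, reduces to deltaP = P(effect | cause present) − P(effect | cause absent). A minimal sketch with illustrative (cause, effect) event pairs:

```python
# Each event is a (cause, effect) pair of 0/1 indicators; the focal set is the
# contextually determined set of events over which covariation is computed.
events = [(1, 1), (1, 1), (1, 0), (1, 1), (0, 0), (0, 1), (0, 0), (0, 0)]

def contrast(focal_set):
    # deltaP = P(e | c) - P(e | not c), computed over the focal set only.
    with_c = [e for c, e in focal_set if c == 1]
    without_c = [e for c, e in focal_set if c == 0]
    return sum(with_c) / len(with_c) - sum(without_c) / len(without_c)

print(contrast(events))   # 0.75 - 0.25 = 0.5
```

Restricting the computation to different focal sets (subsets of events) can change deltaP, which is how the model accounts for apparent biases in causal induction.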
QED on curved background and on manifolds with boundaries: Unitarity versus covariance
International Nuclear Information System (INIS)
Vassilevich, D.V.
1994-11-01
Some recent results show that the covariant path integral and the integral over physical degrees of freedom give contradictory results on curved backgrounds and on manifolds with boundaries. This looks like a conflict between unitarity and covariance. We argue that this effect is due to the use of a non-covariant measure on the space of physical degrees of freedom. Starting with the reduced phase space path integral and using a covariant measure throughout the computations, we recover the standard path integral in the Lorentz gauge and the Moss and Poletti BRST-invariant boundary conditions. We also demonstrate by direct calculations that in the approach based on the Gaussian path integral on the space of physical degrees of freedom some basic symmetries are broken. (author). 39 refs
Compilation of MCNP data library based on JENDL-3T and test through analysis of benchmark experiment
International Nuclear Information System (INIS)
Sakurai, K.; Sasamoto, N.; Kosako, K.; Ishikawa, T.; Sato, O.; Oyama, Y.; Narita, H.; Maekawa, H.; Ueki, K.
1989-01-01
Based on the evaluated nuclear data library JENDL-3T, a temporary version of JENDL-3, a pointwise neutron cross section library for the MCNP code has been compiled which covers 39 nuclides, from H-1 to Am-241, that are important for shielding calculations. The compilation is performed with a code system consisting of the nuclear data processing code NJOY-83 and the library compilation code MACROS. The validity of the code system and the reliability of the library are certified by analysing benchmark experiments. (author)
International Nuclear Information System (INIS)
Ford, W.E. III; Arwood, J.W.; Greene, N.M.; Petrie, L.M.; Primm, R.T. III; Waddell, M.W.; Webster, C.C.; Westfall, R.M.; Wright, R.Q.
1987-01-01
Multigroup P3 neutron, P0-P3 secondary gamma ray production (SGRP), and P6 gamma ray interaction (GRI) cross section libraries have been generated to support design work on the Advanced Neutron Source (ANS) reactor. The libraries, designated ANSL-V (Advanced Neutron Source Cross-Section Libraries), are data bases in a format suitable for subsequent generation of problem dependent cross sections. The ANSL-V libraries are available on magnetic tape from the Radiation Shielding Information Center at Oak Ridge National Laboratory
Schur Complement Inequalities for Covariance Matrices and Monogamy of Quantum Correlations.
Lami, Ludovico; Hirche, Christoph; Adesso, Gerardo; Winter, Andreas
2016-11-25
We derive fundamental constraints for the Schur complement of positive matrices, which provide an operator strengthening to recently established information inequalities for quantum covariance matrices, including strong subadditivity. This allows us to prove general results on the monogamy of entanglement and steering quantifiers in continuous variable systems with an arbitrary number of modes per party. A powerful hierarchical relation for correlation measures based on the log-determinant of covariance matrices is further established for all Gaussian states, which has no counterpart among quantities based on the conventional von Neumann entropy.
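The operator inequalities themselves are beyond a numerical sketch, but the basic object — the Schur complement of a positive block matrix — is easy to illustrate. For M = [[A, B], [Bᵀ, D]] positive definite, the complement M/A = D − Bᵀ A⁻¹ B is again positive definite (for a covariance matrix it is the conditional covariance of the second block given the first). The data below are synthetic:

```python
import numpy as np

# Hedged sketch: Schur complement of the leading block of a positive-definite
# covariance matrix, and a numerical check that positivity is preserved.

def schur_complement(M, k):
    """Schur complement of the leading k x k block of M."""
    A, B = M[:k, :k], M[:k, k:]
    D = M[k:, k:]
    return D - B.T @ np.linalg.solve(A, B)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
M = np.cov(X, rowvar=False)          # a positive-definite covariance matrix
S = schur_complement(M, 2)

# Both M and its Schur complement have strictly positive eigenvalues.
print(np.linalg.eigvalsh(M).min() > 0, np.linalg.eigvalsh(S).min() > 0)
```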
Zero curvature conditions and conformal covariance
International Nuclear Information System (INIS)
Akemann, G.; Grimm, R.
1992-05-01
Two-dimensional zero curvature conditions were investigated in detail, with special emphasis on conformal properties, and the appearance of covariant higher order differential operators constructed in terms of a projective connection was elucidated. The analysis is based on the Kostant decomposition of simple Lie algebras in terms of representations with respect to their 'principal' SL(2) subalgebra. (author) 27 refs
Energy Technology Data Exchange (ETDEWEB)
Chadwick, M.B.; Herman, M.; Oblozinsky, P.; Dunn, M.E.; Danon, Y.; Kahler, A.C.; Smith, D.L.; Pritychenko, B.; Arbanas, G.; Arcilla, R.; Brewer, R.; Brown, D.A.; Capote, R.; Carlson, A.D.; Cho, Y.S.; Derrien, H.; Guber, K.; Hale, G.M.; Hoblit, S.; Holloway, S.; Johnson, T.D.; Kawano, T.; Kiedrowski, B.C.; Kim, H.; Kunieda, S.; Larson, N.M.; Leal, L.; Lestone, J.P.; Little, R.C.; McCutchan, E.A.; MacFarlane, R.E.; MacInnes, M.; Mattoon, C.M.; McKnight, R.D.; Mughabghab, S.F.; Nobre, G.P.A.; Palmiotti, G.; Palumbo, A.; Pigni, M.T.; Pronyaev, V.G.; Sayer, R.O.; Sonzogni, A.A.; Summers, N.C.; Talou, P.; Thompson, I.J.; Trkov, A.; Vogt, R.L.; van der Marck, S.C.; Wallner, A.; White, M.C.; Wiarda, D.; Young, P.G.
2011-12-01
The ENDF/B-VII.1 library is our latest recommended evaluated nuclear data file for use in nuclear science and technology applications, and incorporates advances made in the five years since the release of ENDF/B-VII.0. These advances focus on neutron cross sections, covariances, fission product yields and decay data, and represent work by the US Cross Section Evaluation Working Group (CSEWG) in nuclear data evaluation that utilizes developments in nuclear theory, modeling, simulation, and experiment. The principal advances in the new library are: (1) An increase in the breadth of neutron reaction cross section coverage, extending from 393 nuclides to 423 nuclides; (2) Covariance uncertainty data for 190 of the most important nuclides, as documented in companion papers in this edition; (3) R-matrix analyses of neutron reactions on light nuclei, including isotopes of He, Li, and Be; (4) Resonance parameter analyses at lower energies and statistical high energy reactions for isotopes of Cl, K, Ti, V, Mn, Cr, Ni, Zr and W; (5) Modifications to thermal neutron reactions on fission products (isotopes of Mo, Tc, Rh, Ag, Cs, Nd, Sm, Eu) and neutron absorber materials (Cd, Gd); (6) Improved minor actinide evaluations for isotopes of U, Np, Pu, and Am (we are not making changes to the major actinides 235,238U and 239Pu at this point, except for delayed neutron data and covariances, and instead we intend to update them after a further period of research in experiment and theory), and our adoption of JENDL-4.0 evaluations for isotopes of Cm, Bk, Cf, Es, Fm, and some other minor actinides; (7) Fission energy release evaluations; (8) Fission product yield advances for fission-spectrum neutrons and 14 MeV neutrons incident on 239Pu; and (9) A new decay data sublibrary. Integral validation testing of the ENDF/B-VII.1 library is provided for a variety of quantities: For nuclear criticality, the VII.1 library maintains the generally good performance seen for VII.0
Time-Driven Activity-Based Costing for Inter-Library Services: A Case Study in a University
Pernot, Eli; Roodhooft, Filip; Van den Abbeele, Alexandra
2007-01-01
Although the true costs of inter-library loans (ILL) are unknown, universities increasingly rely on them to provide better library services at lower costs. Through a case study, we show how to perform a time-driven activity-based costing analysis of ILL and provide evidence of the benefits of such an analysis.
A Generic High-performance GPU-based Library for PDE solvers
DEFF Research Database (Denmark)
Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter
… the privilege of high-performance parallel computing is now in principle accessible for many scientific users, no matter their economic resources. Though being highly effective units, GPUs and parallel architectures in general pose challenges for software developers to utilize their efficiency. Sequential legacy codes are not always easily parallelized and the time spent on conversion might not pay off in the end. We present a highly generic C++ library for fast assembling of partial differential equation (PDE) solvers, aiming at utilizing the computational resources of GPUs. The library requires a minimum of GPU computing knowledge, while still offering the possibility to customize user-specific solvers at kernel level if desired. Spatial differential operators are based on matrix-free flexible-order finite difference approximations. These matrix-free operators minimize both memory consumption and main memory access…
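The matrix-free idea mentioned in the abstract can be illustrated in a few lines (a Python sketch of the technique, not the library's C++ API): the finite difference stencil is applied directly to the solution vector, so no sparse matrix is ever stored.

```python
import numpy as np

# Hedged sketch: a matrix-free second-order finite difference Laplacian.
# The stencil (u[i-1] - 2 u[i] + u[i+1]) / h^2 is applied on the fly,
# which is what keeps memory use and main-memory traffic low.

def apply_laplacian_1d(u, h):
    """Second-order central difference d2u/dx2 on interior points."""
    out = np.zeros_like(u)
    out[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    return out

x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
u = np.sin(np.pi * x)
lap = apply_laplacian_1d(u, h)

# For u = sin(pi x), d2u/dx2 = -pi^2 sin(pi x); check the interior error.
err = np.max(np.abs(lap[1:-1] + np.pi**2 * u[1:-1]))
print(err < 1e-2)
```

The same apply-a-stencil structure maps naturally onto GPU kernels, since each output point is independent of the others.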
Customer Satisfaction with Public Libraries.
D'Elia, George; Rodger, Eleanor Jo
1996-01-01
Surveys conducted in 142 urban public libraries examined customer satisfaction, comparisons with other libraries, and factors affecting satisfaction. Overall, customers were satisfied with their libraries but experienced different levels of satisfaction based on convenience, availability of materials and information, and services facilitating…
Structural Analysis of Covariance and Correlation Matrices.
Joreskog, Karl G.
1978-01-01
A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.
Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data
International Nuclear Information System (INIS)
Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M
2006-01-01
The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation or its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. We
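The proposed model — diagonal spatial covariance combined with Toeplitz temporal covariance — can be sketched as a Kronecker product. The sensor count, variances, and the AR(1)-like lag decay below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hedged sketch: spatiotemporal noise covariance as a Kronecker product of a
# diagonal spatial part (per-sensor variances, no spatial correlation) and a
# Toeplitz temporal part (stationary, lag-dependent correlation).

def toeplitz_from_first_row(r):
    """Build a symmetric Toeplitz matrix whose (i, j) entry is r[|i - j|]."""
    n = len(r)
    idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return r[idx]

n_sensors, n_times = 4, 6
sensor_var = np.array([1.0, 2.0, 0.5, 1.5])          # diagonal spatial part
temporal = toeplitz_from_first_row(0.8 ** np.arange(n_times))  # assumed decay

C = np.kron(np.diag(sensor_var), temporal)           # full 24 x 24 covariance

# The Kronecker structure is what keeps the model estimable from limited
# noise data: n_sensors variances + n_times lags instead of 24*25/2 entries.
print(C.shape, np.allclose(C, C.T), np.linalg.eigvalsh(C).min() > 0)
```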
Library Design-Facilitated High-Throughput Sequencing of Synthetic Peptide Libraries.
Vinogradov, Alexander A; Gates, Zachary P; Zhang, Chi; Quartararo, Anthony J; Halloran, Kathryn H; Pentelute, Bradley L
2017-11-13
A methodology to achieve high-throughput de novo sequencing of synthetic peptide mixtures is reported. The approach leverages shotgun nanoliquid chromatography coupled with tandem mass spectrometry-based de novo sequencing of library mixtures (up to 2000 peptides) as well as automated data analysis protocols to filter away incorrect assignments, noise, and synthetic side-products. For increasing the confidence in the sequencing results, mass spectrometry-friendly library designs were developed that enabled unambiguous decoding of up to 600 peptide sequences per hour while maintaining greater than 85% sequence identification rates in most cases. The reliability of the reported decoding strategy was additionally confirmed by matching fragmentation spectra for select authentic peptides identified from library sequencing samples. The methods reported here are directly applicable to screening techniques that yield mixtures of active compounds, including particle sorting of one-bead one-compound libraries and affinity enrichment of synthetic library mixtures performed in solution.
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods of randomizing clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
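The mechanism behind the unadjusted-model bias can be shown with a compact simulation. This is an illustration, not the paper's linear-mixed-model design: OLS point estimates are enough to show that an imbalanced cluster-level covariate is absorbed into the unadjusted treatment estimate, while including the covariate recovers the true effect. All sizes and effect values are assumptions.

```python
import numpy as np

# Hedged illustration: 8 clusters, 4 per arm, with a binary cluster-level
# covariate that is imbalanced across arms (1/4 vs 3/4).  Expected bias of
# the unadjusted estimate is beta_cov * (3/4 - 1/4) = 0.5.

rng = np.random.default_rng(1)
n_clusters, cluster_size = 8, 20
beta_treat, beta_cov = 0.5, 1.0

treat = np.repeat([0, 1], n_clusters // 2)           # 4 clusters per arm
covar = np.array([0, 0, 0, 1, 1, 1, 1, 0])           # imbalanced covariate

est_unadj, est_adj = [], []
for _ in range(500):
    u = rng.normal(0, 0.3, n_clusters)               # cluster random effects
    y = np.repeat(beta_treat * treat + beta_cov * covar + u, cluster_size)
    y += rng.normal(0, 1.0, n_clusters * cluster_size)
    t = np.repeat(treat, cluster_size)
    x = np.repeat(covar, cluster_size)
    X1 = np.column_stack([np.ones_like(t), t])       # unadjusted model
    X2 = np.column_stack([np.ones_like(t), t, x])    # covariate-adjusted
    est_unadj.append(np.linalg.lstsq(X1, y, rcond=None)[0][1])
    est_adj.append(np.linalg.lstsq(X2, y, rcond=None)[0][1])

# Unadjusted mean estimate drifts toward beta_treat + 0.5; adjusted recovers 0.5.
print(round(np.mean(est_unadj), 2), round(np.mean(est_adj), 2))
```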
The Research and Education of Evidence Based Library and Information Practice; A Narrative Review
Directory of Open Access Journals (Sweden)
Vahideh Zareh Gavgani
2018-01-01
Background and Objectives: Evidence based librarianship (EBL) was defined as the “use of best available evidence from qualitative and quantitative research results and rational experience and decisions acquired from the daily practice of library”. However, there are controversies about whether the nature of EBL deals with library services or professional practice, and whether it needs a formal education or informal continuing education is enough. To shed light on this ambiguity, the aim of this study was to find out the state of the art of education in EBL in the world. Material and Methods: The study utilized library and documentation methods to investigate the academic education of EBL through a review of the available literature and websites. Results: The findings of the study revealed that evidence based librarianship does have a formal curriculum for academic education at postgraduate levels (post-master and master). It also revealed that “Evidence Based Approach” (EBA) and “Evidence Based Medicine” (EBM) were similar courses offered at Master and PhD levels. Conclusion: Based on the history and evolution of EBA, it is time to develop a formal curriculum and field of study for Evidence Based Information Practice. This study suggests the establishment of an academic field of Evidence Based Library and Information Science to overcome the problems and limitations that library science faces in practice.
Organizational prerequisites for the preservation of library collections in monastery libraries
Directory of Open Access Journals (Sweden)
Maja Krtalić
2012-02-01
libraries. The conceptual approach to heritage preservation in monastery libraries used in this paper is based on the theoretical model of written heritage preservation management that comprises five aspects bringing together the theoretical, strategic, legal, financial, educational, operative, and cultural components. Monastery libraries are facing many problems in protection and preservation of collections. Most of their problems fall under the economic, legal, material and operative aspects of preservation. In addition to the most common limitations, such as financial constraints, inadequate premises for keeping and storing the collections, they often have staff shortages, especially of trained staff with particular knowledge and skills required to protect the collections of such libraries, which brings the educational aspect of preservation into the foreground. Even though the protection and preservation of library collections in monastery libraries is mostly the responsibility of monastery monks, their knowledge is mostly not expert enough and they cannot be the only ones made responsible for preservation. This issue needs to be considered from the strategic and theoretical aspects of written heritage preservation management. One of the major hurdles is the inadequacy of legislation and its lack of recognition of those collections as national heritage. One of the factors that needs to be taken into consideration are changes that occur in the cultural and social contexts, i.e. changes in relations between church orders and communities in which they are active. Based on the data collected and relying on the conceptual model of written heritage preservation management in the Republic of Croatia, the current state of monastery library collections preservation has been established and measures for a systematic and more efficient management of written heritage preservation in monastery libraries have been proposed.
ROSFOND based heating-damage cross sections sub-library: Preliminary uncertainty assessment
International Nuclear Information System (INIS)
Sinitsa, V.V.
2016-01-01
The accuracy of radiation damage calculations for the most important LWR component, the reactor pressure vessel (RPV), is directly linked with the RPV End-of-Life (EoL) prediction, which is in its turn connected with fundamental nuclear safety aspects and relevant economic impacts. In this connection, for nearly ten years the ENEA-Bologna Nuclear Data Group has conducted nuclear data processing and validation activities aimed at updating the specialized broad-group coupled neutron/photon working cross section libraries for shielding and radiation damage calculations through the NJOY and the Bologna revised version of the SCAMPI data processing systems. A number of working group-wise data libraries have been prepared and transferred to the ENEA Data Bank for dissemination. Several years ago the NRC “Kurchatov Institute” restarted the GRUCON project, originally designed to provide group constants for fast nuclear reactor calculations [12], with the aim of expanding its application area and of using it in WWER safety tasks, in particular in RPV radiation damage analyses. By means of the updated GRUCON and NJOY-99 processing codes and the calculation procedure developed in the NDG of ENEA Bologna, a sample of kerma and damage-energy point-wise data sub-libraries from different evaluated data libraries has been generated. On the basis of this sample, a quantitative assessment of the precision of kerma/dpa data in RPV calculations is obtained
ZZ ORIGEN2.2-UPJ, A complete package of ORIGEN2 libraries based on JENDL-3.2 and JENDL-3.3
International Nuclear Information System (INIS)
Ishikawa, Makoto; Kataoka, Masaharu; Ohkawachi, Yasushi; Ohki, Shigeo; JIN, Tomoyuki; Katakura, Jun-ich; Suyama, Kenya; Yanagisawa, Hiroshi; Matsumoto, Hideki; ONOUE, Akira; Sasahara, Akihiro
2006-01-01
1 - Description: ORLIBJ32 is a package of libraries for the ORIGEN2 code based on JENDL-3.2 (NEA-1642). The one-group cross section data for PWR and BWR were compiled using burnup calculation results from the SWAT code. The FBR libraries were compiled with the analysis system used at JNC for FBR core calculations. The fission yield and decay constant data were also updated using the second version of the JNDC FP library. In ORLIBJ32, not only one-group cross section data but also variable actinide cross section data are prepared, using a code written in FORTRAN77; the routines should be linked to the original ORIGEN2.1 program. The LWR libraries are prepared based on the current PWR fuel assembly specification, and the FBR libraries are based on requests by Japanese FBR researchers. Before compiling the libraries, the fuel assembly specification was completely reviewed and evaluated by the members of a Working Group of the Japanese Nuclear Data Committee, the 'working group on the evaluation of the amount of isotope generation'. ORLIBJ33 is a new library based on JENDL-3.3, following the release of JENDL-3.3. The parameters used to prepare the library are the same as those of ORLIBJ32. The original version of ORLIBJ33 is coupled with ORIGEN2.1, but after the release of ORIGEN2.2 from ORNL as CCC-0371 through RSICC, several requests for a combination of ORLIBJ33 with ORIGEN2.2 were received. During the development of ORLIBJ33, released as NEA-1642, the authors found a problem in the library maker for the FBR libraries; consequently it was revised and tested at JNC-Oarai. This package, 'ORIGEN2.2-UPJ', contains: - the updated source code of ORIGEN2.2 of CCC-0371 to use ORLIBJ32 and ORLIBJ33, - all original libraries in CCC-0371, - ORLIBJ32 of NEA-1642/03 (but the libraries for FBR are revised), - and ORLIBJ33. In this package, decay data based on the second version of the JNDC FP library, and photon and decay data libraries based on JENDL-3.3, are also included. NLB and NLIB
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
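For orientation, a classical overdispersion score statistic (the Dean-Lawless form, intercept-only model) is easy to compute; this sketch deliberately omits the measurement-error correction that the paper's proposed tests add. Under the Poisson null the statistic is approximately standard normal; large positive values indicate variance exceeding the mean.

```python
import numpy as np

# Hedged sketch: Dean-Lawless style score statistic for overdispersion
# against a Poisson null, with an intercept-only fit (mu_i = ybar).
# NOT the paper's measurement-error-corrected tests.

def overdispersion_score(y):
    mu = np.full_like(y, y.mean(), dtype=float)      # fitted Poisson means
    return np.sum((y - mu) ** 2 - y) / np.sqrt(2.0 * np.sum(mu ** 2))

rng = np.random.default_rng(2)
poisson = rng.poisson(4.0, size=2000)                         # var = mean
negbin = rng.negative_binomial(n=2, p=2 / 6, size=2000)       # mean 4, var 12

print(abs(overdispersion_score(poisson)) < 4)   # near zero under the null
print(overdispersion_score(negbin) > 3)         # clearly overdispersed
```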
Modifications of Sp(2) covariant superfield quantization
Energy Technology Data Exchange (ETDEWEB)
Gitman, D.M.; Moshin, P.Yu
2003-12-04
We propose a modification of the Sp(2) covariant superfield quantization to realize a superalgebra of generating operators isomorphic to the massless limit of the corresponding superalgebra of the osp(1,2) covariant formalism. The modified scheme ensures the compatibility of the superalgebra of generating operators with extended BRST symmetry without imposing restrictions eliminating superfield components from the quantum action. The formalism coincides with the Sp(2) covariant superfield scheme and with the massless limit of the osp(1,2) covariant quantization in particular cases of gauge-fixing and solutions of the quantum master equations.
CORALINA: a universal method for the generation of gRNA libraries for CRISPR-based screening.
Köferle, Anna; Worf, Karolina; Breunig, Christopher; Baumann, Valentin; Herrero, Javier; Wiesbeck, Maximilian; Hutter, Lukas H; Götz, Magdalena; Fuchs, Christiane; Beck, Stephan; Stricker, Stefan H
2016-11-14
The bacterial CRISPR system is fast becoming the most popular genetic and epigenetic engineering tool due to its universal applicability and adaptability. The desire to deploy CRISPR-based methods in a large variety of species and contexts has created an urgent need for the development of easy, time- and cost-effective methods enabling large-scale screening approaches. Here we describe CORALINA (comprehensive gRNA library generation through controlled nuclease activity), a method for the generation of comprehensive gRNA libraries for CRISPR-based screens. CORALINA gRNA libraries can be derived from any source of DNA without the need of complex oligonucleotide synthesis. We show the utility of CORALINA for human and mouse genomic DNA, its reproducibility in covering the most relevant genomic features including regulatory, coding and non-coding sequences and confirm the functionality of CORALINA generated gRNAs. The simplicity and cost-effectiveness make CORALINA suitable for any experimental system. The unprecedented sequence complexities obtainable with CORALINA libraries are a necessary pre-requisite for less biased large scale genomic and epigenomic screens.
Globally covering a-priori regional gravity covariance models
Directory of Open Access Journals (Sweden)
D. Arabelos
2003-01-01
Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on 4 parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances and a maximal summation index, N, of the error degree-variances. The depth of the Bjerhammar sphere varies from -134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0 and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A and general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications. Key words: GOCE mission, covariance function, spacewise approach
Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions
DEFF Research Database (Denmark)
Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier
We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c…
Covariant quantizations in plane and curved spaces
International Nuclear Information System (INIS)
Assirati, J.L.M.; Gitman, D.M.
2017-01-01
We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. In the beginning, we construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (1,0), which describes an ambiguity of the quantization. We generalize this construction, presenting covariant quantizations of theories with flat configuration spaces but with arbitrary curvilinear coordinates. Then we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces. This family of quantizations is parametrized by the same function ω(θ). Finally, we describe a wider family of covariant quantizations in curved spaces, parametrized by two functions: the previous ω(θ) and an additional function Θ(x,ξ). The above-mentioned minimal family is the Θ = 1 part of the wider family of quantizations. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing the quantum Hamiltonian in polar coordinates, we directly obtain a correct result. (orig.)
Covariant quantizations in plane and curved spaces
Energy Technology Data Exchange (ETDEWEB)
Assirati, J.L.M. [University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil); Gitman, D.M. [Tomsk State University, Department of Physics, Tomsk (Russian Federation); P.N. Lebedev Physical Institute, Moscow (Russian Federation); University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil)
2017-07-15
We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. In the beginning, we construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (1,0), which describes an ambiguity of the quantization. We generalize this construction, presenting covariant quantizations of theories with flat configuration spaces but with arbitrary curvilinear coordinates. Then we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces. This family of quantizations is parametrized by the same function ω(θ). Finally, we describe a wider family of covariant quantizations in curved spaces, parametrized by two functions: the previous ω(θ) and an additional function Θ(x,ξ). The above-mentioned minimal family is the Θ = 1 part of the wider family of quantizations. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing the quantum Hamiltonian in polar coordinates, we directly obtain a correct result. (orig.)
Construction of covariance matrix for experimental data
International Nuclear Information System (INIS)
Liu Tingjin; Zhang Jianhua
1992-01-01
For evaluators and experimenters, the information is complete only when the covariance matrix is given. The covariance matrix of indirectly measured data has been constructed and discussed. As an example, the covariance matrix of the 23 Na(n, 2n) cross section is constructed. A reasonable result is obtained
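The generic construction the abstract refers to assembles a covariance matrix from per-point standard deviations and a correlation matrix, C_ij = r_ij σ_i σ_j. A minimal sketch, in which the cross-section values, uncertainty components, and correlation pattern are illustrative assumptions, not the evaluated 23 Na(n, 2n) data:

```python
import numpy as np

# Hedged sketch: covariance matrix built from uncorrelated statistical
# uncertainties plus a fully correlated normalization component, a common
# decomposition for measured cross sections.  All numbers are illustrative.

def covariance_from_correlation(sigma, corr):
    """C_ij = corr_ij * sigma_i * sigma_j."""
    sigma = np.asarray(sigma, dtype=float)
    return corr * np.outer(sigma, sigma)

xs = np.array([10.0, 40.0, 80.0, 60.0])      # cross sections at 4 energies
stat = 0.05 * xs                             # 5% uncorrelated (statistical)
norm = 0.03 * xs                             # 3% fully correlated (normalization)

C = covariance_from_correlation(stat, np.eye(4)) + np.outer(norm, norm)

# Recover standard deviations and the correlation matrix from C.
sd = np.sqrt(np.diag(C))
corr = C / np.outer(sd, sd)
print(np.allclose(C, C.T), np.allclose(np.diag(corr), 1.0))
```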
Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde
2017-01-01
Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
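The redistribution idea can be sketched numerically; this is only an illustration of the principle, not the paper's closed-form covariance machinery or its Mantel-Haenszel adjustments, and the counts are invented:

```python
# Hedged sketch: MNAR sensitivity analysis for a dichotomous endpoint.
# Missing outcomes in each arm are reassigned to the favorable/unfavorable
# categories in varying proportions, and the treatment comparison is
# recomputed across that range of assumptions.

def adjusted_proportion(favorable, unfavorable, missing, frac_favorable):
    """Favorable rate after sending frac_favorable of the missing to 'favorable'."""
    fav = favorable + frac_favorable * missing
    total = favorable + unfavorable + missing
    return fav / total

treat = dict(favorable=60, unfavorable=25, missing=15)      # illustrative
control = dict(favorable=50, unfavorable=35, missing=15)    # illustrative

# Sweep pessimistic -> optimistic assumptions about the treated arm's missing
# data while treating the control arm's missing outcomes as unfavorable.
for f in (0.0, 0.5, 1.0):
    diff = (adjusted_proportion(treat["favorable"], treat["unfavorable"],
                                treat["missing"], f)
            - adjusted_proportion(control["favorable"], control["unfavorable"],
                                  control["missing"], 0.0))
    print(f, round(diff, 3))
```

If the treatment comparison stays positive across the whole sweep, the conclusion is robust to the missing-data assumption; the paper additionally supplies the covariance estimates needed for inference at each point of the sweep.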
Smooth individual level covariates adjustment in disease mapping.
Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise
2018-05-01
Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to capture such nonlinearity in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects, where individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
phpMs: A PHP-Based Mass Spectrometry Utilities Library.
Collins, Andrew; Jones, Andrew R
2018-03-02
The recent establishment of cloud computing, high-throughput networking, and more versatile web standards and browsers has led to a renewed interest in web-based applications. While traditionally big data has been the domain of optimized desktop and server applications, it is now possible to store vast amounts of data and perform the necessary calculations offsite in cloud storage and computing providers, with the results visualized in a high-quality cross-platform interface via a web browser. There are a number of emerging platforms for cloud-based mass spectrometry data analysis; however, there is limited pre-existing code accessible to web developers, especially those constrained to a shared hosting environment where Java and C applications are often forbidden from use by the hosting provider. To remedy this, we provide an open-source mass spectrometry library for one of the most commonly used web development languages, PHP. Our new library, phpMs, provides objects for storing and manipulating spectra and identification data, utilities for file reading, file writing, calculations, peptide fragmentation, and protein digestion, as well as a software interface for controlling search engines. We provide a working demonstration of some of the capabilities at http://pgb.liv.ac.uk/phpMs.
Buchy, Lisa; Barbato, Mariapaola; Makowski, Carolina; Bray, Signe; MacMaster, Frank P; Deighton, Stephanie; Addington, Jean
2017-11-01
People with psychosis show deficits recognizing facial emotions and disrupted activation in the underlying neural circuitry. We evaluated associations between facial emotion recognition and cortical thickness using a correlation-based approach to map structural covariance networks across the brain. Fifteen people with an early psychosis provided magnetic resonance scans and completed the Penn Emotion Recognition and Differentiation tasks. Fifteen historical controls provided magnetic resonance scans. Cortical thickness was computed using CIVET and analyzed with linear models. Seed-based structural covariance analysis was done using the mapping anatomical correlations across the cerebral cortex methodology. To map structural covariance networks involved in facial emotion recognition, the right somatosensory cortex and bilateral fusiform face areas were selected as seeds. Findings showed greater cortical covariance between the right fusiform face region seed and right orbitofrontal cortex in controls than in early psychosis subjects. Facial emotion recognition scores were not significantly associated with thickness in any region. A negative effect of Penn Differentiation scores on cortical covariance was seen between the left fusiform face area seed and right superior parietal lobule in early psychosis subjects. Results suggest that facial emotion recognition ability is related to covariance in a temporal-parietal network in early psychosis. Copyright © 2017 Elsevier B.V. All rights reserved.
Precomputing Process Noise Covariance for Onboard Sequential Filters
Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell
2017-01-01
Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
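The role of a precomputed, time-varying process noise profile can be sketched as follows. This is a generic linear-filter sketch under assumed constant-velocity dynamics, not the paper's formulation; all matrices and values are placeholders:

```python
import numpy as np

# One covariance propagation step of a sequential filter:
# P' = F P F^T + Q_k, where Q_k comes from a precomputed profile
# along a reference trajectory rather than a single hand-tuned Q.
def propagate(P, F, Q_k):
    """Propagate state covariance and inflate it by process noise Q_k."""
    return F @ P @ F.T + Q_k

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])              # constant-velocity dynamics
P = np.diag([1.0, 0.1])                 # initial state covariance

# Precomputed process noise profile (placeholder values): Q_k varies
# along the trajectory as the modeled acceleration errors change.
Q_profile = [np.diag([0.01 * (k + 1), 0.001]) for k in range(3)]
for Q_k in Q_profile:
    P = propagate(P, F, Q_k)            # uncertainty grows by local Q_k
```

The point of the profile is that where acceleration errors are large, the local Q_k inflates the covariance more, keeping the filter's stated uncertainty consistent with the actual errors without manual retuning.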
FilTer BaSe: A web accessible chemical database for small compound libraries.
Kolte, Baban S; Londhe, Sanjay R; Solanki, Bhushan R; Gacche, Rajesh N; Meshram, Rohan J
2018-03-01
Finding novel chemical agents for targeting disease-associated drug targets often requires screening of large numbers of new chemical libraries. In silico methods are generally implemented at initial stages for virtual screening. Filtering of such compound libraries on physicochemical and substructure grounds is done to ensure elimination of compounds with undesired chemical properties. The filtering procedure is redundant, time consuming and requires efficient bioinformatics/computer manpower along with high-end software involving huge capital investment, which forms a major obstacle in drug discovery projects in an academic setup. We present an open source resource, FilTer BaSe, a chemoinformatics platform (http://bioinfo.net.in/filterbase/) that hosts fully filtered, ready to use compound libraries of workable size. The resource also hosts a database that enables efficient searching of the chemical space of around 348,000 compounds on the basis of physicochemical and substructure properties. The ready to use compound libraries and database presented here are expected to lend a helping hand to new drug developers and medicinal chemists. Copyright © 2017 Elsevier Inc. All rights reserved.
Multi-Group Covariance Data Generation from Continuous-Energy Monte Carlo Transport Calculations
International Nuclear Information System (INIS)
Lee, Dong Hyuk; Shim, Hyung Jin
2015-01-01
The sensitivity and uncertainty (S/U) methodology in deterministic tools has been utilized for quantifying uncertainties of nuclear design parameters induced by those of nuclear data. S/U analyses based on multi-group cross sections can be conducted with a simple error propagation formula using the sensitivities of nuclear design parameters to multi-group cross sections and the covariances of the multi-group cross sections. The multi-group covariance data required for S/U analysis have been produced by nuclear data processing codes such as ERRORJ or PUFF from the covariance data in evaluated nuclear data files. However, the existing nuclear data processing codes apply an asymptotic neutron flux energy spectrum, not the exact one, to the multi-group covariance generation, since the flux spectrum is unknown before the neutron transport calculation. This can cause an inconsistency between the sensitivity profiles and the multi-group cross section covariance data, especially in the resolved resonance energy region, because the sensitivities usually used are resonance self-shielded while the multi-group cross sections produced from an asymptotic flux spectrum are infinitely diluted. In order to calculate the multi-group covariance estimate in the ongoing MC simulation, mathematical derivations for converting the double integration equation into a single one by utilizing a sampling method are introduced, along with the procedure of the multi-group covariance tally.
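The simple error propagation ("sandwich") formula the abstract refers to can be sketched as follows; the sensitivity vector and covariance matrix here are illustrative numbers, not data from any real evaluation:

```python
import numpy as np

# Sandwich rule for S/U analysis: var(R) = S^T C S, where S holds the
# sensitivities of a design parameter R to the multi-group cross
# sections and C is the multi-group cross section covariance matrix.
def response_variance(sensitivities, covariance):
    S = np.asarray(sensitivities, dtype=float)
    C = np.asarray(covariance, dtype=float)
    return float(S @ C @ S)

# Three-group toy example with 5% uncorrelated relative uncertainties.
S = np.array([0.2, 0.5, 0.3])           # relative sensitivities
C = np.diag([0.05**2] * 3)              # relative covariance matrix
rel_var = response_variance(S, C)
rel_unc = rel_var ** 0.5                # relative uncertainty of R
```

The inconsistency the abstract describes enters through C: if C is built with an asymptotic (infinitely dilute) flux while S is resonance self-shielded, the product S^T C S mixes incompatible quantities in the resolved resonance region.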
Challenges and Opportunities for Libraries in Pakistan
Shafiq UR, Rehman; Pervaiz, Ahmad
2007-01-01
Abstract: This paper, based on a review of the literature, observation, and informal conversations, discusses various challenges regarding finance, collection development, ICTs, human resources, library education, library associations and research & development faced by the library profession in Pakistan. The opportunities to meet these challenges are also explored. Keywords: Library challenges and opportunities (Pakistan); Librarianship (Pakistan); Library issues; Library profession in Pa...
Energy Technology Data Exchange (ETDEWEB)
Bourget, Antoine; Troost, Jan [Laboratoire de Physique Théorique, École Normale Supérieure, 24 rue Lhomond, 75005 Paris (France)
2016-03-23
We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N=(4,4) supersymmetry in two dimensions. For seed target spaces K3 and T^4, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring, respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.
GLq(N)-covariant quantum algebras and covariant differential calculus
International Nuclear Information System (INIS)
Isaev, A.P.; Pyatov, P.N.
1992-01-01
GLq(N)-covariant quantum algebras with generators satisfying quadratic polynomial relations are considered. It is shown that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. 25 refs
Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data
Hallac, David; Vare, Sagar; Boyd, Stephen; Leskovec, Jure
2018-01-01
Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (e.g., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios. PMID:29770257
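A highly simplified sketch of the core idea follows, assuming Gaussian clusters summarized by inverse covariances over stacked subsequences. This is not the authors' implementation: the real TICC additionally imposes block-Toeplitz structure, graphical-lasso sparsity, and a temporal-consistency penalty solved by dynamic programming and ADMM.

```python
import numpy as np

# Stack w consecutive observations into one vector per time step, so
# that an inverse covariance over the stacked vector captures
# cross-time dependencies within a subsequence (the MRF view).
def stack_subsequences(X, w):
    T, n = X.shape
    return np.array([X[t:t + w].ravel() for t in range(T - w + 1)])

# Gaussian log-likelihood (up to a constant) given an inverse covariance;
# in TICC, points would be assigned to the cluster maximizing this.
def log_likelihood(x, mean, inv_cov):
    d = x - mean
    sign, logdet = np.linalg.slogdet(inv_cov)
    return 0.5 * logdet - 0.5 * d @ inv_cov @ d

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))           # toy 2-sensor time series
S = stack_subsequences(X, w=3)          # shape (98, 6)
mean = S.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(S.T) + 1e-6 * np.eye(S.shape[1]))
scores = [log_likelihood(s, mean, inv_cov) for s in S]
```

With several clusters, one would alternate between assigning subsequences by these scores (plus a switching penalty) and re-estimating each cluster's inverse covariance, which is the EM-style alternating minimization the abstract describes.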
Manzari, Laura
2013-01-01
This prestige study surveyed full-time faculty of American Library Association (ALA)-accredited programs in library and information studies regarding library and information science (LIS) journals. Faculty were asked to rate a list of eighty-nine LIS journals on a scale from 1 to 5 based on each journal's importance to their research and teaching.…
About the Library - Betty Petersen Memorial Library
A branch library of the NOAA Central Library, the library serves the NOAA Science Center in Camp Springs, Maryland. History and Mission: Betty Petersen Memorial Library began as a reading room in the NOAA Science Center. A committee of Science Center staff advises the library on all aspects of the library program. Library Newsletters
Directory of Open Access Journals (Sweden)
Jacek Wojciechowski
2012-01-01
Full Text Available The efficiency of libraries, academic libraries in particular, necessitates organizational changes facilitating or even imposing co-operation. Any university structure has to have an integrated network of libraries, with an appropriate division of work, consolidated as much as possible into medium-size or large libraries. Within the network thus created, a chance arises to centralize the main library processes based on appropriate procedures in the main library, highly specialized, more effective and therefore cheaper in operation, including the co-ordination of all more important endeavours and tasks. Hierarchically subordinated libraries can thus be more focused on performing their routine services, more and more frequently provided for the whole of the university, and be able to adjust to the changing requirements and demands of patrons and to new tasks resulting from the new model of university operation. Another necessary change seems to be the universal implementation of an overall programme framework that would include all services in the university's library networks.
Hierarchical multivariate covariance analysis of metabolic connectivity.
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-12-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (fMRI).
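The decomposition the abstract exploits can be illustrated directly. This is a generic sketch with simulated data, not the ADNI analysis; all variable names and values are illustrative:

```python
import numpy as np

# A correlation coefficient mixes covariance and variance:
# r = cov(x, y) / (sd(x) * sd(y)). Two groups can therefore differ
# in r purely through the variance terms, which motivates testing
# the individual terms rather than r alone.
def corr_terms(x, y):
    cov = np.cov(x, y)[0, 1]
    return cov, np.std(x, ddof=1), np.std(y, ddof=1)

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = x + rng.normal(scale=0.5, size=500)
cov, sx, sy = corr_terms(x, y)
r = cov / (sx * sy)
# Inflating the noise variance in y lowers r even if cov is unchanged
# in expectation, hence the value of interrogating the terms separately.
```

The reconstructed r agrees with the standard correlation coefficient, so inspecting cov, sx and sy separately loses nothing while revealing which term drives a group difference.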
Yee, Yohan; Fernandes, Darren J; French, Leon; Ellegood, Jacob; Cahill, Lindsay S; Vousden, Dulcie A; Spencer Noakes, Leigh; Scholz, Jan; van Eede, Matthijs C; Nieman, Brian J; Sled, John G; Lerch, Jason P
2018-05-18
An organizational pattern seen in the brain, termed structural covariance, is the statistical association of pairs of brain regions in their anatomical properties. These associations, measured across a population as covariances or correlations usually in cortical thickness or volume, are thought to reflect genetic and environmental underpinnings. Here, we examine the biological basis of structural volume covariance in the mouse brain. We first examined large scale associations between brain region volumes using an atlas-based approach that parcellated the entire mouse brain into 318 regions over which correlations in volume were assessed, for volumes obtained from 153 mouse brain images via high-resolution MRI. We then used a seed-based approach and determined, for 108 different seed regions across the brain and using mouse gene expression and connectivity data from the Allen Institute for Brain Science, the variation in structural covariance data that could be explained by distance to seed, transcriptomic similarity to seed, and connectivity to seed. We found that overall, correlations in structure volumes hierarchically clustered into distinct anatomical systems, similar to findings from other studies and similar to other types of networks in the brain, including structural connectivity and transcriptomic similarity networks. Across seeds, this structural covariance was significantly explained by distance (17% of the variation, up to a maximum of 49% for structural covariance to the visceral area of the cortex), transcriptomic similarity (13% of the variation, up to maximum of 28% for structural covariance to the primary visual area) and connectivity (15% of the variation, up to a maximum of 36% for structural covariance to the intermediate reticular nucleus in the medulla) of covarying structures. Together, distance, connectivity, and transcriptomic similarity explained 37% of structural covariance, up to a maximum of 63% for structural covariance to the
Le Meur, Jean-Yves; Vigen, Jens; CERN. Geneva
1997-01-01
The new interface to the CERN library data base by Jean-Yves Le Meur / CERN-AS I will give a short (hands on) demonstration of the new interface to the whole library database. Emphasis will be given to the new feature allowing any user to build his personal virtual library. A user oriented interface to physics preprint servers by Carlos Lourenço / CERN-PPE I will give a (hands-on) presentation of a first version of a tailor made WEB based interface to the preprints kept in the CERN or Los Alamos servers. This interface has already been successfully implemented for the field of high energy heavy ion physics, and can easily be expanded or adapted to other research fields of interest to the HEP community. For the already published papers, a direct link to the published e-journal version is provided (if available). Status of the digital library at CERN by Jens Vigen / CERN-AS I will review the present situation concerning the availability of the electronic versions of scientific publications at CERN and the el...
Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.
Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya
2018-05-05
This paper proposes a novel filtering design, from a viewpoint of identification instead of the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Second, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as a UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimation and its covariance. Third, depending on whether enough information is mined, SMCCF should outperform existing NESs or standard identification algorithms (which view the UI as a constant independent of the state and only utilize the identified UI mean to correct the state estimation, regardless of its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.
Energy Technology Data Exchange (ETDEWEB)
Chadwick, M. B. [Los Alamos National Laboratory (LANL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Oblozinsky, Pavel [Brookhaven National Laboratory (BNL); Dunn, Michael E [ORNL; Danon, Y. [Rensselaer Polytechnic Institute (RPI); Kahler, A. [Los Alamos National Laboratory (LANL); Smith, Donald L. [Argonne National Laboratory (ANL); Pritychenko, B [Brookhaven National Laboratory (BNL); Arbanas, Goran [ORNL; Arcilla, r [Brookhaven National Laboratory (BNL); Brewer, R [Los Alamos National Laboratory (LANL); Brown, D A [Brookhaven National Laboratory (BNL); Capote, R. [International Atomic Energy Agency (IAEA); Carlson, A. D. [National Institute of Standards and Technology (NIST); Cho, Y S [Korea Atomic Energy Research Institute; Derrien, Herve [ORNL; Guber, Klaus H [ORNL; Hale, G. M. [Los Alamos National Laboratory (LANL); Hoblit, S [Brookhaven National Laboratory (BNL); Holloway, Shannon T. [Los Alamos National Laboratory (LANL); Johnson, T D [Brookhaven National Laboratory (BNL); Kawano, T. [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Kim, H [Korea Atomic Energy Research Institute; Kunieda, S [Los Alamos National Laboratory (LANL); Larson, Nancy M [ORNL; Leal, Luiz C [ORNL; Lestone, J P [Los Alamos National Laboratory (LANL); Little, R C [Los Alamos National Laboratory (LANL); Mccutchan, E A [Brookhaven National Laboratory (BNL); Macfarlane, R E [Los Alamos National Laboratory (LANL); MacInnes, M [Los Alamos National Laboratory (LANL); Matton, C M [Lawrence Livermore National Laboratory (LLNL); Mcknight, R D [Argonne National Laboratory (ANL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Nobre, G P [Brookhaven National Laboratory (BNL); Palmiotti, G [Idaho National Laboratory (INL); Palumbo, A [Brookhaven National Laboratory (BNL); Pigni, Marco T [ORNL; Pronyaev, V. G. 
[Institute of Physics and Power Engineering (IPPE), Obninsk, Russia; Sayer, Royce O [ORNL; Sonzogni, A A [Brookhaven National Laboratory (BNL); Summers, N C [Lawrence Livermore National Laboratory (LLNL); Talou, P [Los Alamos National Laboratory (LANL); Thompson, I J [Lawrence Livermore National Laboratory (LLNL); Trkov, A. [Jozef Stefan Institute, Slovenia; Vogt, R L [Lawrence Livermore National Laboratory (LLNL); Van der Marck, S S [Nucl Res & Consultancy Grp, Petten, Netherlands; Wallner, A [University of Vienna, Austria; White, M C [Los Alamos National Laboratory (LANL); Wiarda, Dorothea [ORNL; Young, P C [Los Alamos National Laboratory (LANL)
2011-01-01
The ENDF/B-VII.1 library is our latest recommended evaluated nuclear data file for use in nuclear science and technology applications, and incorporates advances made in the five years since the release of ENDF/B-VII.0. These advances focus on neutron cross sections, covariances, fission product yields and decay data, and represent work by the US Cross Section Evaluation Working Group (CSEWG) in nuclear data evaluation that utilizes developments in nuclear theory, modeling, simulation, and experiment. The principal advances in the new library are: (1) An increase in the breadth of neutron reaction cross section coverage, extending from 393 nuclides to 423 nuclides; (2) Covariance uncertainty data for 190 of the most important nuclides, as documented in companion papers in this edition; (3) R-matrix analyses of neutron reactions on light nuclei, including isotopes of He, Li, and Be; (4) Resonance parameter analyses at lower energies and statistical high energy reactions for isotopes of Cl, K, Ti, V, Mn, Cr, Ni, Zr and W; (5) Modifications to thermal neutron reactions on fission products (isotopes of Mo, Tc, Rh, Ag, Cs, Nd, Sm, Eu) and neutron absorber materials (Cd, Gd); (6) Improved minor actinide evaluations for isotopes of U, Np, Pu, and Am (we are not making changes to the major actinides (235,238)U and (239)Pu at this point, except for delayed neutron data and covariances, and instead we intend to update them after a further period of research in experiment and theory), and our adoption of JENDL-4.0 evaluations for isotopes of Cm, Bk, Cf, Es, Fm, and some other minor actinides; (7) Fission energy release evaluations; (8) Fission product yield advances for fission-spectrum neutrons and 14 MeV neutrons incident on (239)Pu; and (9) A new decay data sublibrary. Integral validation testing of the ENDF/B-VII.1 library is provided for a variety of quantities: For nuclear criticality, the VII.1 library maintains the generally good performance seen for VII.0 for a wide
Copernicus in Auersperg's and Lyceal Libraries
Directory of Open Access Journals (Sweden)
Stanislav Južnič
2006-01-01
Full Text Available We described the beginnings of the Auersperg Prince's Library of Ljubljana. Special attention was paid to the most important bibliophile among the Auerspergs, Volk Engelbert. The work of his friend and librarian, Schönleben, was put in the limelight. We researched the catalogue of Auersperg's mathematical books, including astronomy, and discussed the importance and value of particular items. The library was used as the basis for the analysis of Auersperg's scientific interests just after they returned to the Catholic faith. We also examined their opinion about Copernicus. The contemporary fate of the Auersperg Prince's Library was mentioned. At present, only some books of the former Ljubljana library can be traced in various foreign libraries, especially in the USA. We discovered the second edition of Copernicus' De Revolutionibus, which the National and University Library of Ljubljana inherited from the Ljubljana Jesuit library. Because of the wrong year written in the Cobiss record, this precious treasure was unknown to researchers until now.
Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang
2018-01-01
Spatially correlated errors are typically ignored in data assimilation, degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, is proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated both with a diagonal R matrix computed using TC and with a nondiagonal R matrix estimated by the proposed TC_Cov. The ensemble Kalman filter was used as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference (ubRMSD) metrics. These experiments confirmed that deterioration of diagonal R assimilation results occurs when the model simulation is more accurate than the observation data. Furthermore, nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than diagonal R in the experiments, demonstrating the effectiveness of TC_Cov in estimating a richly structured R for data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
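The classic triple collocation estimate underlying TC-based methods can be sketched as follows; this is an illustrative sketch with synthetic data and hypothetical names, not the TC_Cov implementation:

```python
import numpy as np

# Triple collocation: given three collocated measurements x, y, z of
# the same truth with mutually independent errors, the error variance
# of x can be estimated as cov(x - y, x - z), because the cross terms
# involving the errors of y and z cancel in expectation.
def tc_error_variance(x, y, z):
    return float(np.cov(x - y, x - z)[0, 1])

rng = np.random.default_rng(2)
truth = rng.normal(size=20000)
x = truth + rng.normal(scale=0.1, size=truth.size)
y = truth + rng.normal(scale=0.2, size=truth.size)
z = truth + rng.normal(scale=0.3, size=truth.size)
est = tc_error_variance(x, y, z)        # should approach 0.1**2
```

Applying such estimates at pairs of nearby grid cells, rather than only at each cell individually, is what allows a spatially correlated (nondiagonal) R to be built for the ensemble Kalman filter.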
Covariant single-hole optical potential
International Nuclear Information System (INIS)
Kam, J. de
1982-01-01
In this investigation a covariant optical potential model is constructed for scattering processes of mesons from nuclei in which the meson interacts repeatedly with one of the target nucleons. The nuclear binding interactions in the intermediate scattering state are consistently taken into account. In particular for pions and K⁻ projectiles this is important in view of the strong energy dependence of the elementary projectile-nucleon amplitude. Furthermore, this optical potential satisfies unitarity and relativistic covariance. The starting point in our discussion is the three-body model for the optical potential. To obtain a practical covariant theory I formulate the three-body model as a relativistic quasi two-body problem. Expressions for the transition interactions and propagators in the quasi two-body equations are found by imposing the correct s-channel unitarity relations and by using dispersion integrals. This is done in such a way that the correct non-relativistic limit is obtained, avoiding clustering problems. Corrections to the quasi two-body treatment from the Pauli principle and the required ground-state exclusion are taken into account. The covariant equations that we arrive at are amenable to practical calculations. (orig.)
Featured Library: Parrish Library
Kirkwood, Hal P, Jr
2015-01-01
The Roland G. Parrish Library of Management & Economics is located within the Krannert School of Management at Purdue University. Between 2005 and 2007, work was completed on a white paper that focused on a student-centered vision for the Management & Economics Library. The next step was a massive collection reduction and a re-envisioning of both the services and space of the library. Thus began a 3-phase renovation from a 2-floor standard, collection-focused library into a single floor, 18,000s...
Nuclear data covariances in the Indian context
International Nuclear Information System (INIS)
Ganesan, S.
2014-01-01
The topic of covariances has been recognized as an important part of several ongoing nuclear data science activities, since 2007, in the Nuclear Data Physics Centre of India (NDPCI). A Phase-1 project on nuclear data covariances, in collaboration with the Statistics department of Manipal University, Karnataka (Prof. K.M. Prasad and Prof. S. Nair), was executed successfully during the 2007-2011 period. In Phase-1, the NDPCI conducted three national theme meetings, sponsored by the DAE-BRNS in 2008, 2010 and 2013, on nuclear data covariances. In Phase-1, the emphasis was on a thorough basic understanding of the concept of covariances, including assigning uncertainties to experimental data in terms of partial errors and micro correlations, through a study and detailed discussion of the open literature. Towards the end of Phase-1, measurements and a first-time covariance analysis of cross sections for the 58 Ni (n, p) 58 Co reaction, measured at the Mumbai Pelletron accelerator using the 7 Li (p, n) reaction as a neutron source in the MeV energy region, were performed under a PhD programme on nuclear data covariances in which two students, Shri B.S. Shivashankar and Ms. Shanti Sheela, are enrolled. India is also successfully evolving a team of young researchers to code nuclear data uncertainties, with the perspectives on covariances, in the IAEA-EXFOR format. A Phase-2 DAE-BRNS-NDPCI project proposal at Manipal has been submitted and is undergoing peer review at this time. In Phase-2, modern nuclear data evaluation techniques that include covariances will be further studied as a research and development effort, as a first-time effort. These efforts include the use of techniques such as the Kalman filter. Presently, a 48-hour lecture series on the treatment of errors and their propagation is being formulated under the auspices of the Homi Bhabha National Institute. The talk describes the progress achieved thus far in the learning curve of the above-mentioned and exciting
International Nuclear Information System (INIS)
Yu, Lingfei; Wang, Hao; Wang, Guangshuai; Song, Weimin; Huang, Yao; Li, Sheng-Gong; Liang, Naishen; Tang, Yanhong; He, Jin-Sheng
2013-01-01
Comparing different CH4 flux measurement techniques allows for independent evaluation of the performance and reliability of those techniques. We compared three approaches to measuring CH4 fluxes in an alpine wetland: the traditional discrete Manual Static Chamber (MSC), the Continuous Automated Chamber (CAC) and the Eddy Covariance (EC) method. We found good agreement among the three methods in the seasonal CH4 flux patterns, but the diurnal patterns from the CAC and EC methods differed. While the diurnal CH4 flux variation from the CAC method was positively correlated with the soil temperature, the diurnal variation from the EC method was closely correlated with the solar radiation and net CO2 fluxes during the daytime but with the soil temperature at nighttime. The MSC method showed 25.3% and 7.6% greater CH4 fluxes than the CAC and EC methods, respectively, when measured between 09:00 h and 12:00 h. -- Highlights: •Chamber and eddy covariance methods showed similar seasonal CH4 flux patterns. •Chamber and eddy covariance methods showed different diurnal CH4 flux patterns. •Static chamber methods gave a higher magnitude of CH4 flux. -- The chamber-based methods and the eddy covariance method showed similar seasonal CH4 flux patterns, but the manual static chamber method resulted in a higher CH4 flux measurement
Abouserie, Hossam Eldin Mohamed Refaat
2010-01-01
The study investigated and analyzed the state of academic web-based job announcements in the Library and Information Science field. The purpose of the study was to gain an in-depth understanding of the main characteristics and trends of the academic job market in the Library and Information Science field. The study focused on the web-based version of announcements as it was…
Homonuclear long-range correlation spectra from HMBC experiments by covariance processing.
Schoefberger, Wolfgang; Smrecki, Vilko; Vikić-Topić, Drazen; Müller, Norbert
2007-07-01
We present a new application of covariance nuclear magnetic resonance processing based on 1H--13C-HMBC experiments which provides an effective way for establishing indirect 1H--1H and 13C--13C nuclear spin connectivity at natural isotope abundance. The method, which identifies correlated spin networks in terms of covariance between one-dimensional traces from a single decoupled HMBC experiment, derives 13C--13C as well as 1H--1H spin connectivity maps from the two-dimensional frequency domain heteronuclear long-range correlation data matrix. The potential and limitations of this novel covariance NMR application are demonstrated on two compounds: eugenyl-beta-D-glucopyranoside and an emodin-derivative. Copyright (c) 2007 John Wiley & Sons, Ltd.
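The core operation described above can be sketched in a few lines: covariance processing correlates one-dimensional traces of a 2D HMBC-type data matrix to produce an indirect carbon-carbon map. The following is a hedged illustration with synthetic data (the matrix sizes, column indices, and noise level are invented for the example), not a reproduction of the paper's processing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, n_c = 64, 32                              # 1H traces x 13C points
profile = rng.standard_normal(n_h)             # shared 1H intensity profile

F = 0.01 * rng.standard_normal((n_h, n_c))     # noise floor of the spectrum
F[:, 5] += profile                             # two 13C sites belonging to
F[:, 20] += profile                            # the same spin network

C = F.T @ F                                    # 13C-13C covariance map
# The connected carbons (columns 5 and 20) produce a strong
# off-diagonal covariance peak well above the noise background.
print(abs(C[5, 20]) > 10 * np.median(np.abs(C)))
```

Columns that share an intensity pattern across the 1H traces light up as off-diagonal peaks, which is what establishes the indirect spin connectivity.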
Strategic marketing planning in library
Directory of Open Access Journals (Sweden)
Karmen Štular-Sotošek
2000-01-01
Full Text Available The article is based on the idea that every library can design instruments for creating events and managing the important resources of today's world, especially for managing change. This process can only be successful if libraries use adequate marketing methods. Strategic marketing planning starts with an analysis of the library's mission, its objectives, goals and corporate culture. By analysing the public environment, the competitive environment and the macro environment, libraries recognise their opportunities and threats. These analyses are the foundation for defining the library: What does the library represent? What does it aspire to? Which goals does it want to reach? What kind of marketing strategy will it use for its target market?
Xing, Li; McDonald, Joseph J; Kolodziej, Steve A; Kurumbail, Ravi G; Williams, Jennifer M; Warren, Chad J; O'Neal, Janet M; Skepner, Jill E; Roberds, Steven L
2011-03-10
Structure-based virtual screening was applied to design combinatorial libraries to discover novel and potent soluble epoxide hydrolase (sEH) inhibitors. X-ray crystal structures revealed unique interactions for a benzoxazole template in addition to the conserved hydrogen bonds with the catalytic machinery of sEH. By exploiting the favorable binding elements, two iterations of library design based on amide coupling were employed, guided principally by the docking results of the enumerated virtual products. Biological screening of the libraries demonstrated a hit rate as high as 90%, of which over two dozen compounds were single-digit nanomolar sEH inhibitors by IC(50) determination. In total, the library design and synthesis produced more than 300 submicromolar sEH inhibitors. Activities in cellular systems were consistent with biochemical measurements. The SAR understanding of the benzoxazole template provides valuable insights into the discovery of novel sEH inhibitors as therapeutic agents.
Key Performance Indicators in Irish Hospital Libraries: Developing Outcome-Based Metrics
Directory of Open Access Journals (Sweden)
Michelle Dalton
2012-12-01
Full Text Available Objective – To develop a set of generic outcome-based performance measures for Irish hospital libraries. Methods – Various models and frameworks of performance measurement were used as a theoretical paradigm to link the impact of library services directly with measurable healthcare objectives and outcomes. Strategic objectives were identified, mapped to performance indicators, and finally translated into response choices to a single-question online survey for distribution via email. Results – The set of performance indicators represents an impact assessment tool which is easy to administer across a variety of healthcare settings. In using a model directly aligned with the mission and goals of the organization, and linked to core activities and operations in an accountable way, the indicators can also be used as a channel through which to implement action, change, and improvement. Conclusion – The indicators can be adopted at a local and potentially a national level, as both a tool for advocacy and to assess and improve service delivery at a macro level. To overcome the constraints posed by necessary simplifications, substantial further research is needed by hospital libraries to develop more sophisticated and meaningful measures of impact to further aid decision making at a micro level.
International Nuclear Information System (INIS)
Fu, C.Y.; Hetrick, D.M.
1982-01-01
Recent ratio data, with carefully evaluated covariances, were combined with eleven of the ENDF/B-V dosimetry cross sections using the generalized least-squares method. The purpose was to improve these evaluated cross sections and covariances, as well as to generate values for the cross-reaction covariances. The results represent improved cross sections as well as realistic and usable covariances. The latter are necessary for meaningful integral-differential comparisons and for spectrum unfolding
THEWASP library. Thermodynamic water and steam properties library in GPU
International Nuclear Information System (INIS)
Waintraub, M.; Lapa, C.M.F.; Mol, A.C.A.; Heimlich, A.
2011-01-01
In this paper we present THEWASP, a new library for the thermodynamic evaluation of water properties. The library consists of C++ and CUDA based programs used to accelerate function evaluations on GPUs and GPU clusters. Global optimization problems need thousands of evaluations of the objective function to find the global optimum, implying several days of expensive processing. This problem motivated us to seek ways to speed up our code, such as using MPI on Beowulf clusters, which however increases the cost in terms of electricity, air conditioning and other overheads. GPU-based programming can accelerate the implementation up to 100 times and helps increase the number of evaluations in global optimization problems using, for example, the PSO or DE algorithms. THEWASP is based on the water-steam formulations published by the International Association for the Properties of Water and Steam (Lucerne, Switzerland), and provides several function evaluations in temperature and pressure, such as specific heat, specific enthalpy and specific entropy, and also some inverse maps. In this study we evaluated the gain in speed and performance and compared it to a CPU-based processing library. (author)
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields, including business and economics, health care, engineering, and the environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
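As a concrete taste of the subject matter, here is a minimal sketch of one classical remedy such books cover: linear shrinkage of the sample covariance toward a scaled identity, which keeps the estimator invertible when the dimension exceeds the sample size. The fixed blending weight `alpha` is an illustrative assumption, not the optimal data-driven choice derived in the literature.

```python
import numpy as np

def shrink_covariance(X, alpha=0.5):
    """Blend the sample covariance with mu*I, where mu is the average variance."""
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / S.shape[0]
    return (1 - alpha) * S + alpha * mu * np.eye(S.shape[0])

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))              # n = 20 samples, p = 50 variables
S = np.cov(X, rowvar=False)
S_shrunk = shrink_covariance(X)

# The raw sample covariance is singular (rank <= n - 1 < p);
# the shrunk estimator is strictly positive definite.
print(np.linalg.matrix_rank(S), np.linalg.eigvalsh(S_shrunk).min() > 0)
```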
Multigroup cross section library; WIMS library
International Nuclear Information System (INIS)
Kannan, Umasankari
2000-01-01
The WIMS library has been extensively used in thermal reactor calculations. This multigroup constants library was originally developed from the UKNDL in the late 1960s and was updated in 1986. The library is distributed with the WIMS-D code by the NEA Data Bank. References to the WIMS library in the literature distinguish the 'old' library, the original developed by AEA Winfrith, from the 'new' library, the current 1986 version. The IAEA has organised a CRP through which a new and fully updated WIMS library will soon be available. This paper gives an overview of the definitions of the group constants that go into any basic nuclear data library used for reactor calculations. It also outlines the contents of the WIMS library and some of its shortcomings
A measure of association between vectors based on "similarity covariance"
Pascual-Marqui, Roberto D.; Lehmann, Dietrich; Kochi, Kieko; Kinoshita, Toshihiko; Yamada, Naoto
2013-01-01
The "maximum similarity correlation" definition introduced in this study is motivated by the seminal work of Szekely et al on "distance covariance" (Ann. Statist. 2007, 35: 2769-2794; Ann. Appl. Stat. 2009, 3: 1236-1265). Instead of using Euclidean distances "d" as in Szekely et al, we use "similarity", which can be defined as "exp(-d/s)", where the scaling parameter s>0 controls how rapidly the similarity falls off with distance. Scale parameters are chosen by maximizing the similarity corre...
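The construction can be sketched as follows for the univariate case with a fixed scale parameter s (the scale-maximization step mentioned in the abstract is omitted): pairwise similarities exp(-d/s) replace the pairwise distances of distance covariance, followed by the usual double centering and averaging of products.

```python
import numpy as np

def centered_similarity(x, s):
    """Double-centered similarity matrix exp(-d/s) of a 1-D sample."""
    d = np.abs(x[:, None] - x[None, :])        # pairwise distances
    a = np.exp(-d / s)                         # similarities fall off with distance
    return a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()

def similarity_cov(x, y, s=1.0):
    """Distance-covariance-style statistic built on a similarity kernel."""
    return (centered_similarity(x, s) * centered_similarity(y, s)).mean()

rng = np.random.default_rng(2)
x = rng.standard_normal(200)
dep = similarity_cov(x, x ** 2)                # nonlinear dependence
indep = similarity_cov(x, rng.standard_normal(200))
print(dep > indep)                             # dependence raises the statistic
```

Like distance covariance, the statistic responds to nonlinear dependence (here x versus x²) that ordinary Pearson correlation would miss.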
Covariance problem in two-dimensional quantum chromodynamics
International Nuclear Information System (INIS)
Hagen, C.R.
1979-01-01
The problem of covariance in the field theory of a two-dimensional non-Abelian gauge field is considered. Since earlier work has shown that covariance fails (in charged sectors) for the Schwinger model, particular attention is given to an evaluation of the role played by the non-Abelian nature of the fields. In contrast to all earlier attempts at this problem, it is found that the potential covariance-breaking terms are identical to those found in the Abelian theory provided that one expresses them in terms of the total (i.e., conserved) current operator. The question of covariance is thus seen to reduce in all cases to a determination as to whether there exists a conserved global charge in the theory. Since the charge operator in the Schwinger model is conserved only in neutral sectors, one is thereby led to infer a probable failure of covariance in the non-Abelian theory, but one which is identical to that found for the U(1) case
Form of the manifestly covariant Lagrangian
Johns, Oliver Davis
1985-10-01
The preferred form for the manifestly covariant Lagrangian function of a single, charged particle in a given electromagnetic field is the subject of some disagreement in the textbooks. Some authors use a ``homogeneous'' Lagrangian and others use a ``modified'' form in which the covariant Hamiltonian function is made to be nonzero. We argue in favor of the ``homogeneous'' form. We show that the covariant Lagrangian theories can be understood only if one is careful to distinguish quantities evaluated on the varied (in the sense of the calculus of variations) world lines from quantities evaluated on the unvaried world lines. By making this distinction, we are able to derive the Hamilton-Jacobi and Klein-Gordon equations from the ``homogeneous'' Lagrangian, even though the covariant Hamiltonian function is identically zero on all world lines. The derivation of the Klein-Gordon equation in particular gives Lagrangian theoretical support to the derivations found in standard quantum texts, and is also shown to be consistent with the Feynman path-integral method. We conclude that the ``homogeneous'' Lagrangian is a completely adequate basis for covariant Lagrangian theory both in classical and quantum mechanics. The article also explores the analogy with the Fermat theorem of optics, and illustrates a simple invariant notation for the Lagrangian and other four-vector equations.
Convex Banding of the Covariance Matrix.
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
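A fixed-bandwidth banding operation, the building block that the convex estimator above tapers adaptively, can be sketched as follows; the bandwidth k=1 and the tridiagonal test covariance are illustrative assumptions, not part of the paper's adaptive procedure.

```python
import numpy as np

def band_covariance(S, k):
    """Zero all entries of S more than k bands away from the diagonal."""
    idx = np.arange(S.shape[0])
    return S * (np.abs(idx[:, None] - idx[None, :]) <= k)

rng = np.random.default_rng(3)
p = 10
# True covariance has bandwidth 1 (tridiagonal, AR(1)-like):
true_cov = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=500)
S = np.cov(X, rowvar=False)
S_band = band_covariance(S, k=1)

# Banding discards the pure-noise entries outside the true band, so the
# banded estimate is closer to the truth in Frobenius norm.
err_full = np.linalg.norm(S - true_cov)
err_band = np.linalg.norm(S_band - true_cov)
print(err_band < err_full)
```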
Determination of covariant Schwinger terms in anomalous gauge theories
International Nuclear Information System (INIS)
Kelnhofer, G.
1991-01-01
A functional integral method is used to determine equal time commutators between the covariant currents and the covariant Gauss-law operators in theories which are affected by an anomaly. By using a differential geometrical setup we show how the derivation of consistent- and covariant Schwinger terms can be understood on an equal footing. We find a modified consistency condition for the covariant anomaly. As a by-product the Bardeen-Zumino functional, which relates consistent and covariant anomalies, can be interpreted as connection on a certain line bundle over all gauge potentials. Finally the covariant commutator anomalies are calculated for the two- and four dimensional case. (orig.)
Cross-population myelination covariance of human cerebral cortex.
Ma, Zhiwei; Zhang, Nanyin
2017-09-01
Cross-population covariance of brain morphometric quantities provides a measure of interareal connectivity, as it is believed to be determined by the coordinated neurodevelopment of connected brain regions. Although useful, structural covariance analysis has predominantly employed bulk morphological measures with mixed compartments, whereas studies of the structural covariance of any specific subdivision, such as myelin, are rare. Characterizing myelination covariance is of interest, as it will reveal connectivity patterns determined by the coordinated development of myeloarchitecture between brain regions. Using myelin content MRI maps from the Human Connectome Project, here we showed that the cortical myelination covariance was highly reproducible, and exhibited a brain organization similar to that previously revealed by other connectivity measures. Additionally, the myelination covariance network shared common topological features of human brain networks such as small-worldness. Furthermore, we found that the correlation between myelination covariance and resting-state functional connectivity (RSFC) was uniform within each resting-state network (RSN), but could vary considerably across RSNs. Interestingly, this myelination covariance-RSFC correlation was appreciably stronger in sensory and motor networks than in cognitive and polymodal association networks, possibly due to their different circuitry structures. This study has established a new brain connectivity measure specifically related to axons, and this measure can be valuable for investigating coordinated myeloarchitecture development. Hum Brain Mapp 38:4730-4743, 2017. © 2017 Wiley Periodicals, Inc.
A covariant form of the Maxwell's equations in four-dimensional spaces with an arbitrary signature
International Nuclear Information System (INIS)
Lukac, I.
1991-01-01
The concept of duality in four-dimensional spaces with an arbitrary constant metric is formulated in a mathematically strict way. A covariant model for covariant and contravariant bivectors in this space, based on three four-dimensional vectors, is proposed. 14 refs
Covariant extensions and the nonsymmetric unified field
International Nuclear Information System (INIS)
Borchsenius, K.
1976-01-01
The problem of generally covariant extension of Lorentz invariant field equations, by means of covariant derivatives extracted from the nonsymmetric unified field, is considered. It is shown that the contracted curvature tensor can be expressed in terms of a covariant gauge derivative which contains the gauge derivative corresponding to minimal coupling, if the universal constant p, characterizing the nonsymmetric theory, is fixed in terms of Planck's constant and the elementary quantum of charge. By this choice the spinor representation of the linear connection becomes closely related to the spinor affinity used by Infeld and Van Der Waerden (Sitzungsber. Preuss. Akad. Wiss. Phys. Math. Kl.; 9:380 (1933)) in their generally covariant formulation of Dirac's equation. (author)
Computing more proper covariances of energy dependent nuclear data
International Nuclear Information System (INIS)
Vanhanen, R.
2016-01-01
Highlights: • We present conditions for covariances of energy dependent nuclear data to be proper. • We provide methods to detect non-positive and inconsistent covariances in ENDF-6 format. • We propose methods to find nearby more proper covariances. • The methods can be used as a part of a quality assurance program. - Abstract: We present conditions for covariances of energy dependent nuclear data to be proper in the sense that the covariances are positive, i.e., its eigenvalues are non-negative, and consistent with respect to the sum rules of nuclear data. For the ENDF-6 format covariances we present methods to detect non-positive and inconsistent covariances. These methods would be useful as a part of a quality assurance program. We also propose methods that can be used to find nearby more proper energy dependent covariances. These methods can be used to remove unphysical components, while preserving most of the physical components. We consider several different senses in which the nearness can be measured. These methods could be useful if a re-evaluation of improper covariances is not feasible. Two practical examples are processed and analyzed. These demonstrate some of the properties of the methods. We also demonstrate that the ENDF-6 format covariances of linearly dependent nuclear data should usually be encoded with the derivation rules.
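One simple instance of the "nearby more proper covariance" idea is eigenvalue clipping: detect negative eigenvalues and project to the nearest positive semi-definite matrix in the Frobenius sense. This sketch is a generic illustration only, not the paper's ENDF-6-aware method, which must also enforce consistency with the sum rules of nuclear data.

```python
import numpy as np

def nearest_psd(C):
    """Nearest (Frobenius) positive semi-definite matrix: clip eigenvalues at zero."""
    C_sym = (C + C.T) / 2                      # symmetrize first
    w, V = np.linalg.eigh(C_sym)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

# A matrix that looks like a correlation matrix but is not positive:
# the sign pattern (+0.9, +0.9, -0.9) is mutually inconsistent.
C_bad = np.array([[1.0, 0.9, 0.9],
                  [0.9, 1.0, -0.9],
                  [0.9, -0.9, 1.0]])
C_fix = nearest_psd(C_bad)
print(np.linalg.eigvalsh(C_bad).min() < 0,       # improper input
      np.linalg.eigvalsh(C_fix).min() >= -1e-12) # proper output (up to roundoff)
```

Clipping removes only the unphysical (negative-eigenvalue) component while leaving the positive components of the matrix untouched, in line with the preservation goal stated above.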
International Nuclear Information System (INIS)
Sato, Masanori; Matsubara, Takahiko; Takada, Masahiro; Hamana, Takashi
2011-01-01
Using 1000 ray-tracing simulations for a Λ-dominated cold dark matter model in Sato et al., we study the covariance matrix of cosmic shear correlation functions, which are the standard statistic used in previous measurements. The shear correlation function at a particular separation angle is affected by Fourier modes over a wide range of multipoles, even beyond the survey area, which complicates the analysis of the covariance matrix. To overcome such obstacles we first construct Gaussian shear simulations from the 1000 realizations and then use the Gaussian simulations to disentangle the Gaussian contribution to the covariance matrix measured from the original simulations. We found that an analytical formula for the Gaussian covariance overestimates the covariance amplitudes due to an effect of the finite survey area. Furthermore, the clean separation of the Gaussian covariance allows us to examine the non-Gaussian covariance contributions as a function of separation angles and source redshifts. For upcoming surveys with typical source redshifts of z s = 0.6 and 1.0, the non-Gaussian contribution to the diagonal covariance components at 1 arcmin scales is greater than the Gaussian contribution by a factor of 20 and 10, respectively. Predictions based on the halo model qualitatively reproduce the simulation results well, but show a sizable disagreement in the covariance amplitudes. By combining these simulation results we develop a fitting formula for the covariance matrix of a survey with arbitrary area coverage, taking into account the effects of the finiteness of the survey area on the Gaussian covariance.
Covariance Spectroscopy for Fissile Material Detection
International Nuclear Information System (INIS)
Trainham, Rusty; Tinsley, Jim; Hurley, Paul; Keegan, Ray
2009-01-01
Nuclear fission produces multiple prompt neutrons and gammas at each fission event. The resulting daughter nuclei continue to emit delayed radiation as neutrons boil off, beta decay occurs, etc. All of these radiations are causally connected, and therefore correlated. The correlations are generally positive, but when different decay channels compete, so that some radiations tend to exclude others, negative correlations can also be observed. A similar problem of reduced complexity is that of cascade radiation, whereby a simple radioactive decay produces two or more correlated gamma rays at each decay. Covariance is the usual means of measuring correlation, and techniques of covariance mapping may be useful to produce distinct signatures of special nuclear materials (SNM). A covariance measurement can also be used to filter data streams, because uncorrelated signals are largely rejected. The technique is generally more effective than a coincidence measurement. In this poster, we concentrate on cascades and the covariance filtering problem
Optimal covariate designs theory and applications
Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar
2015-01-01
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...
Covariate Imbalance and Precision in Measuring Treatment Effects
Liu, Xiaofeng Steven
2011-01-01
Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…
Covariant description of Hamiltonian form for field dynamics
International Nuclear Information System (INIS)
Ozaki, Hiroshi
2005-01-01
A Hamiltonian form of field dynamics is developed on a space-like hypersurface in space-time. A covariant Poisson bracket on the space-like hypersurface is defined, and it plays a key role in casting every algebraic relation into a covariant form. It is shown that the Poisson bracket has the same symplectic structure that was brought in by the covariant symplectic approach. An identity invariant under the canonical transformations is obtained. The identity follows from a canonical equation in which the interaction Hamiltonian density generates a deformation of the space-like hypersurface. The equation corresponds exactly to the Yang-Feldman equation in the Heisenberg picture in quantum field theory. By converting the covariant Poisson bracket on the space-like hypersurface to a four-dimensional commutator, we can pass over to quantum field theory in the Heisenberg picture without spoiling the explicit relativistic covariance. As an example, canonical QCD is displayed in a covariant way on a space-like hypersurface
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
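The eigenvalue-comparison idea can be sketched as follows, with the paper's statistical test replaced by a simple variance-fraction readout; the embedding dimension, delay, and test signals are illustrative choices. A clean sine wave occupies a roughly two-dimensional subspace of the embedding, while a Gaussian process spreads its variance over all embedding dimensions.

```python
import numpy as np

def embed(x, m, tau=1):
    """Time-delay embedding: rows are m-dimensional delay vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

rng = np.random.default_rng(4)
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 40)            # low-dimensional oscillation
noise = rng.standard_normal(2000)              # full-dimensional baseline

m = 8                                          # trial embedding dimension
eig_signal = np.linalg.eigvalsh(np.cov(embed(signal, m), rowvar=False))[::-1]
eig_noise = np.linalg.eigvalsh(np.cov(embed(noise, m), rowvar=False))[::-1]

# Fraction of variance captured by the top two covariance eigenvalues:
frac_signal = eig_signal[:2].sum() / eig_signal.sum()
frac_noise = eig_noise[:2].sum() / eig_noise.sum()
print(round(frac_signal, 3), round(frac_noise, 3))
```

The contrast between the two spectra is what the paper's test formalizes into a probability that the dimension estimate is valid.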
Library Spaces for 21st-Century Learners: A Planning Guide for Creating New School Library Concepts
Sullivan, Margaret
2013-01-01
"Library Spaces for 21st-Century Learners: A Planning Guide for Creating New School Library Concepts" focuses on planning contemporary school library spaces with user-based design strategies. The book walks school librarians and administrators through the process of gathering information from students and other stakeholders involved in…
Exploiting Data Sparsity In Covariance Matrix Computations on Heterogeneous Systems
Charara, Ali M.
2018-05-24
Covariance matrices are ubiquitous in computational sciences, typically describing the correlation of elements of large multivariate spatial data sets. For example, covariance matrices are employed in climate/weather modeling for the maximum likelihood estimation to improve prediction, as well as in computational ground-based astronomy to enhance the observed image quality by filtering out noise produced by the adaptive optics instruments and atmospheric turbulence. The structure of these covariance matrices is dense, symmetric, positive-definite, and often data-sparse, therefore, hierarchically of low rank. This thesis investigates the performance limit of dense matrix computations (e.g., Cholesky factorization) on covariance matrix problems as the number of unknowns grows, and in the context of the aforementioned applications. We employ recursive formulations of some of the basic linear algebra subroutines (BLAS) to accelerate the covariance matrix computation further, while reducing data traffic across the memory subsystem layers. However, dealing with large data sets (i.e., covariance matrices of billions in size) can rapidly become prohibitive in memory footprint and algorithmic complexity. Most importantly, this thesis investigates the tile low-rank data format (TLR), a new compressed data structure and layout, which is valuable in exploiting data sparsity by approximating the operator. The TLR compressed data structure allows approximating the original problem up to user-defined numerical accuracy. This comes at the expense of dealing with tasks with much lower arithmetic intensities than traditional dense computations. In fact, this thesis consolidates the two trends of dense and data-sparse linear algebra for HPC. Not only does the thesis leverage recursive formulations for dense Cholesky-based matrix algorithms, but it also implements a novel TLR-Cholesky factorization using batched linear algebra operations to increase hardware occupancy and
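The per-tile compression step at the heart of TLR formats can be sketched with a truncated SVD; the exponential kernel tile below is an illustrative choice whose separability, exp(-(y - x)) = exp(x)·exp(-y) for well-separated point sets, makes the tile exactly rank one. Real TLR software tiles the whole matrix and runs batched factorizations on the compressed representation.

```python
import numpy as np

def compress_tile(A, tol):
    """Truncated-SVD compression of one tile to relative accuracy tol."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = max(1, int((s > tol * s[0]).sum()))    # numerical rank of the tile
    return U[:, :k] * s[:k], Vt[:k]            # low-rank factors

# Off-diagonal tile of an exponential covariance kernel between two
# well-separated 1-D point sets:
x = np.linspace(0.0, 1.0, 64)
y = np.linspace(10.0, 11.0, 64)
A = np.exp(-np.abs(x[:, None] - y[None, :]))
Uk, Vk = compress_tile(A, tol=1e-8)
err = np.linalg.norm(Uk @ Vk - A) / np.linalg.norm(A)
print(Uk.shape[1], err < 1e-8)                 # tiny factors, tiny error
```

Storing the two thin factors instead of the full 64×64 tile is the memory-footprint win the thesis exploits at scale.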
An Empirical State Error Covariance Matrix Orbit Determination Example
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire only limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance
ANL Critical Assembly Covariance Matrix Generation - Addendum
Energy Technology Data Exchange (ETDEWEB)
McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)
2014-01-13
In March 2012, a report was issued on covariance matrices for Argonne National Laboratory (ANL) critical experiments. That report detailed the theory behind the calculation of covariance matrices and the methodology used to determine the matrices for a set of 33 ANL experimental set-ups. Since that time, three new experiments have been evaluated and approved. This report essentially updates the previous report by adding in these new experiments to the preceding covariance matrix structure.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
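The effect of condition-number regularization can be sketched with a simplified eigenvalue-clipping scheme. This is only an illustration of the constraint being enforced, not the exact maximum-likelihood solution derived in the paper; the sample sizes and the bound kappa_max are arbitrary.

```python
import numpy as np

def clip_condition_number(S, kappa_max=50.0):
    """Shrink the eigenvalues of a sample covariance S so the condition
    number of the result is at most kappa_max.

    Simplified eigenvalue-clipping sketch, not the paper's ML estimator."""
    vals, vecs = np.linalg.eigh(S)
    floor = vals.max() / kappa_max          # smallest eigenvalue allowed
    vals_clipped = np.clip(vals, floor, None)
    return vecs @ np.diag(vals_clipped) @ vecs.T

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))               # "large p, small n": n=20, p=50
S = np.cov(X, rowvar=False)                 # singular sample covariance
S_reg = clip_condition_number(S, kappa_max=50.0)

# The regularized estimate is invertible and well-conditioned,
# unlike the rank-deficient sample covariance S.
print(np.linalg.cond(S_reg))
```

In the "large p, small n" regime the sample covariance has zero eigenvalues, so its condition number is infinite; the clipped estimate is invertible by construction.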
Library improvement through data analytics
Farmer, Lesley S J
2017-01-01
This book shows how to act on and make sense of data in libraries. Using a range of techniques, tools and methodologies, it explains how data can be used to help inform decision making at every level. Sound data analytics is the foundation for making an evidence-based case for libraries, in addition to guiding myriad organizational decisions, from optimizing operations for efficiency to responding to community needs. Designed to be useful for beginners as well as those with a background in data, this book introduces the basics of a six-point framework that can be applied to a variety of library settings for effective system-based, data-driven management. Library Improvement Through Data Analytics includes: - the basics of statistical concepts - recommended data sources for various library functions and processes, and guidance for using census, university, or government data in analysis - techniques for cleaning data - matching data to appropriate data analysis methods - how to make descriptive statistics m...
Library 3.0 intelligent libraries and apomediation
Kwanya, Tom; Underwood, Peter
2015-01-01
The emerging generation of research and academic library users expect the delivery of user-centered information services. 'Apomediation' refers to the supporting role librarians can give users by stepping in when users need help. Library 3.0 explores the ongoing debates on the “point oh” phenomenon and its impact on service delivery in libraries. This title analyses Library 3.0 and its potential in creating intelligent libraries capable of meeting contemporary needs, and the growing role of librarians as apomediators. Library 3.0 is divided into four chapters. The first chapter introduces and places the topic in context. The second chapter considers “point oh” libraries. The third chapter covers Library 3.0 librarianship, while the final chapter explores ways libraries can move towards '3.0'.
Joshi, Priyanka; Chia, Sean; Habchi, Johnny; Knowles, Tuomas P J; Dobson, Christopher M; Vendruscolo, Michele
2016-03-14
The aggregation process of intrinsically disordered proteins (IDPs) has been associated with a wide range of neurodegenerative disorders, including Alzheimer's and Parkinson's diseases. Currently, however, no drug in clinical use targets IDP aggregation. To facilitate drug discovery programs in this important and challenging area, we describe a fragment-based approach to generating small-molecule libraries that target specific IDPs. The method is based on the use of molecular fragments extracted from compounds reported in the literature to inhibit the aggregation of IDPs. These fragments are used to screen existing large generic libraries of small molecules to form smaller libraries specific for given IDPs. We illustrate this approach by describing three distinct small-molecule libraries to target Aβ, tau, and α-synuclein, three IDPs implicated in Alzheimer's and Parkinson's diseases. The strategy described here offers novel opportunities for the identification of effective molecular scaffolds for drug discovery for neurodegenerative disorders and provides insights into the mechanism of small-molecule binding to IDPs.
Altered structural covariance of the striatum in functional dyspepsia patients.
Liu, P; Zeng, F; Yang, F; Wang, J; Liu, X; Wang, Q; Zhou, G; Zhang, D; Zhu, M; Zhao, R; Wang, A; Gong, Q; Liang, F
2014-08-01
Functional dyspepsia (FD) is thought to involve dysregulation within the brain-gut axis. Recently, altered striatum activation has been reported in patients with FD. However, the gray matter (GM) volumes in the striatum and the structural covariance patterns of this area have rarely been explored. The purpose of this study was to examine the GM volumes and structural covariance patterns of the striatum in FD patients and healthy controls (HCs). T1-weighted magnetic resonance images were obtained from 44 FD patients and 39 HCs. Voxel-based morphometry (VBM) analysis was adopted to examine the GM volumes in the two groups. The caudate- or putamen-related regions identified from the VBM analysis were then used as seeds to map the whole-brain voxel-wise structural covariance patterns. Finally, a correlation analysis was used to investigate the effects of FD symptoms on the striatum. The results showed increased GM volumes in the bilateral putamen and right caudate. Compared with the structural covariance patterns of the HCs, the FD-related differences were mainly located in the amygdala, hippocampus/parahippocampus (HIPP/paraHIPP), thalamus, lingual gyrus, and cerebellum. Significant positive correlations were found between striatal volumes and FD duration in the patients. These findings provide preliminary evidence for GM changes in the striatum and different structural covariance patterns in patients with FD. The current results might expand our understanding of the pathophysiology of FD. © 2014 John Wiley & Sons Ltd.
International Nuclear Information System (INIS)
Sebestyen, A.
1975-07-01
The principle of covariance is extended to coordinates corresponding to internal degrees of freedom. The conditions for a system to be isolated are given. It is shown how internal forces arise in such systems. Equations for internal fields are derived. By an interpretation of the generalized coordinates based on group theory it is shown how particles of the ordinary sense enter into the model and as a simple application the gravitational interaction of two pointlike particles is considered and the shift of the perihelion is deduced. (Sz.Z.)
Activities on covariance estimation in Japanese Nuclear Data Committee
Energy Technology Data Exchange (ETDEWEB)
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-03-01
Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
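The KALMAN-style propagation of model parameter uncertainties into cross-section covariances follows the sandwich rule C_sigma = G C_p Gᵀ, where G holds the sensitivities of the cross sections to the parameters. A toy sketch with a hypothetical two-parameter model (the energies, parameters, and covariances below are illustrative, not evaluated data):

```python
import numpy as np

energies = np.array([0.1, 1.0, 5.0, 14.0])        # MeV (illustrative grid)
p = np.array([1.2, 0.8])                          # hypothetical model parameters
C_p = np.array([[0.01, 0.002],                    # parameter covariance matrix
                [0.002, 0.04]])

def sigma(p, E):
    """Hypothetical two-parameter cross-section model."""
    return p[0] * np.exp(-E / 10.0) + p[1] * np.sqrt(E)

# Finite-difference sensitivity matrix G[i, j] = d sigma(E_i) / d p_j
eps = 1e-6
G = np.empty((len(energies), len(p)))
for j in range(len(p)):
    dp = np.zeros_like(p)
    dp[j] = eps
    G[:, j] = (sigma(p + dp, energies) - sigma(p - dp, energies)) / (2 * eps)

C_sigma = G @ C_p @ G.T                           # cross-section covariance
print(np.linalg.eigvalsh(C_sigma).min() >= -1e-12)
```

Because C_sigma is a congruence transform of the positive semidefinite C_p, it is itself nonnegative definite (of rank at most the number of parameters).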
Croatian library leaders’ views on (their) library quality
Directory of Open Access Journals (Sweden)
Kornelija Petr Balog
2014-04-01
The purpose of this paper is to determine and describe the library culture in Croatian public libraries. Semi-structured interviews with 14 library directors (ten public and four academic) were conducted. The tentative discussion topics were: definition of quality, responsibility for quality, satisfaction with library services, familiarization with the user perspective of the library and librarians, and monitoring of user expectations and opinions. These interviews incorporate some of the findings of the project Evaluation of library and information services: public and academic libraries. The project investigates library culture in Croatian public and academic libraries and their preparedness for performance measurement activities. The interviews reveal that library culture has changed positively in the past few years and that library leaders have a positive attitude towards quality and evaluation activities. Library culture in Croatian libraries is a relatively new concept and as such was not actively developed and/or created. This article looks into the library culture of Croatian libraries, but at the same time investigates whether there is any trace of a culture of assessment in them. It also brings the latest update on views, opinions and atmosphere in Croatian public and academic libraries.
International Nuclear Information System (INIS)
Oblozinsky, P.
2008-01-01
We describe the new version of the Evaluated Nuclear Data File, ENDF/B-VII.0, of recommended nuclear data for advanced nuclear science and technology applications. The library, produced by the US Cross Section Evaluation Working Group, was released in December 2006. The library contains data in 14 sub-libraries, primarily for reactions with incident neutrons, protons and photons, based on experimental data and nuclear reaction theory predictions. The neutron reaction sub-library contains data for 393 materials. The new library was extensively tested and shows considerable improvements over the earlier ENDF/B-VI.8 library. (author)
International Nuclear Information System (INIS)
Hasegawa, A.
1992-01-01
JSSTDL-295n-104γ, a common group cross-section library system, has been developed at JAERI for use in a fairly wide range of applications in the nuclear industry. The system is composed of a common 295n-104γ group cross-section library based on the JENDL-3 nuclear data file, together with its utility codes. The system is aimed at criticality and shielding calculations for fast and fusion reactors using the ANISN, DOT, or MORSE codes. Specifications of the common group constants were decided in response to requests from various nuclear data users, particularly from nuclear design groups in Japan. The group structure was chosen to cover almost all group structures currently used in the country. The library includes self-shielding factor tables for the primary reactions, and a routine for generating macroscopic cross sections from the self-shielding factor table is also provided. Neutron cross sections and photon production cross sections are processed by the PROF.GROUCH-G/B code system, and γ-ray transport cross sections are generated by GAMLEG-JR. In this paper, the outline and present status of the JSSTDL library system are described along with two examples from the JENDL-3 benchmark test. The first is a shielding calculation in which the effect of the self-shielding factor (f-table) is shown in the analysis of the ASPIS natural iron deep penetration experiment; without accounting for the resonance self-shielding effect in the resonance energy region for resonant nuclides such as iron, the calculated attenuation profile in the shield is completely misleading. The second example is fast reactor criticality calculations of very small critical assemblies with highly enriched fuel materials, in which some basic characteristics of the library are presented. (orig.)
Marchetti, Luca; Manca, Vincenzo
2015-04-15
MpTheory Java library is an open-source project collecting a set of objects and algorithms for modeling observed dynamics by means of the Metabolic P (MP) theory, a mathematical theory introduced in 2004 for modeling biological dynamics. By means of the library, it is possible to model biological systems in both continuous and discrete time. Moreover, the library comprises a set of regression algorithms for inferring MP models starting from time series of observations. To enhance the modeling experience, besides pure Java usage, the library can be used directly within the most popular computing environments, such as MATLAB, GNU Octave, Mathematica and R. The library is open-source and licensed under the GNU Lesser General Public License (LGPL) Version 3.0. Source code, binaries and complete documentation are available at http://mptheory.scienze.univr.it. luca.marchetti@univr.it, marchetti@cosbi.eu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Neutron spectrum adjustment. The role of covariances
International Nuclear Information System (INIS)
Remec, I.
1992-01-01
The neutron spectrum adjustment method is briefly reviewed. A practical example dealing with the determination of power reactor pressure vessel exposure rates is analysed. The adjusted exposure rates are found to be only slightly affected by the covariances of the measured reaction rates and activation cross sections, while the multigroup spectrum covariances are found to be important. Approximate spectrum covariance matrices, as suggested in ASTM E944-89, were found useful, but care is advised if they are applied in adjustments of spectra at locations without dosimetry. (author)
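The generalized-least-squares update behind such adjustments can be sketched in a few lines. The group structure, response matrix, and covariances below are made-up illustrative numbers, not the reactor data of the study:

```python
import numpy as np

# GLS spectrum adjustment sketch (hypothetical 3-group spectrum,
# 2 dosimetry reactions):
#   phi' = phi + C_phi S^T (S C_phi S^T + C_m)^{-1} (m - S phi)
phi = np.array([1.0, 0.5, 0.2])                  # prior multigroup spectrum
C_phi = np.diag([0.09, 0.04, 0.01])              # spectrum covariance
S = np.array([[0.8, 0.3, 0.1],                   # reaction response matrix
              [0.1, 0.4, 0.9]])
m = np.array([1.05, 0.40])                       # measured reaction rates
C_m = np.diag([0.01**2, 0.01**2])                # measurement covariance

K = C_phi @ S.T @ np.linalg.inv(S @ C_phi @ S.T + C_m)
phi_adj = phi + K @ (m - S @ phi)                # adjusted spectrum
C_adj = C_phi - K @ S @ C_phi                    # adjusted covariance

# The adjustment never increases the group-wise variances.
print(np.all(np.diag(C_adj) <= np.diag(C_phi)))
```

The update is the same Kalman-type formula used in dosimetry adjustment codes: the prior spectrum covariance C_phi controls how strongly the measured reaction rates pull the spectrum, which is why the abstract finds the multigroup spectrum covariances to be the important input.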
Covariant holography of a tachyonic accelerating universe
Energy Technology Data Exchange (ETDEWEB)
Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)
2014-08-15
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)
CERN. Geneva
2017-01-01
As a join2 partner, the DESY library already uses Invenio for its publication database and institutional repository. The next logical step is to also migrate the library catalogue from the currently used Aleph system to Invenio. The talk starts with a short introduction to the migration from Aleph, covering bibliographic data and holdings as well as movement data, current loans, etc. It also outlines some of the new additions required to run Invenio as an ILS at DESY on the existing infrastructure: for example, DESY needs to interact with RFID-based self-service terminals, barcode-based library cards, and external patrons who have no DESY account.
Gusriani, N.; Firdaniza
2018-03-01
The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nevertheless applied to such data, it produces a model that cannot represent most of the data. A regression method robust to outliers is therefore needed. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on phytoplankton productivity, which contain outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
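As an illustration of how an MCD-based robust fit resists such outliers, here is a sketch using scikit-learn's MinCovDet on synthetic data; it is not the phytoplankton dataset, and flagging outliers by robust Mahalanobis distance before an ordinary fit is one common MCD-regression recipe, not necessarily the paper's exact procedure.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)

# Toy data with gross outliers, standing in for the real dataset
n = 100
x = rng.uniform(0, 10, size=n)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=n)
y[:5] += 30.0                               # five gross outliers

Z = np.column_stack([x, y])
mcd = MinCovDet(random_state=0).fit(Z)      # robust location and scatter
d2 = mcd.mahalanobis(Z)                     # squared robust distances
inliers = d2 < 9.21                         # chi-square(2) 99% cutoff

# Ordinary least squares on the MCD-cleaned subset
A = np.column_stack([x[inliers], np.ones(inliers.sum())])
slope, intercept = np.linalg.lstsq(A, y[inliers], rcond=None)[0]
print(slope, intercept)
```

Because the MCD scatter is computed from the most concentrated half of the data, the five shifted points get large robust distances and are excluded, and the refit recovers a slope near 2 and intercept near 1, which a plain least squares fit on all points would not.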
Lagishetty, Chakradhar V; Duffull, Stephen B
2015-11-01
Clinical studies include occurrences of rare variables, like genotypes, which due to their frequency and strength render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine whether such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have an insubstantial effect on the parameters of interest, in which case they are ignorable, or they may be influential and therefore non-ignorable. When these covariate effects cannot be estimated due to limited power but are non-ignorable, they are considered nuisance effects: they must be accounted for but, because of type I error inflation, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed effect parameter.
Cross-covariance functions for multivariate geostatistics
Genton, Marc G.
2015-05-01
Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
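One standard construction reviewed here, the linear model of coregionalization, can be sketched directly: the joint covariance is a sum of Kronecker products of coregionalization matrices with valid correlation functions, which guarantees nonnegative definiteness. The coregionalization matrices and length scales below are arbitrary illustrative choices.

```python
import numpy as np

def lmc_covariance(sites, A_list, length_scales):
    """Bivariate linear model of coregionalization:
    C = sum_k (A_k A_k^T) x rho_k, with exponential correlations rho_k.
    Returns the full 2n x 2n covariance over all sites and both variables."""
    n = len(sites)
    C = np.zeros((2 * n, 2 * n))
    h = np.abs(sites[:, None] - sites[None, :])   # pairwise distances (1-D sites)
    for A, ell in zip(A_list, length_scales):
        rho = np.exp(-h / ell)                    # valid correlation function
        C += np.kron(A @ A.T, rho)                # coregionalization term
    return C

sites = np.linspace(0.0, 10.0, 25)
A1 = np.array([[1.0, 0.0], [0.7, 0.3]])           # coregionalization matrices
A2 = np.array([[0.5, 0.0], [-0.2, 0.8]])
C = lmc_covariance(sites, [A1, A2], length_scales=[1.0, 4.0])

# By construction the joint matrix is nonnegative definite
print(np.linalg.eigvalsh(C).min() >= -1e-9)
```

Each term np.kron(A_k A_kᵀ, rho_k) is a Kronecker product of two nonnegative definite matrices and hence nonnegative definite, so the sum is a consistent cross-covariance model; the off-diagonal n×n blocks of C are the cross-covariances between the two variables.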
Shave, Steven; Auer, Manfred
2013-12-23
Combinatorial chemical libraries produced on solid support offer fast and cost-effective access to a large number of unique compounds. If such libraries are screened directly on-bead, the speed at which chemical space can be explored by chemists is much greater than that addressable using solution-based synthesis and screening methods. Solution-based screening has a large supporting body of software, such as structure-based virtual screening tools, which enable the prediction of protein-ligand complexes. Using these techniques to predict the protein-bound complexes of compounds synthesized on solid support neglects the conjugation site on the small-molecule ligand. This may invalidate predicted binding modes, since the linker may clash with protein atoms. We present CSBB-ConeExclusion, a methodology and computer program which provides a measure of the applicability of solution dockings to solid support. Output is given in the form of statistics for each docking pose, a unique 2D visualization method which can be used to determine applicability at a glance, and automatically generated PyMOL scripts allowing visualization of protein atom incursion into a defined exclusion volume. CSBB-ConeExclusion is then used, as an example, to determine the optimum attachment point for a purine library targeting cyclin-dependent kinase 2 (CDK2).
Virtual Library: Providing Accessible Online Resources.
Kelly, Rob
2001-01-01
Describes e-global library, a virtual library based on Jones International University's library that organizes Internet resources to make them more accessible to students at all skill levels. Highlights include online tutorials; research guides; financial aid and career development information; and possible partnerships with other digital…
International Nuclear Information System (INIS)
Green, N.M.; Parks, C.V.; Arwood, J.W.
1989-01-01
The 238-group LAW library is a new multigroup library based on ENDF/B-V data. It contains data for 302 materials and will be distributed by the Radiation Shielding Information Center, located at Oak Ridge National Laboratory. It was generated for use in the neutronics calculations required in radioactive waste analyses, though it has equal utility in any study requiring multigroup neutron cross sections.
Partially linear varying coefficient models stratified by a functional covariate
Maity, Arnab; Huang, Jianhua Z.
2012-01-01
We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric
The method of covariant symbols in curved space-time
International Nuclear Information System (INIS)
Salcedo, L.L.
2007-01-01
Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)
Friis, Thor Einar; Stephenson, Sally; Xiao, Yin; Whitehead, Jon
2014-01-01
The sheep (Ovis aries) is favored by many musculoskeletal tissue engineering groups as a large animal model because of its docile temperament and ease of husbandry. The size and weight of sheep are comparable to humans, which allows for the use of implants and fixation devices used in human clinical practice. The construction of a complementary DNA (cDNA) library can capture the expression of genes in both a tissue- and time-specific manner. cDNA libraries have been a consistent source of gene discovery ever since the technology became commonplace more than three decades ago. Here, we describe the construction of a cDNA library using cells derived from sheep bones based on the pBluescript cDNA kit. Thirty clones were picked at random and sequenced. This led to the identification of a novel gene, C12orf29, which our initial experiments indicate is involved in skeletal biology. We also describe a polymerase chain reaction-based cDNA clone isolation method that allows the isolation of genes of interest from a cDNA library pool. The techniques outlined here can be applied in-house by smaller tissue engineering groups to generate tools for biomolecular research for large preclinical animal studies, and highlight the power of standard cDNA library protocols to uncover novel genes. PMID:24447069
Speakeasy Studio and Cafe: Information Literacy, Web-based Library Instruction, and Technology.
Jacobs, Mark
2001-01-01
Discussion of academic library instruction and information literacy focuses on a Web-based program developed at Washington State University called Speakeasy Studio and Cafe that is used for bibliographic instruction. Highlights include the research process; asking the right question; and adapting to students' differing learning styles. (LRW)
Conformally covariant massless spin-two field equations
International Nuclear Information System (INIS)
Drew, M.S.; Gegenberg, J.D.
1980-01-01
An explicit proof is constructed to show that the field equations for a symmetric tensor field h_{ab} describing massless spin-2 particles in Minkowski space-time are not covariant under the 15-parameter group SO(4,2); this group is usually associated with conformal transformations on flat space, and here it will be considered as a global gauge group which acts upon matter fields defined on space-time. Notwithstanding the above noncovariance, the equations governing the rank-4 tensor S_{abcd} constructed from h_{ab} are shown to be covariant provided the contraction S_{ab} vanishes. Conformal covariance is proved by demonstrating the covariance of the equations for the equivalent 5-component complex field; in fact, covariance is proved for a general field equation applicable to massless particles of any spin > 0. It is shown that the noncovariance of the h_{ab} equations may be ascribed to the fact that the transformation behaviour of h_{ab} is not the same as that of a field consisting of a gauge only. Since this is in contradistinction to the situation for the electromagnetic-field equations, the vector form of the electromagnetic equations is cast into a form which can be duplicated for the h_{ab} field. This procedure results in an alternative, covariant, field equation for h_{ab}. (author)
Library Research Support in Queensland: A Survey
Richardson, Joanna; Nolan-Brown, Therese; Loria, Pat; Bradbury, Stephanie
2012-01-01
University libraries worldwide are reconceptualising the ways in which they support the research agenda in their respective institutions. This paper is based on a survey completed by member libraries of the Queensland University Libraries Office of Cooperation (QULOC), the findings of which may be informative for other university libraries. After…
Non-stationary pre-envelope covariances of non-classically damped systems
Muscolino, G.
1991-08-01
A new formulation is given to evaluate the stationary and non-stationary response of linear non-classically damped systems subjected to multi-correlated non-separable Gaussian input processes. This formulation is based on a new and more suitable definition of the impulse response function matrix for such systems. It is shown that, when using this definition, the stochastic response of non-classically damped systems involves the evaluation of quantities similar to those of classically damped ones. Furthermore, considerations about non-stationary cross-covariances, spectral moments and pre-envelope cross-covariances are presented for a monocorrelated input process.
Evaluation of covariance for 238U cross sections
International Nuclear Information System (INIS)
Kawano, Toshihiko; Nakamura, Masahiro; Matsuda, Nobuyuki; Kanda, Yukinori
1995-01-01
Covariances of 238U are generated using analytic functions for the representation of the cross sections. The covariances of the (n,2n) and (n,3n) reactions are derived with a spline function, while the covariances of the total and inelastic scattering cross sections are estimated with a linearized nuclear model calculation. (author)
The application of sparse estimation of covariance matrix to quadratic discriminant analysis
Sun, Jiehuan; Zhao, Hongyu
2015-01-01
Background Although Linear Discriminant Analysis (LDA) is commonly used for classification, it may not be directly applied in genomics studies due to the large p, small n problem in these studies. Different versions of sparse LDA have been proposed to address this significant challenge. One implicit assumption of various LDA-based methods is that the covariance matrices are the same across different classes. However, rewiring of genetic networks (therefore different covariance matrices) acros...
Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z
2009-05-01
Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability in predicting compounds of diverse structures and complex structure-activity relationships without requiring the knowledge of target 3D structure. This article reviews current progresses in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility to improve the performance of machine learning methods in screening large libraries is discussed.
GLq(N)-covariant quantum algebras and covariant differential calculus
International Nuclear Information System (INIS)
Isaev, A.P.; Pyatov, P.N.
1993-01-01
We consider GL_q(N)-covariant quantum algebras with generators satisfying quadratic polynomial relations. We show that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. The connection with the bicovariant differential calculus on the linear quantum groups is discussed. (orig.)
Asset allocation with different covariance/correlation estimators
Μανταφούνη, Σοφία
2007-01-01
The subject of the study is to test whether the use of different covariance – correlation estimators than the historical covariance matrix that is widely used, would help in portfolio optimization through the mean-variance analysis. In other words, if an investor would like to use the mean-variance analysis in order to invest in assets like stocks or indices, would it be of some help to use more sophisticated estimators for the covariance matrix of the returns of his portfolio? The procedure ...
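The portfolio step this question feeds into can be sketched in a few lines; the sample size, the five assets, and the naive 50/50 shrinkage toward a scaled identity are illustrative assumptions, not the estimators compared in the study:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights for a given covariance
    estimate: w proportional to inv(Sigma) @ 1, normalized to sum to one."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# historical (sample) covariance vs. a simple shrinkage alternative
rng = np.random.default_rng(0)
returns = rng.standard_normal((60, 5)) * 0.02      # 60 periods, 5 assets (made up)
S = np.cov(returns, rowvar=False)                  # the widely used historical estimator
F = np.trace(S) / S.shape[0] * np.eye(S.shape[0])  # shrinkage target: scaled identity
S_shrunk = 0.5 * S + 0.5 * F                       # naive 50/50 blend (illustrative)
w_hist = min_variance_weights(S)
w_shrunk = min_variance_weights(S_shrunk)
```

Comparing the out-of-sample variance of `w_hist` and `w_shrunk` portfolios is the kind of test the study performs, with more sophisticated estimators in place of the naive blend.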
Yan, Yuan; Genton, Marc G.
2017-01-01
Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present results for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.
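The simulation design can be sketched as follows; to keep the sketch dependency-free it uses the exponential special case of the Matérn model (nu = 1/2, which avoids Bessel functions) and illustrative g, h values, so it is a toy version of the paper's setup rather than its actual design:

```python
import numpy as np

def tukey_gh(z, g=0.5, h=0.2):
    """Tukey g-and-h transform of standard normal draws z:
    g controls skewness, h controls tail heaviness (illustrative values)."""
    if g == 0:
        return z * np.exp(h * z**2 / 2.0)
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

def matern(d, sigma2=1.0, rho=1.0, nu=0.5):
    """Matérn covariance; nu = 0.5 reduces to the exponential model."""
    if nu == 0.5:
        return sigma2 * np.exp(-d / rho)
    raise NotImplementedError("general nu needs a modified Bessel function")

# simulate a 1-D trans-Gaussian field on a grid
x = np.linspace(0, 10, 200)
D = np.abs(x[:, None] - x[None, :])
C = matern(D, sigma2=1.0, rho=2.0, nu=0.5)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))  # jitter for numerical safety
rng = np.random.default_rng(0)
z = L @ rng.standard_normal(len(x))  # Gaussian field with Matérn covariance
y = tukey_gh(z)                      # non-Gaussian (skewed, heavy-tailed) field
```

Fitting a Gaussian likelihood to `y`, even though `y` is non-Gaussian, is precisely the mismatch whose effect on the estimated Matérn parameters the paper quantifies.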
Students’ Covariational Reasoning in Solving Integrals’ Problems
Harini, N. V.; Fuad, Y.; Ekawati, R.
2018-01-01
Covariational reasoning plays an important role in describing how quantities vary in learning calculus. This study investigates students' covariational reasoning concerning two covarying quantities in integral problems. Six undergraduate students were chosen to solve problems that involved interpreting and representing how quantities change in tandem. Interviews were conducted to reveal the students' reasoning while solving covariational problems. The results emphasize that undergraduate students were able to construct the relation of dependent variables that change in tandem with the independent variable. However, students had difficulty forming images of continuously changing rates and could not accurately apply the concept of integrals. These findings suggest that calculus instruction should place increased emphasis on coordinating images of two quantities changing in tandem, on the instantaneous rate of change, and on promoting conceptual knowledge of integral techniques.
Covariance upperbound controllers for networked control systems
International Nuclear Information System (INIS)
Ko, Sang Ho
2012-01-01
This paper deals with designing covariance upperbound controllers for a linear system that can be used in a networked control environment in which control laws are calculated in a remote controller and transmitted through a shared communication link to the plant. In order to compensate for possible packet losses during the transmission, two different techniques are often employed: the zero-input and the hold-input strategy. These use zero input and the latest control input, respectively, when a packet is lost. For each strategy, we synthesize a class of output covariance upperbound controllers for a given covariance upperbound and a packet loss probability. Existence conditions of the covariance upperbound controller are also provided for each strategy. Through numerical examples, the performance of the two strategies is compared in terms of the feasibility of implementing the controllers.
Japanese evaluated nuclear data library version 3 revision-3: JENDL-3.3
Shibata, K; Kawano, T
2002-01-01
Evaluation for JENDL-3.3 has been performed by considering the accumulated feedback information and various benchmark tests of the previous library JENDL-3.2. The major problems of the JENDL-3.2 data were solved by the new library: overestimation of criticality values for thermal fission reactors was improved by the modifications of fission cross sections and fission neutron spectra for 235U; incorrect energy distributions of secondary neutrons from important heavy nuclides were replaced with statistical model calculations; the inconsistency between elemental and isotopic evaluations was removed for medium-heavy nuclides. Moreover, covariance data were provided for 20 nuclides. The reliability of JENDL-3.3 was investigated by the benchmark analyses on reactor and shielding performances. The results of the analyses indicate that JENDL-3.3 predicts various reactor and shielding characteristics better than JENDL-3.2. (author)
DNA-encoded chemical libraries: advancing beyond conventional small-molecule libraries.
Franzini, Raphael M; Neri, Dario; Scheuermann, Jörg
2014-04-15
DNA-encoded chemical libraries (DECLs) represent a promising tool in drug discovery. DECL technology allows the synthesis and screening of chemical libraries of unprecedented size at moderate costs. In analogy to phage-display technology, where large antibody libraries are displayed on the surface of filamentous phage and are genetically encoded in the phage genome, DECLs feature the display of individual small organic chemical moieties on DNA fragments serving as amplifiable identification barcodes. The DNA-tag facilitates the synthesis and allows the simultaneous screening of very large sets of compounds (up to billions of molecules), because the hit compounds can easily be identified and quantified by PCR-amplification of the DNA-barcode followed by high-throughput DNA sequencing. Several approaches have been used to generate DECLs, differing both in the methods used for library encoding and for the combinatorial assembly of chemical moieties. For example, DECLs can be used for fragment-based drug discovery, displaying a single molecule on DNA or two chemical moieties at the extremities of complementary DNA strands. DECLs can vary substantially in their chemical structures and library size. While ultralarge libraries containing billions of compounds, assembled from four or more sets of building blocks, have been reported, smaller libraries have also been shown to be efficient for ligand discovery. In general, it has been found that the overall library size is a poor predictor of library performance and that the number and diversity of the building blocks are more important indicators. Smaller libraries consisting of two to three sets of building blocks better fulfill the criteria of drug-likeness and often have higher quality. In this Account, we present advances in the DECL field from proof-of-principle studies to practical applications for drug discovery, both in industry and in academia. DECL technology can yield specific binders to a variety of target
ACORNS, Covariance and Correlation Matrix Diagonalization
International Nuclear Information System (INIS)
Szondi, E.J.
1990-01-01
1 - Description of program or function: The program allows the user to verify the different types of covariance/correlation matrices used in activation neutron spectrometry. 2 - Method of solution: The program performs the diagonalization of the input covariance/relative covariance/correlation matrices. The eigenvalues are then analyzed to determine the rank of the matrices. If the eigenvectors of the pertinent correlation matrix have also been calculated, the program can perform a complete factor analysis (generation of the factor matrix and its rotation in Kaiser's 'varimax' sense to select the origin of the correlations). 3 - Restrictions on the complexity of the problem: Matrix size is limited to 60 on PDP and to 100 on IBM PC/AT
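ACORNS itself targets PDP and IBM PC/AT hardware; its core step as described here (diagonalize the matrix, then read the rank off the eigenvalue spectrum) can be sketched in modern terms as follows, with NumPy and the relative tolerance as assumptions:

```python
import numpy as np

def matrix_rank_by_eigenvalues(cov, tol=1e-8):
    """Diagonalize a symmetric covariance/correlation matrix and estimate
    its rank from the eigenvalue spectrum (tolerance is illustrative)."""
    w, v = np.linalg.eigh(cov)  # eigh: for symmetric input, sorted eigenvalues
    rank = int(np.sum(w > tol * w.max()))
    return w, v, rank

# a rank-deficient 3x3 covariance: the third variable is the sum of the first two
A = np.array([[1.0, 0.2],
              [0.2, 1.0]])
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # maps 2 underlying variables onto 3 observed ones
cov = B.T @ A @ B
w, v, rank = matrix_rank_by_eigenvalues(cov)
# one eigenvalue is (numerically) zero, so the estimated rank is 2, not 3
```

Detecting such rank deficiency is exactly what flags an internally inconsistent covariance matrix in spectrum-adjustment work.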
Reusable libraries for safety-critical Java
DEFF Research Database (Denmark)
Rios Rivas, Juan Ricardo; Schoeberl, Martin
2014-01-01
The large collection of Java class libraries is a main factor of the success of Java. However, these libraries assume that a garbage-collected heap is used. Safety-critical Java uses scope-based memory areas instead of a garbage-collected heap. Therefore, the Java class libraries are problematic...... to use in safety-critical Java. We have identified common programming patterns in the Java class libraries that make them unsuitable for safety-critical Java. We propose ways to improve the libraries to avoid the impact of the identified problematic patterns. We illustrate these changes by implementing...
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
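The idea of targeting the condition number directly can be illustrated with a crude eigenvalue-clipping sketch; the choice of the clipping interval's lower end below is a heuristic placeholder, whereas the paper derives the optimal interval from the likelihood:

```python
import numpy as np

def condreg_sketch(S, kappa_max):
    """Enforce a condition-number bound on a sample covariance S by
    clipping its eigenvalues into an interval [u, kappa_max * u].
    Here u is chosen crudely (mean eigenvalue / sqrt(kappa_max));
    the actual method picks the interval by maximum likelihood."""
    w, V = np.linalg.eigh(S)
    u = w.mean() / np.sqrt(kappa_max)      # heuristic lower end (assumption)
    w_clipped = np.clip(w, u, kappa_max * u)
    return (V * w_clipped) @ V.T           # rebuild with clipped spectrum

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 50))          # n=10 samples, p=50: singular sample cov
S = np.cov(X, rowvar=False)
S_reg = condreg_sketch(S, kappa_max=100.0)
# S_reg is positive definite with condition number at most 100
```

Even this crude version turns a singular "large p, small n" sample covariance into an invertible, well-conditioned matrix usable in, e.g., portfolio optimization.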
Covariate-adjusted measures of discrimination for survival data
DEFF Research Database (Denmark)
White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth
2015-01-01
by the study design (e.g. age and sex) influence discrimination and can make it difficult to compare model discrimination between studies. Although covariate adjustment is a standard procedure for quantifying disease-risk factor associations, there are no covariate adjustment methods for discrimination...... statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators......, and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were...
Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.
2014-05-01
Multidimensional covariance analysis and its validity for correlation of processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
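The twofold partial covariance that the paper extends to three variables can be sketched as follows; the gamma-distributed "pulse intensity" and the linear signal scalings are illustrative assumptions:

```python
import numpy as np

def partial_covariance(x, y, i):
    """Two-variable partial covariance: cov(x, y) with the linear effect of
    a fluctuating parameter i (e.g. shot-to-shot FEL pulse intensity)
    removed: cov(x,y) - cov(x,i) * cov(i,y) / var(i)."""
    c = np.cov(np.vstack([x, y, i]))
    return c[0, 1] - c[0, 2] * c[2, 1] / c[2, 2]

rng = np.random.default_rng(2)
intensity = rng.gamma(5.0, size=5000)            # fluctuating parameter (made up)
x = 2.0 * intensity + rng.standard_normal(5000)  # both signals scale linearly with it
y = 3.0 * intensity + rng.standard_normal(5000)
raw = np.cov(x, y)[0, 1]                          # large false correlation
corrected = partial_covariance(x, y, intensity)   # near zero after correction
```

When the signal variation with the unstable parameter is nonlinear, this linear correction fails, which is what motivates the contingent covariance analysis proposed in the paper.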
Sparse Covariance Matrix Estimation by DCA-Based Algorithms.
Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham
2017-11-01
This letter proposes a novel approach using the [Formula: see text]-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the [Formula: see text] term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the [Formula: see text]-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and the corresponding DCA schemes developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment is performed on simulated and real data sets to study the performance of the proposed algorithms. Numerical results show their efficiency and their superiority compared with seven state-of-the-art methods.
How much do genetic covariances alter the rate of adaptation?
Agrawal, Aneil F; Stinchcombe, John R
2009-03-22
Genetically correlated traits do not evolve independently, and the covariances between traits affect the rate at which a population adapts to a specified selection regime. To measure the impact of genetic covariances on the rate of adaptation, we compare the rate at which fitness increases given the observed G matrix to the expected rate if all the covariances in the G matrix are set to zero. Using data from the literature, we estimate the effect of genetic covariances in real populations. We find no net tendency for covariances to constrain the rate of adaptation, though the quality and heterogeneity of the data limit the certainty of this result. There are some examples in which covariances strongly constrain the rate of adaptation, but these are balanced by counterexamples in which covariances facilitate the rate of adaptation; in many cases, covariances have little or no effect. We also discuss how our metric can be used to identify traits or suites of traits whose genetic covariances to other traits have a particularly large impact on the rate of adaptation.
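The comparison can be illustrated with the rate of adaptation written as the quadratic form beta' G beta (proportional to the fitness response under the multivariate breeder's equation, delta_z = G beta); the G matrix and selection gradient below are invented numbers chosen so that the covariance is constraining:

```python
import numpy as np

def adaptation_rate(G, beta):
    """Rate of increase in mean fitness under selection gradient beta,
    up to a constant: beta' G beta."""
    return float(beta @ G @ beta)

G = np.array([[1.0, -0.6],
              [-0.6, 1.0]])      # negative genetic covariance between the traits
beta = np.array([1.0, 1.0])      # selection favors increases in both traits
G0 = np.diag(np.diag(G))         # same G with all covariances set to zero

# ratio < 1 means the covariance constrains adaptation for this beta;
# ratio > 1 would mean it facilitates adaptation
constraint = adaptation_rate(G, beta) / adaptation_rate(G0, beta)
```

With these numbers the ratio is 0.4: the negative covariance removes most of the response that the variances alone would permit. Flipping the sign of either the covariance or one component of beta turns the constraint into facilitation, which is why the literature survey finds examples in both directions.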
Bauer, Renato A; Wurst, Jacqueline M; Tan, Derek S
2010-06-01
Existing drugs address a relatively narrow range of biological targets. As a result, libraries of drug-like molecules have proven ineffective against a variety of challenging targets, such as protein-protein interactions, nucleic acid complexes, and antibacterial modalities. In contrast, natural products are known to be effective at modulating such targets, and new libraries are being developed based on underrepresented scaffolds and regions of chemical space associated with natural products. This has led to several recent successes in identifying new chemical probes that address these challenging targets. Copyright 2010 Elsevier Ltd. All rights reserved.
Comparative Analyses of Phenotypic Trait Covariation within and among Populations.
Peiman, Kathryn S; Robinson, Beren W
2017-10-01
Many morphological, behavioral, physiological, and life-history traits covary across the biological scales of individuals, populations, and species. However, the processes that cause traits to covary also change over these scales, challenging our ability to use patterns of trait covariance to infer process. Trait relationships are also widely assumed to have generic functional relationships with similar evolutionary potentials, and even though many different trait relationships are now identified, there is little appreciation that these may influence trait covariation and evolution in unique ways. We use a trait-performance-fitness framework to classify and organize trait relationships into three general classes, assess which ones are more likely to generate trait covariation among individuals in a population, and review how selection shapes phenotypic covariation. We generate predictions about how trait covariance changes within and among populations as a result of trait relationships and in response to selection and consider how these can be tested with comparative data. Careful comparisons of covariation patterns can narrow the set of hypothesized processes that cause trait covariation when the form of the trait relationship and how it responds to selection yield clear predictions about patterns of trait covariation. We discuss the opportunities and limitations of comparative approaches to evaluate hypotheses about the evolutionary causes and consequences of trait covariation and highlight the importance of evaluating patterns within populations replicated in the same and in different selective environments. Explicit hypotheses about trait relationships are key to generating effective predictions about phenotype and its evolution using covariance data.
Starting a Research Data Management Program Based in a University Library
Henderson, Margaret E.; Knott, Teresa L.
2015-01-01
As the need for research data management grows, many libraries are considering adding data services to help with the research mission of their institution. The VCU Libraries created a position and hired a director of research data management in September 2013. The position was new to the libraries and the university. With the backing of the library administration, a plan for building relationships with VCU faculty, researchers, students, service and resource providers, including grant administrators, was developed to educate and engage the community in data management plan writing and research data management training. PMID:25611440
Atrophy and structural covariance of the cholinergic basal forebrain in primary progressive aphasia.
Teipel, Stefan; Raiser, Theresa; Riedl, Lina; Riederer, Isabelle; Schroeter, Matthias L; Bisenius, Sandrine; Schneider, Anja; Kornhuber, Johannes; Fliessbach, Klaus; Spottke, Annika; Grothe, Michel J; Prudlo, Johannes; Kassubek, Jan; Ludolph, Albert; Landwehrmeyer, Bernhard; Straub, Sarah; Otto, Markus; Danek, Adrian
2016-10-01
Primary progressive aphasia (PPA) is characterized by profound destruction of cortical language areas. Anatomical studies suggest an involvement of cholinergic basal forebrain (BF) in PPA syndromes, particularly in the area of the nucleus subputaminalis (NSP). Here we aimed to determine the pattern of atrophy and structural covariance as a proxy of structural connectivity of BF nuclei in PPA variants. We studied 62 prospectively recruited cases with the clinical diagnosis of PPA and 31 healthy older control participants from the cohort study of the German consortium for frontotemporal lobar degeneration (FTLD). We determined cortical and BF atrophy based on high-resolution magnetic resonance imaging (MRI) scans. Patterns of structural covariance of BF with cortical regions were determined using voxel-based partial least square analysis. We found significant atrophy of total BF and BF subregions in PPA patients compared with controls [F(1, 82) = 20.2, p < …]. Structural covariance analysis in healthy controls revealed associations of the BF nuclei, particularly the NSP, with left hemispheric predominant prefrontal, lateral temporal, and parietal cortical areas, including Broca's speech area (p < …). In PPA patients, we found structural covariance of the BF nuclei mostly with right but not with left hemispheric cortical areas (p < …). These findings suggest a shift of structural covariance of the BF from left hemispheric cortical areas in healthy aging towards right hemispheric cortical areas in PPA, possibly reflecting a consequence of the profound and early destruction of cortical language areas in PPA. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Synthetic promoter libraries for Corynebacterium glutamicum
DEFF Research Database (Denmark)
Rytter, Jakob Vang; Helmark, Søren; Chen, Jun
2014-01-01
The ability to modulate gene expression is an important genetic tool in systems biology and biotechnology. Here, we demonstrate that a previously published easy and fast PCR-based method for modulating gene expression in lactic acid bacteria is also applicable to Corynebacterium glutamicum. We constructed constitutive promoter libraries based on various combinations of a previously reported C. glutamicum -10 consensus sequence (gngnTA(c/t)aaTgg) and the Escherichia coli -35 consensus, either with or without an AT-rich region upstream. A promoter library based on consensus sequences frequently found in low-GC Gram-positive microorganisms was also included. The strongest promoters were found in the library with a -35 region and a C. glutamicum -10 consensus, and this library also represents the largest activity span. Using the alternative -10 consensus TATAAT, which can be found in many other... The synthetic promoter library (SPL) technology is convenient for modulating gene expression in C. glutamicum and should have many future applications, within basic research as well as for optimizing industrial production organisms.
Determination of covariant Schwinger terms in anomalous gauge theories
International Nuclear Information System (INIS)
Kelnhofer, G.
1991-01-01
A functional integral method is used to determine equal time commutators between the covariant currents and the covariant Gauss-law operators in theories which are affected by an anomaly. By using a differential geometrical setup we show how the derivation of consistent- and covariant Schwinger terms can be understood on an equal footing. We find a modified consistency condition for the covariant anomaly. As a by-product the Bardeen-Zumino functional, which relates consistent and covariant anomalies, can be interpreted as connection on a certain line bundle over all gauge potentials. Finally the commutator anomalies are calculated for the two- and four dimensional case. (Author) 13 refs
Coherent states and covariant semi-spectral measures
International Nuclear Information System (INIS)
Scutaru, H.
1976-01-01
The close connection between Mackey's theory of imprimitivity systems and the so-called generalized coherent states introduced by Perelomov is established. Coherent states give a covariant description of the ''localization'' of a quantum system in phase space, in a similar way as the imprimitivity systems give a covariant description of the localization of a quantum system in configuration space. The observation that for any system of coherent states one can define a covariant semi-spectral measure made possible a rigorous formulation of this idea. A generalization of the notion of coherent states is given. Covariant semi-spectral measures associated with systems of coherent states are defined and characterized. Necessary and sufficient conditions for a unitary representation of a Lie group to be i) a subrepresentation of an induced one and ii) a representation with coherent states are given (author)
Covariance NMR Processing and Analysis for Protein Assignment.
Harden, Bradley J; Frueh, Dominique P
2018-01-01
During NMR resonance assignment it is often necessary to relate nuclei to one another indirectly, through their common correlations to other nuclei. Covariance NMR has emerged as a powerful technique to correlate such nuclei without relying on error-prone peak picking. However, false-positive artifacts in covariance spectra have impeded a general application to proteins. We recently introduced pre- and postprocessing steps to reduce the prevalence of artifacts in covariance spectra, allowing for the calculation of a variety of 4D covariance maps obtained from diverse combinations of pairs of 3D spectra, and we have employed them to assign backbone and sidechain resonances in two large and challenging proteins. In this chapter, we present a detailed protocol describing how to (1) properly prepare existing 3D spectra for covariance, (2) understand and apply our processing script, and (3) navigate and interpret the resulting 4D spectra. We also provide solutions to a number of errors that may occur when using our script, and we offer practical advice for assigning difficult signals. We believe such 4D spectra, and covariance NMR in general, can play an integral role in the assignment of NMR signals.
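The covariance operation underneath such maps reduces, in the simplest two-spectrum case, to a matrix product over the shared dimension; this generic sketch is not the authors' processing script, and the synthetic "spectra" below are illustrative:

```python
import numpy as np

def covariance_spectrum(A, B):
    """Covariance of two spectra A (n_t x n_a) and B (n_t x n_b) recorded
    with the same indirect (row) dimension: C[i, j] measures how intensity
    in column i of A co-varies with column j of B across that dimension."""
    Ac = A - A.mean(axis=0)
    Bc = B - B.mean(axis=0)
    return Ac.T @ Bc / (A.shape[0] - 1)

rng = np.random.default_rng(4)
shared = rng.standard_normal(100)   # common signal along the shared dimension
# columns 0 and 2 of A and column 1 of B carry the shared signal (made up)
A = np.outer(shared, [1.0, 0.0, 0.5, 0.0]) + 0.1 * rng.standard_normal((100, 4))
B = np.outer(shared, [0.0, 2.0, 0.0]) + 0.1 * rng.standard_normal((100, 3))
C = covariance_spectrum(A, B)
# the largest entry of C links A-column 0 to B-column 1, the shared resonance
```

The false positives the chapter addresses arise when unrelated columns co-vary by chance or through shared noise, which the pre- and postprocessing steps are designed to suppress.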
Eddy Covariance Measurements of the Sea-Spray Aerosol Flux
Brooks, I. M.; Norris, S. J.; Yelland, M. J.; Pascal, R. W.; Prytherch, J.
2015-12-01
Historically, almost all estimates of the sea-spray aerosol source flux have been inferred through various indirect methods. Direct estimates via eddy covariance have been attempted by only a handful of studies, most of which measured only the total number flux, or achieved rather coarse size segregation. Applying eddy covariance to the measurement of sea-spray fluxes is challenging: most instrumentation must be located in a laboratory space requiring long sample lines to an inlet collocated with a sonic anemometer; however, larger particles are easily lost to the walls of the sample line. Marine particle concentrations are generally low, requiring a high sample volume to achieve adequate statistics. The highly hygroscopic nature of sea salt means particles change size rapidly with fluctuations in relative humidity; this introduces an apparent bias in flux measurements if particles are sized at ambient humidity. The Compact Lightweight Aerosol Spectrometer Probe (CLASP) was developed specifically to make high rate measurements of aerosol size distributions for use in eddy covariance measurements, and the instrument and data processing and analysis techniques have been refined over the course of several projects. Here we will review some of the issues and limitations related to making eddy covariance measurements of the sea spray source flux over the open ocean, summarise some key results from the last decade, and present new results from a 3-year long ship-based measurement campaign as part of the WAGES project. Finally we will consider requirements for future progress.
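The estimate at the heart of any eddy-covariance system is the mean product of the fluctuations of vertical wind w and the scalar of interest (here, particle concentration); a synthetic sketch with made-up numbers, not WAGES data:

```python
import numpy as np

def eddy_flux(w, c):
    """Eddy-covariance flux estimate: mean of the product of fluctuations
    w' and c' about their (here, simple block-average) means."""
    wp = w - w.mean()
    cp = c - c.mean()
    return float(np.mean(wp * cp))

rng = np.random.default_rng(3)
n = 20 * 60 * 10                              # a 20-minute block at 10 Hz
w = 0.3 * rng.standard_normal(n)              # vertical wind fluctuations, m/s
c = 50.0 + 5.0 * w + rng.standard_normal(n)   # concentration correlated with w
flux = eddy_flux(w, c)                        # positive: upward (source) flux
```

In practice the concentration series comes through a long sample line with tube losses and humidity-driven size changes, which is exactly where the biases discussed above enter before this simple covariance is computed.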
Measuring patrons' technology habits: an evidence-based approach to tailoring library services.
Wu, Jin; Chatfield, Amy J; Hughes, Annie M; Kysh, Lynn; Rosenbloom, Megan Curran
2014-04-01
Librarians continually integrate new technologies into library services for health sciences students. Recently published data are lacking about student ownership of technological devices, awareness of new technologies, and interest in using devices and technologies to interact with the library. A survey was implemented at seven health sciences libraries to help answer these questions. Results show that librarian assumptions about awareness of technologies are not supported, and student interest in using new technologies to interact with the library varies widely. Collecting this evidence provides useful information for successfully integrating technologies into library services.
The Weakest Link: Library Catalogs.
Young, Terrence E., Jr.
2002-01-01
Describes methods of correcting MARC records in online public access catalogs in school libraries. Highlights include in-house methods; professional resources; conforming to library cataloging standards; vendor services, including Web-based services; software specifically developed for record cleanup; and outsourcing. (LRW)
Some remarks on general covariance of quantum theory
International Nuclear Information System (INIS)
Schmutzer, E.
1977-01-01
If one accepts Einstein's general principle of relativity (covariance principle) also for the sphere of microphysics (quantum mechanics, quantum field theory, theory of elementary particles), one has to ask how far the fundamental laws of traditional quantum physics fulfil this principle. Attention is here drawn to a series of papers that have appeared during the last years, in which the author criticized the usual scheme of quantum theory (Heisenberg picture, Schroedinger picture etc.) and presented a new foundation of the basic laws of quantum physics, obeying the 'principle of fundamental covariance' (Einstein's covariance principle in space-time and covariance principle in the Hilbert space of quantum operators and states). (author)
Knowledge management for libraries
Forrestal, Valerie
2015-01-01
Libraries are creating dynamic knowledge bases to capture both tacit and explicit knowledge and subject expertise for use within and beyond their organizations. In this book, readers will learn to move policies and procedures manuals online using a wiki, get the most out of Microsoft SharePoint with custom portals and Web Parts, and build an FAQ knowledge base from reference management applications such as LibAnswers. Knowledge Management for Libraries guides readers through the process of planning, developing, and launching th
Construction and use of gene expression covariation matrix
Directory of Open Access Journals (Sweden)
Bellis Michel
2009-07-01
Background: One essential step in the massive analysis of transcriptomic profiles is the calculation of the correlation coefficient, a value used to select pairs of genes with similar or inverse transcriptional profiles across a large fraction of the biological conditions examined. Until now, the choice between the two available methods for calculating the coefficient has been dictated mainly by technological considerations. Specifically, in analyses based on double-channel techniques, researchers have been required to use covariation correlation, i.e. the correlation between gene expression changes measured between several pairs of biological conditions, expressed for example as fold-change. In contrast, in analyses of single-channel techniques scientists have been restricted to the use of coexpression correlation, i.e. correlation between gene expression levels. To our knowledge, nobody has ever examined the possible benefits of using covariation instead of coexpression in massive analyses of single-channel microarray results. Results: We describe here how single-channel techniques can be treated like double-channel techniques and used to generate both gene expression changes and covariation measures. We also present a new method that allows the calculation of both positive and negative correlation coefficients between genes. First, we perform systematic comparisons between two given biological conditions and classify, for each comparison, genes as increased (I), decreased (D), or not changed (N). As a result, the original series of n gene expression level measures assigned to each gene is replaced by an ordered string of n(n-1)/2 symbols, e.g. IDDNNIDID....DNNNNNNID, with the length of the string corresponding to the number of comparisons. In a second step, positive and negative covariation matrices (CVM) are constructed by calculating statistically significant positive or negative correlation scores for any pair of genes by comparing their
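The encoding step described in the Results can be sketched as follows; the fixed threshold used to call I/D/N is an illustrative stand-in for the statistical classification the method actually uses:

```python
def idn_string(levels, threshold=0.5):
    """Encode all pairwise comparisons of one gene's expression levels as a
    string of I/D/N symbols (Increased/Decreased/Not changed), giving
    n(n-1)/2 symbols for n conditions. The fixed threshold is illustrative;
    the method classifies changes statistically."""
    n = len(levels)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            diff = levels[j] - levels[i]
            out.append('I' if diff > threshold
                       else 'D' if diff < -threshold
                       else 'N')
    return ''.join(out)

# 4 conditions -> 4*3/2 = 6 pairwise comparisons -> 6 symbols
s = idn_string([1.0, 2.0, 1.8, 0.2])
```

Correlating two genes then reduces to comparing their symbol strings position by position, which supports both positive (matching I/I, D/D) and negative (opposing I/D) covariation scores.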
Dalbello, Marija
2009-01-01
This article presents the narrative accounts of the beginnings of digital library programs in five European national libraries: Biblioteca nacional de Portugal, Bibliotheque nationale de France, Die Deutsche Bibliothek, the National Library of Scotland, and the British Library. Based on interviews with policy makers and developers of digital…
Creating a web-based digital photographic archive: one hospital library's experience.
Marshall, Caroline; Hobbs, Janet
2017-04-01
Cedars-Sinai Medical Center is a nonprofit community hospital based in Los Angeles. Its history spans over 100 years, and its growth and development from the merging of 2 Jewish hospitals, Mount Sinai and Cedars of Lebanon, is also part of the history of Los Angeles. The medical library collects and maintains the hospital's photographic archive, to which retiring physicians, nurses, and an active Community Relations Department have donated photographs over the years. The collection was growing rapidly, it was impossible to display all the materials, and much of the collection was inaccessible to patrons. The authors decided to make the photographic collection more accessible to medical staff and researchers by purchasing a web-based digital archival package, Omeka. We decided what material should be digitized by analyzing archival reference requests and considering the institution's plan to create a Timeline Wall documenting and celebrating the history of Cedars-Sinai. Within 8 months, we digitized and indexed over 500 photographs. The digital archive now allows patrons and researchers to access the history of the hospital and enables the library to process archival references more efficiently.
International Nuclear Information System (INIS)
Capote Noy, Roberto; Nichols, Alan L.; Pronyaev, Vladimir G.
2003-01-01
An integral part of the activities of the IAEA Nuclear Data Section involves the development of nuclear data for a wide range of user applications. When considering low-energy nuclear reactions induced by neutrons, photons and charged particles, a detailed knowledge is required of the production cross sections over a wide energy range, spectra of emitted particles and their angular distributions. Two highly relevant IAEA data development projects are considered in this paper. Neutron reaction cross-section standards represent the basic quantities needed in nuclear reaction cross-section measurements and evaluations. These standards and the covariance matrices of their uncertainties were previously evaluated and released in 1987. However, the derived uncertainties were subsequently considered to be unrealistically low due to the effect of the low uncertainties obtained in fitting the light element standards to the R-matrix model; as a result, evaluators were forced to scale up the uncertainties to 'expected values'. An IAEA Coordinated Research Project (CRP) entitled 'Improvement of the Standard Cross Sections for Light Elements' was initiated in 2002 to improve the evaluation methodology for the covariance matrix of uncertainty in the R-matrix model fits, and to produce R-matrix evaluations of the important light element standards. The scope of this CRP has been substantially extended to include the preparation of a full set of evaluated standard reactions and covariance matrices of their uncertainties. While almost all requests for nuclear data were originally addressed through measurement programmes, our theoretical understanding of nuclear phenomena has reached a reasonable degree of reliability and nuclear modeling has become standard practice in nuclear data evaluations (with measurements remaining crucial for data testing and benchmarking). Since nuclear model codes require a considerable amount of numerical input, the IAEA has instigated extensive efforts to
Disruption of structural covariance networks for language in autism is modulated by verbal ability.
Sharda, Megha; Khundrakpam, Budhachandra S; Evans, Alan C; Singh, Nandini C
2016-03-01
The presence of widespread speech and language deficits is a core feature of autism spectrum disorders (ASD). These impairments have often been attributed to altered connections between brain regions. Recent developments in anatomical correlation-based approaches to map structural covariance offer an effective way of studying such connections in vivo. In this study, we employed such a structural covariance network (SCN)-based approach to investigate the integrity of anatomical networks in fronto-temporal brain regions of twenty children with ASD compared to an age and gender-matched control group of twenty-two children. Our findings reflected large-scale disruption of inter and intrahemispheric covariance in left frontal SCNs in the ASD group compared to controls, but no differences in right fronto-temporal SCNs. Interhemispheric covariance in left-seeded networks was further found to be modulated by verbal ability of the participants irrespective of autism diagnosis, suggesting that language function might be related to the strength of interhemispheric structural covariance between frontal regions. Additionally, regional cortical thickening was observed in right frontal and left posterior regions, which was predicted by decreasing symptom severity and increasing verbal ability in ASD. These findings unify reports of regional differences in cortical morphology in ASD. They also suggest that reduced left hemisphere asymmetry and increased frontal growth may not only reflect neurodevelopmental aberrations but also compensatory mechanisms.
Starting a research data management program based in a university library.
Henderson, Margaret E; Knott, Teresa L
2015-01-01
As the need for research data management grows, many libraries are considering adding data services to help with the research mission of their institution. The Virginia Commonwealth University (VCU) Libraries created a position and hired a director of research data management in September 2013. The position was new to the libraries and the university. With the backing of the library administration, a plan for building relationships with VCU faculty, researchers, students, service and resource providers, including grant administrators, was developed to educate and engage the community in data management plan writing and research data management training.
Libraries for users services in academic libraries
Alvite, Luisa
2010-01-01
This book reviews the quality and evolution of academic library services. It revises service trends offered by academic libraries and the challenge of enhancing traditional ones such as catalogues, repositories and digital collections, learning resources centres, virtual reference services, information literacy and 2.0 tools; studies the role of the university library in the new educational environment of higher education; rethinks libraries in the academic context; and redefines roles for academic libraries.
Changing Traditions: Automation and the Oxford College Libraries.
Bell, Suzanne
1990-01-01
Discussion of automation in the Oxford College Libraries (England) begins with background on the university library system, which consists of numerous independent libraries. Centralized and decentralized automation activities are described, and hardware and software for the microcomputer-based system at the University College Library are…
The Bayesian Covariance Lasso.
Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G
2013-04-01
Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data.
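BCLASSO itself is Bayesian, but the n < d failure it addresses is easy to demonstrate. The sketch below shows the rank deficiency of the sample covariance and repairs it with simple linear shrinkage toward a scaled identity, one frequentist alternative in the same spirit; the shrinkage intensity is an arbitrary assumption here, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                          # fewer samples than dimensions
X = rng.standard_normal((n, d))

S = np.cov(X, rowvar=False)            # sample covariance: rank-deficient when n < d
alpha = 0.2                            # shrinkage intensity (arbitrary tuning choice)
mu = np.trace(S) / d                   # scale of the identity target
Sigma = (1 - alpha) * S + alpha * mu * np.eye(d)

print(np.linalg.eigvalsh(S).min())     # ~0: S is not positive definite
print(np.linalg.eigvalsh(Sigma).min() > 0)  # shrinkage restores positive definiteness
```

The same mechanism also improves the condition number when n exceeds d only modestly, which is the second regime the abstract mentions.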
Galaxy-galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
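The jackknife covariance compared against mocks above is a delete-one estimator over sky regions; a minimal sketch follows. This is an illustrative implementation, not the authors' pipeline, and the region count and simulated measurements are made up.

```python
import numpy as np

def jackknife_covariance(measurements):
    """Delete-one jackknife covariance from per-region measurements of a
    correlation function (array of shape n_regions x n_bins). Each
    leave-one-out mean stands in for the signal with one region removed."""
    m = np.asarray(measurements, dtype=float)
    n = m.shape[0]
    total = m.sum(axis=0)
    loo = (total - m) / (n - 1)        # leave-one-out means
    diff = loo - loo.mean(axis=0)
    return (n - 1) / n * diff.T @ diff # standard jackknife normalisation

rng = np.random.default_rng(3)
mocks = rng.standard_normal((500, 3))  # 500 regions, 3 radial bins, unit scatter
C = jackknife_covariance(mocks)
print(np.diag(C))                      # each diagonal element ~ 1/500
```

For independent regions this reduces to the sample covariance of the mean, which is why the abstract can meaningfully compare it with the standard deviation across mocks.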
Galaxy–galaxy lensing estimators and their covariance properties
International Nuclear Information System (INIS)
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; Slosar, Anze; Gonzalez, Jose Vazquez
2017-01-01
Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Biomathematical Description of Synthetic Peptide Libraries
Trepel, Martin
2015-01-01
Libraries of randomised peptides displayed on phages or viral particles are essential tools in a wide spectrum of applications. However, there is only limited understanding of a library's fundamental dynamics and the influences of encoding schemes and sizes on their quality. Numeric properties of libraries, such as the expected number of different peptides and the library's coverage, have long been in use as measures of a library's quality. Here, we present a graphical framework of these measures together with a library's relative efficiency to help to describe libraries in enough detail for researchers to plan new experiments in a more informed manner. In particular, these values allow us to answer, in a probabilistic fashion, the question of whether a specific library does indeed contain one of the "best" possible peptides. The framework is implemented in a web-interface based on two packages for the statistical software environment R, discreteRV and peptider. We further provide a user-friendly web-interface called PeLiCa (Peptide Library Calculator, http://www.pelica.org), allowing scientists to plan and analyse their peptide libraries. PMID:26042419
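Under the simplifying assumption of a uniform encoding scheme (real schemes such as NNK are biased, which is part of what peptider models), the expected number of distinct peptides and the coverage follow from a coupon-collector argument; the library size below is a made-up example.

```python
def expected_distinct(num_possible, library_size):
    """Expected number of distinct peptides when drawing `library_size`
    sequences uniformly at random from `num_possible` equally likely
    peptides: P * (1 - (1 - 1/P)^N). Uniformity is a simplification."""
    p = num_possible
    return p * (1.0 - (1.0 - 1.0 / p) ** library_size)

# 7-mer peptide library, 20 amino acids per position, 1e9 clones
possible = 20 ** 7            # = 1,280,000,000 possible 7-mers
clones = 10 ** 9
distinct = expected_distinct(possible, clones)
print(distinct / possible)    # coverage: expected fraction of all 7-mers present
```

Even a billion clones covers only a bit over half of the 7-mer sequence space under this idealised model, which is exactly the kind of planning question the calculator addresses.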
Treatment Effects with Many Covariates and Heteroskedasticity
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.
The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results...
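A standard Eicker-White (HC0) sandwich estimator illustrates the kind of heteroskedasticity-robust inference at issue. Note the paper's contribution concerns corrections needed when the number of covariates grows with the sample size; this plain HC0 sketch, with made-up simulated data, does not implement those corrections.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 5
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
beta = np.array([1.0, 2.0, 0.0, 0.0, -1.0])
# heteroskedastic errors: the noise scale depends on a covariate
e = rng.standard_normal(n) * (1.0 + X[:, 1] ** 2)
y = X @ beta + e

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                      # OLS coefficient estimates
u = y - X @ b                              # residuals
# HC0 sandwich: (X'X)^-1 [sum_i u_i^2 x_i x_i'] (X'X)^-1
meat = X.T @ (X * (u ** 2)[:, None])
V = XtX_inv @ meat @ XtX_inv
print(np.sqrt(np.diag(V)))                 # robust standard errors
```

With many covariates relative to n, the residuals u systematically underestimate the errors, which is the bias the authors' methods are designed to remove.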
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
Health sciences libraries building survey, 1999-2009.
Ludwig, Logan
2010-04-01
A survey was conducted of health sciences libraries to obtain information about newer buildings, additions, remodeling, and renovations. An online survey was developed, and announcements of survey availability posted to three major email discussion lists: Medical Library Association (MLA), Association of Academic Health Sciences Libraries (AAHSL), and MEDLIB-L. Previous discussions of library building projects on email discussion lists, a literature review, personal communications, and the author's consulting experiences identified additional projects. Seventy-eight health sciences library building projects at seventy-three institutions are reported. Twenty-two are newer facilities built within the last ten years; two are space expansions; forty-five are renovation projects; and nine are combinations of new and renovated space. Six institutions report multiple or ongoing renovation projects during the last ten years. The survey results confirm a continuing migration from print-based to digitally based collections and reveal trends in library space design. Some health sciences libraries report loss of space as they move toward creating space for "community" building. Libraries are becoming more proactive in using or retooling space for concentration, collaboration, contemplation, communication, and socialization. All are moving toward a clearer operational vision of the library as the institution's information nexus and not merely as a physical location with print collections.
Cosmic censorship conjecture revisited: covariantly
International Nuclear Information System (INIS)
Hamid, Aymen I M; Goswami, Rituparno; Maharaj, Sunil D
2014-01-01
In this paper we study the dynamics of the trapped region using a frame independent semi-tetrad covariant formalism for general locally rotationally symmetric (LRS) class II spacetimes. We covariantly prove some important geometrical results for the apparent horizon, and state the necessary and sufficient conditions for a singularity to be locally naked. These conditions bring out, for the first time in a quantitative and transparent manner, the importance of the Weyl curvature in deforming and delaying the trapped region during continual gravitational collapse, making the central singularity locally visible. (paper)
Dolan, C.V.; Molenaar, P.C.M.; Boomsma, D.I.
1991-01-01
D. Soerbom's (1974, 1976) simplex model approach to simultaneous analysis of means and covariance structure was applied to analysis of means observed in a single group. The present approach to the simultaneous biometric analysis of covariance and mean structure is based on the testable assumption
Evaluation of covariance in theoretical calculation of nuclear data
International Nuclear Information System (INIS)
Kikuchi, Yasuyuki
1981-01-01
Covariances of the cross sections are discussed in the context of statistical model calculations. Two categories of covariance are considered: one is caused by the model approximation and the other by errors in the model parameters. As an example, the covariances are calculated for 100Ru. (author)
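For the parameter-error category, the standard first-order recipe propagates the model-parameter covariance through a sensitivity matrix. The sketch below is a hypothetical illustration with made-up numbers, not the paper's 100Ru calculation.

```python
import numpy as np

# First-order propagation: C = S M S^T, where S_gj = d(sigma_g)/d(p_j)
# for energy group g and model parameter p_j (all values invented).
S = np.array([[1.0, 0.5],      # sensitivities: 3 energy groups x 2 parameters
              [0.8, 0.2],
              [0.1, 0.9]])
M = np.array([[0.04, 0.01],    # covariance matrix of the model parameters
              [0.01, 0.09]])
C = S @ M @ S.T                # resulting cross-section covariance matrix
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
print(np.diag(C))              # group variances
print(corr)                    # correlations between groups
```

Because all groups share the same few parameters, strong inter-group correlations appear automatically, a characteristic feature of model-based covariances.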
An Evidence Based Methodology to Facilitate Public Library Non-fiction Collection Development
Directory of Open Access Journals (Sweden)
Matthew Kelly
2015-12-01
Objective – This research was designed as a pilot study to test a methodology for subject based collection analysis for public libraries. Methods – WorldCat collection data from eight Australian public libraries was extracted using the Collection Evaluation application. The data was aggregated and filtered to assess how the sample’s titles could be compared against the OCLC Conspectus subject categories. A hierarchy of emphasis emerged and this was divided into tiers ranging from 1% of the sample. These tiers were further analysed to quantify their representativeness against both the sample’s titles and the subject categories taken as a whole. The interpretive aspect of the study sought to understand the types of knowledge embedded in the tiers and was underpinned by hermeneutic phenomenology. Results – The study revealed that there was a marked tendency for a small percentage of subject categories to constitute a large proportion of the potential topicality that might have been represented in these types of collections. The study also found that the distribution of the aggregated collection conformed to a Power Law distribution (80/20), so that approximately 80% of the collection was represented by 20% of the subject categories. The study also found that there were significant commonalities in the types of subject categories that were found in the designated tiers and that it may be possible to develop ontologies that correspond to the collection tiers. Conclusions – The evidence-based methodology developed in this pilot study has the potential for further development to help to improve the practice of collection development. The introduction of the concept of the epistemic role played by collection tiers is a promising aid to inform our understanding of knowledge organization for public libraries. The research shows a way forward to help to link subjective decision making with a scientifically based approach to managing knowledge
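The 80/20 concentration reported in the Results can be checked mechanically: sort subject categories by holdings and sum the top fifth. The holdings below are invented for illustration; they are not the study's data.

```python
import numpy as np

def cumulative_share(category_counts, top_fraction=0.2):
    """Fraction of all titles held by the top `top_fraction` of subject
    categories, sorted by holdings: a quick Power Law (80/20) check."""
    counts = np.sort(np.asarray(category_counts))[::-1]
    k = max(1, int(round(top_fraction * counts.size)))
    return counts[:k].sum() / counts.sum()

# Made-up holdings for 10 subject categories
holdings = [500, 300, 60, 40, 30, 25, 20, 15, 6, 4]
print(cumulative_share(holdings))  # share of titles in the top 20% of categories
```

A value near 0.8 from the top 20% of categories is the signature the study reports for the aggregated collection.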
School Libraries Empowering Learning: The Australian Landscape.
Todd, Ross J.
2003-01-01
Describes school libraries in Australia. Highlights include the title of teacher librarian and their education; the history of the role of school libraries in Australian education; empowerment; information skills and benchmarks; national standards for school libraries; information literacy; learning outcomes; evidence-based practice; digital…
Status of CINDER and ENDF/B-V based libraries for transmutation calculations
International Nuclear Information System (INIS)
Wilson, W.B.; England, T.R.; LaBauve, R.J.; Battat, M.E.; Wessol, D.E.; Perry, R.T.
1980-01-01
The CINDER codes and their data libraries are described, and their range of calculational capabilities is illustrated using documented applications. The importance of ENDF/B data and the features of the ENDF/B-IV and ENDF/B-V fission-product and actinide data files are emphasized. The actinide decay data of ENDF/B-V, augmented by additional data from available sources, are used to produce average decay energy values and neutron source values from spontaneous fission, (α,n) and delayed neutron emission for 144 actinide nuclides that are formed in reactor fuel. The status and characteristics of the CINDER-2 code are described, along with a brief description of the more well known code versions; a review of the status of new ENDF/B-V based libraries for all versions is presented
What's New in the Library Automation Arena?
Breeding, Marshall
1998-01-01
Reviews trends in library automation based on vendors at the 1998 American Library Association Annual Conference. Discusses the major industry trend, a move from host-based computer systems to the new generation of client/server, object-oriented, open systems-based automation. Includes a summary of developments for 26 vendors. (LRW)
Parameters of the covariance function of galaxies
International Nuclear Information System (INIS)
Fesenko, B.I.; Onuchina, E.V.
1988-01-01
The two-point angular covariance functions for two samples of galaxies are considered using quick methods of analysis. It is concluded that in the previous investigations the amplitude of the covariance function in the Lick counts was overestimated and the rate of decrease of the function underestimated
Generally covariant gauge theories
International Nuclear Information System (INIS)
Capovilla, R.
1992-01-01
A new class of generally covariant gauge theories in four space-time dimensions is investigated. The field variables are taken to be a Lie algebra valued connection 1-form and a scalar density. Modulo an important degeneracy, complex [euclidean] vacuum general relativity corresponds to a special case in this class. A canonical analysis of the generally covariant gauge theories with the same gauge group as general relativity shows that they describe two degrees of freedom per space point, qualifying therefore as a new set of neighbors of general relativity. The modification of the algebra of the constraints with respect to the general relativity case is computed; this is used in addressing the question of how general relativity stands out from its neighbors. (orig.)
MCNP4c JEFF-3.1 Based Libraries. Eccolib-Jeff-3.1 libraries; Les bibliotheques Eccolib-Jeff-3.1
Energy Technology Data Exchange (ETDEWEB)
Sublet, J.Ch
2006-07-01
Continuous-energy, multi-temperature MCNP ACE-type libraries, derived from the Joint European Fusion-Fission JEFF-3.1 evaluations, have been generated using the NJOY-99.111 processing code system. They include the continuous-energy neutron JEFF-3.1/General Purpose, JEFF-3.1/Activation-Dosimetry and thermal S(α,β) JEFF-3.1/Thermal libraries and data tables. The processing steps and features are explained, together with the Quality Assurance processes and records linked to the generation of such multipurpose libraries. (author)
Structural Covariance of the Prefrontal-Amygdala Pathways Associated with Heart Rate Variability.
Wei, Luqing; Chen, Hong; Wu, Guo-Rong
2018-01-01
The neurovisceral integration model has shown a key role of the amygdala in neural circuits underlying heart rate variability (HRV) modulation, and suggested that reciprocal connections from amygdala to brain regions centered on the central autonomic network (CAN) are associated with HRV. To provide neuroanatomical evidence for these theoretical perspectives, the current study used covariance analysis of MRI-based gray matter volume (GMV) to map structural covariance network of the amygdala, and then determined whether the interregional structural correlations related to individual differences in HRV. The results showed that covariance patterns of the amygdala encompassed large portions of cortical (e.g., prefrontal, cingulate, and insula) and subcortical (e.g., striatum, hippocampus, and midbrain) regions, lending evidence from structural covariance analysis to the notion that the amygdala was a pivotal node in neural pathways for HRV modulation. Importantly, participants with higher resting HRV showed increased covariance of amygdala to dorsal medial prefrontal cortex and anterior cingulate cortex (dmPFC/dACC) extending into adjacent medial motor regions [i.e., pre-supplementary motor area (pre-SMA)/SMA], demonstrating structural covariance of the prefrontal-amygdala pathways implicated in HRV, and also implying that resting HRV may reflect the function of neural circuits underlying cognitive regulation of emotion as well as facilitation of adaptive behaviors to emotion. Our results, thus, provide anatomical substrates for the neurovisceral integration model that resting HRV may index an integrative neural network which effectively organizes emotional, cognitive, physiological and behavioral responses in the service of goal-directed behavior and adaptability.
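Structural covariance as used above is an across-subject correlation of regional gray matter volumes with a seed region. The sketch below illustrates the computation on simulated data; the subject count, region labels, and covariance structure are all made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_regions = 40, 6
# Simulated gray matter volumes; region 0 plays the seed (e.g. amygdala),
# and region 1 is constructed to covary with it (hypothetical data).
gmv = rng.standard_normal((n_subjects, n_regions))
gmv[:, 1] = 0.8 * gmv[:, 0] + 0.2 * rng.standard_normal(n_subjects)

# Structural covariance map: across-subject correlation of each region
# with the seed region.
seed = gmv[:, 0]
r = np.array([np.corrcoef(seed, gmv[:, j])[0, 1] for j in range(n_regions)])
print(np.round(r, 2))  # r[0] = 1 (seed with itself), r[1] high, rest near 0
```

In the study, such seed-based correlations are computed on cortical thickness or volume maps and then compared between groups (here, ASD vs. controls) to detect network-level disruption.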
Library school education for medical librarianship.
Roper, F W
1979-10-01
This paper reviews the current situation in library school education for medical librarianship in the United States and Canada based on information from a questionnaire sent to teachers of courses in medical librarianship in accredited library schools. Since 1939, when the first course devoted entirely to medical librarianship was offered at Columbia University, courses have been introduced into the curricula of at least forty-seven of the ALA-accredited library schools. In 1978 there were seventy courses available through forty-seven library schools. Possibilities for specialization in medical librarianship are examined. Course content is reviewed. Implications of the MLA certification examination for library school courses are explored.
Library School Education for Medical Librarianship *
Roper, Fred W.
1979-01-01
This paper reviews the current situation in library school education for medical librarianship in the United States and Canada based on information from a questionnaire sent to teachers of courses in medical librarianship in accredited library schools. Since 1939, when the first course devoted entirely to medical librarianship was offered at Columbia University, courses have been introduced into the curricula of at least forty-seven of the ALA-accredited library schools. In 1978 there were seventy courses available through forty-seven library schools. Possibilities for specialization in medical librarianship are examined. Course content is reviewed. Implications of the MLA certification examination for library school courses are explored. PMID:385086
Application Portable Parallel Library
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops the application program once and can then easily move it from the parallel computer on which it was created to another parallel computer ("parallel computer" here also includes a heterogeneous collection of networked computers). APPL is written in C, with one FORTRAN 77 subroutine, for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.
Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds
Mitra, Arpita
2017-12-01
The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.
PCR-based cDNA library construction: general cDNA libraries at the level of a few cells.
Belyavsky, A; Vinogradova, T; Rajewsky, K
1989-01-01
A procedure for the construction of general cDNA libraries is described which is based on the amplification of total cDNA in vitro. The first cDNA strand is synthesized from total RNA using an oligo(dT)-containing primer. After oligo(dG) tailing the total cDNA is amplified by PCR using two primers complementary to oligo(dA) and oligo(dG) ends of the cDNA. For insertion of the cDNA into a vector a controlled trimming of the 3' ends of the cDNA by Klenow enzyme was used. Starting from 10 J558L ...
Welcome to the National Wetlands Research Center Library: Not Just Another Library-A Special Library
Broussard, Linda
2007-01-01
Libraries are grouped into four major types: public, school, academic, and special. The U.S. Geological Survey's (USGS) National Wetlands Research Center (NWRC) library is classified as a special library because it is sponsored by the Federal government, and the collections focus on a specific subject. The NWRC library is the only USGS library dedicated to wetland science. Library personnel offer expert research services to meet the informational needs of NWRC scientists, managers, and support personnel. The NWRC library participates in international cataloging and resource sharing, which allows libraries from throughout the world to borrow from its collections. This sharing facilitates the research of other governmental agencies, universities, and those interested in the study of wetlands.
International Nuclear Information System (INIS)
Herman, Michal Wladyslaw; Cabellos De Francisco, Oscar; Beck, Bret; Ignatyuk, Anatoly V.; Palmiotti, Giuseppe; Grudzevich, Oleg T.; Salvatores, Massimo; Chadwick, Mark; Pelloni, Sandro; Diez De La Obra, Carlos Javier; Wu, Haicheng; Sobes, Vladimir; Rearden, Bradley T.; Yokoyama, Kenji; Hursin, Mathieu; Penttila, Heikki; Kodeli, Ivan-Alexander; Plevnik, Lucijan; Plompen, Arjan; Gabrielli, Fabrizio; Leal, Luiz Carlos; Aufiero, Manuele; Fiorito, Luca; Hummel, Andrew; Siefman, Daniel; Leconte, Pierre
2016-05-01
The aim of WPEC subgroup 39 'Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files' is to provide criteria and practical approaches to use effectively the results of sensitivity analyses and cross section adjustments for feedback to evaluators and differential measurement experimentalists in order to improve the knowledge of neutron cross sections, uncertainties, and correlations to be used in a wide range of applications. WPEC subgroup 40-CIELO (Collaborative International Evaluated Library Organization) provides a new working paradigm to facilitate evaluated nuclear reaction data advances. It brings together experts from across the international nuclear reaction data community to identify and document discrepancies among existing evaluated data libraries, measured data, and model calculation interpretations, and aims to make progress in reconciling these discrepancies to create more accurate ENDF-formatted files. SG40-CIELO focusses on 6 important isotopes: 1H, 16O, 56Fe, 235,238U, 239Pu. This document is the proceedings of the seventh formal Subgroup 39 meeting and of the Joint SG39+SG40 Session held at the NEA, OECD Conference Center, Paris, France on 10-11 May 2016. It comprises a Summary Record of the meeting, and all the available presentations (slides) given by the participants: A - Welcome and actions review (Oscar CABELLOS); B - Methods: - XGPT: uncertainty propagation and data assimilation from continuous energy covariance matrix and resonance parameters covariances (Manuele AUFIERO); - Optimal experiment utilization (REWINDing PIA), (G. Palmiotti); C - Experiment analysis, sensitivity calculations and benchmarks: - Tripoli-4 analysis of SEG experiments (Andrew HUMMEL); - Tripoli-4 analysis of BERENICE experiments (P. DUFAY, Cyrille DE SAINT JEAN); - Preparation of sensitivities of k-eff, beta-eff and shielding benchmarks for adjustment exercise (Ivo KODELI); - SA and
Library of files of evaluated neutron data
International Nuclear Information System (INIS)
Blokhin, A.I.; Ignatyuk, A.V.; Koshcheev, V.N.; Kuz'minov, B.D.; Manokhin, V.N.; Manturov, G.N.; Nikolaev, M.N.
1988-01-01
The development of a library of evaluated neutron data files is reported, which the GKAE Nuclear Data Commission recommended as the basis for improving constant systems in neutron engineering calculations. A short description of the library content is given and the status of the library is pointed out.
The K-Step Spatial Sign Covariance Matrix
Croux, C.; Dehon, C.; Yadine, A.
2010-01-01
The Sign Covariance Matrix is an orthogonal equivariant estimator of multivariate scale. It is often used as an easy-to-compute and highly robust estimator. In this paper we propose a k-step version of the Sign Covariance Matrix, which improves its efficiency while keeping the maximal breakdown
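The basic (zero-step) estimator underlying this record can be sketched in a few lines: each centered observation is projected onto the unit sphere, and the ordinary covariance of these "spatial signs" is returned. The Python sketch below uses a coordinatewise median as a robust center; the function name is illustrative, and the paper's k-step refinement is not reproduced here.

```python
import numpy as np

def spatial_sign_covariance(X, center=None):
    """Spatial sign covariance matrix (SSCM): covariance of the
    centered observations projected onto the unit sphere."""
    X = np.asarray(X, dtype=float)
    if center is None:
        center = np.median(X, axis=0)  # robust coordinatewise center
    Z = X - center
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    # map each row to its spatial sign z / ||z|| (zero rows stay zero)
    S = np.divide(Z, norms, out=np.zeros_like(Z), where=norms > 0)
    return S.T @ S / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
V = spatial_sign_covariance(X)
# the trace is 1 by construction, since each sign vector has unit norm
print(np.isclose(np.trace(V), 1.0))
```

Because only directions, not magnitudes, enter the estimator, a single wildly outlying point can move the result only as much as any other point, which is the source of its robustness.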
On the covariance matrices in the evaluated nuclear data
International Nuclear Information System (INIS)
Corcuera, R.P.
1983-05-01
The implications of the uncertainties of nuclear data for reactor calculations are shown. The concepts of variance, covariance and correlation are expressed first by intuitive definitions and then through statistical theory. The format of the covariance data in ENDF/B is explained and the formulas to obtain the multigroup covariances are given. (Author) [pt
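The relation between variance, covariance and correlation described above is compact in code: a correlation matrix is the covariance matrix divided elementwise by the outer product of the standard deviations. The sketch below uses invented relative covariances, not ENDF/B data.

```python
import numpy as np

def correlation_from_covariance(cov):
    """corr[i, j] = cov[i, j] / (sigma_i * sigma_j),
    where sigma_i = sqrt(cov[i, i])."""
    sigma = np.sqrt(np.diag(cov))
    return cov / np.outer(sigma, sigma)

# hypothetical 2-group relative covariance matrix (dimensionless)
cov = np.array([[0.0004, 0.0001],
                [0.0001, 0.0009]])
corr = correlation_from_covariance(cov)
print(corr)  # diagonal is exactly 1; off-diagonal is 0.0001/(0.02*0.03)
```

The diagonal of the input gives the group-wise uncertainties (here 2% and 3%), while the off-diagonal correlation (about 0.17) quantifies how the two groups move together.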
Gbadamosi, Belau Olatunde
2011-01-01
The paper examines the level of library automation and virtual library development in four academic libraries. A validated questionnaire was used to capture the responses from academic librarians at the libraries under study. The paper finds that none of the four academic libraries is fully automated. The libraries make use of librarians with…
Data Analysis Methods for Library Marketing
Minami, Toshiro; Kim, Eunja
Our society is rapidly changing into an information society, in which people's needs and requests for information access differ widely from person to person. A library's mission is to provide its users, or patrons, with the most appropriate information. To fulfill such a role, libraries have to know the profiles of their patrons. The aim of library marketing is to develop methods based on library data, such as circulation records, book catalogs, book-usage data, and others. In this paper we first discuss the methodology and importance of library marketing. Then we demonstrate its usefulness through some examples of analysis methods applied to the circulation records of Kyushu University and Guacheon Library, and some implications obtained as the results of these methods. Our research is a first step towards a future in which library marketing will be an indispensable tool.
Digital Libraries from Concept to Practice
Banciu, D
2007-01-01
The paper presents the results of research on digital library functionalities in the context of new Grid infrastructure support. A new vector of the knowledge society is defined: the informational vector, or content vector. The paper presents a European Grid project which includes Romanian partners, and on this basis defines a digital library model which can be applied to libraries in Romania.
Eddy-covariance methane flux measurements over a European beech forest
Gentsch, Lydia; Siebicke, Lukas; Knohl, Alexander
2015-04-01
The role of forests in global methane (CH4) turnover is currently not well constrained, partially because of the lack of spatially integrative forest-scale measurements of CH4 fluxes. Soil chamber measurements imply that temperate forests generally act as CH4 sinks. Upscaling of chamber observations to the forest scale is however problematic if the upscaling is not constrained by concurrent 'top-down' measurements, such as of the eddy-covariance type, which provide sufficient integration of spatial variations and of further potential CH4 flux components within forest ecosystems. Ongoing development of laser-absorption-based optical instruments, resulting in enhanced measurement stability, precision and sampling speed, has recently improved the prospects for meaningful eddy-covariance measurements at sites with presumably low CH4 fluxes, hence prone to reach the flux detection limit. At present, we are launching eddy-covariance CH4 measurements at a long-running ICOS flux tower site (Hainich National Park, Germany), located in a semi-natural, unmanaged, beech-dominated forest. Eddy-covariance measurements will be conducted with a laser spectrometer for parallel CH4, H2Ov and CO2 measurements (FGGA, Los Gatos Research, USA). Independent observations of the CO2 flux by the FGGA and a standard infrared gas analyser (LI-7200, LI-COR, USA) will allow us to evaluate the data quality of the measured CH4 fluxes. Here, we present first results with a focus on uncertainties of the calculated CH4 fluxes with regard to instrument precision, data processing and site conditions. In future, we plan to compare eddy-covariance flux estimates to side-by-side turbulent flux observations from a novel eddy accumulation system. Furthermore, soil CH4 fluxes will be measured with four automated chambers situated within the tower footprint. Based on a previous soil chamber study at the same site, we expect the Hainich forest site to act as a CH4 sink. However, we hypothesize that our
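At its core, the eddy-covariance flux named in this record is the covariance of vertical wind speed and scalar concentration over an averaging period, via Reynolds decomposition. A minimal sketch with synthetic numbers (not Hainich data; magnitudes are invented for illustration):

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Turbulent flux F = mean(w' * c'), with w' = w - mean(w) and
    c' = c - mean(c) (Reynolds decomposition over one averaging period)."""
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    return np.mean((w - w.mean()) * (c - c.mean()))

# synthetic 30-min averaging period sampled at 10 Hz
rng = np.random.default_rng(1)
n = 10 * 60 * 30
w = rng.normal(0.0, 0.3, n)                    # vertical wind fluctuations (m/s)
c = 1.9 + 0.05 * w + rng.normal(0, 0.01, n)    # scalar correlated with updrafts
flux = eddy_covariance_flux(w, c)
print(flux)  # positive -> net upward transport (emission); negative -> uptake
```

The detection-limit problem discussed in the abstract appears here directly: when the true covariance is small relative to instrument noise, the estimate of mean(w'c') is dominated by its sampling scatter, which is why instrument precision and averaging length matter.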
Massive data compression for parameter-dependent covariance matrices
Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise
2017-12-01
We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets required to estimate the covariance matrix needed for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Monte Carlo Markov Chain analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
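The core of MOPED-style compression is a set of linear weight vectors, one per parameter, each proportional to C⁻¹ dμ/dθ, so the full data vector collapses to one number per parameter while retaining (to first order) the Fisher information. The single-parameter toy sketch below assumes a known covariance and a linear mean model; all names and numbers are illustrative, and the full algorithm additionally orthogonalizes the weight vectors when there are several parameters.

```python
import numpy as np

def moped_weights(cov, dmu_dtheta):
    """MOPED-style weight vectors b_i proportional to C^{-1} dmu/dtheta_i,
    normalized to unit length. Compressing x -> b_i . x leaves only the
    (much smaller) covariance of the compressed summaries to estimate."""
    b = np.linalg.solve(cov, dmu_dtheta)       # C^{-1} dmu/dtheta
    return b / np.linalg.norm(b, axis=0, keepdims=True)

# toy model: mean vector mu(theta) = theta * template over 100 data points
n = 100
template = np.sin(np.linspace(0.0, 3.0, n))
cov = 0.5 * np.eye(n)                          # known data covariance
b = moped_weights(cov, template[:, None])      # single parameter
x = 2.0 * template + np.random.default_rng(2).normal(0, np.sqrt(0.5), n)
y = b.T @ x                                    # 100 numbers -> 1 number
print(y.shape)
```

Estimating the covariance of `y` from simulations now requires of order "a few times the number of parameters" realizations, instead of more realizations than data points, which is the source of the orders-of-magnitude savings claimed in the abstract.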
MCNP and MATXS cross section libraries based on JENDL-3.3
International Nuclear Information System (INIS)
Kosako, Kazuaki; Konno, Chikara; Fukahori, Tokio; Shibata, Keiichi
2003-01-01
The continuous energy cross section library for the Monte Carlo transport code MCNP-4C, FSXLIB-J33, has been generated from the latest version of JENDL-3.3. The multigroup cross section library with the MATXS format, MATXS-J33, has been generated also from JENDL-3.3. Both libraries contain all nuclides in JENDL-3.3 and are processed at 300 K by the nuclear data processing system NJOY99. (author)
Phase-covariant quantum cloning of qudits
International Nuclear Information System (INIS)
Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin
2003-01-01
We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant modulus. A cloning unitary transformation is proposed. After optimizing the fidelity between input state and single qudit reduced density operator of output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation
Development of a new nuclear data library based on ROOT
Directory of Open Access Journals (Sweden)
Park Tae-Sun
2017-01-01
Full Text Available We develop a new C++ nuclear data library for Evaluated Nuclear Data File (ENDF) data, which we refer to as TNudy. The main motivation of the development is to provide systematic, powerful and intuitive interfaces and functionalities for browsing, visualizing and manipulating the detailed information embodied in the ENDF. To achieve this aim efficiently, the TNudy project is based on the ROOT system. TNudy is still in the stage of development, and its current status and future plans are presented.
Directory of Open Access Journals (Sweden)
Roya Pournaghi
2017-09-01
The illustrated maps showed that they might be helpful in gaining a better understanding of users' access to the library entrance and facilities in order to improve the library's utility and efficiency. This is a new idea that has begun to be used in libraries around the world. Since the study deals with network traffic and travel times, which are non-negative, Dijkstra's algorithm was used for time-based wayfinding. After creating the database, it was possible to determine the shortest path taking the least time.
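Dijkstra's algorithm applies to this wayfinding problem precisely because walking times are non-negative edge weights. A minimal sketch using a hypothetical library floor plan (node names and times are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel time from source to every reachable node.
    graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# hypothetical floor plan: walking times in seconds between locations
floor = {
    "entrance": [("catalog", 20), ("stacks", 50)],
    "catalog": [("stacks", 15), ("reading_room", 30)],
    "stacks": [("reading_room", 10)],
    "reading_room": [],
}
times = dijkstra(floor, "entrance")
print(times["reading_room"])  # 45.0: entrance -> catalog -> stacks -> reading_room
```

Non-negativity is essential: once a node is popped with distance d, no later path can improve on it, which is why the algorithm is correct for travel times but not for graphs with negative edges.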
A virtualized software based on the NVIDIA cuFFT library for image denoising: performance analysis
DEFF Research Database (Denmark)
Galletti, Ardelio; Marcellino, Livia; Montella, Raffaele
2017-01-01
Generic Virtualization Service (GVirtuS) is a new solution for enabling GPGPU on virtual machines or low-powered devices. This paper focuses on the performance analysis that can be obtained using GPGPU-virtualized software. Recently, GVirtuS has been extended in order to support CUDA...... ancillary libraries with good results. Here, our aim is to analyze the applicability of this powerful tool to a real problem, which uses the NVIDIA cuFFT library. As a case study we consider a simple denoising algorithm, implementing a virtualized GPU-parallel software based on the convolution theorem...
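The convolution theorem underlying the denoising case study says that spatial convolution equals pointwise multiplication of Fourier transforms. A minimal CPU sketch of Gaussian smoothing via FFT, where NumPy's FFT stands in for the virtualized cuFFT calls (image, kernel width, and noise level are illustrative):

```python
import numpy as np

def fft_gaussian_denoise(image, sigma=2.0):
    """Gaussian smoothing via the convolution theorem:
    conv(image, kernel) = IFFT( FFT(image) * FFT(kernel) )."""
    h, w = image.shape
    # periodic Gaussian kernel centered at index (0, 0), matching the
    # circular convolution implied by the discrete Fourier transform
    y = np.fft.fftfreq(h)[:, None] * h
    x = np.fft.fftfreq(w)[None, :] * w
    kernel = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    kernel /= kernel.sum()
    out = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel))
    return np.real(out)

rng = np.random.default_rng(3)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                     # simple test image
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
denoised = fft_gaussian_denoise(noisy)
# smoothing reduces the pixelwise error relative to the clean image
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

On a GPU the two forward transforms, the pointwise product, and the inverse transform map directly onto cuFFT plans plus a trivial elementwise kernel, which is what makes this algorithm a natural benchmark for a virtualized cuFFT.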
Covariant differential calculus on quantum spheres of odd dimension
International Nuclear Information System (INIS)
Welk, M.
1998-01-01
Covariant differential calculus on the quantum spheres S_q^(2N-1) is studied. Two classification results for covariant first order differential calculi are proved. As an important step towards a description of the noncommutative geometry of the quantum spheres, a framework of covariant differential calculus is established, including first and higher order calculi and a symmetry concept. (author)
Recent phylogenetic studies have used DNA as the target molecule for the development of environmental 16S rDNA clone libraries. As DNA may persist in the environment, DNA-based libraries cannot be used to identify metabolically active bacteria in water systems. In this study, a...
DEFF Research Database (Denmark)
Støvring, Henrik; Pottegård, Anton; Hallas, Jesper
2017-01-01
, patient sex and patient age as covariates. Results: The estimated prescription durations increased with redeemed amount and age. Women generally had longer prescription durations, which increased more with age than men. For 70-year-old women redeeming 300+ pills, we predicted a 95th percentile...... of the inter-arrival density of 225 (95%CI: 201, 249) days. For 50-year-old men redeeming 100 pills, the corresponding prediction was 97 (88, 106) days. Conclusions: The algorithm allows estimation of prescription durations based on the reverse WTD, which can depend upon observed covariates. Statistical...
Energy Technology Data Exchange (ETDEWEB)
Leitão, Sofia, E-mail: sofia.leitao@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Stadler, Alfred, E-mail: stadler@uevora.pt [Departamento de Física, Universidade de Évora, 7000-671 Évora (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Peña, M.T., E-mail: teresa.pena@tecnico.ulisboa.pt [Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Biernat, Elmar P., E-mail: elmar.biernat@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)
2017-01-10
The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to a few pseudoscalar meson states only, which are insensitive to spin–orbit and tensor forces and do not allow one to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.
MARKETING LIBRARY SERVICES IN ACADEMIC LIBRARIES: A ...
African Journals Online (AJOL)
MARKETING LIBRARY SERVICES IN ACADEMIC LIBRARIES: A TOOL FOR SURVIVAL IN THE ... This article discusses the concept of marketing library and information services as an ...
Directory of Open Access Journals (Sweden)
Sirous Alidousti
2012-02-01
Full Text Available The Ghadir Program has been developed to provide direct access to academic libraries' resources. This program has been implemented in the Ministry of Science, Research and Education by the Iranian Research Institute for Information Science and Technology, as the coordinating center, since 1999 in 240 libraries, after a period of pilot run. It then became necessary to assess the quality of this program and to make changes if required. Therefore, the results of assessing the quality of the services provided by the coordinating center, from the viewpoint of the participant libraries, are presented here. Servqual was applied as the basis for this quality assessment. The population of this research was the entire set of participants, consisting of 240 libraries affiliated with 66 universities and research centers. The questionnaire, as the research instrument, was sent to the libraries' managers, who were also asked to pass it on to the Ghadir Program agents to fill in. Among the questionnaires returned from 131 libraries, 178 were analyzable. The satisfaction of the participant libraries with the services received, and the gap between these services and their expectations of the coordinating center, were investigated. According to the findings, the satisfaction of the libraries with this center was above average (3.5 out of 5). Among the Servqual dimensions, empathy and responsiveness had the maximum gap, and reliability the minimum gap, between services received and expectations of the coordinating center.
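The Servqual gap score used in this study is simply perception minus expectation per service dimension; the most negative gap marks the largest shortfall. The sketch below uses invented scores, not the study's data, chosen only so that the resulting ordering matches the reported finding (largest gap for empathy and responsiveness, smallest for reliability):

```python
# Servqual gap analysis: gap = perception - expectation per dimension
# (all scores on a 1-5 scale; values are illustrative, not the study's data)
dimensions = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]
expectation = {"tangibles": 4.2, "reliability": 4.5, "responsiveness": 4.3,
               "assurance": 4.1, "empathy": 4.4}
perception = {"tangibles": 3.8, "reliability": 4.4, "responsiveness": 3.5,
              "assurance": 3.8, "empathy": 3.5}

gaps = {d: round(perception[d] - expectation[d], 2) for d in dimensions}
print(gaps)
# least negative gap = best-met expectations; most negative = largest shortfall
print(max(gaps, key=gaps.get), min(gaps, key=gaps.get))
```

A negative gap on every dimension, as is typical in Servqual studies, is compatible with above-average satisfaction scores: the gap measures distance from expectations, not absolute service quality.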