No-migration variance petition: Draft. Volume 4, Appendices DIF, GAS, GCR (Volume 1)
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-05-31
The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of these wastes have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is Volume 4 of the petition, which presents details about the transport characteristics across drum filter vents and polymer bags; gas generation reactions and rates during long-term WIPP operation; and geological characterization of the WIPP site.
76 FR 60511 - Amendment of Marine Safety Manual, Volume III
2011-09-29
... SECURITY Coast Guard Amendment of Marine Safety Manual, Volume III AGENCY: Coast Guard, DHS. ACTION: Notice... Offshore Units. The policy is currently found in Chapter 16 of the Marine Safety Manual, Volume III. The... Federal Register (73 FR 3316). Background and Purpose Chapter 16 of Volume III of the Marine Safety...
No-migration variance petition. Volume 3, Revision 1: Appendix B, Attachments A through D
Energy Technology Data Exchange (ETDEWEB)
1990-03-01
Volume III contains the following attachments: TRUPACT-II content codes (TRUCON); TRUPACT-II chemical list; chemical compatibility analysis for Rocky Flats Plant waste forms (Appendix 2.10.12 of TRUPACT-II safety analysis report); and chemical compatibility analyses for waste forms across all sites.
Culture of Schools. Final Report. Volume III.
American Anthropological Association, Washington, DC.
The third volume of this 4-volume report contains the last two speeches, on educational philosophy and the role of reason in society, from the Colloquium on the Culture of Schools held at the New School for Social Research (preceding speeches are in Vol. II, SP 003 901), reports on conferences on the culture of schools held in Pittsburgh and…
CAIXA: a catalogue of AGN in the XMM-Newton archive III. Excess Variance Analysis
Ponti, Gabriele; Bianchi, Stefano; Guainazzi, Matteo; Matt, Giorgio; Uttley, Phil; Bonilla, Fonseca; Nuria,
2011-01-01
We report on the results of the first systematic XMM "excess variance" study of all radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM for more than 10 ks in pointed observations, the largest sample used so far to study AGN X-ray variability on time scales of less than a day. We compute the excess variance for all AGN on different time-scales (10, 20, 40 and 80 ks) and in different energy bands (0.3-0.7, 0.7-2 and 2-10 keV). We observe a highly significant and tight (~0.7 dex) correlation between excess variance and black-hole mass MBH. The subsample of reverberation-mapped AGN shows an even smaller scatter (~0.45 dex), comparable to the one induced by the MBH uncertainties. This implies that X-ray variability can be used as an accurate tool to measure MBH, and that this method is more accurate than the ones based on single-epoch optical spectra. The excess variance vs. accretion rate dependence is weaker than expected based on the PSD break frequency scaling, suggesting that both...
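The normalized excess variance described above — the sample variance of a light curve minus the mean measurement-error variance, normalized by the squared mean flux — can be sketched as follows. This is a minimal illustration of the standard definition on simulated data, not the paper's actual pipeline or sample:

```python
import numpy as np

def excess_variance(flux, flux_err):
    """Normalized excess variance: sample variance of the light curve
    minus the mean square measurement error, divided by the squared
    mean flux (standard definition; a sketch, not the paper's code)."""
    flux = np.asarray(flux, dtype=float)
    mean = np.mean(flux)
    s2 = np.var(flux, ddof=1)                   # sample variance
    err2 = np.mean(np.asarray(flux_err) ** 2)   # mean square measurement error
    return (s2 - err2) / mean**2

# Toy light curve: intrinsic rms ~1 on a mean flux of 10, with
# per-point measurement errors of 0.3 (all values are made up).
rng = np.random.default_rng(0)
flux = 10.0 + rng.normal(0.0, 1.0, 500)
flux_err = np.full(500, 0.3)
sigma_nxs = excess_variance(flux, flux_err)
# expected near (1.0**2 - 0.3**2) / 10**2 ≈ 0.009
```

Subtracting the measurement-error term is what isolates intrinsic variability from statistical noise, which is why the quantity can go slightly negative for non-variable sources.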
Ways to Environmental Education, Volume III.
Allen, Rodney F., Ed.; And Others
Ten environmental education booklets presented in this document are the third volume of the environmental series developed by community groups around the Tallahassee Junior Museum and its Pioneer Farm. The first three booklets present an overview of the museum and of the various education programs and activities offered for students at the museum…
DART II documentation. Volume III. Appendices
Energy Technology Data Exchange (ETDEWEB)
1979-10-01
The DART II is a remote, interactive, microprocessor-based data acquisition system suitable for use with air monitors. This volume of the DART II documentation contains the following appendices: adjustment and calibration procedures; mother board signature list; schematic diagrams; device specification sheets; ROM program listing; 6800 microprocessor instruction list, octal listing; and cable lists. (RWR)
Statistics of Dark Matter Substructure: III. Halo-to-Halo Variance
Jiang, Fangzhou
2016-01-01
We present a study of unprecedented statistical power regarding the halo-to-halo variance of dark matter substructure. Using a combination of N-body simulations and a semi-analytical model, we investigate the variance in subhalo mass fractions and subhalo occupation numbers, with an emphasis on how these statistics scale with halo formation time. We demonstrate that the subhalo mass fraction, f_sub, is mainly a function of halo formation time, with earlier forming haloes having less substructure. At fixed formation redshift, the average f_sub is virtually independent of halo mass, and the mass dependence of f_sub is therefore mainly a manifestation of more massive haloes assembling later. We compare observational constraints on f_sub from gravitational lensing to our model predictions and simulation results. Although the inferred f_sub are substantially higher than the median LCDM predictions, they fall within the 95th percentile due to halo-to-halo variance. We show that while the halo occupation distributio...
DART II documentation. Volume III. Appendices
Energy Technology Data Exchange (ETDEWEB)
1979-05-23
The DART II is a data acquisition system that can be used with air pollution monitoring equipment. This volume contains appendices that deal with the following topics: adjustment and calibration procedures (power supply adjustment procedure, ADC calibration procedure, analog multiplexer calibration procedure); mother board signature list; schematic diagrams; device specification sheets (microprocessor, asynchronous receiver/transmitter, analog-to-digital converter, arithmetic processing unit, 5-volt power supply, ±15-volt power supply, 24-volt power supply, floppy disk formatter/controller, random access static memory); ROM program listing; 6800 microprocessor instruction set, octal listing; and cable lists. (RR)
Free radicals in biology. Volume III
Energy Technology Data Exchange (ETDEWEB)
Pryor, W.A. (ed.)
1977-01-01
This volume covers topics ranging from radiation chemistry to biochemistry, biology, and medicine. This volume attempts to bridge the gap between chemical investigations and the medical applications and implications of free radical reactions. Chapter 1 provides a general introduction to the technique of radiation chemistry, the thermodynamic and kinetic factors that need to be considered, the use of pulse radiolysis and flow techniques, and the application of these methods to free radicals of biological interest. Chapter 3 discusses the mechanisms of carbon tetrachloride toxicity. Chapter 4 reviews the morphological, histochemical, biochemical, and chemical nature of lipofuscin pigments. This chapter brings together the evidence that lipofuscin pigments arise from free radical pathology and that the formation of these pigments proves the presence of lipid peroxidation in vivo. Chapter 5 reviews the evidence for production of free (i.e., scavengeable) radicals from the reactions of selected enzymes with their substrates. Chapter 6 discusses one of the systems in which free radical damage is clearly important in vivo, for both man and animal: the damage caused to skin by sunlight. The evidence that free radical reactions can contribute to carcinogenesis dates from the earliest observations that ionizing radiation often produces higher incidences of tumors. A current working hypothesis is that chemical toxins cause damage to DNA and that the repair of this damage may incorporate viral genetic information into the host cell's chromosomes, producing cell transformation and cancer. The mechanism whereby chemical carcinogens become bound to DNA to produce point defects is discussed in Chapter 7.
Breidenbach, Johannes; McRoberts, Ronald E; Astrup, Rasmus
2016-02-01
Due to the availability of good and reasonably priced auxiliary data, the use of model-based regression-synthetic estimators for small area estimation is popular in operational settings. Examples are forest management inventories, where a linking model is used in combination with airborne laser scanning data to estimate stand-level forest parameters where no or too few observations are collected within the stand. This paper focuses on different approaches to estimating the variances of those estimates. We compared a variance estimator based on the estimation of superpopulation parameters with variance estimators based on predictions of finite population values; one of the latter considered the spatial autocorrelation of the residuals, whereas the other did not. The estimators were applied using timber volume at stand level as the variable of interest and photogrammetric image matching data as auxiliary information. Norwegian National Forest Inventory (NFI) data were used for model calibration, and independent data clustered within stands were used for validation. The empirical coverage proportion (ECP) of confidence intervals (CIs) of the variance estimators based on predictions of finite population values was considerably higher than the ECP of the CI of the variance estimator based on the estimation of superpopulation parameters. The ECP further increased when the spatial autocorrelation of the residuals was considered. The study also explores the link between confidence intervals based on variance estimates and the well-known confidence and prediction intervals of regression models.
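The empirical coverage proportion used to compare the variance estimators is simply the fraction of validation units whose true value falls inside the nominal confidence interval. A sketch with hypothetical stand-level volumes (none of the numbers come from the NFI data):

```python
import numpy as np

def empirical_coverage(true_vals, estimates, std_errs, z=1.96):
    """Fraction of units whose true value lies inside the nominal 95%
    CI, estimate ± z * SE (the ECP concept, on simulated inputs)."""
    true_vals, estimates, std_errs = map(np.asarray, (true_vals, estimates, std_errs))
    lower = estimates - z * std_errs
    upper = estimates + z * std_errs
    return np.mean((true_vals >= lower) & (true_vals <= upper))

rng = np.random.default_rng(1)
truth = rng.normal(250.0, 40.0, 1000)        # hypothetical stand volumes (m^3/ha)
est = truth + rng.normal(0.0, 15.0, 1000)    # estimator whose true SE is 15

ecp_ok = empirical_coverage(truth, est, np.full(1000, 15.0))  # SE honest -> ECP near 0.95
ecp_low = empirical_coverage(truth, est, np.full(1000, 7.0))  # SE understated -> ECP well below 0.95
```

An understated variance estimator yields an ECP far below the nominal level, which is the diagnostic the paper uses to rank the superpopulation-based and prediction-based estimators.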
Technology transfer package on seismic base isolation - Volume III
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-02-14
This Technology Transfer Package provides some detailed information for the U.S. Department of Energy (DOE) and its contractors about seismic base isolation. Intended users of this three-volume package are DOE Design and Safety Engineers as well as DOE Facility Managers who are responsible for reducing the effects of natural phenomena hazards (NPH), specifically earthquakes, on their facilities. The package was developed as part of DOE's efforts to study and implement techniques for protecting lives and property from the effects of natural phenomena and to support the International Decade for Natural Disaster Reduction. Volume III contains supporting materials not included in Volumes I and II.
Variance in brain volume with advancing age: implications for defining the limits of normality.
Directory of Open Access Journals (Sweden)
David Alexander Dickie
Statistical models of normal ageing brain tissue volumes may support earlier diagnosis of increasingly common, yet still fatal, neurodegenerative diseases. For example, the statistically defined distribution of normal ageing brain tissue volumes may be used as a reference to assess patient volumes. To date, such models were often derived from mean values which were assumed to represent the distributions and boundaries, i.e. percentile ranks, of brain tissue volume. Since it was previously unknown, the objective of the present study was to determine whether this assumption was robust, i.e. whether regression models derived from mean values accurately represented the distributions and boundaries of brain tissue volume at older ages. We acquired T1-weighted magnetic resonance (MR) brain images of 227 normal and 219 Alzheimer's disease (AD) subjects (aged 55-89 years) from publicly available databanks. Using nonlinear regression within both samples, we compared mean and percentile rank estimates of whole brain tissue volume by age. In both the normal and AD samples, mean regression estimates of brain tissue volume often did not accurately represent percentile rank estimates (errors = -74% to 75%). In the normal sample, mean estimates generally underestimated differences in brain volume at percentile ranks below the mean. Conversely, in the AD sample, mean estimates generally underestimated differences in brain volume at percentile ranks above the mean. Differences between ages at the 5th percentile rank of normal subjects were ~39% greater than mean differences in the AD subjects. While more data are required to make true population inferences, our results indicate that mean regression estimates may not accurately represent the distributions of ageing brain tissue volumes. This suggests that percentile rank estimates will be required to robustly define the limits of brain tissue volume in normal ageing and neurodegenerative disease.
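The paper's core point — that a mean trend misstates the change at outer percentile ranks whenever the spread of volumes varies with age — can be illustrated numerically. All numbers below are simulated to mimic that pattern; they are not the study's data:

```python
import numpy as np

# Hypothetical ageing sample: volume declines with age, and the
# between-subject spread also grows with age.
rng = np.random.default_rng(2)
age = rng.uniform(55, 89, 4000)
vol = 1200 - 4.0 * (age - 55) + rng.normal(0.0, 20 + 1.5 * (age - 55))

def change(ages, vols, lo_age, hi_age, stat):
    """Difference of a summary statistic between two narrow age bands."""
    band = lambda a: (ages > a - 2) & (ages < a + 2)
    return stat(vols[band(hi_age)]) - stat(vols[band(lo_age)])

mean_change = change(age, vol, 57, 87, np.mean)
p5_change = change(age, vol, 57, 87, lambda v: np.percentile(v, 5))
# The decline at the 5th percentile rank is steeper than the mean decline,
# so a mean-based model underestimates change at low percentile ranks.
```

When variance is constant across age the two trends coincide; it is the age-dependent spread that makes percentile-rank models necessary for defining limits of normality.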
Stereological estimation of the mean and variance of nuclear volume from vertical sections
DEFF Research Database (Denmark)
Sørensen, Flemming Brandt
1991-01-01
The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vv, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform...
Breckinridge Project, initial effort. Report III, Volume 2. Specifications
Energy Technology Data Exchange (ETDEWEB)
None
1982-01-01
Report III, Volume 2 contains those specifications numbered K through Y, as follows: Specifications for Compressors (K); Specifications for Piping (L); Specifications for Structures (M); Specifications for Insulation (N); Specifications for Electrical (P); Specifications for Concrete (Q); Specifications for Civil (S); Specifications for Welding (W); Specifications for Painting (X); and Specifications for Special (Y). The standard specifications of Bechtel Petroleum Incorporated have been amended as necessary to reflect the specific requirements of the Breckinridge Project and the more stringent specifications of Ashland Synthetic Fuels, Inc. These standard specifications are available for the Initial Effort (Phase Zero) work performed by all contractors and subcontractors.
No-migration variance petition. Appendix B, Attachments E--Q: Volume 4, Revision 1
Energy Technology Data Exchange (ETDEWEB)
1990-03-01
Volume IV contains the following attachments: TRU mixed waste characterization database; hazardous constituents of Rocky Flats transuranic waste; summary of waste components in the TRU waste sampling program at INEL; total volatile organic compound (VOC) analyses at Rocky Flats Plant; total metals analyses from Rocky Flats Plant; results of toxicity characteristic leaching procedure (TCLP) analyses; results of extraction procedure (EP) toxicity data analyses; summary of headspace gas analysis in the Rocky Flats Plant (RFP) sampling program, FY 1988; waste drum gas generation sampling program at Rocky Flats Plant during FY 1988; TRU waste sampling program, volume one; TRU waste sampling program, volume two; summary of headspace gas analyses in the TRU waste sampling program; and summary of volatile organic compound (VOC) analyses in the TRU waste sampling program.
Minerals Yearbook, volume III, Area Reports—International
2017-01-01
The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows:Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included.Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals.Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section.The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.
No-migration variance petition. Appendices C--J: Volume 5, Revision 1
Energy Technology Data Exchange (ETDEWEB)
1990-03-01
Volume V contains the appendices for: closure and post-closure plans; RCRA ground water monitoring waiver; Waste Isolation Division Quality Program Manual; water quality sampling plan; WIPP Environmental Procedures Manual; sample handling and laboratory procedures; data analysis; and Annual Site Environmental Monitoring Report for the Waste Isolation Pilot Plant.
Waste Isolation Pilot Plant No-Migration Variance Petition. Revision 1, Volume 1
Energy Technology Data Exchange (ETDEWEB)
Hunt, Arlen
1990-03-01
The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA §3004(d) and 40 CFR §268.6, that to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics, including: waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990 when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0; Facility Description, 2.0; Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0.
No-migration variance petition. Appendices A--B: Volume 2, Revision 1
Energy Technology Data Exchange (ETDEWEB)
1990-03-01
Volume II contains Appendix A, the emergency plan, and Appendix B, the waste analysis plan. The Waste Isolation Pilot Plant (WIPP) Emergency Plan and Procedures (WP 12-9, Rev. 5, 1989) provides an organized plan of action for dealing with emergencies at the WIPP. A contingency plan is included which is in compliance with 40 CFR Part 265, Subpart D. The waste analysis plan provides a description of the chemical and physical characteristics of the wastes to be emplaced in the WIPP underground facility. A detailed discussion of the WIPP Waste Acceptance Criteria and the rationale for its established limits is also included.
Energy Technology Data Exchange (ETDEWEB)
1979-08-01
An Environmental Report on the Memphis Light, Gas and Water Division Industrial Fuel Demonstration Plant was prepared for submission to the US Department of Energy under Contract ET-77-C-01-2582. This document is Volume III of a three-volume Environmental Report. Volume I consists of the Summary, Introduction and the Description of the Proposed Action. Volume II consists of the Description of the Existing Environment. Volume III contains the Environmental Impacts of the Proposed Action, Mitigating Measures and Alternatives to the Proposed Action.
Braibanti, A; Bruschi, C; Fisicaro, E; Pasquali, M
1986-06-01
Homogeneous sets of data from strong acid-strong base potentiometric titrations in aqueous solution at various constant ionic strengths have been analysed by statistical criteria. The aim is to see whether the error distribution matches that for the equilibrium constants determined by competitive potentiometric methods using the glass electrode. The titration curve can be defined when the estimated equivalence volume V(EM), with standard deviation (s.d.) sigma(V(EM)), the standard potential E(0), with s.d. sigma(E(0)), and the operational ionic product of water K*(w) (or E*(w) in mV), with s.d. sigma(K*(w)) [or sigma(E*(w))], are known. A special computer program, BEATRIX, has been written which optimizes the values of V(EM), E(0) and K*(w) by linearization of the titration curve as a Gran plot. Analysis of variance applied to a set of 11 titrations in 1.0M sodium chloride medium at 298 K has demonstrated that the values of V(EM) belong to a normal population of points corresponding to individual potential/volume data pairs (E(i); v(i)) of any titration, whereas the values of pK*(w) (or of E*(w)) belong to a normal population with members corresponding to individual titrations, which is also the case for the equilibrium constants. The intertitration variation is attributable to the electrochemical component of the system and appears as signal noise distributed over the titrations. The correction for junction potentials, introduced in a further stage of the program by optimization in a Nernst equation, increases the noise, i.e., sigma(pK*(w)). This correction should therefore be avoided whenever it causes an increase of sigma(pK*(w)). The influence of the ionic medium has been examined by processing data from acid-base titrations in 0.1M potassium chloride and 0.5M potassium nitrate media. The titrations in potassium chloride medium showed the same behaviour as those in sodium chloride medium, but with an s.d. for pK*(w) that was smaller and close to the…
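The Gran-plot linearization mentioned above can be sketched in a few lines: for a strong acid titrated with strong base before the equivalence point, the function G(v) = (V0 + v)·[H+] is linear in titrant volume v and crosses zero at the equivalence volume V(EM). The concentrations and volumes below are invented for illustration; this is not the BEATRIX program:

```python
import numpy as np

# Simulated strong acid-strong base titration (hypothetical values).
V0, Ca, Cb = 50.0, 0.010, 0.100      # initial volume (mL), acid/base conc. (M)
Ve = V0 * Ca / Cb                     # true equivalence volume = 5.0 mL
v = np.linspace(0.5, 4.0, 15)         # titrant additions before equivalence

# [H+] from the mole balance: remaining acid / total volume
H = Cb * (Ve - v) / (V0 + v)
pH = -np.log10(H)                     # what the electrode would report (ideal)

# Gran function: linear in v, x-intercept at Ve
G = (V0 + v) * 10.0 ** (-pH)
slope, intercept = np.polyfit(v, G, 1)
V_em = -intercept / slope             # recovered equivalence volume
```

On real data each (E(i); v(i)) pair carries noise, and the scatter of the fitted x-intercepts across replicate titrations is exactly the sigma(V(EM)) the analysis-of-variance study examines.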
Industrial Maintenance, Volume III. Post Secondary Curriculum Guide.
Butler, Raymond H.; And Others
This volume is the fourth of four volumes that comprise a curriculum guide for a postsecondary industrial maintenance program. It contains three sections and appendixes. Section 4 provides suggested methods of structuring the curriculum. Suggested ways of recording and documenting student progress are presented in section 5. Section 6 contains…
Energy Technology Data Exchange (ETDEWEB)
1981-10-29
This volume contains a description of the software comprising the National Utility Financial Statement Model (NUFS). This is the third of three volumes describing NUFS provided by ICF Incorporated under contract DEAC-01-79EI-10579. The three volumes are entitled: model overview and description, user's guide, and software guide.
Higher Education: Handbook of Theory and Research. Volumes III [and] IV.
Smart, John C., Ed.
Two volumes of a handbook on theory and research in higher education are presented. The 11 papers included in Volume III are as follows: "Qualitative Research Methods in Higher Education" (R. Crowson); "Bricks and Mortar: Architecture and the Study of Higher Education" (J. Thelin and J. Yankovich); "Enrollment Demand Models and Their Policy Uses…
An Evaluation of the Nutrition Services for the Elderly. Volume III. Descriptive Report.
Kirschner Associates, Inc., Albuquerque, NM.
This document is part of a five-volume nationwide study of Nutrition Services operations and elderly citizens participating in congregate dining and home delivery services authorized by Title III-C of the Older Americans' Act. A descriptive report is contained in this volume, which presents non-selective and preliminary analysis of the data base…
Workpapers in English as a Second Language, [Volume III].
Bracy, Maryruth, Ed.
This volume contains the 1969 working papers on subjects related to teaching English as a second language (TESL) and abstracts of Masters Theses completed by students studying TESL. Several articles discuss teaching and learning a second language and practical considerations in second language learning such as reading and writing skills, the use…
Council on Anthropology and Education Newsletter. Volume III, Number 1.
Singleton, John Ed.
General information on the format, materials included, broad concerns, objectives, and availability of the newsletter is described in Volume I, ED 048 049. This issue focuses on ethnology, offering two papers presented at American Anthropological Association symposiums. The lead paper presents a psycho-cultural developmental approach to the…
Albanian: Basic Course. Volume III, Lessons 27-36.
Defense Language Inst., Monterey, CA.
This third of ten volumes of audiolingual classroom instruction in Albanian for adult students treats Albanian grammar, syntax, and usage in a series of exercises consisting of grammar perception drills, grammar analysis, translation exercises, readings, question-and-answer exercises, and dialogues illustrating specific grammatical features. A…
An Independent Scientific Assessment of Well Stimulation in California Volume III
Energy Technology Data Exchange (ETDEWEB)
Long, Jane C.S. [California Council on Science and Technology, Sacramento, CA (United States); Feinstein, Laura C. [California Council on Science and Technology, Sacramento, CA (United States); Birkholzer, Jens [California Council on Science and Technology, Sacramento, CA (United States); Foxall, William [California Council on Science and Technology, Sacramento, CA (United States); Houseworth, James [California Council on Science and Technology, Sacramento, CA (United States); Jordan, Preston [California Council on Science and Technology, Sacramento, CA (United States); Lindsey, Nathaniel [California Council on Science and Technology, Sacramento, CA (United States); Maddalena, Randy [California Council on Science and Technology, Sacramento, CA (United States); McKone, Thomas [California Council on Science and Technology, Sacramento, CA (United States); Stringfellow, William [California Council on Science and Technology, Sacramento, CA (United States); Ulrich, Craig [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Heberger, Matthew [Pacific Inst., Oakland, CA (United States); Shonkoff, Seth [PSE Healthy Energy, Berkeley, CA (United States); Brandt, Adam [Stanford Univ., CA (United States); Ferrar, Kyle [The FracTracker Alliance, Oakland, CA (United States); Gautier, Donald [DonGautier LLC., Palo Alto, CA (United States); Phillips, Scott [California State Univ. Stanislaus, Turlock, CA (United States); Greenfield, Ben [Univ. of California, Berkeley, CA (United States); Jerrett, Michael L.B. [Univ. of California, Los Angeles, CA (United States)
2015-07-01
This study is issued in three volumes. Volume I, issued in January 2015, describes how well stimulation technologies work, how and where operators deploy these technologies for oil and gas production in California, and where they might enable production in the future. Volume II, issued in July 2015, discusses how well stimulation could affect water, atmosphere, seismic activity, wildlife and vegetation, and human health. Volume II reviews available data, and identifies knowledge gaps and alternative practices that could avoid or mitigate these possible impacts. Volume III, this volume, presents case studies that assess environmental issues and qualitative risks for specific geographic regions. The Summary Report summarizes key findings, conclusions and recommendations of all three volumes.
Weisburd, Melvin I.
The Field Operations and Enforcement Manual for Air Pollution Control, Volume III, explains in detail the following: inspection procedures for specific sources, kraft pulp mills, animal rendering, steel mill furnaces, coking operations, petroleum refineries, chemical plants, non-ferrous smelting and refining, foundries, cement plants, aluminum…
Technical Reports (Part I). End of Project Report, 1968-1971, Volume III.
Western Nevada Regional Education Center, Lovelock.
The pamphlets included in this volume are technical reports prepared as outgrowths of the Student Information Systems of the Western Nevada Regional Education Center (WN-REC) funded by a Title III (Elementary and Secondary Education Act) grant. These reports describe methods of interpreting the printouts from the Student Information System;…
Condylar volume and condylar area in class I, class II and class III young adult subjects
Directory of Open Access Journals (Sweden)
Saccucci Matteo
2012-12-01
Aim: The aim of this study was to compare the volume and the shape of the mandibular condyles in a Caucasian young adult population with different skeletal patterns. Material and methods: 200 Caucasian patients (15-30 years old, 95 males and 105 females) were classified in three groups on the basis of the ANB angle: skeletal class I (65 patients), skeletal class II (70 patients) and skeletal class III (65 patients). Left and right TMJs of each subject were evaluated independently with CBCT (Iluma). TMJ evaluation included: condylar volume; condylar area; morphological index (MI). Condylar volumes were calculated by using the Mimics software. The condylar volume, the area and the morphological index (MI) were compared among the three groups by using non-parametric tests. Results: The Kruskal-Wallis test and the Mann-Whitney test revealed that no significant difference was observed in the whole sample between the right and the left condylar volume; subjects in skeletal class III showed a significantly higher condylar volume than class I and class II subjects (… mm³ in males and 663.5 ± 81.3 mm³ in females) and a larger condylar area (… mm² in males and 389.76 ± 61.15 mm² in females). Conclusion: Skeletal class appeared to be associated with the mandibular condylar volume and the mandibular condylar area in the Caucasian orthodontic population.
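The nonparametric comparison the study describes — an overall Kruskal-Wallis test across the three skeletal classes followed by pairwise Mann-Whitney tests — can be sketched as below. The condylar volumes are simulated to mimic the reported pattern (class III larger); they are not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical condylar volumes (mm^3) for the three group sizes used
# in the study; means and spreads are invented for illustration.
rng = np.random.default_rng(3)
class_i = rng.normal(620.0, 80.0, 65)
class_ii = rng.normal(600.0, 80.0, 70)
class_iii = rng.normal(700.0, 80.0, 65)

# Overall test: do the three distributions share a common location?
h_stat, p_overall = stats.kruskal(class_i, class_ii, class_iii)

# Pairwise follow-up: is class III stochastically larger than class I?
u_stat, p_iii_vs_i = stats.mannwhitneyu(class_iii, class_i, alternative="greater")
```

Rank-based tests are the natural choice here because condylar volumes need not be normally distributed within each skeletal class, and group sizes are modest.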
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-10
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
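A minimal sketch of the multiplicative error setting: observations y_i = a_i·x·(1 + eps_i), so the error standard deviation scales with the true value. Below, an ordinary LS estimate is contrasted with a weighted LS estimate whose weights reflect the multiplicative variance structure, together with a simple variance-of-unit-weight analogue. This is an illustration of the model class, not the paper's three adjustment methods or five estimators:

```python
import numpy as np

rng = np.random.default_rng(4)
a = np.linspace(1.0, 10.0, 200)      # design values (hypothetical)
x_true = 5.0
# 5% multiplicative noise: Var(y_i) is proportional to (a_i * x)^2
y = a * x_true * (1.0 + rng.normal(0.0, 0.05, 200))

# Ordinary LS: minimize sum (y - a*x)^2, ignoring the variance structure
x_ols = np.sum(a * y) / np.sum(a * a)

# Weighted LS: weights proportional to 1 / Var(y_i) ∝ 1 / a_i^2
w = 1.0 / a**2
x_wls = np.sum(w * a * y) / np.sum(w * a * a)

# Estimate of the multiplicative variance factor from weighted residuals
resid = y - a * x_wls
sigma2_hat = np.sum(w * resid**2) / (len(y) - 1)  # near (0.05 * x_true)^2
```

With weights 1/a², the weighted estimate reduces to the mean of y_i/a_i, which is the natural estimator when relative (rather than absolute) error is constant — the key practical difference from additive-error adjustment.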
Occupational Survey Report. Volume III. Programming Specialty, AFS 511X1.
1980-05-01
AFPT 90-511-413, Volume III of III, May 1980; USAF Occupational Analysis Program; approved for public release. The remainder of the scanned abstract is largely illegible. Legible fragments list tasks performed by assistant programming NCOICs (GRP308), with percent of members performing: process leave or liberty (79); in-process incoming personnel (79); modify and update existing computer programs (75); review program specifications (75); prepare detailed flowcharts (…).
Energy Technology Data Exchange (ETDEWEB)
Fulton, J.C.
1994-10-01
Volume I of the Hanford Spent Nuclear Fuel Project - Recommended Path Forward presented an aggressive series of projects to construct and operate systems and facilities to safely retrieve, package, transport, process, and store K Basins fuel and sludge. Volume II provided a comparative evaluation of four alternatives for the Path Forward and an evaluation of the Recommended Path Forward. Although Volume II contained extensive appendices, six supporting documents have been compiled in Volume III to provide additional background for Volume II.
Condylar volume and condylar area in class I, class II and class III young adult subjects
Saccucci Matteo; D’Attilio Michele; Rodolfino Daria; Festa Felice; Polimeni Antonella; Tecco Simona
2012-01-01
Abstract Aim The aim of this study was to compare the volume and shape of mandibular condyles in a Caucasian young adult population with different skeletal patterns. Material and methods 200 Caucasian patients (15–30 years old, 95 male and 105 females) were classified into three groups on the basis of ANB angle: skeletal class I (65 patients), skeletal class II (70 patients) and skeletal class III (65 patients). Left and right TMJs of each subject were evaluated independently with CBCT (Iluma). ...
Energy Technology Data Exchange (ETDEWEB)
None
1979-09-01
The appendices presented in this volume support and supplement Volume I of the Energy Extension Service Pilot Program Evaluation Report: The First Year. The appendices contain back-up data and detailed information on energy savings estimation and other analytic procedures. This volume also describes the data sources used for the evaluation. Appendix I presents the Btu estimation procedures used to calculate state-by-state energy savings. Appendix II contains details of the data sources used for the evaluation. Appendix III presents program activity data, budget, and cost per client analyses. Appendix IV, the Multivariate Analysis of EES Survey Data, provides the basis for the Integrating Statistical Analyses. Appendix V describes the rationale and exclusion rules for outlying data points. The final appendix presents program-by-program fuel costs and self-reported savings and investment.
World Energy Data System (WENDS). Volume III. Country data, LY-PO
Energy Technology Data Exchange (ETDEWEB)
None
1979-06-01
The World Energy Data System contains organized data on those countries and international organizations that may have critical impact on the world energy scene. Included in this volume, Vol. III, are Libya, Luxembourg, Malaysia, Mexico, Netherlands, New Zealand, Niger, Nigeria, Norway, Pakistan, Peru, Philippines, Poland, and Portugal. The following topics are covered for most of the countries: economic, demographic, and educational profiles; energy policy; indigenous energy resources and uses; forecasts, demand, exports, imports of energy supplies; environmental considerations of energy supplies; power production facilities; energy industries; commercial applications of energy; research and development activities of energy; and international activities.
Minerals Yearbook, volume III, Area Reports—International—Latin America and Canada
Geological Survey, U.S.
2017-01-01
The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows:Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included.Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals.Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section.The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.
Minerals Yearbook, volume III, Area Reports—International—Europe and Central Eurasia
Geological Survey, U.S.
2017-01-01
The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows:Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included.Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals.Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section.The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.
Minerals Yearbook, volume III, Area Reports—International—Africa and the Middle East
Geological Survey, U.S.
2017-01-01
The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows:Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included.Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals.Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section.The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.
Proceedings of the symposium to review Volume III of the Annual Report to Congress
Energy Technology Data Exchange (ETDEWEB)
Alt, F.; Norland, D.
1979-01-01
This report is a transcript of the proceedings of a two-day Symposium, held in the Fall of 1979 at the University of Maryland in order to independently review the 1978 Energy Information Administration (EIA) Annual Report to Congress (ARC), Volume III. Participants included energy forecasting experts from the academic community and the private sector; other Federal, State, and local government energy experts; and Office of Applied Analysis, EIA, staff members. The Symposium and its transcript are a critique of the underlying 1978 ARC assumptions, methodologies, and energy system projections. Discussions cover the short-, mid-, and long-term periods, national and international forecasts, source and consuming sectors and projected economic impacts. 27 figures, 22 tables.
Planning manual for energy resource development on Indian lands. Volume III. Manpower and training
Energy Technology Data Exchange (ETDEWEB)
1978-03-01
This volume addresses ways to bridge the gap between existing tribal skill levels and the skill levels required for higher-paying jobs in energy resource development projects. It addresses opportunities for technical, skilled, and semiskilled employment as well as professional positions, because it is important to have tribal participation at all levels of an operation. Section II, ''Energy-Related Employment Opportunities,'' covers three areas: (1) identification of energy-resource occupations; (2) description of these occupations; and (3) identification of skill requirements by type of occupation. Section III, ''Description of Training Programs,'' also covers three areas: (a) concept of a training-program model; (b) description of various training methods; and (c) an assessment of the cost of training, utilizing different programs. Section IV concentrates on development of a training program for target occupations, skills, and populations. Again this section covers three areas: (i) overview of the development of a skills training program; (ii) identification of target occupations, skills, and populations; and (iii) energy careers for younger tribal members.
American Sociological Association, Washington, DC. Medical Sociology Council.
Volume III of a study of teaching behavioral sciences in medical school presents perspectives on medical behavioral science from the viewpoints of the several behavioral disciplines (anthropology, psychology, sociology, political science, economics, behavioral biology and medical education). In addition, there is a discussion of translating…
Energy Technology Data Exchange (ETDEWEB)
1981-10-29
This report develops and demonstrates the methodology for the National Utility Regulatory (NUREG) Model, developed under contract number DEAC-01-79EI-10579. It is accompanied by two supporting volumes. Volume II is a user's guide for operation of the NUREG software, including a description of the flow of software and data as well as the formats of all user data files. Volume III is a software description guide; it briefly describes, and gives a listing of, each program used in NUREG.
Energy Technology Data Exchange (ETDEWEB)
None
1996-10-01
Volume III of the documentation for the Phase I Data Analysis Task performed in support of the current Regional Flow Model, Transport Model, and Risk Assessment for the Nevada Test Site Underground Test Area Subproject contains the data covering groundwater recharge and discharge. Because of the size and complexity of the model area, a considerable quantity of data was collected and analyzed in support of the modeling efforts. The data analysis task was consequently broken into eight subtasks, and descriptions of each subtask's activities are contained in one of the eight volumes that comprise the Phase I Data Analysis Documentation.
Energy Technology Data Exchange (ETDEWEB)
Giramonti, A.J.; Lessard, R.D.; Merrick, D.; Hobson, M.J.
1981-09-01
A technical and economic assessment of fluidized bed combustion augmented compressed air energy storage systems is presented. The results of this assessment effort are presented in three volumes. Volume III - Preconceptual Design contains the system analysis that led to the identification of a preferred component configuration for a fluidized bed combustion augmented compressed air energy storage system, the results of the effort that transformed the preferred configuration into a preconceptual power plant design, and an introductory evaluation of the performance of the power plant system during part-load operation and while load following.
Downside Variance Risk Premium
Feunou, Bruno; Jahan-Parvar, Mohammad R.; Okou, Cédric
2015-01-01
We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
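The upside/downside split of realized variance underlying this decomposition can be sketched with a standard semivariance computation. This is the textbook realized-semivariance definition, not the authors' risk-premium estimator, and the return series is hypothetical.

```python
import numpy as np

def realized_semivariances(returns):
    """Split realized variance into upside and downside semivariances:
    the sums of squared positive and squared negative returns."""
    r = np.asarray(returns, dtype=float)
    down = float(np.sum(np.square(r[r < 0.0])))
    up = float(np.sum(np.square(r[r >= 0.0])))
    return up, down

# Hypothetical daily returns.
r = [0.01, -0.02, 0.015, -0.005, 0.03]
up, down = realized_semivariances(r)

# The two pieces always recombine into the realized variance.
assert np.isclose(up + down, np.sum(np.square(r)))
```

The downside piece isolates the variation investors typically dislike; the premia in the paper contrast risk-neutral and physical expectations of such quantities, with their difference serving as a skewness risk premium measure.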
Inside Out. Writings from the Prison Literacy Project. Volumes I-II.
Prison Literacy Project, Philadelphia, PA.
These two volumes contain writings designed for the new reader who is in prison. Written by both inmates and external volunteers, the material in these volumes includes poems, stories, and short essays that deal with subjects of interest to prison inmates. To help the new reader, easier-to-read pieces are presented first. Titles in volume I are as…
How To Set Up Your Own Small Business. Volumes I-II and Overhead Transparencies.
Fallek, Max
This two-volume textbook and collection of overhead transparency masters is intended for use in a course in setting up a small business. The following topics are covered in the first volume: getting off to a good start, doing market research, forecasting sales, financing a small business, understanding the different legal needs of different types…
AIR QUALITY CRITERIA FOR PARTICULATE MATTER, VOLUMES I-III, (EXTERNAL REVIEW DRAFT, 1995)
There is no abstract available for these documents. If further information is requested, please refer to the bibliographic citation and contact the Technical Information Staff at the number listed above.Air Quality Criteria for Particulate Matter, Volume I, Extern...
Field Surveys, IOC Valleys. Volume III, Part I. Cultural Resources Survey, Dry Lake Valley, Nevada.
1981-08-01
Dominant species include Artemisia nova, cliffrose (Cowania mexicana), and broom snakeweed (Gutierrezia sarothreae); other species include… The remainder of the scanned abstract (report E-TR-48-III-I) is largely illegible; a legible fragment states the survey method was used because it is considered intensive by the Bureau of Land Management.
ICPP calcined solids storage facility closure study. Volume III: Engineering design files
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-02-01
The following information was calculated to support cost estimates and radiation exposure calculations for closure activities at the Calcined Solids Storage Facility (CSSF). Within the estimate, volumes were calculated to determine the required amount of grout to be used during closure activities. The remaining calcine on the bin walls, supports, piping, and floor was also calculated to approximate the residual calcine volumes at different stages of the removal process. The estimates for remaining calcine and vault void volume are higher than would actually be experienced in the field, but are necessary for bounding purposes. The residual calcine estimate is higher than what is experienced in the field because it was assumed that the entire bin volume is full of calcine before removal activities commence. The vault void volumes are higher because the vault roof beam volumes were neglected. The estimates that follow should be considered rough order of magnitude, due to the time constraints dictated by the project's scope of work. Should more accurate numbers be required, a new analysis would be necessary.
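The bounding arithmetic described above (grout volume as the bin void remaining after assuming a full bin and a residual fraction of calcine) can be sketched as follows; the bin dimensions and residual fraction are hypothetical, not actual CSSF values.

```python
import math

# Hypothetical bin geometry (illustrative only, not actual CSSF values).
bin_diameter_ft = 12.0
bin_height_ft = 40.0
bin_volume_ft3 = math.pi * (bin_diameter_ft / 2.0) ** 2 * bin_height_ft

# Bounding assumption from the study: the bin is treated as completely
# full of calcine before removal; a residual fraction (hypothetical here)
# is assumed to remain on walls, supports, piping, and floor afterwards.
residual_fraction = 0.05
residual_calcine_ft3 = residual_fraction * bin_volume_ft3

# Grout required to fill the remaining void for closure.
grout_ft3 = bin_volume_ft3 - residual_calcine_ft3
```

Because both the full-bin assumption and the neglected roof beams only enlarge the computed voids, the resulting grout quantity bounds the field requirement from above.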
Compact high order finite volume method on unstructured grids III: Variational reconstruction
Wang, Qian; Ren, Yu-Xin; Pan, Jianhua; Li, Wanai
2017-05-01
This paper presents a variational reconstruction for the high order finite volume method in solving the two-dimensional Navier-Stokes equations on arbitrary unstructured grids. In the variational reconstruction, an interfacial jump integration is defined to measure the jumps of the reconstruction polynomial and its spatial derivatives on each cell interface. The system of linear equations to determine the reconstruction polynomials is derived by minimizing the total interfacial jump integration in the computational domain using the variational method. On each control volume, the derived equations are implicit relations between the coefficients of the reconstruction polynomials defined on a compact stencil involving only the current cell and its direct face-neighbors. The reconstruction and time integration coupled iteration method proposed in our previous paper is used to achieve high computational efficiency. A problem-independent shock detector and the WBAP limiter are used to suppress non-physical oscillations in the simulation of flow with discontinuities. The advantages of the finite volume method using the variational reconstruction over the compact least-squares finite volume method proposed in our previous papers are higher accuracy, higher computational efficiency, more flexible boundary treatment and non-singularity of the reconstruction matrix. A number of numerical test cases are solved to verify the accuracy, efficiency and shock-capturing capability of the finite volume method using the variational reconstruction.
Energy Technology Data Exchange (ETDEWEB)
None
1980-08-01
The Sixth International Conference on Fluidized Bed Combustion was held April 9-11, 1980, at the Atlanta Hilton, Atlanta, Georgia. It was sponsored by the US Department of Energy, the Electric Power Research Institute, the US Environmental Protection Agency, and the Tennessee Valley Authority. Forty-five papers from Vol. III of the proceedings have been entered individually into EDB and ERA. Two papers had been entered previously from other sources. (LTN)
Recent regulatory experience of low-Btu coal gasification. Volume III. Supporting case studies
Energy Technology Data Exchange (ETDEWEB)
Ackerman, E.; Hart, D.; Lethi, M.; Park, W.; Rifkin, S.
1980-02-01
The MITRE Corporation conducted a five-month study for the Office of Resource Applications in the Department of Energy on the regulatory requirements of low-Btu coal gasification. During this study, MITRE interviewed representatives of five current low-Btu coal gasification projects and regulatory agencies in five states. From these interviews, MITRE has sought the experience of current low-Btu coal gasification users in order to recommend actions to improve the regulatory process. This report is the third of three volumes. It contains the results of interviews conducted for each of the case studies. Volume 1 of the report contains the analysis of the case studies and recommendations to potential industrial users of low-Btu coal gasification. Volume 2 contains recommendations to regulatory agencies.
LoMauro, Antonella; Romei, Marianna; Priori, Rita; Laviola, Marianna; D'Angelo, Maria Grazia; Aliverti, Andrea
2014-06-15
Spinal muscular atrophy (SMA) is characterized by degeneration of motor neurons resulting in muscle weakness. For the mild type III form, a sub-classification into types IIIA and IIIB, based on age of motor impairment, was recently proposed. To investigate whether SMA IIIA (more severe) and IIIB differ also in terms of respiratory function, thoracoabdominal kinematics was measured during quiet breathing, inspiration preceding cough, and inspiratory capacity in 5 type IIIA and 9 type IIIB patients. Four patients with SMA II (more severe than type III) and 19 healthy controls were also studied. Rib cage motion was similar in SMA IIIB and controls. Conversely, in SMA IIIA and SMA II it was significantly reduced and sometimes paradoxical during quiet breathing in the supine position. Our results suggest that in SMA IIIA the intercostal muscles are weakened and the diaphragm is preserved, similarly to SMA II, while in SMA IIIB the action of all inspiratory muscles is maintained. Sub-classification of type III seems feasible also for respiratory function.
Recensione a "Collodi. Edizione Nazionale delle Opere di Carlo Lorenzini. Volume III"
Directory of Open Access Journals (Sweden)
Pina Paone
2013-06-01
Full Text Available This is a presentation of the third volume of the series Collodi, Edizione Nazionale delle Opere di Carlo Lorenzini (Giunti, Florence, 2012), with a preface by Mario Vargas Llosa and an introduction by Daniela Marcheschi. The volume contains the famous Le Avventure di Pinocchio, a synthesis of the Tuscan writer's artistic development and the most accomplished expression of his narrative skill and awareness. The review traces the features of the work, placing it in the broader, general context of Collodi's literary activity.
Secretarial Science. Curriculum Guides for Two-Year Postsecondary Programs. Volume III.
North Carolina State Dept. of Community Colleges, Raleigh.
The third of three volumes in a postsecondary secretarial science curriculum, this manual contains course syllabi for thirteen secretarial science technical courses. Course titles include Shorthand 1-3; Shorthand Dictation and Transcription, 1-3; Terminology and Vocabulary: Business, Legal, Medical; Typewriting, 1-5; and Word Processing. Each…
Energy Technology Data Exchange (ETDEWEB)
Love, C G
1976-08-23
These appendixes are referenced in Volume II of this report. They contain the detailed electrical distribution equipment requirements and input material requirements forecasts. Forecasts are given for three electric energy usage scenarios. Also included are data on worldwide reserves and demand for 30 raw materials required for the manufacture of electrical distribution equipment.
Kim, Min-Ah; Park, Yang-Ho
2014-01-01
The purpose of this study was to assess the pharyngeal airway volume change after bimaxillary surgery in patients with skeletal Class III malocclusion and evaluate the difference in postoperative pharyngeal airway space between upper premolar extraction cases and nonextraction cases. Cone-beam computed tomographic scans were obtained for 23 patients (13 in the extraction group and 10 in the nonextraction group) who were diagnosed with mandibular prognathism before surgery (T0) and then 2 months (T2) and 6 months after surgery (T3). Using InVivoDental 3-dimensional imaging software, volumetric changes in the pharyngeal airway space were assessed at T0, T2, and T3. The Wilcoxon signed-rank test was used to determine whether there were significant changes in pharyngeal airway volume between time points. The Mann-Whitney U test was used to determine whether there were significant differences in volumetric changes between the extraction and nonextraction groups. Volumes in all subsections of the pharyngeal airway were decreased (P < …) after bimaxillary surgery. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
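The between-group comparison in this abstract uses the Mann-Whitney U test, whose statistic is easy to compute directly (the p-value lookup is omitted here). The airway-volume changes below are illustrative numbers, not the study's data.

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y: the number
    of pairs (x_i, y_j) with x_i > y_j, counting ties as one half."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[None, :]
    return float(np.sum(x > y) + 0.5 * np.sum(x == y))

# Hypothetical postoperative decreases in airway volume (mm^3) for the
# two groups (illustrative only).
change_extraction = [3100, 2800, 3500, 2950, 3300]
change_nonextraction = [2500, 2600, 2400, 2700]

u = mann_whitney_u(change_extraction, change_nonextraction)
# Here every extraction-group decrease exceeds every nonextraction-group
# decrease, so u equals len(x) * len(y) = 20.
```

Values of U near `len(x) * len(y)` (or near 0) indicate that one group's values systematically dominate the other's, which is what the significance test then formalizes.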
Economic evaluation of the annual cycle energy system (ACES). Final report. Volume III, appendices
Energy Technology Data Exchange (ETDEWEB)
1980-06-01
This volume consists of seven appendices related to ACES, the first three of which are concerned with computer programs. The appendices are entitled: (A) ACESIM: Residential Program Listing; (B) Typical Inputs and Outputs of ACESIM; (C) CACESS: Commercial Building Program Listing; (D) Typical Weather-Year Selection Requirements; (E) Building Characteristics; (F) List of Major Variables Used in the Computer Programs; and (G) Bibliography. 79 references.
Freud on Holiday. Volume III. The Forgetting of a Foreign Name
Kivland, Sharon
2011-01-01
The third volume in the series Freud on Holiday describes a number of holiday possibilities, the problem of deciding where to go and when, the matters of cost and convenience, of appropriate companions and correct context. There are descriptions of train itineraries, of hotel rooms and restaurant menus, but the name of one restaurant resists recall for most of the book. There is a surprising connection with hysteria and another name is forgotten en route, accompanied by an embarrassing error ...
Energy Technology Data Exchange (ETDEWEB)
None
1980-01-01
The overall, long-term objective of the Solar Central Receiver Hybrid Power System is to identify, characterize, and ultimately demonstrate the viability and cost effectiveness of solar/fossil, steam Rankine cycle, hybrid power systems that: (1) consist of a combined solar central receiver energy source and a nonsolar energy source at a single, common site, (2) may operate in the base, intermediate, and peaking capacity modes, (3) produce the rated output independent of variations in solar insolation, (4) provide significant savings (50% or more) in fuel consumption, and (5) produce power at the minimum possible cost in mills/kWh. It is essential that these hybrid concepts be technically feasible and economically competitive with other systems in the near to mid-term time period (1985-1990) on a commercial scale. The program objective for Phase I is to identify and conceptually characterize solar/fossil steam Rankine cycle, commercial-scale, power plant systems that are economically viable and technically feasible. This volume contains appendices to the conceptual design and systems analysis studies given in Volume II, Books 1 and 2. (WHK)
Energy Technology Data Exchange (ETDEWEB)
Ormsby, L. S.; Sawyer, T. G.; Brown, Dr., M. L.; Daviet, II, L. L; Weber, E. R.; Brown, J. E.; Arlidge, J. W.; Novak, H. R.; Sanesi, Norman; Klaiman, H. C.; Spangenberg, Jr., D. T.; Groves, D. J.; Maddox, J. D.; Hayslip, R. M.; Ijams, G.; Lacy, R. G.; Montgomery, J.; Carito, J. A.; Ballance, J. W.; Bluemle, C. F.; Smith, D. N.; Wehrey, M. C.; Ladd, K. L.; Evans, Dr., S. K.; Guild, D. H.; Brodfeld, B.; Cleveland, J. A.; Hicks, K. L.; Noga, M. W.; Ross, A. M.
1979-12-01
The purpose of this project is to provide information to DOE which can be used to establish its plans for accelerated commercialization and market penetration of solar electric generating plants in the southwestern region of the United States. The area of interest includes Arizona, California, Colorado, Nevada, New Mexico, Utah, and sections of Oklahoma and Texas. The system integration study establishes the investment that utilities could afford to make in solar thermal, photovoltaic, and wind energy systems, and assesses the sensitivity of the break-even cost to critical variables including fuel escalation rates, fixed charge rates, load growth rates, cloud cover, number of sites, load shape, and energy storage. This information will be used as input to Volume IV, Institutional Studies, one objective of which will be to determine the incentives required to close the gap between the break-even investment for the utilities of the Southwest and the estimated cost of solar generation.
Solar Pilot Plant, Phase I. Preliminary design report. Volume III. Collector subsystem. CDRL item 2
Energy Technology Data Exchange (ETDEWEB)
None
1977-05-01
The Honeywell collector subsystem features a low-profile, multifaceted heliostat designed to provide high reflectivity and accurate angular and spatial positioning of the redirected solar energy under all conditions of wind load and mirror attitude within the design operational envelope. The heliostats are arranged in a circular field around a cavity receiver on a tower halfway south of the field center. A calibration array mounted on the receiver tower provides capability to measure individual heliostat beam location and energy periodically. This information and weather data from the collector field are transmitted to a computerized control subsystem that addresses the individual heliostat to correct pointing errors and determine when the mirrors need cleaning. This volume contains a detailed subsystem design description, a presentation of the design process, and the results of the SRE heliostat test program.
Energy Technology Data Exchange (ETDEWEB)
None
1977-12-01
The evaluation of the energy impacts of regulations and tariffs is structured around three sequential steps: identification of agencies and organizations that impact the commercial marine transportation industry; identification of existing or proposed regulations that were perceived to have a significant energy impact; and quantification of the energy impacts. Following the introductory chapter, Chapter II describes the regulatory structure of the commercial marine transportation industry and includes a description of the role of each organization and the legislative basis for their jurisdiction and an identification of major areas of regulation and those areas that have an energy impact. Chapters III through IX each address one of the 7 existing or proposed regulatory or legislative actions that have an energy impact. Energy impacts of the state of Washington's tanker regulations, of tanker segregated ballast requirements, of inland waterway user charges, of cargo pooling and service rationalization, of the availability of intermodal container transportation services, of capacity limitations at lock and dam 26 on the Mississippi River and the energy implications of the transportation alternatives available for the West Coast crude oil supplies are discussed. (MCW)
Wang, Huiyuan; Yang, Xiaohu; Zhang, Youcai; Shi, JingJing; Jing, Y P; Liu, Chengze; Li, Shijie; Kang, Xi; Gao, Yang
2016-01-01
A method we developed recently for the reconstruction of the initial density field in the nearby Universe is applied to the Sloan Digital Sky Survey Data Release 7. A high-resolution N-body constrained simulation (CS) of the reconstructed initial condition, with $3072^3$ particles evolved in a 500 Mpc/h box, is carried out and analyzed in terms of the statistical properties of the final density field and its relation with the distribution of SDSS galaxies. We find that the statistical properties of the cosmic web and the halo populations are accurately reproduced in the CS. The galaxy density field is strongly correlated with the CS density field, with a bias that depends on both galaxy luminosity and color. Our further investigations show that the CS provides robust quantities describing the environments within which the observed galaxies and galaxy systems reside. Cosmic variance is greatly reduced in the CS so that the statistical uncertainties can be controlled effectively even for samples of small volumes...
OTEC modular experiment cold water pipe concept evaluation. Volume III. Appendices
Energy Technology Data Exchange (ETDEWEB)
1979-04-01
The Cold Water Pipe System Design Study was undertaken to evaluate the diverse CWP concepts, recommend the most viable alternatives for a 1984 deployment of the 10 to 40 MWe MEP, and carry out preliminary designs of three concepts. The concept evaluation phase reported involved a systems analysis of design alternatives in the broad categories of rigid walled (with hinges), compliant walled, stockade and bottom mounted buoyant. Quantitative evaluations were made of concept performance, availability, deployment schedule, technical feasibility and cost. CWP concepts were analyzed to determine if they met or could be made to meet established system requirements and could be deployed by 1984. Fabrication, construction and installation plans were developed for successful concepts, and costs were determined in a WBS format. Evaluations were performed on the basis of technical and cost risk. This volume includes the following appendices: (A) materials and associated design criteria; (B) summary of results of dynamic flow and transportation analysis; (C) CWP sizing analysis; (D) CWP thermal performance; and (E) investigation of the APL/ABAM CWP design. (WHK)
DEFF Research Database (Denmark)
Posthuma, Daniëlle; Baare, Wim F.C.; Hulshoff Pol, Hilleke E.;
2003-01-01
We recently showed that the correlation of gray and white matter volume with full scale IQ and the Working Memory dimension are completely mediated by common genetic factors (Posthuma et al., 2002). Here we examine whether the other WAIS III dimensions (Verbal Comprehension, Perceptual Organization...)... to Working Memory capacity (r = 0.27). This phenotypic correlation is completely due to a common underlying genetic factor. Processing Speed was genetically related to white matter volume (r(g) = 0.39). Perceptual Organization was both genetically (r(g) = 0.39) and environmentally (r(e) = -0.71) related... to cerebellar volume. Verbal Comprehension was not related to any of the three brain volumes. It is concluded that brain volumes are genetically related to intelligence, which suggests that genes that influence brain volume may also be important for intelligence. It is also noted, however, that the direction...
Novel concepts for the compression of large volumes of carbon dioxide-phase III
Energy Technology Data Exchange (ETDEWEB)
Moore, J. Jeffrey [Southwest Research Inst., San Antonio, TX (United States); Allison, Timothy C. [Southwest Research Inst., San Antonio, TX (United States); Evans, Neal D. [Southwest Research Inst., San Antonio, TX (United States); Moreland, Brian [Southwest Research Inst., San Antonio, TX (United States); Hernandez, Augusto J. [Southwest Research Inst., San Antonio, TX (United States); Day, Meera [Southwest Research Inst., San Antonio, TX (United States); Ridens, Brandon L. [Southwest Research Inst., San Antonio, TX (United States)
2014-06-30
and tested in a closed loop compressor facility using CO₂. Both test programs successfully demonstrated good performance and mechanical behavior. In Phase III, a pilot compression plant consisting of a multi-stage centrifugal compressor with cooled diaphragm technology has been designed, constructed, and tested. Comparative testing of adiabatic and cooled tests at equivalent inlet conditions shows that the cooled diaphragms reduce power consumption by 3-8% when the compressor is operated as a back-to-back unit and by up to 9% when operated as a straight-through compressor with no intercooler. The power savings, heat exchanger effectiveness, and temperature drops for the cooled diaphragm were all slightly higher than predicted values but showed the same trends.
Kremer, Antoine
1981-01-01
This study follows the evolution of the components of the phenotypic and genotypic variance of height increment, using the coefficient of variation expressed for each component, with height increment estimated as the cumulative sum of successive increments. The coefficients of variation of the error, the station effect, the family × station interaction, and the mother × father interaction decrease over time, whereas the variance of the main effects...
Conversations across Meaning Variance
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Simultaneous optimal estimates of fixed effects and variance components in the mixed model
Institute of Scientific and Technical Information of China (English)
WU Mixia; WANG Songgui
2004-01-01
For a general linear mixed model with two variance components, a set of simple conditions is obtained, under which, (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking negative value is obtained.
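The balanced one-way random-effects layout is the minimal concrete instance of a linear mixed model with two variance components. The sketch below (hypothetical simulated data, numpy only; not code from the paper) computes the ANOVA estimates of the within- and between-group variance components, including the property noted in the abstract that the between-group estimate can take a negative value.

```python
import numpy as np

# Balanced one-way random-effects model: y_ij = mu + a_i + e_ij,
# with Var(a_i) = sigma2_a and Var(e_ij) = sigma2_e.
# ANOVA estimators:
#   sigma2_e_hat = MSE (within-group mean square)
#   sigma2_a_hat = (MSA - MSE) / n   -- may come out negative.
def anova_variance_components(y):
    """y: (k groups, n observations per group) array."""
    k, n = y.shape
    group_means = y.mean(axis=1)
    grand_mean = y.mean()
    msa = n * ((group_means - grand_mean) ** 2).sum() / (k - 1)
    mse = ((y - group_means[:, None]) ** 2).sum() / (k * (n - 1))
    return mse, (msa - mse) / n

rng = np.random.default_rng(0)
k, n = 200, 10
a = rng.normal(0.0, 2.0, size=k)                            # true sigma2_a = 4
y = 10.0 + a[:, None] + rng.normal(0.0, 1.0, size=(k, n))   # true sigma2_e = 1
s2e, s2a = anova_variance_components(y)
```

With many groups the estimates land close to the true values; with few groups and a small true between-group component, `(msa - mse) / n` can easily be negative, which is the pathology the paper quantifies.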
Energy Technology Data Exchange (ETDEWEB)
Rinne, R.L. [ed.
1994-02-01
This conference was organized to study and analyze the role of simulation, analysis, modeling, and exercises in the history of NATO policy. The premise was not that the results of past studies will apply to future policy, but rather that understanding what influenced the decision process, and how, would be of value. The structure of the conference was built around discussion panels. The panels were augmented by a series of papers and presentations focusing on particular TNF events, issues, studies, or exercises. The conference proceedings consist of three volumes. Volume 1 contains the conference introduction, agenda, biographical sketches of principal participants, and an analytical summary of the presentations and discussion panels. Volume 2 contains a short introduction and the papers and presentations from the conference. This volume contains selected papers by Brig. Gen. Robert C. Richardson III (Ret.).
Energy Technology Data Exchange (ETDEWEB)
Krawiec, F.; Thomas, T.; Jackson, F.; Limaye, D.R.; Isser, S.; Karnofsky, K.; Davis, T.D.
1980-11-01
An examination is made of the current and future energy demands, uses, and costs to characterize typical applications and resulting services in the industrial sectors of 15 selected US states. Volume III presents tables containing data on the selected states' manufacturing subsector energy consumption, functional uses, and cost in 1974 and 1976. Alabama, California, Illinois, Indiana, Louisiana, Michigan, Missouri, New Jersey, New York, Ohio, Oregon, Pennsylvania, Texas, West Virginia, and Wisconsin were chosen as having the greatest potential for replacing conventional fuel with solar energy. Basic data on the quantities, cost, and types of fuel and electric energy purchased by industry for heat and power were obtained from the 1974 and 1976 Annual Survey of Manufactures. The specific industrial energy service characteristics developed for each selected state include: 1974 and 1976 manufacturing subsector fuels and electricity consumption by 2-, 3-, and 4-digit SIC and primary fuel (quantity and relative share); 1974 and 1976 manufacturing subsector fuel consumption by 2-, 3-, and 4-digit SIC and primary fuel (quantity and relative share); 1974 and 1976 manufacturing subsector average cost of purchased fuels and electricity per million Btu by 2-, 3-, and 4-digit SIC and primary fuel (in 1976 dollars); 1974 and 1976 manufacturing subsector fuels and electric energy intensity by 2-, 3-, and 4-digit SIC and primary fuel (in 1976 dollars); and manufacturing subsector average annual growth rates (1974 to 1976) of (1) fuels and electricity consumption, (2) fuels and electric energy intensity, and (3) average cost of purchased fuels and electricity. Data are compiled on purchased fuels, distillate fuel oil, residual fuel oil, coal, coke and breeze, and natural gas. (MCW)
Nominal analysis of "variance".
Weiss, David J
2009-08-01
Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.
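The match/no-match idea behind NANOVA can be illustrated with a small sketch. This is a simplified two-group illustration under assumptions of my own (the statistic and the fictitious response data below are not taken from the paper, and the published procedure uses a full factorial table of N ratios): the statistic contrasts the proportion of matching response pairs within groups against the proportion between groups, with significance obtained by permutation resampling, in the spirit of the resampling step the abstract describes.

```python
import itertools
import random

def match_prop(pairs):
    """Proportion of pairs whose two nominal responses match."""
    pairs = list(pairs)
    return sum(a == b for a, b in pairs) / len(pairs)

def nominal_group_test(g1, g2, n_resamples=2000, seed=0):
    """Simplified NANOVA-style test for two independent groups.
    Statistic: within-group match proportion minus between-group
    match proportion; p-value by permutation resampling."""
    def stat(x, y):
        within = list(itertools.combinations(x, 2)) + list(itertools.combinations(y, 2))
        between = list(itertools.product(x, y))
        return match_prop(within) - match_prop(between)

    observed = stat(g1, g2)
    pooled = list(g1) + list(g2)
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        if stat(pooled[:len(g1)], pooled[len(g1):]) >= observed:
            exceed += 1
    return observed, exceed / n_resamples

# Fictitious nominal responses from two independent groups.
g1 = ["yes"] * 8 + ["no"] * 2
g2 = ["no"] * 7 + ["maybe"] * 3
obs, p = nominal_group_test(g1, g2)
```

A large positive statistic means responses agree more within groups than across them, i.e., the grouping factor matters; the permutation p-value plays the role that an F-table lookup plays in ordinary ANOVA.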
Energy Technology Data Exchange (ETDEWEB)
1979-12-01
Volume III explores resources and fuel cycle facilities. Chapters are devoted to: estimates of US uranium resources and supply; comparison of US uranium demands with US production capability forecasts; estimates of foreign uranium resources and supply; comparison of foreign uranium demands with foreign production capability forecasts; and world supply and demand for other resources and fuel cycle services. An appendix gives uranium, fissile material, and separative work requirements for selected reactors and fuel cycles.
Energy Technology Data Exchange (ETDEWEB)
None
1978-01-01
This report presents the results of Task I of Phase I in the form of a Conceptual Design and Evaluation of Commercial Plant report. The report is presented in four volumes as follows: I - Executive Summary, II - Commercial Plant Design, III - Economic Analyses, IV - Demonstration Plant Recommendations. Volume III presents the economic analyses for the commercial plant and the supporting data. General cost and financing factors used in the analyses are tabulated. Three financing modes are considered. The product gas cost calculation procedure is identified and appendices present computer inputs and sample computer outputs for the MLGW, Utility, and Industry Base Cases. The results of the base case cost analyses for plant fenceline gas costs are as follows: Municipal Utility, (e.g. MLGW), $3.76/MM Btu; Investor Owned Utility, (25% equity), $4.48/MM Btu; and Investor Case, (100% equity), $5.21/MM Btu. The results of 47 IFG product cost sensitivity cases involving a dozen sensitivity variables are presented. Plant half size, coal cost, plant investment, and return on equity (industrial) are the most important sensitivity variables. Volume III also presents a summary discussion of the socioeconomic impact of the plant and a discussion of possible commercial incentives for development of IFG plants.
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
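Among the replication methods treated in texts on survey variance estimation, the delete-one jackknife is the simplest to sketch. The example below (made-up data and a ratio estimator chosen for illustration; not an example from the book) shows the mechanics: recompute the statistic with each observation removed, then scale the spread of the replicates.

```python
import numpy as np

# Delete-one jackknife variance estimate for a nonlinear statistic,
# here the ratio estimator theta = sum(y) / sum(x).
def jackknife_variance(y, x):
    n = len(y)
    theta_hat = y.sum() / x.sum()
    # Replicate estimates, each with one observation deleted.
    replicates = np.array([
        (y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)
    ])
    var_hat = (n - 1) / n * ((replicates - replicates.mean()) ** 2).sum()
    return theta_hat, var_hat

rng = np.random.default_rng(1)
x = rng.uniform(1, 5, size=50)
y = 2.0 * x + rng.normal(0, 0.5, size=50)   # true ratio near 2
theta, var = jackknife_variance(y, x)
```

For a statistic with no closed-form variance (ratios, correlations, quantile-like quantities from complex designs), this replicate-based recipe is often the only practical route to a defensible standard error.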
Oster, Sharon; And Others
This study reviews the existing literature on a series of issues associated with the definition and measurement of poverty, and it consists of a summary report covering this research (Volume I) and an annotated bibliography (Volume II). Eleven specific issues were identified and reviewed in this study: (1) the historical definitions of poverty,…
Nout, Erik; van Bezooijen, Jine S; Koudstaal, Maarten J; Veenland, Jifke F; Hop, Wim C J; Wolvius, Eppo B; van der Wal, Karel G H
2012-04-01
Patients with syndromic craniosynostosis suffering from shallow orbits due to midface hypoplasia can be treated with a Le Fort III advancement osteotomy. This study evaluates the influence of Le Fort III advancement on orbital volume, position of the infra-orbital rim and globe. In pre- and post-operative CT-scans of 18 syndromic craniosynostosis patients, segmentation of the left and right orbit was performed and the infra-orbital rim and globe were marked. By superimposing the pre- and post-operative scans and by creating a reference coordinate system, movements of the infra-orbital rim and globe were assessed. Orbital volume increased significantly, by 27.2% for the left and 28.4% for the right orbit. Significant anterior movements of the left infra-orbital rim of 12.0 mm (SD 4.2) and right infra-orbital rim of 12.8 mm (SD 4.9) were demonstrated. Significant medial movements of 1.7 mm (SD 2.2) of the left globe and 1.5 mm (SD 1.9) of the right globe were demonstrated. There was a significant correlation between anterior infra-orbital rim movement and the increase in orbital volume. Significant orbital volume increase has been demonstrated following Le Fort III advancement. The position of the infra-orbital rim was moved forward significantly, whereas the globe position remained relatively unaffected. Copyright © 2011 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Marvis, Barbara J.
As part of a five-volume series written at a reading level for grades five to six and as a tribute to the contributions Asian Americans have made to the United States, this volume presents biographical sketches of Asian Americans who can serve as role models for today's youth. The profiles in the series show the triumph of the human spirit. Volume…
Fixed effects analysis of variance
Fisher, Lloyd; Birnbaum, Z W; Lukacs, E
1978-01-01
Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; the two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. It also covers confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthogonal designs; and multiple regression analysis.
This volume discusses Nashua Corporation's Omaha facility, a label and label stock manufacturing facility that no longer uses solvent-based adhesives. Information obtained includes issues related to the technical, economic, and environmental barriers and opportunities associated ...
Energy Technology Data Exchange (ETDEWEB)
Trueba, C.; Millam, R.; Schmid, T.; Roquero, C.; Magister, M.
1998-12-01
Soil vulnerability determines the sensitivity of the soil to accidental radioactive contamination by Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires knowledge of the soil properties for the various types of existing soils. To achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation into the database. This volume presents the criteria applied to normalize and process the data, as well as the soil properties of the various soil types belonging to the Comunidad Autonoma de Extremadura. (Author) 50 refs.
Energy Technology Data Exchange (ETDEWEB)
1979-01-01
The Alaska Regional Energy Resources Planning Project is presented in three volumes. This volume, Vol. III, considers alternative energies and the regional assessment inventory update. The introductory chapter, Chapter 12, examines the historical background, current technological status, environmental impact, applicability to Alaska, and siting considerations for a number of alternative systems. All of the systems considered use or could use renewable energy resources. The chapters that follow are entitled: Very Small Hydropower (about 12 kW or less for rural and remote villages); Low-Temperature Geothermal Space Heating; Wind; Fuel Cells; Siting Criteria and Preliminary Screening of Communities for Alternate Energy Use; Wood Residues; Waste Heat; and Regional Assessment Inventory Update. (MCW)
Statistical inference on variance components
Verdooren, L.R.
1988-01-01
In several sciences but especially in animal and plant breeding, the general mixed model with fixed and random effects plays a great role. Statistical inference on variance components means tests of hypotheses about variance components, constructing confidence intervals for them, estimating them,
Minimum Variance Portfolios in the Brazilian Equity Market
Directory of Open Access Journals (Sweden)
Alexandre Rubesam
2013-03-01
We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio, and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
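The unconstrained fully-invested minimum variance portfolio has a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below uses only the simplest of the covariance estimators the paper compares (the sample covariance, on simulated returns rather than Brazilian equity data), so it illustrates the construction, not the paper's results:

```python
import numpy as np

# Minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1),
# the fully-invested portfolio minimizing w' Sigma w subject to sum(w) = 1.
def min_variance_weights(returns):
    """returns: (T observations, N assets) array of asset returns."""
    sigma = np.cov(returns, rowvar=False)
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)   # avoids forming the explicit inverse
    return w / w.sum()

rng = np.random.default_rng(42)
T, N = 500, 5
returns = rng.normal(0.001, 0.02, size=(T, N))   # simulated daily returns
w = min_variance_weights(returns)
port_var = w @ np.cov(returns, rowvar=False) @ w
```

By construction the resulting in-sample portfolio variance cannot exceed that of any single asset; the practical difficulties the paper addresses (estimation error in Σ, short-selling limits such as the 130/30 constraint) arise out of sample and under constraints, where no closed form exists.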
1981-08-01
... including horse, camel, mammoth, musk ox, and certain species of bison, goat, and bear, which had previously inhabited the marsh ...
Energy Technology Data Exchange (ETDEWEB)
Stupka, Richard C.; Sharma, Rajendra K.
1977-03-01
Impingement of fish at cooling-water intakes of 32 power plants, located on estuaries and coastal waters has been surveyed and data are presented. Descriptions of site, plant, and intake design and operation are provided. Reports in this volume summarize impingement data for individual plants in tabular and histogram formats. Information was available from differing sources such as the utilities themselves, public documents, regulatory agencies, and others. Thus, the extent of detail in the reports varies greatly from plant to plant. Histogram preparation involved an extrapolation procedure that has inadequacies. The reader is cautioned in the use of information presented in this volume to determine intake-design acceptability or intensity of impacts on ecosystems. No conclusions are presented herein; data comparisons are made in Volume IV.
Abt Associates, Inc., Cambridge, MA.
Conducted on over 3,000 fourth, fifth, and sixth grade children in six states, this study documents changes in nutrition-related knowledge and behaviors which can be related to participating in the Mulligan Stew television series. The case studies which comprise this volume function as a brief organizational analysis of the Mulligan Stew effort at…
Energy Technology Data Exchange (ETDEWEB)
1980-02-01
Volume 1 of the conference proceedings contains sessions on neutrino physics and weak interactions, e⁺e⁻ physics, and theory. Five of the papers have already been cited in ERA, and can be found by reference to the entry CONF-790642-- in the Report Number Index. The remaining 30 will be processed as they are received on the Atomindex tape. (RWR)
Failure mode analysis for lime/limestone FGD system. Volume III. Plant profiles. Part 1 of 3
Energy Technology Data Exchange (ETDEWEB)
Kenney, S.M.; Rosenberg, H.S.; Nilsson, L.I.O.; Oxley, J.H.
1984-08-01
This volume contains plant profiles for: Petersburg 3; Hawthorn 3, 4; La Cygne 1; Jeffry 1, 2; Lawrence 4, 5; Green River 1-3; Cane Run 4, 5; Mill Creek 1, 3; Paddy's Run 6; Clay Boswell 4; Milton R. Young 2; Pleasants 1, 2; and Colstrip 1, 2. (DLC)
Marvis, Barbara J.
The biographies in this projected eight volume series for elementary school children represent the diversity of Hispanic heritage in the United States. Those featured are contemporary figures with national origins in the United States or Latin America, with careers that cover many aspects of contemporary life. Every person profiled in the series…
Energy Technology Data Exchange (ETDEWEB)
Shriner, C.R.; Peck, L.J.; Miller, C.E.
1978-07-01
This users guide was prepared to provide interested persons access to, via computer terminals, federally funded energy-related environment and safety research projects for FY 1977. Although this information is also available in hardbound volumes, this on-line searching capability is expected to reduce the time required to answer ad hoc questions and, at the same time, produce meaningful reports.
1990-01-01
Florence Wyckoff's three-volume oral history documents her remarkable, lifelong work as a social activist, during which she has become nationally recognized as an advocate of migrant families and children. From the depression years through the 1970s, she pursued grassroots, democratic, community-building efforts in the service of improving public health standards and providing health care, education, and housing for migrant families. Major legislative milestones in her career of advocacy were...
Energy Technology Data Exchange (ETDEWEB)
DeRouen, L.R.; Hann, R.W.; Casserly, D.M.; Giammona, C.; Lascara, V.J. (eds.)
1983-02-01
The Department of Energy's Strategic Petroleum Reserve Program began discharging brine into the Gulf of Mexico from its West Hackberry site near Cameron, Louisiana in May 1981. The brine originates from underground salt domes being leached with water from the Intracoastal Waterway, making available vast underground storage caverns for crude oil. The effects of brine discharge on aquatic organisms are presented in this volume. The topics covered are: benthos; nekton; phytoplankton; zooplankton; and data management.
Energy Technology Data Exchange (ETDEWEB)
Warner, J.A.; Morlok, E.K.; Gimm, K.K.; Zandi, I.
1976-07-01
In order to examine the feasibility of an intercity freight pipeline, it was necessary to develop cost equations for various competing transportation modes. This volume presents cost-estimating equations for rail carload, trailer-on-flatcar, truck, and freight pipeline. Section A presents mathematical equations that approximate the fully allocated and variable costs contained in the ICC cost tables for rail carload, trailer-on-flatcar (TOFC) and truck common-carrier intercity freight movements. These equations were developed to enable the user to approximate the ICC costs quickly and easily. They should find use in initial studies of costs where exact values are not needed, such as in consideration of rate changes, studies of profitability, and in general inter-modal comparisons. Section B discusses the development of a set of engineering cost equations for pneumo-capsule pipelines. The development was based on an analysis of system components and can readily be extended to other types of pipeline. The model was developed for the purpose of a feasibility study. It employs a limited number of generalized parameters and its use is recommended when sufficient detailed and specific engineering information is lacking. These models were used in the comparison of modes presented in Volume I and hence no conclusions regarding relative costs or service of the modes are presented here. The primary conclusion is that the estimates of costs resulting from these models is subject to considerable uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Miller, C. E.; Barker, Janice F.
1979-12-01
This users' guide was prepared to provide interested persons access to, via computer terminals, federally funded energy-related environmental and safety research projects for FY 1978. Although this information is also available in hardbound volumes, this on-line searching capability is expected to reduce the time required to answer ad hoc questions and, at the same time, produce meaningful reports. The data contained in this data base are not exhaustive and represent research reported by the following agencies: Department of Agriculture, Department of Commerce, Department of Defense, Department of Energy, Department of Health, Education, and Welfare, Department of the Interior, Department of Transportation, Federal Energy Administration, National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, Tennessee Valley Authority, U.S. Coast Guard, and the U.S. Environmental Protection Agency.
Energy Technology Data Exchange (ETDEWEB)
1977-01-01
Volume 2 covers major activities of the Artificial Heart Development program that supported the design, fabrication, and test of the system demonstration units. Section A.1.0 provides a listing beyond that of the body of the report on the components needed for an implantation. It also presents glove box sterilization calibration results and results of an extensive mock circulation calibration. Section A.2.0 provides detailed procedures for assembly, preparing for use, and the use of the system and major components. Section A.3.0 covers the component research and development activities undertaken to improve components of the existing system units and to prepare for a future prototype system. Section A.4.0 provides a listing of the top assembly drawings of the major systems variations fabricated and tested.
Modelling volatility by variance decomposition
DEFF Research Database (Denmark)
Amado, Cristina; Teräsvirta, Timo
on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns...... illustrate the functioning and properties of our modelling strategy in practice. The results show that the long memory type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance....
Revision: Variance Inflation in Regression
Directory of Open Access Journals (Sweden)
D. R. Jensen
2013-01-01
the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
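The conventional VIF the paper takes as its starting point is VIF_j = 1 / (1 − R²_j), where R²_j comes from regressing the j-th regressor on all the others. A minimal sketch on simulated collinear data (illustrating only the conventional index, not the paper's extensions):

```python
import numpy as np

# Conventional variance inflation factors: VIF_j = 1 / (1 - R_j^2),
# with R_j^2 from regressing column j on the remaining columns.
def vifs(X):
    """X: (n, p) design matrix without an intercept column."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(7)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)                   # independent regressor
v = vifs(np.column_stack([x1, x2, x3]))
```

The two near-collinear columns produce VIFs far above the usual rule-of-thumb threshold of 10, while the independent column stays near the lower bound of 1; the paper's point is that this all-linked-or-none diagnostic can mislead when only some regressors are intrinsically linked.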
Energy Technology Data Exchange (ETDEWEB)
Slatick, Emil; Ringe, R.R.; Zaugg, Waldo S. (Northwest and Alaska Fisheries Science Center, Coastal Zone and Estuarine Studies Division, Seattle, WA)
1988-02-02
The main functions of the National Marine Fisheries Service (NMFS) aquaculture task biologists and contractual scientists involved in the 1978 homing studies were primarily surveillance of fish physiology, disease, and relative survival during culture in marine net-pens, to determine if there were any unusual factors that might affect imprinting and homing behavior. The studies were conducted with little background knowledge of the implications of disease and physiology on imprinting and homing in salmonids. The health status of the stocks was quite variable, as could be expected. The Dworshak and Wells Hatcheries steelhead suffered from some early stresses in seawater, probably osmoregulatory. The incidences of latent BKD in the Wells and Chelan Hatcheries steelhead and Kooskia Hatchery spring chinook salmon were extremely high, and how these will affect survival in the ocean is not known. Gill enzyme activity in the Dworshak and Chelan Hatcheries steelhead at release was low. Of the steelhead, survival of the Tucannon Hatchery stock will probably be the highest, with the Dworshak Hatchery stock the lowest. This report contains the data for the narratives in Volume I.
Energy Technology Data Exchange (ETDEWEB)
Grace, T.M.; Frederick, W.J.; Salcudean, M.; Wessel, R.A.
1998-08-01
This project was initiated in October 1990, with the objective of developing and validating a new computer model of a recovery boiler furnace using a computational fluid dynamics (CFD) code specifically tailored to the requirements for solving recovery boiler flows, and using improved submodels for black liquor combustion based on continued laboratory fundamental studies. The key tasks to be accomplished were as follows: (1) Complete the development of enhanced furnace models that have the capability to accurately predict carryover, emissions behavior, dust concentrations, gas temperatures, and wall heat fluxes. (2) Validate the enhanced furnace models, so that users can have confidence in the predicted results. (3) Obtain fundamental information on aerosol formation, deposition, and hardening so as to develop the knowledge base needed to relate furnace model outputs to plugging and fouling in the convective sections of the boiler. (4) Facilitate the transfer of codes, black liquid submodels, and fundamental knowledge to the US kraft pulp industry. Volume 3 contains the following appendix sections: Formation and destruction of nitrogen oxides in recovery boilers; Sintering and densification of recovery boiler deposits laboratory data and a rate model; and Experimental data on rates of particulate formation during char bed burning.
Analysis of variance: Comfortless questions
L.V. Nedorezov
2017-01-01
In this paper the simplest variant of analysis of variance is considered. Three examples from textbooks by Lakin (1990) and Rokitsky (1973) are re-examined. It is found that traditional one-way ANOVA and the Kruskal-Wallis criterion can lead to unrealistic conclusions about a factor's influence on the value of a characteristic. An alternative approach to the same problem is also considered.
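As a concrete illustration of the one-way ANOVA computation questioned in the abstract above, the F statistic can be formed from between- and within-group sums of squares. The sample values below are illustrative, not the textbook examples from Lakin (1990) or Rokitsky (1973):

```python
def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of numeric samples."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares: spread of observations around their group mean
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between, df_within = k - 1, n_total - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

F, dfb, dfw = one_way_anova([
    [6, 8, 4, 5, 3, 4],      # illustrative samples, one list per factor level
    [8, 12, 9, 11, 6, 8],
    [13, 9, 11, 8, 7, 12],
])
print(F)  # about 9.26: group means differ far more than within-group noise
```

The critique in the paper concerns the interpretation of such an F value, not its arithmetic; the computation itself is standard.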
Analysis of Variance: Variably Complex
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
Energy Technology Data Exchange (ETDEWEB)
1979-07-01
The objective of the work described in this volume was to conceptualize suitable designs for solar total energy systems for the following residential market segments: single-family detached homes, single-family attached units (townhouses), low-rise apartments, and high-rise apartments. Conceptual designs for the total energy systems are based on parabolic trough collectors in conjunction with a 100 kWe organic Rankine cycle (ORC) heat engine or a flat-plate, water-cooled photovoltaic array. The ORC-based systems are designed to operate either as independent (stand-alone) systems that burn fossil fuel for backup electricity or as systems that purchase electricity from a utility grid for electrical backup. The ORC designs are classified as (1) a high temperature system designed to operate at 600°F and (2) a low temperature system designed to operate at 300°F. The 600°F ORC system that purchases grid electricity as backup utilizes the thermal tracking principle, and the 300°F ORC system tracks the combined thermal and electrical loads. Reject heat from the condenser supplies thermal energy for heating and cooling. All of the ORC systems utilize fossil fuel boilers to supply backup thermal energy to both the primary (electrical generating) cycle and the secondary (thermal) cycle. Space heating is supplied by a central hot water (hydronic) system, and a central absorption chiller supplies the space cooling loads. A central hot water system supplies domestic hot water. The photovoltaic system uses a central electrical vapor compression air conditioning system for space cooling, with space heating and domestic hot water provided by reject heat from the water-cooled array. All of the systems incorporate low temperature thermal storage (based on water as the storage medium) and lead-acid battery storage for electricity; in addition, the 600°F ORC system uses a Therminol-rock high temperature storage for the primary cycle. (WHK)
Energy Technology Data Exchange (ETDEWEB)
Bharat L. Bhatt
1999-06-01
Slurry phase Fischer-Tropsch technology was successfully demonstrated in DOE's Alternative Fuels Development Unit (AFDU) at LaPorte, Texas. Earlier work at LaPorte with iron catalysts, in 1992 and 1994, had established proof-of-concept status for the slurry phase process. The third campaign (Fischer-Tropsch III), in 1996, aimed at aggressively extending the operability of the slurry reactor using a proprietary cobalt catalyst. Due to irreversible plugging of the catalyst-wax separation filters caused by unexpected generation of catalyst fines, operations had to be terminated after seven days on-stream. Following an extensive post-run investigation by the participants, the campaign was successfully completed in March-April 1998 with an improved proprietary cobalt catalyst. These runs were sponsored by the U.S. Department of Energy (DOE), Air Products & Chemicals, Inc., and Shell Synthetic Fuels, Inc. (SSFI). A productivity of approximately 140 grams (gm) of hydrocarbons (HC)/hour (hr)-liter (lit) of expanded slurry volume was achieved with reasonable system stability during the second trial (Fischer-Tropsch IV). The productivity ranged from 110 to 140 gm HC/hr-liter at various conditions during the 18 days of operations. The catalyst/wax filters performed well throughout the demonstration, producing a clean wax product. For the most part, only one of the four filter housings was needed for catalyst/wax filtration, and the filter flux appeared to exceed the design flux. A combination of a stronger catalyst and some innovative filtration techniques was responsible for this success. There was no sign of catalyst particle attrition, and very little erosion of the slurry pump was observed, in contrast to the Fischer-Tropsch III operations. Reactor operation was hydrodynamically stable, with a uniform temperature profile and gas hold-up. Nuclear density and differential pressure measurements indicated somewhat higher than expected gas hold-up (45-50 vol%) during Fischer
Variance based OFDM frame synchronization
Directory of Open Access Journals (Sweden)
Z. Fedra
2012-04-01
The paper presents a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed times, so a modified early-late loop is used for frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since they may be chosen within a wide range without strongly influencing system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
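The core quantity in such a scheme, the variance of a sliding detection window compared at two delayed instants in early-late fashion, can be sketched as follows. The window length, delay, and signal model here are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def sliding_variance(x, w):
    """Variance of each length-w window of x, one value per start index."""
    x = np.asarray(x, dtype=float)
    return np.array([x[i:i + w].var() for i in range(len(x) - w + 1)])

def early_late_offset(x, w, delay):
    """Modified early-late discriminator: difference between window
    variances computed at two instants separated by `delay` samples."""
    v = sliding_variance(x, w)
    return v[delay:] - v[:-delay]

# Toy received frame: a flat (zero-variance) guard region followed by a
# noise-like payload; the variance profile marks the boundary.
signal = np.concatenate([np.ones(16),
                         np.random.default_rng(0).normal(size=64)])
v = sliding_variance(signal, 8)
print(v[0], v[-1])  # near-zero inside the flat region, positive in the payload
```

A zero crossing (or extremum) of the early-late difference then gives the frame position estimate.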
Variance decomposition in stochastic simulators
Le Maître, O. P.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
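The variance-based sensitivities discussed above can be illustrated with a standard Sobol pick-freeze estimator on a toy additive model whose first-order indices are known analytically (0.2 and 0.8). This is a generic sketch of variance decomposition, not the paper's Poisson-process reformulation:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

def f(x1, x2):
    # Additive toy model: Var = 1 + 4 = 5, so S1 = 0.2 and S2 = 0.8
    return x1 + 2.0 * x2

# Two independent standard-normal input samples A and B
a1, a2 = rng.normal(size=N), rng.normal(size=N)
b1, b2 = rng.normal(size=N), rng.normal(size=N)

ya = f(a1, a2)
var = ya.var()

# Pick-freeze: keep one coordinate from sample A, redraw the other from B;
# the covariance with the original output isolates that coordinate's share
s1 = np.cov(ya, f(a1, b2))[0, 1] / var
s2 = np.cov(ya, f(b1, a2))[0, 1] / var
print(s1, s2)  # ≈ 0.2 and 0.8
```

For an additive model the first-order indices sum to one; interaction terms would show up as a shortfall in that sum.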
Variance-based uncertainty relations
Huang, Yichen
2010-01-01
It is hard to overestimate the fundamental importance of uncertainty relations in quantum mechanics. In this work, I propose state-independent variance-based uncertainty relations for arbitrary observables in both finite and infinite dimensional spaces. We recover the Heisenberg uncertainty principle as a special case. By studying examples, we find that the lower bounds provided by our new uncertainty relations are optimal or near-optimal. I illustrate the uses of our new uncertainty relations by showing that they eliminate one common obstacle in a sequence of well-known works in entanglement detection, and thus make these works much easier to access in applications.
Patent Abstract Digest. Volume III.
1981-09-01
Includes U.S. Patent 4,122,675 (Polyak, 10/1978, class 60/641) for the multipurpose utilization of solar energy, together with a section of foreign patent documents.
Meliolales of India - Volume III
Directory of Open Access Journals (Sweden)
V.B. Hosagoudar
2013-04-01
This work, a continuation of the author's two preceding works on Meliolales of India, gives an account of 123 fungal species belonging to six genera: Amazonia (3), Appendiculella (1), Asteridiella (22), Ectendomeliola (1), Irenopsis (8) and Meliola (88), infecting 120 host plants belonging to 49 families. A generic key, digital formulae, and a synoptic key to the species are provided. In the key, all species are arranged under their alphabetically arranged host families. Each species is given with its citation, a detailed description, and the materials examined, including herbarium details, and is supplemented with line drawings. A host and species index is provided at the end. This work includes five new species: Meliola arippaensis, M. calycopteridis, M. cariappae, M. harpullicola and M. mutabilidis; a new variety: Irenopsis hiptages Yamam. var. indica; and two new names: Asteridiella micheliifolia (based on A. micheliae) and Meliola strombosiicola (based on Meliola strombosiae).
Neutrino mass without cosmic variance
LoVerde, Marilena
2016-01-01
Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological datasets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias $b(k)$ and the linear growth parameter $f(k)$ inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on $b(k)$ and $f(k)$ continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via $b(k)$ and $f(k)$. The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high density limit, using multiple tracers allows cosmic variance to be beaten and the forecasted errors on neutrino mass shrink dramatically. In...
Asymptotic variance of grey-scale surface area estimators
DEFF Research Database (Denmark)
Svane, Anne Marie
Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.
Warped functional analysis of variance.
Gervini, Daniel; Carter, Patrick A
2014-09-01
This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.
Speed Variance and Its Influence on Accidents.
Garber, Nicholas J.; Gadirau, Ravi
A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…
Variance optimal stopping for geometric Levy processes
DEFF Research Database (Denmark)
Gad, Kamille Sofie Tågholt; Pedersen, Jesper Lund
2015-01-01
The main result of this paper is the solution to the optimal stopping problem of maximizing the variance of a geometric Lévy process. We call this problem the variance problem. We show that, for some geometric Lévy processes, we achieve higher variances by allowing randomized stopping. Furthermore...
Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data
DEFF Research Database (Denmark)
Greve, Douglas N; Svarer, Claus; Fisher, Patrick M
2014-01-01
intersubject variance than when volume smoothing was used. This translates into more than 4 times fewer subjects needed in a group analysis to achieve similarly powered statistical tests. Surface-based smoothing has less bias and variance because it respects cortical geometry by smoothing the PET data only...
Bergeron, Jacques C., Ed.; And Others
The Proceedings of PME-XI has been published in three separate volumes because of the large total of 161 individual conference papers reported. Volume I contains four plenary papers, all on the subject of "constructivism," and 44 commented papers arranged under 4 themes. Volume II contains 56 papers (39 commented; 17 uncommented)…
Hirabayashi, Ichiei, Ed.; And Others
The Proceedings of PME-XVII has been published in three volumes because of the large number of papers presented at the conference. Volume I contains a brief Plenary Panel report, 4 full-scale Plenary Addresses, the brief reports of 10 Working Groups and 4 Discussion Groups, and a total of 23 Research Reports grouped under 4 themes. Volume II…
Linear Minimum variance estimation fusion
Institute of Scientific and Technical Information of China (English)
ZHU Yunmin; LI Xianrong; ZHAO Juan
2004-01-01
This paper shows that a general multisensor unbiased linearly weighted estimation fusion is essentially the linear minimum variance (LMV) estimation with a linear equality constraint, and the general estimation fusion formula is developed by extending the Gauss-Markov estimation to the random parameters of distributed estimation fusion in the LMV setting. In this setting, the fused estimator is a weighted sum of the local estimates, obtained from a matrix quadratic optimization problem subject to a convex linear equality constraint. Second, we present a unique solution to the above optimization problem, which depends only on the covariance matrix C_k. Third, if the a priori information (the expectation and covariance) of the estimated quantity is unknown, a necessary and sufficient condition is presented for the above LMV fusion to become the best unbiased LMV estimation with known prior information. We also discuss the generality and usefulness of the LMV fusion formulas developed. Finally, we provide an off-line recursion of C_k for a class of multisensor linear systems with coupled measurement noises.
Computing the Expected Value and Variance of Geometric Measures
DEFF Research Database (Denmark)
Staals, Frank; Tsirogiannis, Constantinos
2017-01-01
points in P. This problem is a crucial part of modern ecological analyses; each point in P represents a species in d-dimensional trait space, and the goal is to compute the statistics of a geometric measure on this trait space, when subsets of species are selected under random processes. We present efficient exact algorithms for computing the mean and variance of several geometric measures when point sets are selected under one of the described random distributions. More specifically, we provide algorithms for the following measures: the bounding box volume, the convex hull volume, the mean pairwise...
Generalized analysis of molecular variance.
Directory of Open Access Journals (Sweden)
Caroline M Nievergelt
2007-04-01
Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of biological and statistical meaning to the resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by
The phenotypic variance gradient - a novel concept.
Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton
2014-11-01
Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
Geeslin, William, Ed.; Graham, Karen, Ed.
The Proceedings of PME-XVI has been published in three volumes because of the large number of papers presented at the conference. Volume 1 contains: (1) brief reports from each of the 11 standing Working Groups on their respective roles in organizing PME-XVI; (2) brief reports from 6 Discussion Groups; and (3) 35 research reports covering authors…
Influence of Family Structure on Variance Decomposition
DEFF Research Database (Denmark)
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs... PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for... shall modify the tag, label, or other certification required by § 1010.2 to state: (1) That the...
Analysis of variance for model output
Jansen, M.J.W.
1999-01-01
A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va
The Correct Kriging Variance Estimated by Bootstrapping
den Hertog, D.; Kleijnen, J.P.C.; Siem, A.Y.D.
2004-01-01
The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrappi
Nonlinear Epigenetic Variance: Review and Simulations
Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.
2010-01-01
We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…
Variance Risk Premia on Stocks and Bonds
DEFF Research Database (Denmark)
Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea
is different from the equity variance risk premium. Third, the conditional correlation between stock and bond market variance risk premium switches sign often and ranges between -60% and +90%. We then show that these stylized facts pose a challenge to standard consumption-based asset pricing models....
Portfolio optimization with mean-variance model
Hoe, Lam Weng; Siew, Lam Weng
2016-06-01
Investors wish to achieve the target rate of return at the minimum level of risk in their investments. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks differs across portfolios. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
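As a sketch of the variance-minimizing step in the mean-variance model: the global minimum-variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). The covariance matrix below is hypothetical, not the FBMKLCI data used in the study:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w ∝ Σ⁻¹·1, normalized to sum to 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve Σx = 1 instead of inverting Σ
    return w / w.sum()

# Illustrative covariance of weekly returns for three hypothetical stocks
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
risk = float(w @ cov @ w)            # portfolio variance at the optimum
print(w, risk)
```

At the optimum, Σw is proportional to the vector of ones, which is the first-order condition of the constrained quadratic program; a target-return constraint adds a second linear condition but the structure is the same.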
Expected Stock Returns and Variance Risk Premia
DEFF Research Database (Denmark)
Bollerslev, Tim; Zhou, Hao
We find that the difference between implied and realized variation, or the variance risk premium, is able to explain more than fifteen percent of the ex-post time series variation in quarterly excess returns on the market portfolio over the 1990 to 2005 sample period, with high (low) premia predicting high (low) future returns. The magnitude of the return predictability of the variance risk premium easily dominates that afforded by standard predictor variables like the P/E ratio, the dividend yield, the default spread, and the consumption-wealth ratio (CAY). Moreover, combining the variance risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...
Rhode, William E.; And Others
Basic cost estimates for selected instructional media are tabled in this document, Part II (Appendix III) of the report "Analysis and Approach to the Development of an Advanced Multimedia Instructional System" by William E. Rhode and others. Learning materials production costs are given for motion pictures, still visuals, videotapes, live…
Fox, Mary Kay; Cole, Nancy
2004-01-01
Data from the Third National Health and Nutrition Examination Survey (NHANES-III), conducted in 1988-94, were used to compare the nutrition and health characteristics of the Nation's school-age children--boys and girls ages 5-18. Three groups of children were compared based on household income: income at or below 130 percent of poverty (lowest…
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied to many decision-making problems, particularly portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned compared to the mean-variance approach.
Energy Technology Data Exchange (ETDEWEB)
None
1977-12-01
Tasks III and IV measure the characteristics of potential research and development programs that could be applied to the maritime industry. It was necessary to identify potential operating scenarios for the maritime industry in the year 2000 and determine the energy consumption that would result given those scenarios. After the introductory chapter the operational, regulatory, and vessel-size scenarios for the year 2000 are developed in Chapter II. In Chapter III, future cargo flows and expected levels of energy use for the baseline 2000 projection are determined. In Chapter IV, the research and development programs are introduced into the future US flag fleet and the energy-savings potential associated with each is determined. The first four appendices (A through D) describe each of the generic technologies. The fifth appendix (E) contains the baseline operating and cost parameters against which 15 program areas were evaluated. (MCW)
DEFF Research Database (Denmark)
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander
2013-01-01
of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic...
Reducing variance in batch partitioning measurements
Energy Technology Data Exchange (ETDEWEB)
Mariner, Paul E.
2010-08-11
The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K{sub d} values (e.g., ASTM D 4646 and EPA 402-R-99-004A) explain neither how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjusting the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
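The effect described above can be reproduced with a small Monte Carlo. The Kd value, concentrations, and constant-absolute-error noise model below are hypothetical illustrations, not the report's actual simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

def kd_cv(f_sorbed, kd_true=10.0, c0=100.0, sigma_abs=1.0, n=20000):
    """Monte Carlo coefficient of variation of a batch Kd estimate when a
    fraction f_sorbed of the sorbate partitions to the sorbent."""
    r = f_sorbed / (kd_true * (1.0 - f_sorbed))   # sorbent:solution ratio m/V
    cw = c0 / (1.0 + kd_true * r)                 # true equilibrium concentration
    c0_m = c0 + rng.normal(0.0, sigma_abs, n)     # measured initial concentration
    cw_m = cw + rng.normal(0.0, sigma_abs, n)     # measured final concentration
    kd_hat = (c0_m - cw_m) / (cw_m * r)           # batch Kd estimator
    return kd_hat.std() / kd_hat.mean()

for f in (0.1, 0.5, 0.9):
    print(f"fraction sorbed {f:.1f}: CV(Kd) = {kd_cv(f):.3f}")
```

With constant absolute measurement error on both concentrations, the relative error of Kd scales roughly as 1/(f(1-f)), so the CV is smallest when about half the sorbate partitions, which is the design point the abstract recommends.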
Grammatical and lexical variance in English
Quirk, Randolph
2014-01-01
Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.
78 FR 14122 - Revocation of Permanent Variances
2013-03-04
... Occupational Safety and Health Administration Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is... into consideration these newly corrected cross references. DATES: The effective date of the...
Importance Sampling Variance Reduction in GRESS ATMOSIM
Energy Technology Data Exchange (ETDEWEB)
Wakeford, Daniel Tyler [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-26
This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....
Variance components in discrete force production tasks.
Varadhan, S K M; Zatsiorsky, Vladimir M; Latash, Mark L
2010-09-01
The study addresses the relationships between task parameters and two components of variance, "good" and "bad", during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does ("bad") and does not ("good") affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding-up an accurate force production task would be accompanied by a drop in the regression coefficient linking the "bad" variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding-up the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. "Good" variance scaled linearly with force magnitude and did not depend on force rate. "Bad" variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding-up the cyclic tasks was associated with
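The "good"/"bad" split can be illustrated directly in finger-force space. This is a simplification with simulated numbers: the paper defines the components in finger-mode (command) space, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 4-finger forces over 200 trials: large variability that cancels
# in the total force ("good") plus a small variability of the total ("bad").
n_trials, n_fingers = 200, 4
u = np.ones(n_fingers) / np.sqrt(n_fingers)     # direction that changes total force
noise = rng.normal(0, 1.0, (n_trials, n_fingers))
noise -= np.outer(noise @ u, u)                 # confine noise to the "good" subspace
F = 5.0 + noise + np.outer(rng.normal(0, 0.2, n_trials), u)  # add small "bad" part

C = np.cov(F, rowvar=False)
v_bad = u @ C @ u                # variance component affecting total force
v_good = np.trace(C) - v_bad     # variance leaving total force unchanged
# Normalized difference, per degree of freedom, as a synergy index analogue
synergy_index = (v_good / (n_fingers - 1) - v_bad) / (np.trace(C) / n_fingers)

print(f"good variance {v_good:.2f}, bad variance {v_bad:.3f}, index {synergy_index:.2f}")
```

A positive index means most trial-to-trial variability is channeled into combinations that leave the total force unchanged, i.e., a force-stabilizing synergy.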
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
The Variance Composition of Firm Growth Rates
Directory of Open Access Journals (Sweden)
Luiz Artur Ledur Brito
2009-04-01
Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
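The variance-components logic can be sketched with a one-way random-effects decomposition. This sketch uses only a firm effect versus year-to-year noise on simulated data; the study's industry, country, and year components and the Compustat panel are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated panel: 300 hypothetical firms observed for 9 years each, with a
# persistent firm effect (variance 4) and a transient year shock (variance 6).
n_firms, n_years = 300, 9
firm_effect = rng.normal(0, 2.0, n_firms)
growth = firm_effect[:, None] + rng.normal(0, np.sqrt(6.0), (n_firms, n_years))

# One-way random-effects (method-of-moments / ANOVA) variance decomposition.
msw = growth.var(axis=1, ddof=1).mean()            # within-firm mean square
msb = n_years * growth.mean(axis=1).var(ddof=1)    # between-firm mean square
sigma2_within = msw
sigma2_firm = max((msb - msw) / n_years, 0.0)      # E[MSB] = sigma2_w + n*sigma2_b
firm_share = sigma2_firm / (sigma2_firm + sigma2_within)

print(f"firm share of variance = {firm_share:.2f}")   # true share is 4/10
```

The estimated firm share recovers the simulated 40% figure, mirroring the magnitude the abstract reports for idiosyncratic firm effects.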
Frick, Theodore W.; And Others
The document is part of the final report on Project STEEL (Special Teacher Education and Evaluation Laboratory) intended to extend the utilization of technology in the training of preservice special education teachers. This volume focuses on the third of four project objectives, the development and implementation of a computer-based testing…
Gutmanis, Ivars; And Others
The report presents the methodology used by the National Planning Association (NPA), under contract to the Federal Energy Administration (FEA), to estimate direct labor usage coefficients in some sixty different occupational categories involved in construction, operation, and maintenance of energy facilities. Volume 1 presents direct labor usage…
Variance optimal sampling based estimation of subset sums
Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel
2008-01-01
From a high-volume stream of weighted items, we want to maintain a generic sample of a certain limited size k that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen n items of the stream, then for any subset size m, our scheme based on k samples minimizes the average variance over all subsets of size m. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on k samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in O(log k) time, which is optimal even on the word RAM.
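The flavor of weight-sensitive sampling for subset sums can be conveyed with the closely related priority-sampling scheme (threshold at the (k+1)-th largest priority, estimate max(w, tau) per kept item). This is a sketch of that simpler relative, not the authors' variance-optimal algorithm, and the item weights are made up.

```python
import random

def priority_sample(items, k, rng):
    """items: list of (key, weight). Returns (sample, tau) where sample holds
    k items with weight estimates max(w, tau). Priority-sampling sketch."""
    prios = [(w / rng.random(), key, w) for key, w in items]  # priority = w/u
    prios.sort(reverse=True)
    tau = prios[k][0]                       # (k+1)-th largest priority
    return [(key, max(w, tau)) for _, key, w in prios[:k]], tau

rng = random.Random(0)
items = [(i, 1.0 + (i % 10)) for i in range(10000)]   # weights 1..10
true_total = sum(w for _, w in items)

sample, tau = priority_sample(items, k=500, rng=rng)
# Any subset sum is estimated by summing estimates of kept items in the subset;
# here the subset is the whole stream.
est_total = sum(w_hat for _, w_hat in sample)
print(f"true {true_total:.0f}, estimate {est_total:.0f}")
```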
Kim, Min-Ah; Kim, Bo-Ram; Choi, Jin-Young; Youn, Jong-Kuk; Kim, Yoon-Ji R; Park, Yang-Ho
2013-07-01
To evaluate longitudinal changes of the hyoid bone position and pharyngeal airway space after bimaxillary surgery in mandibular prognathism patients. Cone-beam computed tomography scans were taken for 25 mandibular prognathism patients before surgery (T0), 2 months after surgery (T1), and 6 months after surgery (T2). The positional displacement of the hyoid bone was assessed using the coordinates at T0, T1, and T2. Additionally, the volume of each subject's pharyngeal airway was measured. The mean amount of posterior maxilla impaction was 3.76 ± 1.33 mm as the palatal plane rotated 2.04° ± 2.28° in a clockwise direction as a result of bimaxillary surgery. The hyoid bone moved backward (P < .05) after bimaxillary surgery. The decrease in the pharyngeal airway volume was correlated to the changes in the palatal plane inclination and the positional change of the hyoid bone.
Energy Technology Data Exchange (ETDEWEB)
Hallet, Jr., R. W.; Gervais, R. L.
1977-10-01
The central receiver system consists of a field of heliostats, a central receiver, a thermal storage unit, an electrical power generation system, and balance of plant. This volume discusses the collector field geometry, requirements and configuration. The development of the collector system and subsystems are discussed and the selection rationale outlined. System safety and availability are covered. Finally, the plans for collector portion of the central receiver system are reviewed.
Energy Technology Data Exchange (ETDEWEB)
Slemmons, A J
1980-04-01
The conceptual design, parametric analysis, cost and performance analysis, and commercial assessment of a 100-MWe line-focus solar central receiver power plant are reported. This volume contains the appendices: (a) methods of determination of molten salt heat-transfer coefficients and tube-wall temperatures, (b) inputs for STEAEC programs, (c) description of system analysis computer program, (d) receiver analysis program, and (e) heliostat production plan and design methodology. (WHK)
Discrimination of frequency variance for tonal sequences.
Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A
2014-12-01
Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²STAN, while in the signal interval, the variance of the sequence was σ²SIG (with σ²SIG > σ²STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²STAN. Although performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²SIG − σ²STAN) to σ²STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data.
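The ideal observer mentioned above is easy to simulate: it simply picks the interval whose five-tone sequence has the larger sample variance. The variance values below are illustrative, not those of the experiment.

```python
import numpy as np

rng = np.random.default_rng(3)

def io_percent_correct(var_stan, var_sig, n_tones=5, n_trials=20000):
    """Ideal observer that chooses the interval with the larger sample
    variance of log frequency; the roving mean is irrelevant to it."""
    stan = rng.normal(0, np.sqrt(var_stan), (n_trials, n_tones))
    sig = rng.normal(0, np.sqrt(var_sig), (n_trials, n_tones))
    correct = sig.var(axis=1, ddof=1) > stan.var(axis=1, ddof=1)
    return correct.mean()

# Same variance ratio should yield similar percent correct (Weber-like behavior).
p_low = io_percent_correct(0.01, 0.03)    # ratio 3, small variances
p_high = io_percent_correct(0.10, 0.30)   # ratio 3, large variances
print(f"percent correct: {p_low:.3f} vs {p_high:.3f}")
```

Because the decision variable is a ratio of sample variances, performance depends only on σ²SIG/σ²STAN, which is exactly the Weber-law pattern the listeners showed.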
Ossenkopf, V; Stutzki, J
2008-01-01
The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. In Paper I we proposed essential improvements to the Delta-variance analysis. In this paper we apply the improved Delta-variance analysis to i) a hydrodynamic turbulence simulation with prominent density and velocity structures, ii) an observed intensity map of rho Oph with irregular boundaries and variable uncertainties of the different data points, and iii) a map of the turbulent velocity structure in the Polaris Flare affected by the intensity dependence on the centroid velocity determination. The tests confirm the extended capabilities of the improved Delta-variance analysis. Prominent spatial scales were accurately identified and artifacts from a variable reliability of the data were removed. The analysis of the hydrodynamic simulations showed that, with the injection of a turbulent velocity structure, the most prominent density structures are produced on a sca...
Nougier, JP
1991-01-01
As is well known, Silicon widely dominates the market of semiconductor devices and circuits, and in particular is well suited for Ultra Large Scale Integration processes. However, a number of III-V compound semiconductor devices and circuits have recently been built, and the contributions in this volume are devoted to those types of materials, which offer a number of interesting properties. Taking into account the great variety of problems encountered and of their mutual correlations when fabricating a circuit or even a device, most of the aspects of III-V microelectronics, from fundamental p
Maximum Variance Hashing via Column Generation
Directory of Open Access Journals (Sweden)
Lei Luo
2013-01-01
… item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm, maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
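The objective of maximizing hash-code variance is most easily illustrated with a spectral (PCA-style) baseline that projects onto the top-variance directions and binarizes by sign. The paper's column-generation learner and locality-preserving term are not reproduced here; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

def pca_hash(X, n_bits):
    """Hash by sign of projections onto the top-variance directions."""
    Xc = X - X.mean(axis=0)
    # Eigen-decomposition of the covariance matrix; eigh returns ascending order.
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = vecs[:, np.argsort(vals)[::-1][:n_bits]]   # largest-variance directions
    return (Xc @ W > 0).astype(np.uint8)

# Anisotropic synthetic data: variance concentrated in the first dimensions.
X = rng.normal(0, 1, (1000, 32)) * np.linspace(3, 0.1, 32)
codes = pca_hash(X, n_bits=8)
print(codes.shape, codes.mean(axis=0))   # bits should be roughly balanced
```

Centering before binarization makes each bit split the data roughly in half, which is one sense in which high-variance codes are informative.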
Estimating quadratic variation using realized variance
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2002-01-01
This paper looks at some recent work on estimating quadratic variation using realized variance (RV), that is, sums of M squared returns. This econometrics has been motivated by the advent of commonly available high-frequency financial return data. When the underlying process is a semimartingale, … we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance.
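The estimator itself is one line: sum the M squared intra-period returns and compare with the integrated variance. The deterministic volatility path below is a made-up illustration, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(2)

def realized_variance(M, T=1.0):
    """Simulate M log-price increments with deterministic time-varying
    volatility; return (RV, integrated variance) over [0, T]."""
    t = np.linspace(0.0, T, M, endpoint=False) + T / (2 * M)  # interval midpoints
    sigma2 = 0.04 * (1.0 + 0.5 * np.sin(2 * np.pi * t))       # spot variance path
    returns = rng.normal(0.0, np.sqrt(sigma2 * T / M))        # one return per interval
    iv = sigma2.mean() * T                                    # approximates the integral
    return (returns ** 2).sum(), iv

for M in (13, 288, 10000):
    rv, iv = realized_variance(M)
    print(f"M={M:6d}: RV={rv:.4f}  IV={iv:.4f}")
```

RV is unbiased for IV at every M, but its sampling noise shrinks only like 1/sqrt(M), which is why moderate-M realized variance can be a noisy estimator, as the abstract notes.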
Spoelstra, Femke O B; Senan, Suresh; Le Péchoux, Cecile; Ishikura, Satoshi; Casas, Francesc; Ball, David; Price, Allan; De Ruysscher, Dirk; van Sörnsen de Koste, John R
2010-03-15
Postoperative radiotherapy (PORT) in patients with completely resected non-small-cell lung cancer with mediastinal involvement is controversial because of the failure of earlier trials to demonstrate a survival benefit. Improved techniques may reduce toxicity, but the treatment fields used in routine practice have not been well studied. We studied routine target volumes used by international experts and evaluated the impact of a contouring protocol developed for a new prospective study, the Lung Adjuvant Radiotherapy Trial (Lung ART). Seventeen thoracic radiation oncologists were invited to contour their routine clinical target volumes (CTV) for 2 representative patients using a validated CD-ROM-based contouring program. Subsequently, the Lung ART study protocol was provided, and both cases were contoured again. Variations in target volumes and their dosimetric impact were analyzed. Routine CTVs were received for each case from 10 clinicians, whereas six provided both routine and protocol CTVs for each case. Routine CTVs varied up to threefold between clinicians, but use of the Lung ART protocol significantly decreased variations. Routine CTVs in a postlobectomy patient resulted in V20 values ranging from 12.7% to 54.0%, whereas Lung ART protocol CTVs resulted in values of 20.6% to 29.2%. Similar results were seen for other toxicity parameters and in the postpneumonectomy patient. With the exception of upper paratracheal nodes, protocol contouring improved coverage of the required nodal stations. Even among experts, significant interclinician variations are observed in PORT fields. Inasmuch as contouring variations can confound the interpretation of PORT results, mandatory quality assurance procedures have been incorporated into the current Lung ART study.
Integrating Variances into an Analytical Database
Sanchez, Carlos
2010-01-01
For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
Energy Technology Data Exchange (ETDEWEB)
Salmon, R.; Edwards, M.S.; Ulrich, W.C.
1977-06-01
This volume consists of individual block flowsheets for the various units of the Synthoil facility, showing the overall flows into and out of each unit. Material balances for the following units are incomplete because these are proprietary processes and the information was not provided by the respective vendors: Unit 24-Claus Sulfur Plant; Unit 25-Oxygen Plant; Unit 27-Sulfur Plant (Redox Type); and Unit 28-Sour Water Stripper and Ammonia Recovery Plant. The process information in this form was specifically requested by ERDA/FE for inclusion in the final report.
Sources of variance in ocular microtremor.
Sheahan, N F; Coakley, D; Bolger, C; O'Neill, D; Fry, G; Phillips, J; Malone, J F
1994-02-01
This study presents a preliminary investigation of the sources of variance in the measurement of ocular microtremor frequency in a normal population. When the results from both experienced and relatively inexperienced operators are pooled, factors that contribute significantly to the total variance include the measurement procedure (p < 0.001), day-to-day variations within subjects (p < 0.001), and inter-subject differences (p < 0.01). Operator experience plays a role in determining the measurement precision: the intra-subject coefficient of variation is about 5% for a very experienced operator, and about 14% for a relatively inexperienced operator.
Managing product inherent variance during treatment
Verdenius, F.
1996-01-01
The natural variance of agricultural product parameters complicates recipe planning for product treatment, i.e. the process of transforming a product batch from its initial state to a prespecified final state. For a specific product P, recipes are currently composed by human experts on the basis of
The Variance of Language in Different Contexts
Institute of Scientific and Technical Information of China (English)
申一宁
2012-01-01
Language can differ considerably in meaning across different contexts. There are three categories of context: culture, situation, and co-text. In this article, we analyze the variance of language in each of these three aspects. The article aims to help people better understand the meaning of language in a specific context.
Regression calibration with heteroscedastic error variance.
Spiegelman, Donna; Logan, Roger; Grove, Douglas
2011-01-01
The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
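For the homoscedastic case the abstract takes as its baseline, standard regression calibration reduces to dividing the naive slope by the reliability ratio λ = var(X)/var(W). The simulation below is illustrative, with the measurement error variance treated as known, as if estimated from validation data; it is not the ACE or Nurses' Health Study analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# True exposure X, error-prone surrogate W = X + U, outcome Y depending on X.
n = 5000
x = rng.normal(0, 1, n)
w = x + rng.normal(0, 0.8, n)               # homoscedastic measurement error
y = 2.0 + 1.5 * x + rng.normal(0, 1, n)     # true slope is 1.5

beta_naive = np.cov(w, y)[0, 1] / w.var(ddof=1)   # attenuated slope from regressing Y on W

# Regression calibration: rescale by the reliability ratio
# lambda = var(X)/var(W), using the (assumed known) error variance 0.8**2.
lam = (w.var(ddof=1) - 0.8 ** 2) / w.var(ddof=1)
beta_rc = beta_naive / lam

print(f"naive {beta_naive:.2f}, calibrated {beta_rc:.2f} (truth 1.5)")
```

With error variance 0.64 the reliability ratio is about 0.61, so the naive slope is attenuated to roughly 0.9 and the calibrated slope recovers the true 1.5.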
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
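A canonical variance reduction technique is antithetic variates, which pairs each uniform draw U with 1-U. The integrand below is a textbook illustration, not an example from the chapter.

```python
import numpy as np

rng = np.random.default_rng(6)

# Estimate E[e^U] for U ~ Uniform(0,1); the exact value is e - 1.
n = 100000
u = rng.random(n)

plain = np.exp(u)                              # crude Monte Carlo samples
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))     # antithetic-pair averages

print(f"plain:      mean {plain.mean():.4f}, variance {plain.var():.5f}")
print(f"antithetic: mean {anti.mean():.4f}, variance {anti.var():.5f}")
```

Because e^U and e^(1-U) are negatively correlated, each antithetic pair has far lower variance than a single crude sample, so the same number of uniforms yields a much tighter estimate.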
Formative Use of Intuitive Analysis of Variance
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
Linear transformations of variance/covariance matrices
Parois, P.J.A.; Lutz, M.
2011-01-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance
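The propagation rule behind this is that for a linear transformation q = T p, the variance/covariance matrix transforms as C' = T C Tᵀ, with the new standard uncertainties read off the diagonal. The matrices below are illustrative numbers, not a real crystal structure.

```python
import numpy as np

# Parameters p with variance/covariance matrix C, and a linear map q = T p.
T = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])

C_new = T @ C @ T.T                  # full covariance of the transformed parameters
su_new = np.sqrt(np.diag(C_new))     # transformed standard uncertainties

print(C_new)
print("standard uncertainties:", su_new)
```

Note that transforming the standard uncertainties alone would miss the covariance terms: the first transformed uncertainty is sqrt(4 + 2·1 + 2) = sqrt(8), not sqrt(4 + 2).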
Decomposition of variance for spatial Cox processes
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-01-01
Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...
Energy Technology Data Exchange (ETDEWEB)
Powers, Patrick D.; Orsborn, John F.
1985-08-01
This volume covers the broad, though relatively short, historical basis for this project. The historical developments of certain design features, criteria and research activities are traced. Current design practices are summarized based on the results of an international survey and interviews with agency personnel and consultants. The fluid mechanics and hydraulics of fishway systems are discussed. Fishways (or fishpasses) can be classified in two ways: (1) on the basis of the method of water control (chutes, steps (ladders), or slots); and (2) on the basis of the degree and type of water control. This degree of control ranges from a natural waterfall to a totally artificial environment at a hatchery. Systematic procedures for analyzing fishways based on their configuration, species, and hydraulics are presented. Discussions of fish capabilities, energy expenditure, attraction flow, stress and other factors are included.
Energy Technology Data Exchange (ETDEWEB)
1978-01-01
The variability of energy output inherent in wind energy conversion systems (WECS) has led to the investigation of energy storage as a means of managing the available energy when immediate, direct use is not possible or desirable. This portion of the General Electric study was directed at an evaluation of those energy storage technologies deemed best suited for use in conjunction with a wind energy conversion system in utility, residential and intermediate applications. Break-even cost goals are developed for several storage technologies in each application. These break-even costs are then compared with cost projections presented in Volume I of this report to show technologies and time frames of potential economic viability. The report summarizes the investigations performed and presents the results, conclusions and recommendations pertaining to use of energy storage with wind energy conversion systems.
Energy Technology Data Exchange (ETDEWEB)
Durham, C.O. Jr.; O' Brien, F.D.; Rodgers, R.W. (eds.)
1985-01-01
This report presents the results of the testing of Sand 3 (15,245 to 15,280 feet in depth) which occurred from November 1983 to March 1984 and evaluates these new data in comparison to results from the testing of Sand 5 (15,385 to 15,415 feet in depth) which occurred from June 1981 to February 1982. It also describes the reworking of the production and salt water disposal wells preparatory to the Sand 3 testing as well as the plug and abandon procedures requested to terminate the project. The volume contains two parts: Part 1 includes the text and accompanying plates, figures and tables; Part 2 consists of the appendixes including auxiliary reports and tabulations.
Matthews, Walter R.; And Others
Four volumes present materials and a training workshop on proposal writing. The materials aim to give people the skills and resources with which to translate their ideas into fully developed grant proposals for projects related to educational equity for women. However, the information is applicable to most other funding procedures. The first…
40 CFR 142.43 - Disposition of a variance request.
2010-07-01
... during the period of variance shall specify interim treatment techniques, methods and equipment, and... the specified treatment technique for which the variance was granted is necessary to protect...
Energy Technology Data Exchange (ETDEWEB)
Rosenthal, M.D.; Houck, F.
2010-01-01
In this section of the report, the development of INFCIRC/540 is traced by a compilation of citations from the IAEA documents presented to the Board of Governors and the records of discussions in the Board that took place prior to the establishment of Committee 24 as well as the documents and discussions of that committee. The evolution of the text is presented separately for each article or, for the more complex articles, for each paragraph or group of paragraphs of the article. This section covers all articles, including those involving no issues. Background, issues, interpretations and conclusions, which were addressed in Volumes I, II, and III are not repeated here. The comments by states that are included are generally limited to objections and suggested changes. Requests for clarification or elaboration have been omitted, although it is recognized that such comments were sometimes veiled objections.
Directory of Open Access Journals (Sweden)
Giuliano Reboa
2016-01-01
Full Text Available The clinical charts of 621 patients with grade III-IV haemorrhoids undergoing Stapled Hemorrhoidopexy (SH) with CPH34 HV in 2012-2014 were consecutively reviewed to assess its safety and efficacy after at least 12 months of follow-up. Mean volume of prolapsectomy was significantly higher in larger prolapses (13.0 mL; SD, 1.4) than in smaller ones (9.3 mL; SD, 1.2) (p<0.001). Residual or recurrent haemorrhoids occurred in 11 of 621 patients (1.8%) and in 12 of 581 patients (1.9%), respectively. Relapse was correlated with higher preoperative Constipation Scoring System (CSS) score (p=0.000), Pescatori's degree (p=0.000), Goligher's grade (p=0.003), prolapse exceeding half of the length of the Circular Anal Dilator (CAD) (p=0.000), and higher volume of prolapsectomy (p=0.000). At regression analysis, only the preoperative CSS, Pescatori's degree, Goligher's grade, and volume of resection were significantly predictive of relapse. A high level of satisfaction (VAS = 8.6; SD, 1.0) coupled with a reduction of 12-month CSS (Δ preoperative CSS/12-month CSS = 3.4; SD, 2.0; p<0.001) was observed. The wider prolapsectomy achievable with CPH34 HV determined an overall 3.7% relapse rate in patients with a high prevalence of large internal rectal prolapse, coupled with a high satisfaction index, significant reduction of CSS, and very low complication rates.
Bias-variance decomposition in Genetic Programming
Directory of Open Access Journals (Sweden)
Kowaliw Taras
2016-01-01
Full Text Available We study properties of Linear Genetic Programming (LGP) through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a) the variance between runs is primarily due to initialization rather than the selection of training samples, (b) parameters can be reasonably optimized to obtain gains in efficacy, and (c) functions detrimental to evolvability are easily eliminated, while functions well-suited to the problem can greatly improve performance; therefore, larger and more diverse function sets are always preferable.
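The bias-variance decomposition used in this entry can be verified numerically. A minimal sketch (a deliberately shrunken mean estimator rather than the paper's LGP setup; all parameter values are invented for illustration):

```python
import random

# Monte Carlo check of MSE = bias^2 + variance for a shrunk mean estimator.
# The shrinkage factor C < 1 trades variance for bias.
random.seed(0)

TRUE_MEAN = 2.0
C = 0.5            # shrinkage factor (arbitrary choice)
N, RUNS = 10, 20000

estimates = []
for _ in range(RUNS):
    sample = [random.gauss(TRUE_MEAN, 1.0) for _ in range(N)]
    estimates.append(C * sum(sample) / N)

mean_est = sum(estimates) / RUNS
bias = mean_est - TRUE_MEAN                                   # about (C - 1) * TRUE_MEAN
variance = sum((e - mean_est) ** 2 for e in estimates) / RUNS  # about C^2 / N
mse = sum((e - TRUE_MEAN) ** 2 for e in estimates) / RUNS

# For these Monte Carlo averages the decomposition holds as an algebraic identity.
print(round(bias, 3), round(variance, 4), round(mse, 4))
```

The same bookkeeping, repeated over independent runs and training sets, is what separates initialization-driven variance from sampling-driven variance in the study above.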
Realized Variance and Market Microstructure Noise
DEFF Research Database (Denmark)
Hansen, Peter R.; Lunde, Asger
2006-01-01
We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
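The bias that microstructure noise induces in the realized variance is straightforward to reproduce: with i.i.d. noise of variance ω² added to n observations of the efficient price, the RV of observed returns is inflated by roughly 2nω². A hedged simulation sketch (the parameter values are assumptions, and this is plain RV, not the paper's kernel-based estimator):

```python
import random

random.seed(1)
n = 10000            # number of intraday return observations
sigma = 0.01         # efficient-price volatility per step (assumption)
omega = 0.005        # microstructure noise standard deviation (assumption)

# Efficient log-price: a random walk. Observed price: efficient price + iid noise.
eff = [0.0]
for _ in range(n):
    eff.append(eff[-1] + random.gauss(0, sigma))
obs = [p + random.gauss(0, omega) for p in eff]

def realized_variance(prices):
    """Sum of squared one-period returns."""
    return sum((prices[i + 1] - prices[i]) ** 2 for i in range(len(prices) - 1))

rv_eff = realized_variance(eff)
rv_obs = realized_variance(obs)
# Expected inflation from the noise is approximately 2 * n * omega**2.
print(rv_eff, rv_obs, 2 * n * omega ** 2)
```

The gap between `rv_obs` and `rv_eff` grows linearly in the sampling frequency, which is precisely why noise-robust (e.g. kernel-based) estimators are needed at high frequencies.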
Linear transformations of variance/covariance matrices.
Parois, Pascal; Lutz, Martin
2011-07-01
Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance matrix. For the transformation of second-rank tensors it is suggested that the 3 × 3 matrix is re-written into a 9 × 1 vector. The transformation of the corresponding variance/covariance matrix is then straightforward and easily implemented into computer software. This method is applied in the transformation of anisotropic displacement parameters, the calculation of equivalent isotropic displacement parameters, the comparison of refinements in different space-group settings and the calculation of standard uncertainties of eigenvalues.
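The rule underlying this entry is the standard propagation formula Σ_y = A Σ_x Aᵀ for a linear transformation y = A x; the second-rank-tensor case reduces to the same formula once the 3 × 3 tensor is rewritten as a 9 × 1 vector. A minimal pure-Python sketch of the matrix rule:

```python
def mat_mul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def transform_covariance(A, S):
    """Variance/covariance matrix of y = A x, given covariance S of x: A S A^T."""
    return mat_mul(mat_mul(A, S), transpose(A))

# Example: sum and difference of two uncorrelated variables with variances 1 and 4.
A = [[1, 1], [1, -1]]
S = [[1, 0], [0, 4]]
print(transform_covariance(A, S))  # [[5, -3], [-3, 5]]
```

The off-diagonal terms show why the full matrix is needed: even uncorrelated parameters acquire covariance under a linear transformation, and standard uncertainties alone cannot capture that.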
Variance and covariance of accumulated displacement estimates.
Bayer, Matthew; Hall, Timothy J
2013-04-01
Tracking large deformations in tissue using ultrasound can enable the reconstruction of nonlinear elastic parameters, but poses a challenge to displacement estimation algorithms. Such large deformations have to be broken up into steps, each of which contributes an estimation error to the final accumulated displacement map. The work reported here measured the error variance for single-step and accumulated displacement estimates using one-dimensional numerical simulations of ultrasound echo signals, subjected to tissue strain and electronic noise. The covariance between accumulation steps was also computed. These simulations show that errors due to electronic noise are negatively correlated between steps, and therefore accumulate slowly, whereas errors due to tissue deformation are positively correlated and accumulate quickly. For reasonably low electronic noise levels, the error variance in the accumulated displacement estimates is remarkably constant as a function of step size, but increases with the length of the tracking kernel.
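The accumulation behaviour described above follows from Var(Σᵢ eᵢ) = Σᵢ Var(eᵢ) + 2 Σᵢ<ⱼ Cov(eᵢ, eⱼ). A sketch assuming, for simplicity, that step errors are correlated only between adjacent steps (the numbers are arbitrary):

```python
def accumulated_variance(step_var, adj_cov, k):
    """Variance of the sum of k steps whose errors are correlated only
    between adjacent steps, each adjacent pair having covariance adj_cov."""
    return k * step_var + 2 * (k - 1) * adj_cov

v = 1.0
for k in (1, 5, 10, 50):
    slow = accumulated_variance(v, -0.45 * v, k)  # negatively correlated (noise-like)
    fast = accumulated_variance(v, +0.45 * v, k)  # positively correlated (deformation-like)
    print(k, slow, fast)
```

With negative adjacent covariance the terms nearly cancel and the accumulated variance stays almost flat in k, mirroring the electronic-noise result; with positive covariance it grows much faster, mirroring the deformation-error result.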
The Theory of Variances in Equilibrium Reconstruction
Energy Technology Data Exchange (ETDEWEB)
Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren
2008-01-14
The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.
Eigenvalue variance bounds for covariance matrices
Dallaporta, Sandrine
2013-01-01
This work is concerned with finite range bounds on the variance of individual eigenvalues of random covariance matrices, both in the bulk and at the edge of the spectrum. In a preceding paper, the author established analogous results for Wigner matrices and stated the results for covariance matrices. They are proved in the present paper. Relying on the LUE example, which needs to be investigated first, the main bounds are extended to complex covariance matrices by means of the Tao, Vu and Wan...
High-dimensional regression with unknown variance
Giraud, Christophe; Verzelen, Nicolas
2011-01-01
We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasis is on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso estimator, and some references are collected for more general models, including multivariate regression and nonparametric regression.
Fractional constant elasticity of variance model
Ngai Hang Chan; Chi Tim Ng
2007-01-01
This paper develops a European option pricing formula for fractional market models. Although option pricing results exist for a fractional Black-Scholes model, they are established without accounting for stochastic volatility. In this paper, a fractional version of the Constant Elasticity of Variance (CEV) model is developed. A European option pricing formula similar to that of the classical CEV model is obtained and a volatility skew pattern is revealed.
Fundamentals of exploratory analysis of variance
Hoaglin, David C; Tukey, John W
2009-01-01
The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.
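As a concrete reminder of the least-squares machinery the book retains, the one-way ANOVA F statistic takes only a few lines (the data here are made up):

```python
def one_way_anova_F(groups):
    """F = (between-group mean square) / (within-group mean square)
    for a one-way layout given as a list of groups of observations."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]
print(round(one_way_anova_F(groups), 3))  # 13.0
```

Exploratory use, in the book's spirit, then looks at the separate pieces (grand mean, group effects, residuals) rather than only the final F ratio.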
Discussion on variance reduction technique for shielding
Energy Technology Data Exchange (ETDEWEB)
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As a task of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and the Weight Window variance reduction method of the MCNP code proved limited and cumbersome to apply. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As a result, the distribution of the fractional standard deviation (FSD) of neutron and gamma-ray flux along the shield depth is reported. There is an optimal importance change: when importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)
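The general point, that a well-chosen biasing scheme beats analog sampling for deep-penetration problems, shows up even in a toy model: estimating a small tail probability P(X > a) for an exponential flight length, by analog sampling versus exponential tilting. This is a generic importance-sampling illustration, not the MCNP Weight Window or cell importance machinery:

```python
import math
import random

random.seed(2)
A = 8.0              # "deep" threshold; true tail probability is exp(-8) ~ 3.4e-4
N = 20000

# Analog: sample X ~ Exp(1), score 1 if X > A. Most histories score zero.
analog = [1.0 if random.expovariate(1.0) > A else 0.0 for _ in range(N)]

# Importance sampling: sample X ~ Exp(lam) with lam < 1 so the tail is hit
# often, and weight each score by the likelihood ratio f(x)/g(x).
lam = 1.0 / A        # tilted rate (heuristic choice)
def weighted_score():
    x = random.expovariate(lam)
    w = math.exp(-x) / (lam * math.exp(-lam * x))
    return w if x > A else 0.0

imp = [weighted_score() for _ in range(N)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

m_a, v_a = mean_var(analog)
m_i, v_i = mean_var(imp)
print(m_a, m_i, math.exp(-A))   # both estimate exp(-8)
print(v_a, v_i)                 # the importance-sampling variance is typically far smaller
```

Cell importances and Weight Windows play the same role in a transport code: they concentrate histories where they contribute to the tally, with weights keeping the estimate unbiased.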
Applications of non-parametric statistics and analysis of variance on sample variances
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made to survey what can be used, to recommend when each would be applicable, and to compare the methods, where possible, with the usual normal-theory procedures available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects, and on the surface it would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.
The Parabolic variance (PVAR), a wavelet variance based on least-square fit
Vernotte, F; Bourgeois, P -Y; Rubiola, E
2015-01-01
The Allan variance (AVAR) is one option among the wavelet variances. Although a milestone in the analysis of frequency fluctuations and in the long-term stability of clocks, and certainly the most widely used, AVAR is not suitable when fast noise processes show up, chiefly because of its poor rejection of white phase noise. The modified Allan variance (MVAR) features high resolution in the presence of white PM noise, but it is poorer for slow phenomena because the wavelet spans over 50% longer time. This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the Linear Regression (LR) of phase data. The PVAR relates to the Omega frequency counter, which is the topic of a companion article [the reference to the article, or to the ArXiv manuscript, will be provided later]. The PVAR wavelet spans over 2 tau, the same as the AVAR wavelet. After setting the theoretical framework, we analyze the degrees of freedom and the detection of weak noise processes in...
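The Allan variance can be computed from equally spaced phase data as AVAR(τ₀) = ⟨(x_{k+2} − 2x_{k+1} + x_k)²⟩ / (2τ₀²); for a pure linear frequency drift D the textbook result is (Dτ)²/2, which makes a convenient self-check:

```python
def allan_variance(phase, tau0):
    """Allan variance at averaging time tau0 from equally spaced phase data,
    via the second difference of the phase samples."""
    d2 = [phase[k + 2] - 2 * phase[k + 1] + phase[k] for k in range(len(phase) - 2)]
    return sum(v * v for v in d2) / (2 * tau0 ** 2 * len(d2))

# Pure frequency drift: y(t) = D * t, so phase x(t) = D * t**2 / 2.
D, tau0 = 1e-3, 1.0
phase = [0.5 * D * (k * tau0) ** 2 for k in range(1000)]
avar = allan_variance(phase, tau0)
print(avar, (D * tau0) ** 2 / 2)   # both approximately 5e-07
```

PVAR replaces the flat phase averages implicit in this second difference with least-squares (linear-regression) phase estimates, which is what improves the rejection of fast noise.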
Visual SLAM Using Variance Grid Maps
Howard, Andrew B.; Marks, Tim K.
2011-01-01
An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
DEFF Research Database (Denmark)
Lauridsen, Palle Schantz
2017-01-01
A short analysis of Shakespeare's Richard III, focusing on how this villain is presented so that spectators (and readers) can, for much of the play, feel sympathy for him. With parallels to the Netflix series "House of Cards".
A relation between information entropy and variance
Pandey, Biswajit
2016-01-01
We obtain an analytic relation between the information entropy and the variance of a distribution in the regime of small fluctuations. We use a set of Monte Carlo simulations of different homogeneous and inhomogeneous distributions to verify the relation and also test it in a set of cosmological N-body simulations. We find that the relation is in excellent agreement with the simulations and is independent of number density and the nature of the distributions. The relation would help us to relate entropy to other conventional measures and widen its scope.
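For the special case of a Gaussian distribution the entropy-variance link is explicit: the differential entropy is H = ½ ln(2πeσ²), so entropy differences are half the log of variance ratios. A quick check of that textbook case (not the small-fluctuation estimator used in the paper):

```python
import math

def gaussian_entropy(var):
    """Differential entropy (in nats) of a 1-D Gaussian with variance var."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Doubling the variance raises the entropy by exactly 0.5 * ln(2).
delta = gaussian_entropy(2.0) - gaussian_entropy(1.0)
print(gaussian_entropy(1.0), delta, 0.5 * math.log(2))
```

Relations of this kind are what let entropy be compared against conventional second-moment measures of clustering.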
The value of travel time variance
Fosgerau, Mogens; Engelson, Leonid
2010-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...
Markov bridges, bisection and variance reduction
DEFF Research Database (Denmark)
Asmussen, Søren; Hobolth, Asger
Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data are often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Second, we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented where the methods of stratification, importance sampling and quasi Monte Carlo are investigated.
Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, variation in the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).
EXPLANATORY VARIANCE IN MAXIMAL OXYGEN UPTAKE
Directory of Open Access Journals (Sweden)
Jacalyn J. Robert McComb
2006-06-01
Full Text Available The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females; ages 18-24 years) underwent the following testing procedures: (a) a 7-site skin fold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (%BF).
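The reported equation can be applied directly; the coefficients below are exactly those quoted in the abstract:

```python
def predict_vo2max(percent_body_fat):
    """Estimated VO2max (ml/kg/min) from the study's regression equation:
    VO2max = 56.14 - 0.92 * (% body fat)."""
    return 56.14 - 0.92 * percent_body_fat

print(round(predict_vo2max(20.0), 2))  # 37.74
```

Note the stated standard error of estimate (SEE = 3.27 ml/kg/min), so any single prediction carries an uncertainty of several ml/kg/min.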
Dimension reduction based on weighted variance estimate
Institute of Scientific and Technical Information of China (English)
ZHAO JunLong; XU XingZhong
2009-01-01
In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes Sliced Average Variance Estimate (SAVE) as a special case. Bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension. And this selected best estimate usually performs better than the existing methods such as Sliced Inverse Regression (SIR), SAVE, etc. Many methods such as SIR, SAVE, etc. usually put the same weight on each observation to estimate central subspace (CS). By introducing a weight function, WVE puts different weights on different observations according to distance of observations from CS. The weight function makes WVE have very good performance in general and complicated situations, for example, the distribution of regressor deviating severely from elliptical distribution which is the base of many methods, such as SIR, etc. And compared with many existing methods, WVE is insensitive to the distribution of the regressor. The consistency of the WVE is established. Simulations to compare the performances of WVE with other existing methods confirm the advantage of WVE.
A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model
Institute of Scientific and Technical Information of China (English)
Hou Ying-li; Liu Guo-xin; Jiang Chun-lan
2015-01-01
In this paper, we focus on a constant elasticity of variance (CEV) model and find its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no-shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and a variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.
Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability
DEFF Research Database (Denmark)
Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco
We develop a joint framework linking the physical variance and its risk neutral expectation, implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance, and naturally satisfying the sign constraint. Using option market data and realized variances, our model allows us to infer the occurrence and size of extreme variance events, and to construct indicators signalling agents' sentiment towards future market conditions. Our results show that excess returns are to a large extent explained by fear or optimism towards future extreme variance...
The value of travel time variance
DEFF Research Database (Denmark)
Fosgerau, Mogens; Engelson, Leonid
2011-01-01
This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time.
Power Estimation in Multivariate Analysis of Variance
Directory of Open Access Journals (Sweden)
Jean François Allaire
2007-09-01
Full Text Available Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
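The procedure sketched in the abstract (a critical value from the central F distribution, then power from the noncentral F) can be illustrated by Monte Carlo without a statistics library; the degrees of freedom and noncentrality below are arbitrary choices, not taken from the paper:

```python
import random

random.seed(3)

def chi2(df, noncentrality=0.0):
    """Draw a (noncentral) chi-squared variate as a sum of squared shifted normals;
    the squared means sum to the noncentrality parameter."""
    shift = (noncentrality / df) ** 0.5
    return sum((random.gauss(0, 1) + shift) ** 2 for _ in range(df))

def f_draw(df1, df2, nc=0.0):
    """Draw a (noncentral) F variate as a ratio of scaled chi-squared variates."""
    return (chi2(df1, nc) / df1) / (chi2(df2) / df2)

DF1, DF2, NC, N = 3, 40, 10.0, 50000

# Step 1: critical value = empirical 95th percentile of the central F.
central = sorted(f_draw(DF1, DF2) for _ in range(N))
f_crit = central[int(0.95 * N)]

# Step 2: power = P(noncentral F exceeds the critical value).
power = sum(f_draw(DF1, DF2, NC) > f_crit for _ in range(N)) / N
print(round(f_crit, 2), round(power, 3))
```

In practice one would use tabulated or library quantiles of the central and noncentral F instead of simulation; the two-step logic is the same.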
Expected Stock Returns and Variance Risk Premia
DEFF Research Database (Denmark)
Bollerslev, Tim; Tauchen, George; Zhou, Hao
Motivated by the implications from a stylized self-contained general equilibrium model incorporating the effects of time-varying economic uncertainty, we show that the difference between implied and realized variation, or the variance risk premium, is able to explain a non-trivial fraction of the time series variation in post-1990 aggregate stock market returns, with high (low) premia predicting high (low) future returns. Our empirical results depend crucially on the use of "model-free," as opposed to Black-Scholes, options implied volatilities, along with accurate realized variation measures constructed from high-frequency intraday, as opposed to daily, data. The magnitude of the predictability is particularly strong at the intermediate quarterly return horizon, where it dominates that afforded by other popular predictor variables, like the P/E ratio, the default spread, and the consumption...
Age Dedifferentiation Hypothesis: Evidence from the WAIS III.
Juan-Espinosa, Manuel; Garcia, Luis F.; Escorial, Sergio; Rebollo, Irene; Colom, Roberto; Abad, Francisco J.
2002-01-01
Used the Spanish standardization of the Wechsler Adult Intelligence Scale III (WAIS III) (n=1,369) to test the age dedifferentiation hypothesis. Results show no changes in the percentage of variance accounted for by "g" and four group factors when restriction of range is controlled. Discusses an age indifferentiation hypothesis. (SLD)
Commencement Bay Study. Volume III. Fish Wetlands.
1981-12-31
...area. Amish (1976) studied the occurrence of Philometra americana in English sole and rock sole of central Puget Sound. Amish's sampling locations... Fisheries Biologist, Washington Department of Fisheries. Personal communication. Amish, R.A., 1976. The occurrence of the bloodworm Philometra americana... wildlife as well as the people of the Puyallup Nation who then inhabited the study area. Six major wetland habitat types have been recognized in the...
Design Options Study. Volume III. Qualitative Assessment.
1980-09-01
...would be obtained for a 500,000 lb- or 600,000 lb-payload aircraft is uncertain. Assessment of Design-Option Substitution. To summarize the preceding... exhaust smoke and prohibit fuel venting to the atmosphere. In accordance with APR 80-36, as discussed previously in conjunction with the noise... Laboratory in terms of combustor efficiency, specific NOx values, and specific levels of visible smoke. In the most recent EPA proposals, emission...
Progress Report on Alzheimer Disease: Volume III.
National Inst. on Aging (DHHS/PHS), Bethesda, MD.
This report summarizes advances in the understanding of Alzheimer's disease, the major cause of mental disability among older Americans. The demography of the disease is discussed, noting that approximately 2.5 million American adults are afflicted with the disease and that the large increase in the number of Alzheimer's disease patients is due to…
Towboat Maneuvering Simulator. Volume III. Theoretical Description.
1979-05-01
...overshoot or zigzag maneuver; i = 1, 2, 3, ... 6. Flanking rudder deflection rate; steering rudder deflection rate; ship propulsion ratio... used with the equations are for the ship propulsion point (n = 1.0). The equations are written in terms of the complete barge flotilla/towboat...
Great III - Cultural Resource Inventory. Volume 2
1982-05-01
Historical Sketch of St. Louis University. Patrick Fox, St. Louis. Historical look at the St. Louis mound complex. Holmes, Nathaniel 1868 Loess... Saint Louis to Me. St. Louis, Missouri: Hawthorn Publishing Company, 1978. 305 p., illus. Gates, Gwendolyn Lewis 1976 Historic Sites Inventory for... Watercolors by Marilynne Bradley. St. Louis: Hawthorn Publishing Company, c. 1977. 259 p., illus. (part color). Includes: Old Courthouse, Old...
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and the modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ^3 and 1/τ^2, the same as MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition, or corner, where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.
Estimation of bias and variance of measurements made from tomography scans
Bradley, Robert S.
2016-09-01
Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
Effect of window shape on the detection of hyperuniformity via the local number variance
Kim, Jaeuk; Torquato, Salvatore
2017-01-01
Hyperuniform many-particle systems in d-dimensional space ℝ^d, which include crystals, quasicrystals, and some exotic disordered systems, are characterized by an anomalous suppression of density fluctuations at large length scales such that the local number variance within a 'spherical' observation window grows more slowly than the window volume. In usual circumstances, this direct-space condition is equivalent to the Fourier-space hyperuniformity condition that the structure factor vanishes as the wavenumber goes to zero. In this paper, we comprehensively study the effect of aspherical window shapes with characteristic size L on the direct-space condition for hyperuniform systems. For lattices, we demonstrate that the variance growth rate can depend on the shape as well as the orientation of the windows, and in some cases the growth rate can be faster than the window volume (i.e., L^d), which may lead one to falsely conclude that the system is non-hyperuniform solely according to the direct-space condition. We begin by numerically investigating the variance of two-dimensional lattices using 'superdisk' windows, whose convex shapes continuously interpolate between circles (p = 1) and squares (p → ∞), as prescribed by a deformation parameter p, when the superdisk symmetry axis is aligned with the lattice. Subsequently, we analyze the variance for lattices as a function of the window orientation, especially for two-dimensional lattices using square windows (superdisks with p → ∞). Based on this analysis, we explain why the variance for d = 2 can grow faster than the window area or even more slowly than the window perimeter (e.g., like ln(L)). We then extend the condition on the window orientation, under which the variance can grow as fast as or faster than L^d (the window volume), to the case of Bravais lattices and parallelepiped windows in ℝ^d. In the case of isotropic disordered hyperuniform systems, we
Institute of Scientific and Technical Information of China (English)
Georgy Shevlyakov; Kiseon Kim
2005-01-01
A brief survey of former and recent results on Huber's minimax approach in robust statistics is given. The least informative distributions minimizing Fisher information for location over several distribution classes with upper-bounded variances and subranges are written down. These least informative distributions are qualitatively different from the classical Huber solution and have the following common structure: (i) with relatively small variances they are short-tailed, in particular normal; (ii) with relatively large variances they are heavy-tailed, in particular Laplace; (iii) with relatively moderate variances they are a compromise between the two. These results make it possible to raise the efficiency of minimax robust procedures while retaining high stability as compared to the classical Huber procedure for contaminated normal populations. In application to signal detection problems, the proposed minimax detection rule has proved to be robust and close to Huber's for heavy-tailed distributions and more efficient than Huber's for short-tailed ones, both asymptotically and on finite samples.
Genomic variance estimates: With or without disequilibrium covariances?
Lehermeier, C; de Los Campos, G; Wimmer, V; Schön, C-C
2017-06-01
Whole-genome regression methods are often used for estimating genomic heritability: the proportion of phenotypic variance that can be explained by regression on marker genotypes. Recently, there has been an intensive debate on whether and how to account for the contribution of linkage disequilibrium (LD) to genomic variance. Here, we investigate two different methods for genomic variance estimation that differ in their ability to account for LD. By analysing flowering time in a data set on 1,057 fully sequenced Arabidopsis lines with strong evidence for diversifying selection, we observed a large contribution of covariances between quantitative trait loci (QTL) to the genomic variance. The classical estimate of genomic variance that ignores covariances underestimated the genomic variance in the data. The second method accounts for LD explicitly and leads to genomic variance estimates that when added to error variance estimates match the sample variance of phenotypes. This method also allows estimating the covariance between sets of markers when partitioning the genome into subunits. Large covariance estimates between the five Arabidopsis chromosomes indicated that the population structure in the data led to strong LD also between physically unlinked QTL. By consecutively removing population structure from the phenotypic variance using principal component analysis, we show how population structure affects the magnitude of LD contribution and the genomic variance estimates obtained with the two methods. © 2017 Blackwell Verlag GmbH.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite this assumption, most instruments are known to exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were examined in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Although these techniques represent a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even with moderate uncertainty (30%) in the variance function, weighted regression still outperforms unweighted regression. We recommend the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
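The recommended power model, Var(y) = a·signal^b, translates into weighted least squares with weights 1/variance. A hypothetical calibration sketch (the values of a, b, and the data are illustrative; in practice a and b would be estimated from replicates):

```python
import numpy as np

rng = np.random.default_rng(2)
conc = np.linspace(1, 100, 20)                    # hypothetical calibration levels
true_slope, a, b = 2.0, 0.01, 2.0                 # b = 2: constant relative error
y = true_slope * conc * (1 + np.sqrt(a) * rng.standard_normal(conc.size))

# Power-model weights: w = 1 / Var(y) = 1 / (a * signal**b)
w = 1.0 / (a * (true_slope * conc) ** b)
W = np.diag(w)
X = conc[:, None]                                 # through-origin calibration model

slope_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)[0]   # weighted fit
slope_ols = np.linalg.solve(X.T @ X, X.T @ y)[0]           # unweighted fit
```

Both slopes are near 2.0 here, but the weighted fit has the smaller standard error when the noise is proportional to the signal, which is the abstract's point about low-signal (detection-limit) performance.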
Gene set analysis using variance component tests
2013-01-01
Background: Gene set analyses have become increasingly important in genomic research, as many complex diseases arise jointly from alterations of numerous genes. Genes often coordinate as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. Results: We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects that assumes a common distribution for the regression coefficients in the multivariate linear regression model, and calculate p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which correlation among genes in a gene set is ignored. Using both simulated data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). Conclusion: We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulation and the diabetes microarray data. PMID:23806107
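TEGS itself involves a working covariance and a scaled chi-square approximation; as a minimal stand-in, the permutation side of such a test can be sketched with a simple sum-of-squares statistic (the statistic and data below are illustrative, not the actual TEGS score):

```python
import numpy as np

def permutation_pvalue(X, y, n_perm=500, rng=None):
    """Permutation p-value for the effect of a binary label y on a
    multivariate response X (samples x genes), using a sum of squared
    group-mean differences as the test statistic."""
    rng = rng or np.random.default_rng(0)

    def stat(lbl):
        return np.sum((X[lbl == 1].mean(axis=0) - X[lbl == 0].mean(axis=0)) ** 2)

    obs = stat(y)
    exceed = sum(stat(rng.permutation(y)) >= obs for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)            # add-one correction

rng = np.random.default_rng(7)
y = np.repeat([0, 1], 20)                         # 20 controls, 20 exposed
X = rng.normal(0.0, 1.0, (40, 10))                # 10-gene set
X[y == 1, :3] += 1.5                              # exposure shifts the first 3 genes
p = permutation_pvalue(X, y, rng=rng)
```

Because a joint statistic pools evidence over the whole set, the shift in only 3 of 10 genes still yields a small p-value.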
Functional analysis of variance for association studies.
Directory of Open Access Journals (Sweden)
Olga A Vsevolozhskaya
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequence variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing the association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity.
2015-01-01
Historical, physical, critical, and apologetic memoirs of South America, with some brief remarks and notices useful to those who, by order of His Majesty, should travel through and describe those vast regions. Animal Kingdom. Volume III. By an anonymous American, in Cádiz, around 1757. First Part. Prologue. Article 1: On the quadrupeds useful to man for various purposes and for his sustenance. Cow; Horses; 'Sheep of the land', a species of camel; Vicuña; Guanacos; Wild pigs. Article 2...
Kirk, David
1994-01-01
This sequel to Graphics Gems (Academic Press, 1990) and Graphics Gems II (Academic Press, 1991) is a practical collection of computer graphics programming tools and techniques. Graphics Gems III contains a larger percentage of gems related to modeling and rendering, particularly lighting and shading. This new edition also covers image processing, numerical and programming techniques, modeling and transformations, 2D and 3D geometry and algorithms, ray tracing and radiosity, rendering, and more clever new tools and tricks for graphics programming. Volume III also includes a
Anatomic variance of the iliopsoas tendon.
Philippon, Marc J; Devitt, Brian M; Campbell, Kevin J; Michalski, Max P; Espinoza, Chris; Wijdicks, Coen A; Laprade, Robert F
2014-04-01
The iliopsoas tendon has been implicated as a generator of hip pain and a cause of labral injury due to impingement. Arthroscopic release of the iliopsoas tendon has become a preferred treatment for internal snapping hips. Traditionally, the iliopsoas tendon has been considered the conjoint tendon of the psoas major and iliacus muscles, although anatomic variance has been reported. The iliopsoas tendon consists of 2 discrete tendons in the majority of cases, arising from both the psoas major and iliacus muscles. Descriptive laboratory study. Fifty-three nonmatched, fresh-frozen, cadaveric hemipelvis specimens (average age, 62 years; range, 47-70 years; 29 male and 24 female) were used in this study. The iliopsoas muscle was exposed via a Smith-Petersen approach. A transverse incision across the entire iliopsoas musculotendinous unit was made at the level of the hip joint. Each distinctly identifiable tendon was recorded, and the distance from the lesser trochanter was recorded. The prevalence of a single-, double-, and triple-banded iliopsoas tendon was 28.3%, 64.2%, and 7.5%, respectively. The psoas major tendon was consistently the most medial tendinous structure, and the primary iliacus tendon was found immediately lateral to the psoas major tendon within the belly of the iliacus muscle. When present, an accessory iliacus tendon was located adjacent to the primary iliacus tendon, lateral to the primary iliacus tendon. Once considered a rare anatomic variant, the finding of ≥2 distinct tendinous components to the iliacus and psoas major muscle groups is an important discovery. It is essential to be cognizant of the possibility that more than 1 tendon may exist to ensure complete release during endoscopy. Arthroscopic release of the iliopsoas tendon is a well-accepted surgical treatment for iliopsoas impingement. The most widely used site for tendon release is at the level of the anterior hip joint. The findings of this novel cadaveric anatomy study suggest that
Directory of Open Access Journals (Sweden)
Stanzel Sven
2007-06-01
Background: The aim of the study was to determine the maximal tolerated dose (MTD) of gemcitabine administered every two weeks concurrently with radiotherapy, during an aggressive program of sequential and simultaneous radiochemotherapy for locally advanced, unresectable non-small cell lung cancer (NSCLC), and to evaluate the efficacy of this regimen in a phase II study. Methods: 33 patients with histologically confirmed NSCLC were enrolled in a combined radiochemotherapy protocol; 29 patients were assessable for evaluation of toxicity and tumor response. Treatment included two cycles of induction chemotherapy with gemcitabine (1200 mg/m2) and vinorelbine (30 mg/m2) on days 1, 8 and 22, 29, followed by concurrent radiotherapy (2.0 Gy/d; total dose 66.0 Gy) and chemotherapy with gemcitabine every two weeks on days 43, 57 and 71. Radiotherapy planning included [18F]fluorodeoxyglucose positron emission tomography (FDG PET) based target volume definition. 10 patients were included in the phase I study with an initial gemcitabine dose of 300 mg/m2. The dose of gemcitabine was increased in steps of 100 mg/m2 until the MTD was reached. Results: The MTD was defined at the dose level of gemcitabine 500 mg/m2 because of grade 2 (bordering on grade 3) esophagitis in all patients, resulting in a mean body weight loss of 5 kg (SD = 1.4 kg), representing 8% of the initial weight. These patients showed persisting dysphagia 3 to 4 weeks after completing radiotherapy. In accordance with expected complications such as esophagitis, dysphagia and odynophagia, we defined the MTD at this dose level, although no dose-limiting toxicity (DLT) of grade 3 was reached. In the phase I/II study, median follow-up was 15.7 months (4.1 to 42.6 months). The overall response rate after completion of therapy was 64%. The median overall survival was 19.9 (95% CI: [10.1; 29.7]) months for all eligible patients. The median disease-free survival for all patients was 8.7 (95% CI: [2.7; 14.6]) months. Conclusion
40 CFR 190.11 - Variances for unusual operations.
2010-07-01
... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...
Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.
Ashby, Neil; Patla, Bijunath
2016-04-01
Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
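The drift insensitivity described above is easy to verify numerically. This is a generic non-overlapping Hadamard variance sketch (not the paper's matrix-trace method), using white FM data plus an artificial linear frequency drift:

```python
import numpy as np

def hadamard_variance(y, m):
    """Non-overlapping Hadamard variance at tau = m * tau0 from
    fractional-frequency samples y spaced tau0 apart."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // m) * m
    ym = y[:n].reshape(-1, m).mean(axis=1)     # average into blocks of length m
    d3 = ym[2:] - 2 * ym[1:-1] + ym[:-2]       # second difference of frequency
    return np.mean(d3 ** 2) / 6.0

rng = np.random.default_rng(3)
y = rng.standard_normal(30000)                 # white FM, unit variance
drift = 1e-3 * np.arange(30000)                # linear frequency drift
hv_plain = hadamard_variance(y, m=1)
hv_drift = hadamard_variance(y + drift, m=1)   # drift cancels in the second difference
```

The second difference annihilates any linear trend in frequency, so `hv_drift` matches `hv_plain`; the Allan variance, built on first differences of frequency, would not have this property.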
Network Structure and Biased Variance Estimation in Respondent Driven Sampling.
Directory of Open Access Journals (Sweden)
Ashton M Verdery
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments, that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.
Multiperiod Mean-Variance Portfolio Optimization via Market Cloning
Energy Technology Data Exchange (ETDEWEB)
Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)
2011-08-15
The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.
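The full multiperiod clone construction is beyond a snippet, but the single-period mean-variance objective it generalizes has the closed form w* = Σ⁻¹μ/γ for the trade-off max_w w·μ − (γ/2)·wᵀΣw. A sketch with hypothetical numbers (μ, Σ, γ are illustrative):

```python
import numpy as np

# Two hypothetical assets: expected returns and return covariance.
mu = np.array([0.05, 0.08])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
gamma = 2.0                                   # risk-aversion weight on variance

# First-order condition of  max_w  w.mu - (gamma/2) w' Sigma w :
#   mu - gamma * Sigma @ w = 0   =>   w* = Sigma^{-1} mu / gamma
w = np.linalg.solve(Sigma, mu) / gamma
```

The multiperiod problem is harder precisely because the variance term couples periods, which is what the clone construction works around.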
RR-Interval variance of electrocardiogram for atrial fibrillation detection
Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.
2016-11-01
Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good atrial fibrillation detection performance.
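The variance-threshold detection described above can be sketched directly; the threshold and the RR series below are illustrative stand-ins, not clinical values from the study.

```python
import numpy as np

def af_flag(rr_ms, threshold=2000.0):
    """Flag a window of RR intervals (in ms) as possible atrial fibrillation
    when their variance exceeds a threshold (threshold is illustrative)."""
    return np.var(rr_ms) > threshold

# Sinus rhythm: intervals cluster tightly around ~800 ms (small variance).
regular = 800 + 10 * np.sin(np.linspace(0, 6.28, 60))

# AF-like rhythm: intervals scattered widely (large variance).
rng = np.random.default_rng(4)
irregular = rng.uniform(500, 1100, 60)
```

In practice the window length and threshold would be tuned on annotated ECG data.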
GPS Space Service Volume: Ensuring Consistent Utility Across GPS Design Builds for Space Users
Bauer, Frank H.; Parker, Joel Jefferson Konkl; Valdez, Jennifer Ellen
2015-01-01
GPS availability and signal strength were originally specified for users on or near the surface of the Earth, with transmitted power levels specified at the edge of the Earth (14.3 degrees). Prior to the SSV specification, on-orbit performance of GPS varied from block build to block build (IIA, IIR-M, IIF) due to antenna gain and beam width variances. Unstable on-orbit performance results in significant risk to space users. Side-lobe signals, although not specified, were expected to significantly boost GPS signal availability for users above the constellation. During GPS III Phase A, NASA noted significant discrepancies between power levels specified in GPS III specification documents and measured on-orbit performance. To stabilize the signal for high-altitude space users, a NASA/DoD team in 2003-2005 led the creation of the new Space Service Volume (SSV) definition and specifications.
Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B
2017-10-01
regression model. Analysis of variance was used to determine whether the absolute log proportional error differed by the intended injection volume. Interindividual and intraindividual deviation from the intended injection volume was also characterized. As the intended injection volumes decreased, the absolute log proportional injection volume error increased (analysis of variance, P injection volumes between physicians and pediatric PACU nurses; however, the difference in absolute bias was significantly higher for nurses with a 2-sided significance of P = .03. Clinically significant dose variation occurs when injecting volumes ≤0.5 mL. Administering small volumes of medications may result in unintended medication administration errors.
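The error metric used above, the absolute log proportional error, can be sketched as follows; the function name and the 0.02 mL deviation are illustrative assumptions, not the study's data.

```python
import numpy as np

def abs_log_proportional_error(measured, intended):
    """|log(measured / intended)|: a symmetric measure of proportional
    dosing error (0 means a perfect draw)."""
    return np.abs(np.log(np.asarray(measured, dtype=float) / intended))

# A fixed 0.02 mL absolute deviation matters far more at small intended volumes.
err_small = abs_log_proportional_error(0.07, 0.05)   # intended 0.05 mL, drew 0.07 mL
err_large = abs_log_proportional_error(1.02, 1.00)   # intended 1.00 mL, drew 1.02 mL
```

This is why the proportional error grows as intended injection volumes shrink toward 0.5 mL and below, even when the absolute syringe error stays constant.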
Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata
2015-07-31
Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.
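A minimal sketch of the ANOVA decomposition step shared by ASCA and the proposed ANOVA-TP, on hypothetical single-factor data: the data matrix is split into a grand mean, a factor-effect matrix, and residuals, and the effect matrix is then inspected by SVD (i.e., PCA). The design, offsets, and noise levels are illustrative, not the rooibos data.

```python
import numpy as np

rng = np.random.default_rng(6)
levels = np.repeat([0, 1, 2], 8)                  # one factor, 3 levels, 8 replicates
X = rng.normal(0.0, 0.1, (24, 5))                 # 24 samples x 5 measured variables
X[levels == 2] += 1.0                             # level 2 shifts every variable

# ANOVA decomposition: X = grand mean + factor effect + residual
grand = X.mean(axis=0)
effect = np.vstack([X[levels == a].mean(axis=0) - grand for a in levels])
residual = X - grand - effect

# PCA of the effect matrix via SVD: the factor's structure concentrates
# in the leading component, well separated from the noise.
s = np.linalg.svd(effect, compute_uv=False)
```

ASCA runs PCA on each effect matrix; ANOVA-TP instead deflates X by the other effects and applies PLS with target projection, but both start from this same decomposition.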
[Wavelength selection of the oximetry based on test analysis of variance].
Lin, Ling; Li, Wei; Zeng, Rui-Li; Liu, Rui-An; Li, Gang; Wu, Xiao-Rong
2014-07-01
In order to improve the precision and reliability of spectral measurement of blood oxygen saturation, and to enhance the validity of the measurement, the method of test analysis of variance was employed. A preferred wavelength combination was selected by analyzing the distribution of the oximetry coefficient at different wavelength combinations, making rational use of statistical theory. Using clinical data collected at different oxygen saturations for different combinations of wavelengths (660 and 940 nm, 660 and 805 nm, and 805 and 940 nm), a single-factor analysis-of-variance model of the oxygen saturation coefficient was established; the preferable wavelength combination can then be selected by comparative analysis of the different wavelength combinations from the photoelectric volume pulse, providing reliable intermediate data for further modeling. The experimental results showed that the wavelength combination of 660 and 805 nm responded more significantly to changes in blood oxygen saturation, and that the introduced noise and method error were smaller for this combination than for the other wavelength combinations, which could improve the measurement accuracy of oximetry. The study applied test analysis of variance to the selection of the wavelength combination in blood oxygen measurement, and the result was significant. The study provides a new idea for blood oxygen measurement and other related quantitative spectroscopic analyses. The method of test analysis of variance can help extract the valid information that represents the measured values from the spectrum.
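The single-factor test analysis of variance underlying the wavelength comparison reduces to a one-way ANOVA F statistic (between-group over within-group variance). A generic sketch with hypothetical readings, not the study's clinical data:

```python
import numpy as np

def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_tot = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b = len(groups) - 1
    df_w = n_tot - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# Hypothetical oximetry-coefficient readings at three saturation levels
# for one wavelength pair: a large F means the coefficient responds
# strongly to saturation relative to its measurement noise.
rng = np.random.default_rng(5)
g1 = rng.normal(0.6, 0.02, 15)
g2 = rng.normal(0.8, 0.02, 15)
g3 = rng.normal(1.0, 0.02, 15)
F = one_way_anova_F(g1, g2, g3)
```

Comparing F across the three candidate wavelength pairs is one way to formalize "responds more significantly to changes in blood oxygen saturation".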
Flow rate dependent extra-column variance from injection in capillary liquid chromatography.
Aggarwal, Pankaj; Liu, Kun; Sharma, Sonika; Lawson, John S; Dennis Tolley, H; Lee, Milton L
2015-02-01
Efficiency and resolution in capillary liquid chromatography (LC) can be significantly affected by extra-column band broadening, especially for isocratic separations. This is particularly a concern in evaluating column bed structure using non-retained test compounds. The band broadening due to an injector supplied with a commercially available capillary LC system was characterized from experimental measurements. The extra-column variance from the injection valve was found to have an extra-column contribution independent of the injection volume, showing an exponential dependence on flow rate. The overall extra-column variance from the injection valve was found to vary from 34 to 23 nL. A new mathematical model was derived that explains this exponential contribution of extra-column variance on chromatographic performance. The chromatographic efficiency was compromised by ∼130% for a non-retained analyte because of injection valve dead volume. The measured chromatographic efficiency was greatly improved when a new nano-flow pumping system with integrated injection valve was used.
National Oceanic and Atmospheric Administration, Department of Commerce — Zooplankton biomass data (displacement volume) collected in North Atlantic during ICNAF (International Convention for the Northwest Atlantic Fisheries) NORWESTLANT...
National Oceanic and Atmospheric Administration, Department of Commerce — Zooplankton biomass (displacement and settled volume) data collected during the International Cooperative Investigations of the Tropical Atlantic EQUALANT I,...
Mechatronic systems and materials III
Gosiewski, Zdzislaw
2009-01-01
This very interesting volume is divided into 24 sections; each of which covers, in detail, one aspect of the subject-matter: I. Industrial robots; II. Microrobotics; III. Mobile robots; IV. Teleoperation, telerobotics, teleoperated semi-autonomous systems; V. Sensors and actuators in mechatronics; VI. Control of mechatronic systems; VII. Analysis of vibration and deformation; VIII. Optimization, optimal design; IX. Integrated diagnostics; X. Failure analysis; XI. Tribology in mechatronic systems; XII. Analysis of signals; XIII. Measurement techniques; XIV. Multifunctional and smart materials;
An Analysis of Variance Framework for Matrix Sampling.
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
Gender Variance and Educational Psychology: Implications for Practice
Yavuz, Carrie
2016-01-01
The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…
Error Variance of Rasch Measurement with Logistic Ability Distributions.
Dimitrov, Dimiter M.
Exact formulas for classical error variance are provided for Rasch measurement with logistic distributions. An approximation formula with the normal ability distribution is also provided. With the proposed formulas, the additive contribution of individual items to the population error variance can be determined without knowledge of the other test…
A Broadband Beamformer Using Controllable Constraints and Minimum Variance
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Benesty, Jacob; Jensen, Jesper Rindom
2014-01-01
The minimum variance distortionless response (MVDR) and the linearly constrained minimum variance (LCMV) beamformers are two optimal approaches in the sense of noise reduction. The LCMV beamformer can also reject interferers using linear constraints at the expense of reducing the degree of freedom...
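The MVDR beamformer mentioned above minimizes output power subject to a distortionless constraint, w^H d = 1, giving w = R^{-1} d / (d^H R^{-1} d). Below is a minimal two-sensor sketch; the noise covariance R and steering vector d are hypothetical, and the example only verifies the distortionless property.

```python
import cmath

def mvdr_weights(R, d):
    """MVDR weights w = R^{-1} d / (d^H R^{-1} d) for a 2x2 Hermitian covariance R."""
    (a, b), (c, e) = R
    det = a * e - b * c
    Rinv = [[e / det, -b / det], [-c / det, a / det]]
    Rd = [Rinv[0][0] * d[0] + Rinv[0][1] * d[1],
          Rinv[1][0] * d[0] + Rinv[1][1] * d[1]]
    denom = d[0].conjugate() * Rd[0] + d[1].conjugate() * Rd[1]
    return [Rd[0] / denom, Rd[1] / denom]

# Hypothetical noise-plus-interference covariance and steering vector.
R = [[2.0 + 0j, 0.5 + 0.1j], [0.5 - 0.1j, 1.5 + 0j]]
d = [1.0 + 0j, cmath.exp(-1j * 0.6)]

w = mvdr_weights(R, d)
# The distortionless constraint: the response in the look direction is unity.
response = w[0].conjugate() * d[0] + w[1].conjugate() * d[1]
print(response)
```

The LCMV beamformer generalizes this by stacking additional linear constraints (e.g., nulls toward interferers) into a constraint matrix, at the cost of degrees of freedom for noise reduction, as the abstract notes.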
On the Endogeneity of the Mean-Variance Efficient Frontier.
Somerville, R. A.; O'Connell, Paul G. J.
2002-01-01
Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…
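The endogeneity point is easy to illustrate numerically: the frontier is computed from the mean and covariance parameters, so perturbing any parameter moves the frontier itself. A minimal two-asset sketch with hypothetical variances and covariance (the global minimum-variance portfolio, one point on the frontier):

```python
def gmv_weight(var1, var2, cov12):
    """Weight on asset 1 in the two-asset global minimum-variance portfolio."""
    return (var2 - cov12) / (var1 + var2 - 2.0 * cov12)

def port_var(w, var1, var2, cov12):
    """Variance of the portfolio (w, 1 - w)."""
    return w * w * var1 + (1 - w) * (1 - w) * var2 + 2 * w * (1 - w) * cov12

# Hypothetical parameters.
v1, v2, c = 0.04, 0.09, 0.01
w = gmv_weight(v1, v2, c)
pv = port_var(w, v1, v2, c)
print(w, pv)  # portfolio variance is below both individual variances

# Endogeneity: perturbing the covariance alone moves the whole frontier.
w2 = gmv_weight(v1, v2, 0.02)
print(w2)
```

Because the weights are functions of the parameters, any comparative-static exercise on the frontier must account for this dependence, which is the point the article argues textbooks obscure.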
Delivery Time Variance Reduction in the Military Supply Chain
2010-03-01
Thesis presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering ... March 2010. Approved for public release; distribution unlimited. AFIT-OR-MS-ENS-10-02.
The asymptotic variance of departures in critically loaded queues
A. Al Hanbali; M.R.H. Mandjes (Michel); Y. Nazarathy (Yoni); W. Whitt
2010-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load rho equals 1, and prove that the asymptotic variance rate satisfies lim_t Var D(t)/t = lambda
76 FR 78698 - Proposed Revocation of Permanent Variances
2011-12-19
... Occupational Safety and Health Administration Proposed Revocation of Permanent Variances AGENCY: Occupational... short and plain statement detailing (1) how the proposed revocation would affect the requesting party..., subpart L. The following table provides information about the variances proposed for revocation by...
Adjustment for heterogeneous variances due to days in milk and ...
African Journals Online (AJOL)
ARC-IRENE
Adjustment of heterogeneous variances and a calving year effect in test-day ... Regression Test-Day Model (FRTDM), which assumes equal variances of the response variable at different ... random residual error ... records were included in the selection, while in the unadjusted data set, lactations consisting of six and more.
Characterizing the evolution of genetic variance using genetic covariance tensors.
Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W
2009-06-12
Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.
Productive Failure in Learning the Concept of Variance
Kapur, Manu
2012-01-01
In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…
Time variance effects and measurement error indications for MLS measurements
DEFF Research Database (Denmark)
Liu, Jiyuan
1999-01-01
Mathematical characteristics of Maximum-Length-Sequences are discussed, and effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations in MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results of the simulations show that the proposed MLS coherence can give an indication of time-variance effects.
Confidence Intervals of Variance Functions in Generalized Linear Model
Institute of Scientific and Technical Information of China (English)
Yong Zhou; Dao-ji Li
2006-01-01
In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs), when designs are fixed points and random variables, respectively. Bias-corrected confidence bands are proposed for the (conditional) variance by local linear smoothers. Nonparametric techniques are developed in deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and the bias-corrected confidence bands are shown to have asymptotically correct coverage properties. A small simulation is performed when the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.
Research on variance of subnets in network sampling
Institute of Scientific and Technical Information of China (English)
Qi Gao; Xiaoting Li; Feng Pan
2014-01-01
In recent research on network sampling, some sampling concepts are misunderstood, and the variance of subnets is not taken into account. We propose the correct definition of the sample and sampling rate in network sampling, as well as the formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, a random network and a small-world network to explore the variance in network sampling. As shown by the results, snowball sampling obtains the largest variance of subnets, but does well in capturing the network structure. The variances of networks sampled by the hub and random strategies are much smaller. The hub strategy performs well in reflecting the property of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.
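The strategy comparison described above can be sketched with stdlib Python. The graph model, sample size, and repetition counts below are hypothetical stand-ins, and `snowball_sample` is a simple breadth-first variant, not necessarily the authors' exact procedure.

```python
import random

random.seed(1)

# Hypothetical Erdos-Renyi-style random graph stored as an adjacency dict.
n, p = 200, 0.03
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

def edges_in(subnet, graph):
    """Number of edges in the subnet induced by a set of nodes."""
    return sum(1 for u in subnet for v in graph[u] if v in subnet) // 2

def random_node_sample(k):
    """Uniform random sample of k nodes."""
    return set(random.sample(range(n), k))

def snowball_sample(k):
    """Grow a subnet from a random seed by following neighbours breadth-first."""
    seed = random.randrange(n)
    subnet, frontier = {seed}, [seed]
    while frontier and len(subnet) < k:
        u = frontier.pop(0)
        for v in adj[u]:
            if v not in subnet and len(subnet) < k:
                subnet.add(v)
                frontier.append(v)
    return subnet

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

reps, k = 200, 30
var_random = variance([edges_in(random_node_sample(k), adj) for _ in range(reps)])
var_snowball = variance([edges_in(snowball_sample(k), adj) for _ in range(reps)])
print(var_random, var_snowball)
```

Repeating each strategy and comparing the spread of a subnet statistic (here, the induced edge count) across repetitions is the "variance of subnets" idea the paper formalizes.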
Utility functions predict variance and skewness risk preferences in monkeys.
Genest, Wilfried; Stauffer, William R; Schultz, Wolfram
2016-07-26
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively skewed gambles over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.
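Second-order stochastic dominance of the kind tested here follows directly from expected utility with a concave utility function. A toy sketch (the square-root utility and the gamble payoffs are assumptions for illustration, not the paper's fitted utilities):

```python
import math

def expected_utility(outcomes, probs, u):
    """Expected utility of a discrete gamble under utility function u."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

u = math.sqrt  # concave utility: risk-averse over this payoff range

# Two gambles with identical expected value (5) but different variance.
low_var = ([4.0, 6.0], [0.5, 0.5])
high_var = ([1.0, 9.0], [0.5, 0.5])

eu_low = expected_utility(*low_var, u)
eu_high = expected_utility(*high_var, u)
print(eu_low, eu_high)  # concave utility ranks the low-variance gamble higher
```

A convex utility segment would reverse the ranking, which is how a single empirically measured utility function can predict high-variance preferences at low EVs and low-variance preferences at high EVs.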
Hickey, J.M.; Veerkamp, R.F.; Calus, M.P.L.; Mulder, H.A.; Thompson, R.
2009-01-01
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo
Stokes, George Gabriel; Larmor, Joseph
2010-06-01
Volume 1: Preface; Part I. Personal and Biographical; Part II. General Scientific Career; Part IIIa. Special Scientific Correspondence; Appendix; Index. Volume 2: Part. III. Special Scientific Correspondence; Index.
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
DEFF Research Database (Denmark)
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.
Filtered kriging for spatial data with heterogeneous measurement error variances.
Christensen, William F
2011-09-01
When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
Meta-analysis of ratios of sample variances.
Prendergast, Luke A; Staudte, Robert G
2016-05-20
When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, that require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equality-of-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement and the validity of the approaches is reinforced by simulation studies and an application to a real data set.
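A minimal version of the proposed meta-estimate, a ratio of sample variances per study pooled on the log scale, can be sketched as follows. The study data are hypothetical, and the pooling is unweighted, which is simpler than a proper inverse-variance meta-analysis.

```python
import math
import statistics

def sample_var(xs):
    """Unbiased sample variance."""
    return statistics.variance(xs)

# Hypothetical two-arm studies: (treatment values, control values).
studies = [
    ([5.1, 6.0, 5.5, 4.9, 6.2], [5.0, 5.2, 4.8, 5.1, 4.9]),
    ([7.2, 8.1, 6.9, 7.5], [7.0, 7.1, 6.8, 7.3]),
    ([3.3, 2.9, 3.8, 3.1, 3.6], [3.2, 3.0, 3.4, 3.1]),
]

# Per-study log variance ratios, then a simple overall estimate back-transformed.
log_ratios = [math.log(sample_var(t) / sample_var(c)) for t, c in studies]
overall_ratio = math.exp(sum(log_ratios) / len(log_ratios))
print(overall_ratio)  # a ratio near 1 would support the equal-variance assumption
```

In these made-up data the treatment arms are consistently more variable, so the overall ratio is well above 1, the situation in which the paper recommends moving away from equal-variance SMD methods.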
Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.
Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H
2014-06-01
Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.
Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation
Wu, Dong L.; Eckermann, Stephen D.
2008-01-01
The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λ_z ≲ 5 km and horizontal along-track wavelengths λ_y ≈ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.
Genetically Determined Variation in Lysis Time Variance in the Bacteriophage φX174.
Baker, Christopher W; Miller, Craig R; Thaweethai, Tanayott; Yuan, Jeffrey; Baker, Meghan Hollibaugh; Joyce, Paul; Weinreich, Daniel M
2016-04-07
Researchers in evolutionary genetics recently have recognized an exciting opportunity in decomposing beneficial mutations into their proximal, mechanistic determinants. The application of methods and concepts from molecular biology and life history theory to studies of lytic bacteriophages (phages) has allowed them to understand how natural selection sees mutations influencing life history. This work motivated the research presented here, in which we explored whether, under consistent experimental conditions, small differences in the genome of bacteriophage φX174 could lead to altered life history phenotypes among a panel of eight genetically distinct clones. We assessed the clones' phenotypes by applying a novel statistical framework to the results of a serially sampled parallel infection assay, in which we simultaneously inoculated each of a large number of replicate host volumes with ∼1 phage particle. We sequentially plated the volumes over the course of infection and counted the plaques that formed after incubation. These counts served as a proxy for the number of phage particles in a single volume as a function of time. From repeated assays, we inferred significant, genetically determined heterogeneity in lysis time and burst size, including lysis time variance. These findings are interesting in light of the genetic and phenotypic constraints on the single-protein lysis mechanism of φX174. We speculate briefly on the mechanisms underlying our results, and we discuss the potential importance of lysis time variance in viral evolution.
Genetically Determined Variation in Lysis Time Variance in the Bacteriophage φX174
Directory of Open Access Journals (Sweden)
Christopher W. Baker
2016-04-01
Researchers in evolutionary genetics recently have recognized an exciting opportunity in decomposing beneficial mutations into their proximal, mechanistic determinants. The application of methods and concepts from molecular biology and life history theory to studies of lytic bacteriophages (phages) has allowed them to understand how natural selection sees mutations influencing life history. This work motivated the research presented here, in which we explored whether, under consistent experimental conditions, small differences in the genome of bacteriophage φX174 could lead to altered life history phenotypes among a panel of eight genetically distinct clones. We assessed the clones' phenotypes by applying a novel statistical framework to the results of a serially sampled parallel infection assay, in which we simultaneously inoculated each of a large number of replicate host volumes with ∼1 phage particle. We sequentially plated the volumes over the course of infection and counted the plaques that formed after incubation. These counts served as a proxy for the number of phage particles in a single volume as a function of time. From repeated assays, we inferred significant, genetically determined heterogeneity in lysis time and burst size, including lysis time variance. These findings are interesting in light of the genetic and phenotypic constraints on the single-protein lysis mechanism of φX174. We speculate briefly on the mechanisms underlying our results, and we discuss the potential importance of lysis time variance in viral evolution.
Variance decomposition of apolipoproteins and lipids in Danish twins
DEFF Research Database (Denmark)
Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A
2007-01-01
OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent the twin studies have been used in bivariate or multivariate analysis to elucidate common genetic factors to two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect...
Variance computations for functional of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model
Leunglung Chan; Eckhard Platen
2015-01-01
This paper studies volatility derivatives such as variance swaps, volatility swaps and options on variance in the modified constant elasticity of variance model using the benchmark approach. The analytical expressions of pricing formulas for variance swaps are presented. In addition, the numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.
Early cephalometric characteristics in Class III malocclusion
Directory of Open Access Journals (Sweden)
Vanessa Costa Farias
2012-04-01
OBJECTIVE: Early identification of craniofacial morphological characteristics allows orthopedic segmented interventions to attenuate dentoskeletal discrepancies, which may be partially disguised by natural dental compensation. The objective was to investigate the morphological characteristics of Brazilian children with Class III malocclusion, in stages I and II of cervical vertebrae maturation, and compare them with the characteristics of Class I control patients. METHODS: Pre-orthodontic treatment records of 20 patients with Class III malocclusion and 20 control Class I patients, matched by the same skeletal maturity index and sex, were selected. The craniofacial structures and their relationships were divided into different categories for analysis. Angular and linear measures were adopted from the analyses previously described by Downs, Jarabak, Jacobson and McNamara. The differences found between the groups of Class III patients and Class I control group, both subdivided according to the stage of cervical vertebrae maturation (I or II), were assessed by analysis of variance (ANOVA), complemented by Bonferroni's multiple mean comparisons test. RESULTS: The analysis of variance showed statistically significant differences in the different studied groups, between the mean values found for some angular (SNA, SNB, ANB) and linear variables (Co - Gn, N - Perp Pog, Go - Me, Wits, S - Go, Ar - Go). CONCLUSION: Assessed children displaying Class III malocclusion show normal anterior base of skull and maxilla, and anterior positioning of the mandible partially related to increased posterior facial height with consequent mandibular counterclockwise rotation.
Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin
2009-02-09
Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples, which are computationally feasible, is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive and these made use of information on either the variance of the estimated breeding value and on the variance of the true breeding value minus the estimated breeding value or on the covariance between the true and estimated breeding values.
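The relationship between the competitive formulations can be illustrated with a toy model: if the estimated breeding value is a BLUP, then Cov(u, u_hat) = Var(u_hat), so the prediction error variance (PEV) can be obtained either directly as Var(u - u_hat) or as Var(u) - Var(u_hat). Below is a Monte Carlo sketch under a deliberately simple one-record model, which is an assumption for illustration, not the study's genetic evaluation model.

```python
import random
import statistics

random.seed(42)

# One-record model: u ~ N(0, vu), y = u + e with e ~ N(0, ve),
# and the BLUP is u_hat = h*y with regression h = vu / (vu + ve).
vu, ve = 1.0, 1.0
h = vu / (vu + ve)

n = 20000
u = [random.gauss(0, vu ** 0.5) for _ in range(n)]
y = [ui + random.gauss(0, ve ** 0.5) for ui in u]
u_hat = [h * yi for yi in y]

# Two Monte Carlo formulations of the PEV:
pev_direct = statistics.variance([ui - hi for ui, hi in zip(u, u_hat)])
pev_via_var = statistics.variance(u) - statistics.variance(u_hat)

exact = (1 - h) * vu  # closed-form PEV for this toy model
print(pev_direct, pev_via_var, exact)
```

Both formulations converge to the same value, but, as the abstract emphasizes, their convergence rates differ with sample count and with the level of the PEV itself, which is what makes the choice of formulation practically important.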
Minimum variance estimation of yield parameters of rubber tree with ...
African Journals Online (AJOL)
2013-03-01
Mar 1, 2013 ... STAMP, an OxMetrics modular software system for time series analysis, was used to estimate the yield ... underlying regression techniques ... Kalman filter minimum variance estimation of rubber tree yield parameters.
Detecting Pulsars with Interstellar Scintillation in Variance Images
Dai, S; Bell, M E; Coles, W A; Hobbs, G; Ekers, R D; Lenc, E
2016-01-01
Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show th...
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
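Antithetic variates are among the classical variance reduction techniques of the kind surveyed here; whether they are one of the specific techniques the authors adapt to homogenization is not stated in this excerpt. A generic sketch on a one-dimensional Monte Carlo integral:

```python
import random

random.seed(0)

def f(x):
    return x * x  # monotone on [0, 1], the case where antithetic pairing helps

n = 5000  # samples per estimate

def plain_estimates(reps):
    """Independent plain Monte Carlo estimates of E[f(U)], U ~ Uniform(0, 1)."""
    return [sum(f(random.random()) for _ in range(n)) / n for _ in range(reps)]

def antithetic_estimates(reps):
    """Estimates using antithetic pairs (U, 1 - U) with the same total budget."""
    out = []
    for _ in range(reps):
        s = 0.0
        for _ in range(n // 2):
            u = random.random()
            s += 0.5 * (f(u) + f(1.0 - u))  # negatively correlated pair
        out.append(s / (n // 2))
    return out

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

v_plain = var(plain_estimates(100))
v_anti = var(antithetic_estimates(100))
print(v_plain, v_anti)  # the antithetic estimator has markedly smaller variance
```

The same budget of function evaluations yields a much tighter estimator, the kind of gain that motivates transplanting such techniques into the repeated corrector-problem solves of stochastic homogenization.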
40 CFR 141.4 - Variances and exemptions.
2010-07-01
... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a... maintenance of the distribution system. ...
Fundamental Indexes As Proxies For Mean-Variance Efficient Portfolios
National Research Council Canada - National Science Library
Kathleen Hodnett; Gearé Botes; Khumbudzo Daswa; Kimberly Davids; Emmanuel Che Fongwa; Candice Fortuin
2014-01-01
Mean-variance efficiency was first explained by Markowitz (1952), who derived an efficient frontier composed of portfolios with the highest expected returns for a given level of risk borne by the investor...
TESTS FOR VARIANCE COMPONENTS IN VARYING COEFFICIENT MIXED MODELS
National Research Council Canada - National Science Library
Zaixing Li; Yuedong Wang; Ping Wu; Wangli Xu; Lixing Zhu
2012-01-01
.... To address the question of whether a varying coefficient mixed model can be reduced to a simpler varying coefficient model, we develop one-sided tests for the null hypothesis that all the variance components are zero...
Estimating the generalized concordance correlation coefficient through variance components.
Carrasco, Josep L; Jover, Lluís
2003-12-01
The intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC) are two of the most popular measures of agreement for variables measured on a continuous scale. Here, we demonstrate that ICC and CCC are the same measure of agreement estimated in two ways: by the variance components procedure and by the moment method. We propose estimating the CCC using variance components of a mixed effects model, instead of the common method of moments. With the variance components approach, the CCC can easily be extended to more than two observers, and adjusted using confounding covariates, by incorporating them in the mixed model. A simulation study is carried out to compare the variance components approach with the moment method. The importance of adjusting by confounding covariates is illustrated with a case example.
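The moment-method estimate of the CCC that the variance-components approach is compared against is short enough to state directly: CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2). The paired ratings below are hypothetical.

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient via the method of moments."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

# Hypothetical paired ratings from two observers.
obs1 = [10.1, 12.3, 9.8, 11.5, 13.0, 10.7]
obs2 = [10.4, 12.0, 10.1, 11.2, 12.6, 11.0]

print(ccc(obs1, obs2))  # close to 1: strong agreement
print(ccc(obs1, obs1))  # perfect agreement gives exactly 1
```

The moment method stops at two observers; the variance-components route the paper advocates replaces these sums with mixed-model variance estimates, which is what allows more observers and covariate adjustment.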
Variance estimation in neutron coincidence counting using the bootstrap method
Energy Technology Data Exchange (ETDEWEB)
Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)
2015-09-11
In this study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is constructed. The aim of the present study is to give a full description of the bootstrapping procedure and to validate, through experimental results, the reliability of the estimated variance. Results indicate both very good agreement between the measured variance and the variance obtained through the bootstrap method, and robustness of the method with respect to the duration of the measurement and the bootstrap parameters.
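The resampling idea described above can be sketched minimally as follows; this is a generic bootstrap of the sample mean with hypothetical data, not the authors' NMC-specific implementation:

```python
import random
import statistics

def bootstrap_variance(sample, n_boot=2000, rng=None):
    """Estimate the variance of the sample mean by bootstrap resampling.

    Each pseudo-sample is drawn with replacement from the original data;
    the spread of the pseudo-sample means (the bootstrap distribution)
    estimates the variance of the original estimator.
    """
    rng = rng or random.Random(0)
    n = len(sample)
    boot_means = [statistics.fmean(rng.choices(sample, k=n))
                  for _ in range(n_boot)]
    return statistics.pvariance(boot_means)

# Hypothetical measurement data; for well-behaved data the bootstrap
# estimate is close to the textbook value s^2 / n.
data = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 5.2, 4.8, 5.0]
```

The same scheme applies to any statistic, which is what makes it attractive for multiplicity counting, where closed-form variance expressions are cumbersome.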
Culture of Schools. Final Report. Volume IV.
American Anthropological Association, Washington, DC.
The final volume of this 4-volume report contains further selections from "Anthropological Perspectives on Education," a monograph to be published by Basic Books of New York. (Other selections are in Vol. III, SP 003 902.) Monograph selections appearing in this volume are: "Great Tradition, Little Tradition, and Formal Education;""Indians,…
The effect of errors-in-variables on variance component estimation
Xu, Peiliang
2016-08-01
... negative values for variance components. VC estimation in EIV models remains difficult and challenging; and (iii) both the bias-corrected weighted LS estimate and the N-calibrated weighted LS estimate obviously outperform the weighted LS estimate. The intuitively N-calibrated weighted LS estimate is computationally less expensive and shown to statistically perform even better than the bias-corrected weighted LS estimate in producing an almost unbiased estimate of parameters.
Dimension free and infinite variance tail estimates on Poisson space
Breton, J. C.; Houdré, C.; Privault, N.
2004-01-01
Concentration inequalities are obtained on Poisson space, for random functionals with finite or infinite variance. In particular, dimension free tail estimates and exponential integrability results are given for the Euclidean norm of vectors of independent functionals. In the finite variance case these results are applied to infinitely divisible random variables such as quadratic Wiener functionals, including Lévy's stochastic area and the square norm of Brownian paths. In the infinite vari...
The asymptotic variance of departures in critically loaded queues
Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.
2011-01-01
We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t)/t = λ(1 - 2/π)(c_a² + ...
Wavelet Variance Analysis of EEG Based on Window Function
Institute of Scientific and Technical Information of China (English)
ZHENG Yuan-zhuang; YOU Rong-yi
2014-01-01
A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of the electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of the epileptic EEGs is lower than that of the normal EEGs.
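A per-level wavelet variance can be sketched with the Haar transform; this is a generic illustration of the quantity being analyzed, not the paper's specific window-function method:

```python
import math

def haar_wavelet_variance(signal, levels=3):
    """Variance of Haar detail coefficients at each decomposition level.

    One Haar step maps each pair (a, b) to an approximation (a + b)/sqrt(2)
    and a detail (a - b)/sqrt(2); the per-level variance of the details
    gives a simple wavelet-variance spectrum of the signal.
    """
    variances = []
    approx = list(signal)
    for _ in range(levels):
        if len(approx) < 2:
            break
        detail = [(a - b) / math.sqrt(2)
                  for a, b in zip(approx[0::2], approx[1::2])]
        approx = [(a + b) / math.sqrt(2)
                  for a, b in zip(approx[0::2], approx[1::2])]
        m = sum(detail) / len(detail)
        variances.append(sum((d - m) ** 2 for d in detail) / len(detail))
    return variances
```

A constant signal has zero variance at every level, while irregular signals spread variance across levels; the paper's comparison of epileptic and normal EEGs rests on this kind of spectrum.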
Global Variance Risk Premium and Forex Return Predictability
Aloosh, Arash
2014-01-01
In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...
Multiperiod mean-variance efficient portfolios with endogenous liabilities
Markus LEIPPOLD; Trojani, Fabio; Vanini, Paolo
2011-01-01
We study the optimal policies and mean-variance frontiers (MVF) of a multiperiod mean-variance optimization of assets and liabilities (AL). This makes the analysis more challenging than for a setting based on purely exogenous liabilities, in which the optimization is only performed on the assets while keeping liabilities fixed. We show that, under general conditions for the joint AL dynamics, the optimal policies and the MVF can be decomposed into an orthogonal set of basis returns using exte...
Estimating Income Variances by Probability Sampling: A Case Study
Directory of Open Access Journals (Sweden)
Akbar Ali Shah
2010-08-01
The main focus of the study is to estimate variability in the income distribution of households by conducting a survey. The variances in income distribution have been calculated by probability sampling techniques. The variances are compared and relative gains are also obtained. It is concluded that the income distribution has improved compared to the first Household Income and Expenditure Survey (HIES) conducted in Pakistan in 1993-94.
Testing for Causality in Variance Using Multivariate GARCH Models
Christian M. Hafner; Herwartz, Helmut
2008-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causality in var...
Testing for causality in variance using multivariate GARCH models
Hafner, Christian; Herwartz, H.
2004-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual based testing is to specify a multivariate volatility model, such as multivariate GARCH (or BEKK), and construct a Wald test on noncausality in variance. We compare both approaches to testing causa...
The evolution and consequences of sex-specific reproductive variance.
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted-for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
Variance estimation in the analysis of microarray data
Wang, Yuedong
2009-04-01
Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
The phenome-wide distribution of genetic variance.
Blows, Mark W; Allen, Scott L; Collet, Julie M; Chenoweth, Stephen F; McGuigan, Katrina
2015-07-01
A general observation emerging from estimates of additive genetic variance in sets of functionally or developmentally related traits is that much of the genetic variance is restricted to few trait combinations as a consequence of genetic covariance among traits. While this biased distribution of genetic variance among functionally related traits is now well documented, how it translates to the broader phenome and therefore any trait combination under selection in a given environment is unknown. We show that 8,750 gene expression traits measured in adult male Drosophila serrata exhibit widespread genetic covariance among random sets of five traits, implying that pleiotropy is common. Ultimately, to understand the phenome-wide distribution of genetic variance, very large additive genetic variance-covariance matrices (G) are required to be estimated. We draw upon recent advances in matrix theory for completing high-dimensional matrices to estimate the 8,750-trait G and show that large numbers of gene expression traits genetically covary as a consequence of a single genetic factor. Using gene ontology term enrichment analysis, we show that the major axis of genetic variance among expression traits successfully identified genetic covariance among genes involved in multiple modes of transcriptional regulation. Our approach provides a practical empirical framework for the genetic analysis of high-dimensional phenome-wide trait sets and for the investigation of the extent of high-dimensional genetic constraint.
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. © 2011, The International Biometric Society.
Analytic variance estimates of Swank and Fano factors.
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank
2014-07-01
Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
Why risk is not variance: an expository note.
Cox, Louis Anthony Tony
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
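The note's central point can be illustrated numerically. Assuming a simple linear mean-variance score U = mean - k * variance (an illustrative trade-off, not the note's own notation), a prospect offering only a chance of gain and no possibility of loss can score below the status quo:

```python
def mv_score(outcomes, probs, k):
    """Mean-variance score U = mean - k * variance for a discrete prospect
    (a simple linear trade-off, assumed here for illustration)."""
    mean = sum(p * x for x, p in zip(outcomes, probs))
    var = sum(p * (x - mean) ** 2 for x, p in zip(outcomes, probs))
    return mean - k * var

# Prospect A: receive nothing with certainty.
u_a = mv_score([0.0], [1.0], k=0.05)
# Prospect B: 1% chance of gaining 100, otherwise nothing; no loss possible.
# mean = 1, variance = 99, so U = 1 - 0.05 * 99 = -3.95 < 0 = U_A:
# the mean-variance rule rejects a prospect that can only gain.
u_b = mv_score([100.0, 0.0], [0.01, 0.99], k=0.05)
```

A decision maker who prefers higher probabilities of a fixed gain should prefer B to A, which is exactly the rationality principle the note shows mean-variance analysis can violate.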
Analytic variance estimates of Swank and Fano factors
Energy Technology Data Exchange (ETDEWEB)
Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov [US Food and Drug Administration, Silver Spring, Maryland 20993 (United States)
2014-07-15
Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
Genetic heterogeneity of residual variance in broiler chickens
Directory of Open Access Journals (Sweden)
Hill William G
2006-11-01
Aims were to estimate the extent of genetic heterogeneity in environmental variance. Data comprised 99 535 records of 35-day body weights from broiler chickens reared in a controlled environment. Residual variance within dam families was estimated using ASREML, after fitting fixed effects such as genetic groups and hatches, for each of 377 genetically contemporary sires with a large number of progeny (> 100 males or females each). Residual variance was computed separately for male and female offspring, and after correction for sampling, strong evidence for heterogeneity was found, with the standard deviation between sires in within-family variance amounting to 15–18% of its mean. Reanalysis using log-transformed data gave similar results, and elimination of 2–3% of outlier data reduced the heterogeneity, but it was still over 10%. The correlation between estimates for males and females was low, however. The correlation between sire effects on progeny mean and residual variance for body weight was small and negative (-0.1). Using a data set bigger than any yet presented and on a trait measurable in both sexes, this study has shown evidence for heterogeneity in the residual variance, which could not be explained by segregation of major genes unless very few determined the trait.
III-V semiconductor materials and devices
Malik, R J
1989-01-01
The main emphasis of this volume is on III-V semiconductor epitaxial and bulk crystal growth techniques. Chapters are also included on material characterization and ion implantation. In order to put these growth techniques into perspective a thorough review of the physics and technology of III-V devices is presented. This is the first book of its kind to discuss the theory of the various crystal growth techniques in relation to their advantages and limitations for use in III-V semiconductor devices.
Robertson, Brant E; Dunlop, James S; McLure, Ross J; Stark, Daniel P; McLeod, Derek
2014-01-01
Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z~7 to >~65% at z~10. Previous studies of high redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.
Robertson, Brant E.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; Stark, Dan P.; McLeod, Derek
2014-12-01
Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z ~ 7 to >~ 65% at z ~ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.
On the Design of Attitude-Heading Reference Systems Using the Allan Variance.
Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis
2016-04-01
The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results over short periods, but their accuracy degrades rapidly over longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
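The core computation behind this kind of sensor characterization is simple; a minimal sketch of the non-overlapped Allan variance (cluster size and white-noise demonstration data are illustrative assumptions, not taken from the paper):

```python
import random

def allan_variance(samples, m):
    """Non-overlapped Allan variance at cluster size m:
    sigma^2(tau) = 0.5 * mean((ybar_{k+1} - ybar_k)^2),
    where ybar_k are averages of consecutive blocks of m samples."""
    n_clusters = len(samples) // m
    means = [sum(samples[k * m:(k + 1) * m]) / m for k in range(n_clusters)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n_clusters - 1)]
    return 0.5 * sum(diffs) / len(diffs)

# For white noise of variance sigma^2, the Allan variance at cluster size m
# is approximately sigma^2 / m, which is easy to verify by simulation.
rng = random.Random(1)
white = [rng.gauss(0.0, 1.0) for _ in range(100000)]
```

In practice the Allan variance is evaluated over a range of cluster sizes; the slope of the resulting log-log curve identifies noise terms such as angle random walk and bias instability.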
CMB-S4 and the hemispherical variance anomaly
O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.
2017-09-01
Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.
Waste Isolation Pilot Plant No-migration variance petition. Addendum: Volume 7, Revision 1
Energy Technology Data Exchange (ETDEWEB)
1990-03-01
This report describes various aspects of the Waste Isolation Pilot Plant (WIPP) including design data, waste characterization, dissolution features, ground water hydrology, natural resources, monitoring, general geology, and the gas generation/test program.
How does variance in fertility change over the demographic transition?
Hruschka, Daniel J; Burger, Oskar
2016-04-19
Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).
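The Poisson-consistency check at the heart of this analysis amounts to comparing the variance of completed-fertility counts to their mean; a sketch with hypothetical count data (the samples below are illustrative, not from the surveys):

```python
def dispersion_index(counts):
    """Variance-to-mean ratio of count data; values near 1 are consistent
    with a Poisson process, while values well below 1 indicate
    under-dispersion, e.g. reproduction targeting an ideal family size."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

# Hypothetical completed-fertility samples: one spread out (Poisson-like),
# one clustered tightly around an ideal of two children.
spread = [0, 1, 1, 2, 2, 2, 3, 3, 4, 5, 1, 2, 4, 0, 3]
ideal_two = [2, 2, 2, 2, 1, 2, 3, 2, 2, 2, 2, 1, 2, 3, 2]
```

Under a Poisson process the index is near 1, which bounds how much of the variance can reflect stable individual differences; the markedly sub-Poisson index of the second sample mirrors the deviation the authors report in the lowest-fertility populations.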
Variance-based fingerprint distance adjustment algorithm for indoor localization
Institute of Scientific and Technical Information of China (English)
Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang
2015-01-01
The multipath effect and movements of people in indoor environments lead to inaccurate localization. Through tests, calculation and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that variance decreases as the RSSI mean increases, VFDA calculates the RSSI variance from the mean value of received RSSIs to obtain a correction weight. VFDA then adjusts the fingerprint distances with this correction weight. Besides, a threshold value is applied to VFDA to improve its performance further. VFDA and VFDA with the threshold value were applied in two typical real indoor environments deployed with several Wi-Fi access points: one a square lab room, the other a long and narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both VFDA and VFDA with the threshold have better positioning accuracy and environmental adaptability than the current typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm, with similar computational costs.
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
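The scalar/vector wind-speed distinction discussed above can be sketched with a small helper (an illustrative formulation, not the authors' derivation):

```python
import math

def wind_means(speeds, directions_deg):
    """Scalar and vector mean wind speed from separate speed and direction
    records (e.g. a cup anemometer plus a wind vane). The vector mean is
    always <= the scalar mean, and the gap widens as direction fluctuations
    grow, which is why the two must be distinguished in weak winds."""
    n = len(speeds)
    u = [s * math.sin(math.radians(d)) for s, d in zip(speeds, directions_deg)]
    v = [s * math.cos(math.radians(d)) for s, d in zip(speeds, directions_deg)]
    scalar_mean = sum(speeds) / n
    vector_mean = math.hypot(sum(u) / n, sum(v) / n)
    return scalar_mean, vector_mean
```

With a steady direction the two means coincide; with meandering low winds the vector mean collapses toward zero while the scalar mean does not, and lateral and longitudinal velocity variances then carry comparable weight in dispersion calculations.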
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation of the weighted least squares problem was presented in an earlier work. This formulation allows one to employ directly the errors-in-variables models, which completely describe the covariance matrices of the observables. However, an important question, namely what accuracy level can be achieved, has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models used in the adjustment, which in turn calls for improving the stochastic models of the measurement noises. Therefore the issue of determining the stochastic model of the observables in a combined adjustment with heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each of the heterogeneous observations. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in the combined adjustment for calibrating the geoid error model.
Analysis of Variance Components for Genetic Markers with Unphased Genotypes.
Wang, Tao
2016-01-01
An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of the model parameters in fitting the GMA model. We then construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and for two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects can be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model can be more beneficial than the GLM for detecting allelic interactions.
Astuti, Valerio; Rovelli, Carlo
2016-01-01
Building on a technical result by Brunnemann and Rideout on the spectrum of the volume operator in Loop Quantum Gravity, we show that the dimension of the space of quadrivalent states (with finite-volume individual nodes) describing a region with total volume smaller than $V$ is finite, bounded by $V \log V$. This allows us to introduce the notion of "volume entropy": the von Neumann entropy associated to the measurement of volume.
Sensitivity to Estimation Errors in Mean-variance Models
Institute of Scientific and Technical Information of China (English)
Zhi-ping Chen; Cai-e Zhao
2003-01-01
In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances, and covariances, this paper investigates the joint effect of estimation errors in means, variances, and covariances on the efficient portfolio's weights. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The rate of change of the efficient portfolio's weights with respect to variations in the risk-return estimates is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not very sensitive to estimation errors in means and variances. Moreover, we point out the extreme cases that might cause stability problems and how to avoid them in practice. Preliminary numerical results are provided as an illustration of our theoretical results.
Expectation Values and Variance Based on Lp-Norms
Directory of Open Access Journals (Sweden)
George Livadiotis
2012-11-01
This analysis introduces a generalization of the basic statistical concepts of expectation values and variance to non-Euclidean metrics induced by Lp-norms. The non-Euclidean Lp means are defined by exploiting the fundamental property of minimizing the Lp deviations that compose the Lp variance. These Lp expectation values embody a generic formal scheme for characterizing means. With the p-norm as a free parameter, both the Lp-normed expectation values and their variance are flexible enough to analyze new phenomena that cannot be described by classical statistics based on Euclidean norms. The new statistical approach provides insights into regression theory and statistical physics. Several illuminating examples are examined.
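The minimization property in the abstract above is easy to demonstrate numerically. As a hedged sketch (our own naming, not the paper's notation), the Lp mean is the value m minimizing the mean p-th power deviation; for p = 2 this recovers the arithmetic mean, and for p = 1 a median. Since the objective is convex for p ≥ 1, a simple ternary search suffices:

```python
def lp_mean(xs, p, iters=200):
    """Lp expectation value: the m minimizing sum(|x - m|**p).
    Convex objective for p >= 1, so ternary search converges."""
    def cost(m):
        return sum(abs(x - m) ** p for x in xs)
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def lp_variance(xs, p):
    """Lp variance: the minimized mean p-th power deviation."""
    m = lp_mean(xs, p)
    return sum(abs(x - m) ** p for x in xs) / len(xs)
```

For example, `lp_mean([1, 2, 3, 10], 2)` converges to the arithmetic mean 4.0, while `lp_mean([0, 0, 0, 10], 1)` converges to the median 0.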
CMB-S4 and the Hemispherical Variance Anomaly
O'Dwyer, Marcio; Knox, Lloyd; Starkman, Glenn D
2016-01-01
Cosmic Microwave Background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the northern and southern Ecliptic hemispheres. In this context, the northern hemisphere displays an anomalously low variance while the southern hemisphere appears unremarkable (consistent with expectations from the best-fitting theory, $\\Lambda$CDM). While this is a well established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground ba...
Variance inflation in high dimensional Support Vector Machines
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high-dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors is not the full input space. Hence, when applying the model to future data, the model is effectively blind to the missed orthogonal subspace. This can lead to an inflated variance of hidden variables estimated in the training set, and when the model is applied to test data we may find that the hidden variables follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...
Variance swap payoffs, risk premia and extreme market conditions
DEFF Research Database (Denmark)
Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco
This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic constraint. Our approach, which requires only option-implied volatilities and daily returns for the underlying, provides measurement-error-free estimates of the part of the VRP related to normal market conditions, and allows constructing variables indicating agents' expectations under extreme market conditions. The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP, which turns out to be priced when considering Fama and French portfolios.
Saturation of number variance in embedded random-matrix ensembles.
Prakash, Ravi; Pandey, Akhilesh
2016-05-01
We study fluctuation properties of embedded random matrix ensembles of noninteracting particles. For ensemble of two noninteracting particle systems, we find that unlike the spectra of classical random matrices, correlation functions are nonstationary. In the locally stationary region of spectra, we study the number variance and the spacing distributions. The spacing distributions follow the Poisson statistics, which is a key behavior of uncorrelated spectra. The number variance varies linearly as in the Poisson case for short correlation lengths but a kind of regularization occurs for large correlation lengths, and the number variance approaches saturation values. These results are known in the study of integrable systems but are being demonstrated for the first time in random matrix theory. We conjecture that the interacting particle cases, which exhibit the characteristics of classical random matrices for short correlation lengths, will also show saturation effects for large correlation lengths.
The positioning algorithm based on feature variance of billet character
Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang
2015-12-01
In the process of steel billets recognition on the production line, the key problem is how to determine the position of the billet from complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of billet character. Using the largest intra-cluster variance recursive method based on multilevel filtering, the billet characters are segmented completely from the complex scenes. There are three rows of characters on each steel billet, we are able to determine whether the connected regions, which satisfy the condition of the feature variance, are on a straight line. Then we can accurately locate the steel billet. The experimental results demonstrated that the proposed method in this paper is competitive to other methods in positioning the characters and it also reduce the running time. The algorithm can provide a better basis for the character recognition.
Lee, Yoonjung; Chun, Youn-Sic; Kang, Nara; Kim, Minji
2012-12-01
Postsurgical changes of the airway have become a great point of interest and have often been reported as a predisposing factor for obstructive sleep apnea after mandibular setback surgery. The purpose of this study was to evaluate the 3-dimensional volumetric changes in the upper airway space of patients who underwent bimaxillary surgery to correct Class III malocclusions. This study was performed retrospectively in a group of patients who underwent bimaxillary surgery for Class III malocclusion and had full cone-beam computed tomographic (CBCT) images taken before surgery and 1 day, 3 months, and 6 months after surgery. The upper and lower parts of the airway volume and the diameters of the airway were measured at 2 different levels. Presurgical measurements and the amount of surgical correction were evaluated for their effect on airway volume. Data analyses were performed by analysis of variance and multiple stepwise regression analysis. The subjects included 21 patients (6 men and 15 women; mean age, 22.7 years). The surgeries were Le Fort I impaction (5.27 ± 2.58 mm impaction at the posterior nasal spine) and mandibular setback surgery (9.20 ± 4.60 mm setback at the pogonion). No statistically significant differences were found in the total airway volume at any time point. In contrast, the volume of the upper part showed an increase (12.35%) and the lower part showed a decrease (14.07%), a statistically significant difference 6 months after surgery. Bimaxillary surgery for the correction of Class III malocclusion affected the morphology by increasing the upper part and decreasing the lower part of the airway, but not the total volume. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Variance squeezing and entanglement of the XX central spin model
Energy Technology Data Exchange (ETDEWEB)
El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)
2011-01-21
In this paper, we study the quantum properties of a system consisting of a central atom interacting with surrounding spins through Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion, we derive an exact solution for the dynamical operators. We consider the central atom and its surroundings to be initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. Nonclassical effects are evident in the behavior of all components of the system. The atomic variance can exhibit a revival-collapse phenomenon depending on the value of the detuning parameter.
Recursive identification for multidimensional ARMA processes with increasing variances
Institute of Scientific and Technical Information of China (English)
CHEN Hanfu
2005-01-01
In time series analysis, almost all existing results are derived for the case where the driving noise {w_n} in the MA part has bounded variance (or conditional variance). In contrast, this paper discusses how to identify the coefficients of a multidimensional ARMA process with fixed orders in which the conditional moment E(‖w_n‖^β | F_{n-1}), β > 2, of the MA part may grow at a rate of a power of log n. The well-known stochastic gradient (SG) algorithm is applied to estimate the matrix coefficients of the ARMA process, and reasonable conditions are given to guarantee that the estimate is strongly consistent.
Levine's guide to SPSS for analysis of variance
Braver, Sanford L; Page, Melanie
2003-01-01
A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).
Variance components for body weight in Japanese quails (Coturnix japonica)
Directory of Open Access Journals (Sweden)
RO Resende
2005-03-01
The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21), and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment, and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that remained after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of the additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of the maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of the residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
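The reported ratios can be reproduced from the posted variance components: heritability is the additive fraction of phenotypic variance, h² = σ²_a / (σ²_a + σ²_m + σ²_e), and the maternal proportion is σ²_m over the same total. A quick check using the abstract's posterior means (small ±0.01 discrepancies in h² come from the components themselves being rounded):

```python
# Posterior means from the abstract, ordered hatch, 7, 14, 21, 28 days
additive = [0.15, 4.18, 14.62, 27.18, 32.68]
maternal = [0.23, 1.29, 2.76, 4.12, 5.16]
residual = [0.084, 6.43, 22.66, 31.21, 30.85]

totals = [a + m + e for a, m, e in zip(additive, maternal, residual)]
h2 = [a / t for a, t in zip(additive, totals)]   # heritability
m2 = [m / t for m, t in zip(maternal, totals)]   # maternal proportion
# h2 rounds to [0.32, 0.35, 0.37, 0.43, 0.48], close to the reported
# 0.33, 0.35, 0.36, 0.43, 0.47; m2 rounds exactly to the reported
# [0.50, 0.11, 0.07, 0.07, 0.08]
```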
Precise Asymptotics of Error Variance Estimator in Partially Linear Models
Institute of Scientific and Technical Information of China (English)
Shao-jun Guo; Min Chen; Feng Liu
2008-01-01
In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, y_i = x_i^T β + g(t_i) + ε_i, 1 ≤ i ≤ n, where {ε_i, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ². Following the ideas of Allan Gut and Aurel Spataru [7, 8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on precise rates in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results for partially linear regression models.
Least-squares variance component estimation: theory and GPS applications
Amiri-Simkooei, A.
2007-01-01
In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known principle of least-squares. With this method the estimation of the (co)variance components is based on a linear model of observation equations. The method is flexible since it works with a user-defined we...
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
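The dynamic Allan variance discussed above is the classical Allan variance evaluated over a sliding time window. As a hedged sketch of the underlying quantity only (not the paper's DAVAR algorithm; names are ours), the non-overlapping Allan variance of fractional-frequency samples y at averaging factor m is half the mean squared difference of consecutive block averages:

```python
def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (tau = m * tau0). Illustrative sketch; the
    DAVAR repeats this computation over a sliding time window."""
    # Average y in contiguous blocks of length m
    nblocks = len(y) // m
    ybar = [sum(y[i * m:(i + 1) * m]) / m for i in range(nblocks)]
    # Half the mean squared first difference of the block averages
    diffs = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(nblocks - 1)]
    return sum(diffs) / (2 * len(diffs))
```

A frequency jump in the middle of the record, for instance, inflates this statistic at large m, which is the kind of signature the DAVAR localizes in time.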
On Variance and Covariance for Bounded Linear Operators
Institute of Scientific and Technical Information of China (English)
Chia Shiang LIN
2001-01-01
In this paper we initiate a study of covariance and variance for two operators on a Hilbert space, proving that the c-v (covariance-variance) inequality holds, which is equivalent to the Cauchy-Schwarz inequality. As applications of the c-v inequality we prove, uniformly, the Bernstein-type inequalities and equalities, and show the generalized Heinz-Kato-Furuta-type inequalities and equalities, from which a generalization and sharpening of Reid's inequality is obtained. We show that every operator can be expressed as a p-hyponormal-type and a hyponormal-type operator. Finally, some new characterizations of the Furuta inequality are given.
Luiijf, H.A.M.; et al
2010-01-01
Project team Cyber Storm III - In recent years the United States has organized a series of large-scale ICT crisis exercises under the name Cyber Storm. Cyber Storm III is the third exercise in the series. The Cyber Storm III scenario centers on large-scale ICT disruptions, in which...
Characterizing the Variance of Mechanical Properties of Sunflower Bark for Biocomposite Applications
Directory of Open Access Journals (Sweden)
Shengnan Sun
2013-12-01
Characterizing the variance of the material properties of natural fibers is of growing concern due to a wide range of new engineering applications utilizing these fibers. The aim of this study was to evaluate the variance of the Young's modulus of sunflower bark by (i) determining its statistical probability distribution, (ii) investigating its relationship with relative humidity, and (iii) characterizing its relationship with the specimen extraction location. To this end, specimens were extracted at three different locations along the stems. They were also preconditioned in three different relative humidity environments. The χ²-test was used for hypothesis testing with normal, Weibull, and log-normal distributions. Results show that the Young's modulus follows a normal distribution. Two-sample t-test results reveal that the Young's modulus of sunflower stem bark strongly depends on the conditioning relative humidity and the specimen extraction location; it significantly decreased as the relative humidity increased and significantly increased from the bottom to the top of the stem. Correlation coefficients were determined between the Young's modulus values at different relative humidities and at different specimen extraction locations; they show a linear relation between the Young's modulus and the relative humidity for a given location.
Global Positioning System III (GPS III)
2015-12-01
Military Operations in Urban Terrain; Defense-Wide Mission Support; Air Mobility; and Space Launch Orbital Support. For military users, the GPS III program provides Precise Positioning Service (PPS) to military operations and force enhancement. It also provides increased anti-jam power to the earth... to be modified. On January 31, 2016, USD(AT&L) signed the revised GPS III APB. This Change 1 to the APB was due to both cost and schedule breaches.
An entropy approach to size and variance heterogeneity
Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.
2012-01-01
In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity si
Analysis of Variance: What Is Your Statistical Software Actually Doing?
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Gender variance in Asia: discursive contestations and legal implications
Wieringa, S.E.
2010-01-01
A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the
Permutation tests for multi-factorial analysis of variance
Anderson, M.J.; Braak, ter C.J.F.
2003-01-01
Several permutation strategies are often possible for tests of individual terms in analysis-of-variance (ANOVA) designs. These include restricted permutations, permutation of whole groups of units, permutation of some form of residuals or some combination of these. It is unclear, especially for
A Hold-out method to correct PCA variance inflation
DEFF Research Database (Denmark)
Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai
2012-01-01
In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure was int...
Similarities Derived from 3-D Nonlinear Psychophysics: Variance Distributions.
Gregson, Robert A. M.
1994-01-01
The derivation of the variance of similarity judgments is made from the 3-D process in nonlinear psychophysics. The idea of separability of dimensions in metric space theories of similarity is replaced by one parameter that represents the degree of a form of interdimensional cross-sampling. (SLD)
Infinite variance in fermion quantum Monte Carlo calculations.
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
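The infinite-variance pathology described above can be illustrated outside QMC with a toy integrand; this is a hedged stand-in example of the general phenomenon, not the paper's bridge-link construction. The estimator X = 1/(2√U) with U uniform on (0, 1] has E[X] = 1 but E[X²] = ∞, so the naive Monte Carlo error bar is computed from a sample variance that never stabilizes:

```python
import math
import random

random.seed(0)

def naive_mc(n):
    """Naive Monte Carlo estimate of the integral of 1/(2*sqrt(u)) on (0,1),
    whose exact value is 1. The estimator has infinite variance, so the
    reported standard error is unreliable."""
    # 1 - random() keeps u in (0, 1], avoiding division by zero
    xs = [1.0 / (2.0 * math.sqrt(1.0 - random.random())) for _ in range(n)]
    mean = sum(xs) / n
    s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)
    stderr = math.sqrt(s2 / n)  # misleading: the true variance diverges
    return mean, stderr

mean, stderr = naive_mc(100_000)
# mean lands near the exact value 1, but stderr fluctuates wildly across
# seeds because the sample variance is dominated by rare huge draws.
```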
A mean-variance frontier in discrete and continuous time
Bekker, Paul A.
2004-01-01
The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation
Properties of realized variance under alternative sampling schemes
Oomen, R.C.A.
2006-01-01
This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative
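The estimator under study is the textbook realized variance: the sum of squared log returns over a sampling grid. A minimal sketch (the paper's contribution concerns its behavior under microstructure noise and alternative sampling schemes, not this formula itself):

```python
import math

def realized_variance(prices):
    """Realized variance of a price path: sum of squared log returns
    between consecutive observations."""
    logp = [math.log(p) for p in prices]
    return sum((logp[i + 1] - logp[i]) ** 2 for i in range(len(logp) - 1))
```

Sampling the same path more finely changes the estimate when prices carry microstructure noise, which is the bias-variance trade-off the paper analyzes.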
20 CFR 901.40 - Proof; variance; amendment of pleadings.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901.40 Section 901.40 Employees' Benefits JOINT BOARD FOR THE ENROLLMENT OF ACTUARIES REGULATIONS GOVERNING THE PERFORMANCE OF ACTUARIAL SERVICES UNDER THE EMPLOYEE RETIREMENT INCOME SECURITY ACT OF...
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator, and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Vertical velocity variances and Reynold stresses at Brookhaven
DEFF Research Database (Denmark)
Busch, Niels E.; Brown, R.M.; Frizzola, J.A.
1970-01-01
Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind velocity component.
Estimation of dominance variance in purebred Yorkshire swine.
Culbertson, M S; Mabry, J W; Misztal, I; Gengler, N; Bertrand, J K; Varona, L
1998-02-01
We used 179,485 Yorkshire reproductive and 239,354 Yorkshire growth records to estimate additive and dominance variances by Method ℜ. Estimates were obtained for number born alive (NBA), 21-d litter weight (LWT), days to 104.5 kg (DAYS), and backfat at 104.5 kg (BF). The single-trait models for NBA and LWT included the fixed effects of contemporary group and regression on inbreeding percentage, and the random effects of mate within contemporary group, animal permanent environment, animal additive, and parental dominance. The single-trait models for DAYS and BF included the fixed effects of contemporary group, sex, and regression on inbreeding percentage, and the random effects of litter of birth, dam permanent environment, animal additive, and parental dominance. Final estimates were obtained from six samples for each trait. Regression coefficients for 10% inbreeding were found to be -.23 for NBA, -.52 kg for LWT, 2.1 d for DAYS, and 0 mm for BF. Estimates of additive and dominance variances expressed as a percentage of phenotypic variance were, respectively, 8.8 +/- .5 and 2.2 +/- .7 for NBA, 8.1 +/- 1.1 and 6.3 +/- .9 for LWT, 33.2 +/- .4 and 10.3 +/- 1.5 for DAYS, and 43.6 +/- .9 and 4.8 +/- .7 for BF. The ratio of dominance to additive variance ranged from .11 to .78.
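The quoted range of dominance-to-additive ratios follows directly from the reported variance percentages; a quick arithmetic check:

```python
# (additive %, dominance %) of phenotypic variance, from the abstract
traits = {"NBA": (8.8, 2.2), "LWT": (8.1, 6.3),
          "DAYS": (33.2, 10.3), "BF": (43.6, 4.8)}

ratios = {t: round(d / a, 2) for t, (a, d) in traits.items()}
# → {'NBA': 0.25, 'LWT': 0.78, 'DAYS': 0.31, 'BF': 0.11}
# The extremes 0.11 (BF) and 0.78 (LWT) match the reported range.
```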
Common Persistence and Error-Correction Model in Conditional Variance
Institute of Scientific and Technical Information of China (English)
LI Han-dong; ZHANG Shi-ying
2001-01-01
We first define the persistence and common persistence of a vector GARCH process from the point of view of integration, and then discuss the sufficient and necessary condition for co-persistence in variance. At the end of this paper, we give the properties and the error correction model of a vector GARCH process under the condition of co-persistence.
Bounds for Tail Probabilities of the Sample Variance
Directory of Open Access Journals (Sweden)
V. Bentkus
2009-01-01
We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in mind, in auditing as well as in processing data related to the environment.
Variance Ranklets : Orientation-selective rank features for contrast modulations
Azzopardi, George; Smeraldi, Fabrizio
2009-01-01
We introduce a novel type of orientation–selective rank features that are sensitive to contrast modulations (second–order stimuli). Variance Ranklets are designed in close analogy with the standard Ranklets, but use the Siegel–Tukey statistics for dispersion instead of the Wilcoxon statistics. Their
A note on minimum-variance theory and beyond
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)
2004-04-30
We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, the interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals to model outputs and its implications for modelling the firing patterns of single neurons.
Properties of realized variance under alternative sampling schemes
Oomen, R.C.A.
2006-01-01
This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative s
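The core effect this abstract addresses, that market microstructure noise biases realized variance upward at high sampling frequencies, can be reproduced with a minimal simulation. The sketch below uses a Gaussian random-walk efficient price with additive i.i.d. noise (a standard textbook setup, not the paper's pure jump process); all parameter values are hypothetical.

```python
import random

random.seed(7)

# Simulate a log-price as a random walk ("efficient price") plus i.i.d.
# microstructure noise, then compute realized variance (RV) at two
# sampling frequencies. The true integrated variance is n * sigma**2.
n = 23400                      # one observation per second, one trading day
sigma = 0.0001                 # per-tick volatility of the efficient price
noise_sd = 0.0005              # microstructure noise standard deviation

efficient = [0.0]
for _ in range(n):
    efficient.append(efficient[-1] + random.gauss(0, sigma))
observed = [p + random.gauss(0, noise_sd) for p in efficient]

def realized_variance(prices, step):
    """Sum of squared returns sampled every `step` observations."""
    sampled = prices[::step]
    return sum((b - a) ** 2 for a, b in zip(sampled, sampled[1:]))

true_iv = n * sigma ** 2
rv_1s = realized_variance(observed, 1)     # every tick: noise dominates
rv_5m = realized_variance(observed, 300)   # 5-minute sampling: far less bias
```

Sampling every tick accumulates the noise variance in every squared return, so `rv_1s` overshoots the true integrated variance by orders of magnitude, while sparser 5-minute sampling trades a little efficiency for a much smaller bias.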
Average local values and local variances in quantum mechanics
Muga, J G; Sala, P R
1998-01-01
Several definitions for the average local value and local variance of a quantum observable are examined and compared with their classical counterparts. An explicit way to construct an infinite number of these quantities is provided. It is found that different classical conditions may be satisfied by different definitions, but none of the quantum definitions examined is entirely consistent with all classical requirements.
Hedging with stock index futures: downside risk versus the variance
Brouwer, F.; Nat, van der M.
1995-01-01
In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to Fis
Least-squares variance component estimation: theory and GPS applications
Amiri-Simkooei, A.
2007-01-01
In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
Multivariate variance targeting in the BEKK-GARCH model
DEFF Research Database (Denmark)
Pedersen, Rasmus S.; Rahbæk, Anders
2014-01-01
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
A comparison between temporal and subband minimum variance adaptive beamforming
DEFF Research Database (Denmark)
Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.
2014-01-01
This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated...
CAIXA. II. AGNs from excess variance analysis (Ponti+, 2012) [Dataset]
Ponti, G.; Papadakis, I.E.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.
2012-01-01
We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray var
Gender variance in Asia: discursive contestations and legal implications
Wieringa, S.E.
2010-01-01
A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the impli
Infinite variance in fermion quantum Monte Carlo calculations
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
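The failure mode described here can be reproduced in miniature with any integrand whose mean is finite but whose second moment diverges. The sketch below is a toy analogue, not the authors' QMC "bridge link" construction: g(u) = 1/(2*sqrt(u)) on (0,1] integrates to exactly 1, yet E[g^2] diverges, so the sample mean converges while the sample variance (and hence the usual error bar) does not; reweighting by a sampling density proportional to the integrand removes the problem entirely.

```python
import random, statistics, math

random.seed(1)

# Estimator with finite mean (= 1) but infinite variance:
# E[g(U)] = integral of 1/(2*sqrt(u)) du over (0,1] = 1,
# but E[g(U)**2] = integral of 1/(4u) du diverges.
def g(u):
    return 0.5 / math.sqrt(u)

samples = [g(1 - random.random()) for _ in range(100_000)]  # u in (0, 1]
mean = statistics.fmean(samples)            # converges to 1 (law of large numbers)
sample_var = statistics.variance(samples)   # does NOT converge as n grows

# Importance-sampling fix (same spirit as removing the divergence at the
# source): draw u with density p(u) = 1/(2*sqrt(u)) by setting u = v**2,
# so the ratio g(u)/p(u) is identically 1 and the variance is zero.
def draw_importance():
    v = 1 - random.random()     # v in (0, 1]
    u = v * v                   # u now has density 1/(2*sqrt(u))
    return g(u) / (0.5 / math.sqrt(u))
```

The diagnostic is exactly the one the abstract warns about: the estimated mean looks fine, but the estimated error bar based on `sample_var` is meaningless because `sample_var` keeps growing with the sample size.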
Testing for causality in variance using multivariate GARCH models
C.M. Hafner (Christian); H. Herwartz
2004-01-01
Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual
Variance Components for NLS: Partitioning the Design Effect.
Folsom, Ralph E., Jr.
This memorandum demonstrates a variance components methodology for partitioning the overall design effect (D) for a ratio mean into stratification (S), unequal weighting (W), and clustering (C) effects, so that D = WSC. In section 2, a sample selection scheme modeled after the National Longitudinal Study of the High School Class of 1972 (NLS)…
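The multiplicative decomposition D = W·S·C can be illustrated with hypothetical survey numbers. The sketch below uses Kish's 1 + cv² approximation for the unequal-weighting effect W and the familiar 1 + (m - 1)·ρ form for the clustering effect C; the stratification gain S and all numeric values are simply assumed for illustration, not taken from the NLS design.

```python
# Hypothetical numbers illustrating the decomposition D = W * S * C.
# W: inflation from unequal weights, Kish's 1 + cv^2 of the weights.
# S: gain from stratification (usually < 1).
# C: clustering effect, 1 + (m - 1) * rho, with cluster size m and
#    intracluster correlation rho.
weights = [1.0, 1.2, 0.8, 2.0, 1.5]
mean_w = sum(weights) / len(weights)
cv2 = sum((w - mean_w) ** 2 for w in weights) / len(weights) / mean_w ** 2
W = 1 + cv2

S = 0.9                  # assumed stratification gain
m, rho = 20, 0.02        # assumed cluster size and intracluster correlation
C = 1 + (m - 1) * rho

D = W * S * C            # overall design effect
```

With these numbers the clustering term dominates, which is the typical situation the partition is meant to reveal.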
Perspective projection for variance pose face recognition from camera calibration
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
2016-04-01
Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and on the remaining poses in the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement across pose variance for each individual.
Heterogeneity of variances for carcass traits by percentage Brahman inheritance.
Crews, D H; Franke, D E
1998-07-01
Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a significantly greater likelihood for hot carcass weight, longissimus muscle area, weight-adjusted longissimus muscle area, total lean yield, and Warner-Bratzler shear force, indicating improved fit when percentage Brahman inheritance was considered as a source of heterogeneity of variance. Genetic
DEFF Research Database (Denmark)
Carrasco, J; Giralt, M; Molinero, A
1999-01-01
Metallothionein-III is a low molecular weight, heavy-metal binding protein expressed mainly in the central nervous system. First identified as a growth inhibitory factor (GIF) of rat cortical neurons in vitro, it has subsequently been shown to be a member of the metallothionein (MT) gene family and renamed MT-III. In this study we have raised polyclonal antibodies in rabbits against recombinant rat MT-III (rMT-III). The sera obtained reacted specifically against recombinant zinc- and cadmium-saturated rMT-III, and did not cross-react with native rat MT-I and MT-II purified from the liver of zinc-injected rats. The specificity of the antibody was also demonstrated in immunocytochemical studies by the elimination of the immunostaining by preincubation of the antibody with brain (but not liver) extracts, and by the results obtained in MT-III null mice. The antibody was used to characterize...
Convergence of Recursive Identification for ARMAX Process with Increasing Variances
Institute of Scientific and Technical Information of China (English)
JIN Ya; LUO Guiming
2007-01-01
The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model is a finite mixture with the ARMA component and external inputs. In this paper we focus on a parameter estimate of the ARMAX model. Classical modeling methods are usually based on the assumption that the driven noise in the moving average (MA) part has bounded variances, while in the model considered here the variances of noise may increase by a power of log n. The plant parameters are identified by the recursive stochastic gradient algorithm. The diminishing excitation technique and some results of martingale difference theory are adopted in order to prove the convergence of the identification. Finally, some simulations are given to show the theoretical results.
PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS
Directory of Open Access Journals (Sweden)
Daniel Menezes Cavalcante
2016-07-01
Portfolio optimization strategies are advocated as being able to compose stock portfolios that provide returns above market benchmarks. This study aims to determine whether portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are in fact able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio's performance is superior to the market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially over medium- and long-term investment horizons.
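The optimization behind the minimum variance strategy has a simple closed form in the two-asset case: the global minimum-variance weight on asset A is (var_B - cov_AB) / (var_A + var_B - 2*cov_AB). A minimal sketch with hypothetical annualized numbers (not BM&FBOVESPA estimates):

```python
# Two-asset global minimum-variance portfolio, closed form.
# All variance/covariance values are hypothetical.
var_a, var_b, cov_ab = 0.04, 0.09, 0.012

w_a = (var_b - cov_ab) / (var_a + var_b - 2 * cov_ab)
w_b = 1 - w_a

# Resulting portfolio variance; by construction it cannot exceed the
# variance of either individual asset.
port_var = w_a**2 * var_a + w_b**2 * var_b + 2 * w_a * w_b * cov_ab
```

For the general n-asset case the same logic gives w = Σ⁻¹1 / (1ᵀΣ⁻¹1) for covariance matrix Σ, which is what a rolling-window implementation of the strategy would re-estimate at each rebalancing date.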
Climate variance influence on the non-stationary plankton dynamics.
Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine
2013-08-01
We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems.
Multivariate Variance Targeting in the BEKK-GARCH Model
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these two steps. Strong consistency is established under weak moment conditions, while sixth order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are indeed necessary.
Response variance in functional maps: neural darwinism revisited.
Directory of Open Access Journals (Sweden)
Hirokazu Takahashi
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Validation technique using mean and variance of kriging model
Energy Technology Data Exchange (ETDEWEB)
Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)
2007-07-01
Rigorously validating the accuracy of a metamodel is an important research area in metamodelling techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot quantitatively measure the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than cross-validation because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.
Explaining the Prevalence, Scaling and Variance of Urban Phenomena
Gomez-Lievano, Andres; Hausmann, Ricardo
2016-01-01
The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Sample variance and Lyman-alpha forest transmission statistics
Rollinde, Emmanuel; Schaye, Joop; Pâris, Isabelle; Petitjean, Patrick
2012-01-01
We compare the observed probability distribution function of the transmission in the H I Lyman-alpha forest, measured from the UVES 'Large Programme' sample at redshifts z=[2,2.5,3], to results from the GIMIC cosmological simulations. Our measured values for the mean transmission and its PDF are in good agreement with published results. Errors on statistics measured from high-resolution data are typically estimated using bootstrap or jack-knife resampling techniques after splitting the spectra into chunks. We demonstrate that these methods tend to underestimate the sample variance unless the chunk size is much larger than is commonly the case. We therefore estimate the sample variance from the simulations. We conclude that observed and simulated transmission statistics are in good agreement, in particular, we do not require the temperature-density relation to be 'inverted'.
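The chunk-size effect noted here, that bootstrapping short chunks of correlated data underestimates the sample variance, is easy to demonstrate. The sketch below uses a strongly autocorrelated AR(1) series as a generic stand-in for forest spectra; block lengths and all parameters are illustrative, not those of the UVES analysis.

```python
import random, statistics

random.seed(3)

# A strongly autocorrelated AR(1) series, standing in for spectra that
# are split into chunks before resampling.
n, phi = 20_000, 0.95
x = [random.gauss(0, 1)]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0, 1))

def block_bootstrap_var_of_mean(series, block_len, reps=200):
    """Variance of the sample mean estimated by resampling fixed blocks."""
    n_blocks = len(series) // block_len
    blocks = [series[i * block_len:(i + 1) * block_len]
              for i in range(n_blocks)]
    means = []
    for _ in range(reps):
        resampled = [v for _ in range(n_blocks)
                     for v in random.choice(blocks)]
        means.append(statistics.fmean(resampled))
    return statistics.variance(means)

# Short chunks destroy the correlation between neighbouring points and
# therefore understate the true uncertainty on the mean.
var_small_chunks = block_bootstrap_var_of_mean(x, 10)
var_large_chunks = block_bootstrap_var_of_mean(x, 1000)
```

Because each resampled block is treated as independent, correlations longer than the block length are silently discarded, which is exactly why the bootstrap error bars shrink as the chunks get shorter.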
Variance reduction methods applied to deep-penetration problems
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course.
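A minimal example of the kind of variance reduction involved: a rare-event ("deep-penetration") probability estimated by analog Monte Carlo versus importance sampling with an exponentially tilted source. This is a generic textbook illustration with hypothetical numbers, not code from the course materials.

```python
import random, math

random.seed(5)

# Rare-event analogue of a deep-penetration tally: estimate
# p = P(X > 3) for X ~ N(0, 1). Exact value via the error function.
p_exact = 0.5 * math.erfc(3 / math.sqrt(2))

N = 50_000

# Analog (naive) Monte Carlo: almost every history scores zero.
naive = [1.0 if random.gauss(0, 1) > 3 else 0.0 for _ in range(N)]

# Importance sampling: shift the source to N(3, 1) and carry the
# likelihood-ratio weight exp(4.5 - 3*x) (the exponential tilt).
def is_score():
    x = random.gauss(3, 1)
    return math.exp(4.5 - 3 * x) if x > 3 else 0.0

weighted = [is_score() for _ in range(N)]

mean_naive = sum(naive) / N
mean_is = sum(weighted) / N
var_naive = sum((v - mean_naive) ** 2 for v in naive) / (N - 1)
var_is = sum((v - mean_is) ** 2 for v in weighted) / (N - 1)
```

Both estimators are unbiased, but the tilted source scores on nearly every history with a small weight instead of rarely with weight one, which is the same trade that biasing schemes in deep-penetration transport codes make.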
Automated Extraction of Archaeological Traces by a Modified Variance Analysis
Directory of Open Access Journals (Sweden)
Tiziana D'Orazio
2015-03-01
This paper considers the problem of detecting archaeological traces in digital aerial images by analyzing the pixel variance over regions around selected points. In order to decide if a point belongs to an archaeological trace or not, its surrounding regions are considered. The one-way ANalysis Of VAriance (ANOVA) is applied several times to detect the differences among these regions; in particular the expected shape of the mark to be detected is used in each region. Furthermore, an effect size parameter is defined by comparing the statistics of these regions with the statistics of the entire population in order to measure how strongly the trace is appreciable. Experiments on synthetic and real images demonstrate the effectiveness of the proposed approach with respect to some state-of-the-art methodologies.
Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy
Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.
2016-08-01
We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and a subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, while a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.
Analysis of variance in spectroscopic imaging data from human tissues.
Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit
2012-01-17
The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. Estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.
The return of the variance: intraspecific variability in community ecology.
Violle, Cyrille; Enquist, Brian J; McGill, Brian J; Jiang, Lin; Albert, Cécile H; Hulshof, Catherine; Jung, Vincent; Messier, Julie
2012-04-01
Despite being recognized as a promoter of diversity and a condition for local coexistence decades ago, the importance of intraspecific variance has been neglected over time in community ecology. Recently, there has been a new emphasis on intraspecific variability. Indeed, recent developments in trait-based community ecology have underlined the need to integrate variation at both the intraspecific as well as interspecific level. We introduce new T-statistics ('T' for trait), based on the comparison of intraspecific and interspecific variances of functional traits across organizational levels, to operationally incorporate intraspecific variability into community ecology theory. We show that a focus on the distribution of traits at local and regional scales combined with original analytical tools can provide unique insights into the primary forces structuring communities.
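The intraspecific-versus-interspecific comparison behind such T-statistics reduces to a standard within/between sum-of-squares decomposition. A minimal sketch with hypothetical trait values (the species names and numbers are illustrative, not the authors' data or notation):

```python
import statistics

# Hypothetical trait values (e.g. a leaf trait) for three species
# sampled in one community.
community = {
    "sp_a": [2.1, 2.4, 2.0, 2.3],
    "sp_b": [3.8, 4.1, 3.9],
    "sp_c": [5.0, 5.4, 5.2, 5.1, 4.9],
}

all_values = [v for vs in community.values() for v in vs]
grand_mean = statistics.fmean(all_values)

# Total variation splits exactly into within-species (intraspecific)
# and between-species (interspecific) sums of squares.
total_ss = sum((v - grand_mean) ** 2 for v in all_values)
within_ss = sum((v - statistics.fmean(vs)) ** 2
                for vs in community.values() for v in vs)
between_ss = total_ss - within_ss

t_ratio = within_ss / total_ss   # share of variance that is intraspecific
```

A small ratio, as in this toy community, means species identity explains most of the trait variation; ratios near one would indicate that intraspecific variability dominates and cannot be ignored.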
Analysis of Variance in the Modern Design of Experiments
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
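A one-way fixed-effects ANOVA of the kind the tutorial introduces can be computed by hand in a few lines. The group values below are hypothetical (imagined responses from three test configurations), not data from the paper:

```python
# One-way fixed-effects ANOVA computed by hand for three hypothetical
# treatment groups (e.g. three test-section configurations).
groups = [
    [4.2, 4.5, 4.1, 4.4],
    [5.0, 5.3, 5.1],
    [4.6, 4.8, 4.7, 4.9, 4.5],
]

k = len(groups)                        # number of treatments
n = sum(len(g) for g in groups)        # total observations
grand = sum(sum(g) for g in groups) / n

# Partition the total sum of squares into between- and within-group parts.
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)      # treatment mean square
ms_within = ss_within / (n - k)        # error mean square
f_stat = ms_between / ms_within        # compare to F(k-1, n-k)
```

The F statistic is the ratio of variance explained by the treatment to unexplained variance; comparing it to the F distribution with (k-1, n-k) degrees of freedom gives the significance test the tutorial describes.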
Seasonal variance in P system models for metapopulations
Institute of Scientific and Technical Information of China (English)
Daniela Besozzi; Paolo Cazzaniga; Dario Pescini; Giancarlo Mauri
2007-01-01
Metapopulations are ecological models describing the interactions and the behavior of populations living in fragmented habitats. In this paper, metapopulations are modelled by means of dynamical probabilistic P systems, where additional structural features have been defined (e.g., a weighted graph associated with the membrane structure and the reduction of maximal parallelism). In particular, we investigate the influence of stochastic and periodic resource feeding processes, owing to seasonal variance, on emergent metapopulation dynamics.
Estimating High-Frequency Based (Co-) Variances: A Unified Approach
DEFF Research Database (Denmark)
Voev, Valeri; Nolte, Ingmar
We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent estimators based on the optimal sampling frequency derived in Bandi & Russell (2005a) and Bandi & Russell (2005b). For a realistic trading scenario, the efficiency gains resulting from our approach are in the range of 35% to 50%.
VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM
RANJU KANWAR; SAMEKSHA BHASKAR
2013-01-01
In a communication system, the noise process must be known in order to compute the system performance. Nonlinear effects act as a strong perturbation in long-haul systems. This perturbation affects the signal when it interacts with amplitude noise, and results in random motion of the phase of the signal. Based on perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation is derived analytically for a phase-shift-keying system. Through th...
Recombining binomial tree for constant elasticity of variance process
Hi Jun Choe; Jeong Ho Chu; So Jeong Shin
2014-01-01
The theme of this paper is the recombining binomial tree used to price an American put option when the underlying stock follows a constant elasticity of variance (CEV) process. The recombining nodes of the binomial tree are chosen from a finite difference scheme to emulate the CEV process, and the tree has linear complexity. The asymptotic envelope of the boundary of the tree is also derived from the differential equation. Conducting numerical experiments, we confirm the convergence and accuracy of the pricing by ou
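For reference, a standard (constant-volatility) Cox-Ross-Rubinstein recombining tree for an American put; the paper's contribution is to adapt the node placement to the CEV process, which this generic sketch does not attempt. Parameter values are illustrative.

```python
import math

# Cox-Ross-Rubinstein recombining binomial tree for an American put
# under constant volatility (Black-Scholes dynamics).
def american_put_crr(s0, strike, r, sigma, t, steps):
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Option values at maturity (node j = number of up moves).
    values = [max(strike - s0 * u**j * d**(steps - j), 0.0)
              for j in range(steps + 1)]
    # Backward induction, checking early exercise at every node.
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(strike - s0 * u**j * d**(i - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

price = american_put_crr(s0=100, strike=100, r=0.05, sigma=0.2,
                         t=1.0, steps=400)
```

Because up and down moves satisfy d = 1/u, an up followed by a down lands on the same node as a down followed by an up, which is exactly the recombining property that keeps the tree linear in size per time step.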
PARAMETER-ESTIMATION FOR ARMA MODELS WITH INFINITE VARIANCE INNOVATIONS
MIKOSCH, T; GADRICH, T; KLUPPELBERG, C; ADLER, RJ
We consider a standard ARMA process of the form phi(B)X(t) = theta(B)Z(t), where the innovations Z(t) belong to the domain of attraction of a stable law, so that neither the Z(t) nor the X(t) have a finite variance. Our aim is to estimate the coefficients of phi and theta. Since maximum likelihood
Relationship between Allan variances and Kalman Filter parameters
Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.
1984-01-01
A relationship was constructed between the Allan variance parameters (h_2, h_1, h_0, h_-1 and h_-2) and a Kalman filter model that would be used to estimate and predict clock phase, frequency and frequency drift. To start with, the meaning of these Allan variance parameters, and how they are arrived at for a given frequency source, is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model which can then be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
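The time-domain side of this relationship can be sketched by estimating the overlapping Allan variance directly from simulated clock phase data. For pure white frequency noise, sigma_y^2(tau) scales as 1/tau, so increasing the averaging factor m by 16 should divide the Allan variance by about 16. All parameter values below are illustrative.

```python
import random

random.seed(11)

# Simulated clock dominated by white frequency noise: fractional
# frequency samples are i.i.d. Gaussian, and phase is their integral.
tau0 = 1.0                    # sampling interval, seconds
n = 100_000
freq = [random.gauss(0, 1e-11) for _ in range(n)]
phase = [0.0]
for y in freq:
    phase.append(phase[-1] + y * tau0)

def allan_variance(x, m, tau0):
    """Overlapping Allan variance at averaging time tau = m * tau0,
    from phase data x: mean of (x[i+2m] - 2*x[i+m] + x[i])^2 / (2*tau^2)."""
    tau = m * tau0
    diffs = [x[i + 2 * m] - 2 * x[i + m] + x[i]
             for i in range(len(x) - 2 * m)]
    return sum(d * d for d in diffs) / (2 * tau ** 2 * len(diffs))

avar_1 = allan_variance(phase, 1, tau0)    # tau = 1 s
avar_16 = allan_variance(phase, 16, tau0)  # tau = 16 s: ~16x smaller
```

Fitting the estimated sigma_y^2(tau) across several tau values to the power-law model is what recovers the h-coefficients that the covariance model, and hence the Kalman filter, is built from.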
Dynamic Programming Using Polar Variance for Image Segmentation.
Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J
2016-10-06
When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used. This is because, if size is left unconstrained, the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis consisted of comparing our technique with different active contour segmentation techniques in a series of tests. The tests covered robustness to additive Gaussian noise, segmentation accuracy on different grayscale images and, finally, robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.
Variance Analysis and Adaptive Sampling for Indirect Light Path Reuse
Institute of Scientific and Technical Information of China (English)
Hao Qin; Xin Sun; Jun Yan; Qi-Ming Hou; Zhong Ren; Kun Zhou
2016-01-01
In this paper, we study the estimation variance of a set of global illumination algorithms based on indirect light path reuse. These algorithms usually contain two passes — in the first pass, a small number of indirect light samples are generated and evaluated, and they are then reused by a large number of reconstruction samples in the second pass. Our analysis shows that the covariance of the reconstruction samples dominates the estimation variance under high reconstruction rates and increasing the reconstruction rate cannot effectively reduce the covariance. We also find that the covariance represents to what degree the indirect light samples are reused during reconstruction. This analysis motivates us to design a heuristic approximating the covariance as well as an adaptive sampling scheme based on this heuristic to reduce the rendering variance. We validate our analysis and adaptive sampling scheme in the indirect light field reconstruction algorithm and the axis-aligned filtering algorithm for indirect lighting. Experiments are in accordance with our analysis and show that rendering artifacts can be greatly reduced at a similar computational cost.
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step forward for the field. First, such models better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variation as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance makes it possible to evaluate model quality in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, and to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change.
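The ANOVA route to broad-sense heritability can be sketched with a one-way decomposition over isofemale lines, a simplification of the nested design used in the abstract; a balanced design with n individuals per line is assumed, and the among-line variance component comes from the usual expected-mean-squares identity.

```python
import numpy as np

def broad_sense_heritability(lines):
    """Broad-sense heritability H^2 = var_between / (var_between + var_within)
    from a one-way (isofemale-line) ANOVA. Balanced design assumed: every
    line has the same number of individuals n."""
    lines = [np.asarray(g, dtype=float) for g in lines]
    n = len(lines[0])                       # individuals per line
    k = len(lines)                          # number of lines
    grand = np.mean(np.concatenate(lines))
    ms_between = n * sum((g.mean() - grand) ** 2 for g in lines) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in lines) / (k * (n - 1))
    # among-line variance component from E[MS_between] = n*var_B + var_W
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between / (var_between + ms_within)
```

For quantal (dead/alive) endpoints as in the toxicant threshold model, the response would first be mapped onto a liability scale; the sketch above covers only the continuous-trait case.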
Measuring primordial non-gaussianity without cosmic variance
Seljak, Uros
2008-01-01
Non-Gaussianity in the initial conditions of the universe is one of the most powerful ways to discriminate among the competing theories of the early universe. Measurements using the bispectrum of cosmic microwave background anisotropies are limited by cosmic variance, i.e., the available number of modes. Recent work has emphasized the possibility of probing non-Gaussianity of the local type using the scale dependence of large-scale bias from highly biased tracers of large-scale structure. However, this power-spectrum method is also limited by cosmic variance, the finite number of structures on the largest scales, and by a partial degeneracy with other cosmological parameters that can mimic the same effect. Here we propose an alternative method that solves both of these problems. It is based on the idea that on large scales halos are biased, but not stochastic, tracers of dark matter: by correlating a highly biased tracer of large scale structure against an unbiased tracer one eliminates the cosmic variance error, wh...
Modality-Driven Classification and Visualization of Ensemble Variance
Energy Technology Data Exchange (ETDEWEB)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
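A per-location modality test of the kind this abstract motivates can be illustrated with a simple two-cluster variance-reduction score: the best split of the ensemble members into two groups is found by scanning cut points, and the pooled within-cluster variance is compared with the total variance. This is an illustrative stand-in, not the authors' classifier, and the decision threshold is an assumption.

```python
import numpy as np

def bimodality_ratio(samples):
    """Score in (0, 1]: pooled within-cluster variance of the best
    two-cluster split of a 1-D sample, divided by the total variance.
    Values near 1 suggest a single central tendency; small values suggest
    two divergent clusters of ensemble members."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    total = x.var()
    best = total
    for i in range(2, n - 2):               # cut between x[i-1] and x[i]
        left, right = x[:i], x[i:]
        within = (left.var() * len(left) + right.var() * len(right)) / n
        best = min(best, within)
    return best / total
```

For a unimodal Gaussian the ratio stays near 1 - 2/pi (about 0.36), while two well-separated modes drive it close to zero, so a cutoff around 0.3 separates the two regimes in this toy setting.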
Nuclear volume and prognosis in ovarian cancer
DEFF Research Database (Denmark)
Mogensen, O.; Sørensen, Flemming Brandt; Bichel, P.
1992-01-01
The prognostic value of the volume-weighted mean nuclear volume (MNV) was investigated retrospectively in 100 ovarian cancer patients with FIGO-stage IB-II (n = 51) and stage III-IV (n = 49) serous tumors. No association was demonstrated between the MNV and the survival or between the MNV and two...
Automated Variance Reduction Applied to Nuclear Well-Logging Problems
Energy Technology Data Exchange (ETDEWEB)
Wagner, John C [ORNL; Peplow, Douglas E. [ORNL; Evans, Thomas M [ORNL
2009-01-01
The Monte Carlo method enables detailed, explicit geometric, energy and angular representations, and hence is considered to be the most accurate method available for solving complex radiation transport problems. Because of its associated accuracy, the Monte Carlo method is widely used in the petroleum exploration industry to design, benchmark, and simulate nuclear well-logging tools. Nuclear well-logging tools, which contain neutron and/or gamma sources and two or more detectors, are placed in boreholes that contain water (and possibly other fluids) and that are typically surrounded by a formation (e.g., limestone, sandstone, calcites, or a combination). The response of the detectors to radiation returning from the surrounding formation is used to infer information about the material porosity, density, composition, and associated characteristics. Accurate computer simulation is a key aspect of this exploratory technique. However, because this technique involves calculating highly precise responses (at two or more detectors) based on radiation that has interacted with the surrounding formation, the transport simulations are computationally intensive, requiring significant use of variance reduction techniques, parallel computing, or both. Because of the challenging nature of these problems, nuclear well-logging problems have frequently been used to evaluate the effectiveness of variance reduction techniques (e.g., Refs. 1-4). The primary focus of these works has been on improving the computational efficiency associated with calculating the response at the most challenging detector location, which is typically the detector furthest from the source. Although the objective of nuclear well-logging simulations is to calculate the response at multiple detector locations, until recently none of the numerous variance reduction methods/techniques has been well-suited to simultaneous optimization of multiple detector (tally) regions. Therefore, a separate calculation is
A proxy for variance in dense matching over homogeneous terrain
Altena, Bas; Cockx, Liesbet; Goedemé, Toon
2014-05-01
Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and ease of use. Flight planning can be done on site, and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added to the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated with respect to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were carried out. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low
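A simplified per-window SNR indicator in the spirit of the abstract can be sketched as follows: noise variance is estimated from horizontal first differences (whose variance is twice the noise variance for i.i.d. noise) and divided into the window's total intensity variance. This is an assumed, simplified estimator, not the authors' exact method.

```python
import numpy as np

def local_snr(img, win=8):
    """Per-window signal-to-noise proxy for dense matching. Windows of
    nearly pure noise score near 1; textured windows score high, since
    their variance is dominated by the intensity signal rather than noise."""
    h, w = img.shape
    out = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            patch = img[i*win:(i+1)*win, j*win:(j+1)*win]
            # var of first differences of i.i.d. noise is 2*sigma^2
            noise_var = np.diff(patch, axis=1).var() / 2.0
            out[i, j] = patch.var() / max(noise_var, 1e-12)
    return out
```

Windows scoring near 1 would be flagged as unreliable regions of the surface model, exactly the areas where stereoscopy tends to fail on low-contrast sand or beach.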
Estimation of noise-free variance to measure heterogeneity.
Winkler, Tilo; Melo, Marcos F Vidal; Degani-Costa, Luiza H; Harris, R Scott; Correia, John A; Musch, Guido; Venegas, Jose G
2015-01-01
Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements and the reciprocal of n. Using multiple measurements with varying n yields estimates of this linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a coefficient of variation squared (CV(2)). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CV(r)(2)) for comparison with our estimate of noise-free or 'true' heterogeneity (CV(t)(2)). We found that CV(t)(2) was only 5.4% higher than CV(r)(2). Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using (13)NN-saline injection. The mean CV(t)(2) was 0.10 (range: 0.03-0.30), while the mean CV(2) including noise was 0.24 (range: 0.10-0.59). CV(t)(2) was on average 41.5% of the CV(2) measured including noise (range: 17.8-71.2%). The reproducibility of CV(t)(2) was evaluated using three repeated PET scans from five subjects. Individual CV(t)(2) values were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CV(t)(2) in PET scans, and may be useful for similar statistical problems in experimental data.
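The linear relationship the abstract relies on, CV^2(n) = CV_t^2 + b/n, makes the estimator a one-line regression: measured CV^2 values from averages of n measurements are regressed on 1/n, and the intercept is the noise-free heterogeneity. A sketch:

```python
import numpy as np

def noise_free_cv2(n_values, cv2_values):
    """Estimate the noise-free heterogeneity CV_t^2 by fitting
    CV^2(n) = CV_t^2 + b / n: regress measured CV^2 on 1/n and
    return the intercept (the constant, noise-free component)."""
    inv_n = 1.0 / np.asarray(n_values, dtype=float)
    slope, intercept = np.polyfit(inv_n, np.asarray(cv2_values, dtype=float), 1)
    return intercept
```

With synthetic data generated exactly from the model (CV_t^2 = 0.10, b = 0.5), the intercept recovers the noise-free value.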
Industrial fuel gas demonstration plant program. Current working estimate. Phases II and III
Energy Technology Data Exchange (ETDEWEB)
1979-12-01
The United States Department of Energy (DOE) executed a contract with Memphis Light, Gas and Water Division (MLGW) which requires MLGW to perform process analysis, design, procurement, construction, testing, operation, and evaluation of a plant which will demonstrate the feasibility of converting high-sulfur bituminous coal to industrial fuel gas with a heating value of 300 ± 30 Btu per standard cubic foot (SCF). The demonstration plant is based on the U-Gas process, and its product gas is to be used in commercial applications in Memphis, Tenn. The contract specifies that the work is to be conducted in three phases: Phase I - Program Development and Conceptual Design; Phase II - Demonstration Plant Final Design, Procurement and Construction; and Phase III - Demonstration Plant Operation. Under Task III of Phase I, a cost estimate for the demonstration plant was completed, as well as estimates for other Phase II and III work. The output of this estimate is presented in this volume. This Current Working Estimate for Phases II and III is based on the process and mechanical designs presented in the Task II report (second issue) and the 12 volumes of the Task III report. In addition, the capital cost estimate summarized in the appendix has been used in the Economic Analysis (Task III) Report.
Energy Technology Data Exchange (ETDEWEB)
Oregon. Dept. of Fish and Wildlife; Mount Hood National Forest (Or.)
1985-06-01
Studies were conducted to describe current habitat conditions in the White River basin above White River Falls and to evaluate the potential to produce anadromous fish. An inventory of spawning and rearing habitats, irrigation diversions, and enhancement opportunities for anadromous fish in the White River drainage was conducted. Survival of juvenile fish at White River Falls was estimated by releasing juvenile chinook and steelhead above the falls during high and low flow periods and recapturing them below the falls in 1983 and 1984. Four alternatives to provide upstream passage for adult salmon and steelhead were developed to a predesign level. The cost of adult passage and the estimated run size of anadromous fish were used to determine the benefit/cost of the preferred alternative. Possible effects of the introduction of anadromous fish on resident fish and on nearby Oak Springs Hatchery were evaluated. This included an inventory of resident species, a genetic study of native rainbow, and the identification of fish diseases in the basin. This volume contains appendices of habitat survey data, potential production, resident fish population data, upstream passage designs, and benefit/cost calculations. (ACR)
DEFF Research Database (Denmark)
Sørensen, Anders Christian; Kristensen, Torsten Nygård; Loeschcke, Volker
2007-01-01
quantitative genetics model based on the infinitesimal model, and an extension of this model. In the extended model it is assumed that each individual has its own environmental variance and that this heterogeneity of variance has a genetic component. The heterogeneous variance model was favoured by the data......, indicating that the environmental variance is partly under genetic control. If this heterogeneous variance model also applies to livestock, it would be possible to select for animals with a higher uniformity of products across environmental regimes. Also for evolutionary biology the results are of interest...
Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP
Energy Technology Data Exchange (ETDEWEB)
Edward W. Larsen
2008-06-01
The "criticality" or k-eigenvalue of a nuclear system determines whether the system is critical (k=1), or the extent to which it is subcritical (k<1) or supercritical (k>1). Calculations of k are frequently performed at nuclear facilities to determine the criticality of nuclear reactor cores, spent nuclear fuel storage casks, and other fissile systems. These calculations can be expensive, and current Monte Carlo methods have certain well-known deficiencies. In this project, we have developed and tested a new "functional Monte Carlo" (FMC) method that overcomes several of these deficiencies. The current state-of-the-art Monte Carlo k-eigenvalue method estimates the fission source for a sequence of fission generations (cycles), during each of which M particles per cycle are processed. After a series of "inactive" cycles during which the fission source "converges," a series of "active" cycles are performed. For each active cycle, the eigenvalue and eigenfunction are estimated; after N >> 1 active cycles are performed, the results are averaged to obtain estimates of the eigenvalue and eigenfunction and their standard deviations. This method has several disadvantages: (i) the estimate of k depends on the number M of particles per cycle, (ii) for optically thick systems, the eigenfunction estimate may not converge due to undersampling of the fission source, and (iii) since the fission source in any cycle depends on the estimated fission source from the previous cycle (the fission sources in different cycles are correlated), the estimated variance in k is smaller than the real variance. For an acceptably large number M of particles per cycle, the estimate of k is nearly independent of M; this essentially takes care of item (i). Item (ii) can be addressed by taking M sufficiently large, but for optically thick systems a sufficiently large M can easily be unrealistic. Item (iii) cannot be accounted for by taking M or N sufficiently large; it is an inherent deficiency due
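The cycle structure described above (inactive cycles to converge the source, active cycles to average k) can be sketched with a deterministic analogue, where a small matrix F stands in for the transport physics and the source vector for the particle population. This is an illustration of the iteration scheme only, not of the Monte Carlo sampling or the FMC method itself.

```python
import numpy as np

def power_iteration_k(F, n_inactive=50, n_active=200):
    """Deterministic analogue of the cycle-based k-eigenvalue scheme:
    a fission-source vector is iterated through a fission matrix F,
    inactive cycles let it converge, and k is averaged over active
    cycles as the generation-to-generation population ratio."""
    s = np.ones(F.shape[0]) / F.shape[0]    # initial fission source
    k_samples = []
    for cycle in range(n_inactive + n_active):
        s_new = F @ s
        k = s_new.sum() / s.sum()           # population ratio this cycle
        s = s_new / s_new.sum()             # renormalise the source
        if cycle >= n_inactive:
            k_samples.append(k)
    return float(np.mean(k_samples))
```

In the deterministic analogue the active-cycle estimates are uncorrelated only because there is no sampling noise; in the Monte Carlo setting the cycle-to-cycle correlation is precisely what biases the estimated variance of k, as item (iii) notes.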
Reif, Maria M; Hünenberger, Philippe H
2011-04-14
The raw single-ion solvation free energies computed from atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions (finite or periodic system, system or box size) and treatment of electrostatic interactions (Coulombic, lattice-sum, or cutoff-based) used during these simulations. However, as shown by Kastenholz and Hünenberger [J. Chem. Phys. 124, 224501 (2006)], correction terms can be derived for the effects of: (A) an incorrect solvent polarization around the ion and an incomplete or/and inexact interaction of the ion with the polarized solvent due to the use of an approximate (not strictly Coulombic) electrostatic scheme; (B) the finite-size or artificial periodicity of the simulated system; (C) an improper summation scheme to evaluate the potential at the ion site, and the possible presence of a polarized air-liquid interface or of a constraint of vanishing average electrostatic potential in the simulated system; and (D) an inaccurate dielectric permittivity of the employed solvent model. Comparison with standard experimental data also requires the inclusion of appropriate cavity-formation and standard-state correction terms. In the present study, this correction scheme is extended by: (i) providing simple approximate analytical expressions (empirically-fitted) for the correction terms that were evaluated numerically in the above scheme (continuum-electrostatics calculations); (ii) providing correction terms for derivative thermodynamic single-ion solvation properties (and corresponding partial molar variables in solution), namely, the enthalpy, entropy, isobaric heat capacity, volume, isothermal compressibility, and isobaric expansivity (including appropriate standard-state correction terms). The ability of the correction scheme to produce methodology-independent single-ion solvation free energies based on atomistic simulations is tested in the case of Na(+) hydration, and the nature and magnitude of the correction terms for
Loberg, A; Dürr, J W; Fikse, W F; Jorjani, H; Crooks, L
2015-10-01
The amount of variance captured in genetic estimations may depend on whether a pedigree-based or genomic relationship matrix is used. The purpose of this study was to investigate the genetic variance as well as the variance of predicted genetic merits (PGM) using pedigree-based or genomic relationship matrices in Brown Swiss cattle. We examined a range of traits in six populations amounting to 173 population-trait combinations. A main aim was to determine how using different relationship matrices affects variance estimation. We calculated ratios between different types of estimates and analysed the impact of trait heritability and population size. The genetic variances estimated by REML using a genomic relationship matrix were always smaller than the variances that were similarly estimated using a pedigree-based relationship matrix. The variances from the genomic relationship matrix became closer to estimates from a pedigree relationship matrix as heritability and population size increased. In contrast, variances of predicted genetic merits obtained using a genomic relationship matrix were mostly larger than variances of genetic merits predicted using a pedigree-based relationship matrix. The ratio of the genomic to pedigree-based PGM variances decreased as heritability and population size rose. The increased variance among predicted genetic merits is important for animal breeding because this is one of the factors influencing genetic progress. © 2015 Blackwell Verlag GmbH.
Energy Technology Data Exchange (ETDEWEB)
Hopkins, R.H.; Davis, J.R.; Rohatgi, A.; Campbell, R.B.; Blais, P.D.; Rai-Choudhury, P.; Stapleton, R.E.; Mollenkopf, H.C.; McCormick, J.R.
1980-01-01
The object of Phase III of the program has been to investigate the effects of various processes, metal contaminants and contaminant-process interactions on the performance of terrestrial silicon solar cells. The study encompassed a variety of tasks including: (1) a detailed examination of thermal processing effects, such as HCl and POCl/sub 3/ gettering on impurity behavior, (2) completion of the data base and modeling for impurities in n-base silicon, (3) extension of the data base on p-type material to include elements likely to be introduced during the production, refining, or crystal growth of silicon, (4) effects on cell performance of anisotropic impurity distributions in large CZ crystals and silicon webs, and (5) a preliminary assessment of the permanence of the impurity effects. Two major topics are treated: methods to measure and evaluate impurity effects in silicon and comprehensive tabulations of data derived during the study. For example, discussions of deep level spectroscopy, detailed dark I-V measurements, recombination lifetime determination, scanned laser photo-response, and conventional solar cell I-V techniques, as well as descriptions of silicon chemical analysis are included. Considerable data are tabulated on the composition, electrical, and solar cell characteristics of impurity-doped silicon.
Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata
Sztepanacz, Jacqueline L.; Blows, Mark W.
2015-01-01
In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700
Research in collegiate mathematics education III
Arcavi, A; Kaput, Jim; Dubinsky, Ed; Dick, Thomas
1998-01-01
Volume III of Research in Collegiate Mathematics Education (RCME) presents state-of-the-art research on understanding, teaching, and learning mathematics at the post-secondary level. This volume contains information on methodology and research concentrating on these areas of student learning: Problem solving. Included here are three different articles analyzing aspects of Schoenfeld's undergraduate problem-solving instruction. The articles provide new detail and insight on a well-known and widely discussed course taught by Schoenfeld for many years. Understanding concepts. These articles fe
Regression between earthquake magnitudes having errors with known variances
Pujol, Jose
2016-07-01
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
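For the homoscedastic case the abstract says admits a closed solution, the standard orthogonal (Deming, error-variance ratio 1) estimator of the line y = a x + b can be written down directly from the sample moments. This is the textbook estimator, offered as a sketch; the paper's exact formulation may differ.

```python
import numpy as np

def orthogonal_fit(X, Y):
    """Closed-form fit of y = a*x + b when both variables carry errors of
    equal variance (orthogonal / Deming regression with lambda = 1).
    Requires a nonzero sample covariance between X and Y."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    sxx = np.var(X)
    syy = np.var(Y)
    sxy = np.mean((X - X.mean()) * (Y - Y.mean()))
    a = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    b = Y.mean() - a * X.mean()
    return a, b
```

Unlike ordinary least squares of Y on X, this estimator is symmetric in the roles of the two magnitudes, which is the property at stake when both Mw and mb carry measurement error.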
Critical points of multidimensional random Fourier series: Variance estimates
Nicolaescu, Liviu I.
2016-08-01
We investigate the number of critical points of a Gaussian random smooth function u_ε on the m-torus T^m ≔ ℝ^m/ℤ^m approximating the Gaussian white noise as ε → 0. Let N(u_ε) denote the number of critical points of u_ε. We prove the existence of constants C, C′ such that as ε goes to zero, the expectation of the random variable ε^m N(u_ε) converges to C, while its variance is extremely small and behaves like C′ε^m.
Generalized Minimum Variance Control for MDOF Structures under Earthquake Excitation
Directory of Open Access Journals (Sweden)
Lakhdar Guenfaf
2016-01-01
Control of a multi-degree-of-freedom structural system under earthquake excitation is investigated in this paper. A control approach based on the Generalized Minimum Variance (GMV) algorithm is developed and presented. Our approach is a generalization to multivariable systems of the GMV strategy designed initially for single-input single-output (SISO) systems. Kanai-Tajimi and Clough-Penzien models are used to generate the seismic excitations; these models are computed using site-specific soil parameters. Simulation tests using a 3-DOF structure are performed and show the effectiveness of the control method.
Stable limits for sums of dependent infinite variance random variables
DEFF Research Database (Denmark)
Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas;
2011-01-01
The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most...... of these results are qualitative in the sense that the parameters of the limit distribution are expressed in terms of some limiting point process. In this paper we will be able to determine the parameters of the limiting stable distribution in terms of some tail characteristics of the underlying stationary...
Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging
DEFF Research Database (Denmark)
Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt
2007-01-01
This paper investigates the application of adaptive beamforming in medical ultrasound imaging. A minimum variance (MV) approach for near-field beamforming of broadband data is proposed. The approach is implemented in the frequency domain, and it provides a set of adapted, complex apodization...... weights for each frequency sub-band. As opposed to the conventional, Delay and Sum (DS) beamformer, this approach is dependent on the specific data. The performance of the proposed MV beamformer is tested on simulated synthetic aperture (SA) ultrasound data, obtained using Field II. For the simulations...
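The core of the minimum-variance beamformer described above is the per-sub-band Capon weight vector w = R^{-1} a / (a^H R^{-1} a), which minimizes output power subject to unit gain in the look direction. The sketch below adds diagonal loading for robustness, a common practical choice rather than something stated in the abstract.

```python
import numpy as np

def mv_weights(R, a, eps=1e-3):
    """Minimum-variance (Capon) apodization weights for one frequency
    sub-band: w = Rl^{-1} a / (a^H Rl^{-1} a), where Rl is the sample
    covariance R with diagonal loading eps * tr(R)/n added for stability.
    The distortionless constraint w^H a = 1 holds by construction."""
    n = R.shape[0]
    Rl = R + eps * np.trace(R).real / n * np.eye(n)
    Ri_a = np.linalg.solve(Rl, a)           # Rl^{-1} a without explicit inverse
    return Ri_a / (a.conj() @ Ri_a)
```

Unlike the fixed apodization of a delay-and-sum beamformer, these weights depend on the received data through R, which is what yields the improved resolution reported for synthetic aperture imaging.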
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Energy Technology Data Exchange (ETDEWEB)
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D. [and others]
1997-05-01
AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
Multivariate variance targeting in the BEKK-GARCH model
DEFF Research Database (Denmark)
Pedersen, Rasmus S.; Rahbæk, Anders
2014-01-01
This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...... to these two steps. Strong consistency is established under weak moment conditions, while sixth-order moment restrictions are imposed to establish asymptotic normality. Included simulations indicate that the multivariately induced higher-order moment constraints are necessary...
A guide to SPSS for analysis of variance
Levine, Gustav
2013-01-01
This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce
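The same kind of one-way analysis of variance that the book runs through SPSS can be sketched in Python with SciPy; the three groups of measurements below are invented for illustration:

```python
from scipy import stats

# Three levels of a single factor; the measurements are made up for illustration
level_a = [4.1, 5.0, 4.8, 5.2]
level_b = [5.9, 6.3, 6.1, 5.8]
level_c = [4.0, 4.4, 4.2, 4.6]

# One-way ANOVA: tests whether the level means differ by more than
# within-level variation would explain
f_stat, p_value = stats.f_oneway(level_a, level_b, level_c)
print(f_stat, p_value)
```

Adding a factor level amounts to passing another group to `f_oneway`, the analogue of changing the number of levels in the SPSS command specifications the book describes.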
Variance-optimal hedging for processes with stationary independent increments
DEFF Research Database (Denmark)
Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.
We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...... show that for this class of processes the optimal endowment and strategy can be expressed more explicitly. The corresponding formulas involve the moment resp. cumulant generating function of the underlying process and a Laplace- or Fourier-type representation of the contingent claim. An example...
Two-dimensional finite-element temperature variance analysis
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results, so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.
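The sensitivity analysis described above amounts to first-order variance propagation. A minimal sketch follows; the Jacobian entries and input variances are hypothetical, not values from the report:

```python
import numpy as np

# First-order variance propagation: Cov[T] ≈ J Cov[p] J^T, where
# J[i, j] = dT_i/dp_j is the sensitivity of predicted temperature i
# to input parameter j (e.g. a boundary temperature, a conductivity).
J = np.array([[0.8, 0.1],
              [0.5, 0.4]])          # hypothetical sensitivities
var_p = np.array([4.0, 1.0])        # hypothetical input variances

cov_T = J @ np.diag(var_p) @ J.T
print(np.diag(cov_T))               # variances of the predicted temperatures: [2.57, 1.16]
```

Here the large boundary-temperature variance (4.0) dominates both output variances, which mirrors the report's finding that predicted temperatures are most sensitive to fixed-boundary values.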
Local orbitals by minimizing powers of the orbital variance
DEFF Research Database (Denmark)
Jansik, Branislav; Høst, Stinne; Kristensen, Kasper;
2011-01-01
It is demonstrated that a set of local orthonormal Hartree–Fock (HF) molecular orbitals can be obtained for both the occupied and virtual orbital spaces by minimizing powers of the orbital variance using the trust-region algorithm. For a power exponent equal to one, the Boys localization function...... is obtained. For increasing power exponents, the penalty for delocalized orbitals is increased and smaller maximum orbital spreads are encountered. Calculations on superbenzene, C60, and a fragment of the titin protein show that for a power exponent equal to one, delocalized outlier orbitals may...
A Mean-Variance Portfolio Optimal Under Utility Pricing
Directory of Open Access Journals (Sweden)
Hürlimann Werner
2006-01-01
Full Text Available An expected utility model of asset choice, which takes into account asset pricing, is considered. The resulting portfolio selection problem under utility pricing is solved under several assumptions, including quadratic utility, exponential utility and multivariate symmetric elliptical returns. The unique solution obtained, called the optimal utility portfolio, is shown to be mean-variance efficient in the classical sense. Various questions, including conditions for complete diversification and the behavior of the optimal portfolio under univariate and multivariate ordering of risks as well as risk-adjusted performance measurement, are discussed.
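A classical mean-variance efficient allocation of the kind the abstract refers to can be sketched in closed form; the return vector and covariance matrix below are hypothetical:

```python
import numpy as np

# Hypothetical expected returns and covariance matrix for three assets
mu = np.array([0.08, 0.10, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

# Unconstrained mean-variance weights are proportional to Sigma^{-1} mu;
# normalizing enforces the budget constraint sum(w) = 1.
w = np.linalg.solve(Sigma, mu)
w /= w.sum()
print(w, w @ mu)   # weights and the portfolio's expected return
```

Any portfolio on the classical efficient frontier has this Sigma-inverse-times-mu structure; the paper's contribution is showing that the optimal utility portfolio lands on that frontier too.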
Semiconductors. Subvol. A. New data and updates for I-VII, III-V, III-VI and IV-VI compounds
Energy Technology Data Exchange (ETDEWEB)
Roessler, U (ed.) [Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Dietl, T.; Dobrowolski, W.; Story, T. [Polish Academy of Sciences, Warszawa (Poland). Lab. for Cryogenic and Spintronic Research; Fernandes da Silva, E.C. [Universidade de Sao Paulo, SP (Brazil). Lab. de Novos Materiais Semiconductores; Hoenerlage, B. [IPCMS/GONLO, 67 - Strasbourg (France); Meyer, B.K. [Giessen Univ. (Germany). 1. Physikalisches Inst.
2008-07-01
The Landolt-Boernstein subvolumes III/44A and III/44B update the existing 8 volumes III/41 about semiconductors and contain new data and updates for I-VII, III-V, III-VI, IV-VI and II-VI compounds. The text, tables, figures and references are provided in self-contained document files, each one dedicated to a substance and property. The first subvolume III/44A contains a "Systematics of Semiconductor Properties", which should help the non-specialist user to understand the meaning of the material parameters. Hyperlinked lists of substances and properties lead directly to the documents and make the electronic version an easy-to-use source of semiconductor data. In the new updates III/44A and III/44B, links to existing material in III/41 or to related documents for a specific substance are also included. (orig.)
Replica approach to mean-variance portfolio optimization
Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre
2016-12-01
We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the number of assets in the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
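The vanishing of the in-sample optimum as r = N/T approaches 1 is easy to see in a small simulation; the asset count, sample sizes, and iid standard-normal returns below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20  # number of assets; the true covariance is the identity for simplicity

def avg_insample_minvar(T, trials=100):
    """Average in-sample variance of the estimated minimum-variance portfolio."""
    out = []
    for _ in range(trials):
        X = rng.standard_normal((T, N))        # T observations of N asset returns
        S = X.T @ X / T                        # sample covariance matrix
        w = np.linalg.solve(S, np.ones(N))
        w /= w.sum()                           # budget constraint sum(w) = 1
        out.append(w @ S @ w)                  # in-sample portfolio variance
    return float(np.mean(out))

v_far = avg_insample_minvar(T=400)   # r = N/T = 0.05, far from the critical point
v_near = avg_insample_minvar(T=25)   # r = 0.8, near the critical point r = 1
print(v_far, v_near)  # the in-sample optimum shrinks toward 0 as r -> 1
```

The shrinking in-sample variance is exactly the "dangers of in-sample estimates" the abstract warns about: the optimizer looks better and better in-sample while the out-of-sample error diverges.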
DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN VARIANCE MODEL
Directory of Open Access Journals (Sweden)
I GEDE ERY NISCAHYANA
2016-08-01
Full Text Available When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean variance models are a suitable method to model the behavior of the stocks. In this thesis, the implementation of the conditional mean variance model on autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimations were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) of FMII stock, 0.0473 (5%) of BNLI stock, 0% of SMDM stock, and 1% of SMGR stock.
Facial Feature Extraction Method Based on Coefficients of Variances
Institute of Scientific and Technical Information of China (English)
Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang
2007-01-01
Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with the problem. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors which maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
Cosmic variance of the galaxy cluster weak lensing signal
Gruen, D; Becker, M R; Friedrich, O; Mana, A
2015-01-01
Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m=10^14...10^15 h^-1 M_sol, z=0.25...0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ~20 per cent uncertainty from cosmic variance alone at M_200m=10^15 h^-1 M_sol and z=0.25), but significant also...
Mean-Variance-Validation Technique for Sequential Kriging Metamodels
Energy Technology Data Exchange (ETDEWEB)
Lee, Tae Hee; Kim, Ho Sung [Hanyang University, Seoul (Korea, Republic of)
2010-05-15
The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. A leave-k-out cross-validation technique not only involves a considerably high computational cost but also cannot be used to measure the fidelity of metamodels. Recently, the mean₀ validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, use of the mean₀ validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, and it can thus be used to determine a stopping criterion for sequential sampling of metamodels.
Infinite Variance in Fermion Quantum Monte Carlo Calculations
Shi, Hao
2015-01-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties, without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, lattice QCD calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied upon to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple sub-areas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations turn out to have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calc...
Deterministic mean-variance-optimal consumption and investment
DEFF Research Database (Denmark)
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...
The Variance of Energy Estimates for the Product Model
Directory of Open Access Journals (Sweden)
David Smallwood
2003-01-01
A random process, {x(t)}, that is the product of a slowly varying random window, {w(t)}, and a stationary random process, {g(t)}, is defined; a single realization of the process is denoted x(t). This is slightly different from the usual definition of the product model, where the window is typically defined as deterministic. An estimate of the energy (the zero-order temporal moment; only in special cases is this the physical energy) of the random process, {x(t)}, is defined as m₀ = ∫−∞∞ |x(t)|² dt = ∫−∞∞ |w(t)g(t)|² dt. Relationships for the mean and variance of the energy estimates, m₀, are then developed. It is shown that for many cases the uncertainty (4π times the product of the rms duration, Dt, and the rms bandwidth, Df) is approximately the inverse of the normalized variance of the energy. The uncertainty is a quantitative measure of the expected error in the energy estimate. If a transient has a significant random component, a small uncertainty parameter implies a large error in the energy estimate. Attempts to resolve a time/frequency spectrum near the uncertainty limits of a transient with a significant random component will result in large errors in the spectral estimates.
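The energy estimate m₀ and its normalized variance can be checked with a small Monte Carlo; the Gaussian window and the white-noise stand-in for {g(t)} are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(-1.0, 1.0, 2001)
dt = t[1] - t[0]
w = np.exp(-t**2 / 0.05)            # slowly varying window w(t), deterministic here

# Monte Carlo over realizations of the stationary process g(t)
m0 = []
for _ in range(500):
    g = rng.standard_normal(t.size)       # white-noise stand-in for g(t)
    x = w * g                             # product model x(t) = w(t) g(t)
    m0.append(np.sum(np.abs(x)**2) * dt)  # m0 = integral of |x(t)|^2 dt
m0 = np.asarray(m0)

norm_var = m0.var() / m0.mean() ** 2
print(m0.mean(), norm_var)          # normalized variance of the energy estimate
```

For this windowed white noise the normalized variance is small because the window spans many independent samples; a narrower window (smaller duration-bandwidth product) would raise it, in line with the uncertainty relationship the abstract describes.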
Cosmic variance in the nanohertz gravitational wave background
Roebber, Elinore; Holz, Daniel; Warren, Michael
2015-01-01
We use large N-body simulations and empirical scaling relations between dark matter halos, galaxies, and supermassive black holes to estimate the formation rates of supermassive black hole binaries and the resulting low-frequency stochastic gravitational wave background (GWB). We find this GWB to be relatively insensitive ($\\lesssim10\\%$) to cosmological parameters, with only slight variation between WMAP5 and Planck cosmologies. We find that uncertainty in the astrophysical scaling relations changes the amplitude of the GWB by a factor of $\\sim 2$. Current observational limits are already constraining this predicted range of models. We investigate the Poisson variance in the amplitude of the GWB for randomly-generated populations of supermassive black holes, finding a scatter of order unity per frequency bin below 10 nHz, and increasing to a factor of $\\sim 10$ near 100 nHz. This variance is a result of the rarity of the most massive binaries, which dominate the signal, and acts as a fundamental uncertainty ...
Argentine Population Genetic Structure: Large Variance in Amerindian Contribution
Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.
2011-01-01
Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using a weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and the individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183
Worldwide variance in the potential utilization of Gamma Knife radiosurgery.
Hamilton, Travis; Dade Lunsford, L
2016-12-01
OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimate of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use in many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance in GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.
VARIANCE OF NONLINEAR PHASE NOISE IN FIBER-OPTIC SYSTEM
Directory of Open Access Journals (Sweden)
RANJU KANWAR
2013-04-01
Full Text Available In a communication system, the noise process must be known in order to compute the system performance. Nonlinear effects act as a strong perturbation in long-haul systems. This perturbation affects the signal when it interacts with amplitude noise, and results in random motion of the phase of the signal. Based on perturbation theory, the variance of nonlinear phase noise contaminated by both self- and cross-phase modulation is derived analytically for a phase-shift-keying system. Through this work, it is found that for longer transmission distances, 40-Gb/s systems are more sensitive to nonlinear phase noise than 50-Gb/s systems. Also, when transmitting data through the fiber-optic link, bit errors are produced due to various effects such as noise from optical amplifiers and nonlinearity occurring in the fiber. On the basis of the simulation results, we have compared the bit error rate based on 8-PSK with theoretical results; the results show that in a real-time approach, the bit error rate is higher for the same signal-to-noise ratio. MATLAB software is used to validate the analytical expressions for the variance of nonlinear phase noise.
Hidden temporal order unveiled in stock market volatility variance
Directory of Open Access Journals (Sweden)
Y. Shapira
2011-06-01
Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behaviors. Consequently much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large and thus cannot be the output of a random series, unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of the daily returns but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.
Energy Technology Data Exchange (ETDEWEB)
Bataille, Christian [Assemblee Nationale, Paris (France)
1999-02-11
The third volume of the Report on behalf of the Production and Exchange Commission on the draft of the law No. 1253 concerning the Revamping and Expanding Domestic Electricity Supply contains Appendices. The appendix number 1 presents the directive 96/92 CE of the European Parliament and Council of 19 December 1996, concerning common rules referring to the electricity internal market. It contains the chapters titled: 1. Field of application and definitions; 2. General rules for sector organization; 3. Production; 4. Exploitation of the transport grid; 5. Exploitation of the distribution grid; 6. Accounting dissociation and transparency; 7. Organization of the grid access; 8. Final dispositions. The appendix number 2 gives the law no. 46 - 628 of 8 April, modified, on the nationalization of the electricity and gas. The third appendix reproduces Decree no. 55 - 662 of 20 May 1955 concerning relationships between the establishments aimed by the articles 2 and 23 of the law of 8 April 1946 and the autonomous producers of electric energy. The appendix number 4 contains the notification of State Council of 7 July 1994 regarding the diversification of EDF and GDF activities. The fifth appendix is a chronological list of the European negotiations concerning the opening of the electricity market (1987 -1997). Finally, a list of following abbreviations is given: ART, ATR, CNES, CRE, CTE, DNN, FACE, FPE, GRT, IEG, INB, PPI, RAG and SICAE.
Brownian limits, local limits, extreme value and variance asymptotics for convex hulls in the ball
Calka, Pierre; Yukich, J E
2009-01-01
The paper of Schreiber and Yukich [40] establishes an asymptotic representation for random convex polytope geometry in the unit ball $\\B_d, d \\geq 2,$ in terms of the general theory of stabilizing functionals of Poisson point processes as well as in terms of the so-called generalized paraboloid growth process. This paper further exploits this connection, introducing also a dual object termed the paraboloid hull process. Via these growth processes we establish local functional and measure-level limit theorems for the properly scaled radius-vector and support functions as well as for curvature measures and $k$-face empirical measures of convex polytopes generated by high density Poisson samples. We use general techniques of stabilization theory to establish Brownian sheet limits for the defect volume and mean width functionals, and we provide explicit variance asymptotics and central limit theorems for the $k$-face and intrinsic volume functionals. We establish extreme value theorems for radius-vector and suppo...
Palcoux, Sébastien
2011-01-01
Using unusual objects in the theory of von Neumann algebras, such as the Chinese game Go or Conway's Game of Life (generalized to finitely presented groups), we are able to build, by hand, many type III factors.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction
Directory of Open Access Journals (Sweden)
Ling Huang
2017-02-01
Full Text Available Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of the ionosphere and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation, around 3 TECU, than others. The residual results show that the interpolation precision of the
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed
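Ordinary kriging, the baseline the paper compares against, can be sketched compactly. The exponential semivariogram parameters and station layout below are hypothetical, not taken from the CMONOC data:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, rng_=10.0, sill=1.0, nugget=0.1):
    """Ordinary kriging of z at point xy0 with an exponential semivariogram."""
    def gamma(h):
        # semivariogram: 0 at h = 0, nugget plus structured part otherwise
        return np.where(h > 0, nugget + (sill - nugget) * (1 - np.exp(-h / rng_)), 0.0)
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[:n, n] = A[n, :n] = 1.0                  # unbiasedness (Lagrange) row/column
    b = np.append(gamma(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    lam = np.linalg.solve(A, b)[:n]            # kriging weights, which sum to 1
    return lam @ z

# Four hypothetical stations with TEC values (in TECU)
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([25.0, 30.0, 27.0, 32.0])
print(ordinary_kriging(xy, z, np.array([5.0, 5.0])))  # center of the square: 28.5
```

The paper's method differs from this baseline by estimating unknown variance components for both the ionospheric signal and the measurement errors instead of fixing the variogram parameters in advance.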
Interdependence of NAFTA capital markets: A minimum variance portfolio approach
Directory of Open Access Journals (Sweden)
López-Herrera Francisco
2014-01-01
We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may either be considered irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
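The minimum variance portfolio used in this record has a simple closed form in the two-asset case, which can be sketched as follows. The variance and covariance figures are invented for illustration and are not calibrated to the NAFTA markets:

```python
def min_variance_weights(var1, var2, cov):
    """Closed-form global minimum-variance weights for two risky assets
    (shorting allowed): w1 = (var2 - cov) / (var1 + var2 - 2*cov)."""
    w1 = (var2 - cov) / (var1 + var2 - 2.0 * cov)
    return w1, 1.0 - w1

def portfolio_variance(w1, w2, var1, var2, cov):
    """Var(w1*R1 + w2*R2) for two assets with the given (co)variances."""
    return w1 * w1 * var1 + w2 * w2 * var2 + 2.0 * w1 * w2 * cov

# Illustrative annualized return variances and covariance (hypothetical)
var_a, var_b, cov_ab = 0.04, 0.09, 0.018
w1, w2 = min_variance_weights(var_a, var_b, cov_ab)
pvar = portfolio_variance(w1, w2, var_a, var_b, cov_ab)
```

By construction the resulting variance can never exceed that of the least volatile single asset, since holding only that asset is itself a feasible portfolio; the time-varying version in the paper re-solves this problem as the covariance estimates evolve.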
Estimation of measurement variance in the context of environment statistics
Maiti, Pulakesh
2015-02-01
The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics is required to produce higher-quality statistical information. For this, timely, reliable and comparable data are needed. A lack of proper and uniform definitions and of unambiguous classifications poses serious problems in procuring quality data. These cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on their valuation in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered here is two-stage sampling.
Diffusion-Based Trajectory Observers with Variance Constraints
DEFF Research Database (Denmark)
Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo
Diffusion-based trajectory observers have recently been proposed as a simple and efficient framework for solving diverse smoothing problems in underwater navigation, for instance obtaining estimates of the trajectories of an underwater vehicle given position fixes from an acoustic positioning system and velocity measurements from a DVL. The observers are conceptually simple and can easily deal with the problems brought about by asynchronous measurements and dropouts. In its original formulation, the trajectory observer depends on a user-defined constant gain that controls the level of smoothing and is determined by trial and error. This paper presents a methodology for choosing the observer gain that takes into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented.
Static models, recursive estimators and the zero-variance approach
Rubino, Gerardo
2016-01-07
When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state space. In the Monte Carlo domain, one of the most significant difficulties is the rare-event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Together they produce a very efficient method that has the right theoretical robustness property, namely Bounded Relative Error. Some examples illustrate the results.
INTERPRETING MAGNETIC VARIANCE ANISOTROPY MEASUREMENTS IN THE SOLAR WIND
Energy Technology Data Exchange (ETDEWEB)
TenBarge, J. M.; Klein, K. G.; Howes, G. G. [Department of Physics and Astronomy, University of Iowa, Iowa City, IA (United States); Podesta, J. J., E-mail: jason-tenbarge@uiowa.edu [Space Science Institute, Boulder, CO (United States)
2012-07-10
The magnetic variance anisotropy (A_m) of the solar wind has been used widely as a method to identify the nature of solar wind turbulent fluctuations; however, a thorough discussion of the meaning and interpretation of the A_m has not appeared in the literature. This paper explores the implications and limitations of using the A_m as a method for constraining the solar wind fluctuation mode composition and presents a more informative method for interpreting spacecraft data. The paper also compares predictions of the A_m from linear theory to nonlinear turbulence simulations and solar wind measurements. In both cases, linear theory compares well and suggests that the solar wind for the interval studied is dominantly Alfvénic in the inertial and dissipation ranges down to scales of kρ_i ≈ 5.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
MARKOV-MODULATED MEAN-VARIANCE PROBLEM FOR AN INSURER
Institute of Scientific and Technical Information of China (English)
Wang Wei; Bi Junna
2011-01-01
In this paper, we consider an insurance company which has the option of investing in a risky asset and a risk-free asset, whose price parameters are driven by a finite state Markov chain. The risk process of the insurance company is modeled as a diffusion process whose diffusion and drift parameters switch over time according to the same Markov chain. We study the Markov-modulated mean-variance problem for the insurer and derive explicitly the closed form of the efficient strategy and efficient frontier. In the case of no regime switching, we can see that the efficient frontier in our paper coincides with that of [10] when there is no pure jump.
Variance component estimates for alternative litter size traits in swine.
Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T
2015-11-01
Litter size at d 5 (LS5) has been shown to be an effective trait to increase total number born (TNB) while simultaneously decreasing preweaning mortality. The objective of this study was to determine the optimal litter size day for selection (i.e., other than d 5). Traits included TNB, number born alive (NBA), litter size at d 2, 5, 10, 30 (LS2, LS5, LS10, LS30, respectively), litter size at weaning (LSW), number weaned (NW), piglet mortality at d 30 (MortD30), and average piglet birth weight (BirthWt). Litter size traits were assigned to biological litters and treated as a trait of the sow. In contrast, NW was the number of piglets weaned by the nurse dam. Bivariate animal models included farm, year-season, and parity as fixed effects. Number born alive was fit as a covariate for BirthWt. Random effects included additive genetics and the permanent environment of the sow. Variance components were plotted for TNB, NBA, and LS2 to LS30 using univariate animal models to determine how variances changed over time. Additive genetic variance was minimized at d 7 in Large White and at d 14 in Landrace pigs. Total phenotypic variance for litter size traits decreased over the first 10 d and then stabilized. Heritability estimates increased between TNB and LS30. Genetic correlations between TNB, NBA, and LS2 to LS29 with LS30 plateaued within the first 10 d. A genetic correlation with LS30 of 0.95 was reached at d 4 for Large White and at d 8 for Landrace pigs. Heritability estimates ranged from 0.07 to 0.13 for litter size traits and MortD30. Birth weight had an h² of 0.24 and 0.26 for Large White and Landrace pigs, respectively. Genetic correlations among LS30, LSW, and NW ranged from 0.97 to 1.00. In the Large White breed, genetic correlations between MortD30 with TNB and LS30 were 0.23 and -0.64, respectively. These correlations were 0.10 and -0.61 in the Landrace breed. A high genetic correlation of 0.98 and 0.97 was observed between LS10 and NW for Large White and
From Means and Variances to Persons and Patterns
Directory of Open Access Journals (Sweden)
James W Grice
2015-07-01
A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based path models in use today, which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to best explanation.
Mean and variance of coincidence counting with deadtime
Yu, D F
2002-01-01
We analyze the first and second moments of the coincidence-counting process for a system affected by paralyzable (extendable) deadtime with (possibly unequal) deadtimes in each singles channel. We consider both 'accidental' and 'genuine' coincidences, and derive exact analytical expressions for the first and second moments of the number of recorded coincidence events under various scenarios. The results include an exact form for the coincidence rate under the combined effects of decay, background, and deadtime. The analysis confirms that coincidence counts are not exactly Poisson, but suggests that the Poisson statistical model that is used for positron emission tomography image reconstruction is a reasonable approximation since the mean and variance are nearly equal.
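The paralyzable (extendable) deadtime model discussed in this record has a well-known closed form for the recorded singles rate under Poisson arrivals, m = n·exp(-n·τ), which can be sketched numerically. The deadtime value and rates below are illustrative, not from the paper:

```python
from math import exp

def observed_rate_paralyzable(true_rate, deadtime):
    """Paralyzable (extendable) deadtime: an event is recorded only if no
    other event arrived within `deadtime` before it, so for Poisson
    arrivals the recorded rate is m = n * exp(-n * tau)."""
    return true_rate * exp(-true_rate * deadtime)

tau = 2e-6                      # 2 microsecond deadtime (illustrative)
rates = [1e4, 1e5, 5e5, 1e6]    # true singles rates, counts/s
observed = [observed_rate_paralyzable(n, tau) for n in rates]
```

A characteristic feature of the paralyzable model is that the observed rate peaks at n = 1/τ and then decreases, so a single observed rate can correspond to two true rates.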
Variance of indoor radon concentration: Major influencing factors.
Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M
2016-01-15
Variance of radon concentration in dwelling atmospheres is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling patterns, sample sizes and durations of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: area of the territory, sample size, characteristics of the measurement technique, the geogenic radon potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting control factors to certain levels. Application of the developed approach to characterization of the world population's radon exposure is discussed.
Risk Management - Variance Minimization or Lower Tail Outcome Elimination
DEFF Research Database (Denmark)
Aabo, Tom
2002-01-01
This paper illustrates the profound difference between a risk management strategy of variance minimization and one of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess of future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less, depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations from a corporate value-adding perspective. A cross-case study of blue-chip industrial companies partly supports the empirical use of a risk management strategy of lower tail outcome elimination but does not exclude other factors from (co-)driving the observations.
Analysis of variance of an underdetermined geodetic displacement problem
Energy Technology Data Exchange (ETDEWEB)
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
Objective Bayesian Comparison of Constrained Analysis of Variance Models.
Consonni, Guido; Paroli, Roberta
2016-10-04
In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means [Formula: see text] through an analysis of variance (ANOVA), a model may specify that [Formula: see text], while another one may state that [Formula: see text], and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.
Batch variation between branchial cell cultures: An analysis of variance
DEFF Research Database (Denmark)
Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.
2003-01-01
We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed … and by introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results when we do not know a priori that something went wrong. The ANOVA is a very useful …
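The core ANOVA computation behind this record, partitioning variation between and within batches, can be sketched with a one-way layout; the full study used a three-dimensional design. The batch values below are invented for illustration:

```python
from statistics import mean

def one_way_anova_F(groups):
    """One-way ANOVA F statistic:
    F = (SS_between / (k - 1)) / (SS_within / (N - k))."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Two hypothetical cell-culture batches; the "batch" factor is what the
# record's expanded ANOVA added to absorb batch-to-batch variation
batch_a = [10.1, 10.4, 9.8, 10.2]
batch_b = [12.0, 11.6, 12.3, 11.9]
F = one_way_anova_F([batch_a, batch_b])
```

A large F indicates that between-batch variation dominates within-batch noise, which is exactly the situation where batch must be modeled as a factor rather than the data being discarded.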
Correct use of repeated measures analysis of variance.
Park, Eunsik; Cho, Meehye; Ki, Chang-Seok
2009-02-01
In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).
Hodological resonance, hodological variance, psychosis and schizophrenia: A hypothetical model
Directory of Open Access Journals (Sweden)
Paul Brian Lawrie Birkett
2011-07-01
Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However, it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network ("hodological resonance") becomes so high that it activates spontaneously, to produce a hallucination if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation, such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example Charles Bonnet syndrome. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently, some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizo-affective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy, and the persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular, it suggests a need for more research into psychotic states and for more single-case-based studies in schizophrenia.
Analysis of variance with unbalanced data: an update for ecology & evolution.
Hector, Andy; von Felten, Stefanie; Schmid, Bernhard
2010-03-01
1. Factorial analysis of variance (anova) with unbalanced (non-orthogonal) data is a commonplace but controversial and poorly understood topic in applied statistics. 2. We explain that anova calculates the sum of squares for each term in the model formula sequentially (type I sums of squares) and show how anova tables of adjusted sums of squares are composite tables assembled from multiple sequential analyses. A different anova is performed for each explanatory variable or interaction so that each term is placed last in the model formula in turn and adjusted for the others. 3. The sum of squares for each term in the analysis can be calculated after adjusting only for the main effects of other explanatory variables (type II sums of squares) or, controversially, for both main effects and interactions (type III sums of squares). 4. We summarize the main recent developments and emphasize the shift away from the search for the 'right' anova table in favour of presenting one or more models that best suit the objectives of the analysis.
Energy Technology Data Exchange (ETDEWEB)
Robertson, Brant E.; Stark, Dan P. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Ellis, Richard S. [Department of Astronomy, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Dunlop, James S.; McLure, Ross J.; McLeod, Derek, E-mail: brant@email.arizona.edu [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom)
2014-12-01
Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ∼35% at redshift z ∼ 7 to ≳ 65% at z ∼ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.
López-Sanjuan, C; Hernández-Monteagudo, C; Varela, J; Molino, A; Arnalte-Mur, P; Ascaso, B; Castander, F J; Fernández-Soto, A; Huertas-Company, M; Márquez, I; Martínez, V J; Masegosa, J; Moles, M; Pović, M; Aguerri, J A L; Alfaro, E; Benítez, N; Broadhurst, T; Cabrera-Caño, J; Cepa, J; Cerviño, M; Cristóbal-Hornillos, D; Del Olmo, A; Delgado, R M González; Husillos, C; Infante, L; Perea, J; Prada, F; Quintana, J M
2014-01-01
Our goal is to estimate empirically, for the first time, the cosmic variance that affects merger fraction studies based on close pairs. We compute the merger fraction from photometric redshift close pairs with 10h^-1 kpc <= rp <= 50h^-1 kpc and Dv <= 500 km/s, and measure it in the 48 sub-fields of the ALHAMBRA survey. We study the distribution of the measured merger fractions, that follow a log-normal function, and estimate the cosmic variance sigma_v as the intrinsic dispersion of the observed distribution. We develop a maximum likelihood estimator to measure a reliable sigma_v and avoid the dispersion due to the observational errors (including the Poisson shot noise term). The cosmic variance of the merger fraction depends mainly on (i) the number density of the populations under study, both for the principal (n_1) and the companion (n_2) galaxy in the close pair, and (ii) the probed cosmic volume V_c. We find a significant dependence on neither the search radius used to define close companions, t...
Silver(II) Oxide or Silver(I,III) Oxide?
Tudela, David
2008-01-01
The often-called silver peroxide and silver(II) oxide, AgO or Ag₂O₂, is actually a mixed oxidation state silver(I,III) oxide. A thermochemical cycle, with lattice energies calculated within the "volume-based" thermodynamic approach, explains why the silver(I,III) oxide is more stable than the hypothetical silver(II) oxide.
Continuous-Time Mean-Variance Portfolio Selection under the CEV Process
Hui-qiang Ma
2014-01-01
We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...
Understanding the influence of watershed storage caused by human interferences on ET variance
Zeng, R.; Cai, X.
2014-12-01
Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually filters ET variance through the use of surface water and groundwater; however, excessive irrigation may deplete watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko hypothesis. It decomposes the ET variance into the variances of precipitation, potential ET, and catchment storage change, and their covariances. The contributions to ET variance from the various components are scaled by weighting functions expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively shorter time scales. By incorporating storage change caused by human interferences, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.
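The decomposition step in this record rests on the identity Var(X - Y) = Var(X) + Var(Y) - 2·Cov(X, Y), which can be verified numerically on a toy water balance. The series below are invented, and runoff is ignored for brevity (the paper's framework includes more terms):

```python
def pvar(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def pcov(xs, ys):
    """Population covariance."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# Toy annual balance ET = P - dS (mm/yr), illustrative values only;
# dS mimics storage change, e.g. irrigation drawdown
P  = [900, 1100, 800, 1000, 950]
dS = [-40, 60, -80, 30, 10]
ET = [p - s for p, s in zip(P, dS)]

lhs = pvar(ET)                               # ET variance directly
rhs = pvar(P) + pvar(dS) - 2 * pcov(P, dS)   # decomposed form
```

The covariance term is what captures the "coincidence of water availability and energy supply" effect: a positive Cov(P, dS) (storage filling in wet years) damps ET variance relative to the sum of the individual variances.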
Incremental validity of the WJ III COG: Limited predictive effects beyond the GIA-E.
McGill, Ryan J; Busse, R T
2015-09-01
This study is an examination of the incremental validity of Cattell-Horn-Carroll (CHC) broad clusters from the Woodcock-Johnson III Tests of Cognitive Abilities (WJ III COG) for predicting scores on the Woodcock-Johnson III Tests of Achievement (WJ III ACH). The participants were children and adolescents, ages 6-18 (n = 4,722), drawn from the WJ III standardization sample. The sample was nationally stratified and proportional to U.S. census estimates for race/ethnicity, parent education level, and geographic region. Hierarchical multiple regression analyses were used to assess for cluster-level effects after controlling for the variance accounted for by the General Intellectual Ability-Extended (GIA-E) composite score. The results were interpreted using the R²/ΔR² statistic as the effect size indicator. Consistent with previous studies, the GIA-E accounted for statistically and clinically significant portions of WJ III ACH cluster score variance, with R² values ranging from .29 to .56. WJ III COG CHC cluster scores collectively provided statistically significant incremental variance beyond the GIA-E in all of the regression models, although the effect sizes were consistently negligible to small (average ΔR²(CHC) = .06), with significant effects observed only in the Oral Expression model (ΔR²(CHC) = .23). Individually, the WJ III COG cluster scores accounted for mostly small portions of achievement variance across the prediction models, with a large effect found for the Comprehension-Knowledge cluster in the Oral Expression model (ΔR²(Gc) = .23). The potential clinical and theoretical implications of these results are discussed.
Analysis of variance and functional measurement a practical guide
Weiss, David J
2006-01-01
Chapter I. Introduction; Chapter II. One-way ANOVA; Chapter III. Using the Computer; Chapter IV. Factorial Structure; Chapter V. Two-way ANOVA; Chapter VI. Multi-factor Designs; Chapter VII. Error Purifying Designs; Chapter VIII. Specific Comparisons; Chapter IX. Measurement Issues; Chapter X. Strength of Effect**; Chapter XI. Nested Designs**; Chapter XII. Missing Data**; Chapter XIII. Confounded Designs**; Chapter XIV. Introduction to Functional Measurement**; Terms from Introductory Statistics; References; Subject Index; Name Index
Kiviet, J.F.; Phillips, G.D.A.
2014-01-01
In dynamic regression models, conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques, an approximation is obtained to the bias in variance estimation, yielding a bias-corrected variance estimator. This is achieved for both the standard
A New Approach for Predicting the Variance of Random Decrement Functions
DEFF Research Database (Denmark)
Asmussen, J. C.; Brincker, Rune
1998-01-01
technique is that no consistent approach to estimating the variance of the RD functions is known. Only approximate relations are available, which can be used only under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance...
The pricing of long and short run variance and correlation risk in stock returns
Cosemans, M.
2011-01-01
This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk
Modeling Heterogeneous Variance-Covariance Components in Two-Level Models
Leckie, George; French, Robert; Charlton, Chris; Browne, William
2014-01-01
Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…
Estimation of genetic variation in residual variance in female and male broiler chickens
Mulder, H.A.; Hill, W.G.; Vereijken, A.; Veerkamp, R.F.
2009-01-01
In breeding programs, robustness of animals and uniformity of end product can be improved by exploiting genetic variation in residual variance. Residual variance can be defined as environmental variance after accounting for all identifiable effects. The aims of this study were to estimate genetic va
López-Sanjuan, C.; Cenarro, A. J.; Hernández-Monteagudo, C.; Varela, J.; Molino, A.; Arnalte-Mur, P.; Ascaso, B.; Castander, F. J.; Fernández-Soto, A.; Huertas-Company, M.; Márquez, I.; Martínez, V. J.; Masegosa, J.; Moles, M.; Pović, M.; Aguerri, J. A. L.; Alfaro, E.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; Del Olmo, A.; González Delgado, R. M.; Husillos, C.; Infante, L.; Perea, J.; Prada, F.; Quintana, J. M.
2014-04-01
Aims: Our goal is to estimate empirically, for the first time, the cosmic variance that affects merger fraction studies based on close pairs. Methods: We compute the merger fraction from photometric-redshift close pairs with 10 h^-1 kpc ≤ r_p ≤ 50 h^-1 kpc and Δv ≤ 500 km s^-1, and measure it in the 48 sub-fields of the ALHAMBRA survey. We study the distribution of the measured merger fractions, which follows a log-normal function, and estimate the cosmic variance σ_v as the intrinsic dispersion of the observed distribution. We develop a maximum likelihood estimator to measure a reliable σ_v and avoid the dispersion due to the observational errors (including the Poisson shot-noise term). Results: The cosmic variance σ_v of the merger fraction depends mainly on (i) the number density of the populations under study, for both the principal (n_1) and the companion (n_2) galaxy in the close pair, and (ii) the probed cosmic volume V_c. We do not find a significant dependence on either the search radius used to define close companions, the redshift, or the physical selection (luminosity or stellar mass) of the samples. Conclusions: We have estimated from observations the cosmic variance that affects the measurement of the merger fraction by close pairs. We provide a parametrisation of the cosmic variance with n_1, n_2, and V_c: σ_v ∝ n_1^-0.54 V_c^-0.48 (n_2/n_1)^-0.37. Thanks to this prescription, future merger fraction studies based on close pairs can properly account for the cosmic variance in their results. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (MPIA) at Heidelberg and the Instituto de Astrofísica de Andalucía (IAA-CSIC). Appendix is available in electronic form at http://www.aanda.org
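The power-law parametrisation reported above lends itself to a direct numerical sketch. The snippet below is illustrative only: the normalisation constant k is a hypothetical placeholder (the abstract gives only the scaling exponents), and the input densities and volume are made-up values.

```python
# Sketch of the cosmic-variance scaling reported above:
#   sigma_v ∝ n1^-0.54 * Vc^-0.48 * (n2/n1)^-0.37
# Only the exponents come from the abstract; k is a hypothetical
# normalisation.

def cosmic_variance(n1, n2, vc, k=1.0):
    """Relative cosmic variance of the merger fraction.

    n1, n2 : number densities of principal and companion samples
    vc     : probed cosmic volume
    k      : unknown normalisation (placeholder)
    """
    return k * n1**-0.54 * vc**-0.48 * (n2 / n1)**-0.37

# Doubling the probed volume lowers sigma_v by 2^-0.48 ≈ 0.72, i.e.
# cosmic variance shrinks more slowly than the Poisson 1/sqrt(V).
ratio = cosmic_variance(1e-3, 1e-3, 2.0) / cosmic_variance(1e-3, 1e-3, 1.0)
print(round(ratio, 2))  # ≈ 0.72
```

Note the (n_2/n_1) term: for identical principal and companion samples it drops out, so the sketch then depends only on the shared density and the volume.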
Estimation models of variance components for farrowing interval in swine
Directory of Open Access Journals (Sweden)
Aderbal Cavalcante Neto
2009-02-01
The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental, and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested, which contained the fixed effects (contemporary group and covariables) and the direct additive genetic and residual effects, and varied in their inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain in response to selection. The common litter environmental and maternal genetic effects did not influence this trait. The permanent environmental effect, however, should be considered in genetic models for this trait in swine, because its presence changed the additive genetic variance estimates.
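The heritability estimates quoted above follow the standard animal-model decomposition of phenotypic variance. A minimal sketch of that calculation, with hypothetical component values (the abstract reports only that h² ranged from 0.00 to 0.03):

```python
# Textbook heritability from variance components, as used in animal-model
# analyses like the one above: h2 = Va / (Va + Vpe + Vc + Ve).
# The component values below are hypothetical; the abstract reports only
# that h2 ranged from 0.00 to 0.03 across the eight models.

def heritability(va, vpe=0.0, vc=0.0, ve=0.0):
    """h^2 = additive genetic variance / total phenotypic variance.

    va  : direct additive genetic variance
    vpe : permanent environmental variance
    vc  : common litter environmental variance
    ve  : residual variance
    """
    vp = va + vpe + vc + ve
    return va / vp

# A tiny additive variance against a large residual, mimicking the low
# heritability reported for farrowing interval.
h2 = heritability(va=3.0, vpe=10.0, ve=87.0)
print(round(h2, 2))  # 0.03
```

The sketch also shows why adding a permanent environmental term changes the estimate: any variance moved out of Va (or into the denominator) shifts h² directly.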
Walker, M. D.; Matthews, J. C.; Asselin, M.-C.; Watson, C. C.; Saleem, A.; Dickinson, C.; Charnley, N.; Julyan, P. J.; Price, P. M.; Jones, T.
2010-11-01
The precision of biological parameter estimates derived from dynamic PET data can be limited by the number of acquired coincidence events (prompts and randoms). These numbers are affected by the injected activity (A0). The benefits of optimizing A0 were assessed using a new model of data variance which is formulated as a function of A0. Seven cancer patients underwent dynamic [15O]H2O PET scans (32 scans) using a Biograph PET-CT scanner (Siemens), with A0 varied (142-839 MBq). These data were combined with simulations to (1) determine the accuracy of the new variance model, (2) estimate the improvements in parameter estimate precision gained by optimizing A0, and (3) examine changes in precision for different size regions of interest (ROIs). The new variance model provided a good estimate of the relative variance in dynamic PET data across a wide range of A0s and time frames for FBP reconstruction. Patient data showed that relative changes in estimate precision with A0 were in reasonable agreement with the changes predicted by the model: Pearson's correlation coefficients were 0.73 and 0.62 for perfusion (F) and the volume of distribution (VT), respectively. The between-scan variability in the parameter estimates agreed with the estimated precision for small ROIs (<5 mL). An A0 of 500-700 MBq was near optimal for estimating F and VT from abdominal [15O]H2O scans on this scanner. This optimization improved the precision of parameter estimates for small ROIs (<5 mL), with an injection of 600 MBq reducing the standard error on F by a factor of 1.13 as compared to the injection of 250 MBq, but by the more modest factor of 1.03 as compared to A0 = 400 MBq.
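The existence of a finite optimal injected activity can be illustrated with a generic count-rate sketch. This is not the study's variance model (which is not reproduced in the abstract): it simply combines dead-time losses in true coincidences with randoms that grow quadratically in activity, so noise-equivalent counts peak at an intermediate A0. All constants below are made up.

```python
import math

# Generic, illustrative count-rate model (not the paper's model):
#   trues   T(A) = a * A * exp(-A / A_dead)   (paralyzable dead time)
#   randoms R(A) = b * A**2                   (randoms grow quadratically)
# Noise-equivalent count rate NEC = T^2 / (T + R); relative data variance
# scales roughly as 1/NEC, so a peak in NEC at intermediate activity
# corresponds to an intermediate variance-minimizing A0.

def nec(activity_mbq, a=1.0, a_dead=600.0, b=1e-3):
    trues = a * activity_mbq * math.exp(-activity_mbq / a_dead)
    randoms = b * activity_mbq**2
    return trues**2 / (trues + randoms)

# NEC rises, peaks at an intermediate activity, then falls:
for a0 in (100, 500, 1200):
    print(a0, round(nec(a0), 1))
```

With these made-up constants the sketch peaks in the few-hundred-MBq range, qualitatively matching the study's finding that 500–700 MBq was near optimal on its scanner.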
DEFF Research Database (Denmark)
Pontoppidan, Maria
2012-01-01
Article on the last Slavic prince of Rügen, Wizlaw III (1265/68-1325), who has traditionally been identified with the minnesinger Wizlaw the Young. Discusses the surviving songs and the role of minnesang at the Rügen princely court.
Gover, A. Rod; Waldron, Andrew
2017-09-01
We develop a universal distributional calculus for regulated volumes of metrics that are suitably singular along hypersurfaces. When the hypersurface is a conformal infinity we give simple integrated distribution expressions for the divergences and anomaly of the regulated volume functional valid for any choice of regulator. For closed hypersurfaces or conformally compact geometries, methods from a previously developed boundary calculus for conformally compact manifolds can be applied to give explicit holographic formulæ for the divergences and anomaly expressed as hypersurface integrals over local quantities (the method also extends to non-closed hypersurfaces). The resulting anomaly does not depend on any particular choice of regulator, while the regulator dependence of the divergences is precisely captured by these formulæ. Conformal hypersurface invariants can be studied by demanding that the singular metric obey, smoothly and formally to a suitable order, a Yamabe type problem with boundary data along the conformal infinity. We prove that the volume anomaly for these singular Yamabe solutions is a conformally invariant integral of a local Q-curvature that generalizes the Branson Q-curvature by including data of the embedding. In each dimension this canonically defines a higher dimensional generalization of the Willmore energy/rigid string action. Recently, Graham proved that the first variation of the volume anomaly recovers the density obstructing smooth solutions to this singular Yamabe problem; we give a new proof of this result employing our boundary calculus. Physical applications of our results include studies of quantum corrections to entanglement entropies.
Combustor Design Criteria Validation. Volume III. User’s Manual
1979-02-01
…plenum annulus is conducted based upon the generalized one-dimensional continuous-flow analysis approach of Shapiro. The analysis considers the… It was shown by the authors that a good correlation with the burning rate data could be obtained by taking thermal conductivity and C_D as a… calculated using the coefficient given in Equation 39: h = (2k/D)(1 + 0.3 Pr^(1/3) Re^(1/2)), where k is the thermal conductivity of the fuel vapor.
Snohomish Estuary Wetlands Study Volume III. Classification and Mapping
1978-07-01
…an abundance of other flowering annuals and perennials are characteristic. S312 Beach Grassland: stands of beach or dune grasses closely associated… Jetty Island, 27 September 1977. Scientific name / common name: Achillea millefolium, Common Yarrow; Agoseris sp., False dandelion; Alnus rubra, Red Alder.
Annotated Bibliography for Lake Erie. Volume III. Engineering,
1974-10-01
…redistributing the heat gained through the surface. Cross-references: Benninghoff, W. S., see A. L. Stevenson, No. 527; Berg, D. W., see J. H. Balsillie, No. 79. 93. Berg, D. W. … discharges on the water uses are discussed together with control measures required to protect the uses. 527. Stevenson, A. L. and W. S. Benninghoff … amorphous muck. Mesic site conditions with mull humus are indicated. The forest bed is overlain successively by fibrous (marsh?) peat, pond ooze, and…
FY2000 End of Year Report: Volume III
2000-11-01
…of one experiment informing and shaping the design of the next. Virtual and constructive simulations, discussions and war games, constructive model case… Glossary fragments: …Command Vietnam; MLRS, Multiple Launch Rocket System; MOBA, Military Operations in Built-up Areas; MOUT, Military Operations in Urban Terrain; MRT, Mobile… "Operations in Urban Terrain," by Jeb Stewart, looks at the role of engineers in the Army After Next and urban combat. During Army After Next war games the…
Passive solar design handbook. Volume III. Passive solar design analysis
Energy Technology Data Exchange (ETDEWEB)
Jones, R.W.; Balcomb, J.D.; Kosiewicz, C.E.; Lazarus, G.S.; McFarland, R.D.; Wray, W.O.
1982-07-01
Simple analytical methods concerning the design of passive solar heating systems are presented with an emphasis on the average annual heating energy consumption. Key terminology and methods are reviewed. The solar load ratio (SLR) is defined, and its relationship to analysis methods is reviewed. The annual calculation, or Load Collector Ratio (LCR) method, is outlined. Sensitivity data are discussed. Information is presented on balancing conservation and passive solar strategies in building design. Detailed analysis data are presented for direct gain and sunspace systems, and details of the systems are described. Key design parameters are discussed in terms of their impact on annual heating performance of the building. These are the sensitivity data. The SLR correlations for the respective system types are described. The monthly calculation, or SLR method, based on the SLR correlations, is reviewed. Performance data are given for 9 direct gain systems and 15 water wall and 42 Trombe wall systems. (LEW)
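The annual LCR method described above reduces to a compact calculation: form the load collector ratio, look up the solar savings fraction (SSF) in the handbook's city/system tables, and compute auxiliary heat. The sketch below shows only that structure; the tiny SSF table is a hypothetical stand-in, not the handbook's published data.

```python
# Skeleton of the annual Load Collector Ratio (LCR) method:
#   LCR = building load coefficient / projected collector area,
#   SSF looked up from tables, auxiliary heat = (1 - SSF) * BLC * DD.
# The table below is a hypothetical stand-in for the handbook's
# per-city, per-system LCR tables; only the calculation structure
# is taken from the text.

HYPOTHETICAL_SSF_TABLE = {  # LCR (Btu/degF·day·ft^2) -> SSF
    20: 0.60, 40: 0.45, 80: 0.30, 160: 0.18,
}

def annual_aux_heat(blc, area, degree_days, ssf_table=HYPOTHETICAL_SSF_TABLE):
    """Annual auxiliary heating energy in Btu.

    blc         : building load coefficient, Btu/degF·day
    area        : projected passive-collector area, ft^2
    degree_days : annual heating degree days, degF·day
    """
    lcr = blc / area
    # Nearest-entry lookup; real tables are interpolated.
    key = min(ssf_table, key=lambda k: abs(k - lcr))
    ssf = ssf_table[key]
    return (1.0 - ssf) * blc * degree_days

aux = annual_aux_heat(blc=8000, area=200, degree_days=6000)
print(aux)  # LCR 40 -> SSF 0.45 -> (1 - 0.45) * 8000 * 6000 ≈ 2.64e7 Btu
```

The monthly SLR method refines this by evaluating the solar load ratio correlation month by month rather than using a single annual table lookup.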
Solar central receiver prototype heliostat. Volume III. Cost estimates
Energy Technology Data Exchange (ETDEWEB)
None
1978-06-01
The Boeing heliostat design can be produced and installed for a Capital Cost of $42 per square meter at high commercial plant quantities and rates. This is 14% less than the DOE cost target. Even at a low commercial plant production rate of 25,000 heliostats per year the Capital Cost of $48 per square meter is 2% less than the cost goal established by the DOE. Projected capital costs and 30 year maintenance costs for three scenarios of production and installation are presented: (1) commercial rate production of 25,000, 250,000, and 1,000,000 heliostats per year; (2) a one-time only production quantity of 2500 heliostats; and (3) commercial rate production of 25,000 heliostats per year with each plant (25,000 heliostats) installed at widely dispersed sites throughout the Southwestern United States. These three scenarios for solar plant locations and the manufacturing/installation processes are fully described, and detailed cost breakdowns for the three scenarios are provided.
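The two quoted percentages imply a DOE cost target of roughly $49 per square meter; the target value itself is inferred here, not stated in the abstract. A quick arithmetic check:

```python
# Back out the implied DOE cost target from the two quoted comparisons:
# $42/m^2 is said to be 14% below the target, and $48/m^2 to be 2% below.
target_from_high_rate = 42 / (1 - 0.14)  # high-rate production case
target_from_low_rate = 48 / (1 - 0.02)   # 25,000 heliostats/yr case
print(round(target_from_high_rate, 1), round(target_from_low_rate, 1))  # 48.8 49.0
```

The two back-calculated values agree to within rounding, so the quoted percentages are mutually consistent with a target near $49/m².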
Geopressured geothermal bibliography. Volume III. (Geopressure thesaurus). Second edition
Energy Technology Data Exchange (ETDEWEB)
Sepehrnoori, K.; Carter, F.; Schneider, R.; Street, S.; McGill, K.
1985-05-01
This thesaurus of terminology associated with the geopressured geothermal energy field has been developed as a part of the Geopressured Geothermal Information System data base. The subject scope includes: (1) geopressure resource assessment; (2) geology, hydrology, and geochemistry of geopressured systems; (3) geopressure exploration and exploration technology; (4) geopressured reservoir engineering and drilling technology; (5) economic aspects; (6) environmental aspects; (7) legal, institutional, and sociological aspects; (8) electrical and nonelectrical utilization; and (9) other energy sources, especially methane and other fossil fuel reserves, associated with geopressured reservoirs.
Wilderness Study Report : Volume III : Public Hearing Transcripts
US Fish and Wildlife Service, Department of the Interior — This document contains public hearing transcripts from the Kenai National Moose Range Wilderness Hearing. This hearing was held to obtain information relating to...
Photovoltaic venture analysis. Final report. Volume III. Appendices
Energy Technology Data Exchange (ETDEWEB)
Costello, D.; Posner, D.; Schiffel, D.; Doane, J.; Bishop, C.
1978-07-01
This appendix contains a brief summary of a detailed description of alternative future energy scenarios which provide an overall backdrop for the photovoltaic venture analysis. Also included is a summary of a photovoltaic market/demand workshop, a summary of a photovoltaic supply workshop which used cross-impact analysis, and a report on photovoltaic array and system prices in 1982 and 1986. The results of a sectorial demand analysis for photovoltaic power systems used in the residential sector (single family homes), the service, commercial, and institutional sector (schools), and in the central power sector are presented. An analysis of photovoltaics in the electric utility market is given, and a report on the industrialization of photovoltaic systems is included. A DOE information memorandum regarding ''A Strategy for a Multi-Year Procurement Initiative on Photovoltaics (ACTS No. ET-002)'' is also included. (WHK)
Training Career Ladder AFSC 751X2. Volume III.
1981-01-01
…Job interest and perceived utilization of talent and training by… Task-inventory fragments with ratings: …personnel when computers malfunction (2.6); mail officer educational transcripts to Air Force Institute of Technology (2.6); destroy tests (2.5).
Measurement and modeling of advanced coal conversion processes, Volume III
Energy Technology Data Exchange (ETDEWEB)
Ghani, M.U.; Hobbs, M.L.; Hamblen, D.G. [and others]
1993-08-01
A generalized one-dimensional, heterogeneous, steady-state, fixed-bed model for coal gasification and combustion is presented. The model, FBED-1, is a design and analysis tool that can be used to simulate a variety of gasification, devolatilization, and combustion processes. The model considers separate gas and solid temperatures, axially variable solid and gas flow rates, variable bed void fraction, coal drying, devolatilization based on chemical functional group composition, depolymerization, vaporization and crosslinking, oxidation, and gasification of char, and partial equilibrium in the gas phase.
Problems of Air Defense - and - Appendices. Volumes I-III
1951-08-01
…undertake a major project on air defense problems. In response to this request, and with the scope somewhat broadened in recognition of the… Preliminary oral presentations of these conclusions were given to representatives of… and 21 June in… Coast should be established at the earliest possible time. Two additional ground radars are recommended in the Northwest…
Secretary's annual report to Congress. Volume III. Project summaries
Energy Technology Data Exchange (ETDEWEB)
None
1981-01-01
Progress and status of representative projects in each program within DOE are summarized. Subjects covered and the number of projects reported on are: conservation (2); fossil energy (11); nuclear energy (5); renewable energy resources (16); energy production and power marketing (3); general science (11); defense programs (7); contingency planning (3); and management and oversight (1). (MCW)
PISA 2015 Results: Students' Well-Being. Volume III
OECD Publishing, 2017
2017-01-01
The OECD Programme for International Student Assessment (PISA) examines not just what students know in science, reading and mathematics, but what they can do with what they know. This report is the product of a collaborative effort between the countries participating in PISA, the national and international experts and institutions working within…
Survey of biomass gasification. Volume III. Current technology and research
Energy Technology Data Exchange (ETDEWEB)
None
1980-04-01
This survey of biomass gasification was written to aid the Department of Energy and the Solar Energy Research Institute Biological and Chemical Conversion Branch in determining the areas of gasification that are ready for commercialization now and those areas in which further research and development will be most productive. Chapter 8 is a survey of gasifier types. Chapter 9 consists of a directory of current manufacturers of gasifiers and gasifier development programs. Chapter 10 is a sampling of current gasification R and D programs and their unique features. Chapter 11 compares air gasification for the conversion of existing gas/oil boiler systems to biomass feedstocks with the price of installing new biomass combustion equipment. Chapter 12 treats gas conditioning as a necessary adjunct to all but close-coupled gasifiers, in which the product is promptly burned. Chapter 13 evaluates, technically and economically, synthesis-gas processes for conversion to methanol, ammonia, gasoline, or methane. Chapter 14 compiles a number of comments that have been assembled from various members of the gasifier community as to possible roles of the government in accelerating the development of gasifier technology and commercialization. Chapter 15 includes recommendations for future gasification research and development.
Analyzing Global Interdependence. Volume III. Methodological Perspectives and Research Implications,
1974-11-01
Deutsch, The Nerves of Government: Models of Political Communication and Control (New York: The Free Press of Glencoe, 1963); and Jürgen Habermas… cybernetics and Habermas' Marxian writings on communicative competence. They may make possible respecifications of mixed-interest choice situations in ways…
Biological Effects of Nonionizing Electromagnetic Radiation. Volume III, Number 3.
1979-03-01
…neuralgia. (Eng.) Gregg, J. H. (Dental Research Center, The Univ. of North Carolina, Chapel Hill, NC)… which is shorter by one decade than the 0.3 msec… paroxysmal trigeminal neuralgias to achieve pain relief. Facial pain was… another contactless stimulating method involved exciting the nerve by… after conventional microwave irradiation. Regional differences in acetylcholine levels in the…
1974-12-01
Common-block listing fragments: S(1)-S(5) in COMMON BLOCK SPIRO (BWC, ADJLIM, F5, FL)… Signal codes: PDM 101; NRZ 102; BPP 103; PPM 104; TEL 105; FSK 106; PAM 107; ESPIKE 108; CO3QD 118; GAUSS 119; CHIRP 120; VO 115; CV 116; NO 117; RADAR 200; AM 301… Source/receptor type codes: RE 1; PO 2; S/C 3/4; EED 5; CASE 6.
Nuclear Blast Response Computer Program. Volume III. Program Listing.
1981-08-01
Adenomyosis and Its Variance: Adenomyoma and Female Fertility
Directory of Open Access Journals (Sweden)
Peng-Hui Wang
2009-09-01
Extensive adenomyosis, or its variant, localized adenomyosis (adenomyoma) of the uterus, is often described as scattered, widely distributed endometrial glands or stromal tissue found throughout the myometrium layer of the uterus. By definition, adenomyosis consists of epithelial as well as stromal elements, and is situated at least 2.5 mm below the endometrial-myometrial junction. However, the diagnosis and clinical significance of uterine adenomyosis and/or adenomyoma remain somewhat enigmatic. The relationship between infertility and uterine adenomyosis and/or adenomyoma is still uncertain, but severe endometriosis impairs the chances of successful pregnancy when artificial reproductive techniques are used. To date, there is no uniform agreement on the most appropriate therapeutic methods for managing women with uterine adenomyosis and/or adenomyoma who want to preserve their fertility. Fertility has been restored after successful treatment of adenomyosis using multiple modalities, including hormonal therapy and conservative surgical therapy via laparoscopy or exploratory laparotomy, uterine artery embolization, and other methods, including a potential but under-investigated procedure, magnetic resonance-guided focused ultrasound. This review explores recent publications that have addressed the use of different approaches in the management of subfertile women with uterine adenomyosis and adenomyoma.