WorldWideScience

Sample records for volume iii variance

  1. Draft no-migration variance petition. Volume 1

    International Nuclear Information System (INIS)

    1995-01-01

The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of this waste have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is Volume 1, which discusses the regulatory framework, site characterization, facility description, waste description, environmental impact analysis, monitoring, quality assurance, long-term compliance analysis, and regulatory compliance assessment.

  2. No-migration variance petition: Draft. Volume 4, Appendices DIF, GAS, GCR (Volume 1)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-31

The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of this waste have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is Volume 4 of the petition, which presents details about the transport characteristics across drum filter vents and polymer bags; gas generation reactions and rates during long-term WIPP operation; and geological characterization of the WIPP site.

  3. No-migration variance petition: Draft. Volume 4, Appendices DIF, GAS, GCR (Volume 1)

    International Nuclear Information System (INIS)

    1995-01-01

The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of this waste have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is Volume 4 of the petition, which presents details about the transport characteristics across drum filter vents and polymer bags; gas generation reactions and rates during long-term WIPP operation; and geological characterization of the WIPP site.

  4. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vV, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform … probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution … to the larger malignant nuclei. Finally, the variance in the volume distribution of nuclear volume is estimated by shape-independent estimates of the volume-weighted second moment of the nuclear volume, vV2, using both a manual and a computer-assisted approach. The working procedure for the description of 3-D …
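
The volume-weighted mean volume this record discusses is conventionally estimated from point-sampled intercepts (Gundersen and Jensen's estimator, v̄V = (π/3)·E[ℓ₀³]). A minimal sketch under that convention; the function name and units are illustrative, not taken from the paper:

```python
import numpy as np

def volume_weighted_mean_volume(intercepts_um):
    """Estimate the volume-weighted mean particle volume (in um^3) from
    point-sampled intercept lengths l0 (in um): vbar_V = (pi/3) * mean(l0^3).
    Intercepts are measured through points sampled uniformly within the
    particle profiles on vertical sections."""
    l0 = np.asarray(intercepts_um, dtype=float)
    return (np.pi / 3.0) * np.mean(l0 ** 3)

# Toy usage: a handful of measured intercept lengths in micrometres.
print(volume_weighted_mean_volume([6.2, 7.9, 5.4, 8.8, 6.7]))
```

Because the estimator cubes each intercept, large nuclei dominate the average, which is exactly the volume weighting the abstract refers to.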

  5. No-migration variance petition. Volume 3, Revision 1: Appendix B, Attachments A through D

    Energy Technology Data Exchange (ETDEWEB)

    1990-03-01

    Volume III contains the following attachments: TRUPACT-II content codes (TRUCON); TRUPACT-II chemical list; chemical compatibility analysis for Rocky Flats Plant waste forms (Appendix 2.10.12 of TRUPACT-II safety analysis report); and chemical compatibility analyses for waste forms across all sites.

  6. CAIXA: a catalogue of AGN in the XMM-Newton archive. III. Excess variance analysis

    NARCIS (Netherlands)

    Ponti, G.; Papadakis, I.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.

    2012-01-01

Context. We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study…
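
The "excess variance" named in this record is the standard normalized excess-variance statistic for light curves: the sample variance minus the mean squared measurement error, divided by the squared mean rate. A minimal sketch (not the authors' code; the function name and toy data are ours):

```python
import numpy as np

def normalized_excess_variance(rates, errors):
    """Normalized excess variance of a light curve:
    sigma_NXS^2 = (S^2 - <sigma_err^2>) / <x>^2,
    i.e. intrinsic variability power after subtracting measurement noise."""
    rates = np.asarray(rates, dtype=float)
    errors = np.asarray(errors, dtype=float)
    mean = rates.mean()
    sample_var = rates.var(ddof=1)      # unbiased sample variance S^2
    mean_err2 = np.mean(errors ** 2)    # average measurement variance
    return (sample_var - mean_err2) / mean ** 2

# Toy light curve: constant source plus Gaussian noise of known amplitude.
rng = np.random.default_rng(0)
rates = 10.0 + rng.normal(0.0, 1.0, size=1000)
errors = np.full(1000, 0.5)
print(normalized_excess_variance(rates, errors))
```

Note the statistic can come out negative when measurement noise exceeds the observed scatter; surveys typically report such sources as having no detected variability.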

  7. Draft no-migration variance petition. Volume 7, Appendices: SUM, SURV, VOC, WAP

    International Nuclear Information System (INIS)

    1995-01-01

The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of this waste have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This report is Volume 7 of the petition, which presents details about the waste analysis plan for WIPP; VOC screening methodologies; and a summary of the site characterization studies conducted from 1983 through 1987 at WIPP.

  8. Draft no-migration variance petition. Volume 6, Appendices: QAPD, REG, RM, SCR, SER

    International Nuclear Information System (INIS)

    1995-01-01

The Department of Energy is responsible for the disposition of transuranic (TRU) waste generated by national defense-related activities. Approximately 2.6 million cubic feet of this waste have been generated and are stored at various facilities across the country. The Waste Isolation Pilot Plant (WIPP) was sited and constructed to meet stringent disposal requirements. In order to permanently dispose of TRU waste, the DOE has elected to petition the US EPA for a variance from the Land Disposal Restrictions of RCRA. This document fulfills the reporting requirements for the petition. This is Volume 6 of the petition, which presents details about the 1993 site environmental report; meteorite impact; damage from rock mechanics; regulatory interpretations; and the quality assurance program.

  9. Olympic Training Film Profiles. Volume III.

    Science.gov (United States)

    1971

    Approximately 250 instructional films are described in Volume Three (1970-1971) of this review. After an introduction which considers film discussions from the instructor's point of view and offers some ideas for conducting a film showing and ordering the films, profiles of the films are presented grouped under such areas as management…

  10. DART II documentation. Volume III. Appendices

    Energy Technology Data Exchange (ETDEWEB)

    1979-10-01

The DART II is a remote, interactive, microprocessor-based data acquisition system suitable for use with air monitors. This volume of DART II documentation contains the following appendices: adjustment and calibration procedures; mother board signature list; schematic diagrams; device specification sheets; ROM program listing; 6800 microprocessor instruction list, octal listing; and cable lists. (RWR)

  11. Technology transfer package on seismic base isolation - Volume III

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-02-14

    This Technology Transfer Package provides some detailed information for the U.S. Department of Energy (DOE) and its contractors about seismic base isolation. Intended users of this three-volume package are DOE Design and Safety Engineers as well as DOE Facility Managers who are responsible for reducing the effects of natural phenomena hazards (NPH), specifically earthquakes, on their facilities. The package was developed as part of DOE's efforts to study and implement techniques for protecting lives and property from the effects of natural phenomena and to support the International Decade for Natural Disaster Reduction. Volume III contains supporting materials not included in Volumes I and II.

  12. Introduction to "Global Tsunami Science: Past and Future, Volume III"

    Science.gov (United States)

    Rabinovich, Alexander B.; Fritz, Hermann M.; Tanioka, Yuichiro; Geist, Eric L.

    2018-04-01

    Twenty papers on the study of tsunamis are included in Volume III of the PAGEOPH topical issue "Global Tsunami Science: Past and Future". Volume I of this topical issue was published as PAGEOPH, vol. 173, No. 12, 2016 and Volume II as PAGEOPH, vol. 174, No. 8, 2017. Two papers in Volume III focus on specific details of the 2009 Samoa and the 1923 northern Kamchatka tsunamis; they are followed by three papers related to tsunami hazard assessment for three different regions of the world oceans: South Africa, Pacific coast of Mexico and the northwestern part of the Indian Ocean. The next six papers are on various aspects of tsunami hydrodynamics and numerical modelling, including tsunami edge waves, resonant behaviour of compressible water layer during tsunamigenic earthquakes, dispersive properties of seismic and volcanically generated tsunami waves, tsunami runup on a vertical wall and influence of earthquake rupture velocity on maximum tsunami runup. Four papers discuss problems of tsunami warning and real-time forecasting for Central America, the Mediterranean coast of France, the coast of Peru, and some general problems regarding the optimum use of the DART buoy network for effective real-time tsunami warning in the Pacific Ocean. Two papers describe historical and paleotsunami studies in the Russian Far East. The final set of three papers importantly investigates tsunamis generated by non-seismic sources: asteroid airburst and meteorological disturbances. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation.

  13. Barnwell Nuclear Fuels Plant applicability study. Volume III. Appendices

    International Nuclear Information System (INIS)

    1978-03-01

Volume III supplies supporting information to assist Congress in making a decision on the optimum utilization of the Barnwell Nuclear Fuels Plant. Included are applicable fuel cycle policies; properties of reference fuels; description and evaluation of alternative operational (fuel cycle) modes; description and evaluation of safeguards systems and techniques; description and evaluation of spiking technology; waste and waste solidification evaluation; and Department of Energy programs relating to nonproliferation.

  14. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vV, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform … probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution … of volume, a detailed investigation of nuclear size variability is possible. Benign and malignant nuclear populations show approximately the same relative variability with regard to nuclear volume, and the presented data are compatible with a simple size transformation from the smaller benign nuclei…

  15. Breckinridge Project, initial effort. Report III, Volume 2. Specifications

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-01-01

    Report III, Volume 2 contains those specifications numbered K through Y, as follows: Specifications for Compressors (K); Specifications for Piping (L); Specifications for Structures (M); Specifications for Insulation (N); Specifications for Electrical (P); Specifications for Concrete (Q); Specifications for Civil (S); Specifications for Welding (W); Specifications for Painting (X); and Specifications for Special (Y). The standard specifications of Bechtel Petroleum Incorporated have been amended as necessary to reflect the specific requirements of the Breckinridge Project and the more stringent specifications of Ashland Synthetic Fuels, Inc. These standard specifications are available for the Initial Effort (Phase Zero) work performed by all contractors and subcontractors.

  16. Minerals Yearbook, volume III, Area Reports—International

    Science.gov (United States)

    ,

    2018-01-01

    The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows:Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included.Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals.Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section.The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.

  17. No-migration variance petition. Appendices C--J: Volume 5, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    1990-03-01

Volume V contains the appendices for: closure and post-closure plans; RCRA ground water monitoring waiver; Waste Isolation Division Quality Program Manual; water quality sampling plan; WIPP Environmental Procedures Manual; sample handling and laboratory procedures; data analysis; and Annual Site Environmental Monitoring Report for the Waste Isolation Pilot Plant.

  18. Waste Isolation Pilot Plant No-Migration Variance Petition. Revision 1, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, Arlen

    1990-03-01

The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA §3004(d) and 40 CFR §268.6, that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics, including waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990 when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0; Facility Description, 2.0; Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0.

  19. No-migration variance petition. Appendices A--B: Volume 2, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    1990-03-01

Volume II contains Appendix A, the emergency plan, and Appendix B, the waste analysis plan. The Waste Isolation Pilot Plant (WIPP) Emergency Plan and Procedures (WP 12-9, Rev. 5, 1989) provides an organized plan of action for dealing with emergencies at the WIPP. A contingency plan is included which is in compliance with 40 CFR Part 265, Subpart D. The waste analysis plan provides a description of the chemical and physical characteristics of the wastes to be emplaced in the WIPP underground facility. A detailed discussion of the WIPP Waste Acceptance Criteria and the rationale for its established units is also included.

  20. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches, including incorporation of the system response function in the reconstruction, have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one-step-late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal, following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach; an estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform-activity spheres with diameters ranging from 1 cm to 3.7 cm within a uniform background. The optimal sphere activity-to-variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for…
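
The post-reconstruction correction this abstract describes is a Richardson-Lucy-type MLEM deconvolution with a one-step-late (OSL) penalty. A 1D toy sketch under simplifying assumptions: a Gaussian PSF, and a quadratic smoothness prior standing in for the local-monotonicity prior the authors actually use; none of this code is from the paper:

```python
import numpy as np

def gaussian_kernel(fwhm, size=21):
    """Normalized 1D Gaussian PSF with the given FWHM (in voxels)."""
    sigma = fwhm / 2.3548  # FWHM -> standard deviation
    x = np.arange(size) - size // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def mlem_deconvolve(image, psf, n_iter=5, beta=0.0):
    """Richardson-Lucy / MLEM deconvolution. With beta > 0, a one-step-late
    (OSL) step divides by (1 + beta * penalty gradient), where the gradient
    here comes from a simple quadratic smoothness prior (an assumption,
    standing in for the local-monotonicity prior of the paper)."""
    est = image.astype(float).copy()
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)   # data / model
        correction = np.convolve(ratio, psf[::-1], mode="same")
        if beta > 0.0:
            penalty = beta * (est - np.convolve(est, psf, mode="same"))
            est = est * correction / (1.0 + penalty)  # OSL update
        else:
            est = est * correction                    # plain MLEM update
    return est

# Toy usage: blur a point source, then sharpen it back.
psf = gaussian_kernel(fwhm=4.0)
true = np.zeros(101); true[50] = 1.0
blurred = np.convolve(true, psf, mode="same")
recovered = mlem_deconvolve(blurred, psf, n_iter=10)  # peak grows back toward 1
```

The regularized update illustrates why OSL converges in few iterations: the penalty term damps the multiplicative correction wherever the estimate departs from its smoothed neighborhood.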

  1. Safety Specialist Manpower, Manpower Resources. Volumes II and III.

    Science.gov (United States)

    Booz Allen and Hamilton, Inc., Washington, DC.

    These second and third volumes of a four-volume study of manpower in state highway safety programs over the next decade estimate manpower resources by state and in national aggregate and describe present and planned training programs for safety specialists. For each educational level, both total manpower and manpower actually available for…

  2. INEL environmental characterization report. Volume III. Appendices E-H

    International Nuclear Information System (INIS)

    1984-09-01

    This volume contains the following appendices: (1) INEL subsurface hydrology; (2) cultural resources assessment of two study areas on the INEL; (3) description of INEL facilities; and (4) effluent measurements and environmental monitoring programs

  3. Baseline metal enrichment from Population III star formation in cosmological volume simulations

    Science.gov (United States)

    Jaacks, Jason; Thompson, Robert; Finkelstein, Steven L.; Bromm, Volker

    2018-04-01

We utilize the hydrodynamic and N-body code GIZMO coupled with our newly developed sub-grid Population III (Pop III) Legacy model, designed specifically for cosmological volume simulations, to study the baseline metal enrichment from Pop III star formation at z > 7. In this idealized numerical experiment, we only consider Pop III star formation. We find that our model Pop III star formation rate density (SFRD), which peaks at ~10⁻³ M⊙ yr⁻¹ Mpc⁻³ near z ~ 10, agrees well with previous numerical studies and is consistent with the observed estimates for Pop II SFRDs. The mean Pop III metallicity rises smoothly from z = 25 to 7, but does not reach the critical metallicity value, Zcrit = 10⁻⁴ Z⊙, required for the Pop III to Pop II transition in star formation mode until z ≃ 7. This suggests that, while individual haloes can suppress in situ Pop III star formation, the external enrichment is insufficient to globally terminate Pop III star formation. The maximum enrichment from Pop III star formation in star-forming dark matter haloes is Z ~ 10⁻² Z⊙, whereas the minimum found in externally enriched haloes is Z ≳ 10⁻⁷ Z⊙. Finally, mock observations of our simulated IGM enriched with Pop III metals produce equivalent widths similar to observations of an extremely metal-poor damped Lyman alpha system at z = 7.04, which is thought to be enriched by Pop III star formation only.

  4. Small Business Management Volume III: Curriculum. An Adult Education Program.

    Science.gov (United States)

    Persons, Edgar A.; Swanson, Gordon I.

    The small business management adult education program outlined in this curriculum guide is designed to help small business entrepreneurs solve their business management problems and attain the goals they have established for their businesses and their families. (An instructor's manual and practice problems are in separate volumes.) The 3-year…

  5. Handbook of natural resource and energy economics. Volume III

    International Nuclear Information System (INIS)

    Kneese, A.V.; Sweeney, J.L.

    1993-01-01

    The last of a three-volume series of handbooks focuses on the economics of energy, minerals and exhaustible resources, and the forecasting issues. The relationship between energy, the environment and economic growth is also examined. Chapter headings are: economic theory of depletable resources; the optimal use of exhaustible resources; intertemporal consistency issues in depletable resources; buying energy and non-fuel minerals; mineral resource stocks and information; strategies for modelling exhaustible resource supply; natural resources in an age of substitutability; natural resource cartels; the economics of energy security; natural resource use and the environment; and energy, the environment and economic growth

  6. An Independent Scientific Assessment of Well Stimulation in California Volume III

    Energy Technology Data Exchange (ETDEWEB)

    Long, Jane C.S. [California Council on Science and Technology, Sacramento, CA (United States); Feinstein, Laura C. [California Council on Science and Technology, Sacramento, CA (United States); Birkholzer, Jens [California Council on Science and Technology, Sacramento, CA (United States); Foxall, William [California Council on Science and Technology, Sacramento, CA (United States); Houseworth, James [California Council on Science and Technology, Sacramento, CA (United States); Jordan, Preston [California Council on Science and Technology, Sacramento, CA (United States); Lindsey, Nathaniel [California Council on Science and Technology, Sacramento, CA (United States); Maddalena, Randy [California Council on Science and Technology, Sacramento, CA (United States); McKone, Thomas [California Council on Science and Technology, Sacramento, CA (United States); Stringfellow, William [California Council on Science and Technology, Sacramento, CA (United States); Ulrich, Craig [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Heberger, Matthew [Pacific Inst., Oakland, CA (United States); Shonkoff, Seth [PSE Healthy Energy, Berkeley, CA (United States); Brandt, Adam [Stanford Univ., CA (United States); Ferrar, Kyle [The FracTracker Alliance, Oakland, CA (United States); Gautier, Donald [DonGautier LLC., Palo Alto, CA (United States); Phillips, Scott [California State Univ. Stanislaus, Turlock, CA (United States); Greenfield, Ben [Univ. of California, Berkeley, CA (United States); Jerrett, Michael L.B. [Univ. of California, Los Angeles, CA (United States)

    2015-07-01

    This study is issued in three volumes. Volume I, issued in January 2015, describes how well stimulation technologies work, how and where operators deploy these technologies for oil and gas production in California, and where they might enable production in the future. Volume II, issued in July 2015, discusses how well stimulation could affect water, atmosphere, seismic activity, wildlife and vegetation, and human health. Volume II reviews available data, and identifies knowledge gaps and alternative practices that could avoid or mitigate these possible impacts. Volume III, this volume, presents case studies that assess environmental issues and qualitative risks for specific geographic regions. The Summary Report summarizes key findings, conclusions and recommendations of all three volumes.

  7. Soil Properties Database of Spanish Soils Volume III.- Extremadura

    International Nuclear Information System (INIS)

    Trueba, C; Millan, R.; Schmid, T.; Roquero, C; Magister, M.

    1998-01-01

The soil vulnerability determines the sensitivity of the soil after an accidental radioactive contamination due to Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires knowledge of the soil properties for the various types of existing soils. In order to achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation into the database. This volume presents the criteria applied to normalize and process the data as well as the soil properties of the various soil types belonging to the Comunidad Autonoma de Extremadura. (Author) 50 refs

  8. Field Operations and Enforcement Manual for Air Pollution Control. Volume III: Inspection Procedures for Specific Industries.

    Science.gov (United States)

    Weisburd, Melvin I.

    The Field Operations and Enforcement Manual for Air Pollution Control, Volume III, explains in detail the following: inspection procedures for specific sources, kraft pulp mills, animal rendering, steel mill furnaces, coking operations, petroleum refineries, chemical plants, non-ferrous smelting and refining, foundries, cement plants, aluminum…

  9. Technical Reports (Part I). End of Project Report, 1968-1971, Volume III.

    Science.gov (United States)

    Western Nevada Regional Education Center, Lovelock.

    The pamphlets included in this volume are technical reports prepared as outgrowths of the Student Information Systems of the Western Nevada Regional Education Center (WN-REC) funded by a Title III (Elementary and Secondary Education Act) grant. These reports describe methods of interpreting the printouts from the Student Information System;…

  10. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
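
The variance-of-unit-weight idea in this record can be illustrated with a weighted least-squares adjustment in which each observation's standard deviation is proportional to its value, so the weights are 1/y². A hedged sketch, not the authors' estimators (they construct five; this is the textbook one, σ̂₀² = vᵀPv/(n − m)):

```python
import numpy as np

def weighted_ls(A, y):
    """Weighted LS adjustment y = A x under a multiplicative error model:
    each observation's standard deviation is proportional to its value,
    so the weight matrix is P = diag(1 / y_i^2). Returns the estimated
    parameters and the variance of unit weight sigma0^2 = v' P v / (n - m)."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    P = np.diag(1.0 / y ** 2)                 # weights for proportional errors
    N = A.T @ P @ A                           # normal matrix
    x_hat = np.linalg.solve(N, A.T @ P @ y)   # adjusted parameters
    v = A @ x_hat - y                         # residuals (corrections)
    n, m = A.shape
    sigma0_sq = v @ P @ v / (n - m)           # variance of unit weight
    return x_hat, sigma0_sq

# Toy usage: three observations of two parameters. With exact data the
# residuals vanish, so sigma0^2 comes out ~0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([2.0, 3.0, 5.0])
x_hat, s0 = weighted_ls(A, y)
```

In a real multiplicative model the weights would use the (unknown) true values rather than the observations, which is precisely why the paper's analytical error analysis is needed; this sketch takes the common shortcut of weighting by the observed values.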

  11. Simulation model for wind energy storage systems. Volume III. Program descriptions. [SIMWEST CODE

    Energy Technology Data Exchange (ETDEWEB)

    Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.

    1977-08-01

The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume III, the SIMWEST program description, contains program descriptions, flow charts and program listings for the SIMWEST Model Generation Program, the Simulation program, the File Maintenance program and the Printer Plotter program. Volume III generally would not be required by the SIMWEST user.

  12. Hanford spent nuclear fuel project recommended path forward, volume III: Alternatives and path forward evaluation supporting documentation

    International Nuclear Information System (INIS)

    Fulton, J.C.

    1994-10-01

    Volume I of the Hanford Spent Nuclear Fuel Project - Recommended Path Forward constitutes an aggressive series of projects to construct and operate systems and facilities to safely retrieve, package, transport, process, and store K Basins fuel and sludge. Volume II provided a comparative evaluation of four alternatives for the Path Forward and an evaluation of the Recommended Path Forward. Although Volume II contained extensive appendices, six supporting documents have been compiled in Volume III to provide additional background for Volume II.

  13. Three-Dimensional Eyeball and Orbit Volume Modification After LeFort III Midface Distraction.

    Science.gov (United States)

    Smektala, Tomasz; Nysjö, Johan; Thor, Andreas; Homik, Aleksandra; Sporniak-Tutak, Katarzyna; Safranow, Krzysztof; Dowgierd, Krzysztof; Olszewski, Raphael

    2015-07-01

    The aim of our study was to evaluate orbital volume modification with LeFort III midface distraction in patients with craniosynostosis and its influence on eyeball volume and axial diameter modification. Orbital volume was assessed by a semiautomatic segmentation method based on deformable surface models and on 3-dimensional (3D) interaction with haptics. The eyeball volumes and diameters were automatically calculated after manual segmentation of computed tomographic scans with 3D Slicer software. The mean, minimal, and maximal differences as well as the standard deviation and intraclass correlation coefficient (ICC) for intraobserver and interobserver measurement reliability were calculated. The Wilcoxon signed rank test was used to compare measured values before and after surgery. The intraobserver and interobserver ICCs for eyeball volume were 0.87 and 0.86, respectively. The orbital volume increased significantly after surgery: 30.32% (mean, 5.96 mL) for the left orbit and 31.04% (mean, 6.31 mL) for the right orbit. The mean increase in eyeball volume was 12.3%. The mean increases in the eyeball axial dimensions were 7.3%, 9.3%, and 4.4% for the X-, Y-, and Z-axes, respectively. The Wilcoxon signed rank test showed that the changes in eyeball volumes, as well as in the diameters along the X- and Y-axes, were statistically significant. Midface distraction in patients with syndromic craniostenosis results in a significant increase in orbital and eyeball volumes. The two methods (haptic-aided semiautomatic segmentation and manual 3D Slicer segmentation) are reproducible techniques for orbit and eyeball volume measurements.

  14. World Energy Data System (WENDS). Volume III. Country data, LY-PO

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-06-01

    The World Energy Data System contains organized data on those countries and international organizations that may have critical impact on the world energy scene. Included in this volume, Vol. III, are Libya, Luxembourg, Malaysia, Mexico, Netherlands, New Zealand, Niger, Nigeria, Norway, Pakistan, Peru, Philippines, Poland, and Portugal. The following topics are covered for most of the countries: economic, demographic, and educational profiles; energy policy; indigenous energy resources and uses; forecasts, demand, exports, imports of energy supplies; environmental considerations of energy supplies; power production facilities; energy industries; commercial applications of energy; research and development activities of energy; and international activities.

  15. Minerals Yearbook, volume III, Area Reports—International—Europe and Central Eurasia

    Science.gov (United States)

    Geological Survey, U.S.

    2018-01-01

    The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows: Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included. Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals. Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section. The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.

  16. Minerals Yearbook, volume III, Area Reports—International—Asia and the Pacific

    Science.gov (United States)

    Geological Survey, U.S.

    2018-01-01

    The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows: Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included. Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals. Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section. The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.

  17. Minerals Yearbook, volume III, Area Reports—International—Africa and the Middle East

    Science.gov (United States)

    Geological Survey, U.S.

    2018-01-01

    The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows: Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included. Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals. Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section. The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.

  18. Minerals Yearbook, volume III, Area Reports—International—Latin America and Canada

    Science.gov (United States)

    Geological Survey, U.S.

    2018-01-01

    The U.S. Geological Survey (USGS) Minerals Yearbook discusses the performance of the worldwide minerals and materials industries and provides background information to assist in interpreting that performance. Content of the individual Minerals Yearbook volumes follows: Volume I, Metals and Minerals, contains chapters about virtually all metallic and industrial mineral commodities important to the U.S. economy. Chapters on survey methods, summary statistics for domestic nonfuel minerals, and trends in mining and quarrying in the metals and industrial mineral industries in the United States are also included. Volume II, Area Reports: Domestic, contains a chapter on the mineral industry of each of the 50 States and Puerto Rico and the Administered Islands. This volume also has chapters on survey methods and summary statistics of domestic nonfuel minerals. Volume III, Area Reports: International, is published as four separate reports. These regional reports contain the latest available minerals data on more than 180 foreign countries and discuss the importance of minerals to the economies of these nations and the United States. Each report begins with an overview of the region’s mineral industries during the year. It continues with individual country chapters that examine the mining, refining, processing, and use of minerals in each country of the region and how each country’s mineral industry relates to U.S. industry. Most chapters include production tables and industry structure tables, information about Government policies and programs that affect the country’s mineral industry, and an outlook section. The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the Minerals Yearbook are welcomed.

  19. Proceedings of the symposium to review Volume III of the Annual Report to Congress

    Energy Technology Data Exchange (ETDEWEB)

    Alt, F.; Norland, D.

    1979-01-01

    This report is a transcript of the proceedings of a two-day Symposium, held in the Fall of 1979 at the University of Maryland in order to independently review the 1978 Energy Information Administration (EIA) Annual Report to Congress (ARC), Volume III. Participants included energy forecasting experts from the academic community and the private sector; other Federal, State, and local government energy experts; and Office of Applied Analysis, EIA, staff members. The Symposium and its transcript are a critique of the underlying 1978 ARC assumptions, methodologies, and energy system projections. Discussions cover the short-, mid-, and long-term periods, national and international forecasts, source and consuming sectors and projected economic impacts. 27 figures, 22 tables.

  20. Planning manual for energy resource development on Indian lands. Volume III. Manpower and training

    Energy Technology Data Exchange (ETDEWEB)

    1978-03-01

    This volume addresses ways to bridge the gap between existing tribal skill levels and the skill levels required for higher-paying jobs in energy resource development projects. It addresses opportunities for technical, skilled, and semiskilled employment as well as professional positions, because it is important to have tribal participation at all levels of an operation. Section II, "Energy-Related Employment Opportunities," covers three areas: (1) identification of energy-resource occupations; (2) description of these occupations; and (3) identification of skill requirements by type of occupation. Section III, "Description of Training Programs," also covers three areas: (a) the concept of a training-program model; (b) description of various training methods; and (c) an assessment of the cost of training utilizing different programs. Section IV concentrates on development of a training program for target occupations, skills, and populations. Again this section covers three areas: (i) overview of the development of a skills training program; (ii) identification of target occupations, skills, and populations; and (iii) energy careers for younger tribal members.

  1. A Structural Molar Volume Model for Oxide Melts Part III: Fe Oxide-Containing Melts

    Science.gov (United States)

    Thibodeau, Eric; Gheribi, Aimen E.; Jung, In-Ho

    2016-04-01

    As part III of this series, the model is extended to iron oxide-containing melts. All available experimental data in the FeO-Fe2O3-Na2O-K2O-MgO-CaO-MnO-Al2O3-SiO2 system were critically evaluated based on the experimental conditions. The variations of FeO and Fe2O3 in the melts were taken into account by using FactSage to calculate the Fe2+/Fe3+ distribution. The molar volume model with unary and binary model parameters can be used to predict the molar volume of molten oxides in the Li2O-Na2O-K2O-MgO-CaO-MnO-PbO-FeO-Fe2O3-Al2O3-SiO2 system over the entire range of compositions, temperatures, and oxygen partial pressures from Fe saturation to 1 atm.

  2. Downside Variance Risk Premium

    OpenAIRE

    Feunou, Bruno; Jahan-Parvar, Mohammad; Okou, Cedric

    2015-01-01

    We propose a new decomposition of the variance risk premium in terms of upside and downside variance risk premia. The difference between upside and downside variance risk premia is a measure of skewness risk premium. We establish that the downside variance risk premium is the main component of the variance risk premium, and that the skewness risk premium is a priced factor with significant prediction power for aggregate excess returns. Our empirical investigation highlights the positive and s...
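The upside/downside split described above can be illustrated on the realized (ex post) side of the premium. The return series and names below are hypothetical, and computing the actual risk premia would additionally require risk-neutral (option-implied) variances, which this sketch omits:

```python
# Minimal sketch of the decomposition idea: realized variance splits exactly into
# upside and downside semivariances, and their difference is a signed skewness
# proxy. Data are simulated, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily returns with a negatively skewed distribution
returns = rng.standard_normal(10_000) * 0.01 - 0.002 * rng.exponential(1.0, 10_000)

rv = np.sum(returns**2)                       # realized variance
rv_up = np.sum(returns[returns > 0] ** 2)     # upside semivariance
rv_down = np.sum(returns[returns <= 0] ** 2)  # downside semivariance
skew_proxy = rv_up - rv_down                  # negative for left-skewed returns

print(rv, rv_up, rv_down, skew_proxy)
```

By construction rv equals rv_up plus rv_down, so the upside/downside premia in the paper decompose the total variance risk premium without residual.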

  3. Technical and economic assessment of fluidized bed augmented compressed air energy storage system. Volume III. Preconceptual design

    Energy Technology Data Exchange (ETDEWEB)

    Giramonti, A.J.; Lessard, R.D.; Merrick, D.; Hobson, M.J.

    1981-09-01

    A technical and economic assessment of fluidized bed combustion augmented compressed air energy storage systems is presented. The results of this assessment effort are presented in three volumes. Volume III - Preconceptual Design contains the system analysis which led to the identification of a preferred component configuration for a fluidized bed combustion augmented compressed air energy storage system, the results of the effort which transformed the preferred configuration into preconceptual power plant design, and an introductory evaluation of the performance of the power plant system during part-load operation and while load following.

  4. POPULATION III STAR FORMATION IN LARGE COSMOLOGICAL VOLUMES. I. HALO TEMPORAL AND PHYSICAL ENVIRONMENT

    Energy Technology Data Exchange (ETDEWEB)

    Crosby, Brian D.; O'Shea, Brian W.; Smith, Britton D. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Turk, Matthew J. [Department of Astronomy, Columbia University, New York, NY 10025 (United States); Hahn, Oliver, E-mail: crosbyb1@msu.edu [Institute for Astronomy, ETH Zurich, CH-8093 Zuerich (Switzerland)

    2013-08-20

    We present a semi-analytic, computationally inexpensive model to identify halos capable of forming a Population III star in cosmological simulations across a wide range of times and environments. This allows for a much more complete and representative set of Population III star forming halos to be constructed, which will lead to Population III star formation simulations that more accurately reflect the diversity of Population III stars, both in time and halo mass. This model shows that Population III and chemically enriched stars coexist beyond the formation of the first generation of stars in a cosmological simulation until at least z ~ 10, and likely beyond, though Population III stars form at rates that are 4-6 orders of magnitude lower than chemically enriched stars by z = 10. A catalog of more than 40,000 candidate Population III forming halos was identified, with formation times ranging from z = 30 to z = 10 and masses ranging from 2.3 × 10^5 M_Sun to 1.2 × 10^10 M_Sun. At early times, the environment that Population III stars form in is very similar to that of halos hosting chemically enriched star formation. At later times, Population III stars are found to form in low-density regions that are not yet chemically polluted due to a lack of previous star formation in the area. Population III star forming halos become increasingly spatially isolated from one another at later times, and by z ~ 10 are generally closer to halos hosting chemically enriched star formation than to another halo hosting Population III star formation.

  5. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code
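MCNP itself is a large production transport code; as a generic, self-contained illustration of the idea behind variance reduction (not MCNP's actual weight-window or splitting machinery), here is an importance-sampling estimate of a rare-event probability, with all numbers illustrative:

```python
# Importance sampling: estimate the rare-event probability P(X > 4) for
# X ~ N(0,1). Analog Monte Carlo wastes nearly every sample; sampling from a
# shifted distribution and reweighting gives the same answer with far less
# variance per sample.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
threshold = 4.0
n = 100_000

# Analog (naive) Monte Carlo: almost no samples land past the threshold.
x = rng.standard_normal(n)
p_naive = np.mean(x > threshold)

# Importance sampling: draw from N(threshold, 1) and reweight each sample by
# the likelihood ratio f(y)/g(y) = exp(threshold^2 / 2 - threshold * y).
y = rng.standard_normal(n) + threshold
weights = np.exp(0.5 * threshold**2 - threshold * y)
p_is = np.mean((y > threshold) * weights)

# Closed-form reference value for comparison
p_exact = 0.5 * (1.0 - erf(threshold / sqrt(2.0)))

print(p_naive, p_is, p_exact)
```

With these settings the analog estimate sees only a handful of successes out of 100,000 histories, while the reweighted estimate typically lands within about a percent of the exact tail probability.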

  6. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from replicate measurement data; and (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time.
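The first two estimation tasks listed above can be sketched with toy numbers (all values hypothetical): a systematic-error (bias) estimate from repeated measurements of a known standard, and a random-error variance from replicate measurements of an unknown item:

```python
# Illustrative sketch of measurement-error parameter estimation: bias from
# standards data, random-error variance from replicates. Numbers are made up.
import statistics

# (1) Systematic error: repeated measurements of a certified standard whose
#     true value is known; the mean offset estimates the bias.
standard_value = 100.0
standard_runs = [100.4, 100.3, 100.5, 100.2, 100.4]
bias = statistics.mean(standard_runs) - standard_value

# (2) Random error: replicate measurements of the same (unknown) item; their
#     sample variance estimates the random-error variance.
replicates = [57.1, 56.8, 57.3, 57.0, 56.9]
random_var = statistics.variance(replicates)

print(bias, random_var)
```

Characterizing time-varying biases, as in task (3), would extend this to an analysis of variance across measurement periods rather than a single pooled estimate.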

  7. Feasibility planning study for a behavior database. Volume III Appendix B, Compendium of survey questions on drinking and driving and occupant restraints

    Science.gov (United States)

    1987-04-01

    The general objective of the project was to determine the feasibility of and the general requirements for a centralized database on driver behavior and attitudes related to drunk driving and occupant restraints. Volume III is a compendium of question...

  8. No-migration variance petition

    International Nuclear Information System (INIS)

    1990-03-01

    Volume III contains the following attachments: TRUPACT-II content codes (TRUCON); TRUPACT-II chemical list; chemical compatibility analysis for Rocky Flats Plant waste forms (Appendix 2.10.12 of TRUPACT-II safety analysis report); and chemical compatibility analyses for waste forms across all sites

  9. Proceedings of the Malaysian Science and Technology Congress 2000: Symposium B, Volume III

    International Nuclear Information System (INIS)

    2001-01-01

    These proceedings are a collection of lectures presented at the symposium. This volume covers the following areas: biodiversity, cleaner production, green science, environment, renewable resources, social sciences, waste management, and basic sciences.

  10. ICPP calcined solids storage facility closure study. Volume III: Engineering design files

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-02-01

    The following information was calculated to support cost estimates and radiation exposure calculations for closure activities at the Calcined Solids Storage Facility (CSSF). Within the estimate, volumes were calculated to determine the required amount of grout to be used during closure activities. The remaining calcine on the bin walls, supports, piping, and floor was also calculated to approximate the residual calcine volumes at different stages of the removal process. The estimates for remaining calcine and vault void volume are higher than what would actually be experienced in the field, but are necessary for bounding purposes. The residual calcine in the bins may be higher than what is experienced in the field, as it was assumed that the entire bin volume is full of calcine before removal activities commence. The vault void volumes are higher because the vault roof beam volumes were neglected. The estimations that follow should be considered rough order of magnitude, due to the time constraints dictated by the project's scope of work. Should more accurate numbers be required, a new analysis would be necessary.

  11. ICPP calcined solids storage facility closure study. Volume III: Engineering design files

    International Nuclear Information System (INIS)

    1998-02-01

    The following information was calculated to support cost estimates and radiation exposure calculations for closure activities at the Calcined Solids Storage Facility (CSSF). Within the estimate, volumes were calculated to determine the required amount of grout to be used during closure activities. The remaining calcine on the bin walls, supports, piping, and floor was also calculated to approximate the residual calcine volumes at different stages of the removal process. The estimates for remaining calcine and vault void volume are higher than what would actually be experienced in the field, but are necessary for bounding purposes. The residual calcine in the bins may be higher than what is experienced in the field, as it was assumed that the entire bin volume is full of calcine before removal activities commence. The vault void volumes are higher because the vault roof beam volumes were neglected. The estimations that follow should be considered rough order of magnitude, due to the time constraints dictated by the project's scope of work. Should more accurate numbers be required, a new analysis would be necessary.

  12. Proceedings of the sixth international conference on fluidized bed combustion. Volume III. Technical sessions

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-08-01

    The Sixth International Conference on Fluidized Bed Combustion was held April 9-11, 1980, at the Atlanta Hilton, Atlanta, Georgia. It was sponsored by the US Department of Energy, the Electric Power Research Institute, the US Environmental Protection Agency, and the Tennessee Valley Authority. Forty-five papers from Vol. III of the proceedings have been entered individually into EDB and ERA. Two papers had been entered previously from other sources. (LTN)

  13. Petroleum industry in Latin America: volume III Argentina, Bolivia, Mexico, Peru

    International Nuclear Information System (INIS)

    Reinsch, A.E.; Tissot, R.R.

    1995-01-01

    Like the previous volume in this series, this concluding volume is divided into separately paged sections, one each for Argentina, Bolivia, Mexico, and Peru, each section being complete in itself. For each country, there is a brief historical introduction, followed by a detailed analysis of its energy sector, a description of the physical and market characteristics, the transportation and infrastructure systems, the legal and regulatory issues pertaining to the petroleum industry, especially as regards investment and environmental requirements, and an analysis of the prevailing political climate. figs., tabs., refs.

  14. National Childcare Consumer Study: 1975. Volume III: American Consumer Attitudes and Opinions on Child Care.

    Science.gov (United States)

    Rodes, Thomas W.

    This report represents the third of a series of analyses of child care usage based on 4,609 personal interviews conducted in 1975 from a national probability sample of households with children under 14 years of age. The study was sponsored by the Office of Child Development of the U.S. Department of Health, Education, and Welfare. This volume is…

  15. Combined cycle solar central receiver hybrid power system study. Volume III. Appendices. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-11-01

    A design study for a 100 MW gas turbine/steam turbine combined cycle solar/fossil-fuel hybrid power plant is presented. This volume contains the appendices: (a) preconceptual design data; (b) market potential analysis methodology; (c) parametric analysis methodology; (d) EPGS systems description; (e) commercial-scale solar hybrid power system assessment; and (f) conceptual design data lists. (WHK)

  16. Developing maintainability for tokamak fusion power systems. Phase II report. Volume III: appendices

    International Nuclear Information System (INIS)

    Fuller, G.M.; Zahn, H.S.; Mantz, H.C.; Kaletta, G.R.; Waganer, L.M.; Carosella, L.A.; Conlee, J.L.

    1978-11-01

    This volume contains time estimate summaries to the second level of detail for scheduled or unscheduled maintenance of the first wall/blanket, some selected subsystem components and maintenance equipment. Elaboration of selected maintenance equipment functions and performance as well as conceptual designs is also included

  17. Optimization of radiation therapy, III: a method of assessing complication probabilities from dose-volume histograms

    International Nuclear Information System (INIS)

    Lyman, J.T.; Wolbarst, A.B.

    1987-01-01

    To predict the likelihood of success of a therapeutic strategy, one must be able to assess the effects of the treatment upon both diseased and healthy tissues. This paper proposes a method for determining the probability that a healthy organ that receives a non-uniform distribution of X-irradiation, heat, chemotherapy, or other agent will escape complications. Starting with any given dose distribution, a dose-cumulative-volume histogram for the organ is generated. This is then reduced by an interpolation scheme (involving the volume-weighting of complication probabilities) to a slightly different histogram that corresponds to the same overall likelihood of complications, but which contains one less step. The procedure is repeated, one step at a time, until there remains a final, single-step histogram, for which the complication probability can be determined. The formalism makes use of a complication response function C(D, V) which, for the given treatment schedule, represents the probability of complications arising when the fraction V of the organ receives dose D and the rest of the organ gets none. Although the data required to generate this function are sparse at present, it should be possible to obtain the necessary information from in vivo and clinical studies. Volume effects are taken explicitly into account in two ways: the precise shape of the patient's histogram is employed in the calculation, and the complication response function is a function of the volume
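The abstract above describes collapsing a dose-cumulative-volume histogram one step at a time until a single-step histogram with the same complication probability remains. As a hedged illustration of the same family of ideas, the sketch below implements the closely related "effective volume" reduction, which collapses a multi-step DVH to a single step at the maximum dose; this is a named alternative to the paper's interpolation scheme, not a reproduction of it, and the volume-dependence parameter n is hypothetical:

```python
# Effective-volume DVH reduction (sketch): each partial volume v_i irradiated
# at dose d_i is converted to an equivalent volume at the maximum dose d_max
# via the power-law volume dependence (d_i / d_max)**(1/n). The result is a
# single-step histogram (d_max, v_eff).
def effective_volume(dvh, n=0.5):
    """dvh: list of (dose, fractional_volume) steps; returns (d_max, v_eff)."""
    d_max = max(d for d, _ in dvh)
    v_eff = sum(v * (d / d_max) ** (1.0 / n) for d, v in dvh)
    return d_max, v_eff

# Hypothetical differential DVH: 30% of the organ at 60 Gy, 50% at 30 Gy,
# 20% at 10 Gy.
d_max, v_eff = effective_volume([(60.0, 0.30), (30.0, 0.50), (10.0, 0.20)])
print(d_max, v_eff)
```

The collapsed pair (d_max, v_eff) plays the role of the paper's final single-step histogram: a complication response function C(D, V) evaluated at that one point then gives the overall complication probability.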

  18. Analysis and forecast of electrical distribution system materials. Final report. Volume III. Appendix

    Energy Technology Data Exchange (ETDEWEB)

    Love, C G

    1976-08-23

    These appendixes are referenced in Volume II of this report. They contain the detailed electrical distribution equipment requirements and input material requirements forecasts. Forecasts are given for three electric energy usage scenarios. Also included are data on worldwide reserves and demand for 30 raw materials required for the manufacture of electrical distribution equipment.

  19. Beach Profile Analysis System (BPAS). Volume III. BPAS User’s Guide: Analysis Module SURVY1.

    Science.gov (United States)

    1982-06-01

    extrapolated using the two seawardmost points. Before computing volume changes, common bounds are established relative to the landward and seaward extent...Cyber 176 or equivalent computer. Such features include the 10-character, 60-bit word size, the FORTRAN-callable sort routine (interfacing with the NOS

  20. Operations Events Census Report: Volume III, 1975-1980. Sanitized Version.

    Science.gov (United States)

    1985-04-01


  1. Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume III. Computer Codes.

    Science.gov (United States)

    1982-05-01


  2. Aerial radiometric and magnetic survey; Brushy Basin detail survey: Price/Salina national topographic map sheets, Utah. Volume III. Area II: graphic data, Section III-IX Final report

    International Nuclear Information System (INIS)

    1981-01-01

    This volume contains all of the graphic data for Area II, which include map lines 1660 to 3400 and 5360 to 5780 and tie lines 6100, 6120, and 6160. Due to the large map scale of the data presented (1:62,500), this area was further subdivided into eleven 7-1/2-minute quadrangle sheets. It should be noted that TL6100 resides in both Areas II and III. The graphic data for TL6100 are presented in Volume IV - Area III - Graphic Data of this report.

  3. Economic evaluation of the annual cycle energy system (ACES). Final report. Volume III, appendices

    Energy Technology Data Exchange (ETDEWEB)

    1980-06-01

    This volume consists of seven appendices related to ACES, the first three of which are concerned with computer programs. The appendices are entitled: (A) ACESIM: Residential Program Listing; (B) Typical Inputs and Outputs of ACESIM; (C) CACESS: Commercial Building Program Listing; (D) Typical Weather-Year Selection Requirements; (E) Building Characteristics; (F) List of Major Variables Used in the Computer Programs; and (G) Bibliography. 79 references.

  4. Genetic correlations between brain volumes and the WAIS-III dimensions of verbal comprehension, working memory, perceptual organization, and processing speed.

    Science.gov (United States)

    Posthuma, Daniëlle; Baaré, Wim F C; Hulshoff Pol, Hilleke E; Kahn, René S; Boomsma, Dorret I; De Geus, Eco J C

    2003-04-01

    We recently showed that the correlations of gray and white matter volume with full scale IQ and the Working Memory dimension are completely mediated by common genetic factors (Posthuma et al., 2002). Here we examine whether the other WAIS III dimensions (Verbal Comprehension, Perceptual Organization, Processing Speed) are also related to gray and white matter volume, and whether any of the dimensions are related to cerebellar volume. Two overlapping samples provided 135 subjects from 60 extended twin families for whom both MRI scans and WAIS III data were available. All three brain volumes are related to Working Memory capacity (r = 0.27). This phenotypic correlation is completely due to a common underlying genetic factor. Processing Speed was genetically related to white matter volume (r(g) = 0.39). Perceptual Organization was both genetically (r(g) = 0.39) and environmentally (r(e) = -0.71) related to cerebellar volume. Verbal Comprehension was not related to any of the three brain volumes. It is concluded that brain volumes are genetically related to intelligence, which suggests that genes that influence brain volume may also be important for intelligence. It is also noted, however, that the direction of causation (i.e., do genes influence brain volume which in turn influences intelligence, or alternatively, do genes influence intelligence which in turn influences brain volume), and the presence or absence of pleiotropy, have not yet been resolved.

  5. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  6. A COSMIC VARIANCE COOKBOOK

    International Nuclear Information System (INIS)

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.

    2011-01-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ∼38%, while it is ∼27% for GEMS and ∼12% for COSMOS. For galaxies of m* ∼ 10^10 M_sun, the relative cosmic variance is ∼19% for GOODS, ∼13% for GEMS, and ∼6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic
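    The linear-regime recipe in this abstract (galaxy cosmic variance equals galaxy bias times dark-matter cosmic variance) can be sketched as follows. Adding Poisson shot noise in quadrature is a standard extension, not stated in the abstract, and the 10% dark-matter variance and bias of 3.8 below are illustrative placeholders rather than values from the paper's fitting function:

    ```python
    def galaxy_cosmic_variance(sigma_dm, bias):
        """Relative cosmic variance of a galaxy sample (linear regime):
        galaxy bias times the dark-matter cosmic variance for the same
        survey geometry, mean redshift, and bin size."""
        return bias * sigma_dm

    def total_relative_error(sigma_cv, n_galaxies):
        """Standard extension: add Poisson shot noise in quadrature."""
        return (sigma_cv**2 + 1.0 / n_galaxies) ** 0.5

    # Assumed dark-matter cosmic variance of 10% for some field geometry
    # and an assumed bias of 3.8 for massive galaxies at z ~ 2.
    sigma_gal = galaxy_cosmic_variance(0.10, 3.8)   # 0.38, i.e. ~38%
    err = total_relative_error(sigma_gal, 200)      # shot noise included
    ```

    With these placeholder inputs the galaxy cosmic variance dominates the error budget, which is the paper's point for small fields and massive galaxies.
    
    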

  7. Genetic correlations between brain volumes and the WAIS-III dimensions of verbal comprehension, working memory, perceptual organization, and processing speed

    DEFF Research Database (Denmark)

    Posthuma, Daniëlle; Baare, Wim F.C.; Hulshoff Pol, Hilleke E.

    2003-01-01

    We recently showed that the correlation of gray and white matter volume with full scale IQ and the Working Memory dimension are completely mediated by common genetic factors (Posthuma et al., 2002). Here we examine whether the other WAIS III dimensions (Verbal Comprehension, Perceptual Organization......, Processing Speed) are also related to gray and white matter volume, and whether any of the dimensions are related to cerebellar volume. Two overlapping samples provided 135 subjects from 60 extended twin families for whom both MRI scans and WAIS III data were available. All three brain volumes are related...... to Working Memory capacity (r = 0.27). This phenotypic correlation is completely due to a common underlying genetic factor. Processing Speed was genetically related to white matter volume (r(g) = 0.39). Perceptual Organization was both genetically (r(g) = 0.39) and environmentally (r(e) = -0.71) related...

  8. Maternal vitamin C deficiency does not reduce hippocampal volume and beta-tubulin III intensity in prenatal Guinea pigs

    DEFF Research Database (Denmark)

    Hansen, Stine Normann; Schjoldager, Janne Gram; Paidi, Maya Devi

    2016-01-01

    Marginal vitamin C (vitC) deficiency affects 5% to 10% of adults including subpopulations such as pregnant women and newborns. Animal studies link vitC deficiency to deleterious effects on the developing brain, but exactly how the brain adapts to vitC deficiency and the mechanisms behind...... the observed deficits remain largely unknown. We hypothesized that vitC deficiency in utero may lead to a decreased neuronal maturation and increased cellular death giving rise to alterations of the hippocampal morphology in a guinea pig model. Brains from prenatal guinea pig pups (n = 9-10 in each group......) subjected to either a sufficient (918 mg vitC/kg feed) or deficient (100 mg vitC/kg feed) maternal dietary regimen were assessed with regards to hippocampal volume and beta-tubulin isotype III staining intensity at 2 gestational time points (45 and 56). We found a distinct differential regional growth...

  9. NWTS conceptual reference repository description (CRRD). Volume III. Criteria, bases, special studies, and codes

    International Nuclear Information System (INIS)

    1981-05-01

    This volume documents the criteria, design bases, and special studies and provides the backup for the design presented in Volume II. The criteria presented here were developed by ONWI as a draft version for consideration in this conceptual report. Since these criteria were developed subsequent to preparation of the reports used as a basis for the CRRD, not all of the criteria could be fully considered in preparation of the CRRD. However, they were used as guidelines wherever possible. The criteria for terminal storage of waste are still in development. The chapter on the design bases identifies the important design considerations and provides the justification for their selection. The design bases were developed not so much to give exact values for parameters as to identify the parameters that are significant to the design. They also serve as a common basis for coordinating analysis and design studies until the next design phase is completed. Some of the design bases presented here were taken directly from the Stearns-Roger NWTS-R1 Conceptual Design Report. The special studies document technical aspects of the design that are of particular importance or that furnish additional information pertaining to the design

  10. An excursion through elementary mathematics, volume iii discrete mathematics and polynomial algebra

    CERN Document Server

    Caminha Muniz Neto, Antonio

    2018-01-01

    This book provides a comprehensive, in-depth overview of elementary mathematics as explored in Mathematical Olympiads around the world. It expands on topics usually encountered in high school and could even be used as preparation for a first-semester undergraduate course. This third and last volume covers Counting, Generating Functions, Graph Theory, Number Theory, Complex Numbers, Polynomials, and much more. As part of a collection, the book differs from other publications in this field by not being a mere selection of questions or a set of tips and tricks that applies to specific problems. It starts from the most basic theoretical principles, without being either too general or too axiomatic. Examples and problems are discussed only if they are helpful as applications of the theory. Propositions are proved in detail and subsequently applied to Olympic problems or to other problems at the Olympic level. The book also explores some of the hardest problems presented at National and International Mathematics Ol...

  11. Guide for the evaluation of physical protection equipment. Book 1: Volumes I--III

    International Nuclear Information System (INIS)

    Haberman, W.

    1977-06-01

    A guide for evaluating the performance of commercially available physical protection equipment has been prepared in partial fulfillment of Task 2 of MITRE contract AT(49-24)-0376 for use by the U.S. Nuclear Regulatory Commission (NRC). Separate evaluation procedures are provided for each generic type of equipment contained in the companion document, Catalog of Physical Protection Equipment. Among the equipment parameters evaluated, as appropriate, are sensitivity, area/volume of coverage, false/nuisance alarm rate, resistance to countermeasures, environmental requirements, installation parameters and maintenance. Four evaluation techniques are employed (inspections, analyses, demonstrations and tests); standard test equipment (both commercially available as well as developmental) to be used in the evaluation are listed

  12. Solar Pilot Plant, Phase I. Preliminary design report. Volume III. Collector subsystem. CDRL item 2

    Energy Technology Data Exchange (ETDEWEB)

    None

    1977-05-01

    The Honeywell collector subsystem features a low-profile, multifaceted heliostat designed to provide high reflectivity and accurate angular and spatial positioning of the redirected solar energy under all conditions of wind load and mirror attitude within the design operational envelope. The heliostats are arranged in a circular field around a cavity receiver on a tower halfway south of the field center. A calibration array mounted on the receiver tower provides capability to measure individual heliostat beam location and energy periodically. This information and weather data from the collector field are transmitted to a computerized control subsystem that addresses the individual heliostat to correct pointing errors and determine when the mirrors need cleaning. This volume contains a detailed subsystem design description, a presentation of the design process, and the results of the SRE heliostat test program.

  13. Site characterization report for the basalt waste isolation project. Volume III

    International Nuclear Information System (INIS)

    1982-11-01

    The reference location for a repository in basalt for the terminal storage of nuclear wastes on the Hanford Site and the candidate horizons within this reference repository location have been identified and the preliminary characterization work in support of the site screening process has been completed. Fifteen technical questions regarding the qualification of the site were identified to be addressed during the detailed site characterization phase of the US Department of Energy-National Waste Terminal Storage Program site selection process. Resolution of these questions will be provided in the final site characterization progress report, currently planned to be issued in 1987, and in the safety analysis report to be submitted with the License Application. The additional information needed to resolve these questions and the plans for obtaining the information have been identified. This Site Characterization Report documents the results of the site screening process, the preliminary site characterization data, the technical issues that need to be addressed, and the plans for resolving these issues. Volume 3 contains chapters 13 through 19: site issues and plans; geoengineering and repository design issues and plans; waste package and site geochemistry issues and plans; performance-assessment issues and plans; site characterization program; quality assurance; and identification of alternate sites

  14. Experimental fusion power reactor conceptual design study. Final report. Volume III

    International Nuclear Information System (INIS)

    Baker, C.C.

    1976-12-01

    This document is the final report which describes the work carried out by General Atomic Company for the Electric Power Research Institute on a conceptual design study of a fusion experimental power reactor (EPR) and an overall EPR facility. The primary objective of the two-year program was to develop a conceptual design of an EPR that operates at ignition and produces continuous net power. A conceptual design was developed for a Doublet configuration based on indications that a noncircular tokamak offers the best potential of achieving a sufficiently high effective fuel containment to provide a viable reactor concept at reasonable cost. Other objectives included the development of a planning cost estimate and schedule for the plant and the identification of critical R and D programs required to support the physics development and engineering and construction of the EPR. This volume contains the following appendices: (1) tradeoff code analysis, (2) residual mode transport, (3) blanket/first wall design evaluations, (4) shielding design evaluation, (5) toroidal coil design evaluation, (6) E-coil design evaluation, (7) F-coil design evaluation, (8) plasma recycle system design evaluation, (9) primary coolant purification design evaluation, (10) power supply system design evaluation, (11) number of coolant loops, (12) power conversion system design evaluation, and (13) maintenance methods evaluation

  15. Novel concepts for the compression of large volumes of carbon dioxide-phase III

    Energy Technology Data Exchange (ETDEWEB)

    Moore, J. Jeffrey [Southwest Research Inst., San Antonio, TX (United States); Allison, Timothy C. [Southwest Research Inst., San Antonio, TX (United States); Evans, Neal D. [Southwest Research Inst., San Antonio, TX (United States); Moreland, Brian [Southwest Research Inst., San Antonio, TX (United States); Hernandez, Augusto J. [Southwest Research Inst., San Antonio, TX (United States); Day, Meera [Southwest Research Inst., San Antonio, TX (United States); Ridens, Brandon L. [Southwest Research Inst., San Antonio, TX (United States)

    2014-06-30

    and tested in a closed loop compressor facility using CO2 . Both test programs successfully demonstrated good performance and mechanical behavior. In Phase III, a pilot compression plant consisting of a multi-stage centrifugal compressor with cooled diaphragm technology has been designed, constructed, and tested. Comparative testing of adiabatic and cooled tests at equivalent inlet conditions shows that the cooled diaphragms reduce power consumption by 3-8% when the compressor is operated as a back-to-back unit and by up to 9% when operated as a straight-through compressor with no intercooler. The power savings, heat exchanger effectiveness, and temperature drops for the cooled diaphragm were all slightly higher than predicted values but showed the same trends.

  16. Restricted Variance Interaction Effects

    DEFF Research Database (Denmark)

    Cortina, Jose M.; Köhler, Tine; Keeler, Kathleen R.

    2018-01-01

    Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV int...

  17. Genetic correlations between brain volumes and the WAIS-III dimensions of verbal comprehension, working memory, perceptual organization, and processing speed

    NARCIS (Netherlands)

    Posthuma, D.; Baare, W.F.C.; Hulshoff Pol, H.E.; Kahn, R.S.; Boomsma, D.I.; de Geus, E.J.C.

    2003-01-01

    We recently showed that the correlation of gray and white matter volume with full scale IQ and the Working Memory dimension are completely mediated by common genetic factors (Posthuma et al., 2002). Here we examine whether the other WAIS III dimensions (Verbal Comprehension, Perceptual Organization,

  18. Aerial radiometric and magnetic survey; Brushy Basin detail survey: Price/Salina national topographic map sheets, Utah. Volume IV. Area III: graphic data. Final report

    International Nuclear Information System (INIS)

    1981-01-01

    This volume contains all the graphic data for Area III, which includes lines 3420 to 5320 and tie lines 6080, 6100, and 6140. Due to the large map scale of the data presented (1:62,500), this area was further subdivided into eleven 7-1/2 min quadrant sheets

  19. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.Th; Verburg, T.G.

    2001-01-01

    The present study was undertaken to explore possibilities to judge survey quality on basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling, to assess both average, variance and the nature of the distribution of elemental concentrations in local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which is implicitly and conceptually underlying any survey performed. (author)
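    The survey-to-local variance ratio that this abstract uses as a quality measure can be sketched as a between-site over mean within-site variance ratio; the sites and measurement values below are hypothetical:

    ```python
    import statistics

    def survey_to_local_ratio(sites):
        """Signal-to-noise ratio as defined in the abstract: survey
        (between-site) variance divided by the mean local (within-site)
        variance. `sites` is a list of replicate-measurement lists,
        one inner list per sampling site."""
        site_means = [statistics.mean(s) for s in sites]
        survey_var = statistics.variance(site_means)            # between sites
        local_var = statistics.mean(
            statistics.variance(s) for s in sites)              # within sites
        return survey_var / local_var

    # Three hypothetical sites, five replicate samples each
    sites = [
        [10.1, 9.9, 10.0, 10.2, 9.8],
        [15.0, 15.2, 14.8, 15.1, 14.9],
        [20.3, 19.7, 20.0, 20.1, 19.9],
    ]
    snr = survey_to_local_ratio(sites)   # large ratio = good survey quality
    ```

    A ratio near or below 1 would mean local noise obscures the survey signal, which is the failure mode the abstract warns about.
    
    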

  20. Comparison of satellite imagery from LISS-III/Resourcesat-1 and TM/Landsat 5 to estimate stand-level timber volume

    Directory of Open Access Journals (Sweden)

    Elias Fernando Berra

    2017-01-01

    After Landsat 5 activities were discontinued, sensors on board the ResourceSat-1 satellite were pointed to as an option for continuing the Landsat series. The aim of this study is to estimate timber volume from a slash pine (Pinus elliottii Engelm.) stand using images from both LISS-III/ResourceSat-1 and TM/Landsat 5 sensors, cross-comparing their performances. Reflectance values from the four spectral bands considered equivalent for both sensors were compared regarding sensitivity to changes in timber volume. Trends were similar, with direct relationships in the near-infrared bands and inverse relationships in the visible and mid-infrared bands. Significant differences were only found in the equivalent green band. Multiple linear regressions were used to select spectral bands that would better explain variations in timber volume. The best fit equations for each sensor were inverted to generate maps of timber volume, estimates which were compared at pixel and stand level. Neither of the scales showed significant differences between estimates generated from the two sensors. We concluded that LISS-III and TM have generally very similar performance for monitoring timber volume, and LISS-III could therefore be potentially used as a complement or substitute to the Landsat series.
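    The band-to-volume regressions described in this record can be illustrated with a one-band ordinary least squares fit (the study selected multiple bands via multiple linear regression; the reflectance and volume data below are hypothetical):

    ```python
    def fit_simple_regression(x, y):
        """Ordinary least squares fit y = a + b*x for a single spectral
        band; a one-band stand-in for the study's multi-band models."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        a = my - b * mx
        return a, b

    # Hypothetical stand data: near-infrared reflectance vs timber volume (m3/ha);
    # NIR shows a direct relationship with volume, as the abstract reports.
    nir = [0.30, 0.35, 0.40, 0.45]
    vol = [120.0, 150.0, 180.0, 210.0]
    a, b = fit_simple_regression(nir, vol)
    predicted = a + b * 0.50   # volume estimate for a new pixel
    ```

    Inverting the fitted equation over every pixel of a reflectance image is what produces the timber-volume maps compared in the study.
    
    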

  1. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    1999-01-01

    The present study deals with the (larger-scaled) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)

  2. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
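    The Allan variance discussed in this record is half the mean squared difference of successive cluster averages of the data; a minimal non-overlapped sketch (the deterministic alternating sequence below is a stand-in for real frequency data):

    ```python
    def allan_variance(y, tau=1):
        """Non-overlapped Allan variance of fractional-frequency data `y`
        at averaging factor `tau` (in samples): half the mean squared
        difference of successive tau-sample averages."""
        n = len(y) // tau
        means = [sum(y[i * tau:(i + 1) * tau]) / tau for i in range(n)]
        diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n - 1)]
        return 0.5 * sum(diffs) / len(diffs)

    # Alternating +1/-1 sequence: successive tau=1 averages differ by 2,
    # so the Allan variance at tau=1 is 0.5 * 4 = 2.0; at tau=2 every
    # pair averages to zero and the Allan variance vanishes.
    y = [1.0, -1.0] * 8
    avar_1 = allan_variance(y, tau=1)
    avar_2 = allan_variance(y, tau=2)
    ```

    The record's point is that a family of such finite-difference variances, unlike the variance of first differences, does not in general pin down the underlying spectrum.
    
    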

  3. The history of NATO TNF policy: The role of studies, analysis and exercises conference proceedings. Volume 3: Papers by Gen. Robert C. Richardson III (Ret.)

    Energy Technology Data Exchange (ETDEWEB)

    Rinne, R.L. [ed.

    1994-02-01

    This conference was organized to study and analyze the role of simulation, analysis, modeling, and exercises in the history of NATO policy. The premise was not that the results of past studies will apply to future policy, but rather that understanding what influenced the decision process, and how, would be of value. The structure of the conference was built around discussion panels. The panels were augmented by a series of papers and presentations focusing on particular TNF events, issues, studies, or exercises. The conference proceedings consist of three volumes. Volume 1 contains the conference introduction, agenda, biographical sketches of principal participants, and analytical summary of the presentations and discussion panels. Volume 2 contains a short introduction and the papers and presentations from the conference. This volume contains selected papers by Brig. Gen. Robert C. Richardson III (Ret.).

  4. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  5. Genetic correlations between brain volumes and the WAIS-III dimensions of verbal comprehension, working memory, perceptual organization, and processing speed

    DEFF Research Database (Denmark)

    Posthuma, Daniëlle; Baare, Wim F.C.; Hulshoff Pol, Hilleke E.

    2003-01-01

    We recently showed that the correlation of gray and white matter volume with full scale IQ and the Working Memory dimension are completely mediated by common genetic factors (Posthuma et al., 2002). Here we examine whether the other WAIS III dimensions (Verbal Comprehension, Perceptual Organization...... to Working Memory capacity (r = 0.27). This phenotypic correlation is completely due to a common underlying genetic factor. Processing Speed was genetically related to white matter volume (r(g) = 0.39). Perceptual Organization was both genetically (r(g) = 0.39) and environmentally (r(e) = -0.71) related...

  6. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from the CRAN.
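    The core regularization idea described here, that unreliable per-variable variance estimates in the p ≫ n setting should be pulled toward a pooled value, can be illustrated with a minimal shrinkage sketch. This is an illustration of the general idea only, not the MVR package's actual algorithm or API; the shrinkage weight `lam` is an assumed constant rather than adaptively chosen:

    ```python
    import statistics

    def shrunken_variances(rows, lam=0.5):
        """Pull each variable's sample variance toward the pooled
        (mean) variance, trading a little bias for a large reduction
        in estimator variance when each row has few observations."""
        raw = [statistics.variance(r) for r in rows]
        pooled = statistics.mean(raw)
        return [(1 - lam) * v + lam * pooled for v in raw]

    # Three hypothetical 'omics' variables, three samples each: raw
    # variances 1.0, 0.0, and 16.0 are all unreliable at n = 3.
    rows = [[1.0, 2.0, 3.0], [10.0, 10.0, 10.0], [0.0, 4.0, 8.0]]
    shrunk = shrunken_variances(rows, lam=0.5)
    ```

    Note how the zero raw variance, which would make a t statistic blow up, is moved to a strictly positive value while the average variance across variables is preserved.
    
    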

  7. Industrial Fuel Gas Demonstration Plant Program. Conceptual design and evaluation of commercial plant. Volume III. Economic analyses (Deliverable Nos. 15 and 16)

    Energy Technology Data Exchange (ETDEWEB)

    None

    1978-01-01

    This report presents the results of Task I of Phase I in the form of a Conceptual Design and Evaluation of Commercial Plant report. The report is presented in four volumes as follows: I - Executive Summary, II - Commercial Plant Design, III - Economic Analyses, IV - Demonstration Plant Recommendations. Volume III presents the economic analyses for the commercial plant and the supporting data. General cost and financing factors used in the analyses are tabulated. Three financing modes are considered. The product gas cost calculation procedure is identified and appendices present computer inputs and sample computer outputs for the MLGW, Utility, and Industry Base Cases. The results of the base case cost analyses for plant fenceline gas costs are as follows: Municipal Utility, (e.g. MLGW), $3.76/MM Btu; Investor Owned Utility, (25% equity), $4.48/MM Btu; and Investor Case, (100% equity), $5.21/MM Btu. The results of 47 IFG product cost sensitivity cases involving a dozen sensitivity variables are presented. Plant half size, coal cost, plant investment, and return on equity (industrial) are the most important sensitivity variables. Volume III also presents a summary discussion of the socioeconomic impact of the plant and a discussion of possible commercial incentives for development of IFG plants.

  8. On the volatility-volume relationship in energy futures markets using intra-day data

    International Nuclear Information System (INIS)

    Chevallier, Julien; Sevi, Benoit

    2011-01-01

    This paper investigates the relationship between trading volume and price volatility in the crude oil and natural gas futures markets when using high-frequency data. By regressing various realized volatility measures (with/without jumps) on trading volume and trading frequency, our results feature a contemporaneous and largely positive relationship. Furthermore, we test whether the volatility-volume relationship is symmetric for energy futures by considering positive and negative realized semi-variance. We show that (i) an asymmetric volatility-volume relationship indeed exists, (ii) trading volume and trading frequency significantly affect negative and positive realized semi-variance, and (iii) the information content of negative realized semi-variance is higher than for positive realized semi-variance. (authors)
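    The realized semi-variances used in this record split the realized variance by the sign of the intraday returns; a minimal sketch with hypothetical returns:

    ```python
    def realized_semivariances(returns):
        """Split realized variance by return sign: RV = RS(+) + RS(-),
        where RS(+) sums squared positive intraday returns and RS(-)
        sums squared negative ones."""
        rs_pos = sum(r * r for r in returns if r > 0)
        rs_neg = sum(r * r for r in returns if r < 0)
        return rs_pos, rs_neg

    # Hypothetical intraday log-returns for one trading day
    intraday = [0.002, -0.001, 0.003, -0.004, 0.001]
    rs_pos, rs_neg = realized_semivariances(intraday)
    rv = rs_pos + rs_neg   # total realized variance for the day
    ```

    Regressing RS(+) and RS(-) separately on trading volume and trading frequency is what lets the paper test whether the volatility-volume relationship is asymmetric.
    
    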

  9. Variance analysis refines overhead cost control.

    Science.gov (United States)

    Cooper, J C; Suver, J D

    1992-02-01

    Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
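    The volume variance the article recommends reporting is the standard-costing quantity sketched below; the department figures are hypothetical:

    ```python
    def overhead_volume_variance(actual_units, budgeted_units, std_overhead_rate):
        """Standard-costing volume variance: overhead over- or
        under-absorbed because actual activity differed from budget.
        Positive = favorable (over-absorbed); negative = unfavorable."""
        return (actual_units - budgeted_units) * std_overhead_rate

    # Hypothetical department: budgeted 1,000 procedures, performed 900,
    # fixed overhead applied at $50 per procedure.
    variance = overhead_volume_variance(900, 1000, 50.0)
    ```

    An unfavorable result like this one flags $5,000 of overhead left unrecovered by billed volume, the kind of loss the article says internal reports should surface in time for corrective action.
    
    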

  10. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variance. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
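    The kind of approximation error this record studies can be made concrete for a two-input OR gate, where the exact variance is available for comparison. This is a generic first-order (delta-method) illustration, not one of the paper's two specific techniques, and the input means and variances are hypothetical:

    ```python
    def or_gate_variance_first_order(m1, v1, m2, v2):
        """First-order variance of P(top) = p1 + p2 - p1*p2 for an OR
        gate with independent random inputs: sum of (df/dpi)^2 * Var(pi)
        with the derivatives evaluated at the input means."""
        d1 = 1.0 - m2   # df/dp1 at the means
        d2 = 1.0 - m1   # df/dp2 at the means
        return d1 * d1 * v1 + d2 * d2 * v2

    def or_gate_variance_exact(m1, v1, m2, v2):
        """Exact variance for independent inputs; the extra v1*v2 term
        is exactly what the first-order approximation drops."""
        return (1.0 - m2) ** 2 * v1 + (1.0 - m1) ** 2 * v2 + v1 * v2

    m1, v1 = 1e-3, 1e-7   # hypothetical input failure probability moments
    m2, v2 = 2e-3, 4e-7
    approx = or_gate_variance_first_order(m1, v1, m2, v2)
    exact = or_gate_variance_exact(m1, v1, m2, v2)
    rel_err = (exact - approx) / exact   # grows with the input variances
    ```

    Here the neglected v1*v2 cross term is tiny, but for highly uncertain inputs it is no longer negligible, which is the regime the paper's error plots map out.
    
    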

  11. Soil Properties Database of Spanish Soils Volume III.- Extremadura; Base de Datos de Propiedades Edafologicas de los Suelos Espanoles Volumen III.- Extremadura

    Energy Technology Data Exchange (ETDEWEB)

    Trueba, C; Millam, R; Schmid, T; Roquero, C; Magister, M

    1998-12-01

    Soil vulnerability determines the sensitivity of a soil following accidental radioactive contamination with Cs-137 and Sr-90. The Departamento de Impacto Ambiental de la Energia of CIEMAT is carrying out an assessment of the radiological vulnerability of the different Spanish soils found on the Iberian Peninsula. This requires knowledge of the soil properties for the various types of existing soils. In order to achieve this aim, a bibliographical compilation of soil profiles has been made to characterize the different soil types and create a database of their properties. Depending on the year of publication and the type of documentary source, the information compiled from the available bibliography is very heterogeneous. Therefore, an important effort has been made to normalize and process the information prior to its incorporation into the database. This volume presents the criteria applied to normalize and process the data as well as the soil properties of the various soil types belonging to the Comunidad Autonoma de Extremadura. (Author) 50 refs.

  12. SOLVENT-BASED TO WATERBASED ADHESIVE-COATED SUBSTRATE RETROFIT - VOLUME III: LABEL MANUFACTURING CASE STUDY: NASHUA CORPORATION

    Science.gov (United States)

    This volume discusses Nashua Corporation's Omaha facility, a label and label stock manufacturing facility that no longer uses solvent-based adhesives. Information obtained includes issues related to the technical, economic, and environmental barriers and opportunities associated ...

  13. Predictive and prognostic value of tumor volume and its changes during radical radiotherapy of stage III non-small cell lung cancer. A systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Kaesmann, Lukas [University of Luebeck, Department of Radiation Oncology, Luebeck (Germany); Niyazi, Maximilian; Fleischmann, Daniel [LMU Munich, Department of Radiation Oncology, Munich (Germany); German Cancer Consortium (DKTK), partner site Munich, Munich (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Blanck, Oliver; Baumann, Rene [University Medical Center Schleswig-Holstein, Department of Radiation Oncology, Kiel (Germany); Baues, Christian; Klook, Lisa; Rosenbrock, Johannes; Trommer-Nestler, Maike [University Hospital of Cologne, Department of Radiotherapy, Cologne (Germany); Dobiasch, Sophie [Technische Universitaet Muenchen, Department of Radiation Oncology, Munich (Germany); Eze, Chukwuka [LMU Munich, Department of Radiation Oncology, Munich (Germany); Gauer, Tobias; Goy, Yvonne [University Medical Center Hamburg-Eppendorf, Department of Radiotherapy and Radio-Oncology, Hamburg (Germany); Giordano, Frank A.; Sautter, Lisa [University Medical Center Mannheim, Department of Radiation Oncology, Mannheim (Germany); Hausmann, Jan [University Medical Center Duesseldorf, Department of Radiation Oncology, Duesseldorf (Germany); Henkenberens, Christoph [Hannover Medical School, Department of Radiation and Special Oncology, Hannover (Germany); Kaul, David; Thieme, Alexander H. [Charite School of Medicine and University Hospital, Campus Virchow-Klinikum, Department of Radiation Oncology, Berlin (Germany); Krug, David; Schmitt, Daniela [University Hospital Heidelberg and National Center for Radiation Research in Oncology (NCRO) and Heidelberg Institute for Radiation Oncology (HIRO), Department of Radiation Oncology, Heidelberg (Germany); Maeurer, Matthias [University Medical Center Jena, Department of Radiation Oncology, Jena (Germany); Panje, Cedric M. [Kantonsspital St. Gallen, Department of Radiation Oncology, St. 
Gallen (Switzerland); Suess, Christoph [University Medical Center Regensburg, Department of Radiation Oncology, Regensburg (Germany); Ziegler, Sonia [University Medical Center Erlangen, Department of Radiation Oncology, Erlangen (Germany); Ebert, Nadja [University Medical Center Dresden, Department of Radiation Oncology, Dresden (Germany); OncoRay - National Center for Radiation Research in Oncology, Dresden (Germany); Medenwald, Daniel [Martin Luther University Halle-Wittenberg, Department of Radiation Oncology, Faculty of Medicine, Halle (Germany); Ostheimer, Christian [Martin Luther University Halle-Wittenberg, Department of Radiation Oncology, Faculty of Medicine, Halle (Germany); Klinik und Poliklinik fuer Strahlentherapie, Universitaetsklinikum Halle (Saale) (Germany); Collaboration: Young DEGRO Trial Group

    2018-02-15

    Lung cancer remains the leading cause of cancer-related mortality worldwide. Stage III non-small cell lung cancer (NSCLC) includes heterogeneous presentations of the disease, including lymph node involvement and large tumour volumes with infiltration of the mediastinum, heart or spine. In the treatment of stage III NSCLC, an interdisciplinary approach including radiotherapy is considered standard of care, with acceptable toxicity and improved clinical outcome concerning local control. Furthermore, gross tumour volume (GTV) changes during definitive radiotherapy would allow for adaptive replanning, which offers normal tissue sparing and dose escalation. A literature review was conducted to describe the predictive value of GTV changes during definitive radiotherapy, focussing especially on overall survival. The literature search was conducted in a two-step review process using PubMed/Medline with the key words "stage III non-small cell lung cancer", "radiotherapy", "tumour volume" and "prognostic factors". After final consideration, 17, 14 and 9 studies with a total of 2516, 784 and 639 patients on the predictive impact of GTV, GTV changes and their impact on overall survival, respectively, for definitive radiotherapy for stage III NSCLC were included in this review. Initial GTV is an important prognostic factor for overall survival in several studies, but the time of evaluation and the value of histology need to be further investigated. GTV changes during radiotherapy differ widely; the optimal timing for re-evaluation of GTV and the predictive value of such changes for prognosis need to be clarified. The prognostic value of GTV changes remains unclear owing to varying study quality, differing re-evaluation times and conflicting results. The main finding was that the clinical impact of GTV changes during definitive radiotherapy is still unclear due to heterogeneous study designs of varying quality

  14. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    Full Text Available We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
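The simplest covariance estimator considered in this line of work, the sample covariance, yields closed-form global minimum-variance weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A sketch with a hypothetical 3-asset covariance matrix (not data from the paper):

```python
import numpy as np

def min_variance_weights(cov):
    # Global minimum-variance portfolio: w = cov^{-1} 1 / (1' cov^{-1} 1)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Hypothetical annualized covariance matrix for three assets
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

w = min_variance_weights(cov)
gmv_var = float(w @ cov @ w)

# Equally-weighted benchmark, as in the paper's comparison set
eq = np.full(3, 1.0 / 3.0)
eq_var = float(eq @ cov @ eq)
```

By construction the unconstrained GMV portfolio has variance no larger than any other fully-invested portfolio, including the equal-weight benchmark; the long-short and 130/30 variants in the paper add weight constraints on top of this basic problem.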

  15. Field Surveys, IOC Valleys. Volume III, Part II. Cultural Resources Survey, Pine and Wah Wah Valleys, Utah.

    Science.gov (United States)

    1981-08-01

    including horse, camel, mammoth, musk ox, and certain species of bison, goat, and bear, which had previously inhabited the marsh and... (remainder of the excerpt is illegible OCR of a site-location table: Location, Type/Contents, Affiliation; site 42B@644, ridge over creek, depression, cleared areas, ground...)

  16. Survey of fish impingement at power plants in the United States. Volume III. Estuaries and coastal waters

    International Nuclear Information System (INIS)

    Stupka, R.C.; Sharma, R.K.

    1977-03-01

    Impingement of fish at cooling-water intakes of 32 power plants located on estuaries and coastal waters has been surveyed, and the data are presented. Descriptions of site, plant, and intake design and operation are provided. Reports in this volume summarize impingement data for individual plants in tabular and histogram formats. Information was available from differing sources such as the utilities themselves, public documents, regulatory agencies, and others; thus, the extent of detail in the reports varies greatly from plant to plant. Histogram preparation involved an extrapolation procedure that has inadequacies. The reader is cautioned in the use of information presented in this volume to determine intake-design acceptability or intensity of impacts on ecosystems. No conclusions are presented herein; data comparisons are made in Volume IV.

  17. ICPP tank farm closure study. Volume III: Cost estimates, planning schedules, yearly cost flowcharts, and life-cycle cost estimates

    International Nuclear Information System (INIS)

    1998-02-01

    This volume contains information on cost estimates, planning schedules, yearly cost flowcharts, and life-cycle costs for the six options described in Volume 1, Section 2: Option 1 -- Total removal clean closure; No subsequent use; Option 2 -- Risk-based clean closure; LLW fill; Option 3 -- Risk-based clean closure; CERCLA fill; Option 4 -- Close to RCRA landfill standards; LLW fill; Option 5 -- Close to RCRA landfill standards; CERCLA fill; and Option 6 -- Close to RCRA landfill standards; Clean fill. This volume is divided into two portions. The first portion contains the cost and planning schedule estimates while the second portion contains life-cycle costs and yearly cash flow information for each option

  18. Survey of fish impingement at power plants in the United States. Volume III. Estuaries and coastal waters

    Energy Technology Data Exchange (ETDEWEB)

    Stupka, Richard C.; Sharma, Rajendra K.

    1977-03-01

    Impingement of fish at cooling-water intakes of 32 power plants located on estuaries and coastal waters has been surveyed, and the data are presented. Descriptions of site, plant, and intake design and operation are provided. Reports in this volume summarize impingement data for individual plants in tabular and histogram formats. Information was available from differing sources such as the utilities themselves, public documents, regulatory agencies, and others; thus, the extent of detail in the reports varies greatly from plant to plant. Histogram preparation involved an extrapolation procedure that has inadequacies. The reader is cautioned in the use of information presented in this volume to determine intake-design acceptability or intensity of impacts on ecosystems. No conclusions are presented herein; data comparisons are made in Volume IV.

  19. Predictive and prognostic value of tumor volume and its changes during radical radiotherapy of stage III non-small cell lung cancer. A systematic review

    International Nuclear Information System (INIS)

    Kaesmann, Lukas; Niyazi, Maximilian; Fleischmann, Daniel; Blanck, Oliver; Baumann, Rene; Baues, Christian; Klook, Lisa; Rosenbrock, Johannes; Trommer-Nestler, Maike; Dobiasch, Sophie; Eze, Chukwuka; Gauer, Tobias; Goy, Yvonne; Giordano, Frank A.; Sautter, Lisa; Hausmann, Jan; Henkenberens, Christoph; Kaul, David; Thieme, Alexander H.; Krug, David; Schmitt, Daniela; Maeurer, Matthias; Panje, Cedric M.; Suess, Christoph; Ziegler, Sonia; Ebert, Nadja; Medenwald, Daniel; Ostheimer, Christian

    2018-01-01

    Lung cancer remains the leading cause of cancer-related mortality worldwide. Stage III non-small cell lung cancer (NSCLC) includes heterogeneous presentations of the disease, including lymph node involvement and large tumour volumes with infiltration of the mediastinum, heart or spine. In the treatment of stage III NSCLC, an interdisciplinary approach including radiotherapy is considered standard of care, with acceptable toxicity and improved clinical outcome concerning local control. Furthermore, gross tumour volume (GTV) changes during definitive radiotherapy would allow for adaptive replanning, which offers normal tissue sparing and dose escalation. A literature review was conducted to describe the predictive value of GTV changes during definitive radiotherapy, focussing especially on overall survival. The literature search was conducted in a two-step review process using PubMed/Medline with the key words "stage III non-small cell lung cancer", "radiotherapy", "tumour volume" and "prognostic factors". After final consideration, 17, 14 and 9 studies with a total of 2516, 784 and 639 patients on the predictive impact of GTV, GTV changes and their impact on overall survival, respectively, for definitive radiotherapy for stage III NSCLC were included in this review. Initial GTV is an important prognostic factor for overall survival in several studies, but the time of evaluation and the value of histology need to be further investigated. GTV changes during radiotherapy differ widely; the optimal timing for re-evaluation of GTV and the predictive value of such changes for prognosis need to be clarified. The prognostic value of GTV changes remains unclear owing to varying study quality, differing re-evaluation times and conflicting results. The main finding was that the clinical impact of GTV changes during definitive radiotherapy is still unclear due to heterogeneous study designs of varying quality

  20. The Uses of Mass Communications: Current Perspectives on Gratifications Research. Sage Annual Reviews of Communication Research Volume III.

    Science.gov (United States)

    Blumler, Jay G., Ed.; Katz, Elihu, Ed.

    The essays in this volume examine the use of the mass media and explore the findings of the gratifications approach to mass communication research. Part one summarizes the achievements in this area of mass media research and proposes an agenda for discussion of the future direction of this research in terms of a set of theoretical, methodological,…

  1. International conference on high-energy physics. Volume 1. Sessions I to III. [Geneva, June 27-July 4, 1979

    Energy Technology Data Exchange (ETDEWEB)

    1980-02-01

    Volume 1 of the conference proceedings contains sessions on neutrino physics and weak interactions, e⁺e⁻ physics, and theory. Five of the papers have already been cited in ERA, and can be found by reference to the entry CONF-790642-- in the Report Number Index. The remaining 30 will be processed as they are received on the Atomindex tape. (RWR)

  2. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
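The idea can be sketched numerically with a hypothetical density (f(x) = 3x² on [0, 1], chosen for illustration, not taken from the article): place probability mass f(x)·Δx at interval midpoints and compute the mean and variance as weighted sums, with no integration required:

```python
def discrete_moments(pdf, a, b, n=10000):
    # Midpoint discretization: a point mass of pdf(x)*dx at each midpoint
    dx = (b - a) / n
    xs = [a + (i + 0.5) * dx for i in range(n)]
    ps = [pdf(x) * dx for x in xs]
    total = sum(ps)  # close to 1; renormalize to guard against truncation
    mean = sum(p * x for p, x in zip(ps, xs)) / total
    var = sum(p * (x - mean) ** 2 for p, x in zip(ps, xs)) / total
    return mean, var

# Hypothetical density f(x) = 3x^2 on [0, 1]; exact mean 3/4, variance 3/80
mean, var = discrete_moments(lambda x: 3 * x**2, 0.0, 1.0)
```

With 10,000 midpoints the discrete mean and variance agree with the exact values 0.75 and 0.0375 to several decimal places, which is the article's point: the calculus answer emerges from plain weighted averages.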

  3. West Hackberry Strategic Petroleum Reserve site brine-disposal monitoring, Year I report. Volume III. Biological oceanography. Final report

    Energy Technology Data Exchange (ETDEWEB)

    DeRouen, L.R.; Hann, R.W.; Casserly, D.M.; Giammona, C.; Lascara, V.J. (eds.)

    1983-02-01

    The Department of Energy's Strategic Petroleum Reserve Program began discharging brine into the Gulf of Mexico from its West Hackberry site near Cameron, Louisiana in May 1981. The brine originates from underground salt domes being leached with water from the Intracoastal Waterway, making available vast underground storage caverns for crude oil. The effects of brine discharge on aquatic organisms are presented in this volume. The topics covered are: benthos; nekton; phytoplankton; zooplankton; and data management.

  4. Florence Richardson Wyckoff (1905-1997), Fifty Years of Grassroots Social ActivismVolume III: Watsonville Years 1960-1985

    OpenAIRE

    Wyckoff, Florence Richardson; Jarrell, Randall

    1990-01-01

    Florence Wyckoff's three-volume oral history documents her remarkable, lifelong work as a social activist, during which she has become nationally recognized as an advocate of migrant families and children. From the depression years through the 1970s, she pursued grassroots, democratic, community-building efforts in the service of improving public health standards and providing health care, education, and housing for migrant families. Major legislative milestones in her career of advocacy were...

  5. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Zhou Li'na; Qi Zhongzhi; Zeng Yu; Ou Xiaohong; Li Lin

    2010-01-01

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variances (20%, 60%, and 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). The EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference between the three groups. Conclusion: For arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on the EDV, ESV and LVEF values. (authors)

  6. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reaccess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.

  7. Artificial heart development program. Volume II. System support. Phase III summary report, July 1, 1973--September 30, 1977

    International Nuclear Information System (INIS)

    1977-01-01

    Volume 2 covers major activities of the Artificial Heart Development program that supported the design, fabrication, and test of the system demonstration units. Section A.1.0 provides a listing beyond that of the body of the report on the components needed for an implantation. It also presents glove box sterilization calibration results and results of an extensive mock circulation calibration. Section A.2.0 provides detailed procedures for assembly, preparing for use, and the use of the system and major components. Section A.3.0 covers the component research and development activities undertaken to improve components of the existing system units and to prepare for a future prototype system. Section A.4.0 provides a listing of the top assembly drawings of the major systems variations fabricated and tested

  8. Artificial heart development program. Volume II. System support. Phase III summary report, July 1, 1973--September 30, 1977

    Energy Technology Data Exchange (ETDEWEB)

    1977-01-01

    Volume 2 covers major activities of the Artificial Heart Development program that supported the design, fabrication, and test of the system demonstration units. Section A.1.0 provides a listing beyond that of the body of the report on the components needed for an implantation. It also presents glove box sterilization calibration results and results of an extensive mock circulation calibration. Section A.2.0 provides detailed procedures for assembly, preparing for use, and the use of the system and major components. Section A.3.0 covers the component research and development activities undertaken to improve components of the existing system units and to prepare for a future prototype system. Section A.4.0 provides a listing of the top assembly drawings of the major systems variations fabricated and tested.

  9. Inventory of Federal Energy-Related Environment and Safety Research for FY 1978. Volume III, interactive terminal users guide

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C. E.; Barker, Janice F.

    1979-12-01

    This users' guide was prepared to provide interested persons access to, via computer terminals, federally funded energy-related environmental and safety research projects for FY 1978. Although this information is also available in hardbound volumes, this on-line searching capability is expected to reduce the time required to answer ad hoc questions and, at the same time, produce meaningful reports. The data contained in this data base are not exhaustive and represent research reported by the following agencies: Department of Agriculture, Department of Commerce, Department of Defense, Department of Energy, Department of Health, Education, and Welfare, Department of the Interior, Department of Transportation, Federal Energy Administration, National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, Tennessee Valley Authority, U.S. Coast Guard, and the U.S. Environmental Protection Agency.

  10. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    We study equity (EVRP) and Treasury variance risk premia (TVRP) jointly and document a number of findings: First, relative to their volatility, TVRP are comparable in magnitude to EVRP. Second, while there is mild positive co-movement between EVRP and TVRP unconditionally, time series estimates of correlation display distinct spikes in both directions and have been notably volatile since the financial crisis. Third, (i) short maturity TVRP predict excess returns on short maturity bonds; (ii) long maturity TVRP and EVRP predict excess returns on long maturity bonds; and (iii) while EVRP predict equity returns for horizons up to 6 months, long maturity TVRP contain robust information for long run equity returns. Finally, exploiting the dynamics of real and nominal Treasuries we document that short maturity break-even rates are a powerful determinant of the joint dynamics of EVRP, TVRP and their co-movement.

  11. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the condit...

  12. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    textabstractWe study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
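The setting above can be sketched with a plain sample Gini on Pareto draws with tail index α = 1.5 (finite mean, infinite variance); for this distribution the population Gini is 1/(2α − 1). This is only an illustration of the regime the paper studies, not its proposed estimator:

```python
import random

def sample_gini(xs):
    # Standard sample Gini via the sorted-values formula
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * cum / (n * sum(xs)) - (n + 1.0) / n

def pareto_draw(alpha, u):
    # Inverse-CDF sampling for a Pareto with minimum 1: x = (1-u)^(-1/alpha)
    return (1.0 - u) ** (-1.0 / alpha)

random.seed(42)
alpha = 1.5  # tail index in (1, 2): finite mean, infinite variance
data = [pareto_draw(alpha, random.random()) for _ in range(20000)]

g = sample_gini(data)
g_true = 1.0 / (2.0 * alpha - 1.0)  # population Gini = 0.5 for alpha = 1.5
```

Repeating this experiment over many seeds exhibits the slow, erratic convergence of the naive estimator under fat tails that motivates the paper's analysis.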

  13. Modal Profiles for the WISC-III.

    Science.gov (United States)

    Pritchard, David A.; Livingston, Ronald B.; Reynolds, Cecil R.; Moses, James A., Jr.

    2000-01-01

    Presents a normative typology for classifying the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) factor index profiles according to profile shape. Current analyses indicate that overall profile level accounted for a majority of the variance in WISC-III index scores, but a considerable proportion of the variance was because of…

  14. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on the computing of the detection window variance. The variance is computed in two delayed times, so a modified Early-Late loop is used for the frame position detection. The proposed algorithm deals with different variants of OFDM parameters including guard interval, cyclic prefix, and has good properties regarding the choice of the algorithm's parameters since the parameters may be chosen within a wide range without having a high influence on system performance. The verification of the proposed algorithm functionality has been performed on a development environment using universal software radio peripheral (USRP) hardware.
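A toy illustration of the core idea: the variance of a detection window jumps when the OFDM frame begins, so thresholding a sliding-window variance locates the frame. The signal model, window length, and threshold below are made-up assumptions; the paper's modified Early-Late loop compares two delayed windows rather than a single threshold:

```python
import random

def sliding_variance(x, w):
    # Sample variance over every length-w window (naive O(n*w) version)
    out = []
    for i in range(w, len(x) + 1):
        seg = x[i - w:i]
        m = sum(seg) / w
        out.append(sum((s - m) ** 2 for s in seg) / w)
    return out

random.seed(1)
# Hypothetical received stream: 200 low-power idle samples, then the frame
idle = [random.gauss(0.0, 0.05) for _ in range(200)]
frame = [random.gauss(0.0, 1.0) for _ in range(200)]
signal = idle + frame

v = sliding_variance(signal, 32)
thresh = 0.1
# First window whose variance exceeds the threshold marks the frame start
start = next(i for i, val in enumerate(v) if val > thresh)
```

The detected index lands near sample 200, where the frame energy enters the window; the loop structure in the paper refines exactly this coarse estimate.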

  15. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
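The Sobol-Hoeffding machinery referenced above can be sketched on a toy deterministic model (a simple additive stand-in, not the authors' Poisson-process reformulation of a reaction network), using the classical pick-freeze Monte Carlo estimator of the first-order variance-based sensitivity index:

```python
import random

def model(x1, x2):
    # Toy additive model standing in for a simulator output.
    # Analytically: Var = 1 + 4 = 5, first-order index of x1 is 1/5 = 0.2.
    return x1 + 2.0 * x2

def sobol_first_order(n=50000, seed=0):
    rng = random.Random(seed)
    a = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
    b = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
    ya = [model(x1, x2) for x1, x2 in a]
    f0 = sum(ya) / n
    var = sum((y - f0) ** 2 for y in ya) / n
    # Pick-freeze: keep x1 from sample A, redraw x2 from sample B
    yab = [model(a[i][0], b[i][1]) for i in range(n)]
    s1 = (sum(ya[i] * yab[i] for i in range(n)) / n - f0 * f0) / var
    return s1, var

s1, var = sobol_first_order()
```

The estimate converges to the analytic index 0.2; in the paper the same orthogonal decomposition is applied to the variance contributed by each standardized Poisson reaction channel and their interactions.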

  16. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  17. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  18. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  19. On Mean-Variance Analysis

    OpenAIRE

    Li, Yang; Pirvu, Traian A

    2011-01-01

    This paper considers the mean variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to the portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well-posed quadratic program. The methodology developed in this paper can also be applied to pricing and hedging in incomplete markets.

  20. SU-E-J-88: Margin Reduction of Level II/III Planning Target Volume for Image-Guided Simultaneous Integrated Boost Head-And-Neck Treatment

    International Nuclear Information System (INIS)

    Can, S; Neylon, J; Qi, S; Santhanam, A; Low, D

    2014-01-01

    Purpose: To investigate the feasibility of improved normal tissue sparing for head-and-neck (H&N) image-guided radiotherapy (IGRT) by employing tighter CTV-to-PTV margins for target levels II/III through a GPU-based deformable image registration and dose accumulation framework. Methods: Ten H&N simultaneous integrated boost cases treated on TomoTherapy were retrospectively analyzed. Weekly kVCT scans in addition to daily MVCT scans were acquired for each patient. Reduced-margin plans were generated with a 0-mm margin for the level II and III PTVs (while retaining a 3-5 mm margin for PTV1) and compared with the standard-margin plan using a 3-5 mm margin for all of CTV1-3 (reference plan). An in-house developed GPU-based 3D image deformation tool was used to register and deform the weekly kVCTs with the planning CT and determine the delivered mean/minimum/maximum dose, dose volume histograms (DVHs), etc. Results: Compared with the reference plans, the averaged cord maximum and the right and left parotid doses were reduced by 22.7%, 16.5%, and 9%, respectively, in the reduced-margin plans. The V95 for PTV2 and PTV3 were within 2% and 5%, respectively, between the reference and tighter-margin plans. For the reduced-margin plans, the averaged cumulative mean doses were consistent with the planned doses for PTV1, PTV2 and PTV3 within 1.5%, 1.7% and 1.4%. Similar dose variations of the delivered dose were seen for the reference and tighter-margin plans. The delivered maximum and mean doses for the cord were 3.55% and 2.37% higher than the planned doses; a 5% higher cumulative mean dose for the parotids was also observed for the delivered dose relative to the planned dose in both plans. Conclusion: By imposing tighter CTV-to-PTV margins for level II and III targets in H&N irradiation, acceptable cumulative doses were achievable when coupled with weekly kVCT guidance while improving normal structure sparing

  1. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    Os, J.J.A.M. van; Uittenbogaard, R.E.

    2004-01-01

In the past, various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable, i.e. they prohibit a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. The above-mentioned references state that nonlinear instability is due to spatial rather than temporal truncation, and they are therefore mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy-conserving solutions. The conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as to the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287
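The abstract's central distinction, a spatially variance-conserving scheme whose variance conservation is nevertheless broken by the time discretization, fits in a few lines. Below is a toy 1D periodic sketch of a Piacsek-Williams-type skew-symmetric convection operator; the grid, velocity, and time step are my own illustrative choices, not the paper's setup:

```python
import numpy as np

# Skew-symmetric (variance-conserving) spatial discretization of u*dc/dx on a
# periodic 1D grid. Grid size, velocity, and time step are illustrative.
n = 64
dx = 1.0 / n
u = np.full(n, 0.7)                          # constant advecting velocity
c = np.sin(2 * np.pi * np.arange(n) * dx)    # scalar field, one full wave

def tendency(c, u, dx):
    """Skew-symmetric form: -(1/2) d(uc)/dx - (u/2) dc/dx, central differences."""
    cp, cm = np.roll(c, -1), np.roll(c, 1)
    up, um = np.roll(u, -1), np.roll(u, 1)
    return -(up * cp - um * cm) / (4 * dx) - u * (cp - cm) / (4 * dx)

t = tendency(c, u, dx)
# Semi-discrete variance conservation: sum(c * dc/dt) telescopes to zero
# exactly on the periodic grid (machine precision).
semi_discrete_drift = np.sum(c * t)
# But a forward-Euler step still changes the variance, by exactly
# dt^2 * sum(t^2) > 0 -- temporal truncation breaks the conservation.
dt = 0.01
c1 = c + dt * t
variance_change = np.sum(c1**2) - np.sum(c**2)
```

The identity `variance_change == dt**2 * sum(t**2)` follows from expanding `(c + dt*t)**2` and using `sum(c*t) = 0`, which is the abstract's point that the instability originates in the time integration even when the spatial operator conserves variance.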

  2. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...
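The partition the abstract describes can be made concrete with a balanced one-way random-effects layout. The sketch below uses method-of-moments estimates from the ANOVA mean squares and a Satterthwaite-type approximate interval for the treatment variance (one common approximation; the data and all settings here are invented):

```python
import numpy as np
from scipy import stats

# Hypothetical balanced one-way random-effects data: k groups, n obs each,
# y_ij = mu + a_i + e_ij with Var(a) = 4.0 (treatment), Var(e) = 1.0 (residual).
rng = np.random.default_rng(0)
k, n = 8, 6
groups = rng.normal(0.0, 2.0, k)[:, None] + rng.normal(0.0, 1.0, (k, n))

grand = groups.mean()
msb = n * ((groups.mean(axis=1) - grand) ** 2).sum() / (k - 1)          # between
msw = ((groups - groups.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))

sigma2_a = (msb - msw) / n          # method-of-moments treatment variance
# Satterthwaite approximation: effective df for the linear combination of
# mean squares, then a chi-square-based approximate 95% interval.
df = (msb - msw) ** 2 / (msb**2 / (k - 1) + msw**2 / (k * (n - 1)))
lo = df * sigma2_a / stats.chi2.ppf(0.975, df)
hi = df * sigma2_a / stats.chi2.ppf(0.025, df)
```

As the abstract notes, no exact interval exists for the treatment variance; the Satterthwaite interval above is only one of several approximations.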

  3. No-migration variance petition

    International Nuclear Information System (INIS)

    1990-03-01

Volume IV contains the following attachments: TRU mixed waste characterization database; hazardous constituents of Rocky Flats transuranic waste; summary of waste components in the TRU waste sampling program at INEL; total volatile organic compound (VOC) analyses at the Rocky Flats Plant; total metals analyses from the Rocky Flats Plant; results of toxicity characteristic leaching procedure (TCLP) analyses; results of extraction procedure (EP) toxicity data analyses; summary of headspace gas analysis in the Rocky Flats Plant (RFP) sampling program, FY 1988; waste drum gas generation sampling program at the Rocky Flats Plant during FY 1988; TRU waste sampling program -- volume one; TRU waste sampling program -- volume two; summary of headspace gas analyses in the TRU waste sampling program; and summary of volatile organic compound (VOC) analyses in the TRU waste sampling program

  4. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined
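The propagation step the spreadsheet automates can be sketched directly. Below is a minimal Python version of the first-order variance propagation for a simple balance; all term names, concentrations, masses, and error figures are invented, and random and systematic errors are lumped into one relative standard deviation per measurement for brevity:

```python
import math

# Simple materials balance: MB = receipts - shipments + begin - end inventory,
# each term = concentration * bulk mass. All values are hypothetical.
terms = {  # (conc g/g, mass kg, rel. sd of conc, rel. sd of mass, sign)
    "receipts":        (0.0452, 1200.0, 0.010, 0.002, +1),
    "shipments":       (0.0449,  900.0, 0.010, 0.002, -1),
    "begin_inventory": (0.0450,  400.0, 0.008, 0.001, +1),
    "end_inventory":   (0.0451,  690.0, 0.008, 0.001, -1),
}

mb = sum(sign * c * m for c, m, _, _, sign in terms.values())
# First-order error propagation for T = c*m with uncorrelated measurements:
#   var(T) = (c*m)^2 * (rel_var_c + rel_var_m); term variances then add.
var_mb = sum((c * m) ** 2 * (rc**2 + rm**2) for c, m, rc, rm, _ in terms.values())
sigma = math.sqrt(var_mb)   # sets the scale of the detection sensitivity
```

A loss is detectable roughly when it exceeds a multiple of `sigma` (e.g. 3.3 sigma for common false-alarm/detection probabilities); correlated transfer terms, which MAVARIC supports, would add covariance terms to `var_mb`.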

  6. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  7. No-migration variance petition

    International Nuclear Information System (INIS)

    1990-03-01

    Volume V contains the appendices for: closure and post-closure plans; RCRA ground water monitoring waver; Waste Isolation Division Quality Program Manual; water quality sampling plan; WIPP Environmental Procedures Manual; sample handling and laboratory procedures; data analysis; and Annual Site Environmental Monitoring Report for the Waste Isolation Pilot Plant

  8. Speed Variance and Its Influence on Accidents.

    Science.gov (United States)

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  9. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

    A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
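The core of the program's workflow, estimating a variance-mean relationship from the within-set variability of many small replicate sets and flagging sets with large actual-to-fitted standard deviation ratios, can be sketched as follows. The paper fits its model by a modified likelihood method; this sketch substitutes a simple log-log least-squares fit of a power-of-the-mean form, and all data are simulated:

```python
import numpy as np

# Simulated replicate sets: 40 dose levels, 3 replicates each, with true
# sd proportional to the mean (so the true log-log slope is 1).
rng = np.random.default_rng(1)
true_means = np.linspace(50, 2000, 40)
sets = [rng.normal(m, 0.05 * m, size=3) for m in true_means]

m = np.array([s.mean() for s in sets])          # per-set mean response
sd = np.array([s.std(ddof=1) for s in sets])    # per-set standard deviation

# Fit sd = a * mean^b via least squares on log(sd) = log(a) + b*log(mean).
b, log_a = np.polyfit(np.log(m), np.log(sd), 1)
fitted_sd = np.exp(log_a) * m**b

# Mirror the program's outlier report: the 10 sets with the largest ratio
# of actual to fitted standard deviation.
ratio = sd / fitted_sd
worst = np.argsort(ratio)[-10:]
```

The fitted function `1 / fitted_sd**2` would then serve as the weighting function when fitting the dose-response curve, and `worst` plays the role of the program's ordered outlier list.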

  10. Descriptive Summaries of the Research, Development, Test and Evaluation, Army Appropriation FY 1982. Supporting Data FY 1982. Supporting Data FY 1982, Budget Estimate Submitted to Congress January 1981, Amended 30 March 1981. Volume III.

    Science.gov (United States)

    1981-01-01

Volume III: Intelligence and Communications (budget activity; program element 6.31.12.A, Mapping and ...). Continue depot prototype of Tracked CP Assemblage. Initiate depot prototype of new Shelterized Assemblage. Execute second increment of Long-Haul Fiber Optics Transmission System prototype.

  11. No-migration variance petition

    International Nuclear Information System (INIS)

    1990-03-01

Volume II contains Appendix A, the emergency plan, and Appendix B, the waste analysis plan. The Waste Isolation Pilot Plant (WIPP) Emergency Plan and Procedures (WP 12-9, Rev. 5, 1989) provides an organized plan of action for dealing with emergencies at the WIPP. A contingency plan is included which is in compliance with 40 CFR Part 265, Subpart D. The waste analysis plan provides a description of the chemical and physical characteristics of the wastes to be emplaced in the WIPP underground facility. A detailed discussion of the WIPP Waste Acceptance Criteria and the rationale for its established units are also included

  12. Evolution of Genetic Variance during Adaptive Radiation.

    Science.gov (United States)

    Walter, Greg M; Aguirre, J David; Blows, Mark W; Ortiz-Barrientos, Daniel

    2018-04-01

    Genetic correlations between traits can concentrate genetic variance into fewer phenotypic dimensions that can bias evolutionary trajectories along the axis of greatest genetic variance and away from optimal phenotypes, constraining the rate of evolution. If genetic correlations limit adaptation, rapid adaptive divergence between multiple contrasting environments may be difficult. However, if natural selection increases the frequency of rare alleles after colonization of new environments, an increase in genetic variance in the direction of selection can accelerate adaptive divergence. Here, we explored adaptive divergence of an Australian native wildflower by examining the alignment between divergence in phenotype mean and divergence in genetic variance among four contrasting ecotypes. We found divergence in mean multivariate phenotype along two major axes represented by different combinations of plant architecture and leaf traits. Ecotypes also showed divergence in the level of genetic variance in individual traits and the multivariate distribution of genetic variance among traits. Divergence in multivariate phenotypic mean aligned with divergence in genetic variance, with much of the divergence in phenotype among ecotypes associated with changes in trait combinations containing substantial levels of genetic variance. Overall, our results suggest that natural selection can alter the distribution of genetic variance underlying phenotypic traits, increasing the amount of genetic variance in the direction of natural selection and potentially facilitating rapid adaptive divergence during an adaptive radiation.

  13. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained genetic variance ... capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups

  14. Efficient Cardinality/Mean-Variance Portfolios

    OpenAIRE

    Brito, R. Pedro; Vicente, Luís Nunes

    2014-01-01

We propose a novel approach to handle cardinality in portfolio selection, by means of a biobjective cardinality/mean-variance problem, allowing the investor to analyze the efficient tradeoff between return-risk and the number of active positions. Recent progress in multiobjective optimization without derivatives allows us to robustly compute (in-sample) the whole cardinality/mean-variance efficient frontier, for a variety of data sets and mean-variance models. Our results s...

  15. The phenotypic variance gradient - a novel concept.

    Science.gov (United States)

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-11-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely "a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added". This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a "phenotypic variance gradient", are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization.
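The plot the authors propose, log(variance) against log(mean) with a reference line, can be built in a few lines. A useful reference slope is 2, because a constant coefficient of variation implies var = cv² · mean², i.e. log(var) = 2·log(mean) + const; deviations of the empirical slope from 2 are then the "phenotypic variance gradient". The environments and trait values below are simulated stand-ins:

```python
import numpy as np

# Trait measured in five environments along a hypothetical gradient,
# simulated with a fixed coefficient of variation (cv = 0.1), so the
# variance increase is pure mean scaling.
rng = np.random.default_rng(2)
env_means = np.array([10.0, 15.0, 22.0, 33.0, 50.0])
samples = [rng.normal(mu, 0.1 * mu, 200) for mu in env_means]

log_mean = np.log([s.mean() for s in samples])
log_var = np.log([s.var(ddof=1) for s in samples])
slope, intercept = np.polyfit(log_mean, log_var, 1)
# slope ~ 2 here (constant-cv reference); a slope above or below 2 would
# indicate a genuine change in developmental instability or canalization
# along the gradient, beyond the scaling of variance with the mean.
```

Plotting `log_var` versus `log_mean` with a slope-2 reference line through the first point reproduces the figure the abstract describes.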

  16. Variance risk premia in CO_2 markets: A political perspective

    International Nuclear Information System (INIS)

    Reckling, Dennis

    2016-01-01

The European Commission discusses the change of free allocation plans to guarantee a stable market equilibrium. Selling over-allocated contracts effectively depreciates prices and negates the effect intended by the regulator to establish a stable price mechanism for CO_2 assets. Our paper investigates mispricing and allocation issues by quantitatively analyzing variance risk premia of CO_2 markets over the course of changing regimes (Phase I-III) for three different assets (European Union Allowances, Certified Emissions Reductions and European Reduction Units). The research paper gives recommendations to regulatory bodies in order to most effectively cap the overall carbon dioxide emissions. The analysis of an enriched dataset, comprising not only additional CO_2 assets but also data from the European Energy Exchange, shows that variance risk premia are equal to a sample average of 0.69 for European Union Allowances (EUA), 0.17 for Certified Emissions Reductions (CER) and 0.81 for European Reduction Units (ERU). We identify the existence of a common risk factor across different assets that justifies the presence of risk premia. Various policy implications with regard to gaining investors’ confidence in the market are reviewed. Consequently, we recommend the implementation of a price collar approach to support stable prices for emission allowances. - Highlights: •Enriched dataset covering all three political phases of the CO_2 markets. •Clear policy implications for regulators to most effectively cap the overall CO_2 emissions pool. •Applying a cross-asset benchmark index for variance beta estimation. •CER contracts have been analyzed with respect to variance risk premia for the first time. •Increased forecasting accuracy for CO_2 asset returns by using variance risk premia.

  17. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight
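The LS principle behind LS-VCE can be illustrated with its simplest (unweighted, zero-mean) special case: for observations with covariance Q(y) = s1·Q1 + s2·Q2, the LS normal equations use N_kl = tr(Qk·Ql) and r_k = yᵀQk·y, which gives unbiased component estimates since E[yᵀQk·y] = tr(Qk·Q(y)). The cofactor matrices and component values below are toy choices of mine, not from the paper:

```python
import numpy as np

# Two-component model: white noise everywhere (Q1) plus extra noise on the
# first half of the observations (Q2). True components s_true are recovered
# from the unweighted LS-VCE normal equations N s = r.
rng = np.random.default_rng(7)
m = 400
Q1 = np.eye(m)
Q2 = np.diag(np.r_[np.ones(m // 2), np.zeros(m // 2)])
s_true = np.array([1.5, 2.0])

# Draw y ~ N(0, s1*Q1 + s2*Q2) (diagonal here, so elementwise scaling works).
std = np.sqrt(s_true[0] + s_true[1] * np.diag(Q2))
y = std * rng.normal(size=m)

Qs = [Q1, Q2]
N = np.array([[np.trace(a @ b) for b in Qs] for a in Qs])  # N_kl = tr(Qk Ql)
r = np.array([y @ q @ y for q in Qs])                      # r_k = y' Qk y
s_hat = np.linalg.solve(N, r)                              # component estimates
```

The full LS-VCE framework additionally introduces a user-defined weight matrix and handles nonzero means by projecting out the functional model; this sketch keeps only the least-squares core.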

  18. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    risk premium with the P/E ratio results in an R2 for the quarterly returns of more than twenty-five percent. The results depend crucially on the use of "model-free", as opposed to standard Black-Scholes, implied variances, and realized variances constructed from high-frequency intraday, as opposed...

  19. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  20. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  1. Efficacy and safety of tolvaptan in heart failure patients with volume overload despite the standard treatment with conventional diuretics: a phase III, randomized, double-blind, placebo-controlled study (QUEST study).

    Science.gov (United States)

    Matsuzaki, Masunori; Hori, Masatsugu; Izumi, Tohru; Fukunami, Masatake

    2011-12-01

Diuretics are recommended to treat volume overload in heart failure (HF); however, they may cause serum electrolyte imbalances, limiting their use. Moreover, patients with advanced HF may respond poorly to these diuretics. In this study, we evaluated the efficacy and safety of tolvaptan, a competitive vasopressin V2-receptor antagonist developed as a new drug to treat volume overload in HF patients. A phase III, multicenter, randomized, double-blind, placebo-controlled parallel study was performed to assess the efficacy and safety of tolvaptan in treating HF patients with volume overload despite the use of conventional diuretics. One hundred and ten patients were randomly assigned to receive either placebo or 15 mg/day tolvaptan for 7 consecutive days. Compared with placebo, tolvaptan administered for 7 days significantly reduced body weight and improved symptoms associated with volume overload. The safety profile of tolvaptan was considered acceptable for clinical use, with minimal adverse effects. Tolvaptan reduced volume overload and improved congestive symptoms associated with HF through a potent water diuresis (aquaresis).

  2. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
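The validation recipe in the abstract, within-year variance estimates, a weighted linear trend over birth years, and an empirical 95% interval for that trend, can be sketched numerically. The data below simulate the homogeneous-variance (control) scenario; the weighting by inverse prediction-error variance and the bootstrap interval are my own simplified stand-ins for the paper's procedure:

```python
import numpy as np

# Within-year genetic-variance estimates (made up), one per birth year, each
# with a prediction-error variance (PEV). Homogeneous-variance scenario:
# the true trend over years is zero.
rng = np.random.default_rng(3)
years = np.arange(2000, 2015)
est = 1.0 + rng.normal(0.0, 0.03, years.size)
pev = np.full(years.size, 0.03**2)

# Weighted linear regression of estimates on (centered) year, weights = 1/PEV.
w = 1.0 / pev
X = np.column_stack([np.ones_like(years, dtype=float), years - years.mean()])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * est))
slope = beta[1]

# Empirical 95% interval for the trend via a simple case bootstrap.
boots = []
for _ in range(2000):
    i = rng.integers(0, years.size, years.size)
    Xi, wi, yi = X[i], w[i], est[i]
    boots.append(np.linalg.solve(Xi.T @ (wi[:, None] * Xi), Xi.T @ (wi * yi))[1])
lo, hi = np.quantile(boots, [0.025, 0.975])
# Under this no-trend scenario the interval will typically cover zero;
# a generated 2%/year increase would push it clearly above zero.
```

Outlier years could be flagged before the trend test, as the paper does, e.g. by standardized residuals from this weighted fit.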

  3. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition differs across the stocks. Moreover, investors can obtain the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
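The mean-variance construction the abstract describes has a compact closed form when short selling is allowed: minimize wᵀΣw subject to wᵀμ = r_target and wᵀ1 = 1, solved via the Lagrangian linear system. The sketch below uses synthetic weekly returns, not the FBMKLCI data:

```python
import numpy as np

# Synthetic weekly returns: 150 weeks, 5 assets, with a common factor to
# induce realistic cross-correlations. (Illustrative, not FBMKLCI data.)
rng = np.random.default_rng(4)
R = rng.normal(0.002, 0.02, size=(150, 5)) + rng.normal(0, 0.01, size=(150, 1))
mu, Sigma = R.mean(axis=0), np.cov(R, rowvar=False)

# min w' Sigma w  s.t.  w'mu = r_target, w'1 = 1  (short selling allowed):
# stationarity of the Lagrangian gives one symmetric linear (KKT) system.
n = mu.size
r_target = float(mu.mean())
KKT = np.block([[2 * Sigma, mu[:, None], np.ones((n, 1))],
                [mu[None, :], np.zeros((1, 2))],
                [np.ones((1, n)), np.zeros((1, 2))]])
rhs = np.concatenate([np.zeros(n), [r_target, 1.0]])
w = np.linalg.solve(KKT, rhs)[:n]      # optimal weights
risk = float(w @ Sigma @ w)            # minimized portfolio variance
```

Sweeping `r_target` over a grid of attainable returns traces out the efficient frontier; adding no-short-sale constraints would require a quadratic-programming solver instead of this closed form.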

  4. Aerial radiometric and magnetic survey; Brushy Basin detail survey: Price/Salina national topographic map sheets, Utah. Volume III. Area II: graphic data, Section I-II. Final report

    International Nuclear Information System (INIS)

    1981-01-01

    This volume contains all of the graphic data for Area II which consists of map lines 1660 to 3400 and 5360 to 5780, and tie lines 6100, 6120, and 6160. Due to the large map scale of the presented data (1:62,500), this sub-section was divided into eleven 7-1/2 min quadrant sheets

  5. Thermodynamic study of (heptane + amine) mixtures. III: Excess and partial molar volumes in mixtures with secondary, tertiary, and cyclic amines at 298.15 K

    International Nuclear Information System (INIS)

    Lepori, Luciano; Gianni, Paolo; Spanedda, Andrea; Matteoli, Enrico

    2011-01-01

Highlights: Excess volumes of (secondary, tertiary, or cyclic amine + heptane) mixtures. Excess volumes are positive for small-size amines and decrease as the size increases. Group contributions predict the partial molar volumes of amines in heptane. The void volume is larger for secondary and tertiary than for linear amines in heptane, and much smaller for cyclic than for linear amines in heptane. - Abstract: Excess molar volumes V^E at 298.15 K were determined by means of a vibrating-tube densimeter for binary mixtures of {heptane + open-chain secondary (diethyl to dibutyl) and tertiary (triethyl to tripentyl) amines} as well as for cyclic imines (C2, C3, C4, C6, and C7) and primary cycloalkylamines (C5, C6, C7, and C12). The V^E values were found positive for mixtures involving small-size amines, with V^E decreasing as the size increases. Negative V^E values were found for tributyl- and tripentylamine, heptamethylenimine, and cyclododecylamine. Mixtures of heptane with cycloheptylamine showed an s-shaped curve. Partial molar volumes V^0 of amines at infinite dilution in heptane were obtained from V^E and compared with V^0 of hydrocarbons and other classes of organic compounds taken from the literature. An additivity scheme, based on the intrinsic-volume approach, was applied to estimate group (CH3, CH2, CH, C, NH2, NH, N, OH, O, CO, and COO) contributions to V^0. These contributions, the effect of cyclization on V^0, and the limiting slope of the apparent excess molar volumes were discussed in terms of solute-solvent and solute-solute interactions.
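The working equation behind vibrating-tube V^E determinations is compact: from the mixture density and the pure-component densities, V^E = (x1·M1 + x2·M2)/ρ_mix - x1·M1/ρ1 - x2·M2/ρ2. The sketch below uses this standard route; the mixture density is a hypothetical value for illustration, not a datum from the paper:

```python
# Excess molar volume of an equimolar (heptane + triethylamine) mixture at
# 298.15 K from densities, in cm^3/mol. The mixture density rho_mix is a
# hypothetical illustrative value; molar masses are standard.
M1, M2 = 100.21, 101.19        # heptane, triethylamine molar mass (g/mol)
rho1, rho2 = 0.6795, 0.7235    # pure-component densities (g/cm^3), indicative
rho_mix = 0.7000               # measured mixture density (hypothetical)
x1 = 0.5
x2 = 1.0 - x1

# V_E = ideal molar volume of the real mixture minus the mole-fraction
# average of the pure molar volumes:
V_E = (x1 * M1 + x2 * M2) / rho_mix - x1 * M1 / rho1 - x2 * M2 / rho2
```

With these numbers V_E comes out small and positive (a few tenths of cm^3/mol), the sign the abstract reports for mixtures with small amines; a mixture density above the mole-fraction-weighted expectation would flip the sign.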

  6. Design Guidelines and Criteria for User/Operator Transactions with Battlefield Automated Systems. Volume III-A. Human Factors Analyses of User/ Operator Transactions with TACFIRE - The Tactical Fire Direction System

    Science.gov (United States)

    1981-02-01

Research Product 81-26. Design Guidelines and Criteria for User/Operator Transactions with Battlefield Automated Systems. Volume III-A: Human Factors Analyses of User/Operator Transactions with TACFIRE - The Tactical Fire Direction System. Human Factors Technical Area; interim report, Oct 1979 - Feb 1981.

  7. Ferroelectric Thin Films III, Symposium Held in San Francisco, California on April 13 - 16, 1993. Materials Research Society Symposium Proceedings, Volume 310

    Science.gov (United States)

    1993-04-16

Publication data: Ferroelectric thin films III: symposium held April 13-16, 1993, San Francisco, California, U.S.A. / editors, Bruce A. Tuttle, Edward R... All solutions were placed in a modified Collison Nebulizer which generated the droplets in an oxygen carrier gas. The droplets were transported into

  8. Encyclopedia of Archaeology: The Great Archaeologists, Volumes I-II, edited by Tim Murray. ABC-CLIO Inc., Santa Barbara, 1999

    OpenAIRE

    Christenson, Andrew L.

    2001-01-01

There have been two previous volumes published on great archaeologists, one for young adults (Daugherty 1962) and one a collection of articles from the Illustrated London News (Bacon 1976). What really distinguishes this two-volume set from the earlier books is that who was included was decided by archaeologists, rather than by educators or journalists. Archaeologists whose lives are considered great for didactic or jo...

  9. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
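The substitution at the heart of the median-variance approach is small: replace the per-asset mean return with the median before optimizing, which matters precisely when returns are skewed. The sketch below shows the effect on the inputs using deliberately right-skewed synthetic returns (with exaggerated skew for clarity; not the Bursa Malaysia data):

```python
import numpy as np

# Synthetic, right-skewed returns for 4 assets over 5000 periods
# (lognormal shocks shifted to return space).
rng = np.random.default_rng(5)
R = rng.lognormal(mean=-0.001, sigma=0.3, size=(5000, 4)) - 1.0

mu = R.mean(axis=0)              # input to the classical mean-variance model
med = np.median(R, axis=0)       # input to the median-variance variant
Sigma = np.cov(R, rowvar=False)  # the risk (variance) term is shared by both

# Under right skew the mean sits above the median, so a target return set
# from `med` is more conservative than one set from `mu`:
gap = mu - med
```

Feeding `med` instead of `mu` into the same minimum-variance optimization then yields the median-variance portfolio; the lower-risk result the abstract reports stems from this more robust location estimate.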

  10. Wien Automatic System Planning (WASP) Package. A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 1: Chapters 1-11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-09-01

As a continuation of its effort to provide comprehensive and impartial guidance to Member States facing the need for introducing nuclear power, the IAEA has completed a new version of the Wien Automatic System Planning (WASP) Package for carrying out power generation expansion planning studies. WASP was originally developed in 1972 in the USA to meet the IAEA's needs to analyze the economic competitiveness of nuclear power in comparison to other generation expansion alternatives for supplying the future electricity requirements of a country or region. The model was first used by the IAEA to conduct global studies (Market Survey for Nuclear Power Plants in Developing Countries, 1972-1973) and to carry out Nuclear Power Planning Studies for several Member States. The WASP system developed into a very comprehensive planning tool for electric power system expansion analysis. Following these developments, the so-called WASP-III version was produced in 1979. This version introduced important improvements to the system, namely in the treatment of hydroelectric power plants. The WASP-III version has been continually updated and maintained in order to incorporate needed enhancements. In 1981, the Model for Analysis of Energy Demand (MAED) was developed in order to allow the determination of electricity demand, consistent with the overall requirements for final energy, and thus, to provide a more adequate forecast of electricity needs to be considered in the WASP study. MAED and WASP have been used by the Agency for the conduct of Energy and Nuclear Power Planning Studies for interested Member States. More recently, the VALORAGUA model was completed in 1992 as a means for helping in the preparation of the hydro plant characteristics to be input in the WASP study and to verify that the WASP overall optimized expansion plan takes also into account an optimization of the use of water for electricity generation. The combined application of VALORAGUA and WASP permits the

  11. Wien Automatic System Planning (WASP) Package. A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 1: Chapters 1-11

    International Nuclear Information System (INIS)

    1995-01-01

    As a continuation of its effort to provide comprehensive and impartial guidance to Member States facing the need for introducing nuclear power, the IAEA has completed a new version of the Wien Automatic System Planning (WASP) Package for carrying out power generation expansion planning studies. WASP was originally developed in 1972 in the USA to meet the IAEA's needs to analyze the economic competitiveness of nuclear power in comparison to other generation expansion alternatives for supplying the future electricity requirements of a country or region. The model was first used by the IAEA to conduct global studies (Market Survey for Nuclear Power Plants in Developing Countries, 1972-1973) and to carry out Nuclear Power Planning Studies for several Member States. The WASP system developed into a very comprehensive planning tool for electric power system expansion analysis. Following these developments, the so-called WASP-III version was produced in 1979. This version introduced important improvements to the system, namely in the treatment of hydroelectric power plants. The WASP-III version has been continually updated and maintained in order to incorporate needed enhancements. In 1981, the Model for Analysis of Energy Demand (MAED) was developed in order to allow the determination of electricity demand, consistent with the overall requirements for final energy, and thus, to provide a more adequate forecast of electricity needs to be considered in the WASP study. MAED and WASP have been used by the Agency for the conduct of Energy and Nuclear Power Planning Studies for interested Member States. More recently, the VALORAGUA model was completed in 1992 as a means for helping in the preparation of the hydro plant characteristics to be input in the WASP study and to verify that the WASP overall optimized expansion plan also takes into account an optimization of the use of water for electricity generation. The combined application of VALORAGUA and WASP permits the

  12. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  13. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.
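
The zero-net-investment constraint that distinguishes an arbitrage portfolio from an ordinary (fully invested) one can be made concrete with a small numerical sketch. This is an illustration of the general minimum-variance setup only, not the paper's derivation; the covariance matrix, expected returns, and target return below are invented example inputs.

```python
import numpy as np

# Minimum-variance arbitrage portfolio: minimize w' Sigma w subject to
# 1'w = 0 (zero net investment) and mu'w = target (required return).
# Sigma, mu and target are made-up illustrative values.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
mu = np.array([0.05, 0.08, 0.12])
target = 0.02

n = len(mu)
ones = np.ones(n)

# KKT system for the equality-constrained quadratic program
A = np.zeros((n + 2, n + 2))
A[:n, :n] = 2 * Sigma
A[:n, n] = ones
A[n, :n] = ones          # row enforcing 1'w = 0
A[:n, n + 1] = mu
A[n + 1, :n] = mu        # row enforcing mu'w = target
b = np.zeros(n + 2)
b[n + 1] = target

w = np.linalg.solve(A, b)[:n]
print(w.sum())           # ~0: the portfolio is self-financing
print(mu @ w)            # equals the target return
```

Sweeping `target` over a grid and recording the resulting variance w' Sigma w traces out the arbitrage-portfolio frontier that the abstract contrasts with the usual frontier.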

  14. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  15. Efficacy and safety of tolvaptan in heart failure patients with sustained volume overload despite the use of conventional diuretics: a phase III open-label study.

    Science.gov (United States)

    Fukunami, Masatake; Matsuzaki, Masunori; Hori, Masatsugu; Izumi, Tohru

    2011-12-01

    Volume overload is a common complication associated with heart failure (HF) and is recommended to be treated with loop or thiazide diuretics. However, use of diuretics can cause serum electrolyte imbalances and diuretic resistance. Tolvaptan, a selective, oral, non-peptide vasopressin V2-receptor antagonist, offers a new option for treating volume overload in HF patients. The aim of this study was to investigate the efficacy and safety of tolvaptan in Japanese HF patients with volume overload. Fifty-one HF patients with volume overload, despite using conventional diuretics, were treated with 15 mg/day tolvaptan for 7 days. If the response was insufficient at Day 7, tolvaptan was continued for a further 7 days at either 15 mg/day or 30 mg/day. Outcomes included changes in body weight, symptoms and safety parameters. Thirty-six patients discontinued treatment within 7 days; therefore, 15 patients entered the second phase of treatment. In two patients, tolvaptan was increased to 30 mg/day after 7 days. Body weight was reduced on Day 7 (-1.95 ± 1.98 kg; n = 41) and Day 14 (-2.35 ± 1.44 kg; n = 11, 15 mg/day). Symptoms of volume overload, including lower limb edema, pulmonary congestion, jugular venous distention and hepatomegaly, were improved by tolvaptan treatment for 7 or 14 days. Tolvaptan did not increase the incidence of severe or serious adverse events when administered for 7-14 days. This study confirms the efficacy and safety of 15 mg/day tolvaptan for 7-14 days in Japanese HF patients with volume overload despite conventional diuretics.

  16. Genetic variants influencing phenotypic variance heterogeneity.

    Science.gov (United States)

    Ek, Weronica E; Rask-Andersen, Mathias; Karlsson, Torgny; Enroth, Stefan; Gyllensten, Ulf; Johansson, Åsa

    2018-03-01

    Most genetic studies identify genetic variants associated with disease risk or with the mean value of a quantitative trait. More rarely, genetic variants associated with variance heterogeneity are considered. In this study, we have identified such variance single-nucleotide polymorphisms (vSNPs) and examined if these represent biological gene × gene or gene × environment interactions or statistical artifacts caused by multiple linked genetic variants influencing the same phenotype. We have performed a genome-wide study, to identify vSNPs associated with variance heterogeneity in DNA methylation levels. Genotype data from over 10 million single-nucleotide polymorphisms (SNPs), and DNA methylation levels at over 430 000 CpG sites, were analyzed in 729 individuals. We identified vSNPs for 7195 CpG sites (P mean DNA methylation levels. We further showed that variance heterogeneity between genotypes mainly represents additional, often rare, SNPs in linkage disequilibrium (LD) with the respective vSNP and for some vSNPs, multiple low frequency variants co-segregating with one of the vSNP alleles. Therefore, our results suggest that variance heterogeneity of DNA methylation mainly represents phenotypic effects by multiple SNPs, rather than biological interactions. Such effects may also be important for interpreting variance heterogeneity of more complex clinical phenotypes.
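
The kind of variance-heterogeneity test underlying a vSNP scan can be sketched with a Brown-Forsythe (median-centred Levene) statistic across genotype groups. The data below are simulated with deliberately unequal variances; the group sizes and variance ratios are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
# Three genotype groups (e.g. AA, Aa, aa) with equal means but
# increasing trait variance -- the signature a vSNP scan looks for.
groups = [rng.normal(0.0, s, 200) for s in (1.0, 1.5, 2.0)]

# Brown-Forsythe transform: absolute deviations from each group median
z = [np.abs(g - np.median(g)) for g in groups]

k = len(z)
n = sum(len(zi) for zi in z)
grand = np.mean(np.concatenate(z))
between = sum(len(zi) * (zi.mean() - grand) ** 2 for zi in z) / (k - 1)
within = sum(((zi - zi.mean()) ** 2).sum() for zi in z) / (n - k)
F = between / within
print(F)   # a large F statistic flags variance heterogeneity
```

In practice one would compare F against an F(k-1, n-k) distribution and, as the abstract notes, follow up any hit by checking for additional linked variants before interpreting it as a biological interaction.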

  17. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
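
The variance-components idea behind this decomposition can be illustrated with the law of total variance on a toy firm-by-year panel. The firm and year effect sizes below are invented for illustration and are not estimates from the Compustat sample.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy panel: growth rate = firm effect + year effect + noise,
# for 200 firms over 9 years (all magnitudes are made up).
n_firms, n_years = 200, 9
firm = rng.normal(0.0, 2.0, n_firms)[:, None]   # idiosyncratic firm component
year = rng.normal(0.0, 0.5, n_years)[None, :]   # macro/year component
noise = rng.normal(0.0, 1.0, (n_firms, n_years))
g = firm + year + noise

# Law of total variance: Var(g) = Var(firm means) + mean(within-firm variance)
between = g.mean(axis=1).var()   # variance attributable to firm identity
within = g.var(axis=1).mean()    # variance within firms across years
total = g.var()
print(between + within, total)   # the two sides agree exactly
print(between / total)           # share of variance due to the firm component
```

With the firm effect dominating, the firm share comes out large, mirroring the paper's finding that idiosyncratic firm aspects explain the largest portion of growth-rate variance.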

  18. Three Mile Island nuclear reactor accident of March 1979. Environmental radiation data: Volume III. A report to the President's Commission on the Accident at Three Mile Island

    International Nuclear Information System (INIS)

    Bretthauer, E.W.; Grossman, R.F.; Thome, D.J.; Smith, A.E.

    1981-03-01

    This report contains a listing of environmental radiation monitoring data collected in the vicinity of Three Mile Island (TMI) following the March 28, 1979 accident. These data were collected by the EPA, NRC, DOE, HHS, the Commonwealth of Pennsylvania, or the Bethlehem Steel Corporation. This volume consists of Table 9, a computer printout of environmental data collected by the NRC.

  19. Design of Training Systems, Phase II Report, Volume III; Model Program Descriptions and Operating Procedures. TAEG Report No. 12-2.

    Science.gov (United States)

    Naval Training Equipment Center, Orlando, FL. Training Analysis and Evaluation Group.

    The Design of Training Systems (DOTS) project was initiated by the Department of Defense (DOD) to develop tools for the effective management of military training organizations. Volume 3 contains the model and data base program descriptions and operating procedures designed for phase 2 of the project. Flow charts and program listings for the…

  20. III-V microelectronics

    CERN Document Server

    Nougier, JP

    1991-01-01

    As is well known, Silicon widely dominates the market of semiconductor devices and circuits, and in particular is well suited for Ultra Large Scale Integration processes. However, a number of III-V compound semiconductor devices and circuits have recently been built, and the contributions in this volume are devoted to those types of materials, which offer a number of interesting properties. Taking into account the great variety of problems encountered and of their mutual correlations when fabricating a circuit or even a device, most of the aspects of III-V microelectronics, from fundamental p

  1. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic…

  2. Central Receiver Solar Thermal Power System, Phase 1. CDRL Item 2. Pilot Plant preliminary design report. Volume III, Book 1. Collector subsystem

    Energy Technology Data Exchange (ETDEWEB)

    Hallet, Jr., R. W.; Gervais, R. L.

    1977-10-01

    The central receiver system consists of a field of heliostats, a central receiver, a thermal storage unit, an electrical power generation system, and balance of plant. This volume discusses the collector field geometry, requirements and configuration. The development of the collector system and subsystems are discussed and the selection rationale outlined. System safety and availability are covered. Finally, the plans for collector portion of the central receiver system are reviewed.

  3. Measurements of the frontal and prefrontal lobe volumes by three dimensional magnetic resonance imaging scan. III. Analysis of sex differences with advanced age

    Energy Technology Data Exchange (ETDEWEB)

    Kanemura, Hideaki; Aihara, Masao; Nakazawa, Shinpei [Yamanashi Medical Univ., Tamaho (Japan)

    2002-09-01

    To determine whether there is a sex difference in the growth of the frontal and prefrontal lobes, we quantitatively measured the volumes of these lobes by three dimensional (3-D) MRI in 12 healthy males (5 months to 39 years) and six healthy females (1 year 11 months to 27 years). The left and right lobes were studied separately. The 3-D MRI data were acquired by the fast spoiled gradient recalled (SPGR) sequence using a 1.5 T MR imager. The frontal and prefrontal lobe volumes were measured by the volume measurement function of the workstation. In males, the left to right ratio (L/R ratio) of the frontal and prefrontal lobes increased with age. In contrast, in females, the L/R ratio of the frontal and prefrontal lobes showed no significant change with advancing age. These results highlighted sex-specific maturational changes of the frontal and prefrontal lobes and suggested that quantitative data on the frontal and prefrontal lobes are important in interpreting brain abnormalities in children with developmental disorders. (author)

  4. Measurements of the frontal and prefrontal lobe volumes by three dimensional magnetic resonance imaging scan. III. Analysis of sex differences with advanced age

    International Nuclear Information System (INIS)

    Kanemura, Hideaki; Aihara, Masao; Nakazawa, Shinpei

    2002-01-01

    To determine whether there is a sex difference in the growth of the frontal and prefrontal lobes, we quantitatively measured the volumes of these lobes by three dimensional (3-D) MRI in 12 healthy males (5 months to 39 years) and six healthy females (1 year 11 months to 27 years). The left and right lobes were studied separately. The 3-D MRI data were acquired by the fast spoiled gradient recalled (SPGR) sequence using a 1.5 T MR imager. The frontal and prefrontal lobe volumes were measured by the volume measurement function of the workstation. In males, the left to right ratio (L/R ratio) of the frontal and prefrontal lobes increased with age. In contrast, in females, the L/R ratio of the frontal and prefrontal lobes showed no significant change with advancing age. These results highlighted sex-specific maturational changes of the frontal and prefrontal lobes and suggested that quantitative data on the frontal and prefrontal lobes are important in interpreting brain abnormalities in children with developmental disorders. (author)

  5. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming, including Basic Access 2007 Forms, Introduction to Database Systems, and Overview of Database Design. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry was repeated several times in the database, that would mean the rule or requirement targeted by that variance had already been bypassed many times, so the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. The project was not restricted to the design and development of the database system; it also covered exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed more simply. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by category or type and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and to go more in depth into topics I already knew about.

  6. Decomposition of Variance for Spatial Cox Processes.

    Science.gov (United States)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-03-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  7. Variance in binary stellar population synthesis

    Science.gov (United States)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  8. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    …with a rather general SV model - which is a special case of the semimartingale model. Then QV is integrated variance and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
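
A minimal sketch of the realized-variance estimator itself, under a toy constant-volatility model (the number of intraday returns M, the daily integrated variance, and the random seed are assumptions for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
# With M intraday returns, RV = sum of squared returns estimates the
# day's quadratic variation. Here the true integrated variance is 0.01
# and returns follow a toy Gaussian random walk at that volatility.
M = 10_000
true_iv = 0.01
returns = rng.normal(0.0, np.sqrt(true_iv / M), M)
rv = np.sum(returns ** 2)
print(rv)   # close to 0.01; for small M the estimate is noticeably noisier
```

Repeating the computation for a small M (say 48, roughly half-hourly sampling) shows the sampling noise the abstract refers to: the spread of RV around the true integrated variance shrinks only at rate 1/sqrt(M).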

  9. Variations in Target Volume Definition for Postoperative Radiotherapy in Stage III Non-Small-Cell Lung Cancer: Analysis of an International Contouring Study

    International Nuclear Information System (INIS)

    Spoelstra, Femke; Senan, Suresh; Le Pechoux, Cecile; Ishikura, Satoshi; Casas, Francesc; Ball, David; Price, Allan; De Ruysscher, Dirk; Soernsen de Koste, John R. van

    2010-01-01

    Purpose: Postoperative radiotherapy (PORT) in patients with completely resected non-small-cell lung cancer with mediastinal involvement is controversial because of the failure of earlier trials to demonstrate a survival benefit. Improved techniques may reduce toxicity, but the treatment fields used in routine practice have not been well studied. We studied routine target volumes used by international experts and evaluated the impact of a contouring protocol developed for a new prospective study, the Lung Adjuvant Radiotherapy Trial (Lung ART). Methods and Materials: Seventeen thoracic radiation oncologists were invited to contour their routine clinical target volumes (CTV) for 2 representative patients using a validated CD-ROM-based contouring program. Subsequently, the Lung ART study protocol was provided, and both cases were contoured again. Variations in target volumes and their dosimetric impact were analyzed. Results: Routine CTVs were received for each case from 10 clinicians, whereas six provided both routine and protocol CTVs for each case. Routine CTVs varied up to threefold between clinicians, but use of the Lung ART protocol significantly decreased variations. Routine CTVs in a postlobectomy patient resulted in V20 values ranging from 12.7% to 54.0%, and Lung ART protocol CTVs resulted in values of 20.6% to 29.2%. Similar results were seen for other toxicity parameters and in the postpneumonectomy patient. With the exception of upper paratracheal nodes, protocol contouring improved coverage of the required nodal stations. Conclusion: Even among experts, significant interclinician variations are observed in PORT fields. Inasmuch as contouring variations can confound the interpretation of PORT results, mandatory quality assurance procedures have been incorporated into the current Lung ART study.

  10. Fusion Power Demonstration III

    International Nuclear Information System (INIS)

    Lee, J.D.

    1985-07-01

    This is the third in the series of reports covering the Fusion Power Demonstration (FPD) design study. This volume considers the FPD-III configuration that incorporates an octopole end plug. As compared with the quadrupole end-plugged designs of FPD-I and FPD-II, this octopole configuration reduces the number of end cell magnets and shortens the minimum ignition length of the central cell. The end-cell plasma length is also reduced, which in turn reduces the size and cost of the end cell magnets and shielding. As a continuation in the series of documents covering the FPD, this report does not stand alone as a design description of FPD-III. Design details of FPD-III subsystems that do not differ significantly from those of the FPD-II configuration are not duplicated in this report.

  11. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure.

    Science.gov (United States)

    Bright, Molly G; Murphy, Kevin

    2015-07-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. Copyright © 2015. Published by Elsevier Inc.
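
The nuisance-regression step described above (a generalised linear model whose fitted part is discarded) can be sketched in a few lines; the regressor counts and synthetic data below are assumptions for illustration, not the study's fMRI data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic "fMRI" data: 300 time points, 50 voxels, 6 nuisance
# regressors (standing in for head-motion parameters).
T, n_vox, n_reg = 300, 50, 6
X = rng.normal(size=(T, n_reg))
beta = rng.normal(size=(n_reg, n_vox))
data = X @ beta + rng.normal(size=(T, n_vox))   # nuisance part + residual signal

# Fit the GLM voxel-wise and keep only the residuals; the removed part
# (X1 @ coef) is the "noise" variance the paper inspects with ICA.
X1 = np.column_stack([np.ones(T), X])           # add an intercept column
coef, *_ = np.linalg.lstsq(X1, data, rcond=None)
cleaned = data - X1 @ coef

# By construction the residuals are orthogonal to every regressor
print(np.abs(X1.T @ cleaned).max())             # ~0
```

The paper's point is precisely that the discarded component `X1 @ coef` is not structureless: applying ICA to it, rather than to `cleaned`, reveals network-like spatial patterns.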

  12. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    Science.gov (United States)

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002

  13. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    Science.gov (United States)

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.

  14. Best-practices guidelines for L2PSA development and applications. Volume 2 - Best practices for the Gen II PWR, Gen II BWR L2PSAs. Extension to Gen III reactors

    International Nuclear Information System (INIS)

    Raimond, E.; Durin, T.; Rahni, N.; Meignen, R.; Cranga, M.; Pichereau, F.; Bentaib, A.; Guigueno, Y.; Loeffler, H.; Mildenberger, O.; Lajtha, G.; Santamaria, C.S.; Dienstbier, J.; Rydl, A.; Holmberg, J.E.; Lindholm, I.; Maennistoe, I.; Pauli, E.M.; Dirksen, G.; Grindon, L.; Peers, K.; Hulqvist, G.; Parozzi, F.; Polidoro, F.; Cazzoli, E.; Vitazkova, J.; Burgazzi, L.; Oury, L.; Ngatchou, C.; Siltanen, S.; Niemela, I.; Routamo, T.; Helstroem, P.; Bassi, C.; Brinkman, H.; Seidel, A.; Schubert, B.; Wohlstein, R.; Guentay, S.; Vincon, L.

    2010-01-01

    The objective of this coordinated action was to develop best practice guidelines for the performance of Level 2 PSA methodologies with a view of harmonisation at EU level and to allow meaningful and practical uncertainty evaluations in a Level 2 PSA. Specific relationships with the community in charge of nuclear reactor safety (utilities, safety authorities, vendors, and research or services companies) have been established in order to define the current needs in terms of guidelines for level 2 PSA development and applications. An international workshop was organised in Hamburg, with the support of VATTENFALL, in November 2008. The level 2 PSA experts from the ASAMPSA2 project partners have proposed some guidelines for the development and application of L2PSA based on their experience and on information available from international cooperation (EC Severe Accident network of Excellence - SARNET, IAEA standards, OECD-NEA publications and workshops) or open literature. The number of technical issues addressed in the guideline is very large and all are not covered with the same relevancy in the first version of the guideline. This version is submitted for external review in November 2010 by severe accident and PSA experts, especially from SARNET and OECD-NEA members. The feedback of the external review will be discussed during an international open workshop planned in March 2011 and all outcomes will be taken into consideration in the final version of this guideline (June 2011). The guideline includes 3 volumes: - Volume 1 - General considerations on L2PSA. - Volume 2 - Technical recommendations for Gen II and III reactors. - Volume 3 - Specific considerations for future reactor (Gen IV). The recommendations formulated in the guideline should not be considered as 'mandatory' but should help the L2PSA developers to achieve high quality studies with limited time and resources. It may also help the L2PSA reviewers by positioning one specific study in comparison with some

  15. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ...) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR WORKERS...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13...

  16. 78 FR 14122 - Revocation of Permanent Variances

    Science.gov (United States)

    2013-03-04

    ... Douglas Fir planking had to have at least a 1,900 fiber stress and 1,900,000 modulus of elasticity, while the Yellow Pine planking had to have at least 2,500 fiber stress and 2,000,000 modulus of elasticity... the permanent variances, and affected employees, to submit written data, views, and arguments...

  17. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts...

  18. Biological Variance in Agricultural Products. Theoretical Considerations

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Konopacki, P.

    2003-01-01

    The food that we eat is uniform neither in shape and appearance nor in internal composition and content. As technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were

  19. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models...

  1. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...
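
    The decomposition at work here is, at heart, the law of total variance for a doubly stochastic process. As a minimal illustration (a gamma-mixed Poisson count, not the authors' spatial criterion), the total variance of the count splits into a Poisson part, E[Λ], and an intensity part, Var(Λ):

    ```python
    import math
    import random

    random.seed(42)

    # Counts of a Cox-type process over a fixed region: the driving intensity
    # Lambda is random (Gamma-distributed here) and N | Lambda ~ Poisson(Lambda).
    # Law of total variance: Var(N) = E[Var(N|Lambda)] + Var(E[N|Lambda])
    #                               = E[Lambda]        + Var(Lambda)
    shape, scale = 2.0, 3.0        # E[Lambda] = 6, Var(Lambda) = 18
    n_draws = 200_000

    def poisson(lam):
        # Knuth's multiplicative method; adequate for small intensities
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                return k
            k += 1

    counts = [poisson(random.gammavariate(shape, scale)) for _ in range(n_draws)]
    mean = sum(counts) / n_draws
    var = sum((c - mean) ** 2 for c in counts) / (n_draws - 1)
    # mean should be near 6 and var near 24 = 6 + 18: the excess over the
    # Poisson variance is exactly the variance of the random intensity
    ```

    With shape 2 and scale 3, the simulated counts are overdispersed (variance near 24 against a mean near 6), and the overdispersion is attributable entirely to the random intensity.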

  2. Variance Swap Replication: Discrete or Continuous?

    Directory of Open Access Journals (Sweden)

    Fabien Le Floc’h

    2018-02-01

    The popular replication formula to price variance swaps assumes continuity of traded option strikes. In practice, however, there is only a discrete set of option strikes traded on the market. We present here different discrete replication strategies and explain why the continuous replication price is more relevant.
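
    A sketch of what discrete replication looks like in practice, under illustrative assumptions (Black-76 prices at a flat volatility, a uniform strike grid, zero rates; `discrete_variance_swap` is a hypothetical helper following the familiar CBOE-style weighting, not the paper's strategies):

    ```python
    import math

    def bs_price(F, K, T, sigma, is_call):
        # Black-76 undiscounted option price on forward F (zero-rate assumption)
        if sigma <= 0 or T <= 0:
            return max((F - K) if is_call else (K - F), 0.0)
        d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        if is_call:
            return F * N(d1) - K * N(d2)
        return K * N(-d2) - F * N(-d1)

    def discrete_variance_swap(F, strikes, T, prices_otm):
        # CBOE-style discrete weighting of out-of-the-money option prices:
        # K_var = (2/T) * sum dK/K^2 * Q(K)  -  (1/T) * (F/K0 - 1)^2
        K0 = max(k for k in strikes if k <= F)
        total = 0.0
        for i, K in enumerate(strikes):
            if i == 0:
                dK = strikes[1] - strikes[0]
            elif i == len(strikes) - 1:
                dK = strikes[-1] - strikes[-2]
            else:
                dK = 0.5 * (strikes[i + 1] - strikes[i - 1])
            total += dK / K**2 * prices_otm[i]
        return (2.0 / T) * total - (1.0 / T) * (F / K0 - 1.0) ** 2

    F, T, sigma = 100.0, 0.5, 0.2
    strikes = list(range(50, 151))       # dense, evenly spaced strike grid
    prices = [bs_price(F, K, T, sigma, is_call=(K > F)) for K in strikes]
    k_var = discrete_variance_swap(F, strikes, T, prices)
    # with a flat 20% vol and a dense grid, sqrt(k_var) is close to 0.20
    ```

    Coarsening or truncating the strike grid is exactly where the discrete and continuous replication prices start to diverge.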

  3. Zero-intelligence realized variance estimation

    NARCIS (Netherlands)

    Gatheral, J.; Oomen, R.C.A.

    2010-01-01

    Given a time series of intra-day tick-by-tick price data, how can realized variance be estimated? The obvious estimator—the sum of squared returns between trades—is biased by microstructure effects such as bid-ask bounce and so in the past, practitioners were advised to drop most of the data and
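
    A small simulation, under stated assumptions (Gaussian efficient price, i.i.d. additive noise, illustrative parameter values), shows why the obvious estimator breaks and why sparse sampling was the traditional advice:

    ```python
    import math
    import random

    random.seed(7)
    n = 23400                      # one 6.5-hour trading day of 1-second ticks
    dt = 1.0 / n
    sigma = 0.01                   # daily volatility of the efficient log-price
    noise_sd = 5e-4                # bid-ask / microstructure noise (assumed iid)

    p = [0.0]
    for _ in range(n):
        p.append(p[-1] + sigma * math.sqrt(dt) * random.gauss(0, 1))
    obs = [x + random.gauss(0, noise_sd) for x in p]   # observed log-prices

    def realized_variance(prices, step):
        s = prices[::step]
        return sum((s[i + 1] - s[i]) ** 2 for i in range(len(s) - 1))

    rv_dense = realized_variance(obs, 1)     # every tick: dominated by noise
    rv_sparse = realized_variance(obs, 300)  # 5-minute sampling, most data dropped
    iv = sigma ** 2                          # true integrated variance
    ```

    Tick-by-tick RV is inflated by roughly 2n times the noise variance, while the 5-minute estimate lands near the true integrated variance at the cost of discarding almost all observations.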

  4. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
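
    As a minimal sketch of one classic VRT, antithetic variates: pair each draw Z with -Z when estimating E[e^Z] (true value sqrt(e) ≈ 1.6487). The pairing induces negative covariance between the two function values and so lowers the variance of the estimator:

    ```python
    import math
    import random

    random.seed(1)

    def plain_mc(n):
        # crude Monte Carlo estimate of E[exp(Z)], Z ~ N(0, 1)
        return sum(math.exp(random.gauss(0, 1)) for _ in range(n)) / n

    def antithetic_mc(n):
        # each normal draw is reused with its sign flipped (antithetic pair)
        total = 0.0
        for _ in range(n // 2):
            z = random.gauss(0, 1)
            total += 0.5 * (math.exp(z) + math.exp(-z))
        return total / (n // 2)

    reps, n = 600, 1000
    plain = [plain_mc(n) for _ in range(reps)]
    anti = [antithetic_mc(n) for _ in range(reps)]

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    ```

    Both estimators use the same number of function evaluations per replication, yet the antithetic version shows a visibly smaller variance across replications because exp is monotone, so Cov(e^Z, e^-Z) < 0.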

  5. Reexamining financial and economic predictability with new estimators of realized variance and variance risk premium

    DEFF Research Database (Denmark)

    Casas, Isabel; Mao, Xiuping; Veiga, Helena

    This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from...... time-varying coefficient models are the ones showing considerably higher predictive power for stock market returns and financial instability during the financial crisis, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil........ Moreover, a comparison of the overall results reveals that the conditional variance gains predictive power during the global financial crisis period. Furthermore, both the variance risk premium and conditional variance are determined to be predictors of future financial instability, whereas conditional...

  6. Sweet Lake geopressured-geothermal project, Magma Gulf-Technadril/DOE Amoco fee. Volume III. Final report. Annual report, February 1982-March 1985

    Energy Technology Data Exchange (ETDEWEB)

    Durham, C.O. Jr.; O' Brien, F.D.; Rodgers, R.W. (eds.)

    1985-01-01

    This report presents the results of the testing of Sand 3 (15,245 to 15,280 feet in depth) which occurred from November 1983 to March 1984 and evaluates these new data in comparison to results from the testing of Sand 5 (15,385 to 15,415 feet in depth) which occurred from June 1981 to February 1982. It also describes the reworking of the production and salt water disposal wells preparatory to the Sand 3 testing as well as the plug and abandon procedures requested to terminate the project. The volume contains two parts: Part 1 includes the text and accompanying plates, figures and tables; Part 2 consists of the appendixes including auxiliary reports and tabulations.

  7. New Concepts in Fish Ladder Design, Volume III of IV, Assessment of Fishway Development and Design, 1982-1983 Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Patrick D.; Orsborn, John F.

    1985-08-01

    This volume covers the broad, though relatively short, historical basis for this project. The historical developments of certain design features, criteria and research activities are traced. Current design practices are summarized based on the results of an international survey and interviews with agency personnel and consultants. The fluid mechanics and hydraulics of fishway systems are discussed. Fishways (or fishpasses) can be classified in two ways: (1) on the basis of the method of water control (chutes, steps (ladders), or slots); and (2) on the basis of the degree and type of water control. This degree of control ranges from a natural waterfall to a totally artificial environment at a hatchery. Systematic procedures for analyzing fishways based on their configuration, species, and hydraulics are presented. Discussions of fish capabilities, energy expenditure, attraction flow, stress and other factors are included.

  8. Acid-base titrations by stepwise addition of equal volumes of titrant with special reference to automatic titrations-III Presentation of a fully automatic titration apparatus and of results supporting the theories given in the preceding parts.

    Science.gov (United States)

    Pehrsson, L; Ingman, F

    1977-02-01

    This paper forms Part III of a series in which the first two parts describe methods for evaluating titrations performed by stepwise addition of equal volumes of titrant. The great advantage of these methods is that they do not require an accurate calibration of the electrode system. This property makes the methods very suitable for routine work, e.g., in automatic analysis. An apparatus for performing such titrations automatically is presented. Further, results of titrations of monoprotic acids, a diprotic acid, an ampholyte, a mixture of an acid with its conjugate base, and mixtures of two acids with a small difference between the stability constants are given. Most of these titrations cannot be evaluated by the Gran or Hofstee methods but yield results having errors of the order of 0.1% if the methods proposed in Parts I and II of this series are employed. The advantages of the method of stepwise addition of equal volumes of titrant combined with the proposed evaluation methods, in comparison with common methods such as titration to a preset pH, are that all the data are used in the evaluation, permitting a statistical treatment and giving better possibilities for tracing systematic errors.

  9. REVIEW OF THE NEGOTIATION OF THE MODEL PROTOCOL ADDITIONAL TO THE AGREEMENT(S) BETWEEN STATE(S) AND THE INTERNATIONAL ATOMIC ENERGY AGENCY FOR THE APPLICATION OF SAFEGUARDS, INFCIRC/540 (Corrected) VOLUME III/III, IAEA COMMITTEE 24, DEVELOPMENT OF INFCIRC/540, ARTICLE-BY-ARTICLE REVIEW (1996-1997).

    Energy Technology Data Exchange (ETDEWEB)

    Rosenthal, M.D.; Houck, F.

    2010-01-01

    In this section of the report, the development of INFCIRC/540 is traced by a compilation of citations from the IAEA documents presented to the Board of Governors and the records of discussions in the Board that took place prior to the establishment of Committee 24 as well as the documents and discussions of that committee. The evolution of the text is presented separately for each article or, for the more complex articles, for each paragraph or group of paragraphs of the article. This section covers all articles, including those involving no issues. Background, issues, interpretations and conclusions, which were addressed in Volumes I, II, and III are not repeated here. The comments by states that are included are generally limited to objections and suggested changes. Requests for clarification or elaboration have been omitted, although it is recognized that such comments were sometimes veiled objections.

  10. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for the estimation of integrated variance (IV). An empirical analysis of the Dow Jones Industrial Average stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price. This has important implications for volatility estimation based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient...
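
    The flavor of a kernel-based estimator can be sketched with a Bartlett-weighted realized kernel on simulated noisy prices (an illustration in the spirit of the record, not the authors' exact estimator; all parameter values are assumptions):

    ```python
    import math
    import random

    random.seed(3)
    n = 23400
    sigma, noise_sd = 0.01, 5e-4     # daily vol and iid microstructure noise

    eff = [0.0]
    for _ in range(n):
        eff.append(eff[-1] + sigma * math.sqrt(1.0 / n) * random.gauss(0, 1))
    obs = [x + random.gauss(0, noise_sd) for x in eff]
    r = [obs[i + 1] - obs[i] for i in range(n)]        # observed returns

    def autocov(r, h):
        # empirical autocovariance-type sum of returns at lag h
        return sum(r[i] * r[i + h] for i in range(len(r) - h))

    H = 30
    rv = autocov(r, 0)   # plain realized variance, inflated by ~2*n*noise_var
    rk = rv + sum(2.0 * (1.0 - h / (H + 1.0)) * autocov(r, h)
                  for h in range(1, H + 1))            # Bartlett-weighted kernel
    iv = sigma ** 2
    ```

    Adding the weighted autocovariances cancels most of the noise-induced bias in the squared-return sum, because with i.i.d. noise the first-lag term is negative and of almost the same magnitude as the inflation.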

  11. The Theory of Variances in Equilibrium Reconstruction

    International Nuclear Information System (INIS)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-01

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature

  12. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises and the appendices give selected percentage points of the Gaussian, t, F, chi-squared, and studentized range distributions.
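
    The core idea, each observation split into a common value, row and column effects, and a residual, can be shown in a few lines for a balanced two-way layout (illustrative numbers):

    ```python
    # Balanced two-way layout, one observation per cell:
    # value = common + row effect + column effect + residual
    table = [
        [14.0, 18.0, 22.0],
        [10.0, 15.0, 17.0],
        [16.0, 20.0, 27.0],
    ]
    R, C = len(table), len(table[0])
    common = sum(sum(row) for row in table) / (R * C)
    row_eff = [sum(row) / C - common for row in table]
    col_eff = [sum(table[i][j] for i in range(R)) / R - common for j in range(C)]
    resid = [[table[i][j] - common - row_eff[i] - col_eff[j] for j in range(C)]
             for i in range(R)]
    # every observation is recovered exactly from the four parts, and the
    # effects sum to zero by construction
    ```

    Looking at the separate parts (effects and residuals) rather than only the summary F-statistics is precisely the exploratory emphasis described above.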

  13. Richard III

    DEFF Research Database (Denmark)

    Lauridsen, Palle Schantz

    2017-01-01

    A short analysis of Shakespeare's Richard III, focusing on how this villain is presented so that spectators (and readers) can, for much of the play, feel sympathy for him. With parallels to the Netflix series "House of Cards".

  14. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  15. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As part of the engineering design activity for the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on type 316 stainless steel (SS316) and on the compound system of SS316 and water was carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required to determine the Weight Window parameters, and the Weight Window variance reduction method of the MCNP code proved limited and complicated to apply. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of neutron and gamma-ray flux along the shield depth is reported. There is an optimal importance assignment: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)
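
    The cell importance idea can be caricatured with a toy slab-penetration problem: when a particle survives into a deeper (more important) cell it is split into copies of reduced weight, keeping the estimator unbiased while populating the deep cells that analog sampling rarely reaches. A minimal sketch (not MCNP; all numbers illustrative):

    ```python
    import random

    random.seed(5)

    def analog_transmission(histories, n_cells, p_surv):
        # analog Monte Carlo: a particle must survive every cell in one go
        hits = sum(all(random.random() < p_surv for _ in range(n_cells))
                   for _ in range(histories))
        return hits / histories

    def split_transmission(histories, n_cells, p_surv, split):
        # importance splitting: a particle surviving into the next cell is
        # split into `split` copies, each carrying weight w/split, so the
        # estimator stays unbiased while deep cells stay populated
        total = 0.0
        for _ in range(histories):
            stack = [(1.0, 0)]           # (weight, cell index)
            while stack:
                w, cell = stack.pop()
                if cell == n_cells:
                    total += w           # scored transmission, weighted
                    continue
                if random.random() < p_surv:
                    for _ in range(split):
                        stack.append((w / split, cell + 1))
        return total / histories

    exact = 0.5 ** 10                    # transmission through 10 half-absorbing cells
    est = split_transmission(4000, 10, 0.5, 2)
    ```

    With survival probability 0.5 per cell and a two-way split, the expected population stays near one particle per cell, which mirrors the observation above that importance should grow at the same rate as the flux attenuates.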

  16. Vehicle-based road dust emission measurement (III):. effect of speed, traffic volume, location, and season on PM 10 road dust emissions in the Treasure Valley, ID

    Science.gov (United States)

    Etyemezian, V.; Kuhns, H.; Gillies, J.; Chow, J.; Hendrickson, K.; McGown, M.; Pitchford, M.

    The testing re-entrained aerosol kinetic emissions from roads (TRAKER) road dust measurement system was used to survey more than 400 km of paved roads in southwestern Idaho during 3-week sampling campaigns in winter and summer, 2001. Each data point, consisting of a 1-s measurement of particle light scattering sampled behind the front tire, was associated with a link (section of road) in the traffic demand model network for the Treasure Valley, ID. Each link was in turn associated with a number of characteristics including posted speed limit, vehicle kilometers traveled (vkt), road class (local/residential, collector, arterial, and interstate), county, and land use (urban vs. rural). Overall, the TRAKER-based emission factors based on location, setting, season, and speed spanned a narrow range from 3.6 to 8.0 g/vkt. Emission factors were higher in winter compared to summer, higher in urban areas compared to rural, and lower for roads with fast travel speeds compared to slower roads. The inherent covariance between traffic volume and traffic speed obscured the assessment of the effect of traffic volume on emission potentials. Distance-based emission factors expressed in grams per kilometer traveled (g/vkt) for roads with low travel speeds (˜11 m/s residential roads) compared to those with high travel speeds (˜25 m/s interstates) were higher (5.2 vs. 3.0 g/vkt in summer and 5.9 vs. 4.9 g/vkt in winter). However, emission potentials which characterize the amount of suspendable material on a road were substantially higher on roads with low travel speeds (0.71 vs. 0.13 g/vkt/(m/s) in summer and 0.78 vs. 0.21 g/vkt/(m/s) in winter). This suggested that while high speed roads are much cleaner (factor of 5.4 in summer), on a vehicle kilometer traveled basis, emissions from high and low speed roads are of the same order. Emission inventories based on the TRAKER method, silt loadings obtained during the field study, and US EPA's AP-42 default values of silt loading were

  17. Projection models for health-effects assessment in populations exposed to radioactive and nonradioactive pollutants. Volume III. SPAHR interactive package guide

    International Nuclear Information System (INIS)

    Collins, J.J.

    1982-09-01

    The Simulation Package for the Analysis of Health Risk (SPAHR) is a computer software package based upon a demographic model for health risk projectons. The model extends several health risk projection models by making realistic assumptions about the population at risk, adn thus represents a distinct improvement over previous models. Complete documentation for use of SPAHR is contained in this five-volume publication. The demographic model in SPAHR estimates population response to environmental toxic exposures. Latency of response, changing dose level over time, competing risks from other causes of death, and population structure can be incorporated into SPAHR to project health risks. Risks are measured by morbid years, number of deaths, and loss of life expectancy. Comparisons of estimates of excess deaths demonstrate that previous health risk projection models may have underestimated excess deaths by a factor of from 2 to 10, depending on the pollutant and the exposure scenario. The software supporting the use of the demographic model is designed to be user oriented. Complex risk projections are made by responding to a series of prompts generated by the package. The flexibility and ease of use of SPAHR make it an important contribution to existing models and software packages. This manual outlines the use of the interactive capabilities of SPAHR. SPAHR is an integrated system of computer programs designed for simulating numerous health risk scenarios using the techniques of demographic modeling. This system of computer programs has been designed to be very flexible so as to allow the user to simulate a large variety of scenarios. It provides the user with an integrated package for projecting the impacts on human health of exposure to various hazards, particularly those resulting from the effluents related to energy production

  18. Temporal variability of the volume and physical and chemical characterization of the sediments of the São José III reservoir in the Cariri region of Paraíba

    Directory of Open Access Journals (Sweden)

    Danilo Rodrigues Monteiro

    2015-12-01

    This study aimed to analyze the physical and chemical characteristics of the sediment of the São José III reservoir, located in the municipality of São José dos Cordeiros in the state of Paraíba, as well as the rainfall of the region and the volume of the reservoir over the last 5 years. Sediment was collected at three distinct points of the São José III reservoir (A: bottom of the reservoir; B: margin of the reservoir, waste-disposal area; C: margin of the reservoir, arable area) at a depth of 0-30 cm, with the aid of a manual auger, plastic shovels, and PVC tubes (50 mm diameter), and stored in plastic bags. After collection, the samples were air-dried, crumbled, and sent to the Irrigation and Salinity Laboratory (LIS-UFCG) for the physical and chemical analyses. The rainfall data show that the year with the highest mean was 2009 (103.89 mm) and the lowest was 2012 (16.59 mm). Granulometric analysis revealed that the sediment of sample A (bottom of the reservoir) contains 50.06% clay, whereas sediment samples B (margin of the reservoir, waste-disposal area) and C (margin of the reservoir, arable area) showed a higher sand fraction (B: 80.74%; C: 62.64%). The chemical characterization showed that the C/N ratios obtained for the São José III samples indicate material of both terrestrial and aquatic origin.

  19. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    Science.gov (United States)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. It is assumed that inspection follows a rejection-rectification scheme. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of the OQ and TI are developed. A procedure for obtaining a plan with a designated upper limit for the variance of the OQ (VOQL) is outlined.
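
    For orientation, the OQ/TI quantities can be illustrated on the simpler single-sampling case under rejection rectification (a hedged sketch: `aoq_ati` implements the textbook single-sampling formulas, not the MDS-1(c1,c2) plan itself; lot size, sample size, and acceptance number are illustrative):

    ```python
    from math import comb

    def binom_cdf(k, n, p):
        # P(X <= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

    def aoq_ati(N, n, c, p):
        # Rejection rectification: accepted lots pass through (defectives among
        # the uninspected N-n items remain); rejected lots are 100% inspected
        # with all defectives replaced.
        Pa = binom_cdf(c, n, p)              # acceptance probability
        aoq = Pa * p * (N - n) / N           # average outgoing quality
        ati = n + (1 - Pa) * (N - n)         # average total inspection
        return Pa, aoq, ati

    N, n, c = 2000, 80, 2
    # scan incoming quality p to trace the AOQ curve; its maximum is the AOQL
    grid = [i / 1000 for i in range(1, 101)]
    aoqs = [aoq_ati(N, n, c, p)[1] for p in grid]
    aoql = max(aoqs)
    ```

    The AOQ curve rises and then falls as incoming quality worsens (very bad lots are almost always rejected and rectified), so the maximum over p is the outgoing-quality limit that plan-design procedures such as the one above constrain.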

  20. PARDISEKO III

    International Nuclear Information System (INIS)

    Jordan, H.; Sack, C.

    1975-05-01

    This report gives a detailed description of the latest version of the PARDISEKO code, PARDISEKO III, with particular emphasis on the numerical and programming methods employed. The physical model and its relation to nuclear safety as well as a description and the results of confirming experiments are treated in detail in the Karlsruhe Nuclear Research Centre report KFK-1989. (orig.) [de

  1. Cosmic variance in inflation with two light scalars

    Energy Technology Data Exchange (ETDEWEB)

    Bonga, Béatrice; Brahma, Suddhasattwa; Deutsch, Anne-Sylvie; Shandera, Sarah, E-mail: bpb165@psu.edu, E-mail: suddhasattwa.brahma@gmail.com, E-mail: asdeutsch@psu.edu, E-mail: shandera@gravity.psu.edu [Institute for Gravitation and the Cosmos and Physics Department, The Pennsylvania State University, University Park, PA, 16802 (United States)

    2016-05-01

    We examine the squeezed limit of the bispectrum when a light scalar with arbitrary non-derivative self-interactions is coupled to the inflaton. We find that when the hidden sector scalar is sufficiently light (m ≲ 0.1H), the coupling between long and short wavelength modes from the series of higher order correlation functions (from arbitrary order contact diagrams) causes the statistics of the fluctuations to vary in sub-volumes. This means that observations of primordial non-Gaussianity cannot be used to uniquely reconstruct the potential of the hidden field. However, the local bispectrum induced by mode-coupling from these diagrams always has the same squeezed limit, so the field's locally determined mass is not affected by this cosmic variance.

  2. Wien Automatic System Package (WASP). A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 2: Appendices

    International Nuclear Information System (INIS)

    1995-01-01

    With several Member States, the IAEA has completed a new version of the WASP program, which has been called WASP-III Plus since it follows quite closely the methodology of the WASP-III model. The major enhancements in WASP-III Plus with respect to the WASP-III version are: increase in the number of thermal fuel types (from 5 to 10); verification of which configurations generated by CONGEN have already been simulated in previous iterations with MERSIM; direct calculation of combined Loading Order of FIXSYS and VARSYS plants; simulation of system operation includes consideration of physical constraints imposed on some fuel types (i.e., fuel availability for electricity generation); extended output of the resimulation of the optimal solution; generation of a file that can be used for graphical representation of the results of the resimulation of the optimal solution and cash flows of the investment costs; calculation of cash flows allows to include the capital costs of plants firmly committed or in construction (FIXSYS plants); user control of the distribution of capital cost expenditures during the construction period (if required to be different from the general 'S' curve distribution used as default). This second volume of the document to support use of the WASP-III Plus computer code consists of 5 appendices giving some additional information about the WASP-III Plus program. Appendix A is mainly addressed to the WASP-III Plus system analyst and supplies some information which could help in the implementation of the program on the user computer facilities. This appendix also includes some aspects about WASP-III Plus that could not be treated in detail in Chapters 1 to 11. Appendix B identifies all error and warning messages that may appear in the WASP printouts and advises the user how to overcome the problem. Appendix C presents the flow charts of the programs along with a brief description of the objectives and structure of each module. 
Appendix D describes the

  3. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. 
In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
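
    An elevation variance map reduces, per grid cell, to maintaining a running mean and variance of the elevation samples that land in each cell. A minimal sketch using Welford's online update (illustrative; `VarianceGrid` is a hypothetical class, not the Gamma-SLAM implementation):

    ```python
    import random

    class VarianceGrid:
        # per-cell online mean/variance of terrain elevation samples
        def __init__(self, rows, cols):
            self.n = [[0] * cols for _ in range(rows)]
            self.mean = [[0.0] * cols for _ in range(rows)]
            self.m2 = [[0.0] * cols for _ in range(rows)]

        def add(self, r, c, z):
            # Welford's update: numerically stable, one pass, O(1) per sample
            self.n[r][c] += 1
            d = z - self.mean[r][c]
            self.mean[r][c] += d / self.n[r][c]
            self.m2[r][c] += d * (z - self.mean[r][c])

        def variance(self, r, c):
            return self.m2[r][c] / self.n[r][c] if self.n[r][c] > 1 else 0.0

    random.seed(0)
    grid = VarianceGrid(2, 2)
    samples = [random.gauss(1.5, 0.3) for _ in range(1000)]
    for z in samples:
        grid.add(0, 0, z)

    # the online result matches the direct two-pass computation
    m = sum(samples) / len(samples)
    direct = sum((z - m) ** 2 for z in samples) / len(samples)
    ```

    Because the update is constant-time per point, each particle in the filter can afford to carry its own grid of these statistics, which is what makes the variance-map representation practical in real time.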

  4. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    Time-continuous Markov jump processes are a popular modelling tool in disciplines ranging from computational finance and operations research to human genetics and genomics. The data is often sampled at discrete points in time, and it can be useful to simulate sample paths between the datapoints. In this paper we firstly consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints, using a new algorithm based on the idea of bisection. Secondly, we study the potential of the bisection algorithm for variance reduction. In particular, examples are presented...
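
    The bisection idea can be sketched for a two-state chain, where the transition matrix is available in closed form: the state at the interval midpoint is drawn conditionally on both endpoints, and each half is then bisected recursively. This is a hedged sketch of the state skeleton only; the full algorithm also handles the jump times within small intervals.

    ```python
    import math
    import random

    random.seed(11)
    a, b = 1.0, 2.0   # two-state chain: 0 -> 1 at rate a, 1 -> 0 at rate b

    def P(t):
        # exact transition matrix of the two-state chain at time t
        e = math.exp(-(a + b) * t)
        return [[(b + a * e) / (a + b), a * (1 - e) / (a + b)],
                [b * (1 - e) / (a + b), (a + b * e) / (a + b)]]

    def sample_midpoint(i, j, t):
        # bisection step: draw X_{t/2} given the endpoints X_0 = i, X_t = j
        half = P(t / 2.0)
        w = [half[i][k] * half[k][j] for k in (0, 1)]
        return 0 if random.random() < w[0] / (w[0] + w[1]) else 1

    def sample_bridge(i, j, t, dt):
        # recursively bisect until intervals are shorter than dt; returns the
        # skeleton of states at the dyadic grid points (endpoints included)
        if t <= dt:
            return [i, j]
        k = sample_midpoint(i, j, t)
        left = sample_bridge(i, k, t / 2.0, dt)
        right = sample_bridge(k, j, t / 2.0, dt)
        return left + right[1:]

    # check the midpoint sampler against the analytic bridge probability
    t, trials = 1.0, 20000
    hits = sum(sample_midpoint(0, 0, t) == 0 for _ in range(trials))
    half = P(t / 2.0)
    exact = half[0][0] * half[0][0] / (half[0][0] * half[0][0]
                                       + half[0][1] * half[1][0])
    ```

    Each bisection level only needs transition probabilities over half the interval, which is what makes the scheme attractive compared with naive rejection sampling of unconditioned paths.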

  5. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...
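
    The mean-variance structure can be verified with exact arithmetic. If linearly time-varying utility rates make the scheduling cost of a trip of duration T take the form b1·T + b2·T²/2 (a simplified instance chosen for illustration, not the paper's full model), then its expectation is b1·μ + b2·(μ² + σ²)/2, so two travel time distributions with equal mean and variance are valued identically regardless of shape:

    ```python
    from fractions import Fraction as F

    b1, b2 = F(3), F(2)   # illustrative cost coefficients

    def expected_cost(dist):
        # dist is a list of (travel time, probability) pairs
        return sum(p * (b1 * t + b2 * t * t / 2) for t, p in dist)

    # two distributions with equal mean (2) and variance (1), different shapes
    A = [(F(1), F(1, 2)), (F(3), F(1, 2))]
    B = [(F(0), F(1, 8)), (F(2), F(6, 8)), (F(4), F(1, 8))]
    costA = expected_cost(A)
    costB = expected_cost(B)
    # both equal b1*mu + b2*(mu^2 + var)/2 = 3*2 + 2*(4 + 1)/2 = 11 exactly
    ```

    Only the first two moments of travel time enter the expected cost, which is the sense in which the variance of travel time is the natural variability measure here.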

  6. Fermilab III

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    The total ongoing plans for Fermilab are wrapped up in the Fermilab III scheme, centrepiece of which is the proposal for a new Main Injector. The Laboratory has been awarded a $200,000 Illinois grant which will be used to initiate environmental assessment and engineering design of the Main Injector, while a state review panel recommended that the project should also benefit from $2 million of funding

  8. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
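
    The variance-among-updates idea reduces to a per-cell statistic over a stack of monofrequency updates: where the updates disagree strongly, flag the cell and overwrite its velocity with a local maximum. A toy 1-D sketch with illustrative numbers (thresholds and values are assumptions, not from the paper):

    ```python
    # three monofrequency model updates on a 1-D velocity profile (illustrative);
    # cell 2 receives wildly inconsistent updates, a cycle-skipping signature
    updates = [
        [0.10, 0.12, 0.50, 0.11],
        [0.11, 0.10, -0.40, 0.12],
        [0.09, 0.11, 0.05, 0.10],
    ]
    n = len(updates)
    cells = len(updates[0])

    mean = [sum(u[c] for u in updates) / n for c in range(cells)]
    var = [sum((u[c] - mean[c]) ** 2 for u in updates) / n for c in range(cells)]

    threshold = 0.05                       # assumed disagreement threshold
    flagged = [c for c in range(cells) if var[c] > threshold]

    # replace flagged cells with the maximum velocity in a local window,
    # mimicking the "interpolate maximum velocities" repair step
    velocity = [1500.0, 1600.0, 1550.0, 1700.0]
    for c in flagged:
        lo, hi = max(c - 1, 0), min(c + 1, cells - 1)
        velocity[c] = max(velocity[lo:hi + 1])
    ```

    Consistent cells keep their conventional update, while only the high-variance cells are repaired, which is what lets the outer loop build a usable starting model for subsequent full-waveform inversion.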

  9. Deep Downhole Seismic Testing at the Waste Treatment Plant Site, Hanford, WA. Volume III P-Wave Measurements in Borehole C4997 Seismic Records, Wave-Arrival Identifications and Interpreted P-Wave Velocity Profile.

    Energy Technology Data Exchange (ETDEWEB)

    Stokoe, Kenneth H.; Li, Song Cheng; Cox, Brady R.; Menq, Farn-Yuh

    2007-06-06

    In this volume (III), all P-wave measurements are presented that were performed in Borehole C4997 at the Waste Treatment Plant (WTP) with T-Rex as the seismic source and the Lawrence Berkeley National Laboratory (LBNL) 3-D wireline geophone as the at-depth borehole receiver. P-wave measurements were performed over the depth range of 390 to 1220 ft, typically in 10-ft intervals. However, in some interbeds, 5-ft depth intervals were used. Compression (P) waves were generated by moving the base plate of T-Rex for a given number of cycles at a fixed frequency as discussed in Section 2. This process was repeated so that signal averaging in the time domain was performed using 3 to about 15 averages, with 5 averages typically used. In addition to the LBNL 3-D geophone, called the lower receiver herein, a 3-D geophone from Redpath Geophysics was fixed at a depth of 40 ft (later relocated to 27.5 ft due to visibility in borehole after rain) in Borehole C4997, and a 3-D geophone from the University of Texas was embedded near the borehole at about 1.5 ft below the ground surface. This volume is organized into 12 sections as follows: Section 1: Introduction, Section 2: Explanation of Terminology, Section 3: Vp Profile at Borehole C4997, Sections 4 to 6: Unfiltered P-wave records of lower vertical receiver, reaction mass, and reference receiver, Sections 7 to 9: Filtered P-wave signals of lower vertical receiver, reaction mass and reference receiver, Section 10: Expanded and filtered P-wave signals of lower vertical receiver, and Sections 11 and 12: Waterfall plots of unfiltered and filtered lower vertical receiver signals.

  10. Reduction of variance in spectral estimates for correction of ultrasonic aberration.

    Science.gov (United States)

    Astheimer, Jeffrey P; Pilkington, Wayne C; Waag, Robert C

    2006-01-01

    A variance reduction factor is defined to describe the rate of convergence and accuracy of spectra estimated from overlapping ultrasonic scattering volumes when the scattering is from a spatially uncorrelated medium. Assuming that the individual volumes are localized by a spherically symmetric Gaussian window and that centers of the volumes are located on orbits of an icosahedral rotation group, the factor is minimized by adjusting the weight and radius of each orbit. Conditions necessary for the application of the variance reduction method, particularly for statistical estimation of aberration, are examined. The smallest possible value of the factor is found by allowing an unlimited number of centers constrained only to be within a ball rather than on icosahedral orbits. Computations using orbits formed by icosahedral vertices, face centers, and edge midpoints with a constraint radius limited to a small multiple of the Gaussian width show that a significant reduction of variance can be achieved from a small number of centers in the confined volume and that this reduction is nearly the maximum obtainable from an unlimited number of centers in the same volume.

  11. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  12. A zero-variance-based scheme for variance reduction in Monte Carlo criticality

    International Nuclear Information System (INIS)

    Christoforou, S.; Hoogenboom, J. E.

    2006-01-01

    A zero-variance scheme is derived and proven theoretically for criticality cases, and a simplified transport model is used for numerical demonstration. It is shown in practice that by appropriate biasing of the transition and collision kernels, a significant reduction in variance can be achieved. This is done using the adjoint forms of the emission and collision densities, obtained from a deterministic calculation, according to the zero-variance scheme. By using an appropriate algorithm, the figure of merit of the simulation increases by up to a factor of 50, with the possibility of an even larger improvement. In addition, it is shown that the biasing speeds up the convergence of the initial source distribution. (authors)

  13. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
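The three-step recipe described in the abstract (critical F value, noncentrality parameter, noncentral F tail probability) can be sketched with a stdlib-only Monte Carlo approximation; the degrees of freedom and noncentrality values below are illustrative, not taken from the paper.

```python
import random

def f_sample(df1, df2, noncentrality, rng):
    """One (non)central F variate as a ratio of scaled chi-square draws."""
    shift = (noncentrality / df1) ** 0.5
    num = sum(rng.gauss(shift, 1.0) ** 2 for _ in range(df1))  # noncentral chi2(df1)
    den = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df2))    # central chi2(df2)
    return (num / df1) / (den / df2)

def manova_power_mc(df1, df2, noncentrality, alpha=0.05, n=5000, seed=7):
    """Power = P(noncentral F > critical F), estimated by simulation."""
    rng = random.Random(seed)
    null = sorted(f_sample(df1, df2, 0.0, rng) for _ in range(n))
    f_crit = null[int((1 - alpha) * n)]                        # critical F value
    hits = sum(f_sample(df1, df2, noncentrality, rng) > f_crit for _ in range(n))
    return hits / n

power = manova_power_mc(df1=6, df2=40, noncentrality=12.0)
```

With SciPy available, `scipy.stats.ncf` gives the same tail probability in closed form, which is what the F-approximation procedure in the paper amounts to.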

  14. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  15. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time.

  16. Hybrid biasing approaches for global variance reduction

    International Nuclear Information System (INIS)

    Wu, Zeyun; Abdel-Khalik, Hany S.

    2013-01-01

    A new variant of Monte Carlo—deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating convergence of Monte Carlo simulation and compared with Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purpose. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in standard deviation of the estimated responses. - Highlights: ► Hybrid Monte Carlo Deterministic Method based on Gaussian Process Model is introduced. ► Method employs deterministic model to calculate responses correlations. ► Method employs correlations to bias Monte Carlo transport. ► Method compared to FW-CADIS methodology in SCALE code. ► An order of magnitude speed up is achieved for a PWR core model.

  17. MARS CODE MANUAL VOLUME III - Programmer's Manual

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Hwang, Moon Kyu; Jeong, Jae Jun; Kim, Kyung Doo; Bae, Sung Won; Lee, Young Jin; Lee, Won Jae

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This programmer's manual provides a complete overview of the code structure and the input/output functions of MARS. In addition, brief descriptions of each subroutine and the major variables used in MARS are included in this report, so that it will be very useful for code maintenance. The overall structure of the manual is modeled on that of RELAP5, and as such the layout is very similar to that of RELAP. This similarity to the RELAP5 input is intentional, as this input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  18. Airborne radioactive emission control technology. Volume III

    International Nuclear Information System (INIS)

    Skoski, L.; Berlin, R.; Corby, D.; Clancy, J.; Hoopes, G.

    1980-03-01

    This report reviews the current and future control technology for airborne emissions from a wide variety of industries/facilities, including uranium mining and milling, other nuclear fuel cycle facilities, other NRC-licensed and DOE facilities, fossil fuel facilities, selected metal and non-metal extraction industries, and others. Where specific radioactivity control technology is lacking, a description of any existing control technology is given. Future control technology is assessed in terms of improvements to equipment performance and process alterations. A catalogue of investigated research on advanced control technologies is presented.

  19. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  20. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Science.gov (United States)

    2011-12-19

    DEPARTMENT OF LABOR, Occupational Safety and Health Administration [Docket No. OSHA-2011-0054]. Proposed Revocation of Permanent Variances. AGENCY: Occupational Safety and Health Administration (OSHA). … The Occupational Safety and Health Administration ("OSHA" or "the Agency") granted permanent variances to 24 companies engaged in the …

  1. Variance components and genetic parameters for live weight

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance … Starting values for the (co)variance components of two-trait models were … Estimates of genetic parameters for weaning weight of beef accounting for direct-maternal …

  2. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  3. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation, implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real … events and only marginally by the premium associated with normal price fluctuations.

  4. Gene set analysis using variance component tests.

    Science.gov (United States)

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases are contributed to jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects that assumes a common distribution for the regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulation and a diabetes microarray dataset.
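The permutation calibration described above can be sketched generically; the mean-difference statistic and the toy groups below are illustrative stand-ins, not the actual TEGS variance-component statistic.

```python
import random

def permutation_pvalue(stat, group_a, group_b, n_perm=2000, seed=0):
    """Permutation p-value for a two-group statistic (larger = more extreme)."""
    rng = random.Random(seed)
    observed = stat(group_a, group_b)
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel samples at random
        if stat(pooled[:len(group_a)], pooled[len(group_a):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one smoothing keeps p > 0

# Illustrative statistic and data: absolute difference of group means.
mean_diff = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b))
exposed = [5.0 + 0.1 * i for i in range(10)]
control = [0.1 * i for i in range(10)]
p_value = permutation_pvalue(mean_diff, exposed, control)
```

A real gene-set test would replace `mean_diff` with the quadratic-form variance-component statistic and compare the permutation null against the scaled chi-square approximation mentioned in the abstract.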

  5. Causality in variance and the type of traders in crude oil futures

    International Nuclear Information System (INIS)

    Bhar, Ramaprasad; Hamori, Shigeyuki

    2005-01-01

    This article examines the causal relationship and, in particular, the informational dependence between crude oil futures returns and trading volume using daily data over a ten-year period and a recent econometric methodology. The two-step procedure developed by Cheung and Ng (1996) [Cheung, Y.W., Ng, L.K., 1996. A causality-in-variance test and its applications to financial market prices, Journal of Econometrics 72, 33-48.] is robust to distributional assumptions and does not depend on simultaneous modeling of the two variables. We find causality running from return to volume, in the mean as well as in the conditional variance, only at higher-order lags. Our result is not in complete agreement with several earlier studies in this area. However, it does indicate mild support for the noise traders' hypothesis in the crude oil futures market. (Author)

  6. Is fMRI “noise” really noise? Resting state nuisance regressors remove variance with network structure

    Science.gov (United States)

    Bright, Molly G.; Murphy, Kevin

    2015-01-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured “signal” as well as “noise.” Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors. PMID:25862264
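The regress-out step at the heart of this pre-processing stage can be illustrated with a single nuisance regressor; the toy vectors below are illustrative, and a real pipeline fits many regressors at once in a generalised linear model.

```python
def regress_out(signal, nuisance):
    """Ordinary least squares with a single regressor: subtract the component
    of `signal` explained by `nuisance` (a minimal sketch of the GLM step)."""
    beta = sum(s * n for s, n in zip(signal, nuisance)) / sum(n * n for n in nuisance)
    return [s - beta * n for s, n in zip(signal, nuisance)]

# Toy example: a "network" component orthogonal to the nuisance survives,
# while the nuisance-locked component is removed entirely.
nuisance = [1.0, -1.0, 1.0, -1.0]
network = [1.0, 1.0, -1.0, -1.0]
signal = [2.0 * n + v for n, v in zip(nuisance, network)]
residual = regress_out(signal, nuisance)  # equals `network` exactly
```

The abstract's caution applies here: if the nuisance regressor happens to correlate with the network component, `regress_out` removes structured "signal" along with the noise.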

  7. Graphics Gems III IBM version

    CERN Document Server

    Kirk, David

    1994-01-01

    This sequel to Graphics Gems (Academic Press, 1990) and Graphics Gems II (Academic Press, 1991) is a practical collection of computer graphics programming tools and techniques. Graphics Gems III contains a larger percentage of gems related to modeling and rendering, particularly lighting and shading. This new edition also covers image processing, numerical and programming techniques, modeling and transformations, 2D and 3D geometry and algorithms, ray tracing and radiosity, rendering, and more clever new tools and tricks for graphics programming. Volume III also includes a …

  8. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    Wei, Pengfei; Lu, Zhenzhou; Ruan, Wenbin; Song, Jingwen

    2014-01-01

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from its original mean when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating the model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure.
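A single-sample Monte Carlo variance ratio of this kind can be sketched as follows, assuming two hypothetical independent inputs uniform on [0, 1] and reusing one shared sample set, as the abstract's procedure suggests.

```python
import random
import statistics

def variance_ratio(model, n=20000, reduced=(0.25, 0.75), seed=1):
    """Ratio of output variance when input x1 is restricted to a sub-range,
    to the output variance over the full input ranges (one shared sample set)."""
    rng = random.Random(seed)
    xs = [(rng.random(), rng.random()) for _ in range(n)]
    ys_full = [model(x1, x2) for x1, x2 in xs]
    # Reuse the same draws, keeping only those with x1 in the reduced range.
    ys_sub = [model(x1, x2) for x1, x2 in xs if reduced[0] <= x1 <= reduced[1]]
    return statistics.pvariance(ys_sub) / statistics.pvariance(ys_full)

# For the additive model y = x1 + x2 the analytic value is 0.625.
ratio = variance_ratio(lambda x1, x2: x1 + x2)
```

A ratio below one indicates that narrowing the range of that input reduces the output variance, which is exactly the regional-sensitivity reading of these functions.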

  9. Individual differences in personality traits reflect structural variance in specific brain regions.

    Science.gov (United States)

    Gardini, Simona; Cloninger, C Robert; Venneri, Annalena

    2009-06-30

    Personality dimensions such as novelty seeking (NS), harm avoidance (HA), reward dependence (RD) and persistence (PER) are said to be heritable, stable across time and dependent on genetic and neurobiological factors. Recently a better understanding of the relationship between personality traits and brain structures/systems has become possible due to advances in neuroimaging techniques. This Magnetic Resonance Imaging (MRI) study investigated whether individual differences in these personality traits reflect structural variance in specific brain regions. A large sample of eighty-five young adult participants completed the Tridimensional Personality Questionnaire (TPQ) and had their brains imaged with MRI. A voxel-based correlation analysis was carried out between individuals' personality trait scores and grey matter volume values extracted from 3D brain scans. NS correlated positively with grey matter volume in frontal and posterior cingulate regions. HA showed a negative correlation with grey matter volume in orbito-frontal, occipital and parietal structures. RD was negatively correlated with grey matter volume in the caudate nucleus and in the rectal frontal gyrus. PER showed a positive correlation with grey matter volume in the precuneus, paracentral lobule and parahippocampal gyrus. These results indicate that individual differences in the main personality dimensions of NS, HA, RD and PER may reflect structural variance in specific brain areas.

  10. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
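Treating the K transect lines as a simple random sample, one commonly used design-based form of the encounter rate variance can be sketched as below; the per-line counts `counts` and line lengths `lengths` are the only inputs, and the exact weighting is one textbook variant, not necessarily the estimator studied in the paper.

```python
def encounter_rate_var(counts, lengths):
    """Design-based variance of the encounter rate n/L, treating the K
    transect lines as a simple random sample (one common textbook form)."""
    K = len(counts)
    L = sum(lengths)
    R = sum(counts) / L  # overall encounter rate n/L
    # Squared deviations of per-line rates from the overall rate,
    # weighted by squared line length.
    s = sum(l ** 2 * (n / l - R) ** 2 for n, l in zip(counts, lengths))
    return K * s / (L ** 2 * (K - 1))
```

When every line has the same encounter rate the estimate is zero; spatial trends across lines inflate it, which is the behaviour the abstract's comparison of random and systematic designs is probing.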

  11. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic … The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP, which turns out to be priced when considering Fama and French portfolios.

  12. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory, which accounts for arm and eye movements with noisy signal inputs, was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis of the theory and obtain analytical solutions. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  13. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than in normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we obtain good atrial fibrillation detection performance.
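The variance-based flagging idea can be sketched with a sliding window over RR intervals; the window length, threshold, and sample sequences below are illustrative choices, not values from the paper.

```python
import statistics

def af_flag(rr_intervals, window=8, threshold=0.02):
    """Flag each window of RR intervals (in seconds) whose variance exceeds
    a threshold, as a crude irregularity detector (illustrative parameters)."""
    flags = []
    for i in range(len(rr_intervals) - window + 1):
        segment = rr_intervals[i:i + window]
        flags.append(statistics.pvariance(segment) > threshold)
    return flags

# Regular rhythm: nearly constant RR intervals -> low variance, not flagged.
regular = [0.80, 0.81, 0.79, 0.80, 0.82, 0.80, 0.79, 0.81]
# Irregular rhythm: widely spread RR intervals -> high variance, flagged.
irregular = [0.55, 1.10, 0.72, 0.95, 0.48, 1.20, 0.66, 1.05]
```

A clinical detector would tune the window and threshold on annotated ECG data rather than use fixed constants like these.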

  14. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    Energy Technology Data Exchange (ETDEWEB)

    Ankirchner, Stefan, E-mail: ankirchner@hcm.uni-bonn.de [Rheinische Friedrich-Wilhelms-Universitaet Bonn, Institut fuer Angewandte Mathematik, Hausdorff Center for Mathematics (Germany); Dermoune, Azzouz, E-mail: Azzouz.Dermoune@math.univ-lille1.fr [Universite des Sciences et Technologies de Lille, Laboratoire Paul Painleve UMR CNRS 8524 (France)

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.

  15. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  16. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    International Nuclear Information System (INIS)

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-01-01

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.

  17. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  18. Discrete time and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    Contrary to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  19. Mechatronic systems and materials III

    CERN Document Server

    Gosiewski, Zdzislaw

    2009-01-01

    This very interesting volume is divided into 24 sections; each of which covers, in detail, one aspect of the subject-matter: I. Industrial robots; II. Microrobotics; III. Mobile robots; IV. Teleoperation, telerobotics, teleoperated semi-autonomous systems; V. Sensors and actuators in mechatronics; VI. Control of mechatronic systems; VII. Analysis of vibration and deformation; VIII. Optimization, optimal design; IX. Integrated diagnostics; X. Failure analysis; XI. Tribology in mechatronic systems; XII. Analysis of signals; XIII. Measurement techniques; XIV. Multifunctional and smart materials;

  20. RESAMPLED EFFICIENT FRONTIER PORTFOLIO ANALYSIS BASED ON MEAN-VARIANCE OPTIMIZATION

    OpenAIRE

    Abdurakhman, Abdurakhman

    2008-01-01

    The right asset-allocation decision in portfolio investment can maximize return and/or minimize risk. A method often used in portfolio optimization is the Markowitz mean-variance method. In practice, this method has the weakness of not being very stable: a small change in the input parameter estimates causes a large change in the portfolio composition. For this reason, a portfolio optimization method was developed that can overcome the instability of the mean-variance method ...

  1. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic.

  2. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance-a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is a...
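
    A toy simulation can illustrate the statistic described above. The sketch below (all simulation parameters are assumptions) computes the realized variance from squared intraday returns and the realized range-based variance from normalized squared ranges, using lambda_2 = 4 ln 2, the second moment of the range of a standard Brownian motion over a unit interval.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.04                    # true integrated variance (assumed)
steps, m = 10_000, 100           # fine simulation grid; m intraday intervals
dt = 1.0 / steps

# Simulate one "day" of a continuous martingale (Brownian motion).
increments = np.sqrt(sigma2 * dt) * rng.standard_normal(steps)
path = np.concatenate([[0.0], np.cumsum(increments)])

# Interval open/close and (discretely sampled) high/low per interval.
idx = np.arange(0, steps + 1, steps // m)
opens, closes = path[idx][:-1], path[idx][1:]
highs = np.array([path[a:b + 1].max() for a, b in zip(idx[:-1], idx[1:])])
lows = np.array([path[a:b + 1].min() for a, b in zip(idx[:-1], idx[1:])])

rv = np.sum((closes - opens) ** 2)            # realized variance
lambda2 = 4.0 * np.log(2.0)                   # E[range^2] of unit Brownian motion
rrv = np.sum((highs - lows) ** 2) / lambda2   # realized range-based variance
```

    Both statistics estimate the same integrated variance; the range-based version typically has lower sampling variance, though discrete sampling of the high and low biases it slightly downward.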

  3. Diagnostic checking in linear processes with infinite variance

    OpenAIRE

    Krämer, Walter; Runde, Ralf

    1998-01-01

    We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.

  4. Evaluation of Mean and Variance Integrals without Integration

    Science.gov (United States)

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Because the usual derivations involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving the mean and variance through differential…
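
    One standard way to obtain such moments without integration by parts (not necessarily the technique of the note above) is to differentiate the moment generating function. For the exponential distribution with rate lambda:

```latex
% Moments of X ~ Exponential(rate \lambda) via the MGF, avoiding integration by parts:
M(t) = \mathbb{E}\, e^{tX} = \frac{\lambda}{\lambda - t}, \qquad t < \lambda .
% Differentiate instead of integrating:
M'(t) = \frac{\lambda}{(\lambda - t)^{2}}, \qquad
M''(t) = \frac{2\lambda}{(\lambda - t)^{3}},
% so that
\mathbb{E}[X] = M'(0) = \frac{1}{\lambda}, \qquad
\mathbb{E}[X^{2}] = M''(0) = \frac{2}{\lambda^{2}}, \qquad
\operatorname{Var}(X) = \frac{2}{\lambda^{2}} - \frac{1}{\lambda^{2}} = \frac{1}{\lambda^{2}} .
```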

  5. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Data at the beginning and at the end of the lactation period have higher variances than tests in the middle of the lactation. Furthermore, first lactations have a lower mean and variances compared to second and third lactations. This is a deviation from the basic assumptions required for the application of repeatability models.

  6. Direct encoding of orientation variance in the visual system.

    Science.gov (United States)

    Norman, Liam J; Heywood, Charles A; Kentridge, Robert W

    2015-01-01

    Our perception of regional irregularity, an example of which is orientation variance, seems effortless when we view two patches of texture that differ in this attribute. Little is understood, however, of how the visual system encodes a regional statistic like orientation variance, but there is some evidence to suggest that it is directly encoded by populations of neurons tuned broadly to high or low levels. The present study shows that selective adaptation to low or high levels of variance results in a perceptual aftereffect that shifts the perceived level of variance of a subsequently viewed texture in the direction away from that of the adapting stimulus (Experiments 1 and 2). Importantly, the effect is durable across changes in mean orientation, suggesting that the encoding of orientation variance is independent of global first moment orientation statistics (i.e., mean orientation). In Experiment 3 it was shown that the variance-specific aftereffect did not show signs of being encoded in a spatiotopic reference frame, similar to the equivalent aftereffect of adaptation to the first moment orientation statistic (the tilt aftereffect), which is represented in the primary visual cortex and exists only in retinotopic coordinates. Experiment 4 shows that a neuropsychological patient with damage to ventral areas of the cortex but spared intact early areas retains sensitivity to orientation variance. Together these results suggest that orientation variance is encoded directly by the visual system and possibly at an early cortical stage.

  7. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, and the probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about the overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.

  8. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity.

    Science.gov (United States)

    Diaz, S Anaid; Viney, Mark

    2014-06-01

    Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, though, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that, therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and the variance in lifetime fecundity, in recently wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species.

  9. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  10. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Content of request for variance. 456.522 Section 456.522 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... perform UR within the time requirements for which the variance is requested and its good faith efforts to...

  11. 29 CFR 1905.5 - Effect of variances.

    Science.gov (United States)

    2010-07-01

    ...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR... concerning a proposed penalty or period of abatement is pending before the Occupational Safety and Health...

  12. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ..., DEPARTMENT OF LABOR RECORDING AND REPORTING OCCUPATIONAL INJURIES AND ILLNESSES Other OSHA Injury and Illness... he or she finds appropriate. (iv) If the Assistant Secretary grants your variance petition, OSHA will... Secretary is reviewing your variance petition. (4) If I have already been cited by OSHA for not following...

  13. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  14. Zooplankton biomass (displacement volume) data collected in North Atlantic during ICNAF NORWESTLANT projects I-III in 1963 by different countries, data were acquired from the NMFS-COPEPOD database (NODC Accession 0070201)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Zooplankton biomass data (displacement volume) collected in North Atlantic during ICNAF (International Convention for the Northwest Atlantic Fisheries) NORWESTLANT...

  15. Zooplankton biomass (displacement and settled volume) data collected during the International Cooperative Investigations of the Tropical Atlantic EQUALANT I, EQUALANT II, and EQUALANT III projects from 1963-02-15 to 1964-07-09 (NODC Accession 0071432)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Zooplankton biomass (displacement and settled volume) data collected during the International Cooperative Investigations of the Tropical Atlantic EQUALANT I,...

  16. Flow rate dependent extra-column variance from injection in capillary liquid chromatography.

    Science.gov (United States)

    Aggarwal, Pankaj; Liu, Kun; Sharma, Sonika; Lawson, John S; Dennis Tolley, H; Lee, Milton L

    2015-02-06

    Efficiency and resolution in capillary liquid chromatography (LC) can be significantly affected by extra-column band broadening, especially for isocratic separations. This is particularly a concern in evaluating column bed structure using non-retained test compounds. The band broadening due to an injector supplied with a commercially available capillary LC system was characterized from experimental measurements. The extra-column variance from the injection valve was found to have an extra-column contribution independent of the injection volume, showing an exponential dependence on flow rate. The overall extra-column variance from the injection valve was found to vary from 34 to 23 nL. A new mathematical model was derived that explains this exponential contribution of extra-column variance on chromatographic performance. The chromatographic efficiency was compromised by ∼130% for a non-retained analyte because of injection valve dead volume. The measured chromatographic efficiency was greatly improved when a new nano-flow pumping system with integrated injection valve was used. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Report on the behalf of the special commission for the examination of the bill project, after activation of the accelerated procedure, related to energy transition for a green growth (nr 2188) - Nr 2230. Volume I, Volume II - comparative table, Volume III - hearings, impact study

    International Nuclear Information System (INIS)

    Bareigts, Ericka; Battistel, Marie-Noelle; Buis, Sabine; Baupin, Denis; Plisson, Philippe

    2014-01-01

    The first volume of this huge report presents the general discussion and the detailed examination, discussion and modification of the French bill project on energy transition. The addressed topics are: the definition of common objectives for a successful energy transition, for a strengthening of France's energy independence and for the struggle against global warming; a better renovation of buildings to save energy, reduce prices and create jobs; the development of clean transports to improve air quality and protect health; the struggle against wastage and the promotion of a circular economy from product design to product recycling; the promotion of renewable energies to diversify our energies and valorise territorial resources; the strengthening of nuclear safety and citizen information; the simplification and clarification of procedures for efficiency and competitiveness gains; and the empowerment of citizens, enterprises, territories and the State to act together. The second volume proposes a table which gives a comparative overview of the bill project text and the text modified and adopted by the commission. The third volume reports hearings of the minister and of several representatives of professional, public and consumer organisations and bodies. It also contains the report of an impact study performed on all the different arrangements and measures contained in the bill project

  18. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    Science.gov (United States)

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated for as an impediment. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. A change in condition may alter both the mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existing mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, as did the existing mean heterogeneity tests (the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment

  19. A Study of Job Demands and Curriculum Development in Agricultural Training Related to the Muskegon County Wastewater Management System. Final Report. Volume III. Student Terminal Performance Objectives and Instructional Modules.

    Science.gov (United States)

    Fisher, Harold S.; And Others

    This is the third volume of a four-volume report of a research project designed to (1) identify job needs for agricultural occupations which will result from the Muskegon County Wastewater Management System and perform a task analysis on each occupation, (2) develop instructional modules and determine their place in either high school or 2-year…

  20. GPS Space Service Volume: Ensuring Consistent Utility Across GPS Design Builds for Space Users

    Science.gov (United States)

    Bauer, Frank H.; Parker, Joel Jefferson Konkl; Valdez, Jennifer Ellen

    2015-01-01

    GPS availability and signal strength were originally specified for users on or near the surface of the Earth, with transmitted power levels specified at the edge of the Earth, 14.3 degrees off nadir. Prior to the SSV specification, on-orbit performance of GPS varied from block build to block build (IIA, IIR-M, IIF) due to antenna gain and beam-width variances. Unstable on-orbit performance results in significant risk to space users. Side-lobe signals, although not specified, were expected to significantly boost GPS signal availability for users above the constellation. During GPS III Phase A, NASA noted significant discrepancies between the power levels specified in GPS III specification documents and measured on-orbit performance. To stabilize the signal for high-altitude space users, a NASA-DoD team in 2003-2005 led the creation of the new Space Service Volume (SSV) definition and specifications.

  1. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
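
    The scaling statistic described above is straightforward to compute. In this sketch (the relationship matrix and variance component are assumed toy values), D_k is the average self-relationship minus the average of all relationships in the reference set, and it rescales the estimated variance component to that reference population.

```python
import numpy as np

# D_k = mean self-relationship - mean (self and across) relationship,
# used to refer a genetic variance estimate to a reference population.
K = np.array([[1.00, 0.25, 0.10],    # toy relationship matrix (assumed)
              [0.25, 1.00, 0.50],
              [0.10, 0.50, 1.00]])

D_k = np.mean(np.diag(K)) - np.mean(K)
sigma2_u = 2.0                       # variance component from a mixed model (assumed)
expected_genetic_variance = sigma2_u * D_k
```

    For pedigree or genomic relationship matrices D_k is typically close to 1; kernels with uniformly high off-diagonal relationships shrink D_k, which is how the unscaled estimate comes to overstate heritability.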

  2. Variance computations for functionals of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence-function-based approach to compute the variances of estimates of absolute risk and of functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration, we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence-function-based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  3. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  4. Small-Volume Injections: Evaluation of Volume Administration Deviation From Intended Injection Volumes.

    Science.gov (United States)

    Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B

    2017-10-01

    regression model. Analysis of variance was used to determine whether the absolute log proportional error differed by the intended injection volume. Interindividual and intraindividual deviation from the intended injection volume was also characterized. As the intended injection volumes decreased, the absolute log proportional injection volume error increased (analysis of variance). The standard deviations of the log proportional errors for injection volumes did not differ significantly between physicians and pediatric PACU nurses; however, the difference in absolute bias was significantly higher for nurses, with a 2-sided significance of P = .03. Clinically significant dose variation occurs when injecting volumes ≤0.5 mL. Administering small volumes of medications may result in unintended medication administration errors.

  5. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances

    Science.gov (United States)

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2014-01-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene–gene and gene–environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity. PMID:23921533
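
    For concreteness, here is a minimal implementation of the median-centered (Brown-Forsythe) variant of Levene's test used above: a one-way ANOVA F statistic computed on absolute deviations from each group's median. The simulated genotype groups and their variances are assumptions for illustration, not the paper's data.

```python
import numpy as np

def brown_forsythe(*groups):
    """Levene's test statistic with median centering (Brown-Forsythe):
    one-way ANOVA F statistic on absolute deviations from each group's
    median; compare against an F(k-1, N-k) distribution."""
    z = [np.abs(g - np.median(g)) for g in groups]
    n = np.array([len(zi) for zi in z], dtype=float)
    k, N = len(z), n.sum()
    means = np.array([zi.mean() for zi in z])
    grand = np.concatenate(z).mean()
    ss_between = np.sum(n * (means - grand) ** 2)
    ss_within = sum(np.sum((zi - m) ** 2) for zi, m in zip(z, means))
    return (N - k) / (k - 1) * ss_between / ss_within

rng = np.random.default_rng(1)
# Hypothetical trait values in three genotype groups at one SNP; the
# third group has inflated variance (a variance-heterogeneity signal).
g0, g1 = rng.normal(0, 1, 400), rng.normal(0, 1, 400)
g2 = rng.normal(0, 2, 400)
W = brown_forsythe(g0, g1, g2)
```

    In the meta-analysis setting, each study would compute such a statistic (or its per-group summaries) locally, so that only aggregate results, not individual-level data, need to be exchanged.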

  6. III-V semiconductor materials and devices

    CERN Document Server

    Malik, R J

    1989-01-01

    The main emphasis of this volume is on III-V semiconductor epitaxial and bulk crystal growth techniques. Chapters are also included on material characterization and ion implantation. In order to put these growth techniques into perspective a thorough review of the physics and technology of III-V devices is presented. This is the first book of its kind to discuss the theory of the various crystal growth techniques in relation to their advantages and limitations for use in III-V semiconductor devices.

  7. Assessing child belt fit, volume II : effect of restraint configuration, booster seat designs, seating procedure, and belt fit on the dynamic response of the hybrid III 10-year-old ATD in sled tests.

    Science.gov (United States)

    2008-09-01

    A total of 49 dynamic sled tests were performed with the Hybrid III 10YO to examine issues relating to child belt fit. The goals of these tests were to evaluate ATD response to realistic belt geometries and belt fit, develop methods for accurate, rep...

  8. Comparison of variance estimators for metaanalysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  9. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  10. Phenotypic variance explained by local ancestry in admixed African Americans.

    Science.gov (United States)

    Shriner, Daniel; Bentley, Amy R; Doumatey, Ayo P; Chen, Guanjie; Zhou, Jie; Adeyemo, Adebowale; Rotimi, Charles N

    2015-01-01

    We surveyed 26 quantitative traits and disease outcomes to understand the proportion of phenotypic variance explained by local ancestry in admixed African Americans. After inferring local ancestry as the number of African-ancestry chromosomes at hundreds of thousands of genotyped loci across all autosomes, we used a linear mixed effects model to estimate the variance explained by local ancestry in two large independent samples of unrelated African Americans. We found that local ancestry at major and polygenic effect genes can explain up to 20 and 8% of phenotypic variance, respectively. These findings provide evidence that most but not all additive genetic variance is explained by genetic markers undifferentiated by ancestry. These results also inform the proportion of health disparities due to genetic risk factors and the magnitude of error in association studies not controlling for local ancestry.

  11. Host nutrition alters the variance in parasite transmission potential.

    Science.gov (United States)

    Vale, Pedro F; Choisy, Marc; Little, Tom J

    2013-04-23

    The environmental conditions experienced by hosts are known to affect their mean parasite transmission potential. How different conditions may affect the variance of transmission potential has received less attention, but is an important question for disease management, especially if specific ecological contexts are more likely to foster a few extremely infectious hosts. Using the obligate-killing bacterium Pasteuria ramosa and its crustacean host Daphnia magna, we analysed how host nutrition affected the variance of individual parasite loads, and, therefore, transmission potential. Under low food, individual parasite loads showed similar mean and variance, following a Poisson distribution. By contrast, among well-nourished hosts, parasite loads were right-skewed and overdispersed, following a negative binomial distribution. Abundant food may, therefore, yield individuals causing potentially more transmission than the population average. Measuring both the mean and variance of individual parasite loads in controlled experimental infections may offer a useful way of revealing risk factors for potential highly infectious hosts.
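
    The Poisson versus negative binomial contrast above is easy to see in a variance-to-mean ratio. The sketch below uses assumed toy parameters (mean load 20, negative binomial dispersion k = 2), not the paper's data; NumPy's negative binomial uses the (n, p) parameterization with mean n(1-p)/p.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical parasite loads: Poisson-like under low food (variance ~ mean)
# versus overdispersed negative binomial under high food (variance >> mean).
mean_load = 20.0
low_food = rng.poisson(lam=mean_load, size=5000)

k = 2.0                           # NB dispersion parameter (assumed)
p = k / (k + mean_load)           # gives mean n(1-p)/p = mean_load
high_food = rng.negative_binomial(k, p, size=5000)

vmr_low = low_food.var() / low_food.mean()    # variance-to-mean ratio, ~1
vmr_high = high_food.var() / high_food.mean() # >> 1 when overdispersed
```

    For the negative binomial, the theoretical variance is mean + mean^2/k, so small k (here 2) produces the right-skewed, overdispersed loads the abstract associates with well-nourished hosts.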

  12. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach to Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo to the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling, calculations; it is a generalization which avoids the nonconvergence to the optimal values that arises in some cases of a multistage variance-reduction method introduced by Spanier.
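
    A toy version of the parametric dependence described above (the problem and parameter values are assumptions, not the paper's setup): estimate the Gaussian tail probability P(X > 3) by sampling from a tilted density N(theta, 1) and scanning the tilt parameter theta. The empirical variance is small near the well-chosen tilt (theta near 3) and degrades under severe biasing (theta = 6), where a few large likelihood-ratio weights dominate, the effect behind the "effective bias" noted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

def tail_estimate(theta):
    """Importance-sampling estimate of P(X > 3), X ~ N(0,1), drawing
    from N(theta, 1); returns the estimate and the empirical variance
    of the sample mean."""
    y = rng.normal(theta, 1.0, n)
    w = np.exp(-theta * y + 0.5 * theta ** 2)  # N(0,1)/N(theta,1) ratio
    h = (y > 3.0) * w
    return h.mean(), h.var(ddof=1) / n

# Scan the importance-function parameter; variance depends strongly on it.
results = {theta: tail_estimate(theta) for theta in (0.0, 1.0, 3.0, 6.0)}
```

    Here theta = 0 is crude Monte Carlo, so comparing its empirical variance with the tilted estimators traces out the functional dependence of variance on the importance-function parameter.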

  13. Advanced methods of analysis variance on scenarios of nuclear prospective

    International Nuclear Information System (INIS)

    Blazquez, J.; Montalvo, C.; Balbas, M.; Garcia-Berrocal, A.

    2011-01-01

    Traditional variance propagation techniques are not very reliable here, because the uncertainties involved can reach 100% in relative value; for this reason, less conventional methods are used, such as the Beta distribution, fuzzy logic and the Monte Carlo method.

  14. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
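One of the techniques "borrowed from other contexts" that this abstract alludes to is the control variate. The toy example below is ours, not the paper's: it estimates E[exp(U)] for U ~ Uniform(0, 1), whose exact value is e - 1, using U itself (known mean 1/2) as the control variable.

```python
import math
import random

random.seed(3)
n = 10000
u = [random.random() for _ in range(n)]
y = [math.exp(x) for x in u]          # target samples, E[Y] = e - 1

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

y_mean, y_var = mean_var(y)
u_mean, _ = mean_var(u)

# Near-optimal coefficient c = Cov(Y, U) / Var(U), estimated from the samples.
cov = sum((a - y_mean) * (b - u_mean) for a, b in zip(y, u)) / (n - 1)
var_u = sum((b - u_mean) ** 2 for b in u) / (n - 1)
c = cov / var_u

# Control-variate estimator: Y - c * (U - E[U]); same mean, smaller variance.
cv = [a - c * (b - 0.5) for a, b in zip(y, u)]
cv_mean, cv_var = mean_var(cv)
```

Because exp(U) and U are almost perfectly correlated, the residual variance is a small fraction of the plain Monte Carlo variance; in stochastic homogenization the control is typically a cheaper surrogate corrector problem rather than the input itself.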

  15. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Heritability, variance components and genetic advance of some yield and yield related traits in Ethiopian ... African Journal of Biotechnology ... randomized complete block design at Adet Agricultural Research Station in 2008 cropping season.

  16. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  17. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J.

    2009-01-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing

  18. Volatility and variance swaps : A comparison of quantitative models to calculate the fair volatility and variance strike

    OpenAIRE

    Röring, Johan

    2017-01-01

    Volatility is a common risk measure in the field of finance that describes the magnitude of an asset’s up and down movement. From only being a risk measure, volatility has become an asset class of its own and volatility derivatives enable traders to get an isolated exposure to an asset’s volatility. Two kinds of volatility derivatives are volatility swaps and variance swaps. The problem with volatility swaps and variance swaps is that they require estimations of the future variance and volati...

  19. ASYMMETRY OF MARKET RETURNS AND THE MEAN VARIANCE FRONTIER

    OpenAIRE

    SENGUPTA, Jati K.; PARK, Hyung S.

    1994-01-01

    The hypothesis that the skewness and asymmetry have no significant impact on the mean variance frontier is found to be strongly violated by monthly U.S. data over the period January 1965 through December 1974. This result raises serious doubts whether the common market portfolios such as S&P 500, value-weighted and equal-weighted returns can serve as suitable proxies for mean-variance efficient portfolios in the CAPM framework. A new test for assessing the impact of skewness on the variance fr...

  20. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    The definition of the uniform linear generator is given and some of the most widely used tests for evaluating the uniformity and independence of the resulting draws are listed. The problem of calculating, through simulation, some moment W of a function of a random variable is taken into account. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with reduced variance are introduced
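A classic member of the family of reduced-variance estimators this record surveys is the antithetic-variates construction. The example below is our own illustration with arbitrary sample sizes: it estimates pi = E[4*sqrt(1 - U**2)] for U ~ Uniform(0, 1), pairing each draw U with 1 - U; because the integrand is monotone, the pair members are negatively correlated and their average has lower variance than two independent draws at the same cost.

```python
import math
import random

random.seed(4)
n_pairs = 5000

def f(u):
    return 4.0 * math.sqrt(1.0 - u * u)   # E[f(U)] = pi

crude, anti = [], []
for _ in range(n_pairs):
    u1, u2 = random.random(), random.random()
    crude.append((f(u1) + f(u2)) / 2.0)      # two independent draws
    v = random.random()
    anti.append((f(v) + f(1.0 - v)) / 2.0)   # antithetic pair, same cost

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

crude_mean, crude_var = mean_var(crude)
anti_mean, anti_var = mean_var(anti)
```

Both pair-averages are unbiased for pi, but the antithetic version exploits the negative covariance between f(U) and f(1 - U) to cut the variance per pair.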

  1. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...
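For readers unfamiliar with the mean-variance efficient frontier this record restricts to, the sketch below computes its left endpoint, the global minimum-variance portfolio, in closed form for two risky assets. All asset statistics are invented for illustration.

```python
# Hypothetical two-asset example: volatilities s1, s2 and correlation rho.
s1, s2, rho = 0.20, 0.30, 0.20
cov12 = rho * s1 * s2

def port_var(w):
    """Variance of the portfolio w * asset1 + (1 - w) * asset2."""
    return w**2 * s1**2 + (1 - w)**2 * s2**2 + 2 * w * (1 - w) * cov12

# Closed-form weight of asset 1 in the global minimum-variance portfolio:
# w* = (s2^2 - cov12) / (s1^2 + s2^2 - 2 * cov12).
w_min = (s2**2 - cov12) / (s1**2 + s2**2 - 2 * cov12)
```

Sweeping w from w_min toward higher expected return traces the efficient frontier along which the restricted CPT maximization in the abstract is carried out.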

  2. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  3. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    2008-12-01

    slight longitudinal variations, with secondary high-latitude peaks occurring over Greenland and Europe. As the QBO changes to the westerly phase, the ... equatorial GW temperature variances from suborbital data (e.g., Eckermann et al. 1995). The extratropical wave variances are generally larger in the ... emanating from tropopause altitudes, presumably radiated from tropospheric jet stream instabilities associated with baroclinic storm systems that

  4. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    Giorla, J.

    1985-10-01

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case [fr

  5. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  6. The asymptotic variance of departures in critically loaded queues

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Mandjes, M.R.H.; Nazarathy, Y.; Whitt, W.

    2011-01-01

    We consider the asymptotic variance of the departure counting process D(t) of the GI/G/1 queue; D(t) denotes the number of departures up to time t. We focus on the case where the system load ϱ equals 1, and prove that the asymptotic variance rate satisfies lim_{t→∞} Var D(t) / t = λ(1 − 2/π)(c_a² +

  7. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  8. Why risk is not variance: an expository note.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2008-08-01

    Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
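The violation this note proves can be reproduced with a two-line calculation. Below, a mean-variance score m - k*v (the trade-off coefficient k = 2 is an arbitrary choice of ours) is applied to the prospect "gain 1 with probability p, lose nothing": raising p from 0.05 to 0.20 increases the probability of the fixed gain, yet lowers the score, so the mean-variance decisionmaker prefers the worse prospect.

```python
def mv_score(p, k=2.0):
    """Mean-variance score m - k*v of a prospect paying 1 with prob p, else 0."""
    mean = p
    var = p * (1.0 - p)   # Bernoulli variance
    return mean - k * var

low_p = mv_score(0.05)    # small chance of gain, tiny variance
high_p = mv_score(0.20)   # larger chance of gain, larger variance
```

For any k > 1 the score is initially decreasing in p, so near p = 0 a strictly better prospect (higher win probability, zero loss probability) is always scored worse, which is exactly the irrationality the note demonstrates.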

  9. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to solve increases. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. But such a method is always specific to the estimation of one quantity, while a Monte Carlo simulation makes it possible to perform simultaneous estimations of diverse quantities. Therefore, the estimation of one of them could be made more accurate while degrading at the same time the variance of other estimations. We propound here a method to reduce simultaneously the variance for several quantities, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance one, the method we propound is impossible to perform exactly. However, we show that simple approximations of it may be very efficient. (author)

  10. A versatile omnibus test for detecting mean and variance heterogeneity.

    Science.gov (United States)

    Cao, Ying; Wei, Peng; Bailey, Matthew; Kauwe, John S K; Maxwell, Taylor J

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (G × G), or gene-by-environment interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT(MV)) or either effect alone (LRT(M) or LRT(V)) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to nonnormality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how LD can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D', and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance only effects. We discuss using vQTL as an approach to detect G × G interactions and also how vQTL are related to relationship loci, and how both can create prior hypothesis for each other and reveal the relationships between traits and possibly between components of a composite trait.

  11. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
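The baseline that this record generalizes is variance-based sensitivity analysis with independent inputs, where first-order Sobol indices are well defined. The sketch below is ours (the model and its coefficients are arbitrary): it uses a pick-freeze estimator on Y = 2*X1 + X2 with X1, X2 ~ Uniform(0, 1), for which the analytical indices are S1 = 4/5 and S2 = 1/5.

```python
import random

random.seed(7)

def model(x1, x2):
    # Additive test model; Var(Y) = (4 + 1)/12, so S1 = 0.8, S2 = 0.2.
    return 2.0 * x1 + x2

n = 20000
a = [(random.random(), random.random()) for _ in range(n)]
b = [(random.random(), random.random()) for _ in range(n)]

y_a = [model(x1, x2) for x1, x2 in a]
mean_a = sum(y_a) / n
var_y = sum((y - mean_a) ** 2 for y in y_a) / (n - 1)

def first_order(i):
    """Pick-freeze estimate of S_i: keep input i from sample A, redraw the rest."""
    y_mix = [model(xa[0] if i == 0 else xb[0], xa[1] if i == 1 else xb[1])
             for xa, xb in zip(a, b)]
    mean_mix = sum(y_mix) / n
    cov = sum((ya - mean_a) * (ym - mean_mix)
              for ya, ym in zip(y_a, y_mix)) / (n - 1)
    return cov / var_y

s1, s2 = first_order(0), first_order(1)
```

With dependent inputs this simple estimator no longer separates effects univocally, which is the gap the orthogonalisation-based indices of the paper are designed to fill.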

  12. Antithrombin III blood test

    Science.gov (United States)

    medlineplus.gov/ency/article/003661.htm: Antithrombin III blood test. Antithrombin III (AT III) is a protein that helps control blood clotting. A blood test can determine the amount of AT III present ...

  13. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.

  14. Accounting for Cosmic Variance in Studies of Gravitationally Lensed High-redshift Galaxies in the Hubble Frontier Field Clusters

    Science.gov (United States)

    Robertson, Brant E.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; Stark, Dan P.; McLeod, Derek

    2014-12-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ~35% at redshift z ~ 7 to >~ 65% at z ~ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  15. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  16. How does variance in fertility change over the demographic transition?

    Science.gov (United States)

    Hruschka, Daniel J; Burger, Oskar

    2016-04-19

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45-49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. © 2016 The Author(s).

  17. Mononuclear non-heme iron(III)

    Indian Academy of Sciences (India)

    Journal of Chemical Sciences, Volume 123, Issue 2. Mononuclear non-heme iron(III) complexes of linear and tripodal tridentate ligands as functional models for catechol dioxygenases: Effect of -alkyl substitution on regioselectivity and reaction rate. Mallayan Palaniandavar, Kusalendiran Visvaganesan.

  18. Individual Differences in EEG Spectral Power Reflect Genetic Variance in Gray and White Matter Volumes

    NARCIS (Netherlands)

    Smit, D.J.A.; Boomsma, D.I.; Schnack, H.G.; Hulshoff Pol, H.E.; de Geus, E.J.C.

    2012-01-01

    The human electroencephalogram (EEG) consists of oscillations that reflect the summation of postsynaptic potentials at the dendritic tree of cortical neurons. The strength of the oscillations (EEG power) is a highly genetic trait that has been related to individual differences in many phenotypes,

  19. Waste Isolation Pilot Plant No-migration variance petition. Addendum: Volume 7, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    1990-03-01

    This report describes various aspects of the Waste Isolation Pilot Plant (WIPP) including design data, waste characterization, dissolution features, ground water hydrology, natural resources, monitoring, general geology, and the gas generation/test program.

  20. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  1. Genetic and environmental variance in content dimensions of the MMPI.

    Science.gov (United States)

    Rose, R J

    1988-08-01

    To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than those of twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrices of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.

  2. A new variance stabilizing transformation for gene expression data analysis.

    Science.gov (United States)

    Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor

    2013-12-01

    In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it allows a wider scope of mean-variance related data to be reached. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data by using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
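A minimal sketch of the idea, under the common two-component error model y = mu * exp(eta) + eps (all parameter values below are invented, and the glog shown is one common parameterization of the generalized logarithm, not necessarily the family member proposed in the paper): glog behaves like a plain logarithm for large intensities, stays defined at and below zero, and roughly equalizes the spread of low- and high-intensity measurements.

```python
import math
import random

random.seed(8)

def glog(y, lam):
    """Generalized log; ~ log(2y) for y >> sqrt(lam), finite for y <= 0."""
    return math.log(y + math.sqrt(y * y + lam))

def simulate(mu, n=4000, s_eta=0.2, s_eps=2.0):
    """Two-component error model: multiplicative noise plus additive noise."""
    return [mu * math.exp(random.gauss(0.0, s_eta)) + random.gauss(0.0, s_eps)
            for _ in range(n)]

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

lam = (2.0 / 0.2) ** 2   # rough tuning: lambda = (s_eps / s_eta)^2
low, high = simulate(10.0), simulate(1000.0)

ratio_raw = sd(high) / sd(low)                                 # far from 1
ratio_glog = (sd([glog(y, lam) for y in high])
              / sd([glog(y, lam) for y in low]))               # near 1
```

On the raw scale the high-intensity group is many times more variable than the low-intensity one; after the transformation the two standard deviations are comparable, which is the mean-variance stabilization the paper's family of transformations targets.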

  3. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Yoon, Ji-Hun

    2015-01-01

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American option. • Our SEV model consists of a fast mean-reverting factor and a slow mean-revering factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing the perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein–Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk

  4. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed

  5. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depth is required.
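The species-richness analogue the authors mention has a well-known closed form: for abundances N_i summing to N, the expected richness in a random subsample of size n is the sum over species of 1 - C(N - N_i, n) / C(N, n). The sketch below (with made-up abundance counts, not the Toohey Forest data) checks the exact formula against repeated random subsampling, mirroring the paper's validation strategy for PD.

```python
import random
from math import comb

def expected_richness(counts, n):
    """Exact mean species richness in a random subsample of n individuals."""
    total = sum(counts)
    return sum(1.0 - comb(total - c, n) / comb(total, n) for c in counts)

counts = [50, 20, 10, 5, 1, 1]   # hypothetical abundances per species
n = 20
exact = expected_richness(counts, n)

# Monte Carlo check: draw subsamples without replacement, count species seen.
random.seed(9)
pool = [species for species, c in enumerate(counts) for _ in range(c)]
reps = 2000
mc = sum(len(set(random.sample(pool, n))) for _ in range(reps)) / reps
```

As the abstract notes for PD, the analytical value is exact and cheap, while the subsampling estimate only converges to it as the number of random draws grows.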

  6. Variance estimation for sensitivity analysis of poverty and inequality measures

    Directory of Open Access Journals (Sweden)

    Christian Dudel

    2017-04-01

    Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales and allows one to derive variance estimates for the results of the sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.

  7. Studying Variance in the Galactic Ultra-compact Binary Population

    Science.gov (United States)

    Larson, Shane; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  8. Variance of a product with application to uranium estimation

    International Nuclear Information System (INIS)

    Lowe, V.W.; Waterman, M.S.

    1976-01-01

    The U in a container can either be determined directly by NDA or by estimating the weight of material in the container and the concentration of U in this material. It is important to examine the statistical properties of estimating the amount of U by multiplying the estimates of weight and concentration. The variance of the product determines the accuracy of the estimate of the amount of uranium. This paper examines the properties of estimates of the variance of the product of two random variables
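
    For the special case of independent weight and concentration estimates, the variance of the product has a standard closed form. The sketch below assumes independence and uses hypothetical normal distributions; the paper's general treatment of the product of two random variables is broader than this.

```python
import random

def product_variance(mx, vx, my, vy):
    """Exact Var(XY) for independent X, Y with means mx, my and variances vx, vy:
    Var(XY) = mx^2 * vy + my^2 * vx + vx * vy."""
    return mx ** 2 * vy + my ** 2 * vx + vx * vy

# Monte Carlo sanity check with hypothetical values:
# weight ~ N(2, 1), concentration ~ N(3, 4)  (variances, not std devs)
rng = random.Random(0)
xs = [rng.gauss(2, 1) * rng.gauss(3, 2) for _ in range(200000)]
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)
print(product_variance(2, 1, 3, 4), v)   # analytical value 29 vs simulated
```

    The cross-term vx * vy is what naive error propagation (which keeps only the first two terms) drops; it matters when both relative uncertainties are large.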

  9. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of the additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of the maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of the residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
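
    The reported heritabilities can be approximately recovered from the posterior means of the variance components above, since narrow-sense heritability is the additive variance divided by the total phenotypic variance (additive + maternal environment + residual). The recomputation is only approximate: a ratio of posterior means need not equal the posterior mean of the ratio, so small rounding differences from the abstract's values are expected.

```python
def heritability(va, vm, ve):
    """Narrow-sense heritability: additive variance over phenotypic variance."""
    return va / (va + vm + ve)

# Posterior means from the abstract: (additive, maternal environment, residual)
components = {
    "BWH":  (0.15,  0.23, 0.084),
    "BW07": (4.18,  1.29, 6.43),
    "BW14": (14.62, 2.76, 22.66),
    "BW21": (27.18, 4.12, 31.21),
    "BW28": (32.68, 5.16, 30.85),
}
for trait, (va, vm, ve) in components.items():
    print(trait, round(heritability(va, vm, ve), 2))
```

    The recomputed ratios increase with age, matching the abstract's conclusion that heritability rises from hatch to 28 days.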

  10. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).

  11. Variance squeezing and entanglement of the XX central spin model

    International Nuclear Information System (INIS)

    El-Orany, Faisal A A; Abdalla, M Sebawe

    2011-01-01

    In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.

  12. Variance squeezing and entanglement of the XX central spin model

    Energy Technology Data Exchange (ETDEWEB)

    El-Orany, Faisal A A [Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University, Ismailia (Egypt); Abdalla, M Sebawe, E-mail: m.sebaweh@physics.org [Mathematics Department, College of Science, King Saud University PO Box 2455, Riyadh 11451 (Saudi Arabia)

    2011-01-21

    In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.

  13. Application of variance reduction techniques of Monte-Carlo method to deep penetration shielding problems

    International Nuclear Information System (INIS)

    Rawat, K.K.; Subbaiah, K.V.

    1996-01-01

    General purpose Monte Carlo code MCNP is being widely employed for solving deep penetration problems by applying variance reduction techniques. These techniques depend on the nature and type of the problem being solved. Application of the geometry splitting and implicit capture methods is examined to study deep penetration problems of neutron, gamma and coupled neutron-gamma transport in thick shielding materials. The typical problems chosen are: i) a point isotropic monoenergetic gamma ray source of 1 MeV energy in a nearly infinite water medium, ii) a 252Cf spontaneous fission source at the centre of 140 cm thick water and concrete and iii) 14 MeV fast neutrons incident on the axis of a 100 cm thick concrete disk. (author). 7 refs., 5 figs

  14. Accounting for Cosmic Variance in Studies of Gravitationally Lensed High-redshift Galaxies in the Hubble Frontier Field Clusters

    OpenAIRE

    Robertson, Brant E.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; Stark, Dan P.; McLeod, Derek

    2014-01-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we sho...

  15. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  16. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  17. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  18. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  19. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  20. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined the Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  1. Genetic variance components for residual feed intake and feed ...

    African Journals Online (AJOL)

    Feeding cost is a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed efficiency aims to reduce feeding cost in beef cattle and thereby improve profitability. This study estimated genetic (co)variances between weaning weight and other production, reproduction ...

  2. Cumulative Prospect Theory, Option Returns, and the Variance Premium

    NARCIS (Netherlands)

    Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver

    The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the

  3. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and the management of production and reservoirs. We show that the power spectra of the time series involved follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term (white-noise) variance as a result of current production objectives.
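
    The connection between a hydrograph's variance and its spectral decomposition rests on Parseval's relation: the variance of a demeaned series equals the sum of its suitably normalised periodogram. A minimal numpy sketch on a synthetic discharge series (not the River Dalälven data) is:

```python
import numpy as np

rng = np.random.default_rng(42)
q = rng.normal(100.0, 10.0, 1024)   # synthetic discharge series (m^3/s)
q = q - q.mean()                    # remove the mean (DC component)

# Periodogram normalised so that its sum equals the series variance:
# var(q) = (1/N) sum q^2 = (1/N^2) sum |FFT(q)_k|^2  (Parseval's relation)
spectrum = np.abs(np.fft.fft(q)) ** 2 / len(q) ** 2
print(q.var(), spectrum.sum())      # the two agree to rounding error
```

    Partitioning `spectrum` into frequency bands is then a decomposition of the total variance by timescale, which is the quantity the study tracks across short- and long-term periods.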

  4. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    Essa, Khalid S

    2013-01-01

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated in a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using incorrect dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and to two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)

  5. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    Van Zuijlen M

    2009-01-01

    We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in mind in auditing as well as in processing environmental data.

  6. Robust estimation of the noise variance from background MR data

    NARCIS (Netherlands)

    Sijbers, J.; Den Dekker, A.J.; Poot, D.; Bos, R.; Verhoye, M.; Van Camp, N.; Van der Linden, A.

    2006-01-01

    In the literature, many methods are available for estimation of the variance of the noise in magnetic resonance (MR) images. A commonly used method, based on the maximum of the background mode of the histogram, is revisited and a new, robust, and easy to use method is presented based on maximum

  7. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas

    2011-01-01

    The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these...

  8. Computing the Expected Value and Variance of Geometric Measures

    DEFF Research Database (Denmark)

    Staals, Frank; Tsirogiannis, Constantinos

    2017-01-01

    distance (MPD), the squared Euclidean distance from the centroid, and the diameter of the minimum enclosing disk. We also describe an efficient (1-e)-approximation algorithm for computing the mean and variance of the mean pairwise distance. We implemented three of our algorithms and we show that our...

  9. Estimation of the additive and dominance variances in South African ...

    African Journals Online (AJOL)

    The objective of this study was to estimate dominance variance for number born alive (NBA), 21-day litter weight (LWT21) and interval between parities (FI) in South African Landrace pigs. A total of 26223 NBA, 21335 LWT21 and 16370 FI records were analysed. Bayesian analysis via Gibbs sampling was used to estimate ...

  10. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom); Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy); Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals, to model outputs, to its implications for modelling the firing patterns of single neurons.

  11. A Visual Model for the Variance and Standard Deviation

    Science.gov (United States)

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
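
    Numerically, the model reads as follows: each squared deviation is the area of a square, the variance is the average of those areas, and the standard deviation is the side length of that average square. The data below are hypothetical.

```python
from math import sqrt

data = [2, 4, 4, 4, 5, 5, 7, 9]                 # hypothetical sample
mean = sum(data) / len(data)                    # 5.0
squares = [(x - mean) ** 2 for x in data]       # areas of the deviation squares
variance = sum(squares) / len(squares)          # the average square: 4.0
std_dev = sqrt(variance)                        # side of the average square: 2.0
print(variance, std_dev)
```

    (This is the population form, dividing by n; the sample form divides by n - 1 but the geometric picture is the same.)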

  12. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple

  13. Asymptotics of variance of the lattice point count

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří

    2008-01-01

    Roč. 58, č. 3 (2008), s. 751-758 ISSN 0011-4642 R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : point lattice * variance Subject RIV: BA - General Mathematics Impact factor: 0.210, year: 2008

  14. Vertical velocity variances and Reynold stresses at Brookhaven

    DEFF Research Database (Denmark)

    Busch, Niels E.; Brown, R.M.; Frizzola, J.A.

    1970-01-01

    Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind v...

  15. Estimates of variance components for postweaning feed intake and ...

    African Journals Online (AJOL)

    Mike

    2013-03-09

    transformation of RFIp and RDGp to z-scores (mean = 0.0, variance = 1.0) and then ... generation pedigree (n = 9 653) used for this analysis. ..... Nkrumah, J.D., Basarab, J.A., Wang, Z., Li, C., Price, M.A., Okine, E.K., Crews Jr., ...

  16. An observation on the variance of a predicted response in ...

    African Journals Online (AJOL)

    ... these properties and computational simplicity. To avoid overfitting, along with the obvious advantage of having a simpler equation, it is shown that the addition of a variable to a regression equation does not reduce the variance of a predicted response. Key words: Linear regression; Partitioned matrix; Predicted response ...

  17. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity

  18. The Threat of Common Method Variance Bias to Theory Building

    Science.gov (United States)

    Reio, Thomas G., Jr.

    2010-01-01

    The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…

  19. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Maguire, R.P.; Leenders, K.L.; Spyrou, N.M.

    2000-01-01

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images, maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images, as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  20. 40 CFR 268.44 - Variance from a treatment standard.

    Science.gov (United States)

    2010-07-01

    ... complete petition may be requested as needed to send to affected states and Regional Offices. (e) The... provide an opportunity for public comment. The final decision on a variance from a treatment standard will... than) the concentrations necessary to minimize short- and long-term threats to human health and the...

  1. Application of effective variance method for contamination monitor calibration

    International Nuclear Information System (INIS)

    Goncalez, O.L.; Freitas, I.S.M. de.

    1990-01-01

    In this report, the calibration of a thin window Geiger-Muller type monitor for alpha superficial contamination is presented. The calibration curve is obtained by the method of the least-squares fitting with effective variance. The method and the approach for the calculation are briefly discussed. (author)

  2. The VIX, the Variance Premium, and Expected Returns

    DEFF Research Database (Denmark)

    Osterrieder, Daniela Maria; Ventosa-Santaulària, Daniel; Vera-Valdés, Eduardo

    2018-01-01

    These problems are eliminated if risk is captured by the variance premium (VP) instead; it is unobservable, however. We propose a 2SLS estimator that produces consistent estimates without observing the VP. Using this method, we find a positive risk–return trade-off and long-run return predictability. Our...

  3. Some asymptotic theory for variance function smoothing | Kibua ...

    African Journals Online (AJOL)

    Simple selection of the smoothing parameter is suggested. Both homoscedastic and heteroscedastic regression models are considered. Keywords: Asymptotic, Smoothing, Kernel, Bandwidth, Bias, Variance, Mean squared error, Homoscedastic, Heteroscedastic. > East African Journal of Statistics Vol. 1 (1) 2005: pp. 9-22 ...

  4. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...

  5. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of a variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  6. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory for modelling the firing patterns of single neurons, and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory, ranging from input control signals, to model outputs, to its implications for modelling the firing patterns of single neurons.

  7. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    Science.gov (United States)

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable alone is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less desirable and less flexible analytical techniques, such as linear interpolation.
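
    The Box-Cox transformation referred to above has the standard form (y^λ − 1)/λ for λ ≠ 0 and log y for λ = 0. A minimal sketch, on synthetic data rather than the paper's toxicity data sets, shows the λ = 0 (log) case stabilising variance when spread grows in proportion to the mean:

```python
from math import log

def box_cox(y, lam):
    """Box-Cox power transformation; requires y > 0."""
    return log(y) if lam == 0 else (y ** lam - 1) / lam

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Synthetic groups whose spread grows with their mean (multiplicative error)
low  = [1.0, 1.1, 0.9, 1.05, 0.95]
high = [10.0, 11.0, 9.0, 10.5, 9.5]
print(variance(low), variance(high))        # strongly unequal variances

t_low  = [box_cox(y, 0) for y in low]       # log transform (lambda = 0)
t_high = [box_cox(y, 0) for y in high]
print(variance(t_low), variance(t_high))    # nearly equal after transform
```

    In practice λ is chosen from the data (e.g. by maximum likelihood); the fixed λ = 0 here is only for illustration.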

  8. Molecular variance of the Tunisian almond germplasm assessed by ...

    African Journals Online (AJOL)

    The genetic variance analysis of 82 almond (Prunus dulcis Mill.) genotypes was performed using ten genomic simple sequence repeats (SSRs). A total of 50 genotypes from Tunisia including local landraces identified while prospecting the different sites of Bizerte and Sidi Bouzid (Northern and central parts) which are the ...

  9. Starting design for use in variance exchange algorithms | Iwundu ...

    African Journals Online (AJOL)

    A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...

  10. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to correct or incorrect answers to each question. The second set of data concerns the delay to obtain the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parent's educational level, field of study, etc.).
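
    The orthogonal split the abstract describes can be illustrated with the familiar between/within identity for a single dyadic character (the numbers below are toy values, not the entrance-examination data):

```python
import numpy as np

# Toy scores and one dyadic character (e.g. correct/incorrect on one question).
y = np.array([12.0, 15.0, 14.0, 18.0, 20.0, 22.0, 25.0, 24.0])
x = np.array([0, 0, 0, 0, 1, 1, 1, 1])

total = y.var()
groups = [y[x == k] for k in (0, 1)]
weights = [len(g) / len(y) for g in groups]
# Between component: variance of the conditional means.
between = sum(wt * (g.mean() - y.mean()) ** 2 for wt, g in zip(weights, groups))
# Within component: mean of the conditional variances.
within = sum(wt * g.var() for wt, g in zip(weights, groups))

print(total, between, within)        # total = between + within exactly
```

    Iterating this split over several characters, in some chosen order, yields the sum of orthogonal components discussed in the paper.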

  11. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...
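
    The ill-posed regime the abstract refers to is easy to reproduce: with far more dimensions than samples, the leading principal component of pure noise carries a training-set variance well above what it generalizes to. The simulation below only demonstrates the inflation, not the paper's Hold-out or LOO correction:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 20, 500                       # far fewer samples than dimensions
train = rng.normal(size=(n, p))
test = rng.normal(size=(1000, p))    # fresh data from the same distribution

train = train - train.mean(axis=0)
_, s, vt = np.linalg.svd(train, full_matrices=False)
pc1 = vt[0]                          # leading principal direction of the noise

var_train = (train @ pc1).var()
var_test = (test @ pc1).var()
print(var_train, var_test)           # training variance is strongly inflated
```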

  12. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    Milk yield data (n = 12307) from 116 Holstein-Friesian herds were grouped into three production environments based on mean and standard deviation of herd 305-day milk yield and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML ...

  13. Effects of Diversification of Assets on Mean and Variance | Jayeola ...

    African Journals Online (AJOL)

    Diversification is a means of minimizing risk and maximizing returns by investing in a variety of assets of the portfolio. This paper is written to determine the effects of diversification of three types of Assets; uncorrelated, perfectly correlated and perfectly negatively correlated assets on mean and variance. To go about this, ...

  14. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance pose face features is challenging. We provide a solution to this problem using perspective projection for variance pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be identified using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the rest of the poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.

  15. On zero variance Monte Carlo path-stretching schemes

    International Nuclear Information System (INIS)

    Lux, I.

    1983-01-01

    A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game.
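
    A one-dimensional caricature of the exponential transform (illustrative only, not the full zero-variance scheme derived in the paper): estimate a deep-penetration probability by sampling path lengths from a stretched exponential density and carrying the likelihood ratio as a weight. The depth a and stretching rate b are assumed values:

```python
import numpy as np

rng = np.random.default_rng(5)
a, n = 10.0, 100_000                 # penetration depth and sample count

# Analog game: score 1 when an exponential path length exceeds depth a.
analog = (rng.exponential(1.0, n) > a).mean()   # very noisy: few scoring paths

# Stretched game: sample from q(x) = b*exp(-b*x) with b < 1 (paths stretched
# toward large depths) and weight each sample by p(x)/q(x).
b = 0.1
x = rng.exponential(1.0 / b, n)
w = np.exp(-x) / (b * np.exp(-b * x))
stretched = np.where(x > a, w, 0.0).mean()

print(analog, stretched, np.exp(-a))   # exact answer: exp(-10) ~ 4.54e-05
```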

  16. A mean-variance frontier in discrete and continuous time

    NARCIS (Netherlands)

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation

  17. Hedging with stock index futures: downside risk versus the variance

    NARCIS (Netherlands)

    Brouwer, F.; Nat, van der M.

    1995-01-01

    In this paper we investigate hedging a stock portfolio with stock index futures. Instead of defining the hedge ratio as the minimum variance hedge ratio, we consider several measures of downside risk: the semivariance according to Markowitz [1959] and the various lower partial moments according to

  18. The variance quadtree algorithm: use for spatial sampling design

    NARCIS (Netherlands)

    Minasny, B.; McBratney, A.B.; Walvoort, D.J.J.

    2007-01-01

    Spatial sampling schemes are mainly developed to determine sampling locations that can cover the variation of environmental properties in the area of interest. Here we proposed the variance quadtree algorithm for sampling in an area with prior information represented as ancillary or secondary

  19. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative

  20. Variance component and heritability estimates of early growth traits ...

    African Journals Online (AJOL)

    as selection criteria for meat production in sheep (Anon, 1970; Olson et al., 1976; Lasslo et al., 1985; Badenhorst et al., 1991). If these traits are to be included in a breeding programme, accurate estimates of breeding values will be needed to optimize selection programmes. This requires a knowledge of variance and co-

  1. Variances in consumer prices of selected food items among ...

    African Journals Online (AJOL)

    The study focused on the determination of variances among consumer prices of rice (local white), beans (white) and garri (yellow) in Watts, Okurikang and 8 Miles markets in the southern zone of Cross River State. A completely randomized design was used to test the research hypothesis. Comparing the consumer prices of rice, ...

  2. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

    Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016

  3. Variance in exposed perturbations impairs retention of visuomotor adaptation.

    Science.gov (United States)

    Canaveral, Cesar Augusto; Danion, Frédéric; Berrigan, Félix; Bernier, Pierre-Michel

    2017-11-01

    Sensorimotor control requires an accurate estimate of the state of the body. The brain optimizes state estimation by combining sensory signals with predictions of the sensory consequences of motor commands using a forward model. Given that both sensory signals and predictions are uncertain (i.e., noisy), the brain optimally weights the relative reliance on each source of information during adaptation. In support, it is known that uncertainty in the sensory predictions influences the rate and generalization of visuomotor adaptation. We investigated whether uncertainty in the sensory predictions affects the retention of a new visuomotor relationship. This was done by exposing three separate groups to a visuomotor rotation whose mean was common at 15° counterclockwise but whose variance around the mean differed (i.e., SD of 0°, 3.2°, or 4.5°). Retention was assessed by measuring the persistence of the adapted behavior in a no-vision phase. Results revealed that mean reach direction late in adaptation was similar across groups, suggesting it depended mainly on the mean of exposed rotations and was robust to differences in variance. However, retention differed across groups, with higher levels of variance being associated with a more rapid reversion toward nonadapted behavior. A control experiment ruled out the possibility that differences in retention were accounted for by differences in success rates. Exposure to variable rotations may have increased the uncertainty in sensory predictions, making the adapted forward model more labile and susceptible to change or decay. NEW & NOTEWORTHY The brain predicts the sensory consequences of motor commands through a forward model. These predictions are subject to uncertainty. We use visuomotor adaptation and modulate uncertainty in the sensory predictions by manipulating the variance in exposed rotations. Results reveal that variance does not influence the final extent of adaptation but selectively impairs the retention of

  4. 2000 Physical Acoustics Summer School (PASS 00). Volume II: Transparencies

    National Research Council Canada - National Science Library

    Bass, Henry

    2001-01-01

    .... Volume II of these proceedings contains copies of the transparencies used by the lecturers and Volume III contains background materials that were sent to student and discussion leader participants...

  5. Genetic and environmental variances of bone microarchitecture and bone remodeling markers: a twin study.

    Science.gov (United States)

    Bjørnerem, Åshild; Bui, Minh; Wang, Xiaofang; Ghasem-Zadeh, Ali; Hopper, John L; Zebaze, Roger; Seeman, Ego

    2015-03-01

    All genetic and environmental factors contributing to differences in bone structure between individuals mediate their effects through the final common cellular pathway of bone modeling and remodeling. We hypothesized that genetic factors account for most of the population variance of cortical and trabecular microstructure, in particular intracortical porosity and medullary size - void volumes (porosity), which establish the internal bone surface areas or interfaces upon which modeling and remodeling deposit or remove bone to configure bone microarchitecture. Microarchitecture of the distal tibia and distal radius and remodeling markers were measured for 95 monozygotic (MZ) and 66 dizygotic (DZ) white female twin pairs aged 40 to 61 years. Images obtained using high-resolution peripheral quantitative computed tomography were analyzed using StrAx1.0, a nonthreshold-based software that quantifies cortical matrix and porosity. Genetic and environmental components of variance were estimated under the assumptions of the classic twin model. The data were consistent with the proportion of variance accounted for by genetic factors being: 72% to 81% (standard errors ∼18%) for the distal tibial total, cortical, and medullary cross-sectional area (CSA); 67% and 61% for total cortical porosity, before and after adjusting for total CSA, respectively; 51% for trabecular volumetric bone mineral density (vBMD; all p accounted for 47% to 68% of the variance (all p ≤ 0.001). Cross-twin cross-trait correlations between tibial cortical porosity and medullary CSA were higher for MZ (rMZ  = 0.49) than DZ (rDZ  = 0.27) pairs before (p = 0.024), but not after (p = 0.258), adjusting for total CSA. For the remodeling markers, the data were consistent with genetic factors accounting for 55% to 62% of the variance. We infer that middle-aged women differ in their bone microarchitecture and remodeling markers more because of differences in their genetic factors than

  6. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    Science.gov (United States)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors, using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter, determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  7. PORTFOLIO COMPOSITION WITH MINIMUM VARIANCE: COMPARISON WITH MARKET BENCHMARKS

    Directory of Open Access Journals (Sweden)

    Daniel Menezes Cavalcante

    2016-07-01

    Full Text Available Portfolio optimization strategies are advocated as being able to allow the composition of stock portfolios that provide returns above market benchmarks. This study aims to determine whether, in fact, portfolios based on the minimum variance strategy, optimized by Modern Portfolio Theory, are able to achieve earnings above market benchmarks in Brazil. Time series of 36 securities traded on the BM&FBOVESPA were analyzed over a long period (1999-2012), with sample windows of 12, 36, 60 and 120 monthly observations. The results indicated that the minimum variance portfolio's performance is superior to market benchmarks (CDI and IBOVESPA) in terms of return and risk-adjusted return, especially in medium- and long-term investment horizons.
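
    For an unconstrained portfolio, the minimum variance strategy referred to above has the closed form w = S^-1 1 / (1' S^-1 1). The covariance matrix below is made up for illustration, not estimated from BM&FBOVESPA data:

```python
import numpy as np

# Made-up covariance matrix for three assets (illustration only).
S = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])

ones = np.ones(len(S))
w = np.linalg.solve(S, ones)         # S^{-1} 1
w /= w.sum()                         # normalize to the budget constraint

print(w, w @ S @ w)                  # weights and the minimized variance
```

    In a study like the one above, S would be re-estimated on each rolling sample window (12 to 120 monthly observations) before rebalancing.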

  8. Compounding approach for univariate time series with nonstationary variances

    Science.gov (United States)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
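
    A minimal simulation of the compounding idea, under an assumed chi-square parameter distribution for the local variances (the paper determines this distribution empirically): fluctuating variances turn locally Gaussian windows into a heavy-tailed long-horizon distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed distribution for the local variances (chi-square scaled to mean 1).
local_var = rng.chisquare(3, size=500) / 3.0
# Locally Gaussian windows of 100 points, each with its own variance.
samples = rng.normal(0.0, np.sqrt(np.repeat(local_var, 100)))

def excess_kurtosis(v):
    z = (v - v.mean()) / v.std()
    return (z ** 4).mean() - 3.0

print(excess_kurtosis(samples))      # > 0: heavier tails than a single Gaussian
```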

  9. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning, including the case of Support Vector Machines (SVMs), and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  10. Robust LOD scores for variance component-based linkage analysis.

    Science.gov (United States)

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  11. Response variance in functional maps: neural darwinism revisited.

    Directory of Open Access Journals (Sweden)

    Hirokazu Takahashi

    Full Text Available The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  12. Response variance in functional maps: neural darwinism revisited.

    Science.gov (United States)

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  13. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the number of assets and T the length of the time series of returns. The optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.

  14. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course

  15. Metallothionein (MT)-III

    DEFF Research Database (Denmark)

    Carrasco, J; Giralt, M; Molinero, A

    1999-01-01

    Metallothionein-III is a low molecular weight, heavy-metal binding protein expressed mainly in the central nervous system. First identified as a growth inhibitory factor (GIF) of rat cortical neurons in vitro, it has subsequently been shown to be a member of the metallothionein (MT) gene family...... injected rats. The specificity of the antibody was also demonstrated in immunocytochemical studies by the elimination of the immunostaining by preincubation of the antibody with brain (but not liver) extracts, and by the results obtained in MT-III null mice. The antibody was used to characterize...... the putative differences between the rat brain MT isoforms, namely MT-I+II and MT-III, in the freeze lesion model of brain damage, and for developing an ELISA for MT-III suitable for brain samples. In the normal rat brain, MT-III was mostly present primarily in astrocytes. However, lectin staining indicated...

  16. Spatial analysis based on variance of moving window averages

    OpenAIRE

    Wu, B M; Subbarao, K V; Ferrandino, F J; Hao, J J

    2006-01-01

    A new method for analysing spatial patterns was designed based on the variance of moving window averages (VMWA), which can be directly calculated in geographical information systems or a spreadsheet program (e.g. MS Excel). Different types of artificial data were generated to test the method. Regardless of data types, the VMWA method correctly determined the mean cluster sizes. This method was also employed to assess spatial patterns in historical plant disease survey data encompassing both a...
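
    A sketch of the VMWA statistic in one dimension (the published method works on two-dimensional survey maps): clustered arrangements retain far more variance among window means than random rearrangements of the same values. The data here are synthetic, in the spirit of the artificial data the authors used for testing:

```python
import numpy as np

def vmwa(values, k):
    # Variance of moving-window averages with window size k.
    means = np.convolve(values, np.ones(k) / k, mode="valid")
    return means.var()

rng = np.random.default_rng(1)
clustered = np.repeat(rng.random(10), 10)   # runs of identical values
shuffled = rng.permutation(clustered)       # same values in random order

print(vmwa(clustered, 5), vmwa(shuffled, 5))
```

    Scanning k over a range of window sizes and locating where the two curves diverge most is one way to read off a characteristic cluster size.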

  17. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...

  18. Efficient Scores, Variance Decompositions and Monte Carlo Swindles.

    Science.gov (United States)

    1984-08-28

    Then a version of Pythagoras' theorem gives the variance decomposition (6.1): var T = var S + var(T - S), with all variances taken under P0. One way to see this is to note ... complete sufficient statistics for (β, σ), and that the standardized residuals are ancillary. Basu's sufficiency-ancillarity theorem

  19. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
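
    The quantity at the heart of variance-based methods such as Extended-FAST is the first-order index S_i = Var(E[Y|X_i]) / Var(Y). For a toy additive model (not the ASM2d model of the paper) it can be estimated by simple binning on the factor of interest:

```python
import numpy as np

rng = np.random.default_rng(4)
x1, x2 = rng.random(200_000), rng.random(200_000)
y = 4.0 * x1 + x2                    # toy model; the true index of x1 is 16/17

# First-order index S1 = Var(E[y|x1]) / Var(y), with the conditional
# expectation approximated by averaging y within bins of x1.
edges = np.linspace(0.0, 1.0, 51)
cond_means = [y[(x1 >= a) & (x1 < b)].mean() for a, b in zip(edges[:-1], edges[1:])]
s1 = np.var(cond_means) / y.var()
print(s1)
```

    For non-additive models the first-order indices sum to less than one, and the shortfall measures the interaction effects the abstract emphasises.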

  20. The mean and variance of phylogenetic diversity under rarefaction

    OpenAIRE

    Nipperess, David A.; Matsen, Frederick A.

    2013-01-01

    Phylogenetic diversity (PD) depends on sampling intensity, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for t...
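
    For comparison, the long-known species-richness analogue mentioned above has the closed form E[S_m] = sum_i (1 - C(N - N_i, m) / C(N, m)). A toy sample of four species (abundances are illustrative, not from the paper):

```python
from math import comb

counts = [5, 3, 1, 1]                # four species, N = 10 individuals
N = sum(counts)
m = 4                                # rarefied sample size

# Hurlbert's exact expectation for species richness under rarefaction:
# each species is counted with the probability that at least one of its
# individuals appears in a random subsample of size m.
expected = sum(1 - comb(N - n_i, m) / comb(N, m) for n_i in counts)
print(expected)                      # 548/210, about 2.61 species on average
```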

  1. On mean reward variance in semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2005-01-01

    Roč. 62, č. 3 (2005), s. 387-397 ISSN 1432-2994 R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.259, year: 2005

  2. Mean-Variance Analysis in a Multiperiod Setting

    OpenAIRE

    Frauendorfer, Karl; Siede, Heiko

    1997-01-01

    Similar to the classical Markowitz approach it is possible to apply a mean-variance criterion to a multiperiod setting to obtain efficient portfolios. To represent the stochastic dynamic characteristics necessary for modelling returns a process of asset returns is discretized with respect to time and space and summarized in a scenario tree. The resulting optimization problem is solved by means of stochastic multistage programming. The optimal solutions show equivalent structural properties as...

  3. Analytic solution to variance optimization with no short positions

    Science.gov (United States)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer

  4. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  5. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. Then the standard deviation of the effective multiplication factor is also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Also attention is paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)
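
    The unreliability of cycle-based variance estimates for small cycle counts can be seen in a toy simulation. For simplicity the cycle values below are independent Gaussians, ignoring the intercycle correlation present in real criticality calculations:

```python
import numpy as np

rng = np.random.default_rng(3)

def spread_of_std(n_cycles, trials=2000):
    # Draw `trials` independent sets of cycle k_eff values and measure how much
    # the sample standard deviation itself scatters from set to set.
    k = rng.normal(1.0, 0.01, size=(trials, n_cycles))
    return k.std(axis=1, ddof=1).std()

print(spread_of_std(5), spread_of_std(100))   # few cycles: much noisier sigma
```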

  6. Improved estimation of the variance in Monte Carlo criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)

    2008-07-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)

  7. A general transform for variance reduction in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Becker, T.L.; Larsen, E.W.

    2011-01-01

    This paper describes a general transform to reduce the variance of the Monte Carlo estimate of some desired solution, such as flux or biological dose. This transform implicitly includes many standard variance reduction techniques, including source biasing, collision biasing, the exponential transform for path-length stretching, and weight windows. Rather than optimizing each of these techniques separately or choosing semi-empirical biasing parameters based on the experience of a seasoned Monte Carlo practitioner, this General Transform unites all these variance reduction techniques to achieve one objective: a distribution of Monte Carlo particles that attempts to optimize the desired solution. Specifically, this transform allows Monte Carlo particles to be distributed according to the user's specification by using information obtained from a computationally inexpensive deterministic simulation of the problem. For this reason, we consider the General Transform to be a hybrid Monte Carlo/deterministic method. The numerical results confirm that the General Transform distributes particles according to the user-specified distribution and generally provides reasonable results for shielding applications. (author)

  8. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
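As a minimal illustration of modality-driven classification, the sketch below uses Sarle's bimodality coefficient as a stand-in indicator (an assumption of mine; the paper's classification scheme and confidence metrics are more elaborate). Values above roughly 5/9 suggest a bimodal or heavily skewed ensemble distribution that summary statistics alone would mask.

```python
import random

def bimodality_coefficient(xs):
    """Sarle's bimodality coefficient from sample skewness and
    excess kurtosis; values above ~5/9 hint at bimodality."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    g1 = m3 / m2 ** 1.5                  # skewness
    g2 = m4 / m2 ** 2 - 3.0              # excess kurtosis
    return (g1 ** 2 + 1) / (g2 + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

random.seed(1)
unimodal = [random.gauss(0.0, 1.0) for _ in range(2000)]
bimodal = [random.gauss(-3.0, 1.0) for _ in range(1000)] + \
          [random.gauss(+3.0, 1.0) for _ in range(1000)]
print(bimodality_coefficient(unimodal), bimodality_coefficient(bimodal))
```

Both samples here have high variance, but only the second is bimodal; a per-location indicator like this is what lets a visualization distinguish divergent ensemble trends from simple spread.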

  9. A proxy for variance in dense matching over homogeneous terrain

    Science.gov (United States)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics has brought highly autonomous UAV mapping solutions to the market. These systems have great potential for geophysical research, due to their mobility and ease of operation. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added to the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated relative to the intensity signal of the topography (SNR), and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, which resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

  10. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Full Text Available Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements, and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a squared coefficient of variation (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CV_r²) for comparison with our estimate of noise-free or 'true' heterogeneity (CV_t²). We found that CV_t² was only 5.4% higher than CV_r². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CV_t² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CV_t² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CV_t² was evaluated using three repeated PET scans from five subjects. Individual CV_t² were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CV_t² in PET scans, and may be useful for similar statistical problems in experimental data.
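The linear relationship between total variance and 1/n described above can be demonstrated with a toy example (synthetic data, not PET; the signal and noise levels are assumptions for illustration): fitting CV² against 1/n and extrapolating to 1/n → 0 recovers the noise-free heterogeneity as the intercept.

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def cv2(values):
    """Squared coefficient of variation: variance / mean^2."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values) / m ** 2

random.seed(2)
true_signal = [random.gauss(100.0, 10.0) for _ in range(5000)]  # noise-free CV^2 ~ 0.01
inv_n, total_cv2 = [], []
for n in [1, 2, 4, 8, 16]:
    # averaging n measurements scales the noise standard deviation by 1/sqrt(n)
    noisy = [v + random.gauss(0.0, 20.0 / n ** 0.5) for v in true_signal]
    inv_n.append(1.0 / n)
    total_cv2.append(cv2(noisy))
slope, noise_free_cv2 = fit_line(inv_n, total_cv2)
print(noise_free_cv2)  # intercept should be close to the noise-free CV^2 of 0.01
```

The slope of the fit is the noise contribution per measurement, normalized by the squared mean, mirroring the decomposition used in the paper.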

  11. On the noise variance of a digital mammography system

    International Nuclear Information System (INIS)

    Burgess, Arthur

    2004-01-01

    A recent paper by Cooper et al. [Med. Phys. 30, 2614-2621 (2003)] contains some apparently anomalous results concerning the relationship between pixel variance and x-ray exposure for a digital mammography system. They found an unexpected peak in a display domain pixel variance plot as a function of 1/mAs (their Fig. 5), with a decrease in the range corresponding to high display data values, i.e., low x-ray exposures. As they pointed out, if the detector response is linear in exposure and the transformation from raw to display data scales is logarithmic, then pixel variance should be a monotonically increasing function in the figure. They concluded that the total system transfer curve, between input exposure and display image data values, is not logarithmic over the full exposure range. They separated the data analysis into two regions and plotted the logarithm of display image pixel variance as a function of the logarithm of the mAs used to produce the phantom images. They found a slope of minus one for high mAs values and concluded that the transfer function is logarithmic in this region. They found a slope of 0.6 for the low mAs region and concluded that the transfer curve was neither linear nor logarithmic for low exposure values. It is known that the digital mammography system investigated by Cooper et al. has a linear relationship between exposure and raw data values [Vedantham et al., Med. Phys. 27, 558-567 (2000)]. The purpose of this paper is to show that the variance effect found by Cooper et al. (their Fig. 5) arises because the transformation from the raw data scale (14 bits) to the display scale (12 bits), for the digital mammography system they investigated, is not logarithmic for raw data values less than about 300 (display data values greater than about 3300). At low raw data values the transformation is linear and prevents over-ranging of the display data scale. Parametric models for the two transformations will be presented. Results of pixel

  12. An Efficient SDN Load Balancing Scheme Based on Variance Analysis for Massive Mobile Users

    Directory of Open Access Journals (Sweden)

    Hong Zhong

    2015-01-01

    Full Text Available In a traditional network, server load balancing is used to satisfy the demand for high data volumes. The technique requires large capital investment while offering poor scalability and flexibility, which makes it difficult to support the highly dynamic workload demands of massive mobile users. To solve these problems, this paper analyses the principles of software-defined networking (SDN) and presents a new probabilistic method of load balancing based on variance analysis. The method can be used to dynamically manage traffic flows for supporting massive mobile users in SDN networks. The paper proposes a solution using the OpenFlow virtual switching technology instead of the traditional hardware switching technology. An SDN controller monitors the data traffic of each port by means of variance analysis and provides a probability-based selection algorithm to redirect traffic dynamically with the OpenFlow technology. Compared with the existing load balancing methods designed for traditional networks, this solution has lower cost, higher reliability, and greater scalability, satisfying the needs of mobile users.
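A schematic of probability-based redirection (my own minimal sketch, not the paper's algorithm): new flows are assigned to servers with probability inversely proportional to each server's current load, which drives the load variance down over time.

```python
import random
import statistics

def selection_probabilities(loads):
    """Probability inversely proportional to current load, so
    heavier servers attract fewer new flows."""
    inv = [1.0 / (load + 1.0) for load in loads]  # +1 avoids division by zero
    total = sum(inv)
    return [w / total for w in inv]

def pick_server(loads, rng):
    probs = selection_probabilities(loads)
    return rng.choices(range(len(loads)), weights=probs)[0]

rng = random.Random(3)
loads = [50.0, 10.0, 0.0, 40.0]   # initial imbalance; variance = 425
for _ in range(1000):
    idx = pick_server(loads, rng)
    loads[idx] += 1.0             # each new flow adds one unit of load
print(statistics.pvariance(loads))
```

In an SDN setting the controller would read per-port byte counters instead of the `loads` list and install OpenFlow rules for the chosen server; here the variance of the final loads is simply printed to show the balancing effect.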

  13. Depressive status explains a significant amount of the variance in COPD assessment test (CAT) scores.

    Science.gov (United States)

    Miravitlles, Marc; Molina, Jesús; Quintano, José Antonio; Campuzano, Anna; Pérez, Joselín; Roncero, Carlos

    2018-01-01

    COPD assessment test (CAT) is a short, easy-to-complete health status tool that has been incorporated into the multidimensional assessment of COPD in order to guide therapy; therefore, it is important to understand the factors determining CAT scores. This is a post hoc analysis of a cross-sectional, observational study conducted in respiratory medicine departments and primary care centers in Spain with the aim of identifying the factors determining CAT scores, focusing particularly on the cognitive status measured by the Mini-Mental State Examination (MMSE) and levels of depression measured by the short Beck Depression Inventory (BDI). A total of 684 COPD patients were analyzed; 84.1% were men, the mean age of patients was 68.7 years, and the mean forced expiratory volume in 1 second (%) was 55.1%. Mean CAT score was 21.8. CAT scores correlated with the MMSE score (Pearson's coefficient r = -0.371) and the BDI (r = 0.620), both p values significant. The full model was associated with CAT scores and explained 45% of the variability. However, a model including only MMSE and BDI scores explained up to 40%, and BDI alone explained 38% of the CAT variance. CAT scores are associated with clinical variables of severity of COPD. However, cognitive status and, in particular, the level of depression explain a larger percentage of the variance in the CAT scores than the usual COPD clinical severity variables.

  14. Research in collegiate mathematics education III

    CERN Document Server

    Arcavi, A; Kaput, Jim; Dubinsky, Ed; Dick, Thomas

    1998-01-01

    Volume III of Research in Collegiate Mathematics Education (RCME) presents state-of-the-art research on understanding, teaching, and learning mathematics at the post-secondary level. This volume contains information on methodology and research concentrating on these areas of student learning: Problem solving. Included here are three different articles analyzing aspects of Schoenfeld's undergraduate problem-solving instruction. The articles provide new detail and insight on a well-known and widely discussed course taught by Schoenfeld for many years. Understanding concepts. These articles fe

  15. White River Falls Fish Passage Project, Tygh Valley, Oregon : Final Technical Report, Volume III, Appendix B, Fisheries Report; Appendix C, Engineering Alternative Evaluation; Appendix D, Benefit/Cost Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Oregon. Dept. of Fish and Wildlife; Mount Hood National Forest (Or.)

    1985-06-01

    Studies were conducted to describe current habitat conditions in the White River basin above White River Falls and to evaluate the potential to produce anadromous fish. An inventory of spawning and rearing habitats, irrigation diversions, and enhancement opportunities for anadromous fish in the White River drainage was conducted. Survival of juvenile fish at White River Falls was estimated by releasing juvenile chinook and steelhead above the falls during high and low flow periods and recapturing them below the falls in 1983 and 1984. Four alternatives to provide upstream passage for adult salmon and steelhead were developed to a predesign level. The cost of adult passage and the estimated run size of anadromous fish were used to determine the benefit/cost of the preferred alternative. Possible effects of the introduction of anadromous fish on resident fish and on nearby Oak Springs Hatchery were evaluated. This included an inventory of resident species, a genetic study of native rainbow, and the identification of fish diseases in the basin. This volume contains appendices of habitat survey data, potential production, resident fish population data, upstream passage designs, and benefit/cost calculations. (ACR)

  16. BALTICA III. Plant condition and life management

    International Nuclear Information System (INIS)

    Hietanen, S.; Auerkari, P.

    1995-01-01

    The BALTICA III International Conference on Plant Condition and Life Management was held on June 6 - 8, 1995 on board the Silja Serenade on its cruise between Helsinki and Stockholm and at the Forest Lake Hotel Korpilampi in Espoo. BALTICA III provides a forum for the transfer of technology from applied research to practice. This is the second volume of the publications, which contains the presentations given at BALTICA III, Plant Condition and Life Management. A total of 45 articles report recent experience in plant condition and life management. The conference focuses on recent applications that have been demonstrated for the benefit of safe and economical operation of power plants. A practical approach is emphasised, including presentations that aim to provide insight into new techniques, improvements in assessment methodologies, and maintenance strategies. Compared to earlier occasions in the BALTICA series, a new aspect is the application of knowledge-based systems in the service of power plant life management. (orig.)

  17. NNDSS - Table III. Tuberculosis

    Data.gov (United States)

    U.S. Department of Health & Human Services — NNDSS - Table III. Tuberculosis - 2018.This Table includes total number of cases reported in the United States, by region and by states, in accordance with the...

  18. NNDSS - Table III. Tuberculosis

    Data.gov (United States)

    U.S. Department of Health & Human Services — NNDSS - Table III. Tuberculosis - 2017.This Table includes total number of cases reported in the United States, by region and by states, in accordance with the...

  19. Workshop 96. Part III

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-31

    Part III of the proceedings contain 155 contributions in various fields of science and technology including nuclear engineering, environmental science, and biomedical engineering. Out of these, 10 were selected to be inputted in INIS. (P.A.).

  20. Workshop 96. Part III

    International Nuclear Information System (INIS)

    1995-12-01

    Part III of the proceedings contain 155 contributions in various fields of science and technology including nuclear engineering, environmental science, and biomedical engineering. Out of these, 10 were selected to be inputted in INIS. (P.A.)

  1. Eddy current manual, volume 2

    International Nuclear Information System (INIS)

    Cecco, V.S.; Van Drunen, G.; Sharp, F.L.

    1984-09-01

    This report on eddy current testing is divided into three sections: (a) Demonstration of Basic Principles, (b) Practical (Laboratory) Tests, and (c) Typical Certification Questions. It is intended to be used as a supplement to "Eddy Current Manual, Volume 1" (AECL-7523) during CSNDT Foundation Level II and III courses

  2. Fringe biasing: A variance reduction technique for optically thick meshes

    Energy Technology Data Exchange (ETDEWEB)

    Smedley-Stevenson, R. P. [AWE PLC, Aldermaston Reading, Berkshire, RG7 4PR (United Kingdom)

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
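The weight bookkeeping behind such a stratified emission source can be sketched in one dimension (a simplified illustration of stratified source sampling, not the implementation described above): the particle budget is split between interior and fringe strata, and statistical weights keep the total emitted energy unbiased.

```python
import random

def sample_emission(n_particles, cell_width, fringe_width, frac_fringe, rng):
    """Stratified emission sampling for a 1-D slab cell: a chosen
    fraction of the particle budget is forced into the fringe strata
    near the two cell faces; weights preserve the analog expectation."""
    particles = []  # (position, weight) pairs for uniform emission density
    n_fringe = int(n_particles * frac_fringe)
    n_interior = n_particles - n_fringe
    vol_fringe = 2 * fringe_width            # two fringe layers, one per face
    vol_interior = cell_width - vol_fringe
    for _ in range(n_interior):
        x = rng.uniform(fringe_width, cell_width - fringe_width)
        # weight = analog stratum probability / sampled stratum probability
        particles.append((x, (vol_interior / cell_width) / (n_interior / n_particles)))
    for _ in range(n_fringe):
        side = rng.choice([0.0, cell_width - fringe_width])
        x = side + rng.uniform(0.0, fringe_width)
        particles.append((x, (vol_fringe / cell_width) / (n_fringe / n_particles)))
    return particles

rng = random.Random(4)
parts = sample_emission(10000, cell_width=1.0, fringe_width=0.05,
                        frac_fringe=0.5, rng=rng)
total_weight = sum(w for _, w in parts)
print(total_weight / len(parts))  # mean weight stays ~1, i.e. unbiased
```

Here half the particles sample only 10% of the cell volume (the fringes), so each fringe particle carries a reduced weight of 0.2 while interior particles carry 1.8; only the allocation changes, not the expected emission.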

  3. Fringe biasing: A variance reduction technique for optically thick meshes

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R. P.

    2013-01-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)

  4. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  5. Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging

    OpenAIRE

    Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfying due to its high level of sidelobes and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV) is proposed to enhance the image quality compared to...

  6. A Note on the Kinks at the Mean Variance Frontier

    OpenAIRE

    Vörös, J.; Kriens, J.; Strijbosch, L.W.G.

    1997-01-01

    In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return; the converse of this statement is false. A sufficient condition for the existence of kinks at the efficient frontier is given here, and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean variance frontier.
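A brute-force illustration of the constrained problem this record studies (hypothetical three-asset data of my choosing; a grid search over the no-short-sales simplex stands in for a proper quadratic-programming solver): for a target expected return, find the minimum-variance portfolio with non-negative weights.

```python
def frontier_point(target_return, mu, cov):
    """Search the no-short-sales weight simplex (1% grid) for the
    minimum-variance portfolio earning at least `target_return`."""
    best = None
    for i in range(101):
        for j in range(101 - i):
            w = (i * 0.01, j * 0.01, 1.0 - 0.01 * (i + j))  # w >= 0, sums to 1
            ret = sum(wi * mi for wi, mi in zip(w, mu))
            if ret < target_return:
                continue
            var = sum(w[a] * w[b] * cov[a][b]
                      for a in range(3) for b in range(3))
            if best is None or var < best[0]:
                best = (var, w, ret)
    return best

# Hypothetical expected returns and covariance matrix for three assets.
mu = (0.05, 0.10, 0.15)
cov = ((0.04, 0.01, 0.00),
       (0.01, 0.09, 0.02),
       (0.00, 0.02, 0.16))
var, w, ret = frontier_point(0.10, mu, cov)
print(var, w)
```

Tracing `frontier_point` over a range of target returns sketches the constrained frontier; kinks of the type Dybvig discusses appear where the set of assets held at the optimum changes.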

  7. Variance reduction techniques in the simulation of Markov processes

    International Nuclear Information System (INIS)

    Lessi, O.

    1987-01-01

    We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to perform the analytical calculation of r and so it is reasonable to estimate r by a simulation process. A consistent estimator r(n) of r is obtained with respect to a chain with a countable state space. Suitably modifying the estimator r(n) of r one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of finite state space

  8. A guide to SPSS for analysis of variance

    CERN Document Server

    Levine, Gustav

    2013-01-01

    This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce

  9. Diffusion-Based Trajectory Observers with Variance Constraints

    DEFF Research Database (Denmark)

    Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo

    Diffusion-based trajectory observers have been recently proposed as a simple and efficient framework to solve diverse smoothing problems in underwater navigation. For instance, to obtain estimates of the trajectories of an underwater vehicle given position fixes from an acoustic positioning system...... of smoothing and is determined by resorting to trial and error. This paper presents a methodology to choose the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are presented...

  10. A Fay-Herriot Model with Different Random Effect Variances

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.

    2011-01-01

    Vol. 40, No. 5 (2011), p. 785-797. ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf

  11. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions

  12. Relationship between turbulence energy and density variance in the solar neighbourhood molecular clouds

    Science.gov (United States)

    Kainulainen, J.; Federrath, C.

    2017-11-01

    The relationship between turbulence energy and gas density variance is a fundamental prediction for turbulence-dominated media and is commonly used in analytic models of star formation. We determine this relationship for 15 molecular clouds in the solar neighbourhood. We use the line widths of the CO molecule as the probe of the turbulence energy (sonic Mach number, ℳs) and three-dimensional models to reconstruct the density probability distribution function (ρ-PDF) of the clouds, derived using near-infrared extinction and Herschel dust emission data, as the probe of the density variance (σs). We find no significant correlation between ℳs and σs among the studied clouds, but we cannot rule out a weak correlation either. In the context of turbulence-dominated gas, the range of the ℳs and σs values corresponds to the model predictions. The data cannot constrain whether the turbulence-driving parameter, b, and/or thermal-to-magnetic pressure ratio, β, vary among the sample clouds. Most clouds are not in agreement with field strengths stronger than given by β ≲ 0.05. A model with b²β/(β + 1) = 0.30 ± 0.06 provides an adequate fit to the cloud sample as a whole. Based on the average behaviour of the sample, we can rule out three regimes: (i) strong compression combined with a weak magnetic field (b ≳ 0.7 and β ≳ 3); (ii) weak compression (b ≲ 0.35); and (iii) a strong magnetic field (β ≲ 0.1). When we include independent magnetic field strength estimates in the analysis, the data rule out solenoidal driving (b < 0.4) for the majority of the solar neighbourhood clouds. However, most clouds have b parameters larger than unity, which indicates a discrepancy with the turbulence-dominated picture; we discuss the possible reasons for this.
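The relation being tested here is the commonly used magnetised-turbulence prediction σs² = ln(1 + b²ℳs²β/(β+1)); with the combined coefficient b²β/(β+1) fitted at 0.30, the logarithmic density spread follows directly from the Mach number (a one-line sketch, with the Mach-10 example being my own illustrative choice):

```python
import math

def sigma_s(mach, coeff=0.30):
    """Logarithmic density standard deviation from the standard
    turbulence relation sigma_s^2 = ln(1 + coeff * M^2), with
    coeff standing for the combined parameter b^2 * beta / (beta + 1)."""
    return math.sqrt(math.log(1.0 + coeff * mach ** 2))

# For a typical molecular-cloud sonic Mach number of 10:
print(sigma_s(10.0))
```

The monotonic dependence on ℳs is exactly why the observed lack of a clear ℳs–σs correlation is a noteworthy result.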

  13. Comparison with experiment of COMETHE III-L fuel rod behaviour predictions

    International Nuclear Information System (INIS)

    Vliet, J. van; Billaux, M.

    1983-01-01

    A comparison is presented between experimental results and COMETHE III-L fuel rod behaviour predictions. The first part of the paper focuses on mechanical aspects, the main experiments being AECL X-264 and Studsvik Interramp. The second part presents the results of a wide FGR benchmarking campaign, with reference to previous COMETHE versions. The variance between experiment and calculation decreased by a factor of four when the III-J version was improved into the III-L version. In conclusion, some COMETHE III-L calculations are presented to illustrate its capability of predicting fuel rod performance limits. (author)

  14. Parameter uncertainty effects on variance-based sensitivity analysis

    International Nuclear Information System (INIS)

    Yu, W.; Harris, T.J.

    2009-01-01

    In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.
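    The variance-attribution step can be illustrated with the classic pick-freeze (Sobol') estimator on a toy model; the model and all numbers below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2, a=2.0, b=1.0):
    # toy process model: x1, x2 are "regressive" inputs, a, b are parameters
    return a * x1 + b * x2**2

n = 200_000
x1, x2, x2b, x1b = rng.standard_normal((4, n))

y  = model(x1, x2)
# pick-freeze estimators: freeze one input, resample the other
y1 = model(x1, x2b)          # shares x1 -> first-order index of x1
y2 = model(x1b, x2)          # shares x2 -> first-order index of x2

var_y = y.var()
s1 = (np.mean(y * y1) - y.mean() * y1.mean()) / var_y
s2 = (np.mean(y * y2) - y.mean() * y2.mean()) / var_y
print(round(s1, 2), round(s2, 2))   # analytic values: 4/6 ≈ 0.67, 2/6 ≈ 0.33
```

    In the paper's sequential scheme one would first run such an analysis over the regressive variables with the parameters held at nominal values, and then repeat it with the parameters drawn from their uncertainty distributions.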

  15. Variance of indoor radon concentration: Major influencing factors

    Energy Technology Data Exchange (ETDEWEB)

    Yarmoshenko, I., E-mail: ivy@ecko.uran.ru [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Vasilyev, A.; Malinovsky, G. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Bossew, P. [German Federal Office for Radiation Protection (BfS), Berlin (Germany); Žunić, Z.S. [Institute of Nuclear Sciences “Vinca”, University of Belgrade (Serbia); Onischenko, A.; Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation)

    2016-01-15

    Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling patterns, sample sizes and durations of measurement, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: the area of the territory, the sample size, the characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of the controlling factors. Application of the developed approach to the characterization of the radon exposure of the world population is discussed. - Highlights: • Influence of lithosphere and anthroposphere on variance of indoor radon is found. • Level-by-level analysis reduces GSD by a factor of 1.9. • Worldwide GSD is underestimated.

  16. Variance Component Selection With Applications to Microbiome Taxonomic Data

    Directory of Open Access Journals (Sweden)

    Jing Zhai

    2018-03-01

    Full Text Available High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores the phylogenetic information. In this paper, we consider regression analysis that treats bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods such as the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV)-infected patients. We implement our method in the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.

  17. Worldwide variance in the potential utilization of Gamma Knife radiosurgery.

    Science.gov (United States)

    Hamilton, Travis; Dade Lunsford, L

    2016-12-01

    OBJECTIVE The role of Gamma Knife radiosurgery (GKRS) has expanded worldwide during the past 3 decades. The authors sought to evaluate whether experienced users vary in their estimates of its potential use. METHODS Sixty-six current Gamma Knife users from 24 countries responded to an electronic survey. They estimated the potential role of GKRS for benign and malignant tumors, vascular malformations, and functional disorders. These estimates were compared with published disease epidemiological statistics and the 2014 use reports provided by the Leksell Gamma Knife Society (16,750 cases). RESULTS Respondents reported no significant variation in the estimated use for many conditions for which GKRS is performed: meningiomas, vestibular schwannomas, and arteriovenous malformations. Significant variance in the estimated use of GKRS was noted for pituitary tumors, craniopharyngiomas, and cavernous malformations. For many current indications, the authors found significant variance among GKRS users based in the Americas, Europe, and Asia. Experts estimated that GKRS was used in only 8.5% of the 196,000 eligible cases in 2014. CONCLUSIONS Although there was a general worldwide consensus regarding many major indications for GKRS, significant variability was noted for several more controversial roles. This expert opinion survey also suggested that GKRS is significantly underutilized for many current diagnoses, especially in the Americas. Future studies should be conducted to investigate health care barriers to GKRS for many patients.

  18. Hidden temporal order unveiled in stock market volatility variance

    Directory of Open Access Journals (Sweden)

    Y. Shapira

    2011-06-01

    Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behavior. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not appear in the series of the daily returns themselves, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which are the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions with three different slopes.
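    The shuffling diagnostic described above can be sketched on synthetic returns with volatility clustering; the GARCH-style generator and all parameter values are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic daily returns with volatility clustering (GARCH(1,1)-style)
n, omega, alpha, beta = 20_000, 0.05, 0.08, 0.90
r = np.empty(n)
sigma2 = omega / (1.0 - alpha - beta)        # start at unconditional variance
for t in range(n):
    r[t] = rng.standard_normal() * np.sqrt(sigma2)
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

def segment_variance_autocorr(x, seg=20):
    """Lag-1 autocorrelation of the variances of consecutive segments."""
    v = x[: len(x) // seg * seg].reshape(-1, seg).var(axis=1)
    v = v - v.mean()
    return float((v[:-1] * v[1:]).sum() / (v * v).sum())

original = segment_variance_autocorr(r)
shuffled = segment_variance_autocorr(rng.permutation(r))
print(f"original: {original:.2f}  shuffled: {shuffled:.2f}")
```

    With these settings the original series shows clearly positive autocorrelation of segment variances, while the shuffled copy behaves like a random series, mirroring the paper's diagnostic.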

  19. Waste Isolation Pilot Plant no-migration variance petition

    International Nuclear Information System (INIS)

    1990-01-01

    Section 3004 of RCRA allows EPA to grant a variance from the land disposal restrictions when a demonstration can be made that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the disposal unit for as long as the waste remains hazardous. Specific requirements for making this demonstration are found in 40 CFR 268.6, and EPA has published a draft guidance document to assist petitioners in preparing a variance request. Throughout the course of preparing this petition, technical staff from DOE, EPA, and their contractors have met frequently to discuss and attempt to resolve issues specific to radioactive mixed waste and the WIPP facility. The DOE believes it meets or exceeds all requirements set forth for making a successful ''no-migration'' demonstration. The petition presents information under five general headings: (1) waste information; (2) site characterization; (3) facility information; (4) assessment of environmental impacts, including the results of waste mobility modeling; and (5) analysis of uncertainties. Additional background and supporting documentation is contained in the 15 appendices to the petition, as well as in an extensive addendum published in October 1989

  20. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies.

  1. MENENTUKAN PORTOFOLIO OPTIMAL MENGGUNAKAN MODEL CONDITIONAL MEAN VARIANCE

    Directory of Open Access Journals (Sweden)

    I GEDE ERY NISCAHYANA

    2016-08-01

    Full Text Available When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean-variance models are a suitable method for modeling the behavior of the stocks. In this thesis, the implementation of the conditional mean-variance model for autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) of FMII stock, 0.0473 (5%) of BNLI stock, 0% of SMDM stock, and 1% of SMGR stock.

  2. Variance decomposition-based sensitivity analysis via neural networks

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo

    2003-01-01

    This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques, which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximate algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.

  3. Concentration variance decay during magma mixing: a volcanic chronometer.

    Science.gov (United States)

    Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B

    2015-09-21

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process, and its decay (CVD) with time is an inevitable consequence of the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time series of high-temperature magma mixing experiments. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from the initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing, a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in the future in order to constrain typical "mixing to eruption" time lapses so that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
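    An exponential variance decay calibration can be inverted for the mingling-to-eruption time; a minimal sketch with a purely illustrative decay rate k (the paper's calibrated values are not reproduced here):

```python
import math

def mixing_time(var_initial, var_observed, k):
    """Invert the exponential concentration-variance decay
    var(t) = var_initial * exp(-k * t) for the mixing time t.
    k is the experimentally calibrated decay rate (illustrative here)."""
    return math.log(var_initial / var_observed) / k

# illustrative numbers only: variance dropped to 10% with k = 0.002 s^-1
t = mixing_time(1.0, 0.1, 0.002)
print(round(t / 60, 1), "minutes")  # ≈ 19.2 minutes
```

    The attraction of the approach is visible in the formula: only the variance ratio enters, so the unknown advective stirring history drops out once k is calibrated.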

  4. Mean-Variance-Validation Technique for Sequential Kriging Metamodels

    International Nuclear Information System (INIS)

    Lee, Tae Hee; Kim, Ho Sung

    2010-01-01

    The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. Although a leave-k-out cross-validation technique involves a considerably high computational cost, it cannot be used to measure the fidelity of metamodels. Recently, the mean 0 validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean 0 validation criterion may lead to premature termination of the sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, so it can be used to determine a stopping criterion for the sequential sampling of metamodels.

  5. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min-max criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation takes the point of view of signal energies and can handle everything from imperfect statistical assumptions to the absence of any a priori statistical assumptions. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential. (paper)

  6. Argentine Population Genetic Structure: Large Variance in Amerindian Contribution

    Science.gov (United States)

    Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.

    2011-01-01

    Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 individual members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution 19.4%, and the African contribution 2.5%. Similar results were found using the weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and the individual Amerindian contribution ranged from 1.5 to 84.5% across the 94 individual Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are studied in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183

  7. Report on the draft of the law No. 1253 concerning the Revamping and Expanding Domestic Electricity Supply. Volume III. Appendices and Table of abbreviations; Rapport sur le projet de loi (no. 1253) relatif a la modernisation et au developpement du service public de l'electricite. Tome III. Annexes et Table des sigles

    Energy Technology Data Exchange (ETDEWEB)

    Bataille, Christian [Assemblee Nationale, Paris (France)

    1999-02-11

    The third volume of the report on behalf of the Production and Exchange Commission on draft law No. 1253, concerning the revamping and expansion of the domestic electricity supply, contains the appendices. Appendix 1 presents Directive 96/92/EC of the European Parliament and of the Council of 19 December 1996, concerning common rules for the internal market in electricity. It contains the chapters titled: 1. Field of application and definitions; 2. General rules for the organization of the sector; 3. Production; 4. Operation of the transmission grid; 5. Operation of the distribution grid; 6. Unbundling and transparency of accounts; 7. Organization of access to the grid; 8. Final provisions. Appendix 2 gives Law No. 46-628 of 8 April 1946, as amended, on the nationalization of electricity and gas. The third appendix reproduces Decree No. 55-662 of 20 May 1955 concerning the relationships between the establishments covered by Articles 2 and 23 of the law of 8 April 1946 and autonomous producers of electric energy. Appendix 4 contains the opinion of the Council of State of 7 July 1994 regarding the diversification of EDF and GDF activities. The fifth appendix is a chronological list of the European negotiations concerning the opening of the electricity market (1987-1997). Finally, a list of the following abbreviations is given: ART, ATR, CNES, CRE, CTE, DNN, FACE, FPE, GRT, IEG, INB, PPI, RAG and SICAE.

  8. Spatially tuned normalization explains attention modulation variance within neurons.

    Science.gov (United States)

    Ni, Amy M; Maunsell, John H R

    2017-09-01

    Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803-813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375-1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. 
NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical

  9. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

    Full Text Available The ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionospheric field is stochastically stationary but do not take random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are known only up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10^16 electrons/m²), with an overall mean of 28.2 TECU. The proposed method produces more appropriate estimates whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation (around 3 TECU) than the others.
The residual results show that the interpolation precision of the
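    The prediction step of ordinary Kriging, which the proposed method builds on, can be sketched as follows; the exponential covariance and all numbers are illustrative, and, unlike the proposed method, the variance components here are simply fixed rather than estimated:

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, sill=1.0, rng_par=5.0):
    """Minimal ordinary-kriging predictor with an exponential covariance
    C(h) = sill * exp(-h / rng_par). Illustrative only: real TEC mapping
    would estimate the variogram (and variance components) from data."""
    def cov(a, b):
        h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-h / rng_par)

    n = len(z)
    # ordinary-kriging system: covariances plus the unbiasedness constraint
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xy, xy)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(xy, xy_new)[:, 0]
    lam = np.linalg.solve(A, b)
    return lam[:n] @ z

obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tec = np.array([20.0, 25.0, 22.0, 28.0])          # invented TECU values
print(round(ordinary_kriging(obs, tec, np.array([[0.5, 0.5]])), 2))
```

    Because no nugget term is included, the predictor honours the observations exactly; the paper's contribution is essentially to estimate the unknown scale factors of the signal and noise covariances before performing this solve.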

  10. Estimation of measurement variance in the context of environment statistics

    Science.gov (United States)

    Maiti, Pulakesh

    2015-02-01

    The object of environment statistics is to provide information on the environment, on its most important changes over time and across locations, and to identify the main factors that influence them. Ultimately, environment statistics are required to produce higher-quality statistical information. For this, timely, reliable and comparable data are needed. A lack of proper and uniform definitions and of unambiguous classifications poses serious problems in procuring qualitative data. These cause measurement errors. We consider the problem of estimating measurement variance so that measures may be adopted to improve the quality of data on environmental goods and services and on value statements in economic terms. The measurement technique considered here is that of employing personal interviewers, and the sampling considered is two-stage sampling.

  11. Risk Management - Variance Minimization or Lower Tail Outcome Elimination

    DEFF Research Database (Denmark)

    Aabo, Tom

    2002-01-01

    This paper illustrates the profound difference between a risk management strategy of variance minimization and a risk management strategy of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess on future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations in a corporate value-adding perspective. A cross-case study of blue-chip industrial companies partly supports the empirical use of a risk management strategy of lower tail outcome elimination but does not exclude other factors from (co-)driving the observations.

  12. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, the bounded relative error property. Some examples illustrate the results.
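    The rare-event difficulty, and the idea of approximating the (unavailable) zero-variance change of measure, can be illustrated with a toy tail-probability estimate; the tilted sampler below is a generic stand-in, not the method of the talk:

```python
import math
import random

random.seed(1)
t, n = 4.0, 100_000
true_p = 0.5 * math.erfc(t / math.sqrt(2))   # exact P(N(0,1) > 4) ≈ 3.17e-5

# naive Monte Carlo: the event is so rare that most samples never hit it
naive = sum(random.gauss(0, 1) > t for _ in range(n)) / n

# importance sampling: draw from N(t, 1) (a crude stand-in for the
# unavailable zero-variance measure) and reweight by the likelihood ratio
est = 0.0
for _ in range(n):
    x = random.gauss(t, 1)
    if x > t:
        est += math.exp(-t * x + t * t / 2)   # phi(x) / phi(x - t)
est /= n

print(f"exact {true_p:.3e}  naive {naive:.3e}  IS {est:.3e}")
```

    With the same sample budget, the tilted estimator lands within a few percent of the exact value, while the naive estimator is dominated by the handful (often zero) of hits it happens to see.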

  13. Batch variation between branchial cell cultures: An analysis of variance

    DEFF Research Database (Denmark)

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

    We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed... and introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results when we do not know a priori that something went wrong. The ANOVA is a very useful...
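    The batch factor's effect can be illustrated with a simple one-way ANOVA on simulated data; the paper used an expanded three-way design, and the numbers below are invented for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# simulated lipid-precursor incorporation rates for two cell-culture
# batches (illustrative values, not the paper's data): batch 2 runs higher
batch1 = rng.normal(loc=100.0, scale=8.0, size=12)
batch2 = rng.normal(loc=115.0, scale=8.0, size=12)

f, p = stats.f_oneway(batch1, batch2)
print(f"F = {f:.1f}, p = {p:.4f}")
```

    A significant batch main effect like this one is exactly what motivates keeping the data and adding batch as a factor, rather than discarding the "troublesome" culture.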

  14. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.

  15. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    Science.gov (United States)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work presents Ant Colony Optimization (ACO), initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now, so using heuristic algorithms is imperative. Numerical solutions are obtained for five analyses of weekly price data for the period March 1992 to September 1997 for the following indices: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.

  16. Minimum variance linear unbiased estimators of loss and inventory

    International Nuclear Information System (INIS)

    Stewart, K.B.

    1977-01-01

    The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data from a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program; could be used with a desk calculator and can be used in a recursive way from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs
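    The least-squares machinery described above can be illustrated with a toy generalized least squares fit. Everything below (the linear inventory-plus-loss model, the covariance V, the numbers) is an assumed stand-in for real accountability data, meant only to show the minimum-variance linear unbiased (BLUE) formula.

```python
import numpy as np

# Toy accountability model (assumed, for illustration only):
# y_t = inventory + loss * t + correlated measurement error.
rng = np.random.default_rng(1)
t = np.arange(1, 9, dtype=float)
X = np.column_stack([np.ones_like(t), t])   # design: [inventory, loss]
V = 0.5 * np.eye(8) + 0.1                   # assumed error covariance
y = X @ np.array([10.0, -0.3]) + rng.multivariate_normal(np.zeros(8), V)

# Generalized least squares, the BLUE when errors have covariance V:
# beta = (X' inv(V) X)^-1 X' inv(V) y
Vi = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
cov_beta = np.linalg.inv(X.T @ Vi @ X)      # minimum-variance covariance
print(beta)
print(np.sqrt(np.diag(cov_beta)))           # standard errors
```

    With V diagonal this reduces to weighted least squares, and with V proportional to the identity to ordinary least squares, matching the three computational routes the abstract mentions.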

  17. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

    Full Text Available In this paper, we present that how much the variances of the classical estimators, namely, maximum likelihood estimator and moment estimator deviate from the minimum variance bound while estimating for the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in the said consideration while maximum likelihood estimator attains minimum variance bound and becomes an attractive choice.

  18. Summary of Session III

    International Nuclear Information System (INIS)

    Furman, M.A.

    2002-01-01

    This is a summary of the talks presented in Session III, "Simulations of Electron-Cloud Build Up", of the Mini-Workshop on Electron-Cloud Simulations for Proton and Positron Beams (ECLOUD-02), held at CERN, 15-18 April 2002.

  19. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  20. ACCOUNTING FOR COSMIC VARIANCE IN STUDIES OF GRAVITATIONALLY LENSED HIGH-REDSHIFT GALAXIES IN THE HUBBLE FRONTIER FIELD CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Brant E.; Stark, Dan P. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Ellis, Richard S. [Department of Astronomy, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Dunlop, James S.; McLure, Ross J.; McLeod, Derek, E-mail: brant@email.arizona.edu [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom)

    2014-12-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ∼35% at redshift z ∼ 7 to ≳ 65% at z ∼ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program.

  1. ACCOUNTING FOR COSMIC VARIANCE IN STUDIES OF GRAVITATIONALLY LENSED HIGH-REDSHIFT GALAXIES IN THE HUBBLE FRONTIER FIELD CLUSTERS

    International Nuclear Information System (INIS)

    Robertson, Brant E.; Stark, Dan P.; Ellis, Richard S.; Dunlop, James S.; McLure, Ross J.; McLeod, Derek

    2014-01-01

    Strong gravitational lensing provides a powerful means for studying faint galaxies in the distant universe. By magnifying the apparent brightness of background sources, massive clusters enable the detection of galaxies fainter than the usual sensitivity limit for blank fields. However, this gain in effective sensitivity comes at the cost of a reduced survey volume and, in this Letter, we demonstrate that there is an associated increase in the cosmic variance uncertainty. As an example, we show that the cosmic variance uncertainty of the high-redshift population viewed through the Hubble Space Telescope Frontier Field cluster Abell 2744 increases from ∼35% at redshift z ∼ 7 to ≳ 65% at z ∼ 10. Previous studies of high-redshift galaxies identified in the Frontier Fields have underestimated the cosmic variance uncertainty that will affect the ultimate constraints on both the faint-end slope of the high-redshift luminosity function and the cosmic star formation rate density, key goals of the Frontier Field program

  2. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk

  3. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  4. Variance bias analysis for the Gelbard's batch method

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Uk; Shim, Hyung Jin [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the variance and the bias are derived analytically for the case when Gelbard's batch method is applied, and the real variance estimated from this bias is compared with the real variance calculated from replicas. When the batch method is applied to calculate the sample variance, covariance terms between tallies within a batch are eliminated from the bias. With the 2-by-2 fission matrix problem, we could calculate the real variance regardless of whether the batch method was applied; however, as the batch size grew, the standard deviation of the real variance increased. In a Monte Carlo estimation, the sample variance serves as the statistical uncertainty of the estimate, but this value is smaller than the real variance because the sample variance is biased. To reduce this bias, Gelbard devised what is called Gelbard's batch method, and it is well established that the sample variance approaches the real variance (i.e., the bias is reduced) when the batch method is applied. This fact is well known in the Monte Carlo field; however, until now, no one had given an analytical interpretation of it.
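    A minimal illustration of why batching reduces the bias: with positively correlated tallies (simulated here as an AR(1) chain, an assumption for the demo, not the paper's setup), the naive sample variance of the mean omits the covariance terms, while the variance of batch means recovers those with lag shorter than the batch length.

```python
import numpy as np

# Tallies with positive serial correlation, simulated as an AR(1) chain.
rng = np.random.default_rng(3)
n, rho = 100_000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

# Naive estimate of Var(mean): ignores covariance between tallies, biased low.
naive_var_mean = x.var(ddof=1) / n

# Gelbard-style batching: use the variance of batch means instead.
batch_means = x.reshape(-1, 100).mean(axis=1)        # 1000 batches of 100
batch_var_mean = batch_means.var(ddof=1) / batch_means.size

print(naive_var_mean, batch_var_mean)                # batch estimate is larger
```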

  5. Waste Isolation Pilot Plant No-Migration Variance Petition

    International Nuclear Information System (INIS)

    1990-03-01

    The purpose of the WIPP No-Migration Variance Petition is to demonstrate, according to the requirements of RCRA section 3004(d) and 40 CFR section 268.6, that, to a reasonable degree of certainty, there will be no migration of hazardous constituents from the facility for as long as the wastes remain hazardous. The DOE submitted the petition to the EPA in March 1989. Upon completion of its initial review, the EPA provided to DOE a Notice of Deficiencies (NOD). DOE responded to the EPA's NOD and met with the EPA's reviewers of the petition several times during 1989. In August 1989, EPA requested that DOE submit significant additional information addressing a variety of topics, including: waste characterization, ground water hydrology, geology and dissolution features, monitoring programs, the gas generation test program, and other aspects of the project. This additional information was provided to EPA in January 1990 when DOE submitted Revision 1 of the Addendum to the petition. For clarity and ease of review, this document includes all of these submittals, and the information has been updated where appropriate. This document is divided into the following sections: Introduction, 1.0; Facility Description, 2.0; Waste Description, 3.0; Site Characterization, 4.0; Environmental Impact Analysis, 5.0; Prediction and Assessment of Infrequent Events, 6.0; and References, 7.0.

  6. Mean-Variance Portfolio Selection with Margin Requirements

    Directory of Open Access Journals (Sweden)

    Yuan Zhou

    2013-01-01

    Full Text Available We study the continuous-time mean-variance portfolio selection problem in the situation when investors must pay margin for short selling. The problem is essentially a nonlinear stochastic optimal control problem because the coefficients of the positive and negative parts of the control variables are different, so we cannot apply the results for the stochastic linear-quadratic (LQ) problem; moreover, the solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation is not smooth. Li et al. (2002) studied the case when short selling is prohibited and therefore only needed to consider the positive part of the control variables, whereas we need to handle both the positive and the negative part. The main difficulty is that the positive part and the negative part are not independent, so the previous results are not directly applicable. By decomposing the problem into several subproblems, we figure out the solutions of the HJB equation in two disjoint regions and then prove it is the viscosity solution of the HJB equation. Finally, we formulate the solution of the optimal portfolio and the efficient frontier. We also present two examples showing how different margin rates affect the optimal solutions and the efficient frontier.

  7. Beyond the GUM: variance-based sensitivity analysis in metrology

    International Nuclear Information System (INIS)

    Lira, I

    2016-01-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand. (paper)
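    For a linear measurement model, the correspondence between the GUM's law of propagation and first-order variance-based indices can be shown in a few lines. The model Y = 2·X1 + 0.5·X2 - X3 and the standard uncertainties below are arbitrary illustrative choices.

```python
import numpy as np

# Linear model with independent inputs; coefficients and uncertainties assumed.
c = np.array([2.0, 0.5, -1.0])
u = np.array([0.05, 0.4, 0.3])

# GUM law of propagation for a linear model: u2(y) = sum of (c_i * u_i)^2
u2_y = np.sum((c * u) ** 2)

# First-order Sobol indices; for an additive model they sum to 1 and
# coincide with the normalized terms of the propagation law.
S = (c * u) ** 2 / u2_y
print(S)

# Monte Carlo cross-check of the propagated variance.
rng = np.random.default_rng(4)
X = rng.normal(0.0, u, size=(200_000, 3))
print(np.var(X @ c))      # close to u2_y
```

    This reproduces the article's point: for a linear model the ranking from sensitivity analysis is already implicit in the propagation-law terms; the added value appears only when the model is non-linear.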

  8. Scale dependence in species turnover reflects variance in species occupancy.

    Science.gov (United States)

    McGlinn, Daniel J; Hurlbert, Allen H

    2012-02-01

    Patterns of species turnover may reflect the processes driving community dynamics across scales. While the majority of studies on species turnover have examined pairwise comparison metrics (e.g., the average Jaccard dissimilarity), it has been proposed that the species-area relationship (SAR) also offers insight into patterns of species turnover because these two patterns may be analytically linked. However, these previous links only apply in a special case where turnover is scale invariant, and we demonstrate across three different plant communities that over 90% of the pairwise turnover values are larger than expected based on scale-invariant predictions from the SAR. Furthermore, the degree of scale dependence in turnover was negatively related to the degree of variance in the occupancy frequency distribution (OFD). These findings suggest that species turnover diverges from scale invariance, and as such pairwise turnover and the slope of the SAR are not redundant. Furthermore, models developed to explain the OFD should be linked with those developed to explain species turnover to achieve a more unified understanding of community structure.

  9. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  10. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    International Nuclear Information System (INIS)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS)

  11. A pattern recognition approach to transistor array parameter variance

    Science.gov (United States)

    da F. Costa, Luciano; Silva, Filipi N.; Comin, Cesar H.

    2018-06-01

    The properties of semiconductor devices, including bipolar junction transistors (BJTs), are known to vary substantially in terms of their parameters. In this work, an experimental approach, including pattern recognition concepts and methods such as principal component analysis (PCA) and linear discriminant analysis (LDA), was used to investigate the variation among BJTs belonging to integrated circuits known as transistor arrays. It was shown that a good deal of the devices' variance can be captured using only two PCA axes. It was also verified that, though the variation of parameters is substantially small for BJTs from the same array, larger variation arises between BJTs from distinct arrays, suggesting that device characteristics be considered in more critical analog designs. As a consequence of its supervised nature, LDA was able to provide a substantial separation of the BJTs into clusters corresponding to each transistor array. In addition, the LDA mapping into two dimensions revealed a clear relationship between the considered measurements. Interestingly, a specific mapping suggested by the PCA, involving the total harmonic distortion variation expressed in terms of the average voltage gain, yielded an even better separation between the transistor array clusters. All in all, this work yielded interesting results from both semiconductor engineering and pattern recognition perspectives.

  12. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  13. Photovoltaic venture analysis. Final report. Volume III. Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Costello, D.; Posner, D.; Schiffel, D.; Doane, J.; Bishop, C.

    1978-07-01

    This appendix contains a brief summary of a detailed description of alternative future energy scenarios which provide an overall backdrop for the photovoltaic venture analysis. Also included is a summary of a photovoltaic market/demand workshop, a summary of a photovoltaic supply workshop which used cross-impact analysis, and a report on photovoltaic array and system prices in 1982 and 1986. The results of a sectorial demand analysis for photovoltaic power systems used in the residential sector (single family homes), the service, commercial, and institutional sector (schools), and in the central power sector are presented. An analysis of photovoltaics in the electric utility market is given, and a report on the industrialization of photovoltaic systems is included. A DOE information memorandum regarding ''A Strategy for a Multi-Year Procurement Initiative on Photovoltaics (ACTS No. ET-002)'' is also included. (WHK)

  14. Annotated Bibliography for Lake Erie. Volume III. Engineering,

    Science.gov (United States)

    1974-10-01

    ...the ecology is examined. 83. Bar-Kochba, Y. and A. J. Simon. 1971. Rainfall and floods in Northeastern Ohio. Dept. of Eng., Univ. of Akron. 87 p.

  15. Biological Effects of Nonionizing Electromagnetic Radiation. Volume III, Number 3.

    Science.gov (United States)

    1979-03-01

    ...were allowed to regenerate to the fingerbud stage... radioactive gold implants for the treatment of patients with advanced head and neck cancer... a (BRH) researcher in experimental embryology, died November 10 at the age of 75... uptake of I-albumin was carried out over a 5-hr period, after exposure of the dog's head for 20 min... NAVY ENVIRONMENT: MICROWAVE DISPERSION AND...

  16. Problems of Air Defense - and - Appedicies. Volumes I-III

    Science.gov (United States)

    1951-08-01

    ...interceptor. The FALCON program has elected to exploit the kill potential of only those missiles that actually hit the target, writing off as... APPENDIX P-2: PROJECT CHARLES BRIEFING SCHEDULE... briefing schedule for Project Charles (19 February - 12 ...)... communicate with a track marker and indicate a specific track before a number has been assigned to it. Another is the need for writing track...

  17. Measurement and modeling of advanced coal conversion processes, Volume III

    Energy Technology Data Exchange (ETDEWEB)

    Ghani, M.U.; Hobbs, M.L.; Hamblen, D.G. [and others

    1993-08-01

    A generalized one-dimensional, heterogeneous, steady-state, fixed-bed model for coal gasification and combustion is presented. The model, FBED-1, is a design and analysis tool that can be used to simulate a variety of gasification, devolatilization, and combustion processes. The model considers separate gas and solid temperatures, axially variable solid and gas flow rates, variable bed void fraction, coal drying, devolatilization based on chemical functional group composition, depolymerization, vaporization and crosslinking, oxidation, and gasification of char, and partial equilibrium in the gas phase.

  18. CACDA JIFFY III War Game. Volume II. Methodology

    Science.gov (United States)

    1980-09-01

    ...helicopter assessments of ground forces is: GFKILL = Σ over all k ∈ TGT of SSKPIjk · ROUNDSijk · ADJUSTi · ABORTi (9-3...), where, for ordnance type i fired by... probability; ABORTi is the probability that the missile will not be aborted during its flight because of loss of line of sight to the target, suppression... values extracted from the table. The number of rounds, ROUNDSijk, is modified by the ABORTi and ADJUSTi factors only when ordnance type i is a missile.

  19. Asset management for Wyoming counties : volume I, II, III.

    Science.gov (United States)

    2011-08-01

    Vol. 1: In the fall of 2003, the Wyoming Department of Transportation (WYDOT) and the Wyoming T2/LTAP Center (T2/LTAP) began planning an asset management program to assist counties impacted by oil and gas drilling with management of their road system...

  20. Intrasystem Electromagnetic Compatibility Analysis Program. Volume III. Computer Program Documentation

    Science.gov (United States)

    1974-12-01

    (Continued) PROGRAM NAME / SYMBOL / DEFINITION: FQEPDB, fep in dB; FQEPL, lower interval boundary frequency of fep; FQEPU, upper interval boundary frequency of fep... VARIABLES: BWFE, bandwidth factor of emitter; bandwidth factor of receptor; EINTB, integrated margin, broadband component...

  1. Snohomish Estuary Wetlands Study Volume III. Classification and Mapping

    Science.gov (United States)

    1978-07-01

    Marine plant communities form the basis for some of the most complex food webs known to man. Because of their complexity, any destruction of these plant...

  2. Analyzing Global Interdependence. Volume III. Methodological Perspectives and Research Implications,

    Science.gov (United States)

    1974-11-01

    ...of different norm structures for developed market economies presage a similar kind of regime analysis in the Mesarovic-Pestel economic-energy context... the Choucri-North analysis of the 1870-1914 period and the Mesarovic-Pestel World Model are quite unusual. The Choucri-North model allows... The Mesarovic-Pestel model also contains some particularly impressive impact assessment possibilities, joined with a rather rich, interactive policy analysis...

  3. 76 FR 60511 - Amendment of Marine Safety Manual, Volume III

    Science.gov (United States)

    2011-09-29

    ..." in the "Keyword" box. Click "Search," and then click on the balloon shape in the "Actions... Comments" box, which will then become highlighted in blue. In the "Keyword" box, insert "USCG-2011...

  4. Cobalt(III) complex

    Indian Academy of Sciences (India)

    Administrator

    e, 40 µM complex, 10 hrs after dissolution; f, 40 µM complex, after an irradiation dose of 15 Gy. ...and H-atoms result in reduction of Co(III) to Co(II) [6]. It is interesting to see, in a complex containing multiple ligands, what is the fate of the electron adduct species formed by electron addition. Reduction to Co(II) and intramolecular transfer ...

  5. Calculus III essentials

    CERN Document Server

    REA, Editors of

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Calculus III includes vector analysis, real valued functions, partial differentiation, multiple integrations, vector fields, and infinite series.

  6. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1998-01-01

    Zero-variance biasing procedures are normally associated with estimating a single mean or tally. In particular, a zero-variance solution occurs when every sampling is made proportional to the product of the true probability multiplied by the expected score (importance) subsequent to the sampling; i.e., the zero-variance sampling is importance weighted. Because every tally has a different importance function, a zero-variance biasing for one tally cannot be a zero-variance biasing for another tally (unless the tallies are perfectly correlated). The way to optimize the situation when the required tallies have positive correlation is shown
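    The principle that zero-variance sampling is importance weighted (every draw made proportional to the true probability times the expected subsequent score) can be demonstrated on a one-dimensional toy integral; the integrand x² on [0, 1] is an arbitrary choice for the sketch.

```python
import numpy as np

# Estimate I = integral of x^2 on [0,1] = 1/3. The zero-variance importance
# density is q*(x) = f(x)p(x)/I = 3x^2: sampling ∝ probability × score.
rng = np.random.default_rng(5)
n = 10_000

plain = rng.random(n) ** 2                 # crude Monte Carlo scores

xz = rng.random(n) ** (1.0 / 3.0)          # inverse-CDF sample from q*(x)=3x^2
weighted = xz**2 / (3.0 * xz**2)           # f(x)/q*(x): constant, equal to I

print(plain.mean(), plain.std())           # noisy estimate
print(weighted.mean(), weighted.std())     # every score is 1/3, std collapses
```

    Because the importance function differs per tally, this construction is exact for one mean at a time, which is precisely the situation the paper generalizes to several correlated means.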

  7. Flow-Cell-Induced Dispersion in Flow-through Absorbance Detection Systems: True Column Effluent Peak Variance.

    Science.gov (United States)

    Dasgupta, Purnendu K; Shelor, Charles Phillip; Kadjo, Akinde Florence; Kraiczek, Karsten G

    2018-02-06

    Following a brief overview of the emergence of absorbance detection in liquid chromatography, we focus on the dispersion caused by the absorbance measurement cell and its inlet. A simple experiment is proposed wherein chromatographic flow and conditions are held constant but a variable portion of the column effluent is directed into the detector. The temporal peak variance (σ²_t,obs), which increases as the flow rate (F) through the detector decreases, is found to be well described as a quadratic function of 1/F. This allows the extrapolation of the results to zero residence time in the detector and thence the determination of the true variance of the peak prior to the detector (this includes the contribution of all preceding components). This general approach should be equally applicable to detection systems other than absorbance. We also performed experiments in which the inlet/outlet system remains the same but the path length is varied. This allows one to assess the individual contributions of the cell itself and of the inlet/outlet system to the total observed peak variance. The dispersion in the cell itself has often been modeled as a flow-independent parameter, dependent only on the cell volume. Except for very long path/large-volume cells, this paradigm is simply incorrect.
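    The proposed extrapolation amounts to a quadratic fit in 1/F and reading off the intercept. The sketch below uses synthetic data generated from assumed coefficients (no relation to the paper's measurements), purely to show the mechanics.

```python
import numpy as np

# Synthetic observed peak variances at several detector flow rates F.
# Model: var_obs = a + b/F + c/F^2; the intercept a is the true
# (pre-detector) peak variance at zero detector residence time.
F = np.array([0.2, 0.3, 0.5, 0.8, 1.0, 1.5])   # flow rates (mL/min, assumed)
x = 1.0 / F
a_true, b_true, c_true = 1.5, 0.9, 0.25        # assumed coefficients
var_obs = a_true + b_true * x + c_true * x**2  # synthetic data, no noise

coef = np.polyfit(x, var_obs, deg=2)           # returns [c, b, a]
true_column_variance = coef[2]                 # intercept at 1/F -> 0
print(true_column_variance)
```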

  8. How the variance of some extraction variables may affect the quality of espresso coffees served in coffee shops.

    Science.gov (United States)

    Severini, Carla; Derossi, Antonio; Fiore, Anna G; De Pilli, Teresa; Alessandrino, Ofelia; Del Mastro, Arcangela

    2016-07-01

    To improve the quality of espresso coffee, the variables under the control of the barista, such as grinding grade, coffee quantity and pressure applied to the coffee cake, as well as their variance, are of great importance. Nonlinear mixed effect modeling was used to obtain information on the changes in chemical attributes of espresso coffee (EC) as a function of the variability of extraction conditions. During extraction, the changes in volume were well described by a logistic model, whereas the chemical attributes were better fit by first-order kinetics. The major source of information was contained in the grinding grade, which accounted for 87-96% of the variance of the experimental data. The variability of the grinding produced changes in caffeine content in the range of 80.03 mg to 130.36 mg when using a constant grinding grade of 6.5. The variability in volume and chemical attributes of EC is large. Grinding had the most important effect, as the variability in particle size distribution observed for each grinding level had a profound effect on the quality of EC. Standardization of grinding would be of crucial importance for obtaining espresso coffees of consistently high quality. © 2015 Society of Chemical Industry.

  9. Variance Swaps in BM&F: Pricing and Viability of Hedge

    Directory of Open Access Journals (Sweden)

    Richard John Brostowicz Junior

    2010-07-01

    Full Text Available A variance swap can theoretically be priced with an infinite set of vanilla call and put options, considering that the realized variance follows a purely diffusive process with continuous monitoring. In this article we will analyze the possible differences in pricing considering discrete monitoring of realized variance. We will analyze the pricing of variance swaps with payoff in dollars, since there is an OTC market that works this way and that potentially serves as a hedge for the variance swaps traded in BM&F. Additionally, the feasibility of hedging variance swaps when there is liquidity in just a few exercise prices will be tested, as is the case of FX options traded in BM&F. Thus, portfolios containing variance swaps and their replicating portfolios were assembled using the available exercise prices, as proposed in (Demeterfi et al., 1999). With these portfolios, the effectiveness of the hedge was not robust in most of the tests conducted in this work.
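
The replication of (Demeterfi et al., 1999) referred to above can be sketched as follows: a strip of out-of-the-money puts and calls weighted by ΔK/K² reproduces the log-contract payoff that, scaled by 2/T, defines the fair variance swap strike. The strikes and spacing below are hypothetical, and the sketch only checks the terminal payoff replication, not the pricing.

```python
import numpy as np

# Static replication sketch: OTM options weighted by dK / K^2
# approximate the log-contract payoff used in variance swap pricing.
S_star = 100.0                       # boundary between puts and calls
dK = 1.0
put_strikes = np.arange(50.0, S_star, dK)
call_strikes = np.arange(S_star, 200.0, dK)

def portfolio_payoff(S_T):
    puts = np.maximum(put_strikes - S_T, 0.0)
    calls = np.maximum(S_T - call_strikes, 0.0)
    return (puts * dK / put_strikes**2).sum() + (calls * dK / call_strikes**2).sum()

def log_contract(S_T):
    # Target payoff: (S_T - S*)/S* - ln(S_T / S*)
    return (S_T - S_star) / S_star - np.log(S_T / S_star)

# The strip matches the log contract closely inside the strike range.
for S_T in (80.0, 100.0, 125.0):
    print(S_T, round(portfolio_payoff(S_T), 4), round(log_contract(S_T), 4))
```

Tightening the strike spacing dK improves the match, which is why illiquidity in the available exercise prices (the situation tested in the article) degrades the hedge.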

  10. Improving precision in gel electrophoresis by stepwisely decreasing variance components.

    Science.gov (United States)

    Schröder, Simone; Brandmüller, Asita; Deng, Xi; Ahmed, Aftab; Wätzig, Hermann

    2009-10-15

    Many methods have been developed in order to increase selectivity and sensitivity in proteome research. However, gel electrophoresis (GE) which is one of the major techniques in this area, is still known for its often unsatisfactory precision. Percental relative standard deviations (RSD%) up to 60% have been reported. In this case the improvement of precision and sensitivity is absolutely essential, particularly for the quality control of biopharmaceuticals. Our work reflects the remarkable and completely irregular changes of the background signal from gel to gel. This irregularity was identified as one of the governing error sources. These background changes can be strongly reduced by using a signal detection in the near-infrared (NIR) range. This particular detection method provides the most sensitive approach for conventional CCB (Colloidal Coomassie Blue) stained gels, which is reflected in a total error of just 5% (RSD%). In order to further investigate variance components in GE, an experimental Plackett-Burman screening design was performed. The influence of seven potential factors on the precision was investigated using 10 proteins with different properties analyzed by NIR detection. The results emphasized the individuality of the proteins. Completely different factors were identified to be significant for each protein. However, out of seven investigated parameters, just four showed a significant effect on some proteins, namely the parameters of: destaining time, staining temperature, changes of detergent additives (SDS and LDS) in the sample buffer, and the age of the gels. As a result, precision can only be improved individually for each protein or protein classes. Further understanding of the unique properties of proteins should enable us to improve the precision in gel electrophoresis.

  11. Working Around Cosmic Variance: Remote Quadrupole Measurements of the CMB

    Science.gov (United States)

    Adil, Arsalan; Bunn, Emory

    2018-01-01

    Anisotropies in the CMB maps continue to revolutionize our understanding of the Cosmos. However, the statistical interpretation of these anisotropies is tainted with a posteriori statistics. The problem is particularly emphasized for lower order multipoles, i.e. in the cosmic variance regime of the power spectrum. Naturally, the solution lies in acquiring a new data set – a rather difficult task given the sample size of the Universe. The CMB temperature, in theory, depends on: the direction of photon propagation, the time at which the photons are observed, and the observer’s location in space. In existing CMB data, only the first parameter varies. However, as first pointed out by Kamionkowski and Loeb, a solution lies in making the so-called “Remote Quadrupole Measurements” by analyzing the secondary polarization produced by incoming CMB photons via the Sunyaev-Zel’dovich (SZ) effect. These observations allow us to measure the projected CMB quadrupole at the location and look-back time of a galaxy cluster. At low redshifts, the remote quadrupole is strongly correlated with the CMB anisotropy from our last scattering surface. We provide here a formalism for computing the covariance and correlation matrices for both the two-point correlation function on the last scattering surface of a galaxy cluster and the cross correlation of the remote quadrupole with the local CMB. We then calculate these matrices based on a fiducial model and a non-standard model that suppresses power at large angles for ~10⁴ clusters up to z=2. We anticipate making a priori predictions of the differences between our expectations for the standard and non-standard models. Such an analysis is timely in the wake of the CMB S4 era, which will provide us with an extensive SZ cluster catalogue.

  12. Power flow evaluations for HERMES III

    International Nuclear Information System (INIS)

    Smith, D.L.; Ramirez, J.J.; Corley, J.P.; Hasti, D.E.

    1985-01-01

    A study has been conducted to evaluate the transfer of electromagnetic pulses from water dielectric strip transmission lines into a diode insulator stack. The HERMES III Scale Model Experiments (HERMEX) included single-stage diodes as well as multistage models in which a variety of parallel/series combinations of strip transmission lines (strip lines) were used to evaluate the voltage adding efficiency at the diode. A technique has been established to estimate an equivalent shunt impedance across the diode due to the nearby uncharged water volume.

  13. Silicon photonics III systems and applications

    CERN Document Server

    Lockwood, David

    2016-01-01

    This book is volume III of a series of books on silicon photonics. It reports on the development of fully integrated systems in which many different photonic components are integrated to build complex circuits, demonstrating the full potential of silicon photonics. It contains a number of chapters written by engineers and scientists from the main companies, research centers and universities active in the field. It will be of use to all those interested in the potential and recent applications of silicon photonics in the microelectronics, telecommunications and consumer electronics markets.

  14. Variance of measurements from a calibration function derived from data which exhibit run-to-run differences

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1985-01-01

    The volume of liquid in a nuclear process tank is determined from a calibration equation which expresses volume as a function of liquid level. Successive calibration runs are made to obtain data from which to estimate either the calibration function or its inverse. For tanks equipped with high-precision measurement systems to determine liquid level, it frequently happens that run-to-run differences due to uncontrolled or uncontrollable ambient conditions are large relative to within-run measurement errors. In the strict sense, a calibration function cannot be developed from data which exhibit significant run-to-run differences. In practice, run-to-run differences are ignored when they are small relative to the accuracy required for measurements of the tank's contents. The use of standard statistical techniques in this situation can result in variance estimates which severely underestimate the actual uncertainty in volume measurements. This paper gives a method whereby reasonable estimates of the calibration uncertainty in volume determinations can be obtained in the presence of statistically significant run-to-run variability. 4 references, 3 figures, 1 table
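
A hedged sketch of why run-to-run differences matter, using a one-way random-effects decomposition on simulated level readings (not the paper's method or data): the between-run variance component can dwarf the within-run measurement error that standard techniques would report, which is exactly the underestimation the paper warns about.

```python
import numpy as np

# Hypothetical repeated level readings at a fixed volume over several
# calibration runs: each run carries a random offset (ambient effects)
# on top of small within-run measurement error.
rng = np.random.default_rng(0)
n_runs, n_reps = 6, 10
sigma_run, sigma_meas = 0.5, 0.1           # assumed std. deviations
run_offsets = rng.normal(0.0, sigma_run, n_runs)
data = run_offsets[:, None] + rng.normal(0.0, sigma_meas, (n_runs, n_reps))

run_means = data.mean(axis=1)
ms_within = data.var(axis=1, ddof=1).mean()            # within-run MS
ms_between = n_reps * run_means.var(ddof=1)            # between-run MS
var_run = max((ms_between - ms_within) / n_reps, 0.0)  # run-to-run component

# Ignoring var_run treats all scatter as measurement error and
# understates the uncertainty of a volume determination.
print(round(ms_within, 4), round(var_run, 4))
```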

  15. Natural radiation environment III

    International Nuclear Information System (INIS)

    Gesell, T.F.; Lowder, W.M.

    1980-01-01

    Separate abstracts were prepared for the 52 research papers presented at this symposium in April 1978. The major topics in this volume deal with penetrating radiation measurements, radiation surveys and population exposure, radioactivity in the indoor environment, and technologically enhanced natural radioactivity

  16. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    Science.gov (United States)

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
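
The generative model described above can be sketched as follows (an illustrative reconstruction, not the authors' code): each sample is zero-mean Gaussian with a variance drawn from an inverse gamma distribution, which gives a heavy-tailed marginal whose variance equals the inverse gamma mean β/(α−1). The shape and scale values are assumed.

```python
import numpy as np

# Sketch of the variance-distribution model: Gaussian samples whose
# variance is itself inverse-gamma distributed (marginal is a scaled
# Student's t). Parameters alpha, beta are hypothetical.
rng = np.random.default_rng(1)
alpha, beta = 3.0, 2.0
n = 200_000

# If X ~ Gamma(alpha, scale=1/beta), then 1/X ~ InvGamma(alpha, beta).
variances = 1.0 / rng.gamma(alpha, 1.0 / beta, n)
emg = rng.normal(0.0, np.sqrt(variances))

# Marginal variance should match beta / (alpha - 1), the inverse gamma mean.
print(round(emg.var(), 2), round(beta / (alpha - 1), 2))
```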

  17. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    Science.gov (United States)

    1981-12-01

    [Extraction-damaged excerpt. The recoverable content lists the compiler's library-file naming conventions for generated maps (Symbol Map: .SYMAP; Statement Map: .SMAP; Type Map: .TMAP), describes the PUNIT command, and shows an example compiler command stream for the code generator (Texas Instruments, Ada Optimizing Compiler).]

  18. A comparison between temporal and subband minimum variance adaptive beamforming

    Science.gov (United States)

    Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis

    2014-03-01

    This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions with one transducer element transmitting and all 128 elements receiving each time, provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm and those of PSL are -42 dB and -46 dB for temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low resolution image formed by a single emission, 0.44 × 10⁹ calculations per second are required for the temporal approach. The same numbers for the subband approach are 0.62 × 10⁹ for the point and 1.33 × 10⁹ for the cyst phantom. The comparison demonstrates similar
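
A minimal sketch of the Minimum Variance (Capon) apodization underlying both implementations compared above, on synthetic array data (the paper's Field II setup is not reproduced): the weights minimise output power subject to a distortionless response in the look direction, w = R⁻¹a / (aᴴR⁻¹a).

```python
import numpy as np

# Capon / MV apodization on toy channel data. Element count, snapshot
# count and loading factor are all hypothetical.
rng = np.random.default_rng(2)
n_elem = 16
snapshots = rng.normal(size=(n_elem, 400))          # toy channel data
R = snapshots @ snapshots.T / 400                   # sample covariance
R += 0.01 * np.trace(R) / n_elem * np.eye(n_elem)   # diagonal loading
a = np.ones(n_elem)                                 # broadside steering vector

Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a @ Ri_a)

print(round(w @ a, 6))   # distortionless constraint: gain of 1
```

By construction the MV weights never produce more output power than the Boxcar (uniform) apodization, which satisfies the same unit-gain constraint; this is the mechanism behind the lower side-lobe levels reported above.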

  19. Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Edward W. Larsen

    2008-06-01

    The "criticality" or k-eigenvalue of a nuclear system determines whether the system is critical (k=1), or the extent to which it is subcritical (k<1) or supercritical (k>1). Calculations of k are frequently performed at nuclear facilities to determine the criticality of nuclear reactor cores, spent nuclear fuel storage casks, and other fissile systems. These calculations can be expensive, and current Monte Carlo methods have certain well-known deficiencies. In this project, we have developed and tested a new "functional Monte Carlo" (FMC) method that overcomes several of these deficiencies. The current state-of-the-art Monte Carlo k-eigenvalue method estimates the fission source for a sequence of fission generations (cycles), during each of which M particles per cycle are processed. After a series of "inactive" cycles during which the fission source "converges," a series of "active" cycles are performed. For each active cycle, the eigenvalue and eigenfunction are estimated; after N >> 1 active cycles are performed, the results are averaged to obtain estimates of the eigenvalue and eigenfunction and their standard deviations. This method has several disadvantages: (i) the estimate of k depends on the number M of particles per cycle, (ii) for optically thick systems, the eigenfunction estimate may not converge due to undersampling of the fission source, and (iii) since the fission source in any cycle depends on the estimated fission source from the previous cycle (the fission sources in different cycles are correlated), the estimated variance in k is smaller than the real variance. For an acceptably large number M of particles per cycle, the estimate of k is nearly independent of M; this essentially takes care of item (i). Item (ii) can be addressed by taking M sufficiently large, but for optically thick systems a sufficiently large M can easily be unrealistic.
Item (iii) cannot be accounted for by taking M or N sufficiently large; it is an inherent deficiency due
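
The cycle scheme described above can be caricatured with a deterministic power iteration, in which a hypothetical 3×3 fission operator stands in for the transport physics: the source is iterated generation by generation and k is estimated each cycle as the ratio of successive source magnitudes.

```python
import numpy as np

# Toy power iteration standing in for the fission-generation cycles;
# the operator F is hypothetical, not a physical model.
F = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.2],
              [0.1, 0.2, 0.5]])
source = np.ones(3)

for _ in range(200):                   # "inactive" + "active" cycles
    new_source = F @ source
    k = new_source.sum() / source.sum()
    source = new_source / new_source.sum()

k_exact = max(abs(np.linalg.eigvals(F)))
print(round(k, 6), round(k_exact, 6))
```

In the real Monte Carlo version each cycle's source is a statistical sample of the previous one, which is what introduces the cycle-to-cycle correlation behind deficiency (iii).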

  20. Effect of Class III bone anchor treatment on airway.

    Science.gov (United States)

    Nguyen, Tung; De Clerck, Hugo; Wilson, Michael; Golden, Brent

    2015-07-01

    To compare airway volumes and minimum cross-section area changes of Class III patients treated with bone-anchored maxillary protraction (BAMP) versus untreated Class III controls. Twenty-eight consecutive skeletal Class III patients between the ages of 10 and 14 years (mean age, 11.9 years) were treated using Class III intermaxillary elastics and bilateral miniplates (two in the infra-zygomatic crests of the maxilla and two in the anterior mandible). The subjects had cone beam computed tomographs (CBCTs) taken before initial loading (T1) and 1 year out (T2). Twenty-eight untreated Class III patients (mean age, 12.4 years) had CBCTs taken and cephalograms generated. The airway volumes and minimum cross-sectional area measurements were performed using Dolphin Imaging 11.7 3D software. The superior border of the airway was defined by a plane that passes through the posterior nasal spine and basion, while the inferior border included the base of the epiglottis to the lower border of C3. From T1 to T2, airway volume from BAMP-treated subjects showed a statistically significant increase (1499.64 mm³). The area in the most constricted section of the airway (choke point) increased slightly (15.44 mm²). The airway volume of BAMP patients at T2 was 14136.61 mm³, compared with 14432.98 mm³ in untreated Class III subjects. Intraexaminer correlation coefficient values and 95% confidence interval values were all greater than .90, showing a high degree of reliability of the measurements. BAMP treatment did not hinder the development of the oropharynx.

  1. Study of the variance of a Monte Carlo calculation. Application to weighting; Etude de la variance d'un calcul de Monte Carlo. Application a la ponderation

    Energy Technology Data Exchange (ETDEWEB)

    Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)

    1969-04-15

    One of the main difficulties in Monte Carlo computations is the estimation of the results variance. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the obtained results to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious environments: a body with constant cross section, without absorption, where all shocks are elastic and isotropic; a body with variable cross section (presenting a very pronounced peak and hole), with an anisotropy for high energy elastic shocks, and with the possibility of inelastic shocks (this body presents all the features that can appear in a real case)

  2. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    … the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean-variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative, but it can be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computationally demanding methods such as the certainty equivalence method, and as an individual control strategy when …

  3. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    Full Text Available We consider a continuous-time mean-variance portfolio selection model when the stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex programming problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and the mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.
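
For contrast with the continuous-time CEV setting above, the single-period mean-variance frontier can be computed in closed form; the parabola in the mean-variance plane mirrors the shape derived in the paper. The asset means and covariance below are hypothetical.

```python
import numpy as np

# Closed-form Markowitz frontier: minimum variance at each target mean
# for a fully invested portfolio. mu and S are assumed inputs.
mu = np.array([0.05, 0.08, 0.12])
S = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
Si = np.linalg.inv(S)
one = np.ones(3)

A = one @ Si @ one
B = one @ Si @ mu
C = mu @ Si @ mu
D = A * C - B**2

def frontier_variance(m):
    # Minimum portfolio variance for target mean m: a parabola in m.
    return (A * m**2 - 2 * B * m + C) / D

for m in (0.06, 0.08, 0.10):
    print(round(frontier_variance(m), 4))
```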

  4. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure

    OpenAIRE

    Bright, Molly G.; Murphy, Kevin

    2015-01-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed ...

  5. Geometric representation of the mean-variance-skewness portfolio frontier based upon the shortage function

    OpenAIRE

    Kerstens, Kristiaan; Mounier, Amine; Van de Woestyne, Ignace

    2008-01-01

    The literature suggests that investors prefer portfolios based on mean, variance and skewness rather than portfolios based on mean-variance (MV) criteria solely. Furthermore, a small variety of methods have been proposed to determine mean-variance-skewness (MVS) optimal portfolios. Recently, the shortage function has been introduced as a measure of efficiency, allowing one to characterize MVS optimal portfolios using non-parametric mathematical programming tools. While tracing the MV portfolio fro...

  6. AN ADAPTIVE OPTIMAL KALMAN FILTER FOR STOCHASTIC VIBRATION CONTROL SYSTEM WITH UNKNOWN NOISE VARIANCES

    Institute of Scientific and Technical Information of China (English)

    Li Shu; Zhuo Jiashou; Ren Qingwen

    2000-01-01

    In this paper, an optimal criterion is presented for an adaptive Kalman filter in a control system with unknown variances of stochastic vibration, obtained by constructing a function of the noise variances and minimizing that function. We solve for the model and measurement variances by using the DFP optimization method, to guarantee that the results of the Kalman filter are optimized. Finally, the control of vibration can be implemented by the LQG method.
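
As a hedged illustration of why the noise variances matter (this is a generic scalar Kalman filter, not the DFP-based adaptive scheme of the article): filtering with badly mismatched model variance Q and measurement variance R degrades the state estimate, which is the degradation an adaptive criterion is designed to avoid. All parameters are assumed.

```python
import numpy as np

# Random-walk state observed in noise; compare filter MSE with the true
# (Q, R) against a badly mismatched pair. Values are hypothetical.
rng = np.random.default_rng(3)
Q_true, R_true = 0.04, 1.0
n = 2000

x = np.zeros(n)
z = np.zeros(n)
for t in range(1, n):
    x[t] = x[t-1] + rng.normal(0.0, np.sqrt(Q_true))
    z[t] = x[t] + rng.normal(0.0, np.sqrt(R_true))

def kalman_mse(Q, R):
    xhat, P, err = 0.0, 1.0, 0.0
    for t in range(1, n):
        P = P + Q                          # predict
        K = P / (P + R)                    # Kalman gain
        xhat = xhat + K * (z[t] - xhat)    # update
        P = (1 - K) * P
        err += (xhat - x[t])**2
    return err / (n - 1)

mse_true = kalman_mse(Q_true, R_true)
mse_bad = kalman_mse(10.0, 0.01)           # trusts measurements too much
print(round(mse_true, 3), round(mse_bad, 3))
```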

  7. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  8. A geometric approach to multiperiod mean variance optimization of assets and liabilities

    OpenAIRE

    Leippold, Markus; Trojani, Fabio; Vanini, Paolo

    2005-01-01

    We present a geometric approach to discrete-time multiperiod mean-variance portfolio optimization that largely simplifies the mathematical analysis and the economic interpretation of such model settings. We show that multiperiod mean-variance optimal policies can be decomposed into an orthogonal set of basis strategies, each having a clear economic interpretation. This implies that the corresponding multiperiod mean-variance frontiers are spanned by an orthogonal basis of dynamic returns. Spec...

  9. Thermodynamics of high-pressure ice polymorphs: ices III and V

    NARCIS (Netherlands)

    Tchijov, [No Value; Ayala, RB; Leon, GC; Nagornov, O

    Thermodynamic properties of high-pressure ice polymorphs, ices III and V, are studied theoretically. The results of TIP4P molecular dynamics simulations in the NPT ensemble are used to calculate the temperature dependence of the specific volume of ices III and V at pressures 0.25 and 0.5 GPa,

  10. Individual and collective bodies: using measures of variance and association in contextual epidemiology.

    Science.gov (United States)

    Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V

    2009-12-01

    Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.
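
The variance-partitioning approach promoted above can be illustrated with a toy multilevel calculation: the intraclass correlation (variance partition coefficient) reports the share of total outcome variance lying between collective units such as neighbourhoods. All numbers below are hypothetical.

```python
import numpy as np

# Simulated two-level data: individuals nested in areas, with an
# area-level random effect. Components are assumed for the sketch.
rng = np.random.default_rng(6)
n_areas, n_people = 50, 40
sigma_area, sigma_indiv = 0.6, 1.0

area_effects = rng.normal(0.0, sigma_area, n_areas)
y = area_effects[:, None] + rng.normal(0.0, sigma_indiv, (n_areas, n_people))

ms_within = y.var(axis=1, ddof=1).mean()
ms_between = n_people * y.mean(axis=1).var(ddof=1)
var_between = max((ms_between - ms_within) / n_people, 0.0)

# Intraclass correlation: between-area share of total variance.
icc = var_between / (var_between + ms_within)
print(round(icc, 2))   # close to 0.36 / (0.36 + 1) ≈ 0.26
```

A near-zero ICC would suggest the chosen boundaries (e.g. "neighbourhood") do not delimit a meaningful collective body, which is the boundary-evaluation use the authors propose.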

  11. Estimating integrated variance in the presence of microstructure noise using linear regression

    Science.gov (United States)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data for estimation of the integrated variance of asset prices is beneficial, but with an increasing number of observations so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimation of the integrated variance robust to microstructure noise, as well as for testing the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
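
The regression idea described above can be sketched on simulated data: with iid microstructure noise of variance ω², the realized variance computed from n observations has expectation IV + 2nω², so regressing subsample realized variances on n recovers the integrated variance as the intercept. All parameters are assumed.

```python
import numpy as np

# Efficient price (Brownian motion) plus iid microstructure noise,
# sampled at several frequencies; hypothetical parameters throughout.
rng = np.random.default_rng(4)
N = 23400                       # one trading day of 1-second returns
iv = 1e-4                       # integrated variance (assumed)
w2 = 1e-8                       # noise variance (assumed)

efficient = np.cumsum(rng.normal(0.0, np.sqrt(iv / N), N))
price = efficient + rng.normal(0.0, np.sqrt(w2), N)

ns, rvs = [], []
for step in (1, 2, 5, 10, 30, 60):
    r = np.diff(price[::step])
    ns.append(len(r))           # number of observations used
    rvs.append((r**2).sum())    # realized variance at this frequency

slope, intercept = np.polyfit(ns, rvs, 1)
print(intercept)                # close to iv = 1e-4
```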

  12. Variance-in-Mean Effects of the Long Forward-Rate Slope

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2005-01-01

    This paper contains an empirical analysis of the dependence of the long forward-rate slope on the long-rate variance. The long forward-rate slope and the long rate are described by a bivariate GARCH-in-mean model. In accordance with theory, a negative long-rate variance-in-mean effect for the long forward-rate slope is documented. Thus, the greater the long-rate variance, the steeper the long forward-rate curve slopes downward (the long forward-rate slope is negative). The variance-in-mean effect is both statistically and economically significant.

  13. A study of heterogeneity of environmental variance for slaughter weight in pigs

    DEFF Research Database (Denmark)

    Ibánez-Escriche, N; Varona, L; Sorensen, D

    2008-01-01

    This work presents an analysis of heterogeneity of environmental variance for slaughter weight (175 days) in pigs. This heterogeneity is associated with systematic and additive genetic effects. The model also postulates the presence of additive genetic effects affecting the mean and environmental variance. The study reveals the presence of genetic variation at the level of the mean and the variance, but an absence of correlation, or a small negative correlation, between both types of additive genetic effects. In addition, we show that both the additive genetic effects on the mean and those on environmental variance have an important influence upon the future economic performance of selected individuals…

  14. Temporal variance reverses the impact of high mean intensity of stress in climate change experiments.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Bertocci, Iacopo; Vaselli, Stefano; Maggi, Elena

    2006-10-01

    Extreme climate events produce simultaneous changes to the mean and to the variance of climatic variables over ecological time scales. While several studies have investigated how ecological systems respond to changes in mean values of climate variables, the combined effects of mean and variance are poorly understood. We examined the response of low-shore assemblages of algae and invertebrates of rocky seashores in the northwest Mediterranean to factorial manipulations of mean intensity and temporal variance of aerial exposure, a type of disturbance whose intensity and temporal patterning of occurrence are predicted to change with changing climate conditions. Effects of variance were often in the opposite direction of those elicited by changes in the mean. Increasing aerial exposure at regular intervals had negative effects both on diversity of assemblages and on percent cover of filamentous and coarsely branched algae, but greater temporal variance drastically reduced these effects. The opposite was observed for the abundance of barnacles and encrusting coralline algae, where high temporal variance of aerial exposure either reversed a positive effect of mean intensity (barnacles) or caused a negative effect that did not occur under low temporal variance (encrusting algae). These results provide the first experimental evidence that changes in mean intensity and temporal variance of climatic variables affect natural assemblages of species interactively, suggesting that high temporal variance may mitigate the ecological impacts of ongoing and predicted climate changes.

  15. The genotype-environment interaction variance in rice-seed protein determination

    International Nuclear Information System (INIS)

    Ismachin, M.

    1976-01-01

    Many environmental factors influence the protein content of cereal seed, which makes breeding for protein difficult. Yield is another trait influenced by many environmental factors. The length of time required by the plant to reach maturity is also affected by environmental factors, although less decisively. In this investigation, the genotypic variance and the genotype-environment interaction variance, which together contribute to the total (phenotypic) variance, were analysed in order to guide the breeder in selection. For seed protein content and for yield, the genotype-environment interaction variance was found to contribute more to the total variance than the genotypic variance. In the analysis of the time required to reach maturity, by contrast, the genotypic variance was larger than the genotype-environment interaction variance. This explains why selection for time to maturity is much easier than selection for protein content or yield: a genotype selected for protein content in one location may perform differently in other locations. (author)
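    The variance decomposition behind this kind of analysis can be sketched with the classical expected-mean-squares method. The code below is a minimal illustration (not the author's procedure); the array sizes, variance components, and synthetic trait values are all made up for demonstration, with the GxE component deliberately set larger than the genotypic one, as the abstract reports for protein content and yield.

```python
import numpy as np

def variance_components(y):
    """Estimate genotypic (G), genotype-by-environment (GxE) and residual
    variance components from a genotype x environment x replicate array,
    treating all effects as random (expected-mean-squares method)."""
    G, E, R = y.shape
    grand = y.mean()
    mg = y.mean(axis=(1, 2))           # genotype means
    me = y.mean(axis=(0, 2))           # environment means
    mge = y.mean(axis=2)               # cell means
    ms_g = E * R * ((mg - grand) ** 2).sum() / (G - 1)
    ms_ge = (R * ((mge - mg[:, None] - me[None, :] + grand) ** 2).sum()
             / ((G - 1) * (E - 1)))
    ms_err = ((y - mge[:, :, None]) ** 2).sum() / (G * E * (R - 1))
    return {
        "sigma2_G": (ms_g - ms_ge) / (E * R),
        "sigma2_GxE": (ms_ge - ms_err) / R,
        "sigma2_err": ms_err,
    }

# Synthetic data: true sigma2_G = 4, sigma2_GxE = 9, sigma2_err = 1
rng = np.random.default_rng(0)
G, E, R = 20, 10, 5
g = rng.normal(0, 2.0, G)
ge = rng.normal(0, 3.0, (G, E))
y = g[:, None, None] + ge[:, :, None] + rng.normal(0, 1.0, (G, E, R))
vc = variance_components(y)
print(vc)
```

    When the GxE component dominates, as here, a genotype's ranking changes across environments, which is exactly why single-location selection for protein or yield is unreliable.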

  16. Optimal control of LQG problem with an explicit trade-off between mean and variance

    Science.gov (United States)

    Qian, Fucai; Xie, Guo; Liu, Ding; Xie, Wenfang

    2011-12-01

    For discrete-time linear-quadratic Gaussian (LQG) control problems, a utility function of the expectation and the variance of the conventional performance index is considered. The utility function serves as an overall objective of the system and can achieve an optimal trade-off between the mean and the variance of the performance index. The nonlinear utility function is first converted into an auxiliary parameter optimisation problem over the expectation and the variance. An optimal closed-loop feedback controller for the nonseparable mean-variance minimisation problem is then designed by nonlinear mathematical programming. Finally, simulation results are given to verify the effectiveness of the algorithm.
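    The mean-variance trade-off for a quadratic cost can be illustrated with a toy Monte Carlo experiment. This sketch is not the authors' programming-based design: the scalar system, noise level, grid of gains, and weight `theta` are all invented for illustration.

```python
import numpy as np

# Toy scalar linear-Gaussian system x[k+1] = a*x[k] + b*u[k] + w[k] with
# feedback u[k] = -K*x[k]. Estimate mean and variance of the quadratic cost
# J by Monte Carlo, then pick the gain K minimizing U = E[J] + theta*Var[J].
a, b, q, r = 1.0, 1.0, 1.0, 0.5
horizon, n_paths, theta = 30, 2000, 0.05
rng = np.random.default_rng(1)

def cost_samples(K):
    x = np.full(n_paths, 1.0)          # common initial state
    J = np.zeros(n_paths)
    for _ in range(horizon):
        u = -K * x
        J += q * x**2 + r * u**2
        x = a * x + b * u + rng.normal(0, 0.1, n_paths)
    return J

def utility(K):
    J = cost_samples(K)
    return J.mean() + theta * J.var()

best = min(np.linspace(0.2, 1.5, 14), key=utility)
print("gain minimizing the mean-variance utility:", round(best, 2))
```

    With `theta = 0` this reduces to the ordinary LQG objective; increasing `theta` trades expected cost for lower cost variability, which is the trade-off the article formalizes.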

  17. Replication Variance Estimation under Two-phase Sampling in the Presence of Non-response

    Directory of Open Access Journals (Sweden)

    Muqaddas Javed

    2014-09-01

    Full Text Available Kim and Yu (2011) discussed a replication variance estimator for two-phase stratified sampling. In this paper, estimators for the mean are proposed in two-phase stratified sampling for different situations of non-response at the first and second phases. The expressions for the variances of these estimators are derived. Furthermore, replication-based jackknife estimators of these variances are also derived. A simulation study is conducted to investigate the performance of the suggested estimators.
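    The building block behind replication variance estimation is the delete-one jackknife. The sketch below shows it for the simplest case, the variance of a sample mean under simple random sampling; the two-phase, non-response-adjusted versions in the paper add stratification and reweighting on top of this idea, and the data values are illustrative.

```python
import numpy as np

def jackknife_variance(y, estimator=np.mean):
    """Delete-one jackknife variance estimate for a statistic of y."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # leave-one-out replicate estimates
    reps = np.array([estimator(np.delete(y, i)) for i in range(n)])
    return (n - 1) / n * ((reps - reps.mean()) ** 2).sum()

y = np.array([2.0, 4.0, 6.0, 8.0])
print(jackknife_variance(y))   # for the mean this equals s^2 / n
```

    For the sample mean the jackknife reproduces the textbook estimator s²/n exactly; its value is that the same replicate machinery keeps working for estimators whose variance has no simple closed form.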

  18. A class of multi-period semi-variance portfolio for petroleum exploration and development

    Science.gov (United States)

    Guo, Qiulin; Li, Jianzhong; Zou, Caineng; Guo, Yujuan; Yan, Wei

    2012-10-01

    Variance is replaced by semi-variance in Markowitz's portfolio selection model. For dynamic valuation of exploration and development projects, one-period portfolio selection is extended to multiple periods. In this article, a class of multi-period semi-variance exploration and development portfolio models is formulated. In addition, a hybrid genetic algorithm, which uses the position-displacement strategy of the particle swarm optimiser as a mutation operation, is applied to solve the multi-period semi-variance model. For this class of portfolio model, numerical results show that the model is effective and feasible.
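    The downside-risk measure at the heart of the model can be sketched in a few lines. Semi-variance penalizes only outcomes below a benchmark (here the mean), so upside dispersion does not count as risk; the NPV figures below are made-up stand-ins for project outcomes.

```python
import numpy as np

def semi_variance(returns, benchmark=None):
    """Mean squared shortfall below the benchmark (default: the mean)."""
    r = np.asarray(returns, dtype=float)
    b = r.mean() if benchmark is None else benchmark
    downside = np.minimum(r - b, 0.0)   # keep only below-benchmark outcomes
    return (downside ** 2).mean()

# Illustrative NPV outcomes for one exploration project across scenarios
npv = np.array([1.2, -0.4, 0.8, 2.0, -1.0])
print("variance:     ", round(float(np.var(npv)), 5))
print("semi-variance:", round(semi_variance(npv), 5))
```

    Because only the two negative-deviation scenarios contribute, the semi-variance is strictly smaller than the full variance here; in a portfolio objective this rewards allocations whose risk is concentrated on the upside.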

  19. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

    directly. In the mean–variance bi-criterion objective function, risk appears directly; the objective also considers an ensemble of reservoir models and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio … optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk …
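    The bi-criterion objective described in the abstract can be written as J(u) = E[NPV] − λ·Var[NPV] over the ensemble. The sketch below evaluates it on two hypothetical production strategies; the NPV samples and the weight `lambda_` are invented, and in practice each ensemble member would come from a full reservoir simulation.

```python
import numpy as np

def mean_variance_objective(npv_ensemble, lambda_=0.5):
    """Risk-adjusted objective: mean NPV minus lambda_ times NPV variance
    over an ensemble of reservoir-model outcomes."""
    npv = np.asarray(npv_ensemble, dtype=float)
    return npv.mean() - lambda_ * npv.var()

# Two candidate strategies evaluated on a 5-member ensemble (illustrative):
npv_a = [10.0, 12.0, 11.0, 9.0, 13.0]   # lower mean, low spread
npv_b = [6.0, 20.0, 2.0, 18.0, 14.0]    # higher mean, high spread
obj_a = mean_variance_objective(npv_a)
obj_b = mean_variance_objective(npv_b)
print(obj_a, obj_b)
```

    Although strategy B has the higher expected NPV, its large spread across reservoir models makes its risk-adjusted objective worse, which is exactly the trade-off the mean–variance criterion encodes; setting `lambda_ = 0` recovers plain expected-NPV optimization.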

  20. The Variance between Recommended and Nursing Staff Levels at Womack Army Medical Center

    National Research Council Canada - National Science Library

    Holcek, Robert A

    2007-01-01

    .... This study considered five possible rationales for the existing variances - workload changes, staff experience, observation patients, recovery patients, and outpatient procedures - for 117 work...