WorldWideScience

Sample records for relative paleointensity estimation

  1. Paleointensity in ignimbrites and other volcaniclastic flows

    Science.gov (United States)

    Bowles, J. A.; Gee, J. S.; Jackson, M. J.

    2011-12-01

    Ash flow tuffs (ignimbrites) are common worldwide, frequently contain fine-grained magnetite hosted in the glassy matrix, and often have high-quality 40Ar/39Ar ages. This makes them attractive candidates for paleointensity studies, potentially allowing for a substantial increase in the number of well-dated paleointensity estimates. However, the timing and nature of remanence acquisition in ignimbrites are not sufficiently understood to allow confident interpretation of paleointensity data from ash flows. The remanence acquisition may be a complex function of mineralogy and thermal history. Emplacement conditions and post-emplacement processes vary considerably between and within tuffs and may potentially affect the ability to recover ancient field intensity information. To better understand the relevant magnetic recording assemblage(s) and remanence acquisition processes we have collected samples from two well-documented historical ignimbrites, the 1980 ash flows at Mt. St. Helens (MSH), Washington, and the 1912 flows from Mt. Katmai in the Valley of Ten Thousand Smokes (VTTS), Alaska. Data from these relatively small, poorly- to non-welded historical flows are compared to the more extensive and more densely welded 0.76 Ma Bishop Tuff. This sample set enables us to better understand the geologic processes that destroy or preserve paleointensity information so that samples from ancient tuffs may be selected with care. Thellier-type paleointensity experiments carried out on pumice blocks sampled from the MSH flows resulted in a paleointensity of 55.8 ± 0.8 μT (1 standard error). This compares favorably with the actual value of 56.0 μT. Excluded specimens of poor technical quality were dominantly from sites that were emplaced at low temperature. This suggests that welding at high (>600°C) temperatures does not corrupt the paleointensity signal, and additional data will be presented which explore this more fully.

  2. Relative Paleointensity of the Geomagnetic Field 12-20 kyr from Sediment Cores, Lake Moreno (Patagonia, Argentina)

    Science.gov (United States)

    Gogorza, C. S.; Irurzun, M. A.; Chaparro, M. A.; Lirio, J. M.; Nunez, H.; Sinito, A. M.

    2007-05-01

    Five cores, labeled Lmor1, Lmor2, Lmor3, Lmor98-1 and Lmor98-2, from the bottom sediments of Lake Moreno (south-western Argentina) have been used to estimate regional geomagnetic paleointensity. Lake Moreno is on the east side of the Andean Cordillera Patagónica; it is located in the Llao Llao area, San Carlos de Bariloche, Argentina (41° S, 71° 30'W). The following measurements were performed: Natural Remanent Magnetization (NRM), magnetic susceptibility at low and high frequency (specific, X, and volumetric, k), Isothermal Remanent Magnetization (IRM) up to the Saturation Isothermal Remanent Magnetization (SIRM), back field, and Anhysteretic Remanent Magnetization with a direct field of 0.1 mT and an alternating field between 2.5 and 100 mT (ARM100mT). Associated parameters were calculated: S-ratio, Remanent Coercive Field (BCR), anhysteretic volumetric susceptibility (kanh), SIRM/k, ARM100mT/k, and SIRM/ARM100mT. The rock magnetic studies indicate that the magnetic mineralogy of the clay-rich sediments is dominated by pseudo-single-domain magnetite in a narrow range of grain size (between 1 and 4 μm) and concentration (between 0.05 and 0.1%), thereby meeting established criteria for relative paleointensity studies. The remanent magnetization at 20 mT (NRM20mT) has been normalized using the anhysteretic remanent magnetization at 20 mT (ARM20mT), the saturation isothermal remanent magnetization at 20 mT (SIRM20mT), and k. A comparison of these results with relative paleointensity records obtained in previous works on Lake Escondido (Gogorza et al., 2004) and Lake El Trébol (Gogorza et al., 2006) provides detailed information about the disagreement observed between those records in the 12-20 kyr interval. References: Gogorza, C.S.G., J.M. Lirio, H. Nunez, M.A.E. Chaparro, H.R. Bertorello, A.M. Sinito. Paleointensity studies on Holocene-Pleistocene sediments from Lake Escondido, Argentina, Phys. Earth Planet. Inter. 145: 219-238, 2004. Gogorza, C.S.G., M.A. Irurzun
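
The normalization step described in this record (dividing the AF-cleaned NRM by a laboratory remanence that activates a similar magnetite population) can be sketched in a few lines. The values below are purely illustrative, not data from the Lake Moreno cores:

```python
import numpy as np

# Hypothetical demagnetized remanence values (arbitrary units) down-core;
# the names and numbers are illustrative, not from the Lake Moreno study.
nrm_20mT = np.array([12.0, 10.5, 9.8, 11.2])   # NRM after 20 mT AF cleaning
arm_20mT = np.array([60.0, 52.5, 49.0, 56.0])  # ARM after the same AF step

# Relative paleointensity proxy: normalizing the cleaned NRM by the ARM
# divides out down-core variations in magnetic mineral concentration,
# leaving a signal proportional to the ancient field strength.
rpi = nrm_20mT / arm_20mT
print(rpi)  # → [0.2 0.2 0.2 0.2]
```

In practice the same NRM is also normalized by SIRM and by susceptibility k, as in the abstract, and the proxies are compared to check that the result does not depend on the choice of normalizer.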

  3. Variations of the Geomagnetic Field During the Holocene-Pleistocene: Relative Paleointensity Records From South-Western Argentina

    Science.gov (United States)

    Gogorza, C. S.

    2008-05-01

    I present a review of the research carried out by the Group of Geomagnetism at Universidad Nacional del Centro (Argentina) on paleointensity records from bottom sediments of three lakes: Escondido (Gogorza et al., 2004), Moreno (Gogorza et al., 2006) and El Trébol (Gogorza et al., 2007; Irurzun et al., 2008) (South-Western Argentina, 41° S, 71° 30'W). Based on these studies, we construct a first relative paleointensity (RPI) stack for South-Western Argentina covering the last 21,000 14C years BP. The degree of down-core homogeneity of magnetic mineral content, as well as magnetic mineral concentration and grain size, varies between the lakes and is quantified by high-resolution rock magnetic measurements. Rock magnetic studies suggest that the main carriers of magnetization are ferrimagnetic minerals, predominantly pseudo-single-domain magnetite. The remanent magnetization at 20 mT (NRM20mT) was normalized using the anhysteretic remanent magnetization at 20 mT (ARM20mT), the saturation isothermal remanent magnetization at 20 mT (SIRM20mT) and the low-field magnetic susceptibility (k). Coherence function analysis indicates that the normalized records are free of environmental influences. Our paleointensity (NRM20mT/ARM20mT) versus age curve shows good agreement with published records from other parts of the world, suggesting that, in suitable sediments, paleointensity of the geomagnetic field can give a globally coherent, dominantly dipolar signal. References: Gogorza, C.S.G., Irurzun, M.A., Chaparro, M.A.E., Lirio, J.M., Nuñez, H., Bercoff, P.G., Sinito, A.M. Relative Paleointensity of the Geomagnetic Field over the last 21,000 years BP from Sediment Cores, Lake El Trébol (Patagonia, Argentina). Earth, Planets and Space, 58(10), 1323-1332, 2006. Gogorza, C.S.G., Sinito, A.M., Lirio, J.M., Nuñez, H., Chaparro, M.A.E., Bertorello, H.R. Paleointensity Studies on Holocene-Pleistocene Sediments from Lake Escondido, Argentina. Physics of the Earth and Planetary Interiors, Elsevier, ISSN

  4. Magnetic constraints on early lunar evolution revisited: Limits on accuracy imposed by methods of paleointensity measurements

    Science.gov (United States)

    Banerjee, S. K.

    1984-01-01

    It is impossible to carry out conventional paleointensity experiments on lunar samples, since the repeated heating and cooling to 770°C that they require produces chemical, physical, or microstructural changes. Non-thermal methods of paleointensity determination have therefore been sought: the two anhysteretic remanent magnetization (ARM) methods, and the saturation isothermal remanent magnetization (IRMS) method. Experimental errors inherent in these alternative approaches have been investigated to estimate the accuracy limits on the calculated paleointensities. Results are indicated in this report.

  5. Reading the muddy compass: relative paleointensities of the earth's magnetic field derived from deep-sea sediments

    NARCIS (Netherlands)

    Kok, Y.S.

    1998-01-01

    This thesis has been structured in three parts: Part I discusses three methodological studies, Part II addresses the saw-toothed pattern observed in some paleointensity records spanning the last 4 million years, and Part III examines geomagnetic paleointensity stacks.

  6. Absolute Paleointensity Techniques: Developments in the Last 10 Years (Invited)

    Science.gov (United States)

    Bowles, J. A.; Brown, M. C.

    2009-12-01

    The ability to determine variations in absolute intensity of the Earth’s paleomagnetic field has greatly enhanced our understanding of geodynamo processes, including secular variation and field reversals. Igneous rocks and baked clay artifacts that carry a thermal remanence (TRM) have allowed us to study field variations over timescales ranging from decades to billions of years. All absolute paleointensity techniques are fundamentally based on repeating the natural process by which the sample acquired its magnetization, i.e. a laboratory TRM is acquired in a controlled field, and the ratio of the natural TRM to that acquired in the laboratory is directly proportional to the ancient field. Techniques for recovering paleointensity have evolved since the 1930s from relatively unsophisticated (but revolutionary for their time) single step remagnetizations to the various complicated, multi-step procedures in use today. These procedures can be broadly grouped into two categories: 1) “Thellier-type” experiments that step-wise heat samples at a series of temperatures up to the maximum unblocking temperature of the sample, progressively removing the natural remanence (NRM) and acquiring a laboratory-induced TRM; and 2) “Shaw-type” experiments that combine alternating field demagnetization of the NRM and laboratory TRM with a single heating to a temperature above the sample’s Curie temperature, acquiring a total TRM in one step. Many modifications to these techniques have been developed over the years with the goal of identifying and/or accommodating non-ideal behavior, such as alteration and multi-domain (MD) remanence, which may lead to inaccurate paleofield estimates. From a technological standpoint, perhaps the most significant development in the last decade is the use of microwave (de)magnetization in both Thellier-type and Shaw-type experiments. By using microwaves to directly generate spin waves within the magnetic grains (rather than using phonons
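
The proportionality that underlies all of the techniques described above can be written as a one-line calculation. The function name and numeric values below are illustrative only, not from any particular study:

```python
# Sketch of the basic absolute-paleointensity relation: the ancient field is
# the lab field scaled by the ratio of natural TRM to laboratory TRM,
# assuming TRM intensity is linear in the applied field (valid for weak fields).
def ancient_field(nrm: float, trm_lab: float, b_lab_uT: float) -> float:
    """Estimate the ancient field (µT) from the NRM/TRM ratio."""
    return (nrm / trm_lab) * b_lab_uT

# A sample whose NRM is 0.8 times the TRM it acquires in a 40 µT lab field
# records an ancient field of 32 µT.
print(ancient_field(0.8, 1.0, 40.0))  # → 32.0
```

Thellier-type and Shaw-type protocols differ in how they isolate the NRM and laboratory TRM (stepwise thermal vs. AF demagnetization), but both ultimately reduce to this ratio.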

  7. Paleointensity Behavior and Intervals Between Geomagnetic Reversals in the Last 167 Ma

    Science.gov (United States)

    Kurazhkovskii, A. Yu.; Kurazhkovskaya, N. A.; Klain, B. I.

    2018-01-01

    The results of comparative analysis of the behavior of paleointensity and polarity (intervals between reversals) of the geomagnetic field for the last 167 Ma are presented. Similarities and differences in the behavior of these characteristics of the geomagnetic field are discussed. It is shown that bursts of paleointensity and long intervals between reversals occurred at high mean values of paleointensity in the Cretaceous and Paleogene. However, there are differences between the paleointensity behavior and the reversal regime: (1) the characteristic times of paleointensity variations are shorter than the characteristic times of variations in the frequency of geomagnetic reversals; (2) the attainment of maximum paleointensity values at the Cretaceous-Paleogene boundary and the termination of paleointensity bursts after 45-40 Ma are not marked by explicit features in the geomagnetic polarity behavior.

  8. Tsunakawa-Shaw method - an absolute paleointensity technique using alternating field demagnetization

    Science.gov (United States)

    Yamamoto, Y.; Mochizuki, N.; Shibuya, H.; Tsunakawa, H.

    2015-12-01

    Among geologic materials, volcanic rocks have typically been used to deduce absolute paleointensity. In the last decade, however, an emerging consensus holds that volcanic rocks are not ideal materials, owing to factors such as magnetic grains other than non-interacting single-domain particles. One approach to obtaining a good paleointensity estimate from such rocks is to reduce and correct for the non-ideality, suppress laboratory alteration, and screen out suspicious results. We have been working on the development and application of the Tsunakawa-Shaw method, previously called the LTD-DHT Shaw method. This method is an AF (alternating field)-based technique, and a paleointensity is thus estimated using coercivity spectra. To reduce the non-ideality, all remanences undergo low-temperature demagnetization (LTD) before any AF demagnetization, to remove a multidomain-like component. To correct for the non-ideality, anhysteretic remanent magnetizations (ARMs) are imparted with their directions parallel to the natural remanent magnetizations and laboratory-imparted thermoremanent magnetizations (TRMs), and are measured before and after laboratory heating. These ARMs are used to correct for remanence anisotropies, possible interaction effects originating from the non-ideal grains, and TRM changes caused by laboratory alteration. TRMs are imparted by heating specimens above their Curie temperatures and then cooling to room temperature in a single step to simulate natural conditions. These cycles are done in vacuum to suppress laboratory alteration. Obtained results are judged against selection criteria, including a check of the validity of the ARM corrections. It has been demonstrated that successful paleointensities are obtained from historical lavas in Japan and Hawaii, and from baked clay samples from a reconstructed ancient kiln, with flow-mean precisions of 5-10%. In the case of old volcanic rocks, however, the method does not necessarily seem to be perfect.
We will summarize these points in

  9. The Multispecimen Method for Absolute Paleointensity Determination

    Science.gov (United States)

    Dekkers, M. J.; de Groot, L. V.; Monster, M.

    2015-12-01

    Paleointensity methods have seen large improvements in the 21st century. These include optimizing classic Thellier-style protocols along with establishing stringent sets of quality criteria, developing microwave excitation as an alternative to thermal treatment, selecting sample material that contains the most suitable remanence carriers (i.e. single-domain magnetic particles), calibrating non-heating paleointensity methods, and introducing the multispecimen paleointensity (MSP) protocol. An MSP experiment is carried out at one specific temperature selected to avoid thermochemical alteration; a series of specimens is heated and cooled in various applied furnace fields oriented parallel to the specimen's NRM. The furnace field value at which no change in NRM occurs is the paleofield. While the rationale of the MSP approach is surprisingly straightforward, some of the original claims (Dekkers and Böhnel, 2006) have since been shown to be untenable. This pertains to the claimed domain-state independence of the original MSP method, although the Fabian and Leonhardt (2010) extended protocol largely corrects for domain-state effects. Here we describe the optimal workflow for MSP experiments derived from our collection of historic flows from four volcanic edifices: Mt. Etna, Hawaii, the Canary Islands, and the Azores. By comparing the experimental outcomes from historic flows with known paleointensities, we found that technically acceptable experiments may yield overestimates, correct determinations, or underestimates of the paleofield. The so-called "ARM test" (de Groot et al., 2012) can distinguish between these three options. Based on TRM and ARM being analogues, this test compares ARM acquisition curves of sister samples before and after heating to the MSP experiment temperature. Simulated paleointensity experiments following this workflow consistently deliver the correct answer (Monster et al., submitted).
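
The core of the MSP idea described above (finding the lab field at which the net NRM change crosses zero) amounts to a zero-crossing of a linear fit. The Q values below are synthetic, generated for an assumed true paleofield of 30 µT; they are not data from the study:

```python
import numpy as np

# Multispecimen sketch: sister specimens are partly remagnetized at one
# temperature in different lab fields applied parallel to the NRM.
# The fractional NRM change is negative when the lab field is weaker than
# the paleofield and positive when stronger; the zero crossing is the answer.
lab_fields = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # applied fields, µT
q_msp = (lab_fields - 30.0) / 30.0                     # synthetic, ideal response

slope, intercept = np.polyfit(lab_fields, q_msp, 1)    # linear fit Q(B_lab)
paleofield = -intercept / slope                        # field where Q = 0
print(round(paleofield, 1))  # → 30.0
```

Real specimens scatter about this line, and domain-state effects bias the slope, which is what the Fabian and Leonhardt (2010) corrections and the ARM test are designed to catch.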

  10. Paleointensities of the Auckland Excursion from Volcanic Rocks in New Zealand

    Science.gov (United States)

    Mochizuki, N.; Tsunakawa, H.; Shibuya, H.; Cassidy, J.; Smith, I. E.

    2001-12-01

    Shibuya et al. (1992) reported the Auckland excursion from several basaltic lava flows of monogenetic volcanic centers (Auckland Volcanic Field, New Zealand). The Auckland excursion was recorded in five centers in three intermediate direction groups: north-down, west, and south. We carried out paleointensity and rock-magnetic studies in order to obtain the absolute paleointensities associated with the three intermediate geomagnetic fields. Thermomagnetic analyses indicated typical Curie temperatures of 150-200, 450-500 and/or 550-580°C. The Day plot (Day et al., 1977) showed a linear trend in the pseudo-single-domain range of magnetic carriers. Those results, combined with reflection microscope observations, identified the magnetic carriers as titanomagnetites with wide variation in titanium content and grain size. First, Coe's version of the Thellier method (Coe, 1967) was applied to the samples. Several samples seemed to give paleointensities ranging from 3.2 to 6.4 μT (Shibuya and Cassidy, 1995 AGU fall meeting), but they were often affected by thermal alteration in the furnace even from fairly low temperature steps such as 200°C. We were forced to introduce a correction for thermal alteration in laboratory heating, using the low-temperature part of the Arai plot. We therefore applied the double heating technique (DHT) of the Shaw method (Tsunakawa and Shaw, 1994), which is capable of detecting inappropriate results through the ARM correction, to the samples. Low-temperature demagnetization (LTD) was combined with DHT (Yamamoto et al., submitted) before AF demagnetization, and samples were heated in a vacuum of 10-100 Pa. Sixty-one samples from the five lava flows were subjected to the LTD-DHT Shaw method. Twenty-three of these samples yielded successful results passing the selection criteria. Five out of six paleointensities from the Crater Hill lava were consistent with each other. The mean paleointensity was 10.9 ± 1.9 μT (N=5) for the Crater Hill

  11. Absolute paleointensity of the Earth's magnetic field during Jurassic: case study of La Negra Formation (northern Chile)

    Science.gov (United States)

    Morales, Juan; Goguitchaichvili, Avto; Alva-Valdivia, Luis M.; Urrutia-Fucugauchi, Jaime

    2003-08-01

    We carried out a detailed rock-magnetic and paleointensity study of the ~187-Ma volcanic succession from northern Chile. A total of 32 consecutive lava flows (about 280 oriented standard paleomagnetic cores) were collected at the Tocopilla locality. Only 26 samples with apparently preserved primary magnetic mineralogy and without secondary magnetization components were pre-selected for Thellier paleointensity determination. Eleven samples coming from four lava flows yielded reliable paleointensity estimates. The flow-mean virtual dipole moments range from 3.7 ± 0.9 to 7.1 ± 0.5 × 10²² A m². This corresponds to a mean value of (5.0 ± 1.8) × 10²² A m², which is in reasonably good agreement with other comparable-quality paleointensity determinations from the Middle Jurassic. Given the large dispersion and the very poor distribution of reliable absolute intensity data, it is hard to draw any firm conclusions regarding the time evolution of the geomagnetic field. To cite this article: J. Morales et al., C. R. Geoscience 335 (2003).
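
The virtual (axial) dipole moments quoted in records like this one are computed from a site paleointensity and latitude with the standard dipole formula. A minimal sketch, using standard constants; the 30 µT equatorial field is an illustrative input, not a value from the study:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T·m/A
R_EARTH = 6.371e6          # Earth radius, m

def vadm(b_tesla: float, lat_deg: float) -> float:
    """Virtual axial dipole moment (A·m²) for a field B at geographic latitude.

    Uses the dipole relation B = (µ0 m / 4π R³) · sqrt(1 + 3 sin²λ),
    inverted for the moment m.
    """
    lam = math.radians(lat_deg)
    return (4 * math.pi * R_EARTH**3 / MU0) * b_tesla / math.sqrt(1 + 3 * math.sin(lam)**2)

# 30 µT at the equator corresponds to a moment near the present-day ~7.8 × 10²² A·m².
print(f"{vadm(30e-6, 0.0):.2e}")  # → 7.76e+22
```

This is why a flow-mean intensity and its site latitude are enough to place a result on the same axis as the present-day dipole moment, as done in the abstract above.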

  12. Intrinsic paleointensity bias and the long-term history of the geodynamo.

    Science.gov (United States)

    Smirnov, Aleksey V; Kulakov, Evgeniy V; Foucher, Marine S; Bristol, Katie E

    2017-02-01

    Many geodynamo models predict an inverse relationship between geomagnetic reversal frequency and field strength. However, most of the absolute paleointensity data, obtained predominantly by the Thellier method from bulk volcanic rocks, fail to confirm this relationship. Although low paleointensities are commonly observed during periods of high reversal rate (notably, in the late Jurassic), higher than present-day intensity values are rare during periods of no or few reversals (superchrons). We have identified a fundamental mechanism that results in a pervasive and previously unrecognized low-field bias that affects most paleointensity data in the global database. Our results provide an explanation for the discordance between the experimental data and numerical models, and lend additional support to an inverse relationship between the reversal rate and field strength as a fundamental property of the geodynamo. We demonstrate that the accuracy of future paleointensity analyses can be improved by integration of the Thellier protocol with low-temperature demagnetizations.

  13. Chemical magnetization when determining Thellier paleointensity experiments in oceanic basalts

    Science.gov (United States)

    Tselebrovskiy, Alexey; Maksimochkin, Valery

    2017-04-01

    The natural remanent magnetization (NRM) of oceanic basalts sampled in the rift zones of the Mid-Atlantic Ridge (MAR) and the Red Sea has been explored. Laboratory simulation shows that the thermoremanent magnetization and chemical remanent magnetization (CRM) in oceanic basalts may be separated by using a Thellier-Coe experiment. It was found that the rate of CRM destruction is about four times lower than the rate of partial thermoremanent magnetization formation in Thellier cycles. The blocking temperature spectrum of the chemical component is shifted toward higher temperatures in comparison with the spectrum of the primary thermoremanent magnetization. It was revealed that the contribution of the chemical component to the NRM increases with the age of the oceanic basalts, determined with the analysis of the anomalous geomagnetic field (AGF) and spreading theory: CRM is less than 10% in basalts aged 0.2 million years, less than 50% in basalts aged 0.35 million years, and from 60 to 80% in basalts aged 1 million years [1]. Geomagnetic field paleointensity (Hpl) has been determined from the remanent magnetization of basalt samples of different ages related to the Brunhes, Matuyama and Gauss periods of geomagnetic field polarity. The value of Hpl ranges from 17.5 to 42.5 A/m for basalts of the southern segment of the MAR, from 20.3 to 44 A/m for the Reykjanes Ridge basalts, and from 21.7 to 34.1 A/m for the Bouvet Ridge basalts. VADM values calculated from these data are in good agreement with the international paleointensity database [2] and the PISO-1500 model [3]. References: 1. Maksimochkin V., Tselebrovskiy A. (2015) The influence of the chemical magnetization of oceanic basalts on determining the geomagnetic field paleointensity by the Thellier method, Moscow University Physics Bulletin, 70(6):566-576. 2. Perrin, M., E. Schnepp, and V. Shcherbakov (1998), Update of the paleointensity database, Eos Trans. AGU, 79, 198. 3. Channell JET, Xuan C, Hodell DA (2009

  14. Testing the Multispecimen Absolute Paleointensity Method with Archaeological Baked Clays and Bricks: New Data for Central Europe

    Science.gov (United States)

    Schnepp, Elisabeth; Leonhardt, Roman

    2014-05-01

    The domain-state corrected multiple-specimen paleointensity determination technique (MSP-DSC, Fabian & Leonhardt, EPSL 297, 84, 2010) has been tested on archaeological baked clays and bricks. The following procedure was applied: (1) Exclusion of secondary overprints using alternating field (AF) or thermal demagnetization, and assignment of the characteristic remanent magnetization (ChRM) direction. (2) Determination of magnetomineralogical alteration using anhysteretic remanent magnetization (ARM) or temperature dependence of susceptibility. (3) Measurement of the ARM anisotropy tensor and calculation of the ancient magnetic field direction. (4) Subjection of sister specimens to the MSP-DSC technique, aligned (anti-)parallel to the ancient magnetic field direction. (5) Application of several checks in order to exclude data points from further evaluation: (a) accuracy of orientation (< 10°), (b) absence of secondary components (< 10°), (c) use of a considerable NRM fraction (20 to 80%), (d) weak alteration (smaller than for domain-state change), and finally (e) the domain-state correction was applied. Bricks and baked clays from archaeological sites with ages between 645 BC and 2003 AD have been subjected to MSP-DSC absolute paleointensity (PI) determination. The aims of the study are to check the precision and reliability of the method. The obtained PI values are compared with direct field observations, the IGRF, the GUFM1, or Thellier results. The Thellier experiments often show curved lines, and pTRM checks fail at higher temperatures. Nevertheless, straight lines have been obtained in the low-temperature range, but they provide scattered paleointensity values. Mean paleointensities have relative errors often exceeding 10% and are therefore not considered high-quality PI estimates. MSP-DSC experiments for the structures older than 300 years are still in progress.
The paleointensities obtained from the MSP-DSC experiments for the young materials (after 1700 AD) have small relative errors of a

  15. Climatic influence in NRM and 10Be-derived geomagnetic paleointensity data

    NARCIS (Netherlands)

    1999-01-01

    One can determine geomagnetic paleointensities from natural remanent magnetizations (NRM) and by inverting production rates of cosmogenic isotopes such as 10Be and 14C. Recently, two independently derived 200-kyr stacks [Y. Guyodo, J.-P. Valet, Relative variations in geomagnetic intensity from

  16. A comparison of Thellier-type and multispecimen paleointensity determinations on Pleistocene and historical lava flows from Lanzarote (Canary Islands, Spain)

    Science.gov (United States)

    Calvo-Rathert, Manuel; Morales-Contreras, Juan; Carrancho, Ángel; Goguitchaichvili, Avto

    2016-09-01

    Sixteen Miocene, Pleistocene, and historic lava flows have been sampled in Lanzarote (Canary Islands) for paleointensity analysis with both the Coe and multispecimen methods. Besides obtaining new data, the main goal of the study was the comparison of paleointensity results determined with two different techniques. Characteristic Remanent Magnetization (ChRM) directions were obtained in 15 flows, and 12 were chosen for paleointensity determination. In Thellier-type experiments, a selection of reliable paleointensity determinations (43 of 78 studied samples) was performed using sets of criteria of different stringency, trying to relate the quality of results to the strictness of the chosen criteria. Uncorrected and fraction and domain-state corrected multispecimen paleointensity results were obtained in all flows. Results with the Coe method on historical flows either agree with the expected values or show moderately lower ones, but multispecimen determinations display a large deviation from the expected result in one case. No relation can be detected between correct or anomalous results and paleointensity determination quality or rock-magnetic properties. However, results on historical flows suggest that agreement between both methods could be a good indicator of correct determinations. Comparison of results obtained with both methods on seven Pleistocene flows yields an excellent agreement in four and disagreements in three cases. Pleistocene determinations were only accepted if either results from both methods agreed or a result was based on a sufficiently large number (n > 4) of individual Thellier-type determinations. In most Pleistocene flows, a VADM around 5 × 10²² Am² was observed, although two flows displayed higher values around 9 × 10²² Am².

  17. Paleointensities on 8 ka obsidian from Mayor Island, New Zealand

    Directory of Open Access Journals (Sweden)

    A. Ferk

    2011-11-01

    The 8 ka BP (6050 BCE) pantelleritic obsidian flow on Mayor Island, Bay of Plenty, New Zealand, has been investigated using 30 samples from two sites. Due to a very high paramagnetic/ferromagnetic ratio, it was not possible to determine the remanence carriers, despite the samples having been studied intensively at low, room, and high temperatures. We infer that a stable remanence within the samples is carried by single-domain or close to single-domain particles. Experiments to determine the anisotropy of the thermoremanence tensor and the dependency on cooling rate were hampered by alteration resulting from the repeated heating of the samples to temperatures just below the glass transition. Nonetheless, a well-defined mean paleointensity of 57.0 ± 1.0 μT, based on individual high-quality paleointensity determinations, was obtained. This field value compares very well to a paleointensity of 58.1 ± 2.9 μT, which Tanaka et al. (2009) obtained for 5500 BCE at a site 100 km distant. Agreement with geomagnetic field models, however, is poor. It is thus very important to gather more high-quality paleointensity data for the Pacific region, and for the southern hemisphere in general, to better constrain global field models.

  18. Saw-toothed pattern of sedimentary paleointensity records explained by cumulative viscous remanence

    NARCIS (Netherlands)

    Kok, Yvo S.; Tauxe, Lisa

    1996-01-01

    The relative paleointensity of the earth's magnetic field from ODP Site 851 has been characterized by progressive decay towards polarity reversals, followed by sharp recovery of pre-reversal values [1]. We resampled the Gilbert-Gauss reversal boundary of this deep-sea core, and show that during

  19. Equatorial Paleointensities from Kenya and the Well-behaved Geocentric Axial Dipole

    Science.gov (United States)

    Wang, H.; Kent, D. V.

    2017-12-01

    A previous study of Plio-Pleistocene lavas from the equatorial Galapagos Islands (latitude 1°S) that used an adjustment for multidomain (MD) effects [Wang and Kent, 2013 G-cubed] obtained a mean paleointensity of 21.6 ± 11.0 µT (1σ, same in the following) from 27 lava flows [Wang et al., 2015 PNAS]. This is about half of the present-day value. Here, in a pilot study to check this result, we utilized previously thermally demagnetized specimens of Plio-Pleistocene lavas from the Mt. Kenya region (latitude 0°) and fresh specimens from the Loiyangalani region (latitude 3°N) of Kenya, previously studied for paleosecular variation [Opdyke et al., 2010 G-cubed], for paleointensity work. We selected 2-3 specimens from each of 30 lava sites in the Mt. Kenya region and 31 lava sites in the Loiyangalani region with coherent directions and no indications of having been struck by severe lightning. Rock magnetic data show that the main magnetization carriers are fine-grained pseudo-single-domain magnetite with saturation remanence to saturation magnetization ratios (Mr/Ms) ranging from 0.05 to 0.60 [Opdyke et al., 2010 G-cubed]. Our preliminary MD-adjusted paleointensity results (Loiyangalani specimens with the tTRM thermal alteration check [Wang and Kent, 2013 G-cubed]; Mt. Kenya specimens with an alternate thermal alteration check) show overall mean values of 15.3 ± 5.7 µT for the Mt. Kenya region (from 7 lava flows) and 16.4 ± 5.2 µT for the Loiyangalani region (from 8 lava flows). Along with paleointensities from Antarctica (latitude 78°S, 33.4 ± 13.9 µT from 38 lava flows) [Lawrence et al., 2009 G-cubed], Iceland (latitude 64°N, 37.7 ± 14.2 µT from 10 lava flows) [Cromwell et al., 2015 JGR] and Galapagos [Wang et al., 2015 PNAS], our preliminary Kenya lava results support a geocentric axial dipole (GAD) model of the time-averaged field in both direction (tan[inclination] = 2×tan[latitude]) and paleointensity (equatorial
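
The GAD relation quoted in this record, tan(inclination) = 2 × tan(latitude), is easy to evaluate at the latitudes the abstract discusses. A quick numeric check of the formula only; the outputs come from the relation itself, not from the data in the study:

```python
import math

# Geocentric axial dipole (GAD) prediction for the time-averaged field
# inclination at a given geographic latitude: tan(I) = 2 * tan(lat).
def gad_inclination(lat_deg: float) -> float:
    return math.degrees(math.atan(2 * math.tan(math.radians(lat_deg))))

print(round(gad_inclination(0.0), 1))   # → 0.0   (equator: horizontal field)
print(round(gad_inclination(64.0), 1))  # → 76.3  (steep field at Iceland's latitude)
```

The same dipole geometry predicts the equatorial field to be half the polar field, which is why equatorial paleointensity averages such as the Kenya and Galapagos results are a sensitive test of the GAD hypothesis.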

  20. Paleointensity determination on Neoarchaean dikes within the Vodlozerskii terrane of the Karelian craton

    Science.gov (United States)

    Shcherbakova, V. V.; Lubnina, N. V.; Shcherbakov, V. P.; Zhidkov, G. V.; Tsel'movich, V. A.

    2017-09-01

    The results of paleomagnetic studies and paleointensity determinations from two Neoarchaean Shala dikes with an age of 2504 Ma, located within the Vodlozerskii terrane of the Karelian craton, are presented. The characteristic components of primary magnetization with shallow inclinations I = -5.7° and 1.9° are revealed; the reliability of the determinations is supported by two contact tests. High paleointensity values are obtained by the Thellier-Coe and Wilson techniques. The calculated values of the virtual dipole moment, (11.5 and 13.8) × 10²² A m², are noticeably higher than the present value of 7.8 × 10²² A m². Our results, in combination with the previous data presented in the world database, support the hypothesized existence of a period of high paleointensity in the Late Archaean-Early Proterozoic.

  1. Absolute paleointensities during a mid Miocene reversal of the Earth's magnetic field recorded on Gran Canaria (Canary Islands)

    Science.gov (United States)

    Leonhardt, R.; Soffel, H. C.

    2001-12-01

    An extensive paleointensity study was carried out on an approximately 14.1 Myr old reverse-to-normal transition of the geomagnetic field. One hundred eighty-eight samples from a mid-Miocene volcanic sequence on Gran Canaria (Canary Islands) were subjected to Thellier-type paleointensity determinations. Samples for paleointensity experiments were selected on the basis of high Curie temperatures, low viscosity indices, and limited variations of the remanence-carrying magnetic content during thermal treatment. A modified Thellier technique, which facilitates the recognition of MD tails and of the formation of new magnetic remanences with higher blocking temperatures than the actual heating step, was used on the majority of the samples. The application of this technique proved very successful, and we obtained reliable paleointensity results for 35% of the 87 sampled lava flows. In general, the intensity of the reversed and normal magnetized parts of the sequence, before and after the transition, is lower than the field intensity expected for the mid-Miocene. This observation is very likely related to a long-term reduction of the field close to transitions. The mean field intensity after the reversal (~17 μT) is about twice the value recorded in the rocks prior to the reversal, which points to a fast recovery of the dipolar structure of the field after this reversal. Very low paleointensities, with values <5 μT, were obtained during an excursion preceding the actual transition and also close to significant changes of the local field directions during the reversal. This is interpreted as non-dipolar components becoming dominant for short periods and provoking rapid changes of local field directions. During the transition, 15 successive lava flows recorded similar local field directions corresponding to a cluster of virtual geomagnetic poles close to South America. Chronologically, within this cluster the paleointensity increases from about 9
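
    In Thellier-type determinations such as these, the paleointensity follows from the slope of the Arai plot (NRM remaining versus pTRM gained), scaled by the laboratory field: B_anc = |slope| × B_lab. A minimal sketch on ideal synthetic data (the 17 μT / 40 μT numbers are illustrative, echoing the post-reversal mean above, not the study's measurements):

```python
def thellier_paleointensity(nrm_remaining, ptrm_gained, b_lab_ut):
    """Ordinary least-squares slope of the Arai plot; paleointensity = |slope| * B_lab."""
    n = len(nrm_remaining)
    mx = sum(ptrm_gained) / n
    my = sum(nrm_remaining) / n
    num = sum((x - mx) * (y - my) for x, y in zip(ptrm_gained, nrm_remaining))
    den = sum((x - mx) ** 2 for x in ptrm_gained)
    return abs(num / den) * b_lab_ut

# Ideal single-domain behavior: NRM lost is proportional to pTRM gained.
ptrm = [0.0, 0.1, 0.25, 0.5, 0.8, 1.0]          # normalized pTRM gained per step
nrm = [1.0 - (17.0 / 40.0) * x for x in ptrm]   # NRM remaining, B_anc = 17, B_lab = 40
```

Real determinations additionally require the alteration and MD-tail checks discussed above before an Arai slope is accepted.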

  2. Comparison of Thellier-type and multispecimen absolute paleointensities obtained on Miocene to historical lava flows from Lanzarote (Canary Islands, Spain)

    Science.gov (United States)

    Calvo-Rathert, M.; Morales, J.; Carrancho, Á.; Gogichaishvili, A.

    2015-12-01

    A paleomagnetic, rock-magnetic and paleointensity study has been carried out on 16 Miocene, Pleistocene, Quaternary and historical lava flows from Lanzarote (Canary Islands, Spain) with two main goals: (i) to compare paleointensity results obtained with two different techniques (Thellier-type and multispecimen) and (ii) to obtain new paleointensity data. Initial rock-magnetic experiments on selected samples from each site were carried out to identify the carriers of remanence and to determine their thermal stability and grain size. They included the measurement of thermomagnetic curves, hysteresis parameters and IRM acquisition curves. Mostly reversible but also some non-reversible curves were recorded in thermomagnetic experiments, with low-Ti titanomagnetite being the main carrier of remanence in most studied flows. Paleomagnetic analysis showed a single component in most cases, and a characteristic component could be determined in 15 flows, all displaying normal polarity. Eighty-three samples from 13 flows were chosen for paleointensity experiments. In order to compare paleointensity results from exactly the same samples, each sample was cut into smaller specimens so that in each case one specimen was available for a Thellier-type paleointensity determination, another for a multispecimen paleointensity experiment and another for rock-magnetic experiments. Thermomagnetic curves could therefore be measured on all samples subjected to paleointensity experiments. Thellier-type paleointensity determinations were performed with the Coe method between room temperature and 581°C on small specimens (0.9 cm diameter, 1 to 2.5 cm length). After heating, samples were left to cool naturally for several hours. Multispecimen paleointensity determinations were carried out using the method of Dekkers and Böhnel. The aforementioned sub-samples were cut into 8 specimens and pressed into salt pellets in order to obtain standard cylindrical specimens. A set of eight experiments

  3. The Influence of Cooling Rates on Paleointensity of Volcanic Glasses: an Experimental Approach on Synthetic Glass

    Science.gov (United States)

    von Aulock, F. W.; Ferk, A.; Leonhardt, R.; Hess, K.-U.; Dingwell, D. B.

    2009-04-01

    The suitability of volcanic glass for paleointensity determinations has been proposed in many studies in recent years. Besides the mainly single-domain magnetic remanence carriers and the pristine character of volcanic glass, this was also reasoned by the possibility to correct paleointensity data for cooling-rate dependency using relaxation geospeedometry. This method gives the cooling rate of a glass at the glass transition interval, which marks the change from a ductile supercooled liquid to a brittle glass. In this study the cooling-rate correction, as carried out for example by Leonhardt et al. (2006), is tested on synthetic volcanic glass. In order to obtain a stable multicomponent glass with ideal magnetic properties, a natural phonolitic glass from Tenerife (Spain) was melted to avoid heterogeneity and degassing. It was then tempered for 5 hours at 900 °C to yield a sufficient concentration of magnetic remanence carriers. To exclude nucleation or crystallisation, 7 samples were then heated to about 50 °C above the glass transition temperature at around 720 °C and quenched at different rates from 0.1 to 15 K/min. After carrying out a paleointensity experiment using a modified Thellier method, which incorporated alteration, additivity and tail checks, the dependence of the thermoremanence on cooling rate was investigated. Using the original cooling rates we corrected the data and obtained paleointensities of around 46 μT, a good approximation of the ambient field of 48 μT. Given that the uncorrected mean paleointensity is about 57 μT, this suggests that cooling-rate correction not only works, but is also a necessary tool to recover the true field value. Reference: R. Leonhardt, J. Matzka, A.R.L. Nichols, D.B. Dingwell, Cooling rate correction of paleointensity determination for volcanic glasses by relaxation geospeedometry, Earth and Planetary Science Letters 243 (2006) 282-292.
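
    The correction logic above can be illustrated with a deliberately simplified model: slower natural cooling enhances the acquired TRM, so the uncorrected estimate (57 μT here) overshoots the true field (48 μT) and must be scaled down. The linear-in-log10 form and the per-decade factor `c` below are assumptions for illustration only, not the relaxation-geospeedometry procedure of Leonhardt et al. (2006):

```python
import math

def cooling_rate_corrected(b_uncorrected_ut, rate_lab, rate_natural, c=0.05):
    """Scale down an uncorrected paleointensity (uT) by an assumed TRM enhancement
    that grows by a factor c per decade of cooling-rate ratio (lab / natural)."""
    enhancement = 1.0 + c * math.log10(rate_lab / rate_natural)
    return b_uncorrected_ut / enhancement
```

With equal rates the estimate is returned unchanged; two decades of rate contrast at c = 0.1 scales 57 μT to 47.5 μT, the size of adjustment seen in this experiment.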

  4. Rock Magnetic Properties, Paleosecular Variation Record and Relative Paleointensity Stack between 11 and 21 14C kyr B.P. From Sediment Cores, Lake Moreno (Argentina)

    Science.gov (United States)

    Gogorza, C. S.; Irurzun, M. A.; Lirio, J. M.; Nunez, H.; Chaparro, M. A.; Sinito, A. M.

    2008-05-01

    We conducted a detailed study of natural remanence and rock magnetic properties on sediment cores from Lake Moreno (south-western Argentina). Based on these measurements, we constructed a paleosecular variation (PSV) record (Irurzun et al., 2008) and a relative paleointensity stack for the period 11-21 14C kyr B.P. Declination and inclination logs of the characteristic remanent magnetization were obtained for the cores as a function of shortened depth. The data from all cores were combined into a composite record using the Fisher method. Comparison between the stacked inclination and declination records of Lake Moreno and results obtained in previous work on Lake Escondido (Gogorza et al., 1999; Gogorza et al., 2002) and Lake El Trébol (Irurzun et al., 2008) shows good agreement. This agreement made it possible to transform the stacked curves into time series spanning the interval 11-21 14C kyr B.P. Rock magnetic properties of the sediment cores showed uniform magnetic mineralogy and grain size, suggesting that they were suitable for relative paleointensity studies. The remanent magnetization at 20 mT (NRM20mT) was normalized using the anhysteretic remanent magnetization at 20 mT (ARM20mT), the saturation isothermal remanent magnetization at 20 mT (SIRM20mT) and the low-field magnetic susceptibility (k). Coherence analysis showed that the normalized records were not affected by local environmental conditions. The recorded pseudo-Thellier paleointensity was compared with records obtained from conventional normalizing methods. Comparing the paleointensity curves with those obtained previously from other lakes in the area has allowed us to reach reliable conclusions about centennial-scale features. References: Gogorza, C.S.G., Sinito, A.M., Di Tommaso, I., Vilas, J.F., Creer, K., Núnez, H. Holocene Geomagnetic Secular Variations Recorded by Sediments from Escondido lake (South Argentina). Earth, Planets and Space, V51(2), 93-106. 1999. Gogorza, C.S.G., Sinito, A
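
    The normalization step described above reduces, level by level, the measured NRM by a parameter that tracks the concentration of remanence carriers. A minimal sketch (variable names are illustrative; choosing among ARM, SIRM and k is exactly what the coherence analysis above is meant to vet):

```python
def relative_paleointensity(nrm_20mt, normalizer_20mt):
    """Relative paleointensity proxy: NRM after 20 mT AF demagnetization divided,
    level by level, by a concentration normalizer (ARM, SIRM, or susceptibility k)."""
    return [n / a for n, a in zip(nrm_20mt, normalizer_20mt)]

def unit_mean(series):
    """Scale a proxy record to unit mean so records built with different
    normalizers can be compared directly."""
    m = sum(series) / len(series)
    return [s / m for s in series]
```

The ratio removes concentration variations down-core; rescaling to unit mean lets the ARM-, SIRM- and k-normalized versions of the same record be overlaid for comparison.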

  5. Magnetic paleointensities in fault pseudotachylytes and implications for earthquake lightnings

    Science.gov (United States)

    Leibovitz, Natalie Ruth

    Fault pseudotachylytes commonly form by frictional melting due to seismic slip. These fine-grained clastic rocks result from melt quenching and may show a high concentration of fine ferromagnetic grains. These grains are potentially excellent recorders of the rock natural remanent magnetization (NRM). The magnetization processes of fault pseudotachylytes are complex and may include the following: i) near-coseismic thermal remanent magnetization (TRM) acquired upon cooling of the melt; ii) coseismic lightning-induced remanent magnetization (LIRM) caused by earthquake lightnings (EQL); iii) post-seismic chemical remanent magnetization (CRM) related to both devitrification and alteration. Deciphering these magnetization components is crucial to the interpretation of paleointensities, in order to see whether coseismic phenomena such as EQLs were recorded within these rocks. Hence the paleomagnetic record of fault pseudotachylytes provides an independent set of new constraints on coseismic events. Fault pseudotachylytes from the Santa Rosa Mountains, California host a magnetic assemblage dominated by stoichiometric magnetite, formed from the breakdown of ferromagnesian silicates and melt oxidation at high temperature. Magnetite grain size in these pseudotachylytes compares to that of magnetite formed in friction experiments. Paleomagnetic data on these 59 Ma-old fault rocks reveal not only anomalous magnetization directions, inconsistent with the coseismic geomagnetic field, but also anomalously high magnetization intensities. Here we discuss results of rock magnetism and paleointensity experiments designed to quantify the intensity of coseismic magnetizing fields. The REM' paleointensity method, previously tested on meteorites, is particularly well suited to investigate NRMs resulting from non-conventional and multiple magnetization processes. Overall findings indicate an isothermal remanent magnetization (IRM) in some, but not all, specimens taken from four different Santa Rosa

  6. New absolute paleointensity determinations for the Permian-Triassic boundary from the Kuznetsk Trap Basalts.

    Science.gov (United States)

    Kulakov, E.; Metelkin, D. V.; Kazansky, A.

    2015-12-01

    We report the results of a pilot absolute paleointensity study of the ~250 Ma basalts of the Kuznetsk traps (Kuznetsk Basin, Altai-Sayan folded area). The studied samples are characterized by a reversed polarity of natural remanent magnetization that corresponds to the lower part of the Siberian Trap basalt sequence. The geochemical similarity of the Kuznetsk basalts to those from the Norilsk region supports this interpretation. A primary origin of the thermal remanence in our samples is confirmed by a positive baked contact test. Rock magnetic analyses indicate that the ChRM is carried by single-domain titanomagnetite. The Coe version of the Thellier-Thellier double-heating method was utilized for the paleointensity determinations. In contrast to previous studies of the Permian-Triassic Siberian trap basalts, our data indicate that by the P-T boundary the paleofield intensity was relatively high, comparable with the geomagnetic field strength of the last 10 million years. These new results question the duration of the "Mesozoic dipole low".

  7. Transitional paleointensities from Kauai, Hawaii, and geomagnetic reversal models

    Science.gov (United States)

    Bogue, Scott W.; Coe, Robert S.

    1984-01-01

    Previously presented paleointensity results from an R-N transition zone in Kauai, Hawaii, show that the field intensity dropped from 0.431 Oe to 0.101 Oe while the field remained within 30° of the reversed axial dipole direction. A recovery in intensity and the main directional change followed this presumably short period of low field strength. As the reversal neared completion, the field had an intensity of 0.217 Oe while still 40° from the final direction. The relationship of paleointensity to field direction during the early part of the reversal thus differs from that toward the end, a feature consistent with only some reversal models. For example, a model in which a standing nondipole component persists through the dipole reversal predicts only symmetric intensity patterns. In contrast, zonal flooding models generate suitably complex field behavior if multiple flooding schemes operate during a single reversal or if the flooding process is itself asymmetric.

  8. Paleointensity determinations during the Akaroa polarity reversal, New Zealand: New input from the multispecimen parallel differential pTRM method

    Science.gov (United States)

    Camps, P.; Fanjat, G.; Poidras, T.; Hoffman, K. A.; Carvallo, C.; kennedy, B.

    2011-12-01

    We resampled two polarity reversals of late Miocene age (~9 Ma) recorded successively in the Akaroa volcano (Hoffman, 1986, Nature). Our main objective was to check old paleointensity determinations (Sherwood & Shaw, 1986, J. Geomag. Geoelec.) that yielded stronger values during the transitional period than during the stable periods that preceded and followed the reversals. This observation is opposite to what is generally observed; an increase in intensity during a reversal would provide an extreme example of increased secular variation. However, the experimental method used for determining the paleointensity, the method of Shaw, is strongly questioned by the scientific community, so a check of these data by the conventional Thellier method was required. Unfortunately, among the 72 sampled flows, only 4 yielded rock magnetic properties well suited for Thellier determinations. In most of the flows, the presence of large multidomain (MD) grains of Ti-magnetite, frequently associated with Ti-maghemite, precludes any Thellier paleointensity determination. We therefore implemented the domain-state-independent multispecimen parallel differential pTRM method (Dekkers & Bohnel, 2006, EPSL; Fabian & Leonhardt, 2010, EPSL) for 16 lava flows in which the MD Ti-magnetites are not oxidized. The Thellier paleointensities obtained do not confirm the Sherwood results but show more scattered intensity values, even during the stable periods of the field. To complete the dataset, multispecimen measurements are underway.
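
    The multispecimen idea can be sketched in a few lines: each specimen is heated once in a lab field applied parallel to its NRM, the fractional remanence change Q = (m1 - m0)/m0 is computed, and the ancient field is the zero-crossing of a line fitted to Q versus the lab field. A minimal sketch on idealized synthetic data (this is the basic Dekkers & Böhnel scheme only, without the domain-state corrections of Fabian & Leonhardt, 2010):

```python
def linfit(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def msp_paleointensity(b_lab, m0, m1):
    """Multispecimen protocol, minimal form: Q = (m1 - m0)/m0 per specimen,
    where m0 is the NRM and m1 the remanence after one in-field heating at b_lab.
    The ancient field is where the fitted Q(b_lab) line crosses zero."""
    q = [(a - b) / b for a, b in zip(m1, m0)]
    slope, intercept = linfit(b_lab, q)
    return -intercept / slope

# Synthetic specimens whose Q values cross zero at B_anc = 25 uT.
b_anc = msp_paleointensity([10.0, 20.0, 30.0, 40.0], [1.0] * 4, [0.4, 0.8, 1.2, 1.6])
```

The method needs only one heating per specimen, which is why it remains usable on the MD-bearing flows that fail Thellier selection.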

  9. Paleointensity Variation of The Earth's Magnetic Field Obtained from Neogene and Quaternary Volcanic Rocks in Central Anatolian Plateau

    Science.gov (United States)

    Kaya, Nurcan; Makaroǧlu, Özlem; Hisarlı, Z. Mümtaz

    2017-04-01

    We present the variation of the Earth's magnetic field intensity obtained from Neogene and Quaternary volcanic rocks located in the Central Anatolian plateau. A total of 450 volcanic rock samples were collected at eighteen different sites around the study region. A modified Thellier method including the Leonhardt protocol was used to determine paleointensity values. Paleointensity results from ten sites were accepted according to the confidence criteria. According to these first results, the average total paleointensity values (F) are 51.797±5.044 μT for sites NK8, NK17, NK18 and NK15, with ages of 4.4-10.7 Myr, and 51.91±4.651 μT for sites NK4, NK3, NK12, NK6, NK11 and NK14, with ages of 0.1-2.6 Myr. The average virtual dipole moments (VDMs) are 8.39×10²² and 8.92×10²² Am² for the four Neogene and six Quaternary sites, respectively. Our data were compared with IAGA database records obtained from the surrounding area; the comparison showed that the paleointensity data from the Central Anatolian plateau agree well with the IAGA data.

  10. Correlation and Stacking of Relative Paleointensity and Oxygen Isotope Data

    Science.gov (United States)

    Lurcock, P. C.; Channell, J. E.; Lee, D.

    2012-12-01

    The transformation of a depth-series into a time-series is routinely implemented in the geological sciences. This transformation often involves correlation of a depth-series to an astronomically calibrated time-series. Eyeball tie-points with linear interpolation are still regularly used, although these have the disadvantages of being non-repeatable and not based on firm correlation criteria. Two automated correlation methods are compared: the simulated annealing algorithm (Huybers and Wunsch, 2004) and the Match protocol (Lisiecki and Lisiecki, 2002). Simulated annealing seeks to minimize energy (cross-correlation) as "temperature" is slowly decreased. The Match protocol divides records into intervals, applies penalty functions that constrain accumulation rates, and minimizes the sum of the squares of the differences between two series while maintaining the data sequence in each series. Paired relative paleointensity (RPI) and oxygen isotope records, such as those from IODP Site U1308 and/or reference stacks such as LR04 and PISO, are warped using known warping functions, and then the un-warped and warped time-series are correlated to evaluate the efficiency of the correlation methods. Correlations are performed in tandem to simultaneously optimize RPI and oxygen isotope data. Noise spectra are introduced at differing levels to determine correlation efficiency as noise levels change. A third potential method, known as dynamic time warping, involves minimizing the sum of distances between correlated point pairs across the whole series. A "cost matrix" between the two series is analyzed to find a least-cost path through the matrix. This least-cost path is used to nonlinearly map the time/depth of one record onto the depth/time of another. Dynamic time warping can be expanded to more than two dimensions and used to stack multiple time-series. This procedure can improve on arithmetic stacks, which often lose coherent high-frequency content during the stacking process.
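
    The least-cost-path idea behind dynamic time warping can be sketched directly: fill a cumulative cost matrix and backtrack through it to obtain the nonlinear index mapping between two series. A minimal O(n·m) sketch with a squared-difference cost (a real RPI/δ18O application would add accumulation-rate penalties like those in the Match protocol):

```python
def dtw_path(a, b):
    """Dynamic time warping: cumulative cost matrix of squared differences,
    then a backtracked least-cost path mapping indices of `a` onto indices of `b`."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    # Backtrack from the end, always stepping to the cheapest predecessor.
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda ij: cost[ij[0]][ij[1]])
    path.reverse()
    return path
```

The returned index pairs define the warping function: one record's depth scale is mapped onto the other's time scale by following the matched pairs, which is the monotone, sequence-preserving mapping the text describes.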

  11. New paleomagnetic and paleointensity results from late pliocene volcanic sequences from southern Georgia (Caucasus)

    Energy Technology Data Exchange (ETDEWEB)

    Calvo-Rathert, Manuel; Bogalo, Maria-Felicidad; Carrancho, Angel; Villalain, Juan Jose [Universidad de Burgos, Burgos (Spain). Departamento de Fisica, EPS; Goguichaichvili, Avto [Universidad Nacional Autonoma de Mexico, Morelia (Mexico). Laboratorio de Magnetismo Natural, Instituto de Geofisica; Vegas-Tubia, Nestor [Universidad del Pais Vasco, Bilbao (Spain). Departamento de Geodinamica; Sologashvili, Jemal [Ivane Javakhishvili State University of Tbilisi, Tbilisi (Georgia). Department of Geophysics

    2009-07-01

    Complete text of publication follows. Paleomagnetic and rock-magnetic experiments were carried out on 21 basaltic lava flows belonging to four different sequences of late Pliocene age from southern Georgia (Caucasus): Dmanisi (11 flows), Diliska (5 flows), Kvemo Orozmani (5 flows), and Zemo Karabulaki (3 flows). Paleomagnetic analysis generally showed the presence of a single component (mainly in the Dmanisi sequence), but also two more or less superimposed components in several other cases. All sites except one clearly displayed a normal-polarity characteristic component. Rock-magnetic experiments included measurement of thermomagnetic curves and hysteresis parameters. Susceptibility-versus-temperature curves measured in argon atmosphere on whole-rock powdered samples yielded low-Ti titanomagnetite as the main carrier of remanence, although a lower-T_C component was also observed in several cases. Both reversible and non-reversible k-T curves were measured. A pilot paleointensity study was performed with the Coe (1967) method on two samples from each of the sites considered suitable after interpretation of the rock-magnetic and paleomagnetic data. The pilot study showed that reliable paleointensity results were mainly obtained from sites of the Dmanisi sequence. This thick sequence of basaltic lava flows records the upper end of the normal-polarity Olduvai subchron, a fact confirmed by 40Ar/39Ar dating of the uppermost lava flow and overlying volcanogenic ashes, which yields ages of 1.8 to 1.85 Ma. A second paleointensity experiment was carried out only on samples belonging to the Dmanisi sequence. Preliminary results show that the paleointensities are often low, in many cases lying between 10 and 20 μT; for comparison, the present-day field is 47 μT. The Dmanisi sequence of lava flows directly underlies the Dmanisi paleoanthropological site, in which the end of the Olduvai subchron is recorded.

  12. Archeomagnetic dating of the eruption of Xitle volcano (Mexico) from a reappraisal of the paleointensity with the MSP-DSC protocol.

    Science.gov (United States)

    Bravo-Ayala, Manuel; Camps, Pierre; Alva-Valdivia, Luis; Poidras, Thierry; Nicol, Patrick

    2014-05-01

    The Xitle volcano, located south of Mexico City, is a monogenetic volcano that produced seven lava flows within an interval of a few years. The age of these eruptions, estimated by means of radiocarbon dates on charcoal from beneath the flows, is still very poorly known, ranging from 4765±90 BC to 520±200 AD (see Siebe, JVGR, 2000 for a review). This lava field was emplaced over the archaeological city of Cuicuilco, whose occupation is estimated between 700 BC and 150 AD. Thus a question is still pending: is the downfall of Cuicuilco directly attributable to the eruption of Xitle? The answer appears to be negative if we consider the latest radiocarbon dating by Siebe (2000), which sets the age of the eruption to 280±35 AD, significantly later than the abandonment of the city. Because this new age has direct implications for the history of the movements of ancient populations in the Central Valley of Mexico, we propose in the present study to check this estimate by archaeomagnetic dating. Xitle lavas have been investigated several times for paleomagnetism, including directional analyses and absolute paleointensity determinations (see Alva, EPS, 57, 839-853, 2005 for a review). The characteristic remanence direction is precisely determined; it is much more difficult to estimate the paleointensity precisely with the Thellier method, with values scattering between 40 and 90 μT within a single flow (Alva, 2005). We propose here to estimate the paleointensity by means of the MSP-DSC protocol (Fabian and Leonhardt, 2010) with the new ultra-fast heating furnace FUReMAG developed in Montpellier (France). The sampling was performed along four profiles, one vertical through the entire thickness of the flow and three horizontal (at the top, middle and bottom of the flow). Our preliminary results show that there is no difference between the values found in the different profiles, all providing a value around 62 μT. The comparison of our results (Dec = 359.0°, Inc = 35.2

  13. Magnetic paleointensities recorded in fault pseudotachylytes and implications for earthquake lightnings

    Science.gov (United States)

    Leibovitz, Natalie; Ferré, Eric; Geissman, John; Gattacceca, Jérôme

    2015-04-01

    Fault pseudotachylytes commonly form by frictional melting due to seismic slip. These fine-grained clastic rocks result from melt quenching and may show a high concentration of fine ferromagnetic grains. These grains are potentially excellent recorders of the rock natural remanent magnetization (NRM). The magnetization processes of fault pseudotachylytes are complex and may include the following: i) near coseismic thermal remanent magnetization (TRM) acquired upon cooling of the melt; ii) coseismic lightning induced remanent magnetization (LIRM) caused by earthquake lightnings (EQL); iii) post seismic chemical remanent magnetization (CRM) related to both devitrification and alteration. Deciphering these magnetization components is crucial to the interpretation of microstructures and the timing of microstructural development. Hence the paleomagnetic record of fault pseudotachylytes provides an independent set of new constraints on coseismic and post-seismic deformation. Fault pseudotachylytes from the Santa Rosa Mountains, California host a magnetic assemblage dominated by stoichiometric magnetite, formed from the breakdown of ferromagnesian silicates and melt oxidation at high temperature. Magnetite grain size in these pseudotachylytes compares to that of magnetites formed in friction experiments. Paleomagnetic data on these 59 Ma-old fault rocks reveal not only anomalous magnetization directions, inconsistent with the coseismic geomagnetic field, but also anomalously high magnetization intensities. Here we discuss preliminary results of paleointensity experiments designed to quantify the intensity of coseismic magnetizing fields. The REM' paleointensity method is particularly well suited to investigate NRMs resulting from non-conventional and multiple magnetization processes. The anomalously high NRM recorded in a few, but not all, specimens points to LIRM as the dominant origin of magnetization.

  14. 10Be and relative paleointensity signals across the last geomagnetic reversal

    Science.gov (United States)

    Savranskaia, T.; Valet, J. P.; Bassinot, F. C.; Meynadier, L.; Simon, Q.; Bourles, D. L.; Thouveny, N.; Thevarasan, A.; Villedieu, A.; Choy, S.; Gacem, L.

    2017-12-01

    Two techniques can be used to determine the evolution of the geomagnetic field intensity in the past. The first relies on records of relative paleointensity (RPI) in sediments. Although they remain relatively sparse, detailed records of 10Be production (expressed in terms of 10Be/9Be) provide an alternative approach. However, the integration of 10Be within the sediment is no better understood than the magnetization process, and therefore paleofield studies should greatly benefit from the integration of both datasets. In order to achieve this goal, it is crucial to compare and analyze the signals over a common time period. We selected five sedimentary cores from the Indian, Pacific and Atlantic Oceans and focused on the last reversal, which is characterized by the largest intensity changes. Since 10Be is homogenized in the atmosphere, the same amount of 10Be should be recorded everywhere. We found different amounts of 10Be at each site during the last reversal, which appear roughly correlated with accumulation rate. In contrast, the 10Be amplitude is similar at all locations, while higher-amplitude signals are expected for low deposition rates. Taking advantage of the distribution of tektite layers, the beryllium signals have been deconvolved, but this procedure did not strikingly change the results. Despite atmospheric mixing, we wonder whether 10Be production was slightly different at each location in the presence of a multipolar transitional field. The comparison between the 10Be and RPI signals reveals large similarities but also puzzling differences. In particular, the relationship between the two signals is not the same during periods of stable polarity as during the transitional interval. A precursor with low intensity is present in several RPI records but not clearly marked in the beryllium records. We also addressed the question of a possible offset between the two signals that would be indicative of a delayed magnetization acquisition. After correlating and

  15. Geomagnetic Paleointensity Variations as a Cheap, High-Resolution Geochronometer for Recent Mid-Ocean Ridge Processes

    Science.gov (United States)

    DYMENT, J.; HEMOND, C.

    2001-12-01

    of the data confirms the quality of the oceanic crust as a recorder of the geomagnetic variations. Future work in the framework of Project GIMNAUT includes 1) the processing and interpretation of the available magnetic signals to obtain a detailed sequence of the geomagnetic fluctuations for the last 800 ka; 2) the dating of collected samples with different radiochronologic methods, such as K-Ar and Ar-Ar for samples older than 100-150 ka and 230Th-238U for samples aged between 10 and 300 ka; and 3) the calibration of the geomagnetic intensity variation sequence as a high-resolution geochronometer for the last 800 ka. Such a magnetic geochronometer would present an obvious interest for mid-ocean ridge studies because of its low cost and simplicity of operation: it would only require the addition of a deep-sea magnetometer to existing means of investigation such as submersibles, ROVs or AUVs. Beyond this application, this magnetic geochronometer could also be used for accurate dating of pelagic sedimentary sequences, through the analysis of relative paleointensities on cores, or of continental or island volcanic flows, through the determination of absolute paleointensities by the Thellier-Thellier method. (*) N. Arnaud, C. Bassoullet, M. Benoit, A. Briais, F. Chabaux, A.K. Chaubey, A. Chauvin, P. Gente, H. Guillou, H. Horen, M. Kitazawa, B. Le Gall, M. Maia, M. Ravilly

  16. Biogenic magnetite, detrital hematite, and relative paleointensity in Quaternary sediments from the Southwest Iberian Margin

    Science.gov (United States)

    Channell, J. E. T.; Hodell, D. A.; Margari, V.; Skinner, L. C.; Tzedakis, P. C.; Kesler, M. S.

    2013-08-01

    Magnetic properties of late Quaternary sediments on the SW Iberian Margin are dominated by bacterial magnetite, observed by transmission electron microscopy (TEM), with contributions from detrital titanomagnetite and hematite. Reactive hematite, together with low organic matter concentrations and the lack of sulfate reduction, leads to dissimilatory iron reduction and the availability of Fe(II) for abundant magnetotactic bacteria. Magnetite grain-size proxies (κARM/κ and ARM/IRM) and S-ratios (sensitive to hematite) vary on stadial/interstadial timescales, contain orbital power, and mimic planktic δ18O. The detrital/biogenic magnetite ratio and hematite concentration are greater during stadials and glacial isotopic stages, reflecting increased detrital (magnetite) input during times of lowered sea level, coinciding with atmospheric conditions favoring hematitic dust supply. Magnetic susceptibility, on the other hand, has a very different response, being sensitive to coarse detrital multidomain (MD) magnetite associated with ice-rafted debris (IRD). High susceptibility and/or magnetic grain-size coarsening mark Heinrich stadials (HS), particularly HS2, HS3, HS4, HS5, HS6 and HS7, as well as older Heinrich-like detrital layers, indicating the sensitivity of this region to fluctuations in the position of the polar front. Relative paleointensity (RPI) records have well-constrained age models based on planktic δ18O correlation to ice-core chronologies; however, they differ from reference records (e.g. PISO), particularly in the vicinity of glacial maxima, mainly due to inefficient normalization of RPI records in intervals of enhanced hematite input.

  17. Paleomagnetic direction and paleointensity variations during the Matuyama-Brunhes polarity transition from a marine succession in the Chiba composite section of the Boso Peninsula, central Japan

    Science.gov (United States)

    Okada, Makoto; Suganuma, Yusuke; Haneda, Yuki; Kazaoka, Osamu

    2017-03-01

    The youngest geomagnetic polarity reversal, the Matuyama-Brunhes (M-B) boundary, provides an important stratigraphic marker for sediments, ice cores, and lavas. The geomagnetic field intensity and directional changes that occurred during the reversal also provide important information for understanding the dynamics of the Earth's outer core, which generates the magnetic field. However, the reversal process is relatively rapid in terms of the geological timescale; therefore, adequate temporal resolution of the geomagnetic field record is essential for addressing these topics. Here, we report a new high-resolution paleomagnetic record from a continuous marine succession in the Chiba composite section of the Kokumoto Formation of the Kazusa Group, Japan, that reveals the detailed behavior of the virtual geomagnetic poles (VGPs) and relative paleointensity changes during the M-B polarity transition. The resultant relative paleointensity and VGP records show a significant paleointensity minimum near the M-B boundary, which is accompanied by a clear "polarity switch." A newly obtained high-resolution oxygen isotope chronology for the Chiba composite section indicates that the M-B boundary is located in the middle of marine isotope stage (MIS) 19 and yields an age of 771.7 ka for the boundary. This age is consistent with those based on the latest astronomically tuned marine and ice core records and with the recalculated age of 770.9 ± 7.3 ka deduced from the U-Pb zircon age of the Byk-E tephra. To the best of our knowledge, our new paleomagnetic data represent one of the most detailed records of this geomagnetic field reversal thus far obtained from marine sediments and will therefore be key for understanding the dynamics of the geomagnetic dynamo and for calibrating the geological timescale.

  18. Further details on the applicability of Thellier paleointensity method: The effect of magnitude of laboratory field

    Science.gov (United States)

    Morales, Juan; Goguitchaichvili, Avto; Alva-Valdivia, Luis M.; Urrutia-Fucugauchi, Jaime

    2006-06-01

    Twenty years after Tanaka and Kono's pioneering contribution (Tanaka and Kono, 1984), we give some new details on the effect of applied field strength during Thellier paleointensity experiments. Special attention is paid to the relation between the magnitude of the laboratory field and Coe's quality factors (Coe et al., 1978). Full thermoremanent magnetizations were imparted on natural samples containing low-Ti titanomagnetites of pseudo-single domain structure in a 40-μT magnetic field from 600 °C to room temperature. The samples were subjected to the routine Thellier procedure using a wide range of applied laboratory fields. Results indicate that the values of the laboratory fields may be accurately reproduced within a 2% standard error. The quality factors, however, decrease when the magnitude of the 'ancient' field does not match the applied laboratory field. To cite this article: J. Morales et al., C. R. Geoscience 338 (2006).

  19. A Two Million Year Equatorial Paleogeomagnetic and Relative Paleointensity Record from IODP Site U1489 in the West Pacific Warm Pool: Towards an Improved Tuning Target.

    Science.gov (United States)

    Hatfield, R. G.; Stoner, J. S.; Kumagai, Y.

    2017-12-01

    International Ocean Discovery Program (IODP) Expedition 363 drilled nine sites in the West Pacific Warm Pool in October-December 2016. IODP Site U1489 (02°07.19'N, 141°01.67'E, 3421 meters water depth), located on the Eauripik Rise, was drilled to a depth of 270 meters below sea floor using the advanced piston corer. Shipboard data revealed the upper 112 meters composite depth (mcd) consist of clay-rich nannofossil ooze and contain all twenty-two geomagnetic reversals over the last 5 million years (Myrs). Shipboard rock magnetic data and post-cruise hysteresis data suggest the paleomagnetic record is carried by fine-grained pseudo-single domain magnetite. A shipboard estimate of relative paleointensity (RPI) was generated by normalizing the natural remanent magnetization (NRM) intensity of the shipboard half-core measurement, after 15 mT peak alternating field (AF) demagnetization, by whole-round magnetic susceptibility (MS). Coherence of the NRM15mT/MS record with existing RPI stacks over the last 2 Myrs highlighted the potential for developing an RPI record back to the earliest Pliocene. Here we present the first u-channel measurements of the upper 40 mcd from Site U1489, spanning the last 2 Myrs. The NRM was measured at 1 cm intervals after stepwise AF demagnetization in peak fields of 15-100 mT. Component inclinations plot around that predicted by a geocentric axial dipole field, and maximum angular deviation values are so far generally < 3°, implying the paleomagnetic record is well resolved at Site U1489. Measurements of MS and anhysteretic remanent magnetization (ARM) characterize the environmental variability and provide a normalizer for the NRM to generate an estimate of RPI. The chronology is iteratively developed, initially based on polarity reversal boundaries, then by tuning MS to astronomical precession. We compare our RPI estimates to PISO-1500 and NARPI-2200, whose chronologies are based upon δ18O of benthic foraminifera, to assess the
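The normalization step described in the abstract (NRM intensity after partial AF demagnetization divided by a concentration-sensitive normalizer such as MS or ARM) amounts to a pointwise ratio down-core. A minimal sketch follows; the function name and the numerical values are illustrative assumptions, not data from the study:

```python
import numpy as np

def relative_paleointensity(nrm, normalizer):
    """Relative paleointensity (RPI) as the ratio of NRM intensity
    (after partial AF demagnetization) to a rock-magnetic normalizer
    such as ARM or magnetic susceptibility. Both inputs are 1-D
    arrays on the same depth scale; the result is dimensionless and
    meaningful only as a *relative* record."""
    nrm = np.asarray(nrm, dtype=float)
    norm = np.asarray(normalizer, dtype=float)
    if nrm.shape != norm.shape:
        raise ValueError("NRM and normalizer must share a depth scale")
    return nrm / norm

# Hypothetical down-core values (units illustrative only).
nrm_15mT = [2.0e-3, 1.5e-3, 2.4e-3]
arm = [1.0e-2, 1.0e-2, 1.2e-2]
rpi = relative_paleointensity(nrm_15mT, arm)  # ≈ [0.2, 0.15, 0.2]
```

The resulting series is then scaled against a reference stack (e.g. PISO-1500) rather than interpreted in absolute field units.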

  20. MSP-Tool: a VBA-based software tool for the analysis of multispecimen paleointensity data

    Science.gov (United States)

    Monster, Marilyn; de Groot, Lennart; Dekkers, Mark

    2015-12-01

    The multispecimen protocol (MSP) is a method to estimate the Earth's magnetic field's past strength from volcanic rocks or archeological materials. By reducing the amount of heating steps and aligning the specimens parallel to the applied field, thermochemical alteration and multi-domain effects are minimized. We present a new software tool, written for Microsoft Excel 2010 in Visual Basic for Applications (VBA), that evaluates paleointensity data acquired using this protocol. In addition to the three ratios (standard, fraction-corrected and domain-state-corrected) calculated following Dekkers and Böhnel (2006) and Fabian and Leonhardt (2010) and a number of other parameters proposed by Fabian and Leonhardt (2010), it also provides several reliability criteria. These include an alteration criterion, whether or not the linear regression intersects the y axis within the theoretically prescribed range, and two directional checks. Overprints and misalignment are detected by isolating the remaining natural remanent magnetization (NRM) and the partial thermoremanent magnetization (pTRM) gained and comparing their declinations and inclinations. The NRM remaining and pTRM gained are then used to calculate alignment-corrected multispecimen plots. Data are analyzed using bootstrap statistics. The program was tested on lava samples that were given a full TRM and that acquired their pTRMs at angles of 0, 15, 30 and 90° with respect to their NRMs. MSP-Tool adequately detected and largely corrected these artificial alignment errors.
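As a rough sketch of the ratio-and-regression logic behind the multispecimen protocol (not a reproduction of MSP-Tool itself, and omitting the fraction and domain-state corrections of Fabian and Leonhardt (2010)), the standard ratio can be fitted against the laboratory field and solved for its zero crossing:

```python
import numpy as np

def msp_db_paleointensity(h_lab, m0, m1):
    """Multispecimen (Dekkers-Böhnel style) estimate, as a sketch.

    For each specimen heated once and cooled in a lab field h_lab
    applied parallel to its NRM, form the standard ratio
        Q_DB = (m1 - m0) / m0,
    where m0 is the NRM and m1 the remanence after the in-field
    step. Q_DB vanishes when the lab field equals the ancient field,
    so the paleointensity is the root of a linear fit of Q_DB
    against h_lab."""
    h = np.asarray(h_lab, float)
    q = (np.asarray(m1, float) - np.asarray(m0, float)) / np.asarray(m0, float)
    slope, intercept = np.polyfit(h, q, 1)
    return -intercept / slope

# Synthetic ideal specimens with an "ancient" field of 40 uT and a
# heated NRM fraction f = 0.5: m1 = m0 * (1 + f*(H/40 - 1)).
H = np.array([20.0, 30.0, 50.0, 60.0])
m0 = np.ones_like(H)
m1 = 1.0 + 0.5 * (H / 40.0 - 1.0)
estimate = msp_db_paleointensity(H, m0, m1)  # ≈ 40 uT
```

In the full protocol, the y-intercept of this regression is itself a reliability check (it should fall in a theoretically prescribed range), which is one of the criteria MSP-Tool reports.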

  1. Experimental and numerical simulation of the acquisition of chemical remanent magnetization and the Thellier procedure

    Science.gov (United States)

    Shcherbakov, V. P.; Sycheva, N. K.; Gribov, S. K.

    2017-09-01

    Results of Thellier-Coe paleointensity experiments on samples containing a chemical remanent magnetization (CRM) created by thermal annealing of titanomagnetites are reported and compared with theoretical notions. For this purpose, Monte Carlo simulation of the process of CRM acquisition in a system of single-domain interacting particles was carried out; the paleointensity determination method based on the Thellier-Coe procedure was modeled; and the degree of paleointensity underestimation was quantitatively estimated from the experimental data and the numerical results. Both the experiments and the computer modeling suggest the following main conclusion: all the Arai-Nagata diagrams for CRM in the high-temperature area (in some cases up to the Curie temperature Tc) contain a relatively long quasi-linear interval on which it is possible to estimate the slope coefficient k and, therefore, the paleointensity. Hence, if chemical magnetization (or remagnetization) took place in the course of the magnetomineralogical transformations of titanomagnetite-bearing igneous rocks during long-lasting cooling or repeated heatings, it can lead to incorrect results in determining the intensity of the geomagnetic field in the geological past.
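The slope coefficient k mentioned above converts to a paleointensity via the standard Arai-diagram relation B_anc = |k| * B_lab. A minimal sketch, with invented ideal data points standing in for a real quasi-linear interval:

```python
import numpy as np

def arai_slope_paleointensity(nrm_remaining, ptrm_gained, b_lab):
    """Paleointensity from the quasi-linear segment of an
    Arai-Nagata diagram: fit NRM remaining against pTRM gained and
    scale the lab field by the absolute slope, B_anc = |k| * B_lab.
    Inputs are the points of the chosen linear interval only."""
    k, _ = np.polyfit(np.asarray(ptrm_gained, float),
                      np.asarray(nrm_remaining, float), 1)
    return abs(k) * b_lab

# Illustrative ideal interval: NRM lost is replaced by pTRM at a
# fixed ratio, giving slope -0.8 and hence 0.8 * 50 uT = 40 uT.
ptrm = [0.0, 0.25, 0.5, 0.75, 1.0]
nrm = [0.8, 0.6, 0.4, 0.2, 0.0]
b_anc = arai_slope_paleointensity(nrm, ptrm, 50.0)  # ≈ 40 uT
```

The underestimation discussed in the abstract arises because, for a CRM, the slope of this interval is systematically shallower than the TRM slope that the Thellier method assumes.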

  2. A high-resolution, 60 kyr record of the relative geomagnetic field intensity from Lake Towuti, Indonesia

    Science.gov (United States)

    Kirana, Kartika Hajar; Bijaksana, Satria; King, John; Tamuntuan, Gerald Hendrik; Russell, James; Ngkoimani, La Ode; Dahrin, Darharta; Fajar, Silvia Jannatul

    2018-02-01

    Past changes in the Earth's magnetic field can be highlighted through reconstructions of magnetic paleointensity. Many magnetic field variation features are global, and can be used for the detailed correlation and dating of sedimentary records. On the other hand, sedimentary magnetic records also exhibit features on a regional, rather than a global, scale. Therefore, the development of regional-scale magnetic field reconstructions is necessary to optimize magnetic paleointensity dating. In this paper, a 60 thousand year (kyr) paleointensity record is presented, using core TOW10-9B from Lake Towuti, on the island of Sulawesi, Indonesia, as part of ongoing research towards understanding Indonesian environmental history and reconstructing a high-resolution regional magnetic record from dating the sediments. Located in the East Sulawesi Ophiolite Belt, the bedrock surrounding Lake Towuti consists of ultramafic rocks that render the lake sediments magnetically strong, creating challenges in the reconstruction of the paleointensity record. The sediment samples were subjected to a series of magnetic measurements, followed by testing of the paleointensity records obtained by normalizing natural remanent magnetization (NRM) against different normalizing parameters. These paleointensity records were then compared to other regional, as well as global, records of magnetic paleointensity. The results show that for the magnetically strong Lake Towuti sediments, anhysteretic remanent magnetization (ARM) is the best normalizer. A series of magnetic paleointensity excursions are observed during the last 60 kyr, including the Laschamp excursion at 40 kyr BP, that provide new information about the magnetic history and stratigraphy of the western tropical Pacific region. We conclude that the paleointensity record of Lake Towuti is reliable and in accordance with high-quality regional and global trends.
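One common rule of thumb for choosing among candidate normalizers, sketched below, is that the normalized record (NRM/normalizer) should be least correlated with the normalizer itself, i.e. least contaminated by rock-magnetic variability. This is a generic criterion, not necessarily the exact procedure of the study; all names and values are illustrative:

```python
import numpy as np

def best_normalizer(nrm, candidates):
    """Rank candidate RPI normalizers by |r|, the absolute
    correlation between the normalized record (nrm/series) and the
    normalizer series itself; the least-correlated candidate wins.
    `candidates` maps names (e.g. 'ARM', 'MS') to down-core arrays."""
    nrm = np.asarray(nrm, float)
    scores = {}
    for name, series in candidates.items():
        series = np.asarray(series, float)
        rpi = nrm / series
        scores[name] = abs(np.corrcoef(rpi, series)[0, 1])
    return min(scores, key=scores.get), scores

# Hypothetical depth series: ARM tracks the NRM carriers well,
# while the MS series carries an unrelated spike.
nrm = [1.0, 2.2, 2.7, 4.2, 4.75]
arm = [1.0, 2.0, 3.0, 4.0, 5.0]
ms = [1.0, 2.0, 3.0, 4.0, 10.0]
best, scores = best_normalizer(nrm, {"ARM": arm, "MS": ms})
# best == "ARM" for these invented values
```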

  3. Multiple-specimen absolute paleointensity determination with the MSP-DSC protocol: Advantages and drawbacks.

    Science.gov (United States)

    Camps, P.; Fanjat, G.; Poidras, T.; Carvallo, C.; Nicol, P.

    2012-04-01

    The MSP-DSC protocol (Dekkers & Böhnel, 2006, EPSL; Fabian & Leonhardt, 2010, EPSL) is a recent development in the methodology for documenting the intensity of the ancient Earth magnetic field. Applicable to both rocks and archaeological artifacts, it allows the use of samples that until now were not measured because their magnetic properties do not meet the selection criteria required by conventional methods. However, this new experimental protocol requires that samples be heated and cooled under a field parallel to their natural remanent magnetization (NRM). Currently, standard paleointensity furnaces do not match this constraint precisely. Yet, the new measurement protocol seems very promising, since it could possibly double the number of available data. We are developing in Montpellier (France) a very fast-heating infrared oven dedicated to this protocol. Two key points determine its characteristics. The first is to heat a rock sample of a standard 10-cc volume uniformly and as fast as possible. The second is to apply to the sample during heating (and cooling) a precise magnetic induction field, perfectly controlled in 3D. We tested and calibrated a preliminary version of this oven along with the MSP-DSC protocol on 3 historical lava flows, 2 from Reunion Island (erupted in 2002 and 2007) and one from Etna (erupted in 1983). These lava flows were selected because they have different magnetic behaviors: Reunion 2002 is rather SD-PSD-like, while Reunion 2007 is PSD-MD-like, and Etna 1983 is MD-like. The paleointensity determinations obtained with the original protocol of Dekkers and Böhnel (2006, EPSL) are within ±1 μT of the known field for the three lava flows. The same precision is obtained when we applied the fraction correction (MSP-FC protocol). However, we systematically observed a loss in the linearity of the MSP-FC plots. In addition, like Muxworthy and Taylor (2011, GJI), we found that the Domain State Correction is difficult to apply since alpha

  4. New archaeomagnetic data recovered from the study of celtiberic remains from central Spain (Numantia and Ciadueña, 3rd-1st centuries BC). Implications on the fidelity of the Iberian paleointensity database

    Science.gov (United States)

    Osete, M. L.; Chauvin, A.; Catanzariti, G.; Jimeno, A.; Campuzano, S. A.; Benito-Batanero, J. P.; Tabernero-Galán, C.; Roperch, P.

    2016-11-01

    Variations of the geomagnetic field in the Iberian Peninsula prior to Roman times are poorly constrained. Here we report new archaeomagnetic results from four ceramic collections and two combustion structures recovered from two pre-Roman (Celtiberian) archaeological sites in central Spain. The studied materials have been dated by archaeological evidence, supported by five radiocarbon dates. Rock magnetic experiments indicate that the characteristic remanent magnetization (ChRM) is carried by a low-coercivity magnetic phase with Curie temperatures of 530-575 °C, most likely Ti-poor titanomagnetite/titanomaghemite. Archaeointensity determinations were carried out using the classical Thellier-Thellier protocol, including tests and corrections for magnetic anisotropy and cooling rate dependency. Two magnetic behaviours were observed during the laboratory treatment. Black potsherds and poorly heated samples from the kilns presented two magnetization components, alteration or curved Arai plots, and were therefore rejected. In contrast, well-heated specimens (red ceramic fragments and well-heated samples from the kilns) show one single well-defined component of magnetization going through the origin and linear Arai plots, providing successful archaeointensity determinations. The effect of anisotropy of the thermoremanent magnetization (ATRM) on paleointensity analysis was systematically investigated, obtaining very high ATRM corrections on fine pottery specimens. In some cases, differences between the uncorrected and ATRM-corrected paleointensity values reached up to 86%. The mean intensity values obtained from the three selected sets of samples were 64.3 ± 5.8 μT, 56.8 ± 3.8 μT and 56.7 ± 4.6 μT (NUS2, CI2 and CIA, respectively), which contribute to a better understanding of the evolution of the palaeofield intensity in central Iberia during the 3rd-1st centuries BC. The direction of the field in the first century BC has also been determined from oriented samples from the CIA kilns (D = 357

  5. Complementary absolute geomagnetic paleointensities from Baja California: evaluation of Pliocene and Early/Middle Pleistocene data

    Science.gov (United States)

    Morales, Juan; Goguitchaichvili, Avto; Cañon-Tapia, Edgardo; Negrete, Raquel

    2003-11-01

    From a large collection (more than 300 oriented cores) of Baja California Mio-Pliocene volcanic units, sampled for magnetostratigraphy and tectonics, 46 samples were selected for Thellier paleointensity experiments because of their low viscosity index, stable remanent magnetization and close-to-reversible continuous thermomagnetic curves. Nineteen samples, coming from 4 individual basaltic lava flows, yielded reliable paleointensity estimates, with flow-mean virtual dipole moments (VDM) ranging from 3.6 to 6.2 × 10²² A m². Our results, although not numerous, are of high technical quality and comparable to other paleointensity data recently obtained on younger lava flows. The NRM fractions used for paleointensity determination range from 38 to 79%, and the quality factors vary between 4.8 and 16.7, being normally greater than 5. The combination of the Baja California data with the available Plio-Pleistocene paleointensity results of comparable quality yields a mean VDM of 6.3 × 10²² A m², which is almost 80% of the present geomagnetic axial dipole. Reliable paleointensity results for the last 5 Ma are still scarce and of dissimilar quality, which makes it hard to draw any firm conclusions regarding the Pliocene and Early/Middle Pleistocene evolution of the geomagnetic field. To cite this article: J. Morales et al., C. R. Geoscience 335 (2003).
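The virtual dipole moments quoted above follow from the standard dipole relations, which convert a site paleointensity and paleomagnetic inclination into an equivalent geocentric dipole moment. A sketch of that conversion (the site values below are illustrative, not from the study):

```python
import math

MU0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
R_EARTH = 6.371e6             # Earth radius, m

def virtual_dipole_moment(b_tesla, inclination_deg):
    """Virtual dipole moment (VDM) from a paleointensity B and the
    paleomagnetic inclination I, using the dipole relations
        tan(I) = 2 * tan(lambda_m)
        VDM = (4*pi*r^3 / mu0) * B / sqrt(1 + 3*sin(lambda_m)^2),
    where lambda_m is the magnetic paleolatitude."""
    lam = math.atan(0.5 * math.tan(math.radians(inclination_deg)))
    return (4 * math.pi * R_EARTH**3 / MU0) * b_tesla \
        / math.sqrt(1 + 3 * math.sin(lam)**2)

# E.g. a 30 uT field recorded at zero inclination (magnetic equator)
vdm = virtual_dipole_moment(30e-6, 0.0)  # on the order of 10^22 A m^2
```

For comparison, the present axial dipole moment is about 8 × 10²² A m², which is why the paper expresses its mean VDM as a fraction of the present field.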

  6. New Magnetic and 10Be/9Be results from ODP site 851 (East Equatorial Pacific)

    Science.gov (United States)

    Valet, J. P.; Savranskaia, T.; Anojh, T.; Meynadier, L.; Thouveny, N.; Gacem, L.; L2NC, A. T.; Bassinot, F. C.; Simon, Q.

    2017-12-01

    The paleomagnetic record from ODP site 851 was the first long relative paleointensity record that attempted to describe 4 Ma of geomagnetic variations. Among other features, it was characterized by an asymmetrical saw-tooth pattern of the intensity changes across reversals. The upper part of the record (0 to 1.1 Ma) was documented by stepwise alternating field (AF) demagnetization of U-channels, while the deeper part could not be sampled by U-channels and instead combined shipboard measurements with stepwise demagnetized single samples within specific intervals. Thermal demagnetization was also conducted within specific intervals to assess the absence of a viscous component. We performed a new detailed study using U-channels and single samples taken along a continuous splice section covering the upper 80 meters of sediment. Stepwise demagnetization of the natural magnetization and of the anhysteretic magnetization was carried out for all samples and U-channels in order to improve the resolution and reliability of the relative paleointensity for the older part of the record. The new results improve the detailed magnetostratigraphy that was formerly established and provide additional details to the paleointensity results. In parallel, 10Be/9Be measurements were carried out at the same levels as the magnetic measurements to further test the controversial asymmetrical pattern of relative paleointensity. Unfortunately, the 10Be/9Be results did not provide any consistent signal. This failure most likely results from the high carbonate concentration (about 85%), which yields poor adsorption of beryllium by the sediment particles and therefore generates large fluctuations. The reliability of the paleointensity record is linked to the downcore homogeneity of the sediment, which is characterized by little variability in carbonate content and therefore little change in the magnetization response to the field. In summary, a low clay content appears to be a favorable situation

  7. Uncertainty relations for approximation and estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jaeha, E-mail: jlee@post.kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Tsutsui, Izumi, E-mail: izumi.tsutsui@kek.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Theory Center, Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2016-05-27

    We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér–Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position–momentum and the time–energy relations in one framework albeit handled differently. - Highlights: • Several inequalities interpreted as uncertainty relations for approximation/estimation are derived from a single ‘versatile inequality’. • The ‘versatile inequality’ sets a limit on the approximation of an observable and/or the estimation of a parameter by another observable. • The ‘versatile inequality’ turns into an elaboration of the Robertson–Kennard (Schrödinger) inequality and the Cramér–Rao inequality. • Both the position–momentum and the time–energy relation are treated in one framework. • In every case, Aharonov's weak value arises as a key geometrical ingredient, deciding the optimal choice for the proxy functions.
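For reference, the classical single-parameter Cramér-Rao inequality that the versatile inequality reduces to takes the standard form:

```latex
\operatorname{Var}_\theta(\hat\theta) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) = \mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\,\ln p(x;\theta)\right)^{2}\right],
```

for an unbiased estimator \(\hat\theta\), where \(I(\theta)\) is the classical Fisher information that, per the abstract, is determined by Aharonov's weak value.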

  9. Estimating the relative utility of screening mammography.

    Science.gov (United States)

    Abbey, Craig K; Eckstein, Miguel P; Boone, John M

    2013-05-01

    The concept of diagnostic utility is a fundamental component of signal detection theory, going back to some of its earliest works. Attaching utility values to the various possible outcomes of a diagnostic test should, in principle, lead to meaningful approaches to evaluating and comparing such systems. However, in many areas of medical imaging, utility is not used because it is presumed to be unknown. In this work, we estimate relative utility (the utility benefit of a detection relative to that of a correct rejection) for screening mammography using its known relation to the slope of a receiver operating characteristic (ROC) curve at the optimal operating point. The approach assumes that the clinical operating point is optimal for the goal of maximizing expected utility and therefore the slope at this point implies a value of relative utility for the diagnostic task, for known disease prevalence. We examine utility estimation in the context of screening mammography using the Digital Mammographic Imaging Screening Trials (DMIST) data. We show how various conditions can influence the estimated relative utility, including characteristics of the rating scale, verification time, probability model, and scope of the ROC curve fit. Relative utility estimates range from 66 to 227. We argue for one particular set of conditions that results in a relative utility estimate of 162 (±14%). This is broadly consistent with values in screening mammography determined previously by other means. At the disease prevalence found in the DMIST study (0.59% at 365-day verification), optimal ROC slopes are near unity, suggesting that utility-based assessments of screening mammography will be similar to those found using Youden's index.
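The relation used above can be inverted directly: at an expected-utility-maximizing operating point, the ROC slope m satisfies m = ((1 - p)/p) * (1/U_rel) for disease prevalence p, so U_rel = (1 - p)/(p * m). A sketch with illustrative numbers in the spirit of the DMIST setting (the slope value is an assumption chosen for the example, not a figure from the paper):

```python
def relative_utility(optimal_slope, prevalence):
    """Relative utility implied by the ROC slope at the operating
    point, assuming the clinical operating point maximizes expected
    utility:  slope = ((1 - p) / p) * (1 / U_rel), hence
    U_rel = (1 - p) / (p * slope)."""
    return (1.0 - prevalence) / (prevalence * optimal_slope)

# Prevalence 0.59% (as in DMIST at 365-day verification) and a
# hypothetical near-unity optimal slope:
u_rel = relative_utility(optimal_slope=1.04, prevalence=0.0059)
# u_rel ≈ 162, i.e. one detection is "worth" about 162 correct rejections
```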

  10. On the relation between S-Estimators and M-Estimators of multivariate location and covariance

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1987-01-01

    We discuss the relation between S-estimators and M-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, S-estimators are shown to satisfy first-order conditions of M-estimators. We show that the influence function IF(x; S, F) of

  11. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

    This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Unlike existing algorithms, our algorithm estimates the rotation parameter and the translation parameter separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects: (1) under the circumstance that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope data and image data; (2) our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
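The gyroscope side of such a scheme boils down to integrating angular rates into a rotation. A first-order sketch of how a vision+inertial method might accumulate the rotation parameter before solving for translation from image data (this is a generic illustration, not the S2fM implementation, and real IMU integration must also handle bias and noise):

```python
import numpy as np

def skew(w):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def integrate_gyro(rates, dt):
    """Accumulate a rotation matrix from gyroscope angular rates
    (rad/s) sampled at interval dt, applying the Rodrigues formula
    to each per-step rotation vector w*dt."""
    R = np.eye(3)
    for w in rates:
        w = np.asarray(w, float)
        theta = np.linalg.norm(w) * dt
        if theta < 1e-12:
            continue
        K = skew(w / np.linalg.norm(w))
        dR = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
        R = R @ dR
    return R

# Constant 0.1 rad/s yaw for 100 steps of 0.1 s -> 1 rad about z,
# so R[0, 0] ≈ cos(1).
R = integrate_gyro([[0.0, 0.0, 0.1]] * 100, dt=0.1)
```

With the rotation fixed this way, the translation direction can then be recovered from image correspondences alone, which is the separation the paper exploits.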

  12. Blind estimation of a ship's relative wave heading

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Iseki, Toshio

    2012-01-01

    This article proposes a method to estimate a ship's relative heading against the waves. The procedure relies purely on shipboard measurements of global responses such as motion components, accelerations and the bending moment amidships. There is no particular (mathematical) model connected to the estimate, and therefore it is called a 'blind estimate'. In this introductory study, the approach is tested by analysing simulated data. The analysis reveals that it is possible to estimate a ship's relative heading on the basis of shipboard measurements only.

  13. Paleomagnetic intensity of Aso pyroclastic flows: Additional results with LTD-DHT Shaw method, Thellier method with pTRM-tail check

    Science.gov (United States)

    Maruuchi, T.; Shibuya, H.

    2009-12-01

    To calibrate the absolute value of the 'relative paleointensity variation curve' drawn from sediment cores, Takai et al. (2002) proposed using pyroclastic flows co-erupted with widespread tephras. The pyroclastic flows provide volcanic rocks carrying a TRM, which allows absolute paleointensity determination, while the tephras provide the correlation with sediment stratigraphy. While 4 out of 6 pyroclastic flows are consistent with the Sint-800 paleointensity variation curve, two flows, Aso-2 and Aso-4, fall below and above Sint-800 beyond the error, respectively. We revisited the paleointensity study of the Aso pyroclastic flows, adding the LTD-DHT Shaw method, the pTRM-tail check in the Thellier experiment, and LTD-DHT Shaw experiments on volcanic glasses. We prepared 11 specimens from 3 sites of the Aso-1 welded tuff for LTD-DHT Shaw experiments and obtained 6 paleointensities satisfying a set of strict criteria. They yield an average paleointensity of 21.3 ± 5.8 μT, which is smaller than the 31.0 ± 3.4 μT of Takai et al. (2002). For the Aso-2 welded tuff, 11 samples from 3 sites were submitted to Thellier experiments, and 6 passed a set of fairly stringent criteria including the pTRM-tail check, which was not performed by Takai et al. (2002). They give an average paleointensity of 20.2 ± 1.5 μT, virtually identical to the 20.2 ± 1.0 μT (27 samples) of Takai et al. (2002). Although the success rate was poorer in the LTD-DHT Shaw method, 2 out of 12 specimens passed the criteria and gave 25.8 ± 3.4 μT, consistent with Takai et al. (2002). In addition, we obtained a reliable paleointensity of 23.6 μT from a volcanic glass with the LTD-DHT Shaw method, also consistent with Takai et al. (2002). For the Aso-3 welded tuff, we have so far applied only the LTD-DHT Shaw method to one specimen from one site; it gives a paleointensity of 43.0 μT, which is higher than the 31.8 ± 3.6 μT of Takai et al. (2002). Eight sites were set for the Aso-4 welded tuff

  14. Intensity of the Earth's Magnetic Field over the past 6 million years ; A case study from Basaltic Rocks in East Anatolian

    Science.gov (United States)

    Kaya, Nurcan; Baydemir, Niyazi; Cengiz Cinku, Mualla; Hisarli, Z. Mümtaz; Keskin, Mehmet; Leonhardt, Roman

    2015-04-01

    The aim of this study was to determine the intensity variation of the Earth's magnetic field using Miocene and Quaternary basaltic rocks in the Eastern Anatolian region. A total of ninety-one volcanic rocks at twelve different sites were sampled around the Van region. A modified Thellier method was used to determine paleointensity values. Paleointensity results from five sites were accepted according to our confidence criteria. The paleointensity values from the five reliable sites with normal polarity are relatively low compared to the present field of 47 μT. The total paleointensity values F are 33.96 ± 3.54 μT for site VAN5 with an age of 5.5 m.y., 19.98 ± 6.79 μT for site VAN7 with an age of 4.3 m.y., 26.07 ± 8.41 μT for site VAN8 with an age of 0.1 m.y., 29.98 ± 1.71 μT for site VAN11 with an age of 0.4 m.y., and 31.08 ± 2.88 μT for site VAN12 with an age of 5.5 m.y. The average VDMs (virtual dipole moments) correspond to 6.01 × 10²² Am² for the three Miocene sites and to 5.73 × 10²² Am² for the Quaternary rocks. Our data agree well with previous studies of similar age ranges.

  15. Estimating maneuvers for precise relative orbit determination using GPS

    Science.gov (United States)

    Allende-Alba, Gerardo; Montenbruck, Oliver; Ardaens, Jean-Sébastien; Wermuth, Martin; Hugentobler, Urs

    2017-01-01

    Precise relative orbit determination is an essential element for the generation of science products from distributed instrumentation of formation flying satellites in low Earth orbit. According to the mission profile, the required formation is typically maintained and/or controlled by executing maneuvers. In order to generate consistent and precise orbit products, a strategy for maneuver handling is mandatory in order to avoid discontinuities or precision degradation before, after and during maneuver execution. Precise orbit determination offers the possibility of maneuver estimation in an adjustment of single-satellite trajectories using GPS measurements. However, a consistent formulation of a precise relative orbit determination scheme requires the implementation of a maneuver estimation strategy which can be used, in addition, to improve the precision of maneuver estimates by drawing upon the use of differential GPS measurements. The present study introduces a method for precise relative orbit determination based on a reduced-dynamic batch processing of differential GPS pseudorange and carrier phase measurements, which includes maneuver estimation as part of the relative orbit adjustment. The proposed method has been validated using flight data from space missions with different rates of maneuvering activity, including the GRACE, TanDEM-X and PRISMA missions. The results show the feasibility of obtaining precise relative orbits without degradation in the vicinity of maneuvers as well as improved maneuver estimates that can be used for better maneuver planning in flight dynamics operations.

  16. Parametric Bayesian Estimation of Differential Entropy and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Maya Gupta

    2010-04-01

    Full Text Available Given iid samples drawn from a distribution with known parametric form, we propose the minimization of expected Bregman divergence to form Bayesian estimates of differential entropy and relative entropy, and derive such estimators for the uniform, Gaussian, Wishart, and inverse Wishart distributions. Additionally, formulas are given for a log gamma Bregman divergence and the differential entropy and relative entropy for the Wishart and inverse Wishart. The results, as always with Bayesian estimates, depend on the accuracy of the prior parameters, but example simulations show that the performance can be substantially improved compared to maximum likelihood or state-of-the-art nonparametric estimators.
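    For the Gaussian case mentioned above, the differential entropy has the closed form H = ½ ln(2πeσ²). The sketch below shows only the maximum-likelihood plug-in baseline that such Bayesian estimators are compared against, not the paper's Bregman-divergence estimator; the sample size and seed are arbitrary.

```python
import math
import random

def gaussian_entropy(sigma2):
    """Differential entropy of a Gaussian with variance sigma2, in nats."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma2)

random.seed(0)
samples = [random.gauss(0.0, 2.0) for _ in range(10000)]  # true variance = 4

n = len(samples)
mean = sum(samples) / n
sigma2_ml = sum((x - mean) ** 2 for x in samples) / n  # ML variance estimate

h_ml = gaussian_entropy(sigma2_ml)   # plug-in (ML) entropy estimate
h_true = gaussian_entropy(4.0)       # exact value, about 2.112 nats
```

    A Bayesian estimator would instead average the entropy over a posterior on σ², which matters most at small sample sizes.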

  17. Closed-Loop Surface Related Multiple Estimation

    NARCIS (Netherlands)

    Lopez Angarita, G.A.

    2016-01-01

    Surface-related multiple elimination (SRME) is one of the most commonly used methods for suppressing surface multiples. However, in order to obtain an accurate surface multiple estimation, dense source and receiver sampling is required. The traditional approach to this problem is performing data

  18. Cross-property relations and permeability estimation in model porous media

    International Nuclear Information System (INIS)

    Schwartz, L.M.; Martys, N.; Bentz, D.P.; Garboczi, E.J.; Torquato, S.

    1993-01-01

    Results from a numerical study examining cross-property relations linking fluid permeability to diffusive and electrical properties are presented. Numerical solutions of the Stokes equations in three-dimensional consolidated granular packings are employed to provide a basis of comparison between different permeability estimates. Estimates based on the Λ parameter (a length derived from electrical conduction) and on d_c (a length derived from immiscible displacement) are found to be considerably more reliable than estimates based on rigorous permeability bounds related to pore space diffusion. We propose two hybrid relations based on diffusion which provide more accurate estimates than either of the rigorous permeability bounds

  19. Estimating relative demand for wildlife: Conservation activity indicators

    Science.gov (United States)

    Gray, Gary G.; Larson, Joseph S.

    1982-09-01

    An alternative method of estimating relative demand among nonconsumptive uses of wildlife and among wildlife species is proposed. A demand intensity score (DIS), derived from the relative extent of an individual's involvement in outdoor recreation and conservation activities, is used as a weighting device to adjust the importance of preference rankings for wildlife uses and wildlife species relative to other members of a survey population. These adjusted preference rankings were considered to reflect relative demand levels (RDLs) for wildlife uses and for species by the survey population. This technique may be useful where it is not possible or desirable to estimate demand using traditional economic means. In one of the findings from a survey of municipal conservation commission members in Massachusetts, presented as an illustration of this methodology, poisonous snakes were ranked third in preference among five groups of reptiles. The relative demand level for poisonous snakes, however, was last among the five groups.

  20. Paleoarchean and Cambrian observations of the geodynamo in light of new estimates of core thermal conductivity

    Science.gov (United States)

    Tarduno, John; Bono, Richard; Cottrell, Rory

    2015-04-01

    Recent estimates of core thermal conductivity are larger than prior values by a factor of approximately three. These new estimates suggest that the inner core is a relatively young feature, perhaps as young as 500 million years old, and that the core-mantle heat flux required to drive the early dynamo was greater than previously assumed (Nimmo, 2015). Here, we focus on paleomagnetic studies of two key time intervals important for understanding core evolution in light of the revisions of core conductivity values. 1. Hadean to Paleoarchean (4.4-3.4 Ga). Single silicate crystal paleointensity analyses suggest a relatively strong magnetic field at 3.4-3.45 Ga (Tarduno et al., 2010). Paleointensity data from zircons of the Jack Hills (Western Australia) further suggest the presence of a geodynamo between 3.5 and 3.6 Ga (Tarduno and Cottrell, 2014). We will discuss our efforts to test for the absence/presence of the geodynamo in older Eoarchean and Hadean times. 2. Ediacaran to Early Cambrian (~635-530 Ma). Disparate directions seen in some paleomagnetic studies from this time interval have been interpreted as recording inertial interchange true polar wander (IITPW). Recent single silicate paleomagnetic analyses fail to find evidence for IITPW; instead a reversing field overprinted by secondary magnetizations is defined (Bono and Tarduno, 2015). Preliminary analyses suggest the field may have been unusually weak. We will discuss our on-going tests of the hypothesis that this interval represents the time of onset of inner core growth. References: Bono, R.K. & Tarduno, J.A., Geology, in press (2015); Nimmo, F., Treatise Geophys., in press (2015); Tarduno, J.A., et al., Science (2010); Tarduno, J.A. & Cottrell, R.D., AGU Fall Meeting (2014).

  2. Relative azimuth inversion by way of damped maximum correlation estimates

    Science.gov (United States)

    Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.

    2012-01-01

    Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
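    The core of such an approach — rotating the test sensor's horizontal components and maximizing the correlation with a reference — can be sketched as follows. The golden-section search here is a simple stand-in for the paper's non-linear parameter estimation routine, and the signals are synthetic; function names are illustrative.

```python
import numpy as np

def golden_min(f, a, b, tol=1e-4):
    """Golden-section search for the minimum of a unimodal function on [a, b]."""
    gr = (5 ** 0.5 - 1) / 2
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - gr * (b - a)
        else:
            a, c = c, d
            d = a + gr * (b - a)
    return 0.5 * (a + b)

def estimate_azimuth(ref_n, test_n, test_e):
    """Sensor azimuth (deg clockwise from reference north) maximizing correlation."""
    def neg_corr(theta_deg):
        t = np.radians(theta_deg)
        rotated = np.cos(t) * test_n + np.sin(t) * test_e
        return -np.corrcoef(ref_n, rotated)[0, 1]
    # golden_min finds the correction angle that best aligns the test sensor
    # with the reference; the sensor's misorientation is its negative.
    return -golden_min(neg_corr, -180.0, 180.0)

# Synthetic check: a sensor whose axes are rotated 30 deg clockwise from true north
rng = np.random.default_rng(1)
true_n = rng.standard_normal(5000)
true_e = rng.standard_normal(5000)
a = np.radians(30.0)
meas_n = np.cos(a) * true_n + np.sin(a) * true_e
meas_e = -np.sin(a) * true_n + np.cos(a) * true_e
az_est = estimate_azimuth(true_n, meas_n, meas_e)  # close to 30 deg
```

    Running this on overlapping data windows, as the abstract describes, yields a distribution of azimuth estimates whose spread indicates confidence even at low SNR.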

  3. Estimating Body Related Soft Biometric Traits in Video Frames

    Directory of Open Access Journals (Sweden)

    Olasimbo Ayodeji Arigbabu

    2014-01-01

    Full Text Available Soft biometrics can be used as a prescreening filter, either by using a single trait or by combining several traits, to aid the performance of recognition systems in an unobtrusive way. In many practical visual surveillance scenarios, facial information is difficult to capture effectively due to several varying challenges. However, the visual appearance of an object can be efficiently inferred from a distance, providing the possibility of estimating body-related information. This paper presents an approach for estimating body-related soft biometrics; specifically, we propose a new approach based on body measurement and an artificial neural network for predicting the body weight of subjects, and incorporate an existing single-view metrology technique for height estimation in videos with low frame rates. Our evaluation on 1120 frame sets of 80 subjects from a newly compiled dataset shows that these soft biometric attributes of human subjects can be adequately predicted from sets of frames.

  4. Exploring the potential of acquisition curves of the anhysteretic remanent magnetization as a tool to detect subtle magnetic alteration induced by heating

    Science.gov (United States)

    de Groot, Lennart V.; Dekkers, Mark J.; Mullender, Tom A. T.

    2012-03-01

    Recently, many new methods and improved protocols to determine the absolute paleointensity of lavas reliably have been proposed. Here we study eight recent flows from three different volcanic edifices (Mt. Etna, La Palma and Hawaii) with the so-called multispecimen parallel differential pTRM (MSP) method including the recently proposed domain-state correction (MSP-DSC) (Fabian and Leonhardt, 2010). Surprisingly, apart from approximately correct paleointensity values, we observe major underestimates of the paleofield. These deviations are possibly related to alteration that is not revealed by rock-magnetic analysis. We explore the potential of high-resolution acquisition curves of the anhysteretic remanent magnetization (ARM) to detect subtle alteration in the samples. It appears that assessing changes in the ARM acquisition properties before and after heating to the desired MSP temperature discriminates between underestimates and approximately correct estimations of the paleofield in the outcomes of the MSP-DSC protocol. By combining observations from the domain-state corrected MSP protocol and ARM acquisition experiments before and after heating, an extended MSP protocol is suggested which makes it possible to assess the best set temperature for the MSP-DSC protocol and to label MSP results as being approximately correct, or an underestimate of the paleofield.

  5. Using field feedback to estimate failure rates of safety-related systems

    International Nuclear Information System (INIS)

    Brissaud, Florent

    2017-01-01

    The IEC 61508 and IEC 61511 functional safety standards encourage the use of field feedback to estimate the failure rates of safety-related systems, which is preferred to generic data. In some cases (if "Route 2_H" is adopted for the "hardware safety integrity constraints"), this is even a requirement. This paper presents how to estimate failure rates from field feedback with confidence intervals, depending on whether the failures are detected on-line (called "detected failures", e.g. by automatic diagnostic tests) or only revealed by proof tests (called "undetected failures"). Examples show that for the same duration and number of failures observed, the estimated failure rates are generally higher for "undetected failures" because, in this case, the observed duration includes intervals of time where it is unknown that the elements have failed. This points out the need for a proper approach to failure rate estimation, especially for failures that are not detected on-line. The paper then proposes an approach to use the estimated failure rates, with their uncertainties, for PFDavg and PFH assessment with upper confidence bounds, in accordance with IEC 61508 and IEC 61511 requirements. Examples finally show that the highest SIL that can be claimed for a safety function can be limited by the 90% upper confidence bound of PFDavg or PFH. The requirements of IEC 61508 and IEC 61511 relating to data collection and analysis should therefore be properly considered in the study of all safety-related systems. - Highlights: • This paper deals with requirements of the IEC 61508 and IEC 61511 for using field feedback to estimate failure rates of safety-related systems. • This paper presents how to estimate failure rates from field feedback with confidence intervals for failures that are detected on-line. • This paper presents how to estimate failure rates from field feedback with confidence intervals for failures that are only revealed by proof tests.

  6. Methodology for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The HINTS is designed to produce reliable estimates at the national and regional levels. GIS maps using HINTS data have been used to provide a visual representation of possible geographic relationships in HINTS cancer-related variables.

  7. Oscillation estimates relative to p-homogeneous forms and Kato measures data

    Directory of Open Access Journals (Sweden)

    Marco Biroli

    2006-11-01

    Full Text Available We state a pointwise estimate for positive subsolutions associated with a p-homogeneous form and nonnegative Radon measure data. As a by-product we establish an oscillation estimate for solutions relative to Kato measure data.

  8. Estimating small area health-related characteristics of populations: a methodological review

    Directory of Open Access Journals (Sweden)

    Azizur Rahman

    2017-05-01

    Full Text Available Estimation of health-related characteristics at a fine local geographic level is vital for effective health promotion programmes, provision of better health services and population-specific health planning and management. Lack of a micro-dataset readily available for attributes of individuals at small areas negatively impacts the ability of local and national agencies to manage serious health issues and related risks in the community. A solution to this challenge would be to develop a method that simulates reliable small-area statistics. This paper provides a significant appraisal of the methodologies for estimating health-related characteristics of populations in geographically limited areas. Findings reveal that a range of methodologies are in use, which can be classified as three distinct sets of approaches: (i) indirect standardisation and individual-level modelling; (ii) multilevel statistical modelling; and (iii) microsimulation modelling. Although each approach has its own strengths and weaknesses, it appears that microsimulation-based spatial models have significant robustness over the other methods and also represent a more precise means of estimating health-related population characteristics over small areas.

  9. Relative Pose Estimation and Accuracy Verification of Spherical Panoramic Image

    Directory of Open Access Journals (Sweden)

    XIE Donghai

    2017-11-01

    Full Text Available This paper improves the traditional 5-point relative pose estimation algorithm and proposes a relative pose estimation algorithm suitable for spherical panoramic images. The algorithm first computes the essential matrix, then decomposes the essential matrix using SVD to obtain the rotation matrix and the translation vector, and finally uses the reconstructed three-dimensional points to eliminate the erroneous solutions. The innovation of the algorithm lies in the derivation of the panorama epipolar formula and the use of the spherical distance from a point to the epipolar plane as the error term of the spherical panorama co-planarity function. The simulation experiment shows that when the random noise of the image feature points is within the range of one pixel, the error of the three Euler angles is about 0.1°, and the error between the relative translational displacement and the simulated value is about 1.5°. The experiment using data obtained by the vehicle panorama camera and the POS shows that the errors of the roll angle and pitch angle can be within 0.2°, the error of the heading angle can be within 0.4°, and the error between the relative translational displacement and the POS can be within 2°. The result of our relative pose estimation algorithm is used to generate the spherical panoramic epipolar images; we then extract the key points between the spherical panoramic images and calculate the errors in the column direction. The result shows that the errors are less than 1 pixel.
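    The SVD decomposition step described above — recovering rotation and translation candidates from an essential matrix — can be sketched in its standard (non-panoramic) form; the panorama-specific error term is not reproduced here, and all names are illustrative.

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def decompose_essential(E):
    """Return the four candidate (R, t) pairs from an essential matrix via SVD."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    if np.linalg.det(R1) < 0:   # enforce proper rotations (det = +1)
        R1 = -R1
    if np.linalg.det(R2) < 0:
        R2 = -R2
    t = U[:, 2]                 # translation direction (up to sign and scale)
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Synthetic check: E built from a known rotation and unit translation
c, s = np.cos(0.2), np.sin(0.2)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.6, 0.8, 0.0])
E = skew(t_true) @ R_true
candidates = decompose_essential(E)
```

    As the abstract notes, the wrong three of the four candidates are then rejected by checking which solution places reconstructed points in front of both cameras.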

  10. Estimates of the relative specific yield of aquifers from geo-electrical ...

    African Journals Online (AJOL)

    This paper discusses a method of estimating aquifer specific yield based on surface resistivity sounding measurements supplemented with data on water conductivity. The practical aim of the method is to suggest a parallel low cost method of estimating aquifer properties. The starting point is the Archie's law, which relates ...

  11. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    Science.gov (United States)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted by us, which we call the relaxed filtering method, and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
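    The classical rescaled-range estimator (H_R) mentioned above can be sketched as a generic textbook implementation (not the authors' code): partition the series into blocks, compute the range of cumulative deviations normalized by the block standard deviation, and fit the log-log slope of R/S against block size. For white noise the estimate should come out near 0.5, though the R/S statistic is known to be biased high in short samples.

```python
import numpy as np

def rescaled_range_H(x, min_chunk=8):
    """Estimate the Hurst coefficient via the classical rescaled-range (R/S) method."""
    x = np.asarray(x, dtype=float)
    n = x.size
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()           # range of the cumulative deviations
            s = seg.std()
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    # H is the slope of log(R/S) versus log(block size)
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
h_white = rescaled_range_H(rng.standard_normal(4096))  # near 0.5 for white noise
```

    Persistent series (H > 0.5), as reported for the surface-layer data, yield systematically steeper log-log slopes.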

  12. Estimation of nuclear power-related expenditures in fiscal 1982

    International Nuclear Information System (INIS)

    1981-01-01

    In fiscal 1982 (April 1982 to March 1983), research and development on nuclear power should be promoted actively and extensively by taking appropriate measures. In view of this importance, the budgetary expenditures are estimated accordingly, also considering the stringent financial situation. The budgetary expenditures for nuclear power estimated for fiscal 1982 are about 292,800 Million in total, and the obligation act limit is about 139,900 Million. The following matters are described: nuclear power-related measures for securing nuclear power safety, promotion of nuclear power generation, establishment of the nuclear fuel cycle, development of power reactors, research on nuclear fusion, strengthening of the foundation in nuclear power research, development and utilization, promotion of international cooperation, etc.; estimated budgetary expenditures; tables of budgetary demands in various categories. (J.P.N.)

  13. ESTIMATION OF THE KNOWLEDGE SPILLOVER EFFECTS BETWEEN FIRMS IN BIO-RELATED INDUSTRIES

    OpenAIRE

    Kim, Hanho; Kim, Jae-Kyung

    2005-01-01

    Knowledge spillover is a kind of externality originating from the imperfect appropriability of R&D results: knowledge created by one agent can be transmitted to other related agents and affect their R&D or other economic performance. For the estimation of knowledge spillover effects between firms in bio-related industries based on firm-level patent data, a patent production function, as a proxy for the knowledge production function, is formulated and estimated. Knowledge ...

  14. Kernel PLS Estimation of Single-trial Event-related Potentials

    Science.gov (United States)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.

  15. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    Science.gov (United States)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. 

  16. Fundamental relations of mineral specific magnetic carriers for paleointensity determination

    Czech Academy of Sciences Publication Activity Database

    Kletetschka, Günther; Wieczorek, M. A.

    2017-01-01

    Roč. 272, November 2017 (2017), s. 44-49 ISSN 0031-9201 Institutional support: RVO:67985831 Keywords : Paleofield determination * TRM * Planetary magnetic anomalies * Néel’s theory of magnetism * Magnetic acquisition * Moon * Mars Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics OBOR OECD: Particles and field physics Impact factor: 2.075, year: 2016

  17. Worldwide F(ST) estimates relative to five continental-scale populations.

    Science.gov (United States)

    Steele, Christopher D; Court, Denise Syndercombe; Balding, David J

    2014-11-01

    We estimate the population genetics parameter FST (also referred to as the fixation index) from short tandem repeat (STR) allele frequencies, comparing many worldwide human subpopulations at approximately the national level with continental-scale populations. FST is commonly used to measure population differentiation, and is important in forensic DNA analysis to account for remote shared ancestry between a suspect and an alternative source of the DNA. We estimate FST comparing subpopulations with a hypothetical ancestral population, which is the approach most widely used in population genetics, and also compare a subpopulation with a sampled reference population, which is more appropriate for forensic applications. Both estimation methods are likelihood-based, in which FST is related to the variance of the multinomial-Dirichlet distribution for allele counts. Overall, we find low FST values, with posterior 97.5 percentile estimates below 3%; these are also about half the magnitude of STR-based estimates from population genetics surveys that focus on distinct ethnic groups rather than a general population. Our findings support the use of FST up to 3% in forensic calculations, which corresponds to some current practice.
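    As a simple point of reference, the moment-based (Wright-style) FST estimator — variance of subpopulation allele frequencies divided by p̄(1−p̄), averaged over loci — can be sketched as below. The likelihood-based multinomial-Dirichlet estimators used in the study are more involved, and the allele frequencies here are made up for illustration.

```python
def fst_estimate(freqs):
    """Naive moment-based FST: var(p) / (pbar * (1 - pbar)), averaged over loci.

    freqs: list of loci; each locus is a list of allele frequencies,
    one per subpopulation.
    """
    ratios = []
    for locus in freqs:
        k = len(locus)
        pbar = sum(locus) / k
        if 0 < pbar < 1:
            var = sum((p - pbar) ** 2 for p in locus) / k
            ratios.append(var / (pbar * (1 - pbar)))
    return sum(ratios) / len(ratios)

# Hypothetical frequencies: rows are loci, entries are subpopulations
freqs = [[0.50, 0.52, 0.48],
         [0.30, 0.32, 0.28],
         [0.70, 0.66, 0.74]]
fst = fst_estimate(freqs)   # small value, well under the 3% forensic ceiling
```

    Closely matched subpopulation frequencies like these give FST values of a fraction of a percent, in line with the low national-versus-continental estimates reported above.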

  18. Calibrated Tully-Fisher relations for improved estimates of disc rotation velocities

    NARCIS (Netherlands)

    Reyes, R.; Mandelbaum, R.; Gunn, J. E.; Pizagno II, Jim; Lackner, C. N.

    2011-01-01

    In this paper, we derive scaling relations between photometric observable quantities and disc galaxy rotation velocity V_rot, or Tully-Fisher relations (TFRs). Our methodology is dictated by our purpose of obtaining purely photometric, minimal-scatter estimators of V_rot applicable to large galaxy

  19. Absolute magnitude estimation and relative judgement approaches to subjective workload assessment

    Science.gov (United States)

    Vidulich, Michael A.; Tsang, Pamela S.

    1987-01-01

    Two rating scale techniques employing an absolute magnitude estimation method were compared to a relative judgment method for assessing subjective workload. One of the absolute estimation techniques was a unidimensional overall workload scale and the other was the multidimensional NASA-Task Load Index technique. Thomas Saaty's Analytic Hierarchy Process was the unidimensional relative judgment method used. These techniques were used to assess the subjective workload of various single- and dual-tracking conditions. The validity of the techniques was defined as their ability to detect the same phenomena observed in the tracking performance. Reliability was assessed by calculating test-retest correlations. Within the context of the experiment, the Saaty Analytic Hierarchy Process was found to be superior in validity and reliability. These findings suggest that the relative judgment method would be an effective addition to the currently available subjective workload assessment techniques.

  20. Estimators of the Relations of Equivalence, Tolerance and Preference Based on Pairwise Comparisons with Random Errors

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2012-01-01

    Full Text Available This paper presents a review of the author's results in the area of estimation of the relations of equivalence, tolerance and preference within a finite set, based on multiple, independent (in a stochastic way) pairwise comparisons with random errors, in binary and multivalent forms. These estimators require weaker assumptions than those used in the literature on the subject. Estimates of the relations are obtained from solutions to problems in discrete optimization. They allow application of both types of comparisons - binary and multivalent (this fact relates to the tolerance and preference relations). The estimates can be verified in a statistical way; in particular, it is possible to verify the type of the relation. The estimates have been applied by the author to problems in forecasting, financial engineering and bio-cybernetics. (original abstract)

  1. Group-Contribution based Property Estimation and Uncertainty analysis for Flammability-related Properties

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens

    2016-01-01

    regression and outlier treatment have been applied to achieve high accuracy. Furthermore, linear error propagation based on the covariance matrix of estimated parameters was performed. Therefore, every estimated value of the flammability-related properties is reported together with its corresponding 95%-confidence interval of the prediction. Compared to existing models the developed ones have a higher accuracy, are simple to apply and provide uncertainty information on the calculated prediction. The average relative error and correlation coefficient are 11.5% and 0.99 for LFL, 15.9% and 0.91 for UFL, 2...

  2. Estimating the temporal distribution of exposure-related cancers

    International Nuclear Information System (INIS)

    Carter, R.L.; Sposto, R.; Preston, D.L.

    1993-09-01

    The temporal distribution of exposure-related cancers is relevant to the study of carcinogenic mechanisms. Statistical methods for extracting pertinent information from time-to-tumor data, however, are not well developed. Separation of incidence from 'latency' and the contamination of background cases are two problems. In this paper, we present methods for estimating both the conditional distribution given exposure-related cancers observed during the study period and the unconditional distribution. The methods adjust for confounding influences of background cases and the relationship between time to tumor and incidence. Two alternative methods are proposed. The first is based on a structured, theoretically derived model and produces direct inferences concerning the distribution of interest but often requires more-specialized software. The second relies on conventional modeling of incidence and is implemented through readily available, easily used computer software. Inferences concerning the effects of radiation dose and other covariates, however, are not always obtainable directly. We present three examples to illustrate the use of these two methods and suggest criteria for choosing between them. The first approach was used, with a log-logistic specification of the distribution of interest, to analyze times to bone sarcoma among a group of German patients injected with ²²⁴Ra. Similarly, a log-logistic specification was used in the analysis of time to chronic myelogenous leukemias among male atomic-bomb survivors. We used the alternative approach, involving conventional modeling, to estimate the conditional distribution of exposure-related acute myelogenous leukemias among male atomic-bomb survivors, given occurrence between 1 October 1950 and 31 December 1985. All analyses were performed using Poisson regression methods for analyzing grouped survival data. (J.P.N.)

  3. Surgical Care Required for Populations Affected by Climate-related Natural Disasters: A Global Estimation.

    Science.gov (United States)

    Lee, Eugenia E; Stewart, Barclay; Zha, Yuanting A; Groen, Thomas A; Burkle, Frederick M; Kushner, Adam L

    2016-08-10

Climate extremes will increase the frequency and severity of natural disasters worldwide. Climate-related natural disasters were anticipated to affect 375 million people in 2015, more than 50% greater than the yearly average in the previous decade. To inform surgical assistance preparedness, we estimated the number of surgical procedures needed. The numbers of people affected by climate-related disasters from 2004 to 2014 were obtained from the Centre for Research on the Epidemiology of Disasters database. Using 5,000 procedures per 100,000 persons as the minimum, baseline estimates were calculated. A linear regression of the number of surgical procedures performed annually and the estimated number of surgical procedures required for climate-related natural disasters was performed. Approximately 140 million people were affected by climate-related natural disasters annually, requiring 7.0 million surgical procedures. The greatest need for surgical care was in the People's Republic of China, India, and the Philippines. Linear regression demonstrated a poor relationship between national surgical capacity and estimated need for surgical care resulting from natural disaster, but countries with the least surgical capacity will have the greatest need for surgical care for persons affected by climate-related natural disasters. As climate extremes increase the frequency and severity of natural disasters, millions will need surgical care beyond baseline needs. Countries with insufficient surgical capacity will have the most need for surgical care for persons affected by climate-related natural disasters. Estimates of surgical need are particularly important for countries least equipped to meet surgical care demands, given critical human and physical resource deficiencies.
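The abstract's baseline arithmetic can be reproduced directly; the sketch below simply applies the stated minimum benchmark to the stated affected population.

```python
# Reproduce the baseline arithmetic from the abstract: a minimum benchmark
# of 5,000 procedures per 100,000 persons, applied to the ~140 million
# people affected by climate-related disasters each year.

PROCEDURES_PER_100K = 5_000          # minimum benchmark stated in the abstract
AFFECTED_PER_YEAR = 140_000_000      # ~140 million people affected annually

def surgical_need(affected: int, rate_per_100k: int = PROCEDURES_PER_100K) -> float:
    """Estimated number of surgical procedures required for a given population."""
    return affected * rate_per_100k / 100_000

need = surgical_need(AFFECTED_PER_YEAR)
print(f"{need / 1e6:.1f} million procedures")  # → 7.0 million procedures
```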

  4. Language Adaptation for Extending Post-Editing Estimates for Closely Related Languages

    Directory of Open Access Journals (Sweden)

    Rios Miguel

    2016-10-01

    Full Text Available This paper presents an open-source toolkit for predicting human post-editing efforts for closely related languages. At the moment, training resources for the Quality Estimation task are available for very few language directions and domains. Available resources can be expanded on the assumption that MT errors and the amount of post-editing required to correct them are comparable across related languages, even if the feature frequencies differ. In this paper we report a toolkit for achieving language adaptation, which is based on learning new feature representation using transfer learning methods. In particular, we report performance of a method based on Self-Taught Learning which adapts the English-Spanish pair to produce Quality Estimation models for translation from English into Portuguese, Italian and other Romance languages using the publicly available Autodesk dataset.

  5. Full Vector Studies of the Last 10 Thousand Years Derived From The East Maui Volcano Hawaii

    Science.gov (United States)

    Herrero-Bervera, E.; Dekkers, M. J.; Bohnel, H.; Hagstrum, J. T.; Champion, D. E.

    2010-12-01

We have determined the paleointensity of nine lava flows that recorded the last 10 kyr of geomagnetic field behavior from the youngest and largest of the two edifices of the island of Maui (i.e. the Hana Volcanics, East Maui) with the multispecimen parallel differential pTRM method [Dekkers and Böhnel, EPSL, 248, 508-517, 2006]. The flows are characterized by irreversible Curie curves indicating two kinds of magnetic carriers: one almost pure magnetite and the second Ti-rich magnetite with possible traces of titanomaghemite. The coercivity of remanence (Hcr) suggests that low-coercivity grains carry the NRM. Magnetic minerals from all of these flows are scattered within the PSD range with the exception of site HKAM (age 4.07±0.09 ka), which lies in the SD range. The multispecimen method involves giving a laboratory pTRM to pristine specimens in different field strengths parallel to the original TRM; note that all pTRMs are given within the same range. From an existing sample collection for paleosecular variation studies [Herrero-Bervera and Valet, PEPI, 161, 267-280, 2007] we processed samples from 9 flows for paleointensity determinations, ranging in age from 0.83±0.06 ka to 8.19±0.06 ka. pTRMs were given by in-field heating and cooling between 175 and 260°C to avoid alteration. Low-field susceptibility variation appeared to be less than 10%, and sample sets from a few flows were heated to two different temperatures to check for consistency of results. All flows yielded good-quality data. The paleointensity values increase to ~46 microtesla at ~2.2 ka and drop to ~22 microtesla at ~3.5 ka. At ~8.2 ka, ~39 microtesla is obtained, i.e. slightly higher than the present-day value (36 microtesla). Our paleointensity results (at least 7 flows) correlate well with the absolute paleointensity global determinations. The influence of a recently proposed domain-state correction [Fabian and Leonhardt, 2010, EPSL] on the paleointensity values will be investigated and shown.

  6. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    Directory of Open Access Journals (Sweden)

    Xi Liu

    2016-09-01

Full Text Available A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving non-linear state estimation problems. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filtered state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
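The robustness mechanism can be illustrated without the full filter. A toy sketch of the maximum correntropy criterion, not the MCUKF itself: a fixed-point iteration that estimates a location parameter by weighting each residual with a Gaussian kernel, so an impulsive outlier that would drag a least-squares (mean) estimate gets exponentially small weight. The data and kernel width are assumptions for illustration.

```python
import math

def mcc_location(samples, sigma=1.0, iters=50):
    """Fixed-point MCC estimate of a location parameter.

    Each sample is weighted by a Gaussian kernel of its residual, so large
    (impulsive) residuals receive exponentially small weight.
    """
    est = sum(samples) / len(samples)          # start from the plain mean
    for _ in range(iters):
        w = [math.exp(-(x - est) ** 2 / (2 * sigma ** 2)) for x in samples]
        est = sum(wi * xi for wi, xi in zip(w, samples)) / sum(w)
    return est

data = [0.9, 1.1, 1.0, 0.95, 1.05, 50.0]       # one impulsive outlier
print(sum(data) / len(data))                   # the mean is dragged toward 50
print(mcc_location(data))                      # stays near the bulk at ~1.0
```

The same reweighting idea, embedded in a nonlinear regression on the measurement equation, is what gives the MCUKF its resistance to heavy-tailed measurement noise.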

  7. A Gaussian IV estimator of cointegrating relations

    DEFF Research Database (Denmark)

    Bårdsen, Gunnar; Haldrup, Niels

    2006-01-01

In static single equation cointegration regression models the OLS estimator will have a non-standard distribution unless regressors are strictly exogenous. In the literature a number of estimators have been suggested to deal with this problem, especially by the use of semi-nonparametric estimators. […] These instruments are almost ideal and simulations show that the IV estimator using such instruments alleviates the endogeneity problem extremely well in both finite and large samples.

  8. Relative abundance estimations of Chengal trees in a tropical rainforest by using modified canopy fractional cover (mCFC)

    International Nuclear Information System (INIS)

    Hassan, N

    2014-01-01

Tree species composition estimations are important to sustain forest management. This study estimates the relative abundance of a useful timber tree species (chengal) using Hyperion EO-1 satellite data. For the estimation, a modified Canopy Fractional Cover (mCFC) was developed from Canopy Fractional Cover (CFC). mCFC was more sensitive for estimating the relative abundance of chengal trees than Mixture Tuned Matched Filtering (MTMF), while MTMF was more sensitive for estimating the relative abundance of undisturbed forest. Accuracy assessment suggests that the mCFC model explains the relative abundance of chengal trees better than MTMF. It can therefore be concluded that relative abundance of tree species extracted from Hyperion EO-1 satellite data using modified Canopy Fractional Cover is an obtrusive approach used for identifying tree species composition

  9. A paleomagnetic record in loess-paleosol sequences since late Pleistocene in the arid Central Asia

    Science.gov (United States)

    Li, Guanhua; Xia, Dunsheng; Appel, Erwin; Wang, Youjun; Jia, Jia; Yang, Xiaoqiang

    2018-03-01

Geomagnetic excursions during the Brunhes epoch have become a forefront topic in paleomagnetic studies, as they provide key information about Earth's interior dynamics and could serve as another tool for stratigraphic correlation among different lithologies. Loess-paleosol sequences provide good archives for decoding geomagnetic excursions. However, the detailed pattern of these excursions has not been sufficiently clarified due to pedogenic influence. In this study, paleomagnetic analysis was performed on loess-paleosol sequences on the northern piedmont of the Tianshan Mountains (northwestern China). By radiocarbon and luminescence dating, the loess section was chronologically constrained mainly to the last c. 130 ka, a period during which several distinct geomagnetic excursions occurred. The rock magnetic properties of this loess section are dominated by magnetite and maghemite in a pseudo-single-domain state. The rock magnetic properties and magnetic anisotropy indicate weak pedogenic influence on the magnetic record. The stable component of remanent magnetization derived from thermal demagnetization revealed the presence of two intervals of directional anomalies with corresponding intensity lows in the Brunhes epoch. The age control in the key layers indicates these anomalies are likely associated with the Laschamp and Blake excursions, respectively. In addition, relative paleointensity in the loess section is basically compatible with other regional and global relative paleointensity records and indicates two low-paleointensity zones, possibly corresponding to the Blake and Laschamp excursions, respectively. As a result, this study suggests that the loess section may have the potential to record short-lived excursions, which largely reflect the variation of dipole components in the global archives.

  10. Methodology to estimate the relative pressure field from noisy experimental velocity data

    International Nuclear Information System (INIS)

    Bolin, C D; Raguin, L G

    2008-01-01

The determination of intravascular pressure fields is important to the characterization of cardiovascular pathology. We present a two-stage method that solves the inverse problem of estimating the relative pressure field from noisy velocity fields measured by phase contrast magnetic resonance imaging (PC-MRI) on an irregular domain with limited spatial resolution, and includes a filter for the experimental noise. For the pressure calculation, the Poisson pressure equation is solved by embedding the irregular flow domain into a regular domain. To lessen the propagation of the noise inherent to the velocity measurements, three filters - a median filter and two physics-based filters - are evaluated using a 2-D Couette flow. The two physics-based filters outperform the median filter for the estimation of the relative pressure field for realistic signal-to-noise ratios (SNR = 5 to 30). The most accurate pressure field results from a filter that applies in a least-squares sense three constraints simultaneously: consistency between measured and filtered velocity fields, divergence-free and additional smoothness conditions. This filter leads to a 5-fold gain in accuracy for the estimated relative pressure field compared to no noise filtering, in conditions consistent with PC-MRI of the carotid artery: SNR = 5, 20 × 20 discretized flow domain (25 × 25 computational domain).
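The core of the pressure step described above is a discrete Poisson solve on a regular (embedding) grid. Below is a minimal, self-contained sketch of that step only, not the paper's method: plain Jacobi iteration for Laplacian(p) = f with homogeneous Dirichlet boundaries, verified against a manufactured solution. The grid size and source term are illustrative assumptions.

```python
import math

def solve_poisson(f, n, iters=2000):
    """Jacobi iteration for Laplacian(p) = f on the unit square, p = 0 on the boundary."""
    h = 1.0 / n
    p = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(iters):
        new = [row[:] for row in p]
        for i in range(1, n):
            for j in range(1, n):
                # standard 5-point stencil update
                new[i][j] = 0.25 * (p[i-1][j] + p[i+1][j] + p[i][j-1] + p[i][j+1]
                                    - h * h * f(i * h, j * h))
        p = new
    return p

# Manufactured solution: p = sin(pi x) sin(pi y), so f = -2 pi^2 sin(pi x) sin(pi y).
n = 20
f = lambda x, y: -2 * math.pi ** 2 * math.sin(math.pi * x) * math.sin(math.pi * y)
p = solve_poisson(f, n)
err = max(abs(p[i][j] - math.sin(math.pi * i / n) * math.sin(math.pi * j / n))
          for i in range(n + 1) for j in range(n + 1))
print(f"max error: {err:.4f}")   # dominated by the O(h^2) discretization error
```

In the paper's setting the right-hand side f comes from derivatives of the (filtered) measured velocity field, and the irregular vessel lumen is embedded in a grid like this one.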

  11. Relative abundance estimations of chengal tree in a tropical rainforest by using modified Canopy Fractional Cover (mCFC)

    International Nuclear Information System (INIS)

    Hassan, N

    2014-01-01

Tree species composition estimations are important to sustain forest management. This study estimates the relative abundance of a useful timber tree species (chengal) using Hyperion EO-1 satellite data. For the estimation, a modified Canopy Fractional Cover (mCFC) was developed from Canopy Fractional Cover (CFC). mCFC was more sensitive for estimating the relative abundance of chengal trees than Mixture Tuned Matched Filtering (MTMF), while MTMF was more sensitive for estimating the relative abundance of undisturbed forest. Accuracy assessment suggests that the mCFC model explains the relative abundance of chengal trees better than MTMF. It can therefore be concluded that relative abundance of tree species extracted from Hyperion EO-1 satellite data using modified Canopy Fractional Cover is an obtrusive approach used for identifying tree species composition

  12. Estimates of the pion-nucleon sigma term using dispersion relations and taking into account the relation between chiral and scale invariance breaking

    International Nuclear Information System (INIS)

    Efrosinin, V.P.; Zaikin, D.A.

    1983-01-01

    We study the possible reasons for the disagreement between the estimates of the pion-nucleon sigma term obtained by the method of dispersion relations with extrapolation to the Cheng-Dashen point and by other methods which do not involve this extrapolation. One reason for the disagreement may be the nonanalyticity of the πN amplitude in the variable t for ν = 0. We propose a method for estimating the sigma term using the threshold data for the πN amplitude, in which the effect of this nonanalyticity is minimized. We discuss the relation between scale invariance violation and chiral symmetry breaking and give the corresponding estimate of the sigma term. The two estimates are similar (42 and 34 MeV) and are in agreement when the uncertainties of the two methods are taken into consideration

  13. Estimation of salient regions related to chronic gastritis using gastric X-ray images.

    Science.gov (United States)

    Togo, Ren; Ishihara, Kenta; Ogawa, Takahiro; Haseyama, Miki

    2016-10-01

    Since technical knowledge and a high degree of experience are necessary for diagnosis of chronic gastritis, computer-aided diagnosis (CAD) systems that analyze gastric X-ray images are desirable in the field of medicine. Therefore, a new method that estimates salient regions related to chronic gastritis/non-gastritis for supporting diagnosis is presented in this paper. In order to estimate salient regions related to chronic gastritis/non-gastritis, the proposed method monitors the distance between a target image feature and Support Vector Machine (SVM)-based hyperplane for its classification. Furthermore, our method realizes removal of the influence of regions outside the stomach by using positional relationships between the stomach and other organs. Consequently, since the proposed method successfully estimates salient regions of gastric X-ray images for which chronic gastritis and non-gastritis are unknown, visual support for inexperienced clinicians becomes feasible. Copyright © 2016 Elsevier Ltd. All rights reserved.
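The saliency idea of monitoring the distance to an SVM hyperplane reduces, in the linear case, to the signed distance (w·x + b)/||w||. A hedged toy sketch with hypothetical weights and feature vectors, none taken from the paper: regions whose features lie far on one side of the boundary would score as more salient.

```python
import math

def signed_distance(w, b, x):
    """Signed distance from feature vector x to the hyperplane w.x + b = 0."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

# Hypothetical linear decision function and 2-D patch features, for
# illustration only; a real system would use a trained SVM.
w, b = [2.0, -1.0], 0.5
patches = {"patch_a": [1.0, 0.2], "patch_b": [-0.8, 1.4]}

saliency = {name: signed_distance(w, b, x) for name, x in patches.items()}
for name, s in sorted(saliency.items(), key=lambda kv: -abs(kv[1])):
    print(name, round(s, 3))
```

Ranking patches by |signed distance| mimics the paper's use of the margin as a confidence measure for highlighting gastritis-related regions.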

  14. Method-related estimates of sperm vitality.

    Science.gov (United States)

    Cooper, Trevor G; Hellenkemper, Barbara

    2009-01-01

    Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.

  15. Estimating one's own and one's relatives' multiple intelligence: a study from Argentina.

    Science.gov (United States)

    Furnham, Adrian; Chamorro-Premuzic, Tomas

    2005-05-01

    Participants from Argentina (N = 217) estimated their own, their partner's, their parents' and their grandparents' overall and multiple intelligences. The Argentinean data showed that men gave higher overall estimates than women (M = 110.4 vs. 105.1) as well as higher estimates on mathematical and spatial intelligence. Participants thought themselves slightly less bright than their fathers (2 IQ points) but brighter than their mothers (6 points), their grandfathers (8 points), but especially their grandmothers (11 points). Regressions showed that participants thought verbal and mathematical IQ to be the best predictors of overall IQ. Results were broadly in agreement with other studies in the area. A comparison was also made with British data using the same questionnaire. British participants tended to give significantly higher self-estimates than for relatives, though the pattern was generally similar. Results are discussed in terms of the studies in the field.

  16. The application of particle filters in single trial event-related potential estimation

    International Nuclear Information System (INIS)

    Mohseni, Hamid R; Nazarpour, Kianoush; Sanei, Saeid; Wilding, Edward L

    2009-01-01

In this paper, an approach for the estimation of single trial event-related potentials (ST-ERPs) using particle filters (PFs) is presented. The method is based on recursive Bayesian mean square estimation of ERP wavelet coefficients using their previous estimates as prior information. To enable a performance evaluation of the approach under Gaussian and non-Gaussian distributed noise conditions, we added Gaussian white noise (GWN) and real electroencephalogram (EEG) signals recorded during rest to the simulated ERPs. The results were compared to those of the Kalman filtering (KF) approach, demonstrating the robustness of the PF, relative to the KF, to the added GWN. The proposed method also outperforms the KF when the assumption about the Gaussianity of the noise is violated. We also applied this technique to real EEG potentials recorded in an odd-ball paradigm and investigated the correlation between the amplitude and the latency of the estimated ERP components. Unlike the KF method, for the PF there was a statistically significant negative correlation between amplitude and latency of the estimated ERPs, matching previous neurophysiological findings
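As a toy illustration of the recursive Bayesian idea, not the authors' wavelet-domain ERP model, here is a minimal bootstrap particle filter for a scalar random-walk state with Gaussian observation noise; all noise levels and sizes are assumptions made for the sketch.

```python
import math
import random

random.seed(1)

def particle_filter(obs, n=500, q=0.1, r=0.5):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q^2), y_t = x_t + N(0, r^2).

    q and r are the process and observation noise standard deviations.
    """
    parts = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in obs:
        # propagate particles through the random-walk dynamics
        parts = [x + random.gauss(0.0, q) for x in parts]
        # weight by the Gaussian observation likelihood
        w = [math.exp(-((y - x) ** 2) / (2 * r * r)) for x in parts]
        total = sum(w)
        w = [wi / total for wi in w]
        estimates.append(sum(wi * xi for wi, xi in zip(w, parts)))
        # multinomial resampling
        parts = random.choices(parts, weights=w, k=n)
    return estimates

# Simulate a slowly drifting state and noisy observations of it.
truth, x = [], 0.0
for _ in range(100):
    x += random.gauss(0.0, 0.1)
    truth.append(x)
obs = [t + random.gauss(0.0, 0.5) for t in truth]

est = particle_filter(obs)
rmse = (sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)) ** 0.5
print(f"RMSE: {rmse:.3f}")   # typically well below the raw observation noise of 0.5
```

The Gaussian likelihood here could be swapped for a heavy-tailed one, which is exactly where the PF's advantage over the KF appears.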

  17. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Dansereau Richard M

    2007-01-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA. For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  18. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar

    2006-11-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA. For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  19. Determining an Estimate of an Equivalence Relation for Moderate and Large Sized Sets

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2017-01-01

Full Text Available This paper presents two approaches to determining estimates of an equivalence relation on the basis of pairwise comparisons with random errors. Obtaining such an estimate requires the solution of a discrete programming problem which minimizes the sum of the differences between the form of the relation and the comparisons. The problem is NP-hard and can be solved with the use of exact algorithms for sets of moderate size, i.e. about 50 elements. In the case of larger sets, i.e. at least 200 comparisons for each element, it is necessary to apply heuristic algorithms. The paper presents results (a statistical preprocessing) which enable us to determine the optimal or a near-optimal solution with acceptable computational cost. They include: the development of a statistical procedure producing comparisons with low probabilities of errors and a heuristic algorithm based on such comparisons. The proposed approach guarantees the applicability of such estimators for any size of set. (original abstract)
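The discrete programming problem can be made concrete on a tiny set: enumerate every partition (feasible only for small sets, which is why larger instances need heuristics) and pick the one minimizing disagreements with the noisy pairwise comparisons. The comparison data below are invented for illustration.

```python
def partitions(elems):
    """Yield all set partitions of a list (feasible only for small sets)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i+1:]
        yield [[first]] + part

def disagreements(part, comps):
    """Count pairwise comparisons that contradict the candidate partition."""
    block = {e: i for i, b in enumerate(part) for e in b}
    return sum(1 for (a, b), same in comps.items()
               if (block[a] == block[b]) != same)

# Noisy comparisons over {0,1,2,3}: the true relation is {0,1} | {2,3},
# but the (1,2) comparison has been flipped by error.
comps = {(0, 1): True, (0, 2): False, (0, 3): False,
         (1, 2): True, (1, 3): False, (2, 3): True}

best = min(partitions([0, 1, 2, 3]), key=lambda p: disagreements(p, comps))
print(sorted(sorted(b) for b in best))   # → [[0, 1], [2, 3]], despite the error
```

With one flipped comparison the true partition still wins (one disagreement versus at least two for any alternative), which is the error-tolerance the paper's estimators rely on.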

  20. Estimating the prevalence of 26 health-related indicators at neighbourhood level in the Netherlands using structured additive regression.

    Science.gov (United States)

    van de Kassteele, Jan; Zwakhals, Laurens; Breugelmans, Oscar; Ameling, Caroline; van den Brink, Carolien

    2017-07-01

    Local policy makers increasingly need information on health-related indicators at smaller geographic levels like districts or neighbourhoods. Although more large data sources have become available, direct estimates of the prevalence of a health-related indicator cannot be produced for neighbourhoods for which only small samples or no samples are available. Small area estimation provides a solution, but unit-level models for binary-valued outcomes that can handle both non-linear effects of the predictors and spatially correlated random effects in a unified framework are rarely encountered. We used data on 26 binary-valued health-related indicators collected on 387,195 persons in the Netherlands. We associated the health-related indicators at the individual level with a set of 12 predictors obtained from national registry data. We formulated a structured additive regression model for small area estimation. The model captured potential non-linear relations between the predictors and the outcome through additive terms in a functional form using penalized splines and included a term that accounted for spatially correlated heterogeneity between neighbourhoods. The registry data were used to predict individual outcomes which in turn are aggregated into higher geographical levels, i.e. neighbourhoods. We validated our method by comparing the estimated prevalences with observed prevalences at the individual level and by comparing the estimated prevalences with direct estimates obtained by weighting methods at municipality level. We estimated the prevalence of the 26 health-related indicators for 415 municipalities, 2599 districts and 11,432 neighbourhoods in the Netherlands. We illustrate our method on overweight data and show that there are distinct geographic patterns in the overweight prevalence. Calibration plots show that the estimated prevalences agree very well with observed prevalences at the individual level. 
The estimated prevalences agree reasonably well with the

  1. Prevalence Estimates for Pharmacological Neuroenhancement in Austrian University Students: Its Relation to Health-Related Risk Attitude and the Framing Effect of Caffeine Tablets

    Directory of Open Access Journals (Sweden)

    Pavel Dietz

    2018-06-01

Full Text Available Background: Pharmacological neuroenhancement (PN) is defined as the use of illicit or prescription drugs by healthy individuals for cognitive-enhancing purposes. The present study aimed (i) to investigate whether including caffeine tablets in the definition of PN within a questionnaire increases the PN prevalence estimate (framing effect), and (ii) to investigate whether health-related risk attitude is increased in students who use PN. Materials and methods: Two versions of a paper-and-pencil questionnaire (the first version included caffeine tablets in the definition of PN, the second excluded them) were distributed among university students at the University of Graz, Austria. The unrelated question model (UQM) was used to estimate the 12-month PN prevalence, and the German version of the 30-item Domain-Specific Risk-Taking (DOSPERT) scale to assess health-related risk attitude. Moreover, large-sample z-tests (α = 0.05) were performed to compare the PN prevalence estimates of two groups. Results: Two thousand four hundred and eighty-nine questionnaires were distributed and 2,284 (91.8%) questionnaires were included in the analysis. The overall PN prevalence estimate for all students was 11.9%. One-tailed large-sample z-tests revealed that the PN estimate for students with higher health-related risk attitude was significantly higher than that for students with lower health-related risk attitude (15.6 vs. 8.5%; z = 2.65, p = 0.004). Furthermore, when caffeine tablets were included in the example of PN, the prevalence estimate of PN was significantly higher compared to the version without caffeine tablets (14.9 vs. 9.0%; z = 2.20, p = 0.014). Discussion: This study revealed that the PN prevalence estimate increases when caffeine tablets are included in the definition of PN. Therefore, future studies investigating the prevalence of, and predictors for, PN should be performed and interpreted with respect to potential framing effects.
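The large-sample test for comparing two prevalence estimates has the familiar pooled two-proportion form sketched below. The group sizes are hypothetical, and UQM estimates carry extra randomization variance beyond the simple binomial standard error used here, which is why this toy z value differs from the paper's reported statistics.

```python
import math

def z_two_proportions(p1, n1, p2, n2):
    """Large-sample z statistic for H0: p1 == p2 (pooled binomial version)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical equal group sizes; the UQM's randomization inflates the
# variance, so a UQM-based z (as in the paper) would be smaller than this.
z = z_two_proportions(0.149, 1100, 0.090, 1100)
print(round(z, 2))
```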
This study further

  2. An estimator for the relative entropy rate of path measures for stochastic differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Opper, Manfred, E-mail: manfred.opper@tu-berlin.de

    2017-02-01

We address the problem of estimating the relative entropy rate (RER) for two stochastic processes described by stochastic differential equations. For the case where the drift of one process is known analytically, but one has only observations from the second process, we use a variational bound on the RER to construct an estimator.
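For drifts f and g sharing a diffusion coefficient σ, Girsanov's theorem gives the relative entropy rate as E[(f(x) − g(x))²]/(2σ²) under the law of the observed process. The following is a minimal Monte Carlo sketch (an illustration of that identity, not the paper's variational estimator) for linear drifts, where the rate has the closed form (a − b)²/(4b).

```python
import math
import random

random.seed(0)

def rer_linear_drifts(a, b, sigma=1.0, dt=1e-3, steps=500_000):
    """Monte Carlo estimate of the relative entropy rate between
    dx = -a x dt + sigma dW  and  dx = -b x dt + sigma dW,
    sampling x from the second (observed) process via Euler-Maruyama:
    RER = E[(f(x) - g(x))^2] / (2 sigma^2)."""
    x, acc = 0.0, 0.0
    sq = math.sqrt(dt)
    for _ in range(steps):
        x += -b * x * dt + sigma * sq * random.gauss(0.0, 1.0)
        acc += ((-a * x) - (-b * x)) ** 2
    return acc / steps / (2 * sigma ** 2)

a, b = 2.0, 1.0
est = rer_linear_drifts(a, b)
exact = (a - b) ** 2 / (4 * b)   # analytic rate for linear drifts: E[x^2] = sigma^2/(2b)
print(round(est, 3), exact)
```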

  3. Spatial variations in estimated chronic exposure to traffic-related air pollution in working populations: A simulation

    Directory of Open Access Journals (Sweden)

    Cloutier-Fisher Denise

    2008-07-01

Full Text Available Abstract Background Chronic exposure to traffic-related air pollution is associated with a variety of health impacts in adults, and recent studies show that exposure varies spatially, with some residents in a community more exposed than others. A spatial exposure simulation model (SESM) which incorporates six microenvironments (home indoor, work indoor, other indoor, outdoor, in-vehicle to work and in-vehicle other) is described and used to explore spatial variability in estimates of exposure to traffic-related nitrogen dioxide (not including indoor sources) for working people. The study models spatial variability in estimated exposure aggregated at the census tract level for 382 census tracts in the Greater Vancouver Regional District of British Columbia, Canada. Summary statistics relating to the distributions of the estimated exposures are compared visually through mapping. Observed variations are explored through analyses of model inputs. Results Two sources of spatial variability in exposure to traffic-related nitrogen dioxide were identified. Median estimates of total exposure ranged from 8 μg/m3 to 35 μg/m3 of annual average hourly NO2 for workers in different census tracts in the study area. Exposure estimates are highest where ambient pollution levels are highest. This reflects the regional gradient of pollution in the study area and the relatively high percentage of time spent at home locations. However, for workers within the same census tract, variations were observed in the partial exposure estimates associated with time spent outside the residential census tract. Simulation modeling shows that some workers may have exposures 1.3 times higher than other workers residing in the same census tract because of time spent away from the residential census tract, and that time spent in work census tracts contributes most to the differences in exposure. Exposure estimates associated with the activity of commuting by vehicle to work were
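The microenvironment bookkeeping behind such a model is, at its core, a time-weighted average of concentrations. A minimal sketch with invented hours and NO2 levels, none taken from the study:

```python
# Hypothetical time-activity pattern (hours/day) and NO2 concentrations
# (ug/m3) for the six microenvironments named in the abstract; all values
# are illustrative assumptions, not figures from the study.
hours = {"home_indoor": 14, "work_indoor": 8, "other_indoor": 0.5,
         "outdoor": 0.5, "vehicle_to_work": 0.5, "vehicle_other": 0.5}
conc = {"home_indoor": 18, "work_indoor": 25, "other_indoor": 20,
        "outdoor": 30, "vehicle_to_work": 60, "vehicle_other": 55}

total_hours = sum(hours.values())            # 24
exposure = sum(hours[m] * conc[m] for m in hours) / total_hours
print(f"{exposure:.1f} ug/m3 average NO2")   # → 22.3 ug/m3 average NO2
```

Because so many hours fall in the home microenvironment, the home-tract ambient level dominates this average, which matches the abstract's observation that exposure tracks the regional pollution gradient at residential locations.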

  4. Palaeointensity determinations and rock magnetic properties on rocks from Izu-Bonin-Mariana fore-arc (IODP Exp. 352).

    Science.gov (United States)

    Carvallo, Claire; Camps, Pierre; Sager, Will; Poidras, Thierry

    2017-04-01

IODP Expedition 352 cored igneous rocks from the Izu-Bonin-Mariana fore-arc crust: Sites U1440 and U1441 recovered Eocene basalts and related rocks, whereas Sites U1439 and U1442 recovered Eocene boninites and related rocks. We selected samples from Holes U1439C, U1440B and U1442A for paleointensity measurements. Hysteresis measurements and high- and low-temperature magnetization curves show that samples from Hole U1440B undergo magnetochemical changes when heated and are mostly composed of single-domain (SD) or pseudo-single-domain (PSD) titanomaghemite. In contrast, the same measurements show that most selected samples from Holes U1439C and U1442A are thermally stable and are composed of either SD or PSD titanomagnetite with very little titanium content, or SD ferromagnetic grains with a large paramagnetic contribution. Thellier-Thellier paleointensity experiments carried out on U1439C and U1442A samples give a good success rate of 25/60 and virtual dipole moment (VDM) values between 1.3 and 3.5 × 10²² Am². Multispecimen paleointensity experiments carried out on 55 samples from Hole U1440B (divided into 4 groups) and 20 from Hole U1439C gave poor-quality results, but they seem to indicate a VDM around 4-6 × 10²² Am² in the Hole U1440B fore-arc basalts. These results are in agreement with the few low VDM values previously measured on Eocene rocks. However, they do not support an inverse relationship between field intensity and reversal rate, since the reversal rate in the Eocene was rather low.
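The virtual dipole moments quoted in such studies follow from the standard geocentric dipole formula, VDM = (4π r³/μ0) · B / sqrt(1 + 3 cos² θ_m), with the magnetic colatitude θ_m obtained from the inclination via tan I = 2/tan θ_m. A small sketch of this standard conversion (not code from the paper):

```python
import math

MU0 = 4 * math.pi * 1e-7       # vacuum permeability (T m/A)
R_EARTH = 6.371e6              # Earth radius (m)

def vdm(b_tesla, inclination_deg):
    """Virtual dipole moment (A m^2) from a paleointensity and an inclination.

    The magnetic colatitude theta_m follows from tan(I) = 2 / tan(theta_m).
    """
    inc = math.radians(inclination_deg)
    theta_m = math.atan2(2.0, math.tan(inc))
    return (4 * math.pi * R_EARTH ** 3 / MU0) * b_tesla / math.sqrt(
        1 + 3 * math.cos(theta_m) ** 2)

# Sanity check: ~30 uT at the equator (I = 0) gives ~7.8e22 A m^2, close to
# the present-day dipole moment, so VDMs of 1-4e22 imply a notably weak field.
print(f"{vdm(30e-6, 0.0):.2e}")
```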

  5. New archaeomagnetic data recovered from the study of celtiberic remains from central Spain (Numancia and Ciadueña, III-I BC).

    Science.gov (United States)

    Osete, María Luisa; Chauvin, Annick; Catanzariti, Gianluca; Jimeno, Alfredo; Campuzano, Saioa A.; Benito-Batanero, Juan Pedro; Roperch, Pierrick

    2016-04-01

Variations of the geomagnetic field in the Iberian Peninsula prior to Roman times remain very poorly constrained. Here we report results from an archeomagnetic study carried out on four sets of ceramics and one combustion structure recovered from two pre-Roman (Celtiberic) archeological sites in central Spain. Rock magnetic experiments indicate that the ChRM is carried by magnetite. Archaeointensity determinations were carried out using the classical Thellier-Thellier experiment, including tests and corrections for magnetic anisotropy and magnetic cooling rate dependency. Well heated specimens (red ceramic fragments and well heated samples from the kiln) show a single well defined component of magnetisation going through the origin and a linear Arai plot, providing successful archaeointensity determinations. The effect of anisotropy of the thermoremanent magnetization (ATRM) on paleointensity analysis was specifically investigated, yielding very high ATRM corrections on fine pottery specimens, with differences between the uncorrected and ATRM-corrected paleointensity values reaching up to 80-100%. Mean intensity values obtained from three selected groups were 61.1 ± 5.9 μT, 57.6 ± 3.3 μT, and 56.4 ± 4.7 μT, which allows us to delineate the evolution of the paleofield intensity in central Iberia during the III-I centuries BC. The new archaeointensity data disagree with previous results from Iberian ceramics that were not corrected for the ATRM effect, but they are in agreement with the most recent French paleointensity curve and the latest European intensity model, both based on a selection of high-quality paleointensity data. This result reinforces the idea that the puzzling scatter often observed in the global paleointensity database is likely due to differences in laboratory protocols. Further data from well contrasted laboratory protocols are still necessary to confidently delineate the evolution of the geomagnetic paleofield during the first millennium BC.

  6. Uncertainty related to Environmental Data and Estimated Extreme Events

    DEFF Research Database (Denmark)

    Burcharth, H. F.

    The design loads on rubble mound breakwaters are almost entirely determined by the environmental conditions, i.e. sea state, water levels, sea bed characteristics, etc. It is the objective of sub-group B to identify the most important environmental parameters and evaluate the related uncertainties...... including those corresponding to extreme estimates typically used for design purposes. Basically a design condition is made up of a set of parameter values stemming from several environmental parameters. To be able to evaluate the uncertainty related to design states one must know the corresponding joint....... Consequently this report deals mainly with each parameter separately. Multi parameter problems are briefly discussed in section 9. It is important to notice that the quantified uncertainties reported in section 7.7 represent what might be regarded as typical figures to be used only when no more qualified...

  7. Runtime and Inversion Impacts on Estimation of Moisture Retention Relations by Centrifuge

    Science.gov (United States)

    Sigda, J. M.; Wilson, J. L.

    2003-12-01

    the impact of different runtimes and different inversion techniques on estimated moisture retention parameters. Moisture retention data were collected for a number of poorly lithified sands and indurated deformed sands using the UFA centrifuge system (Conca and Wright, 1990). Parameters for the van Genuchten model were estimated for short and long runtimes with one inversion technique. Model parameters were re-estimated for one other inversion technique and a simple averaging approach which does not involve inversion. Our results demonstrate that the averaging approach greatly underestimates the van Genuchten n parameter relative to the inversion techniques. Insufficient runtimes also have a significant impact on estimated parameters. Our analysis indicates a need, barring method standardization, for practitioners to include information about inversion technique and runtime criteria when presenting centrifuge moisture retention results.

  8. The Prevalence of Age-Related Eye Diseases and Visual Impairment in Aging: Current Estimates

    Science.gov (United States)

    Klein, Ronald; Klein, Barbara E. K.

    2013-01-01

    Purpose. To examine prevalence of five age-related eye conditions (age-related cataract, AMD, open-angle glaucoma, diabetic retinopathy [DR], and visual impairment) in the United States. Methods. Review of published scientific articles and unpublished research findings. Results. Cataract, AMD, open-angle glaucoma, DR, and visual impairment prevalences are high in four different studies of these conditions, especially in people over 75 years of age. There are disparities among racial/ethnic groups with higher age-specific prevalence of DR, open-angle glaucoma, and visual impairment in Hispanics and blacks compared with whites, higher prevalence of age-related cataract in whites compared with blacks, and higher prevalence of late AMD in whites compared with Hispanics and blacks. The estimates are based on old data and do not reflect recent changes in the distribution of age and race/ethnicity in the United States population. There are no epidemiologic estimates of prevalence for many visually-impairing conditions. Conclusions. Ongoing prevalence surveys designed to provide reliable estimates of visual impairment, AMD, age-related cataract, open-angle glaucoma, and DR are needed. It is important to collect objective data on these and other conditions that affect vision and quality of life in order to plan for health care needs and identify areas for further research. PMID:24335069

  9. Work related injuries: estimating the incidence among illegally employed immigrants

    Directory of Open Access Journals (Sweden)

    Fadda Emanuela

    2010-12-01

Full Text Available Abstract Background Statistics on occupational accidents are based on data from registered employees. With the increasing number of immigrants employed illegally and/or without regular working visas in many developed countries, it is of interest to estimate the injury rate among such unregistered workers. Findings The current study was conducted in an area of North-Eastern Italy. The sources of information employed in the present study were the Accidents and Emergencies records of a hospital; the population data on foreign-born residents in the hospital catchment area (Health Care District 4, Primary Care Trust 20, Province of Verona, Veneto Region, North-Eastern Italy); and the estimated proportion of illegally employed workers in representative samples from the Province of Verona and the Veneto Region. Of the 419 A&E records collected between January and December 2004 among non European Union (non-EU) immigrants, 146 aroused suspicion by reporting the home, rather than the workplace, as the site of the accident. These cases were the numerator of the rate. The number of illegally employed non-EU workers, the denominator of the rate, was estimated according to different assumptions and ranged between 537 and 1,338 individuals. The corresponding rates varied from 109.1 to 271.8 per 1,000 non-EU illegal employees, against 65 per 1,000 reported in Italy in 2004. Conclusions The results of this study suggest that there is an unrecorded burden of illegally employed immigrants suffering from work related injuries. Additional efforts for prevention of injuries in the workplace are required to decrease this number. It can be concluded that the Italian National Institute for the Insurance of Work Related Injuries (INAIL probably underestimates the incidence of these accidents in Italy.

  10. Work related injuries: estimating the incidence among illegally employed immigrants.

    Science.gov (United States)

    Mastrangelo, Giuseppe; Rylander, Ragnar; Buja, Alessandra; Marangi, Gianluca; Fadda, Emanuela; Fedeli, Ugo; Cegolon, Luca

    2010-12-08

Statistics on occupational accidents are based on data from registered employees. With the increasing number of immigrants employed illegally and/or without regular working visas in many developed countries, it is of interest to estimate the injury rate among such unregistered workers. The current study was conducted in an area of North-Eastern Italy. The sources of information employed in the present study were the Accidents and Emergencies records of a hospital; the population data on foreign-born residents in the hospital catchment area (Health Care District 4, Primary Care Trust 20, Province of Verona, Veneto Region, North-Eastern Italy); and the estimated proportion of illegally employed workers in representative samples from the Province of Verona and the Veneto Region. Of the 419 A&E records collected between January and December 2004 among non European Union (non-EU) immigrants, 146 aroused suspicion by reporting the home, rather than the workplace, as the site of the accident. These cases were the numerator of the rate. The number of illegally employed non-EU workers, the denominator of the rate, was estimated according to different assumptions and ranged between 537 and 1,338 individuals. The corresponding rates varied from 109.1 to 271.8 per 1,000 non-EU illegal employees, against 65 per 1,000 reported in Italy in 2004. The results of this study suggest that there is an unrecorded burden of illegally employed immigrants suffering from work related injuries. Additional efforts for prevention of injuries in the workplace are required to decrease this number. It can be concluded that the Italian National Institute for the Insurance of Work Related Injuries (INAIL) probably underestimates the incidence of these accidents in Italy.
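The study's rate arithmetic can be reproduced directly: 146 suspect A&E records divided by the estimated number of illegally employed non-EU workers under the two bounding assumptions.

```python
# Reproducing the abstract's rate arithmetic: 146 suspect A&E records over the
# estimated number of illegally employed non-EU workers, expressed per 1,000.
cases = 146
denom_high, denom_low = 1338, 537     # bounds from different estimation assumptions

rate_low = 1000 * cases / denom_high  # lowest-rate scenario, ~109.1 per 1,000
rate_high = 1000 * cases / denom_low  # highest-rate scenario, ~271.9 per 1,000
```

Both scenarios exceed the 65 per 1,000 reported nationally for registered workers, which is the paper's central point.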

  11. Estimation of the fiscal year 1985 expenditures in nuclear energy relation

    International Nuclear Information System (INIS)

    1985-01-01

In Japan, electric power generated by nuclear energy accounts for about 20 % of the total power supply at present, and radiation is utilized extensively in fields such as industry, agriculture and medicine. The expenditures (budgets) estimated for the fiscal year 1985 are about 343.8 billion yen, plus a contract authorization limit of about 146.7 billion yen. In connection with the expenditure estimates (of which a breakdown is given in tables), the nuclear-energy-related research and development plans for fiscal year 1985 are presented: strengthening of nuclear energy safety, promotion of nuclear power generation, establishment of the nuclear fuel cycle, development of advanced power reactors, research on nuclear fusion, promotion of radiation utilization, strengthening of the research and development infrastructure, promotion of international cooperation, etc. (Mori, K.)

  12. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    Science.gov (United States)

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
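A minimal sketch of the approach described above: a log-link Poisson GLM fit by IRLS to a binary outcome (so exp of a coefficient is a relative risk), with a cluster-robust sandwich variance. This corresponds to GEE with an independence working correlation, a special case of the article's setup; the simulated data in the usage example are illustrative.

```python
import numpy as np

def modified_poisson(X, y, cluster, n_iter=100, tol=1e-10):
    """Log-link Poisson regression on a binary outcome (relative-risk model),
    fit by IRLS, with cluster-robust sandwich standard errors. Equivalent to
    GEE with an independence working correlation."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                  # fitted means
        z = X @ beta + (y - mu) / mu           # IRLS working response
        beta_new = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    mu = np.exp(X @ beta)
    bread = np.linalg.inv(X.T @ (mu[:, None] * X))
    meat = np.zeros((p, p))
    for c in np.unique(cluster):
        idx = cluster == c
        score = X[idx].T @ (y[idx] - mu[idx])  # per-cluster score contribution
        meat += np.outer(score, score)
    cov = bread @ meat @ bread                 # sandwich covariance
    return beta, np.sqrt(np.diag(cov))

# Illustrative clustered data: 300 clusters of 4, binary exposure, true RR = 2.
rng = np.random.default_rng(0)
cluster = np.repeat(np.arange(300), 4)
x = rng.integers(0, 2, 1200).astype(float)
y = (rng.random(1200) < np.where(x == 1, 0.30, 0.15)).astype(float)
X = np.column_stack([np.ones(1200), x])
beta, se = modified_poisson(X, y, cluster)
rr = np.exp(beta[1])   # estimated relative risk
```

Unlike log binomial regression, this fit cannot fail to converge because the log-link Poisson likelihood places no upper bound on the fitted means.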

  13. Geomagnetic modulation of the late Pleistocene cosmic-ray flux as determined by 10Be from Blake Outer Ridge marine sediments

    International Nuclear Information System (INIS)

    McHargue, L.R.; Donahue, D.; Damon, P.E.; Sonett, C.P.; Biddulph, D.; Burr, G.

    2000-01-01

The cosmic-ray flux incident upon the Earth during the late Pleistocene, 20-60 kyr B.P., was studied by measuring the cosmogenic radionuclide 10Be from a marine sediment core at site CH88-10P on the Blake Outer Ridge. The paleointensity of the geomagnetic field for this core was determined by various methods. The variance in the concentration of 10Be in the authigenic fraction of the sediments from Blake Ridge closely correlates with the inverse of the variance in the paleointensity of the geomagnetic field. The 10Be signal lags the measured paleointensity of the sediments by up to 1000 years of sedimentation. In contrast, the data from several other elements, some climatically sensitive, and from beryllium show a relationship neither to 10Be nor to the paleomagnetic data. The relationship between 10Be concentration and the dipole field intensity (M/M0) as measured in the sediments is consistent with theoretical models.

  14. The relative pose estimation of aircraft based on contour model

    Science.gov (United States)

    Fu, Tai; Sun, Xiangyi

    2017-02-01

This paper proposes a relative pose estimation approach based on an object contour model. The first step is to obtain two-dimensional (2D) projections of the three-dimensional (3D) model-based target, which are divided into 40 forms by clustering and LDA analysis. We then extract the target contour in each image and compute its Pseudo-Zernike Moments (PZM), so that a model library is constructed in an offline mode. Next, using the PZM as a reference, we select from the model library the projection contour that most closely resembles the target silhouette in the current image; similarity transformation parameters are then generated as the shape context is applied to match the silhouette sampling locations, from which the identification parameters of the target can be derived. The identification parameters are converted to relative pose parameters, which serve as the initial estimate for an iterative refinement algorithm, since the relative pose parameters lie in the neighborhood of the actual ones. At last, Distance Image Iterative Least Squares (DI-ILS) is employed to acquire the ultimate relative pose parameters.

  15. Prevalence estimates of combat-related post-traumatic stress disorder: critical review.

    Science.gov (United States)

    Richardson, Lisa K; Frueh, B Christopher; Acierno, Ronald

    2010-01-01

    The aim of the present study was to provide a critical review of prevalence estimates of combat-related post-traumatic stress disorder (PTSD) among military personnel and veterans, and of the relevant factors that may account for the variability of estimates within and across cohorts, including methodological and conceptual factors accounting for differences in prevalence rates across nations, conflicts/wars, and studies. MEDLINE and PsycINFO databases were examined for literature on combat-related PTSD. The following terms were used independently and in combinations in this search: PTSD, combat, veterans, military, epidemiology, prevalence. The point prevalence of combat-related PTSD in US military veterans since the Vietnam War ranged from approximately 2% to 17%. Studies of recent conflicts suggest that combat-related PTSD afflicts between 4% and 17% of US Iraq War veterans, but only 3-6% of returning UK Iraq War veterans. Thus, the prevalence range is narrower and tends to have a lower ceiling among combat veterans of non-US Western nations. Variability in prevalence is likely due to differences in sampling strategies; measurement strategies; inclusion and measurement of the DSM-IV clinically significant impairment criterion; timing and latency of assessment and potential for recall bias; and combat experiences. Prevalence rates are also likely affected by issues related to PTSD course, chronicity, and comorbidity; symptom overlap with other psychiatric disorders; and sociopolitical and cultural factors that may vary over time and by nation. The disorder represents a significant and costly illness to veterans, their families, and society as a whole. Further carefully conceptualized research, however, is needed to advance our understanding of disorder prevalence, as well as associated information on course, phenomenology, protective factors, treatment, and economic costs.

  16. Species tree estimation for the late blight pathogen, Phytophthora infestans, and close relatives.

    Science.gov (United States)

    Blair, Jaime E; Coffey, Michael D; Martin, Frank N

    2012-01-01

    To better understand the evolutionary history of a group of organisms, an accurate estimate of the species phylogeny must be known. Traditionally, gene trees have served as a proxy for the species tree, although it was acknowledged early on that these trees represented different evolutionary processes. Discordances among gene trees and between the gene trees and the species tree are also expected in closely related species that have rapidly diverged, due to processes such as the incomplete sorting of ancestral polymorphisms. Recently, methods have been developed for the explicit estimation of species trees, using information from multilocus gene trees while accommodating heterogeneity among them. Here we have used three distinct approaches to estimate the species tree for five Phytophthora pathogens, including P. infestans, the causal agent of late blight disease in potato and tomato. Our concatenation-based "supergene" approach was unable to resolve relationships even with data from both the nuclear and mitochondrial genomes, and from multiple isolates per species. Our multispecies coalescent approach using both Bayesian and maximum likelihood methods was able to estimate a moderately supported species tree showing a close relationship among P. infestans, P. andina, and P. ipomoeae. The topology of the species tree was also identical to the dominant phylogenetic history estimated in our third approach, Bayesian concordance analysis. Our results support previous suggestions that P. andina is a hybrid species, with P. infestans representing one parental lineage. The other parental lineage is not known, but represents an independent evolutionary lineage more closely related to P. ipomoeae. While all five species likely originated in the New World, further study is needed to determine when and under what conditions this hybridization event may have occurred.

  17. [Estimation on the indirect economic burden of disease-related premature deaths in China, 2012].

    Science.gov (United States)

    Yang, Juan; Feng, Luzhao; Zheng, Yaming; Yu, Hongjie

    2014-11-01

To estimate the indirect economic burden of disease-related premature deaths in China in 2012, both the human capital approach and the friction cost method were applied, using the following sources: mortality from the national disease surveillance system in 2012, average annual income per capita from the China Statistical Yearbook 2012, population size from the 2010 China census, and life expectancy in China from the World Health Organization life table. The human capital approach showed that the indirect economic burden of premature deaths in China was 425.1 billion in 2012, accounting for 8‰ of GDP. Premature deaths from chronic non-communicable diseases accounted for the highest proportion (67.1%, 295.4 billion), followed by injury-related premature deaths (25.6%, 108.9 billion) and deaths related to infectious diseases, maternal and infant diseases, and malnutrition (6.4%, 26.9 billion). The top five causes of premature death contributing to the indirect economic burden were malignancy, cardiovascular diseases, unintentional injuries, intentional injuries, and diseases of the respiratory system. The indirect economic burden of premature deaths mainly occurred in the population of 20-59 year-olds. Under the friction cost method, the estimates were 0.11%-3.49% of the corresponding human capital approach estimates. Premature death caused a heavy indirect economic burden in China, with chronic non-communicable diseases and injuries incurring the major share of the disease burden. The indirect economic burden of premature deaths mainly occurred in the working age group.
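The human capital approach values a premature death as the discounted stream of earnings lost up to retirement. A simplified sketch with hypothetical parameters (the study's actual inputs, age bands, and discount rate are not reproduced here):

```python
def human_capital_loss(age_at_death, annual_income, work_start=20, work_end=60,
                       discount=0.03):
    """Present value of future earnings lost due to death at age_at_death:
    annual income over the remaining working years (work_start to work_end),
    discounted back to the year of death. All parameters are illustrative."""
    years = range(max(age_at_death, work_start), work_end)
    return sum(annual_income / (1 + discount) ** (y - age_at_death) for y in years)

# e.g. death at 55 with a 10,000/year income forgoes ~47,171 (in income units),
# while death at 30 forgoes far more, since more working years are lost
loss_55 = human_capital_loss(55, 10_000)
loss_30 = human_capital_loss(30, 10_000)
```

This also makes clear why the burden concentrates in the 20-59 age group: deaths at older ages truncate few or no remaining working years.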

18. The estimation of differential counting measurements of positive quantities with relatively large statistical errors

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1982-01-01

Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low-radioactivity samples for background. (orig.)
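The idea can be illustrated with a simple grid-based posterior for a nonnegative source rate under a flat prior: conventional background subtraction can go negative, while the Bayesian posterior mean cannot. This is a sketch of the principle, not the paper's specific estimators; the counts below are hypothetical.

```python
import numpy as np

def bayes_source_estimate(n, b, t, n_grid=20000):
    """Posterior-mean estimate of a nonnegative source rate lam, given n counts
    observed in time t with a known background rate b > 0 (counts/unit time).
    Flat prior on lam >= 0; likelihood is Poisson with mean (lam + b) * t."""
    lam_max = (n + 10.0 * np.sqrt(n + 1.0)) / t + 5.0 * b
    lam = np.linspace(0.0, lam_max, n_grid)
    mu = (lam + b) * t
    log_like = n * np.log(mu) - mu          # Poisson log-likelihood (up to a constant)
    w = np.exp(log_like - log_like.max())   # unnormalized posterior on the grid
    return float(np.sum(lam * w) / np.sum(w))

# 3 gross counts against an expected background of 5: subtraction goes negative,
# the posterior mean stays positive.
naive = 3 / 1.0 - 5.0                       # -2.0
est = bayes_source_estimate(3, 5.0, 1.0)    # small but strictly positive
```

For strong sources the posterior mean reverts to the conventional answer, so the correction matters only in the low-count regime the paper targets.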

  19. Species tree estimation for the late blight pathogen, Phytophthora infestans, and close relatives.

    Directory of Open Access Journals (Sweden)

    Jaime E Blair

    Full Text Available To better understand the evolutionary history of a group of organisms, an accurate estimate of the species phylogeny must be known. Traditionally, gene trees have served as a proxy for the species tree, although it was acknowledged early on that these trees represented different evolutionary processes. Discordances among gene trees and between the gene trees and the species tree are also expected in closely related species that have rapidly diverged, due to processes such as the incomplete sorting of ancestral polymorphisms. Recently, methods have been developed for the explicit estimation of species trees, using information from multilocus gene trees while accommodating heterogeneity among them. Here we have used three distinct approaches to estimate the species tree for five Phytophthora pathogens, including P. infestans, the causal agent of late blight disease in potato and tomato. Our concatenation-based "supergene" approach was unable to resolve relationships even with data from both the nuclear and mitochondrial genomes, and from multiple isolates per species. Our multispecies coalescent approach using both Bayesian and maximum likelihood methods was able to estimate a moderately supported species tree showing a close relationship among P. infestans, P. andina, and P. ipomoeae. The topology of the species tree was also identical to the dominant phylogenetic history estimated in our third approach, Bayesian concordance analysis. Our results support previous suggestions that P. andina is a hybrid species, with P. infestans representing one parental lineage. The other parental lineage is not known, but represents an independent evolutionary lineage more closely related to P. ipomoeae. While all five species likely originated in the New World, further study is needed to determine when and under what conditions this hybridization event may have occurred.

  20. First archaeointensity results from the historical period of Cambodia, Southeast Asia

    Science.gov (United States)

    Higa, J. T.; Cai, S.; Tauxe, L.; Hendrickson, M.

    2017-12-01

Understanding variations of the geomagnetic field has applications for the behavior of the Earth's outer core, the dating of archeological artifacts, and the phenomenon that shields life from solar radiation. However, archaeointensity studies of the Holocene have been mostly limited to localities in Europe and the Middle East; archaeomagnetic surveys from Southeast Asia are almost non-existent. This investigation aims to establish a secular variation curve of geomagnetic field intensity for Cambodia. We sampled ancient iron smelting mounds from the Khmer Empire, located in present day Cambodia, and are analyzing them for paleointensity. The specimens are thought to be from the historical period, likely between 1000-1500 CE. Our samples, which include furnace fragments, iron slag, and ceramic tuyères, contain magnetic minerals that recorded the paleointensity of Earth's magnetic field at the time they were fired. Using the IZZI paleointensity method (Yu et al., 2004), which gradually replaces the sample's natural remanent magnetization with a thermal remanent magnetization acquired in a known lab field, we can determine the geomagnetic intensities preserved in these specimens. Based on our preliminary experiments, the tuyères, and perhaps also the fresh slag, will in all likelihood yield the best results. Following additional measurements from these samples, we will determine the paleointensities of Cambodia for the time period from which the artifacts originated. This will commence the establishment of regional geomagnetic reference curves in Southeast Asia and also improve the global model.
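In the ideal case, Thellier-type methods such as IZZI reduce to the slope of the Arai plot: NRM remaining against pTRM gained, scaled by the known lab field. A sketch with noise-free synthetic data (real data require the selection statistics and corrections discussed elsewhere in these records):

```python
import numpy as np

def paleointensity_from_arai(nrm_remaining, ptrm_gained, b_lab):
    """Ancient field estimate from an ideal Arai plot: the best-fit slope of
    NRM remaining vs pTRM gained equals -B_ancient / B_lab."""
    slope = np.polyfit(ptrm_gained, nrm_remaining, 1)[0]
    return abs(slope) * b_lab

# Ideal synthetic data: linear TRM acquisition, ancient field 55 uT, lab field 40 uT.
f = np.linspace(0.0, 1.0, 11)     # fraction of blocking temperatures unblocked
nrm = (1.0 - f) * 55.0            # NRM remaining (arbitrary units, proportional to B)
ptrm = f * 40.0                   # pTRM gained in the lab field
b_est = paleointensity_from_arai(nrm, ptrm, 40.0)
```

Deviations from a single straight line on a real Arai plot are what drive the alteration checks and specimen rejection criteria mentioned in the surrounding studies.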

  1. Cancer Related-Knowledge - Small Area Estimates

    Science.gov (United States)

    These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey, and auxiliary variables obtained from relevant sources and borrow strength from other areas with similar characteristics.

  2. Risk Estimates and Risk Factors Related to Psychiatric Inpatient Suicide

    DEFF Research Database (Denmark)

    Madsen, Trine; Erlangsen, Annette; Nordentoft, Merete

    2017-01-01

People with mental illness have an increased risk of suicide. The aim of this paper is to provide an overview of suicide risk estimates among psychiatric inpatients based on the body of evidence found in scientific peer-reviewed literature; primarily focusing on the relative risks, rates, time trends, and socio-demographic and clinical risk factors of suicide in psychiatric inpatients. Psychiatric inpatients have a very high risk of suicide relative to the background population, but it remains challenging for clinicians to identify those patients that are most likely to die from suicide during admission. Most studies are based on low power, thus compromising quality and generalisability. The few studies with sufficient statistical power mainly identified non-modifiable risk predictors such as male gender, diagnosis, or recent deliberate self-harm. Also, the predictive value of these predictors...

  3. Risk Estimates and Risk Factors Related to Psychiatric Inpatient Suicide

    DEFF Research Database (Denmark)

    Madsen, Trine; Erlangsen, Annette; Nordentoft, Merete

    2017-01-01

People with mental illness have an increased risk of suicide. The aim of this paper is to provide an overview of suicide risk estimates among psychiatric inpatients based on the body of evidence found in scientific peer-reviewed literature; primarily focusing on the relative risks, rates, time trends, and socio-demographic and clinical risk factors of suicide in psychiatric inpatients. Psychiatric inpatients have a very high risk of suicide relative to the background population, but it remains challenging for clinicians to identify those patients that are most likely to die from suicide during...... is low. It would be of great benefit if future studies would be based on large samples while focusing on modifiable predictors over the course of an admission, such as hopelessness, depressive symptoms, and family/social situations. This would improve our chances of developing better risk assessment...

  4. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    Directory of Open Access Journals (Sweden)

    Lujiang Liu

    2016-06-01

Full Text Available Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model, using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching, processing the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides the truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating directly on point clouds and handling large pose variations. A field testing experiment was also conducted, and its results show that the proposed method is effective.
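The pose tracking step above rests on the Iterative Closest Point algorithm. A minimal point-to-point ICP with a Kabsch/SVD rigid-transform solver might look like the following sketch (not the paper's implementation, which operates on flash LIDAR data):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ≈ dst_i
    for paired points (the Kabsch/SVD solution)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    return R, cd - R @ cs

def icp(src, dst, n_iter=30):
    """Basic point-to-point ICP: alternate closest-point matching (brute force)
    and rigid alignment, accumulating the total transform."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iter):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]      # closest dst point for each cur point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Because ICP only converges locally, it needs a reasonable starting pose, which is exactly why the paper pairs it with a global initial acquisition step.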

  5. Impact of work-related cancers in Taiwan-Estimation with QALY (quality-adjusted life year) and healthcare costs.

    Science.gov (United States)

    Lee, Lukas Jyuhn-Hsiarn; Lin, Cheng-Kuan; Hung, Mei-Chuan; Wang, Jung-Der

    2016-12-01

    This study estimates the annual numbers of eight work-related cancers, total losses of quality-adjusted life years (QALYs), and lifetime healthcare expenditures that possibly could be saved by improving occupational health in Taiwan. Three databases were interlinked: the Taiwan Cancer Registry, the National Mortality Registry, and the National Health Insurance Research Database. Annual numbers of work-related cancers were estimated based on attributable fractions (AFs) abstracted from a literature review. The survival functions for eight cancers were estimated and extrapolated to lifetime using a semi-parametric method. A convenience sample of 8846 measurements of patients' quality of life with EQ-5D was collected for utility values and multiplied by survival functions to estimate quality-adjusted life expectancies (QALEs). The loss-of-QALE was obtained by subtracting the QALE of cancer from age- and sex-matched referents simulated from national vital statistics. The lifetime healthcare expenditures were estimated by multiplying the survival probability with mean monthly costs paid by the National Health Insurance for cancer diagnosis and treatment and summing this for the expected lifetime. A total of 3010 males and 726 females with eight work-related cancers were estimated in 2010. Among them, lung cancer ranked first in terms of QALY loss, with an annual total loss-of-QALE of 28,463 QALYs and total lifetime healthcare expenditures of US$36.6 million. Successful prevention of eight work-related cancers would not only avoid the occurrence of 3736 cases of cancer, but would also save more than US$70 million in healthcare costs and 46,750 QALYs for the Taiwan society in 2010.
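The QALE computation described above multiplies survival probabilities by utility weights over the expected lifetime, and the loss-of-QALE compares the cohort to matched referents. A toy sketch with entirely hypothetical survival curves, utilities, and costs (not the study's data):

```python
import numpy as np

def qale_years(surv, utility):
    """Quality-adjusted life expectancy (years) from a monthly survival curve
    and a matching per-month utility weight."""
    return float(np.sum(surv * utility)) / 12.0

months = np.arange(480)                  # 40-year horizon, monthly steps
surv_ref = np.exp(-months / 300.0)       # hypothetical referent survival
surv_ca = np.exp(-months / 120.0)        # hypothetical cancer-cohort survival (steeper)
u_ref = np.full(480, 0.92)               # assumed mean EQ-5D utility, referents
u_ca = np.full(480, 0.78)                # assumed mean EQ-5D utility, patients

loss_of_qale = qale_years(surv_ref, u_ref) - qale_years(surv_ca, u_ca)
lifetime_cost = float(np.sum(surv_ca * 500.0))  # assumed mean monthly cost of 500
```

Weighting costs by the survival curve is what turns monthly reimbursement data into a lifetime healthcare expenditure, mirroring the study's approach.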

  6. Fast Kalman Filtering for Relative Spacecraft Position and Attitude Estimation for the Raven ISS Hosted Payload

    Science.gov (United States)

    Galante, Joseph M.; Van Eepoel, John; D'Souza, Chris; Patrick, Bryan

    2016-01-01

    The Raven ISS Hosted Payload will feature several pose measurement sensors on a pan/tilt gimbal, which will be used to autonomously track resupply vehicles as they approach and depart the International Space Station. This paper discusses the derivation of a Relative Navigation Filter (RNF) to fuse measurements from the different pose measurement sensors to produce relative position and attitude estimates. The RNF relies on relative translation and orientation kinematics and careful pose sensor modeling to eliminate dependence on orbital position information and associated orbital dynamics models. The filter state is augmented with sensor biases to provide a mechanism for the filter to estimate and mitigate the offset between the measurements from different pose sensors.
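The bias-augmentation idea can be shown with a one-dimensional toy Kalman filter: carry a sensor-bias term in the state so the filter learns and subtracts the offset between two measurement sources. This is a generic linear-KF sketch under our own toy model, not Raven's actual filter or state definition.

```python
# Toy linear Kalman filter with a bias-augmented state: [position, velocity, bias].
# Sensor A measures position directly; sensor B measures position plus a bias.
import numpy as np

dt = 0.1
F = np.array([[1, dt, 0],   # position propagates with velocity
              [0, 1,  0],   # constant-velocity model
              [0, 0,  1]])  # bias modeled as a random constant
H = np.array([[1, 0, 0],    # sensor A: unbiased position
              [1, 0, 1]])   # sensor B: position + bias
Q = np.diag([1e-4, 1e-4, 1e-8])   # assumed process noise
R = np.diag([0.01, 0.01])         # assumed measurement noise

def kf_step(x, P, z):
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # update with the stacked two-sensor measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P
```

Because the two sensors disagree by exactly the bias, the residual between them makes the bias state observable, which is the mechanism the abstract alludes to.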

  7. Human comfort and self-estimated performance in relation to indoor environmental parameters and building features

    DEFF Research Database (Denmark)

    Frontczak, Monika Joanna

    The main objective of the Ph.D. study was to examine occupants’ perception of comfort and self-estimated job performance in non-industrial buildings (homes and offices), in particular how building occupants understand comfort and which parameters, not necessarily related to indoor environments...... and storage, noise level and visual privacy. However, if job performance is considered, then satisfaction with the main indoor environmental parameters should be addressed first, as they affected self-estimated job performance to the greatest extent. The present study showed that overall satisfaction...... with personal workspace significantly affected self-estimated job performance. Increasing overall satisfaction with the personal workspace by about 15% would correspond to an increase in self-estimated job performance of 3.7%. Among indoor environmental parameters and building features, satisfaction...

  8. Effect of region assignment on relative renal blood flow estimates using radionuclides

    International Nuclear Information System (INIS)

    Harris, C.C.; Ford, K.K.; Coleman, R.E.; Dunnick, N.R.

    1984-01-01

    To determine the value of the initial phase of the Tc-99m DTPA renogram in the direct estimation of relative renal blood flow in dogs, the ratios of the slopes of renal time-activity curves were compared with the ratios of measured blood flow. Radionuclide results depended on region-of-interest (ROI) and background ROI assignment, and correlated well with measured relative flow only with a maximum renal outline region. Curve slope ratios correlated well with measured flow ratios both with and without background correction, while 1- to 2-minute uptake ratios correlated well only when corrected for background.

  9. Estimated Prestroke Peak VO2 Is Related to Circulating IGF-1 Levels During Acute Stroke.

    Science.gov (United States)

    Mattlage, Anna E; Rippee, Michael A; Abraham, Michael G; Sandt, Janice; Billinger, Sandra A

    2017-01-01

    Background: Insulin-like growth factor-1 (IGF-1) is neuroprotective after stroke and is regulated by insulin-like growth factor binding protein-3 (IGFBP-3). In healthy individuals, exercise and improved aerobic fitness (peak oxygen uptake; peak VO2) increase IGF-1 in circulation. Understanding the relationship between estimated prestroke aerobic fitness and IGF-1 and IGFBP-3 after stroke may provide insight into the benefits of exercise and aerobic fitness on stroke recovery. Objective: The purpose of this study was to determine the relationship of IGF-1 and IGFBP-3 to estimated prestroke peak VO2 in individuals with acute stroke. We hypothesized that (1) estimated prestroke peak VO2 would be related to IGF-1 and IGFBP-3 and (2) individuals with higher than median IGF-1 levels would have higher estimated prestroke peak VO2 compared with those with lower than median levels. Methods: Fifteen individuals with acute stroke had blood sampled within 72 hours of hospital admission. Prestroke peak VO2 was estimated using a nonexercise prediction equation. IGF-1 and IGFBP-3 levels were quantified using enzyme-linked immunoassay. Results: Estimated prestroke peak VO2 was significantly related to circulating IGF-1 levels (r = .60; P = .02) but not IGFBP-3. Individuals with higher than median IGF-1 (117.9 ng/mL) had significantly better estimated aerobic fitness (32.4 ± 6.9 mL·kg⁻¹·min⁻¹) than those with lower than median IGF-1 (20.7 ± 7.8 mL·kg⁻¹·min⁻¹; P = .03). Conclusions: Improving aerobic fitness prior to stroke may be beneficial by increasing baseline IGF-1 levels. These results set the groundwork for future clinical trials to determine whether high IGF-1 and aerobic fitness are beneficial to stroke recovery by providing neuroprotection and improving function. © The Author(s) 2016.

  10. Estimation of the intrinsic stresses in α-alumina in relation with its elaboration mode

    International Nuclear Information System (INIS)

    Boumaza, A.; Djelloul, A.

    2010-01-01

    The specific signatures of α-Al2O3 in Fourier transform infrared (FTIR) spectroscopy were investigated to estimate the intrinsic stress in this compound according to its elaboration mode. α-Alumina was prepared either by calcination of boehmite or gibbsite, or generated by oxidation of a metallic FeCrAl alloy. The FTIR results were supported by X-ray diffraction (XRD) patterns, which allowed the crystallite size and the strain in the various alpha aluminas to be determined. The infrared peak at 378.7 cm⁻¹ was used as a reference for stress-free α-alumina, and the shift of this peak allowed intrinsic stresses to be estimated; these were related to the morphology and to the specific surface area of the aluminas according to their elaboration mode. These interpretations were confirmed by cathodoluminescence experiments.

  11. A method for estimating the relative degree of saponification of xanthophyll sources and feedstuffs.

    Science.gov (United States)

    Fletcher, D L

    2006-05-01

    Saponification of xanthophyll esters in various feed sources has been shown to improve pigmentation efficiency in broiler skin and egg yolks. Three trials were conducted to evaluate a rapid liquid chromatography procedure for estimating the relative degree of xanthophyll saponification using samples of yellow corn, corn gluten meal, alfalfa, and 6 commercially available marigold meal concentrates. In each trial, samples were extracted using a modification of the 1984 Association of Official Analytical Chemists hot saponification procedure with and without the addition of KOH. A comparison of the chromatography results was used to estimate percent saponification of the original sample by dividing the nonsaponified extraction values by the saponified extraction values. A comparison of the percent saponified xanthophylls for each product (mg/kg) was: yellow corn, 101; corn gluten meal, 78; alfalfa, 97.9; and marigold concentrates A through F, 99.8, 4.6, 99.0, 95.6, 96.8, and 6.6, respectively. These results indicate that a modification of the 1984 Association of Official Analytical Chemists procedure and liquid column chromatography can be used to quickly verify saponification and can be used to estimate the relative degree of saponification of an unknown xanthophyll source.
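The estimator in this abstract is a single ratio: the xanthophyll value extracted without added KOH divided by the value extracted with KOH. A one-function sketch makes the arithmetic explicit; the example values are illustrative, not the paper's assay results.

```python
# Percent saponification as described in the abstract: nonsaponified extraction
# value divided by saponified extraction value, expressed as a percentage.
def percent_saponified(nonsaponified_value, saponified_value):
    return 100.0 * nonsaponified_value / saponified_value

# e.g. a hypothetical marigold concentrate assaying 190 vs 200 mg/kg
estimate = percent_saponified(190.0, 200.0)   # → 95.0
```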

  12. Relation of whole blood carboxyhemoglobin concentration to ambient carbon monoxide exposure estimated using regression.

    Science.gov (United States)

    Rudra, Carole B; Williams, Michelle A; Sheppard, Lianne; Koenig, Jane Q; Schiff, Melissa A; Frederick, Ihunnaya O; Dills, Russell

    2010-04-15

    Exposure to carbon monoxide (CO) and other ambient air pollutants is associated with adverse pregnancy outcomes. While there are several methods of estimating CO exposure, few have been evaluated against exposure biomarkers. The authors examined the relation between estimated CO exposure and blood carboxyhemoglobin concentration in 708 pregnant western Washington State women (1996-2004). Carboxyhemoglobin was measured in whole blood drawn around 13 weeks' gestation. CO exposure during the month of blood draw was estimated using a regression model containing predictor terms for year, month, street and population densities, and distance to the nearest major road. Year and month were the strongest predictors. Carboxyhemoglobin level was correlated with estimated CO exposure (rho = 0.22, 95% confidence interval (CI): 0.15, 0.29). After adjustment for covariates, each 10% increase in estimated exposure was associated with a 1.12% increase in median carboxyhemoglobin level (95% CI: 0.54, 1.69). This association remained after exclusion of 286 women who reported smoking or being exposed to secondhand smoke (rho = 0.24). In this subgroup, the median carboxyhemoglobin concentration increased 1.29% (95% CI: 0.67, 1.91) for each 10% increase in CO exposure. Monthly estimated CO exposure was moderately correlated with an exposure biomarker. These results support the validity of this regression model for estimating ambient CO exposures in this population and geographic setting.

  13. Estimation of unknown nuclear masses by means of the generalized mass relations. Pt. 3

    International Nuclear Information System (INIS)

    Popa, S.M.

    1980-01-01

    A survey of estimations of unknown nuclear masses by means of the generalized mass relations is presented. The new hypotheses supplementing the original Garvey-Kelson scheme are discussed, and the generalized mass relations and formulae are reviewed according to the present status of this formalism. A critical discussion of the reliability of these new Garvey-Kelson-type extrapolation procedures is given. (author)
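For context, the prototype of the scheme the abstract refers to is the transverse Garvey-Kelson relation, which links six neighbouring nuclides so that single-particle and pairwise interaction terms cancel in an independent-particle picture. A commonly quoted form (stated here from general knowledge of the Garvey-Kelson scheme, not taken from this paper) is:

```latex
M(N{+}2,Z) - M(N,Z{+}2) + M(N,Z{+}1) - M(N{+}1,Z)
  + M(N{+}1,Z{+}2) - M(N{+}2,Z{+}1) \approx 0
```

Solving such relations for one unknown mass in terms of five measured neighbours is the extrapolation step whose reliability the paper examines.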

  14. Adequacy of relative and absolute risk models for lifetime risk estimate of radiation-induced cancer

    International Nuclear Information System (INIS)

    McBride, M.; Coldman, A.J.

    1988-03-01

    This report examines the applicability of the relative (multiplicative) and absolute (additive) models in predicting lifetime risk of radiation-induced cancer. A review of the epidemiologic literature and a discussion of mathematical models of carcinogenesis and their relationship to these models of lifetime risk are included. Based on the available data, the relative risk model is preferred for estimating the lifetime risk of non-sex-specific epithelial tumours. However, because of limited knowledge of other determinants of radiation risk and of background incidence rates, considerable uncertainty in modelling lifetime risk remains. It is therefore essential that follow-up of exposed cohorts continue so that population-based estimates of lifetime risk become available.
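Schematically, the two projection models combine the radiation-related excess with the baseline age-specific cancer rate λ₀(a) in different ways; in a simplified form that ignores latency and effect modifiers, with ERR and EAR the excess relative and absolute risks at dose d:

```latex
\lambda_{\text{rel}}(a) = \lambda_0(a)\,\bigl[1 + \mathrm{ERR}(d)\bigr],
\qquad
\lambda_{\text{abs}}(a) = \lambda_0(a) + \mathrm{EAR}(d),
\qquad
R_{\text{lifetime}} \approx \int_0^{\infty} \lambda(a)\, S(a)\, \mathrm{d}a
```

The multiplicative model therefore scales with the (uncertain) background rates, which is why the report's preference for it hinges on knowledge of those rates.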

  15. The age of the Matuyama-Brunhes transition: New data from the Sulmona paleolake, Italy

    Science.gov (United States)

    Renne, P. R.; Nomade, S.; Sprain, C. J.; Sagnotti, L.; Scardia, G.; Giaccio, B.

    2014-12-01

    The age of the Matuyama-Brunhes geomagnetic polarity transition (MBT) is a key datum for the Pleistocene time scale. Modern estimates of this age vary over a range approaching the 21 ka period of an orbital precession cycle. Singer (2014) placed the transition at 776 ±2 ka, in agreement with the estimate of 773.1 ±0.8 ka of Channell et al. (2010) based on an orbitally-tuned marine ice volume age model for North Atlantic sediment cores. The 40Ar/39Ar data of Singer (2014) are from basaltic lavas with transitional paleomagnetic directions and/or anomalously low paleointensities, representing episodic sampling of the geomagnetic field with limited stratigraphic context. New data from the Sulmona paleolake in central Italy (Sagnotti et al., 2014) reveal directional and relative paleointensity records with unsurpassed stratigraphic resolution of the MBT. Here, biogenic carbonates contain numerous sanidine-bearing tuffs derived from nearby volcanoes of the alkalic Roman volcanic province. These tuffs, dated in the LSCE and BGC labs, punctuate the stratigraphic record of the MBT, documenting a smoothly varying sediment accumulation rate of 20-25 cm/ka during the interval between 720-810 ka. A minimum age of 781.3 ±2.3 ka (nominal calibrations of Nomade et al., 2005 and Steiger and Jäger, 1977) for the MBT is provided by a tuff 1 m above the MBT. Linear interpolation between bracketing tuffs yields an age of 786.1 ±1.5 ka for the MBT and indicates a transition duration (cf. Leonhardt and Fabian, 2007) of several ka. Regardless of which calibrating parameters are used, our 40Ar/39Ar age for the MBT at Sulmona is 15 ±3 ka older than that of Singer (2014). These results underscore the need for high-precision dating applied to geomagnetic polarity transitions that are based on high-resolution magnetostratigraphy, in order to calibrate the GPTS and validate age models based on orbital tuning.
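The age-model arithmetic behind "linear interpolation between bracketing tuffs" is simple enough to state exactly. The sketch below uses placeholder depths and ages chosen only to be consistent with the 20-25 cm/ka accumulation rate quoted above; they are not the published Sulmona values.

```python
# Linear age interpolation between two dated tuffs, plus the implied
# sediment accumulation rate. Depths in metres, ages in ka; inputs are
# illustrative placeholders, not the published data.
def interpolate_age(depth, d1, age1, d2, age2):
    """Linearly interpolate the age (ka) of a horizon between two dated tuffs."""
    frac = (depth - d1) / (d2 - d1)
    return age1 + frac * (age2 - age1)

def accumulation_rate_cm_per_ka(d1, age1, d2, age2):
    """Accumulation rate implied by two dated horizons (cm/ka)."""
    return abs(d2 - d1) * 100.0 / abs(age2 - age1)

# hypothetical bracketing tuffs: 2 m apart in depth, 9 ka apart in age
age_mbt = interpolate_age(depth=11.0, d1=10.0, age1=782.0, d2=12.0, age2=791.0)
```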

  16. Estimation of building-related construction and demolition waste in Shanghai.

    Science.gov (United States)

    Ding, Tao; Xiao, Jianzhuang

    2014-11-01

    One methodology is proposed to estimate the quantity and composition of building-related construction and demolition (C&D) waste in a fast-developing region like Shanghai, PR China. The variety of structure types and building waste intensities arising from progressively updated building design and structural codes in different decades is considered in this regional C&D waste estimation study. It is concluded that approximately 13.71 million tons of C&D waste were generated in 2012 in Shanghai, of which more than 80% was concrete, bricks and blocks. The analysis from this study can help waste-management authorities and researchers formulate precise policies and specifications. At least half of this enormous amount of C&D waste could be recycled if proper recycling technologies and measures were implemented. Appropriate management would be economically and environmentally beneficial to Shanghai, where the per capita annual output of C&D waste was as high as 842 kg in 2010. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. ESTIMATION OF INTRINSIC AND EXTRINSIC ENVIRONMENT FACTORS OF AGE-RELATED TOOTH COLOUR CHANGES

    Czech Academy of Sciences Publication Activity Database

    Hyšpler, P.; Jezbera, D.; Fürst, T.; Mikšík, Ivan; Waclawek, M.

    2010-01-01

    Vol. 17, No. 4 (2010), pp. 515-525 ISSN 1898-6196 Institutional research plan: CEZ:AV0Z50110509 Keywords: age-related colour changes of teeth * intrinsic and extrinsic factors * 3D mathematical regression models * estimation of real age Subject RIV: ED - Physiology Impact factor: 0.294, year: 2010

  18. State-Level Estimates of Cancer-Related Absenteeism Costs

    Science.gov (United States)

    Tangka, Florence K.; Trogdon, Justin G.; Nwaise, Isaac; Ekwueme, Donatus U.; Guy, Gery P.; Orenstein, Diane

    2016-01-01

    Background Cancer is one of the top five most costly diseases in the United States and leads to substantial work loss. Nevertheless, limited state-level estimates of cancer absenteeism costs have been published. Methods In analyses of data from the 2004–2008 Medical Expenditure Panel Survey, the 2004 National Nursing Home Survey, the U.S. Census Bureau for 2008, and the 2009 Current Population Survey, we used regression modeling to estimate annual state-level absenteeism costs attributable to cancer from 2004 to 2008. Results We estimated that the state-level median number of days of absenteeism per year among employed cancer patients was 6.1 days and that annual state-level cancer absenteeism costs ranged from $14.9 million to $915.9 million (median = $115.9 million) across states in 2010 dollars. Absenteeism costs are approximately 6.5% of the costs of premature cancer mortality. Conclusions The results from this study suggest that lost productivity attributable to cancer is a substantial cost to employees and employers and contributes to estimates of the overall impact of cancer in a state population. PMID:23969498

  19. Robust estimation of event-related potentials via particle filter.

    Science.gov (United States)

    Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito

    2016-03-01

    In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
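A bootstrap particle filter of the kind described can be sketched compactly: propagate particles under a random-walk ("trend") model, weight them by the observation likelihood, and resample. This toy version uses our own parameter values and omits the paper's autoregressive background component for brevity.

```python
# Toy bootstrap particle filter: estimate a slowly varying trend (the "ERP")
# buried in observation noise. Parameters q (process std) and r (noise std)
# are illustrative assumptions, not the paper's values.
import numpy as np

def particle_filter(obs, n_particles=400, q=0.05, r=0.5, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in obs:
        # propagate with random-walk (trend) dynamics
        particles = particles + rng.normal(0.0, q, n_particles)
        # weight by a Gaussian observation likelihood
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)
```

With 400 particles, as in the study's best configuration, the filtered estimate should track the trend far more closely than the raw noisy observations do.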

  20. Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2018-07-01

    Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m³ decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.
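The paper's central mechanism is easy to reproduce in simulation: give the true concentration-response function a sharp threshold, add error to the exposure estimates, and the response curve plotted against *estimated* exposure comes out smooth, with apparent risk well below the threshold. Every number below (threshold, risk level, error size) is a made-up illustration, not the paper's data.

```python
# Simulation of exposure-error smoothing: a step-function C-R looks smooth
# when binned against error-contaminated exposure estimates. All values are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
true_c = rng.uniform(0, 30, n)                 # true exposure, µg/m³
risk = (true_c > 15).astype(float) * 0.1       # step C-R: zero risk below threshold
response = rng.random(n) < risk                # Bernoulli health outcomes
est_c = true_c + rng.normal(0, 5, n)           # estimated exposure with error

# binned mean response versus *estimated* exposure
bins = np.linspace(0, 30, 16)
idx = np.digitize(est_c, bins)
curve = [response[idx == i].mean() for i in range(1, len(bins))]
```

Against true exposure the response below 15 µg/m³ is exactly zero, yet `curve` rises smoothly through the sub-threshold bins, which is the artifact the abstract describes.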

  1. Bone strength estimates relative to vertical ground reaction force discriminates women runners with stress fracture history.

    Science.gov (United States)

    Popp, Kristin L; McDermott, William; Hughes, Julie M; Baxter, Stephanie A; Stovitz, Steven D; Petit, Moira A

    2017-01-01

    To determine differences in bone geometry, estimates of bone strength, muscle size, and bone strength relative to load in women runners with and without a history of stress fracture. We recruited 32 competitive distance runners aged 18-35, with (SFX, n=16) or without (NSFX, n=16) a history of stress fracture for this case-control study. Peripheral quantitative computed tomography (pQCT) was used to assess volumetric bone mineral density (vBMD, mg/mm³), total (ToA) and cortical (CtA) bone areas (mm²), and estimated compressive bone strength (bone strength index; BSI, mg/mm⁴) at the distal tibia. ToA, CtA, cortical vBMD, and estimated strength (section modulus; Zp, mm³ and strength-strain index; SSIp, mm³) were measured at six cortical sites along the tibia. Mean active peak vertical (pkZ) ground reaction forces (GRFs), assessed from a fatigue run on an instrumented treadmill, were used in conjunction with pQCT measurements to estimate bone strength relative to load (mm²/N·kg⁻¹) at all cortical sites. SSIp and Zp were 9-11% lower in the SFX group at the mid-shaft of the tibia, while ToA and vBMD did not differ between groups at any measurement site. The SFX group also had 11-17% lower bone strength relative to mean pkZ GRFs. Bone strength relative to load is also lower in this same region, suggesting that strength deficits in the middle 1/3 of the tibia and altered gait biomechanics may predispose an individual to stress fracture. Copyright © 2016. Published by Elsevier Inc.

  2. Calibrated Tully-fisher Relations For Improved Photometric Estimates Of Disk Rotation Velocities

    Science.gov (United States)

    Reyes, Reinabelle; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.

    2011-01-01

    We present calibrated scaling relations (also referred to as Tully-Fisher relations or TFRs) between rotation velocity and photometric quantities (absolute magnitude, stellar mass, and synthetic magnitude, a linear combination of absolute magnitude and color) of disk galaxies at z ≈ 0.1. First, we selected a parent disk sample of 170,000 galaxies from SDSS DR7, with redshifts between 0.02 and 0.10 and r-band absolute magnitudes between -18.0 and -22.5. Then, we constructed a child disk sample of 189 galaxies that span the parameter space in absolute magnitude, color, and disk size covered by the parent sample, and for which we have obtained kinematic data. Long-slit spectroscopy was obtained from the Dual Imaging Spectrograph (DIS) at the Apache Point Observatory 3.5 m telescope for 99 galaxies, and from Pizagno et al. (2007) for 95 galaxies (five have repeat observations). We find the best photometric estimator of disk rotation velocity to be a synthetic magnitude with a color correction that is consistent with the Bell et al. (2003) color-based stellar mass ratio. The improved rotation velocity estimates have a wide range of scientific applications; in particular, in combination with weak lensing measurements, they enable us to constrain the ratio of optical-to-virial velocity in disk galaxies.
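Calibrating a TFR-style photometric velocity estimator amounts to fitting log-velocity against a synthetic magnitude, i.e. absolute magnitude plus a color term, then inverting the fit to predict velocities for galaxies without kinematics. The sketch below uses entirely synthetic data and an assumed color coefficient; none of the numbers are the paper's calibration.

```python
# Hedged sketch of a Tully-Fisher-style calibration: fit log(V) against a
# synthetic magnitude M_syn = M + a_color * color. Data and coefficients are
# fabricated for illustration.
import numpy as np

rng = np.random.default_rng(3)
n = 189
M = rng.uniform(-22.5, -18.0, n)          # absolute magnitude
color = rng.normal(0.7, 0.15, n)          # e.g. a g-r-like color
a_color = 1.3                             # assumed color-correction coefficient
M_syn = M + a_color * color

# mock "observed" TFR with intrinsic scatter
logV = 2.2 - 0.12 * (M_syn + 21.0) + rng.normal(0, 0.03, n)

slope, intercept = np.polyfit(M_syn + 21.0, logV, 1)

def predict_logV(M_abs, c):
    """Photometric rotation-velocity estimate from magnitude and color."""
    return intercept + slope * (M_abs + a_color * c + 21.0)
```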

  3. Clonal diversity and estimation of relative clone age: application to agrobiodiversity of yam (Dioscorea rotundata).

    Science.gov (United States)

    Scarcelli, Nora; Couderc, Marie; Baco, Mohamed N; Egah, Janvier; Vigouroux, Yves

    2013-11-13

    Clonal propagation is a particular reproductive system found in both the plant and animal kingdoms, from human parasites to clonally propagated crops. Clonal diversity provides information about plant and animal evolutionary history, i.e. how clones spread or the age of a particular clone. In plants, this could provide valuable information about agrobiodiversity dynamics and, more broadly, about the evolutionary history of a particular crop. We studied the evolutionary history of yam, Dioscorea rotundata. In Africa, yam is cultivated by clonal propagation of tubers. We used 12 microsatellite markers to identify intra-clonal diversity in yam varieties. We then used this diversity to assess the relative ages of clones. Using simulations, we assessed how Approximate Bayesian Computation could use clonal diversity to estimate the age of a clone depending on the size of the sample, the number of independent samples, and the number of markers. We then applied this approach to our dataset and showed that the relative ages of varieties could be estimated and that each variety could be ranked by age. We give a first estimate of clone age in an approximate Bayesian framework; however, the precision of this estimate depends on the precision of the mutation rate. We provide useful information on agrobiodiversity dynamics and suggest recurrent creation of varietal diversity in a clonally propagated crop.
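The ABC logic can be illustrated with a deliberately simplified toy: assume mutations accumulate at a fixed per-locus, per-generation rate across 12 microsatellite-like loci, summarize intra-clonal diversity as the fraction of sampled plants carrying any mutation, then keep prior draws of the clone age that reproduce the observed summary. The mutation model and every number below are invented for illustration and are far cruder than the paper's.

```python
# Toy ABC rejection sampler for clone age from intra-clonal diversity.
# All rates, sample sizes, and tolerances are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
mu, n_loci, n_plants = 1e-3, 12, 50   # per-locus per-generation rate, loci, sample size

def simulate_diversity(age):
    """Fraction of sampled plants carrying at least one somatic mutation."""
    p_mut = 1.0 - (1.0 - mu) ** (n_loci * age)
    return rng.binomial(n_plants, p_mut) / n_plants

d_obs = simulate_diversity(120)        # pretend this is the observed statistic

# ABC rejection: draw candidate ages from a uniform prior and keep those
# whose simulated diversity falls within a tolerance of the observation
prior = rng.uniform(1, 500, 200_000)
p_prior = 1.0 - (1.0 - mu) ** (n_loci * prior)
sims = rng.binomial(n_plants, p_prior) / n_plants
accepted = prior[np.abs(sims - d_obs) <= 0.02]
posterior_mean = float(accepted.mean())
```

As the abstract notes, the inferred age scales with the assumed mutation rate, so here too the posterior is only as precise as `mu`.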

  4. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    Science.gov (United States)

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and to account for features typical of such data: a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single-equation and a 2-part model are presented, along with estimation algorithms based on maximum likelihood, quasi-likelihood, and Bayesian Markov chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and two-part Beta regression models provide flexible approaches to regressing outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcome distribution. This work provides applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
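A minimal single-equation Beta regression can be fit by maximum likelihood with a logit link for the mean and a constant precision φ. This is an illustrative sketch in the spirit of the estimators discussed, not the authors' implementation; it ignores the spike at 1 that their two-part model handles, and it assumes SciPy is available.

```python
# Single-equation Beta regression by maximum likelihood: logit link for the
# mean, log-parameterized constant precision phi. Illustrative sketch only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

def fit_beta_regression(X, y):
    """Return (coefficients, phi) maximizing the Beta log-likelihood."""
    def negloglik(params):
        b, log_phi = params[:-1], params[-1]
        mu = np.clip(1.0 / (1.0 + np.exp(-X @ b)), 1e-6, 1.0 - 1e-6)
        phi = np.exp(log_phi)
        # Beta parameterized by mean mu and precision phi: a = mu*phi, b = (1-mu)*phi
        return -np.sum(beta_dist.logpdf(y, mu * phi, (1.0 - mu) * phi))
    # warm start: OLS on the logit-transformed outcome (y strictly in (0,1))
    b0, *_ = np.linalg.lstsq(X, np.log(y / (1.0 - y)), rcond=None)
    x0 = np.append(b0, 1.0)
    res = minimize(negloglik, x0, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
    return res.x[:-1], float(np.exp(res.x[-1]))
```

A two-part extension would add a logistic model for P(y = 1) and apply this Beta piece to the interior observations only.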

  5. Estimating water equivalent snow depth from related meteorological variables

    International Nuclear Information System (INIS)

    Steyaert, L.T.; LeDuc, S.K.; Strommen, N.D.; Nicodemus, M.L.; Guttman, N.B.

    1980-05-01

    Engineering design must take into consideration natural loads and stresses caused by meteorological elements such as wind, snow, precipitation and temperature. The purpose of this study was to determine the relationship of water equivalent snow depth measurements to meteorological variables. Several predictor models were evaluated for use in estimating water equivalent values, including linear regression, principal component regression, and non-linear regression models. Linear, non-linear and Scandinavian models were used to generate annual water equivalent estimates for approximately 1100 cooperative data stations where predictor variables are available but no water equivalent measurements exist. These estimates were used to develop probability estimates of snow load for each station. Map analyses for 3 probability levels are presented.
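The simplest member of the estimator family evaluated here is ordinary linear regression of water equivalent on meteorological predictors. The sketch below uses fabricated predictors and coefficients purely to show the fitting step; it is not the study's model.

```python
# Ordinary least-squares regression of water-equivalent snow depth on
# meteorological predictors. Predictors and "true" coefficients are fabricated.
import numpy as np

rng = np.random.default_rng(2)
n = 300
snow_depth = rng.uniform(0, 100, n)          # cm
mean_temp = rng.uniform(-15, 0, n)           # °C
# fabricated truth: water equivalent rises with depth, falls with temperature
water_eq = 0.25 * snow_depth - 0.4 * mean_temp + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), snow_depth, mean_temp])
coef, *_ = np.linalg.lstsq(X, water_eq, rcond=None)   # [intercept, b_depth, b_temp]
```

Principal-component regression would first project `X` onto its leading components to stabilize the fit when predictors are collinear.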

  6. Estimates of Leaf Relative Water Content from Optical Polarization Measurements

    Science.gov (United States)

    Dahlgren, R. P.; Vanderbilt, V. C.; Daughtry, C. S. T.

    2017-12-01

    Remotely sensing the water status of plant canopies remains a long-term goal of remote sensing research. Existing approaches, such as the Crop Water Stress Index (CWSI) and the Equivalent Water Thickness (EWT), have limitations. The CWSI, based upon remotely sensed canopy radiant temperature in the thermal infrared spectral region, does not work well in humid regions, requires estimates of the vapor pressure deficit near the canopy during the remote sensing over-flight and, once stomata close, provides little information regarding canopy water status. The EWT is based upon the physics of water-light interaction in the 900-2000 nm spectral region, not plant physiology. Our goal, the development of a remote sensing technique for estimating plant water status from measurements in the VIS/NIR spectral region, would potentially provide remote sensing access to plant dehydration physiology: to the cellular photochemistry and structural changes associated with water deficits in leaves. In this research, we used crossed optical polarization filters to measure the VIS/NIR light reflected from the leaf interior, R, as well as the leaf transmittance, T, for 78 corn (Zea mays) and soybean (Glycine max) leaves having relative water contents (RWC) between 0.60 and 0.98. Our results show that as RWC decreases, R increases while T decreases. They tie R and T changes in the VIS/NIR to leaf physiological changes, linking the light scattered out of the drying leaf interior to its relative water content and to changes in leaf cellular structure and pigments. Our results suggest that remotely sensing the physiological water status of a single leaf, and perhaps of a plant canopy, might be possible in the future.

  7. The First Result of Relative Positioning and Velocity Estimation Based on CAPS

    Science.gov (United States)

    Zhao, Jiaojiao; Ge, Jian; Wang, Liang; Wang, Ningbo; Zhou, Kai; Yuan, Hong

    2018-01-01

    The Chinese Area Positioning System (CAPS) is a new positioning system developed by the Chinese Academy of Sciences based on communication satellites in geosynchronous orbit. CAPS has been regarded as a pilot system for testing new technology for the design, construction and updating of the BeiDou Navigation Satellite System (BDS). The system structure of CAPS, comprising the space, ground control station and user segments, resembles that of traditional Global Navigation Satellite Systems (GNSSs), but with the clock on the ground, the navigation signal in the C waveband, and different principles of operation. The major difference is that the CAPS navigation signal is first generated at the ground control station, then transmitted to the satellite in orbit, and finally forwarded by the communication satellite transponder to the user. This design moves the clock from the satellite in orbit to the ground, so the clock error can be more easily controlled and mitigated to improve positioning accuracy. This paper presents the performance of CAPS-based relative positioning and velocity estimation as assessed in Beijing, China. The numerical results show that (1) the accuracies of relative positioning using only code measurements are 1.25 and 1.8 m in the horizontal and vertical components, respectively; (2) they are about 2.83 and 3.15 cm in static mode and 6.31 and 10.78 cm in kinematic mode, respectively, when using carrier-phase measurements with ambiguities fixed; and (3) the accuracy of velocity estimation is about 0.04 and 0.11 m/s in static and kinematic modes, respectively. These results indicate the potential of CAPS for high-precision positioning and velocity estimation and the availability of a new navigation mode based on communication satellites. PMID:29757204

  8. Estimation of baseline lifetime risk of developed cancer related to radiation exposure in China

    International Nuclear Information System (INIS)

    Li Xiaoliang; Niu Haowei; Sun Quanfu; Ma Weidong

    2011-01-01

    Objective: To introduce the general international method for estimating the lifetime risk of developing cancer, and to estimate baseline lifetime risk values for several kinds of cancer related to radiation exposure in China. Methods: The risk estimation was based on data from the Chinese Cancer Registry Annual Report (2010) and the China Population and Employment Statistics Yearbook (2009), and followed the method previously published by the National Cancer Institute (NCI) in the USA. Results: The lifetime risk of all cancer in China in 2007 was estimated to be 27.77%; that of lung cancer, 5.96%; that of breast cancer for females, 3.34%; that of all leukemia, 0.14%; and that of thyroid cancer, 0.37%. The lifetime risks of all cancer were estimated to be 32.74% for males and 24.73% for females, and 36.47% for urban residents versus 26.79% for rural people. Conclusions: The lifetime risk of all cancer for males in 2007 was about 1.25 times that for females. The value for urban residents was about 1.35 times that for rural residents. The lifetime risk of developing cancer in 2007 in China was lower than that in developed countries, such as Japan. (authors)
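
The lifetime-risk figure above is conventionally computed by accumulating age-specific hazards, in the spirit of the NCI method the abstract cites. A minimal sketch, where the age-band rates, the band width, and the function name `lifetime_risk` are hypothetical illustrative choices, not values from the Chinese registries:

```python
import math

def lifetime_risk(rates_per_100k, band_years=5):
    """Approximate lifetime risk as 1 - exp(-cumulative hazard), where the
    cumulative hazard sums age-band incidence rates times band width."""
    cumulative_hazard = sum(r / 100_000 * band_years for r in rates_per_100k)
    return 1.0 - math.exp(-cumulative_hazard)

# Hypothetical incidence rates (per 100,000 person-years) for eighteen
# 5-year age bands from 0-4 up to 85+; NOT the Chinese registry data.
rates = [5, 5, 8, 10, 15, 25, 40, 70, 120, 200,
         320, 480, 650, 820, 950, 1000, 980, 900]
print(f"Illustrative lifetime risk: {lifetime_risk(rates):.2%}")
```

A fuller treatment would also discount by competing mortality (as the NCI's DevCan software does); this sketch ignores that refinement.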

  9. Estimated Perennial Streams of Idaho and Related Geospatial Datasets

    Science.gov (United States)

    Rea, Alan; Skinner, Kenneth D.

    2009-01-01

    The perennial or intermittent status of a stream has bearing on many regulatory requirements. Because of changing technologies over time, cartographic representation of perennial/intermittent status of streams on U.S. Geological Survey (USGS) topographic maps is not always accurate and (or) consistent from one map sheet to another. Idaho Administrative Code defines an intermittent stream as one having a 7-day, 2-year low flow (7Q2) less than 0.1 cubic feet per second. To establish consistency with the Idaho Administrative Code, the USGS developed regional regression equations for Idaho streams for several low-flow statistics, including 7Q2. Using these regression equations, the 7Q2 streamflow may be estimated for naturally flowing streams anywhere in Idaho to help determine perennial/intermittent status of streams. Using these equations in conjunction with a Geographic Information System (GIS) technique known as weighted flow accumulation allows for an automated and continuous estimation of 7Q2 streamflow at all points along a stream, which in turn can be used to determine if a stream is intermittent or perennial according to the Idaho Administrative Code operational definition. The selected regression equations were applied to create continuous grids of 7Q2 estimates for the eight low-flow regression regions of Idaho. By applying the 0.1 ft3/s criterion, the perennial streams have been estimated in each low-flow region. Uncertainty in the estimates is shown by identifying a 'transitional' zone, corresponding to flow estimates of 0.1 ft3/s plus and minus one standard error. Considerable additional uncertainty exists in the model of perennial streams presented in this report. The regression models provide overall estimates based on general trends within each regression region. These models do not include local factors such as a large spring or a losing reach that may greatly affect flows at any given point. Site-specific flow data, assuming a sufficient period of
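
The classification rule described above reduces to a threshold test with a one-standard-error 'transitional' band around the 0.1 ft3/s criterion. A minimal sketch, assuming additive standard errors; the example flow values and the 0.03 ft3/s standard error are hypothetical, not values from the Idaho regression regions:

```python
THRESHOLD_CFS = 0.1  # Idaho Administrative Code 7Q2 criterion, ft3/s

def classify_stream(q7q2, std_error):
    """Label a stream point from its estimated 7Q2 low flow, flagging a
    'transitional' zone of plus or minus one standard error around the cutoff."""
    if q7q2 >= THRESHOLD_CFS + std_error:
        return "perennial"
    if q7q2 <= THRESHOLD_CFS - std_error:
        return "intermittent"
    return "transitional"

# Hypothetical 7Q2 estimates (ft3/s) at three points along a stream
for q in (0.01, 0.09, 0.40):
    print(f"{q:.2f} ft3/s -> {classify_stream(q, std_error=0.03)}")
```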

  10. Report on estimated nuclear energy related cost for fiscal 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The report first describes major actions planned to be taken in Japan in fiscal 1991 in the field of nuclear energy utilization. Major activities to be undertaken for comprehensive strengthening of safety assurance measures are described, focusing on improvement of nuclear energy related safety regulations, promotion of research for safety assurance, improvement and strengthening of disaster prevention measures, environmental radioactivity surveys, control of exposure of workers engaged in radioactivity related jobs, etc. The report then describes actions required for the establishment of a nuclear fuel cycle, focusing on the procurement of uranium resources, establishment of a uranium enrichment process, reprocessing of spent fuel, application of recovered uranium, etc. Other activities are required for the development of new type reactors, effective application of plutonium, development of basic techniques, international contributions, and cooperation with the public. The report then summarizes estimated costs required for the activities to be performed by the Japan Atomic Energy Research Institute, the Power Reactor and Nuclear Fuel Development Corporation, the National Institute of Radiological Sciences, and the Institute of Physical and Chemical Research. (N.K.)

  11. Lifetime risk of pregnancy-related death among Zambian women: district-level estimates from the 2010 census

    NARCIS (Netherlands)

    Banda, R.; Fossgard Sandøy, I.; Fylkesnes, K.; Janssen, F.

    The aim of this study was to examine district differentials in the lifetime risk of pregnancy-related death among females aged 15–49 in Zambia. We used data on household deaths collected in the 2010 census to estimate the lifetime risk of pregnancy-related death among females in Zambia. Using

  12. Disentangling the risk assessment and intimate partner violence relation: Estimating mediating and moderating effects.

    Science.gov (United States)

    Williams, Kirk R; Stansfield, Richard

    2017-08-01

    To manage intimate partner violence (IPV), the criminal justice system has turned to risk assessment instruments to predict if a perpetrator will reoffend. Empirically determining whether offenders assessed as high risk are those who recidivate is critical for establishing the predictive validity of IPV risk assessment instruments and for guiding the supervision of perpetrators. But by focusing solely on the relation between calculated risk scores and subsequent IPV recidivism, previous studies of the predictive validity of risk assessment instruments omitted mediating factors intended to mitigate the risk of this behavioral recidivism. The purpose of this study was to examine the mediating effects of such factors and the moderating effects of risk assessment on the relation between assessed risk (using the Domestic Violence Screening Instrument-Revised [DVSI-R]) and recidivistic IPV. Using a sample of 2,520 perpetrators of IPV, results revealed that time sentenced to jail and time sentenced to probation each significantly mediated the relation between DVSI-R risk level and frequency of reoffending. The results also revealed that assessed risk moderated the relation between these mediating factors and IPV recidivism, with reduced recidivism (negative estimated effects) for high-risk perpetrators but increased recidivism (positive estimated effects) for low-risk perpetrators. The implication is to assign interventions to the level of risk so that no harm is done. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Design-related bias in estimates of accuracy when comparing imaging tests: examples from breast imaging research

    International Nuclear Information System (INIS)

    Houssami, Nehmat; Ciatto, Stefano

    2010-01-01

    This work highlights concepts on the potential for design-related factors to bias estimates of test accuracy in comparative imaging research. We chose two design factors, selection of eligible subjects and the reference standard, to examine the effect of design limitations on estimates of accuracy. Estimates of sensitivity in a study of the comparative accuracy of mammography and ultrasound differed according to how subjects were selected. Comparison of a new imaging test with an existing test should distinguish whether the new test is to be used as a replacement for, or as an adjunct to, the conventional test, to guide the method for subject selection. Quality of the reference standard, examined in a meta-analysis of preoperative breast MRI, varied across studies and was associated with estimates of incremental accuracy. Potential solutions to deal with the reference standard are outlined where an ideal reference standard may not be available in all subjects. These examples of breast imaging research demonstrate that design-related bias, when comparing a new imaging test with a conventional imaging test, may bias accuracy in a direction that favours the new test by overestimating the accuracy of the new test or by underestimating that of the conventional test. (orig.)

  14. How Do Different Aspects of Spatial Skills Relate to Early Arithmetic and Number Line Estimation?

    Directory of Open Access Journals (Sweden)

    Véronique Cornu

    2017-12-01

    The present study investigated the predictive role of spatial skills for arithmetic and number line estimation in kindergarten children (N = 125). Spatial skills are known to be related to mathematical development, but due to the construct’s non-unitary nature, different aspects of spatial skills need to be differentiated. In the present study, a spatial orientation task, a spatial visualization task and a visuo-motor integration task were administered to assess three different aspects of spatial skills. Furthermore, we assessed counting abilities, knowledge of Arabic numerals, quantitative knowledge, as well as verbal working memory and verbal intelligence in kindergarten. Four months later, the same children performed an arithmetic and a number line estimation task to evaluate how the abilities measured at Time 1 predicted early mathematics outcomes. Hierarchical regression analysis revealed that children’s performance in arithmetic was predicted by their performance on the spatial orientation and visuo-motor integration tasks, as well as their knowledge of Arabic numerals. Performance in number line estimation was significantly predicted by the children’s spatial orientation performance. Our findings emphasize the role of spatial skills, notably spatial orientation, in mathematical development. The relation between spatial orientation and arithmetic was partially mediated by the number line estimation task. Our results further show that some aspects of spatial skills might be more predictive of mathematical development than others, underlining the importance of differentiating within the construct of spatial skills when it comes to understanding numerical development.

  15. Relative risk estimation of Chikungunya disease in Malaysia: An analysis based on Poisson-gamma model

    Science.gov (United States)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2015-05-01

    Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the usage and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both results are displayed and compared using maps, and the Poisson-Gamma model yields a smoother map with fewer extreme values of estimated relative risk. Extensions of this paper will consider other methods that are relevant to overcoming the drawbacks of the existing methods, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
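
The two estimators compared in the abstract can be sketched in a few lines: the SMR is observed over expected counts, and the Poisson-Gamma model replaces it with a posterior mean that shrinks small-area extremes toward the prior. The counts and prior parameters below are hypothetical, not the Malaysian Chikungunya data:

```python
def smr(observed, expected):
    """Classical relative-risk estimate for one area: O_i / E_i."""
    return observed / expected

def poisson_gamma_rr(observed, expected, alpha, beta):
    """Posterior mean relative risk when O_i ~ Poisson(RR_i * E_i) and
    RR_i ~ Gamma(alpha, beta): (O_i + alpha) / (E_i + beta)."""
    return (observed + alpha) / (expected + beta)

# Hypothetical small district: 2 cases observed, 0.5 expected, Gamma(1, 1) prior
print(smr(2, 0.5))                     # extreme SMR from sparse counts
print(poisson_gamma_rr(2, 0.5, 1, 1))  # shrunk toward the prior mean of 1
```

This shrinkage is exactly why the abstract reports a smoother map with fewer extreme values under the Poisson-Gamma model.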

  16. Significance of relative velocity in drag force or drag power estimation for a tethered float

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sastry, J.S.

    There is difference in opinion regarding the use of relative velocity instead of particle velocity alone in the estimation of drag force or power. In the present study, a tethered spherical float which undergoes oscillatory motion in regular waves...

  17. The Effect of Uncertainty in Exposure Estimation on the Exposure-Response Relation between 1,3-Butadiene and Leukemia

    Directory of Open Access Journals (Sweden)

    George Maldonado

    2009-09-01

    In a follow-up study of mortality among North American synthetic rubber industry workers, cumulative exposure to 1,3-butadiene was positively associated with leukemia. Problems with historical exposure estimation, however, may have distorted the association. To evaluate the impact of potential inaccuracies in exposure estimation, we conducted uncertainty analyses of the relation between cumulative exposure to butadiene and leukemia. We created 1,000 sets of butadiene estimates using job-exposure matrices consisting of exposure values that corresponded to randomly selected percentiles of the approximate probability distribution of plant-, work area/job group-, and year-specific butadiene ppm. We then analyzed the relation between cumulative exposure to butadiene and leukemia for each of the 1,000 sets of butadiene estimates. In the uncertainty analysis, the point estimate of the RR for the first nonzero exposure category (>0–<37.5 ppm-years) was most likely to be about 1.5. The rate ratio for the second exposure category (37.5–<184.7 ppm-years) was most likely to range from 1.5 to 1.8. The RR for category 3 of exposure (184.7–<425.0 ppm-years) was most likely between 2.1 and 3.0. The RR for the highest exposure category (425.0+ ppm-years) was likely to be between 2.9 and 3.7. This range of RR point estimates can best be interpreted as a probability distribution that describes our uncertainty in RR point estimates due to uncertainty in exposure estimation. After considering the complete probability distributions of butadiene exposure estimates, the exposure-response association of butadiene and leukemia was maintained. This exercise was a unique example of how uncertainty analyses can be used to investigate and support an observed measure of effect when occupational exposure estimates are employed in the absence of direct exposure measurements.
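
The resampling design described above can be illustrated with a deliberately simplified Monte Carlo: draw an exposure value from an assumed distribution, build a cumulative exposure, and recompute a rate ratio under an assumed exposure-response slope. Every number here (the lognormal parameters, the 10-year tenure, the 0.05 slope) is hypothetical; the actual study drew percentiles of plant-, work area/job group-, and year-specific distributions from job-exposure matrices:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def one_rr_draw():
    """One uncertainty-analysis iteration under toy assumptions."""
    ppm = random.lognormvariate(0.0, 0.5)      # assumed exposure distribution
    cumulative_ppm_years = ppm * 10            # assumed 10-year tenure
    return 1.0 + 0.05 * cumulative_ppm_years   # assumed linear RR model

rr_draws = [one_rr_draw() for _ in range(1000)]
print(f"median RR {statistics.median(rr_draws):.2f}, "
      f"95% interval {statistics.quantiles(rr_draws, n=40)[0]:.2f}"
      f"-{statistics.quantiles(rr_draws, n=40)[-1]:.2f}")
```

The resulting spread of RR draws plays the role of the probability distribution of point estimates the abstract describes.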

  18. Some statistical considerations related to the estimation of cancer risk following exposure to ionizing radiation

    International Nuclear Information System (INIS)

    Land, C.E.; Pierce, D.A.

    1983-01-01

    Statistical theory and methodology provide the logical structure for scientific inference about the cancer risk associated with exposure to ionizing radiation. Although much is known about radiation carcinogenesis, the risk associated with low-level exposures is difficult to assess because it is too small to measure directly. Estimation must therefore depend upon mathematical models which relate observed risks at high exposure levels to risks at lower exposure levels. Extrapolated risk estimates obtained using such models are heavily dependent upon assumptions about the shape of the dose-response relationship, the temporal distribution of risk following exposure, and variation of risk according to variables such as age at exposure, sex, and underlying population cancer rates. Expanded statistical models, which make explicit certain assumed relationships between different data sets, can be used to strengthen inferences by incorporating relevant information from diverse sources. They also allow the uncertainties inherent in information from related data sets to be expressed in estimates which partially depend upon that information. To the extent that informed opinion is based upon a valid assessment of scientific data, the larger context of decision theory, which includes statistical theory, provides a logical framework for the incorporation into public policy decisions of the informational content of expert opinion

  19. Mental health of a police force: estimating prevalence of work-related depression in Australia without a direct national measure.

    Science.gov (United States)

    Lawson, Katrina J; Rodwell, John J; Noblet, Andrew J

    2012-06-01

    The risk of work-related depression in Australia was estimated based on a survey of 631 police officers. Psychological wellbeing and psychological distress items were mapped onto a measure of depression to identify optimal cutoff points. Based on a sample of police officers, Australian workers, in general, are at risk of depression when general psychological wellbeing is considerably compromised. Large-scale estimation of work-related depression in the broader population of employed persons in Australia is reasonable. The relatively high prevalence of depression among police officers emphasizes the need to examine prevalence rates of depression among Australian employees.

  20. Estimating the Relative Water Content of Single Leaves from Optical Polarization Measurements

    Science.gov (United States)

    Vanderbilt, Vern; Daughtry, Craig; Dahlgren, Robert

    2016-01-01

    Remotely sensing the water status of plants and the water content of canopies remain long-term goals of remote sensing research. For monitoring canopy water status, existing approaches such as the Crop Water Stress Index (CWSI) and the Equivalent Water Thickness (EWT) have limitations. The CWSI does not work well in humid regions, requires estimates of the vapor pressure deficit near the canopy during the remote sensing over-flight and, once stomata close, provides little information regarding the canopy water status. The EWT is based upon the physics of water-light interaction, not plant physiology. In this research, we applied optical polarization techniques to monitor the VIS/NIR light reflected from the leaf interior, R, as well as the leaf transmittance, T, as the relative water content (RWC) of corn (Zea mays) leaves decreased. Our results show that R and T both changed nonlinearly as each leaf dried, R increasing and T decreasing. Our results tie changes in the VIS/NIR R and T to leaf physiological changes, linking the light scattered out of the drying leaf interior to its relative water content and to changes in leaf cellular structure and pigments. Our results suggest remotely sensing the physiological water status of a single leaf, and perhaps of a plant canopy, might be possible in the future. However, using our approach to estimate the water status of a leaf does not appear possible at present, because our results display too much variability that we do not yet understand.

  1. Estimated daily salt intake in relation to blood pressure and blood lipids

    DEFF Research Database (Denmark)

    Thuesen, Betina H; Toft, Ulla; Buhelt, Lone P

    2015-01-01

    BACKGROUND: Excessive salt intake causes increased blood pressure which is considered the leading risk for premature death. One major challenge when evaluating associations between daily salt intake and markers of non-communicable diseases is that a high daily salt intake correlates with obesity...... 3294 men and women aged 18-69 years from a general population based study in Copenhagen, Denmark. Estimated 24-hour sodium excretion was calculated by measurements of creatinine and sodium concentration in spot urine in combination with information of sex, age, height and weight. The relations...

  2. Absolute Monotonicity of Functions Related To Estimates of First Eigenvalue of Laplace Operator on Riemannian Manifolds

    Directory of Open Access Journals (Sweden)

    Feng Qi

    2014-10-01

    The authors find the absolute monotonicity and complete monotonicity of some functions involving trigonometric functions and related to estimates of the lower bounds of the first eigenvalue of the Laplace operator on Riemannian manifolds.

  3. Relative pollen productivity estimates in the modern agricultural landscape of Central Bohemia (Czech Republic)

    Czech Academy of Sciences Publication Activity Database

    Abraham, V.; Kozáková, Radka

    2012-01-01

    Vol. 179, July 1 (2012), pp. 1-12 ISSN 0034-6667 R&D Projects: GA AV ČR IAAX00020701; GA AV ČR IAAX00050801 Institutional research plan: CEZ:AV0Z80020508 Keywords: relative pollen productivity estimates * Central Bohemia * moss polsters * pollen-vegetation relationship * relevant source area of pollen Subject RIV: EH - Ecology, Behaviour Impact factor: 1.933, year: 2012

  4. Estimation of Relative Economic Weights of Hanwoo Carcass Traits Based on Carcass Market Price

    Science.gov (United States)

    Choy, Yun Ho; Park, Byoung Ho; Choi, Tae Jung; Choi, Jae Gwan; Cho, Kwang Hyun; Lee, Seung Soo; Choi, You Lim; Koh, Kyung Chul; Kim, Hyo Sun

    2012-01-01

    The objective of this study was to estimate economic weights of Hanwoo carcass traits that can be used to build economic selection indexes for the selection of seedstocks. Data from carcass measures for determining beef yield and quality grades were collected and provided by the Korean Institute for Animal Products Quality Evaluation (KAPE). Out of 1,556,971 records, 476,430 records collected from 13 abattoirs from 2008 to 2010, after deletion of outlying observations, were used to estimate relative economic weights of bid price per kg carcass weight on cold carcass weight (CW), eye muscle area (EMA), backfat thickness (BF) and marbling score (MS), and the phenotypic relationships among component traits. Price of carcass tended to increase linearly as yield grades or quality grades, marginally or in combination, increased. Partial regression coefficients for MS, EMA, BF, and CW in original scales were +948.5 won/score, +27.3 won/cm2, −95.2 won/mm and +7.3 won/kg when all three sex categories were taken into account. Among the four grade-determining traits, the relative economic weight of MS was the greatest. Variations in partial regression coefficients by sex category were large, but the trends in relative weights for each carcass measure were similar. Relative economic weights of the four traits in integer values, when standardized measures were fit into a covariance model, were +4:+1:−1:+1 for MS:EMA:BF:CW. Further research is required to account for the cost of production per unit carcass weight or per unit production under different economic situations. PMID:25049531
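
The integer weights reported above (+4:+1:−1:+1 for MS:EMA:BF:CW on standardized scales) translate directly into a selection index. A sketch in which the trait means and standard deviations are hypothetical, not Hanwoo population estimates:

```python
# Relative economic weights from the abstract, applied to standardized traits
WEIGHTS = {"MS": 4, "EMA": 1, "BF": -1, "CW": 1}
# Hypothetical trait means and standard deviations (score, cm2, mm, kg)
PARAMS = {"MS": (5.0, 2.0), "EMA": (90.0, 10.0),
          "BF": (12.0, 4.0), "CW": (430.0, 45.0)}

def economic_index(animal):
    """Weighted sum of standardized carcass measures for one animal."""
    return sum(w * (animal[t] - PARAMS[t][0]) / PARAMS[t][1]
               for t, w in WEIGHTS.items())

# Hypothetical candidate: high marbling, thin backfat, otherwise near average
bull = {"MS": 8.0, "EMA": 92.0, "BF": 10.0, "CW": 450.0}
print(round(economic_index(bull), 2))
```

Note the sign convention: backfat enters with a negative weight, so thinner backfat raises the index.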

  5. A Full-Vector Geomagnetic PSV Curve Derived from East Maui Volcano Lava Flows for the Last ~15,000 years

    Science.gov (United States)

    Herrero-Bervera, E.; Hagstrum, J. T.; Champion, D. E.; Dekkers, M. J.; Bohnel, H.

    2012-12-01

    We have studied the paleomagnetism and rock magnetism of oriented samples from 105 lava flows erupted by the East Maui Volcano, Hawai`i (i.e. the Hana Volcanics) in order to construct a directional and absolute paleointensity (full-vector) paleosecular variation (PSV) curve for the last ~15,000 years. The directional geomagnetic behavior for East Maui has already been published by Sherrod et al. [JGR, 111, 10.1029/2005JB003876, 2006], and Herrero-Bervera and Valet [PEPI, 161, 267-280, 2007]. All lava flows were previously dated using radiocarbon methods and span the last ~15,000 years of geomagnetic behavior. In addition to demagnetization experiments (i.e. alternating field and thermal) we have determined Curie temperatures and hysteresis parameters to characterize composition and grain size of the magnetic grains contained by the sampled flows. Accordingly, most lava flow samples have two types of magnetic minerals in different proportions: low-Ti titanomagnetite with high Curie temperature and high-Ti titanomagnetite with low Curie temperature. During sample heating and cooling the temperature curves are often irreversible. Magnetic grains have sizes that are within the pseudo single domain range and include both single and multi domain particles. Absolute paleointensities (PI) of 37 flows were obtained using the multi-specimen parallel differential pTRM method [Dekkers and Böhnel, EPSL, 248, 508-517, 2006], mostly at temperatures between 170° and 250°C when high-Ti titanomagnetite was dominant. In a few samples with magnetic grains having near-magnetite compositions, higher temperatures could be used. For some of the samples the recently proposed domain-state correction [Fabian and Leonhardt, EPSL, 297, 84-94, 2010] was applied as well. In addition, we have been able to successfully obtain PIs by means of the Thellier-Coe protocol for 17 lava flows. Our paleointensity results correlate well with global absolute paleointensity determinations.

  6. Validation of generic cost estimates for construction-related activities at nuclear power plants: Final report

    International Nuclear Information System (INIS)

    Simion, G.; Sciacca, F.; Claiborne, E.; Watlington, B.; Riordan, B.; McLaughlin, M.

    1988-05-01

    This report represents a validation study of the cost methodologies and quantitative factors derived in Labor Productivity Adjustment Factors and Generic Methodology for Estimating the Labor Cost Associated with the Removal of Hardware, Materials, and Structures From Nuclear Power Plants. This cost methodology was developed to support NRC analysts in determining generic estimates of removal, installation, and total labor costs for construction-related activities at nuclear generating stations. In addition to the validation discussion, this report reviews the generic cost analysis methodology employed. It also discusses each of the individual cost factors used in estimating the costs of physical modifications at nuclear power plants. The generic estimating approach presented uses the 'greenfield' or new-plant construction installation costs compiled in the Energy Economic Data Base (EEDB) as a baseline. These baseline costs are then adjusted to account for labor productivity, radiation fields, learning curve effects, and impacts on ancillary systems or components. For comparisons of estimated vs actual labor costs, approximately four dozen actual cost data points (as reported by 14 nuclear utilities) were obtained. Detailed background information was collected on each individual data point to give the best understanding possible so that the labor productivity factors, removal factors, etc., could judiciously be chosen. This study concludes that cost estimates that are typically within 40% of the actual values can be generated by prudently using the methodologies and cost factors investigated herein.

  7. Estimate of Venous Thromboembolism and Related-Deaths Attributable to the Use of Combined Oral Contraceptives in France

    OpenAIRE

    Tricotel, Aurore; Raguideau, Fanny; Collin, Cédric; Zureik, Mahmoud

    2014-01-01

    PURPOSE: To estimate the number of venous thromboembolic events and related-premature mortality (including immediate in-hospital lethality) attributable to the use of combined oral contraceptives in women aged 15 to 49 years-old between 2000 and 2011 in France. METHODS: French data on sales of combined oral contraceptives and on contraception behaviours from two national surveys conducted in 2000 and 2010 were combined to estimate the number of exposed women according to contraceptives genera...

  8. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    Science.gov (United States)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.

  9. Exposure to Traffic-related Air Pollution During Pregnancy and Term Low Birth Weight: Estimation of Causal Associations in a Semiparametric Model

    Science.gov (United States)

    Padula, Amy M.; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B.

    2012-01-01

    Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000–2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants. PMID:23045474

  10. Using speeding detections and numbers of fatalities to estimate relative risk of a fatality for motorcyclists and car drivers.

    Science.gov (United States)

    Huggins, Richard

    2013-10-01

    Precise estimation of the relative risk of motorcyclists being involved in a fatal accident compared to car drivers is difficult. Simple estimates based on the proportions of licenced drivers or riders that are killed in a fatal accident are biased as they do not take into account the exposure to risk. However, exposure is difficult to quantify. Here we adapt the ideas behind the well known induced exposure methods and use available summary data on speeding detections and fatalities for motorcycle riders and car drivers to estimate the relative risk of a fatality for motorcyclists compared to car drivers under mild assumptions. The method is applied to data on motorcycle riders and car drivers in Victoria, Australia in 2010 and a small simulation study is conducted. Copyright © 2013 Elsevier Ltd. All rights reserved.
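
The induced-exposure adaptation above amounts to a ratio of fatality-per-detection rates, with speeding detections standing in for exposure. A minimal sketch with hypothetical counts, not the 2010 Victorian data:

```python
def relative_risk(fatal_moto, detections_moto, fatal_car, detections_car):
    """RR of a fatality for motorcyclists vs car drivers, using speeding
    detections as the common exposure proxy (induced-exposure assumption)."""
    return (fatal_moto / detections_moto) / (fatal_car / detections_car)

# Hypothetical summary counts for one year
rr = relative_risk(fatal_moto=49, detections_moto=20_000,
                   fatal_car=170, detections_car=500_000)
print(f"Estimated relative risk: {rr:.1f}")
```

The key (and contestable) assumption is that speeding detections are proportional to road exposure in both groups; the paper's "mild assumptions" address exactly this point.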

  11. Estimation of regional building-related C&D debris generation and composition: case study for Florida, US.

    Science.gov (United States)

    Cochran, Kimberly; Townsend, Timothy; Reinhart, Debra; Heck, Howell

    2007-01-01

    A methodology for estimating the generation and composition of building-related construction and demolition (C&D) debris at a regional level was explored. Six specific categories of debris were examined: residential construction, nonresidential construction, residential demolition, nonresidential demolition, residential renovation, and nonresidential renovation. Debris produced from each activity was calculated as the product of the total area of activity and the waste generated per unit area of activity. Similarly, composition was estimated as the product of the total area of activity and the amount of each waste component generated per unit area. The area of activity was calculated using statistical data, and individual site studies were used to assess the average amount of waste generated per unit area. The application of the methodology was illustrated using Florida, US: approximately 3,750,000 metric tons of building-related C&D debris were estimated to have been generated in Florida in 2000. Of that amount, concrete represented 56%, wood 13%, drywall 11%, miscellaneous debris 8%, asphalt roofing materials 7%, metal 3%, cardboard 1%, and plastic 1%. This model differs from others because it accommodates regional construction styles and available data. The resulting generation amount per capita is less than the US estimate, attributable to the high construction and low demolition activity seen in Florida.
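
    The accounting in the methodology (debris = activity area × per-unit-area generation rate, summed over activity categories, and likewise per component for composition) reduces to simple arithmetic. A sketch with placeholder areas and rates, not the study's values:

```python
# activity areas (m^2) and per-area generation rates (kg/m^2) -- illustrative only
areas = {"res_construction": 1.2e7, "nonres_construction": 8.0e6,
         "res_demolition": 1.5e6, "nonres_demolition": 1.0e6}
rates = {"res_construction": 20.0, "nonres_construction": 25.0,
         "res_demolition": 500.0, "nonres_demolition": 800.0}

total_tons = sum(areas[a] * rates[a] for a in areas) / 1000.0  # kg -> metric tons

# composition works the same way, with component-specific rates (kg/m^2)
concrete_rates = {"res_construction": 4.0, "nonres_construction": 6.0,
                  "res_demolition": 300.0, "nonres_demolition": 500.0}
concrete_tons = sum(areas[a] * concrete_rates[a] for a in areas) / 1000.0
concrete_share = concrete_tons / total_tons
```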

  12. Estimation of genetic parameters related to eggshell strength using random regression models.

    Science.gov (United States)

    Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K

    2015-01-01

    This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was included with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (> 0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRM suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
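
    In a random regression model, the Legendre polynomial covariates are evaluated at each test week after rescaling the week to the standard interval [-1, 1]. A minimal sketch of that basis construction; the week range and the polynomial order are hypothetical, not taken from the study:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(t, t_min, t_max, order):
    """Evaluate Legendre polynomials P_0..P_order at test week t,
    after mapping t from [t_min, t_max] onto [-1, 1]."""
    x = 2.0 * (t - t_min) / (t_max - t_min) - 1.0
    # coefficient vector [0]*k + [1] selects the k-th Legendre polynomial
    return np.array([legendre.legval(x, [0] * k + [1]) for k in range(order + 1)])

z = legendre_covariates(t=30, t_min=20, t_max=72, order=2)
```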

  13. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.

  14. Variations in the geomagnetic dipole moment during the Holocene and the past 50 kyr

    Science.gov (United States)

    Knudsen, Mads Faurschou; Riisager, Peter; Donadini, Fabio; Snowball, Ian; Muscheler, Raimund; Korhonen, Kimmo; Pesonen, Lauri J.

    2008-07-01

    All absolute paleointensity data published in peer-reviewed journals were recently compiled in the GEOMAGIA50 database. Based on the information in GEOMAGIA50, we reconstruct variations in the geomagnetic dipole moment over the past 50 kyr, with a focus on the Holocene period. A running-window approach is used to determine the axial dipole moment that provides the optimal least-squares fit to the paleointensity data, whereas associated error estimates are constrained using a bootstrap procedure. We subsequently compare the reconstruction from this study with previous reconstructions of the geomagnetic dipole moment, including those based on cosmogenic radionuclides (10Be and 14C). This comparison generally lends support to the axial dipole moments obtained in this study. Our reconstruction shows that the evolution of the dipole moment was highly dynamic, and the recently observed rates of change (5% per century) do not appear unique. We observe no apparent link between the occurrence of archeomagnetic jerks and changes in the geomagnetic dipole moment, suggesting that archeomagnetic jerks most likely represent drastic changes in the orientation of the geomagnetic dipole axis or periods characterized by large secular variation of the non-dipole field. This study also shows that the Holocene geomagnetic dipole moment was high compared to that of the preceding ~40 kyr, and that ~4 × 10^22 Am^2 appears to represent a critical threshold below which geomagnetic excursions and reversals occur.
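
    The bootstrap error constraint on a window-averaged dipole-moment estimate can be sketched as a percentile bootstrap over the moment values falling in one window. The data values, window handling, and resampling details below are illustrative assumptions, not the paper's procedure:

```python
import random

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean of the
    dipole-moment estimates (units: 10^22 Am^2) falling in one window."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

window_values = [7.5, 8.0, 8.5, 9.0, 8.2]  # made-up moments in one window
low, high = bootstrap_ci(window_values)
```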

  15. Estimation of relative permeability and capillary pressure from mass imbibition experiments

    Science.gov (United States)

    Alyafei, Nayef; Blunt, Martin J.

    2018-05-01

    We perform spontaneous imbibition experiments on three carbonates (Estaillades, Ketton, and Portland), quarry limestones that have very different pore structures and span a wide range of permeability. We measure the mass of water imbibed into air-saturated cores as a function of time under strongly water-wet conditions. Specifically, we perform co-current spontaneous imbibition experiments using a highly sensitive balance to measure the mass imbibed as a function of time for the three rocks. We use cores measuring 37 mm in diameter and three lengths of approximately 76 mm, 204 mm, and 290 mm. We show that the amount imbibed scales as the square root of time and find the parameter C, where the volume imbibed per unit cross-sectional area at time t is Ct^1/2. We find higher C values for higher permeability rocks. Employing semi-analytical solutions for one-dimensional flow and using reasonable estimates of relative permeability and capillary pressure, we can match the experimental data. We finally discuss how, in combination with conventional measurements, we can use theoretical solutions and imbibition measurements to find or constrain relative permeability and capillary pressure.
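
    Under the reported scaling, the volume imbibed per unit cross-sectional area is V/A = C·t^1/2, so C follows from a one-parameter least-squares fit of the balance readings against √t. A sketch assuming a water density of 1 g/cm^3 and made-up readings:

```python
def fit_imbibition_C(times_s, mass_g, area_cm2, rho_g_cm3=1.0):
    """Least-squares fit through the origin of V/A = C * sqrt(t),
    where V = mass / rho: C = sum(s*y) / sum(s*s), with s = sqrt(t)."""
    s = [t ** 0.5 for t in times_s]
    y = [m / (rho_g_cm3 * area_cm2) for m in mass_g]
    return sum(si * yi for si, yi in zip(s, y)) / sum(si * si for si in s)

# synthetic readings consistent with C = 0.05 cm/s^0.5 on a 10 cm^2 face
C = fit_imbibition_C([1.0, 4.0, 9.0, 16.0], [0.5, 1.0, 1.5, 2.0], 10.0)
```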

  16. Practical state of health estimation of power batteries based on Delphi method and grey relational grade analysis

    Science.gov (United States)

    Sun, Bingxiang; Jiang, Jiuchun; Zheng, Fangdan; Zhao, Wei; Liaw, Bor Yann; Ruan, Haijun; Han, Zhiqiang; Zhang, Weige

    2015-05-01

    State of health (SOH) estimation is critical for a battery management system to ensure the safety and reliability of EV battery operation. Here, we used a unique hybrid approach to enable complex SOH estimations. The approach hybridizes the Delphi method, known for its simplicity and effectiveness in applying weighting factors for complicated decision-making, and grey relational grade analysis (GRGA) for multi-factor optimization. Six critical factors were used in the consideration for SOH estimation: peak power at 30% state-of-charge (SOC); capacity; the voltage drop at 30% SOC with a C/3 pulse; the temperature rises at the end of discharge and charge at 1C, respectively; and the open-circuit voltage at the end of charge after a 1-h rest. The weighting of these factors for SOH estimation was scored by the 'experts' in the Delphi method, indicating the influencing power of each factor on SOH. The parameters for these factors expressing the battery state variations are optimized by GRGA. Eight battery cells were used to illustrate the principle and methodology to estimate the SOH by this hybrid approach, and the results were compared with those based on capacity and power capability. The contrast among different SOH estimations is discussed.
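
    The GRGA step can be sketched as a weighted grey relational grade between a reference cell's factor sequence and an aged cell's. The normalisation to [0, 1], the equal weights, and the distinguishing coefficient ρ = 0.5 are assumptions; the Delphi expert weighting itself is not reproduced:

```python
def grey_relational_grade(reference, comparison, weights, rho=0.5):
    """Weighted grey relational grade between a reference factor sequence
    (e.g. a fresh cell) and a comparison sequence (an aged cell).
    Sequences are assumed pre-normalised to [0, 1] and not identical."""
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    dmin, dmax = min(deltas), max(deltas)
    # grey relational coefficient for each factor
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(w * x for w, x in zip(weights, coeffs))

grade = grey_relational_grade([1.0, 1.0, 1.0], [1.0, 0.5, 1.0], [1/3, 1/3, 1/3])
```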

  17. High geomagnetic intensity during the mid-Cretaceous from Thellier analyses of single plagioclase crystals.

    Science.gov (United States)

    Tarduno, J A; Cottrell, R D; Smirnov, A V

    2001-03-02

    Recent numerical simulations have yielded the most efficient geodynamo, having the largest dipole intensity, when reversal frequency is low. Reliable paleointensity data are limited but heretofore have suggested that reversal frequency and paleointensity are decoupled. We report data from 56 Thellier-Thellier experiments on plagioclase crystals separated from basalts of the Rajmahal Traps (113 to 116 million years old) of India that formed during the Cretaceous Normal Polarity Superchron. These data suggest a time-averaged paleomagnetic dipole moment of 12.5 ± 1.4 × 10^22 ampere square meters, three times greater than mean Cenozoic and Early Cretaceous-Late Jurassic dipole moments, when geomagnetic reversals were frequent. This result supports a correlation between intervals of low reversal frequency and high geomagnetic field strength.
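
    Paleointensity values like these are conventionally converted to dipole moments via the virtual axial dipole moment (VADM), using standard dipole-field geometry. A sketch; the intensity and site latitude passed in below are hypothetical inputs, not values from the paper:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
R_EARTH = 6.371e6          # Earth radius, m

def vadm(paleointensity_T, latitude_deg):
    """Virtual axial dipole moment (A*m^2) from a surface paleointensity,
    using B = (mu0 * m / (4 pi r^3)) * sqrt(1 + 3 sin^2(latitude))."""
    lam = math.radians(latitude_deg)
    return (4 * math.pi * R_EARTH ** 3 / MU0) * paleointensity_T \
        / math.sqrt(1 + 3 * math.sin(lam) ** 2)

moment = vadm(50e-6, 24.0)  # hypothetical: 50 microtesla at 24 deg latitude
```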

  18. Orion Exploration Flight Test-1 Post-Flight Navigation Performance Assessment Relative to the Best Estimated Trajectory

    Science.gov (United States)

    Gay, Robert S.; Holt, Greg N.; Zanetti, Renato

    2016-01-01

    This paper details the post-flight navigation performance assessment of Orion Exploration Flight Test-1 (EFT-1). Results of each flight phase are presented: Ground Align, Ascent, Orbit, and Entry Descent and Landing. This study examines the on-board Kalman filter uncertainty along with state deviations relative to the Best Estimated Trajectory (BET). Overall, the results show that the Orion navigation system performed as well as or better than expected. Specifically, Global Positioning System (GPS) measurement availability was significantly better than anticipated at high altitudes. In addition, attitude estimation via processing GPS measurements along with Inertial Measurement Unit (IMU) data performed very well and maintained good attitude accuracy throughout the mission.

  19. MSP-tool : A VBA-based software tool for the analysis of multispecimen paleointensity data

    NARCIS (Netherlands)

    Monster, Marilyn W L; de Groot, Lennart V.; Dekkers, Mark J.

    2015-01-01

    The multispecimen protocol (MSP) is a method to estimate the Earth's magnetic field's past strength from volcanic rocks or archeological materials. By reducing the amount of heating steps and aligning the specimens parallel to the applied field, thermochemical alteration and multi-domain effects are

  20. Estimating the costs of work-related accidents and ill-health: An analysis of European data sources

    NARCIS (Netherlands)

    Heuvel, S. van den; Zwaan, L. van der; Dam, L. van; Oude Hengel, K.M.; Eekhout, I.; Emmerik, M.L. van; Oldenburg, C.; Brück, C.; Janowski, P.; Wilhelm, C.

    2017-01-01

    This report presents the results of a survey of national and international data sources on the costs of work-related injuries, illnesses and deaths. The aim was to evaluate the quality and comparability of different sources as a first step towards estimating the costs of accidents and ill-health at

  1. Reproducibility and relative validity of a food frequency questionnaire to estimate intake of dietary phylloquinone and menaquinones

    Science.gov (United States)

    Background: Several observational studies have investigated the relation of dietary phylloquinone and menaquinone intake with occurrence of chronic diseases. Most of these studies relied on food frequency questionnaires (FFQ) to estimate the intake of phylloquinone and menaquinones. However, none of...

  2. Reproducibility and relative validity of a food frequency questionnaire to estimate intake of dietary phylloquinone and menaquinones.

    NARCIS (Netherlands)

    Zwakenberg, S R; Engelen, A I P; Dalmeijer, G W; Booth, S L; Vermeer, C; Drijvers, J J M M; Ocke, M C; Feskens, E J M; van der Schouw, Y T; Beulens, J W J

    2017-01-01

    This study aims to investigate the reproducibility and relative validity of the Dutch food frequency questionnaire (FFQ), to estimate intake of dietary phylloquinone and menaquinones compared with 24-h dietary recalls (24HDRs) and plasma markers of vitamin K status.

  3. Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using...
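
    For a two-microphone far-field case, the TDOA-to-DOA step mentioned above reduces to sin θ = cτ/d. A minimal sketch of that geometric relation only; the paper's constrained filters on phase estimates are not reproduced here, and the spacing and delay values are made up:

```python
import math

def doa_from_tdoa(tdoa_s, mic_distance_m, c=343.0):
    """Far-field direction of arrival (degrees, measured from broadside)
    from the time-difference of arrival between two microphones:
    sin(theta) = c * tdoa / d."""
    return math.degrees(math.asin(c * tdoa_s / mic_distance_m))

angle = doa_from_tdoa(tdoa_s=5e-4, mic_distance_m=0.343)
```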

  4. Fast joint detection-estimation of evoked brain activity in event-related FMRI using a variational approach

    Science.gov (United States)

    Chaari, Lotfi; Vincent, Thomas; Forbes, Florence; Dojat, Michel; Ciuciu, Philippe

    2013-01-01

    In standard within-subject analyses of event-related fMRI data, two steps are usually performed separately: detection of brain activity and estimation of the hemodynamic response. Because these two steps are inherently linked, we adopt the so-called region-based Joint Detection-Estimation (JDE) framework that addresses this joint issue using a multivariate inference for detection and estimation. JDE is built by making use of a regional bilinear generative model of the BOLD response and constraining the parameter estimation by physiological priors using temporal and spatial information in a Markovian model. In contrast to previous works that use Markov Chain Monte Carlo (MCMC) techniques to sample the resulting intractable posterior distribution, we recast the JDE into a missing data framework and derive a Variational Expectation-Maximization (VEM) algorithm for its inference. A variational approximation is used to approximate the Markovian model in the unsupervised spatially adaptive JDE inference, which allows automatic fine-tuning of spatial regularization parameters. It provides a new algorithm that exhibits interesting properties in terms of estimation error and computational cost compared to the previously used MCMC-based approach. Experiments on artificial and real data show that VEM-JDE is robust to model mis-specification and provides computational gain while maintaining good performance in terms of activation detection and hemodynamic shape recovery. PMID:23096056

  5. Temperature Observation Time and Type Influence Estimates of Heat-Related Mortality in Seven U.S. Cities.

    Science.gov (United States)

    Davis, Robert E; Hondula, David M; Patel, Anjali P

    2016-06-01

    Extreme heat is a leading weather-related cause of mortality in the United States, but little guidance is available regarding how temperature variable selection impacts heat-mortality relationships. We examined how the strength of the relationship between daily heat-related mortality and temperature varies as a function of temperature observation time, lag, and calculation method. Long time series of daily mortality counts and hourly temperature for seven U.S. cities with different climates were examined using a generalized additive model. The temperature effect was modeled separately for each hour of the day (with up to 3-day lags) along with different methods of calculating daily maximum, minimum, and mean temperature. We estimated the temperature effect on mortality for each variable by comparing the 99th versus 85th temperature percentiles, as determined from the annual time series. In three northern cities (Boston, MA; Philadelphia, PA; and Seattle, WA) that appeared to have the greatest sensitivity to heat, hourly estimates were consistent with a diurnal pattern in the heat-mortality response, with strongest associations for afternoon or maximum temperature at lag 0 (day of death) or afternoon and evening of lag 1 (day before death). In warmer, southern cities, stronger associations were found with morning temperatures, but overall the relationships were weaker. The strongest temperature-mortality relationships were associated with maximum temperature, although mean temperature results were comparable. There were systematic and substantial differences in the association between temperature and mortality based on the time and type of temperature observation. Because the strongest hourly temperature-mortality relationships were not always found at times typically associated with daily maximum temperatures, temperature variables should be selected independently for each study location. In general, heat-related mortality was more closely coupled to afternoon and maximum

  6. Estimation and applicability of attenuation characteristics for source parameters and scaling relations in the Garhwal Kumaun Himalaya region, India

    Science.gov (United States)

    Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.

    2018-06-01

    Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessment of the seismic hazard potential of a region. In this study, source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear wave quality factor (Qβ(f)) values for each station at different frequencies have been applied to eliminate any bias in the determination of source parameters. The Qβ(f) values have been estimated using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S-wave quality factor relation, Qβ(f) = (152.9 ± 7)f^(0.82±0.005), is obtained by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral parameters (low-frequency spectral level and corner frequency) and source parameters (static stress drop, seismic moment, apparent stress and radiated energy) are obtained assuming an ω^-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter "kappa". The frequency resolution limit was addressed by quantifying the bias in corner frequency, stress drop and radiated energy estimates due to the finite-bandwidth effect. The data from the region show shallow-focus earthquakes with low stress drop. The estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by a partial stress drop and low effective stress model. The presence of subsurface fluids at seismogenic depth evidently influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation, even after taking as much precaution as possible in considering the effects of finite bandwidth, attenuation and site corrections.

  7. A new estimator for vector velocity estimation [medical ultrasonics]

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2001-01-01

    A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation...... be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...

  8. Mind the Gap! A Multilevel Analysis of Factors Related to Variation in Published Cost-Effectiveness Estimates within and between Countries.

    Science.gov (United States)

    Boehler, Christian E H; Lord, Joanne

    2016-01-01

    Published cost-effectiveness estimates can vary considerably, both within and between countries. Despite extensive discussion, little is known empirically about factors relating to these variations. To use multilevel statistical modeling to integrate cost-effectiveness estimates from published economic evaluations to investigate potential causes of variation. Cost-effectiveness studies of statins for cardiovascular disease prevention were identified by systematic review. Estimates of incremental costs and effects were extracted from reported base case, sensitivity, and subgroup analyses, with estimates grouped in studies and in countries. Three bivariate models were developed: a cross-classified model to accommodate data from multinational studies, a hierarchical model with multinational data allocated to a single category at country level, and a hierarchical model excluding multinational data. Covariates at different levels were drawn from a long list of factors suggested in the literature. We found 67 studies reporting 2094 cost-effectiveness estimates relating to 23 countries (6 studies reporting for more than 1 country). Data and study-level covariates included patient characteristics, intervention and comparator cost, and some study methods (e.g., discount rates and time horizon). After adjusting for these factors, the proportion of variation attributable to countries was negligible in the cross-classified model but moderate in the hierarchical models (14%-19% of total variance). Country-level variables that improved the fit of the hierarchical models included measures of income and health care finance, health care resources, and population risks. Our analysis suggested that variability in published cost-effectiveness estimates is related more to differences in study methods than to differences in national context. Multinational studies were associated with much lower country-level variation than single-country studies. 
These findings are for a single clinical

  9. Relative Validity and Reproducibility of a Food-Frequency Questionnaire for Estimating Food Intakes among Flemish Preschoolers

    Directory of Open Access Journals (Sweden)

    Inge Huybrechts

    2009-01-01

    The aims of this study were to assess the relative validity and reproducibility of a semi-quantitative food-frequency questionnaire (FFQ) applied in a large region-wide survey among 2.5-6.5-year-old children for estimating food group intakes. Parents/guardians were used as a proxy. Estimated diet records (3d) were used as the reference method, and reproducibility was measured by repeated FFQ administrations five weeks apart. In total, 650 children were included in the validity analyses and 124 in the reproducibility analyses. Comparing median FFQ1 to FFQ2 intakes, almost all evaluated food groups showed median differences within a range of ± 15%. However, for median vegetable, fruit and cheese intake, FFQ1 was > 20% higher than FFQ2. For most foods a moderate correlation (0.5-0.7) was obtained between FFQ1 and FFQ2. For cheese, sugared drinks and fruit juice intakes, correlations were even > 0.7. For median differences between the 3d EDR and the FFQ, six food groups (potatoes & grains; vegetables; fruit; cheese; meat, game, poultry and fish; and sugared drinks) gave a difference > 20%. The largest corrected correlations (> 0.6) were found for the intake of potatoes and grains, fruit, milk products, cheese, sugared drinks, and fruit juice, while the lowest correlations (< 0.4) were found for bread and meat products. The proportion of subjects classified within one quartile (in the same or adjacent category) by FFQ and EDR ranged from 67% (for meat products) to 88% (for fruit juice). Extreme misclassification into the opposite quartiles was < 10% for all food groups. The results indicate that our newly developed FFQ gives reproducible estimates of food group intake. Overall, moderate levels of relative validity were observed for estimates of food group intake.
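
    The quartile cross-classification used in the validity analysis can be sketched as follows; the rank-based quartile assignment and the toy data in the test are assumptions, not the study's procedure:

```python
def quartile_agreement(x, y):
    """Cross-classification of subjects into quartiles by two methods
    (e.g. FFQ vs. estimated diet records). Returns the fraction classified
    into the same or an adjacent quartile, and the fraction classified
    into opposite (extreme) quartiles."""
    def quartiles(v):
        ranked = sorted(range(len(v)), key=lambda i: v[i])
        q = [0] * len(v)
        for pos, i in enumerate(ranked):
            q[i] = 4 * pos // len(v)  # quartile index 0..3 by rank
        return q
    qx, qy = quartiles(x), quartiles(y)
    n = len(x)
    same_or_adjacent = sum(abs(a - b) <= 1 for a, b in zip(qx, qy)) / n
    opposite = sum({a, b} == {0, 3} for a, b in zip(qx, qy)) / n
    return same_or_adjacent, opposite
```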

  10. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan. Contact: raeece.naeem@gmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © 2012 The Author(s).

  12. Estimation of environment-related properties of chemicals for design of sustainable processes: development of group-contribution+ (GC+) property models and uncertainty analysis.

    Science.gov (United States)

    Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul

    2012-11-26

    The aim of this work is to develop group-contribution+ (GC+) property models (combining the group-contribution (GC) method and the atom connectivity index (CI) method) to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of the estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine the parameters of the property models and an uncertainty analysis step to establish statistical information about the quality of the parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values for a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.), taken from the database of the US Environmental Protection Agency (EPA) and from the USEtox database, are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and the atom connectivity index method have been considered. In total, 22 environment-related properties have been modeled and analyzed, including the fathead minnow 96-h LC50, Daphnia magna 48-h LC50, oral rat LD50, aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, and emissions (carcinogenic and noncarcinogenic) to urban air, continental rural air, continental fresh water, continental seawater, continental natural soil, and continental agricultural soil.
The application
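
    A first-order group-contribution estimate is just a weighted sum of group occurrences. A sketch with hypothetical groups and contribution values; the Marrero-Gani models additionally use higher-order groups and property-specific link functions, which are not reproduced here:

```python
def gc_estimate(group_counts, contributions, universal_const=0.0):
    """First-order group-contribution sketch: the property (or a simple
    function of it, e.g. its logarithm) equals a universal constant plus
    the sum over groups of occurrence count times group contribution."""
    return universal_const + sum(
        n * contributions[g] for g, n in group_counts.items())

# hypothetical groups and contribution values, e.g. for propane CH3-CH2-CH3
prop = gc_estimate({"CH3": 2, "CH2": 1}, {"CH3": 1.5, "CH2": 0.8})
```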

  13. [Estimating non work-related sickness leave absences related to a previous occupational injury in Catalonia (Spain)].

    Science.gov (United States)

    Molinero-Ruiz, Emilia; Navarro, Albert; Moriña, David; Albertí-Casas, Constança; Jardí-Lliberia, Josefina; de Montserrat-Nonó, Jaume

    2015-01-01

    To estimate the frequency of non-work-related sickness absence (ITcc) related to previous occupational injuries with (ATB) or without (ATSB) sick leave. Prospective longitudinal study. Workers with an ATB or ATSB notified to the Occupational Accident Registry of Catalonia were selected in the last term of 2009. They were followed up for six months after returning to work (ATB) or after the accident (ATSB), by sex and occupation. Official labor and health authority registries were used as information sources. An "injury-associated ITcc" was defined as a sick leave occurring in the following six months and within the same diagnosis group. Absolute and relative frequencies were calculated according to time elapsed and duration (cumulative days, measures of central tendency and dispersion), by diagnosis group or affected body area, and compared to all of Catalonia. 2.9% of ATB (n=627) had an injury-associated ITcc, with differences by diagnosis, sex and occupation; this was also the case for 2.1% of ATSB (n=496). With the same diagnosis, the duration of ITcc was longer among those who had an associated injury, also with respect to all of Catalonia. Some of the under-reporting of occupational pathology corresponds to episodes initially recognized as work-related. The duration of sickness absence depends not only on diagnosis and clinical course, but also on criteria established by the entities managing the case. This could imply that more complicated injuries are referred to the national health system, resulting in personal, legal, healthcare and economic consequences for all involved stakeholders. Copyright belongs to the Societat Catalana de Salut Laboral.

  14. Relative Attitude Estimation for a Uniform Motion and Slowly Rotating Noncooperative Spacecraft

    Directory of Open Access Journals (Sweden)

    Liu Zhang

    2017-01-01

    This paper presents a novel relative attitude estimation approach for a uniform-motion, slowly rotating noncooperative spacecraft. The uniform-motion, slowly rotating noncooperative chief spacecraft is assumed to have failed or to be out of control, with no a priori rotation-rate information available. We utilize a very fast binary descriptor based on binary robust independent elementary features (BRIEF) to obtain features of the target that are rotation invariant and resistant to noise. We then propose a novel combination of single-candidate random sample consensus (RANSAC) with an extended Kalman filter (EKF) that makes use of the available prior probabilistic information from the EKF in the RANSAC model-hypothesis stage. This combination reduces the sample size to only one, which results in large computational savings without loss of accuracy. Experimental results from a real image sequence of a model target show that the relative angular error is about 3.5% and the mean angular-velocity error is about 0.1 deg/s.

  15. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Yu Watanabe; Masahito Ueda

    2012-01-01

    When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes an unavoidable state change. Heisenberg discussed a thought experiment on the position measurement of a particle using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not yet established, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually expresses the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. It reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement in quantifying the error and disturbance in the quantum measurement. We clarify the estimation process implicitly involved in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation.
Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the

  16. Calibrated Tully-Fisher relations for improved estimates of disc rotation velocities

    Science.gov (United States)

    Reyes, R.; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.; Lackner, C. N.

    2011-11-01

    In this paper, we derive scaling relations between photometric observable quantities and disc galaxy rotation velocity Vrot or Tully-Fisher relations (TFRs). Our methodology is dictated by our purpose of obtaining purely photometric, minimal-scatter estimators of Vrot applicable to large galaxy samples from imaging surveys. To achieve this goal, we have constructed a sample of 189 disc galaxies at redshifts z < 0.1 with long-slit Hα spectroscopy from Pizagno et al. and new observations. By construction, this sample is a fair subsample of a large, well-defined parent disc sample of ˜170 000 galaxies selected from the Sloan Digital Sky Survey Data Release 7 (SDSS DR7). The optimal photometric estimator of Vrot we find is stellar mass M★ from Bell et al., based on the linear combination of a luminosity and a colour. Assuming a Kroupa initial mass function (IMF), we find: log [V80/(km s-1)] = (2.142 ± 0.004) + (0.278 ± 0.010)[log (M★/M⊙) - 10.10], where V80 is the rotation velocity measured at the radius R80 containing 80 per cent of the i-band galaxy light. This relation has an intrinsic Gaussian scatter ? dex and a measured scatter σmeas= 0.056 dex in log V80. For a fixed IMF, we find that the dynamical-to-stellar mass ratios within R80, (Mdyn/M★)(R80), decrease from approximately 10 to 3, as stellar mass increases from M★≈ 109 to 1011 M⊙. At a fixed stellar mass, (Mdyn/M★)(R80) increases with disc size, so that it correlates more tightly with stellar surface density than with stellar mass or disc size alone. We interpret the observed variation in (Mdyn/M★)(R80) with disc size as a reflection of the fact that disc size dictates the radius at which Mdyn/M★ is measured, and consequently, the fraction of the dark matter 'seen' by the gas at that radius. For the lowest M★ galaxies, we find a positive correlation between TFR residuals and disc sizes, indicating that the total density profile is dominated by dark matter on these scales. For the
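As a worked illustration, the quoted Tully-Fisher estimator can be evaluated directly: it simply inverts log V80 = 2.142 + 0.278 (log M★/M⊙ − 10.10). The helper name below is hypothetical; the coefficients are the paper's quoted best-fit values.

```python
def v80_from_stellar_mass(log_mstar, a=2.142, slope=0.278, pivot=10.10):
    """Sketch of the quoted photometric Tully-Fisher estimator (Kroupa IMF):
    log10 V80 [km/s] = 2.142 + 0.278 * (log10(M*/Msun) - 10.10).
    Returns the rotation velocity V80 in km/s."""
    return 10 ** (a + slope * (log_mstar - pivot))
```

At the pivot mass log M★/M⊙ = 10.10 this gives V80 = 10^2.142 ≈ 139 km/s, and the relation rises monotonically with stellar mass.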

  17. Human comfort and self-estimated performance in relation to indoor environmental parameters and building features

    OpenAIRE

    Frontczak, Monika Joanna; Wargocki, Pawel

    2011-01-01

    The main objective of the Ph.D. study was to examine occupants' perception of comfort and self-estimated job performance in non-industrial buildings (homes and offices), in particular how building occupants understand comfort and which parameters, not necessarily related to indoor environments, influence the perception of comfort. To meet the objective, the following actions were taken: (1) a literature survey exploring which indoor environmental parameters (thermal, acoustic, visual environmen...

  18. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    Science.gov (United States)

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
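The "factoring" strategy described above amounts to expanding an ordered exposure into a set of dichotomous indicator columns (dropping a reference level) before fitting the Poisson regression. A minimal sketch; the helper name is hypothetical.

```python
def factor(values, categories):
    """Expand an ordered exposure into dichotomous indicator columns.

    The first entry of `categories` is treated as the reference level
    and omitted, as is conventional for regression design matrices."""
    return [[1 if v == c else 0 for c in categories[1:]] for v in values]
```

A fitted factored model then estimates one rate parameter per indicator column rather than a single slope, which is what trades extra variance for reduced bias in the simulation study.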

  19. Estimating the relative water content of leaves in a cotton canopy

    Science.gov (United States)

    Vanderbilt, Vern; Daughtry, Craig; Kupinski, Meredith; Bradley, Christine; French, Andrew; Bronson, Kevin; Chipman, Russell; Dahlgren, Robert

    2017-08-01

    Remotely sensing plant canopy water status remains a long-term goal of remote sensing research. Established approaches to estimating canopy water status — the Crop Water Stress Index, the Water Deficit Index and the Equivalent Water Thickness — involve measurements in the thermal or reflective infrared. Here we report plant water status estimates based upon analysis of polarized visible imagery of a cotton canopy measured by the ground-based Multi-Spectral Polarization Imager (MSPI). Such estimators potentially provide access to the plant hydrological photochemistry that manifests scattering and absorption effects in the visible spectral region.

  20. Estimation of radiation burden to relatives of patients treated with radioiodine for cancer therapy

    International Nuclear Information System (INIS)

    Tandon, Pankaj; Rohatgi, Rupali; Gaur, P.K.; Rao, B.S.; Gill, B.S.; Hari Babu, T.; Venkatesh, Meera

    2005-01-01

    Patients treated with radioiodine present a radiation hazard, and precautions are necessary to limit the radiation dose to family members, nursing staff and members of the public. The precautions advised are usually based on instantaneous dose rates or iodine retention and do not take into account the time spent in close proximity to a patient. The purpose of this study was to draw up guidelines based on actual measurements and to confirm whether the present guidelines for discharge of ¹³¹I-treated thyroid cancer patients are adequate. External exposure rates were measured on 37 patients using a calibrated ionization survey meter at the time of discharge from the hospital. The patients and their relatives were given lockets embedded with CaSO₄:Dy thermoluminescent dosimeters at the time of discharge from the hospital, with a chain to be worn around the neck for 15 days. The lockets were collected after a fortnight and read out in a conventional TLD reader. These dose estimates can be used to calculate limits on patient movements so as to keep the doses received by family members below 1 mSv. This study dealt only with external exposure; the problem of internal contamination was not considered. The doses to the patients were also measured in order to estimate the percentage of dose received by their relatives. In most cases, the dose received by the relatives of the patients was more than 1 mSv, which exceeds the limit prescribed by the International Commission on Radiological Protection (ICRP) for the general public. (author)

  1. Relation between electrocardiographic and enzymatic methods of estimating acute myocardial infarct size.

    Science.gov (United States)

    Hindman, N; Grande, P; Harrell, F E; Anderson, C; Harrison, D; Ideker, R E; Selvester, R H; Wagner, G S

    1986-07-01

    The extent of initial acute myocardial infarction (AMI) and subsequent patient prognosis were studied using 2 independent indicators of AMI size. Two inexpensive, readily available techniques, the complete Selvester QRS score from the standard 12-lead electrocardiogram and the peak value of the isoenzyme MB of creatine kinase (CK-MB), were evaluated in 125 patients with initial AMI. The overall correlation between peak CK-MB and QRS score was fair (0.57), with marked difference according to anterior (0.72) or inferior (0.35) location. The prognostic capabilities of each measurement varied. Peak CK-MB provided significant information concerning hospital morbidity or early mortality (within 30 days) for both anterior (χ² = 9.83) and inferior (χ² = 7.68) AMI locations; however, the QRS score was significant only for anterior AMI (χ² = 9.50). For total 24-month mortality, the QRS score alone provided the most information (χ² = 10.0, p = 0.0016), which was not improved with the addition of CK-MB (χ² = 0.07, p = 0.79). This study shows a good relation between these 2 independent estimates of AMI size for patients with anterior AMI location. Both QRS and CK-MB results are significantly related to early morbidity and mortality; however, only the QRS score is related to total 24-month prognosis.

  2. Estimating return periods of extreme values from relatively short time series of winds

    Science.gov (United States)

    Jonasson, Kristjan; Agustsson, Halfdan; Rognvaldsson, Olafur; Arfeuille, Gilles

    2013-04-01

    An important factor for determining the prospect of individual wind farm sites is the frequency of extreme winds at hub height. Here, extreme winds are defined as the highest 10-minute averaged wind speed with a 50-year return period, i.e. an annual exceedance probability of 2% (Rodrigo, 2010). A frequently applied method to estimate winds in the lowest few hundred meters above ground is to extrapolate observed 10-meter winds logarithmically to higher altitudes. A recent study by Drechsel et al. (2012) showed, however, that this methodology is not as accurate as interpolating simulated results from the global ECMWF numerical weather prediction (NWP) model to the desired height. Observations of persistent low-level jets near Colima in SW Mexico also show that the logarithmic approach can give highly inaccurate results for some regions (Arfeuille et al., 2012). To address these shortcomings of limited and/or poorly representative observations and wind extrapolations, one can use NWP models to dynamically downscale relatively coarse-resolution atmospheric analyses. With limited computing resources one typically has to compromise between spatial resolution and the duration of the simulated period, both of which can limit the quality of the wind farm siting. A common method to estimate maximum winds is to fit an extreme value distribution (e.g., Gumbel, GEV or Pareto) to the maximum values of each year of available data, or to the tail of these values. If data are only available for a short period, e.g. 10 or 15 years, then this will give a rather inaccurate estimate. It is possible to deal with this problem by utilizing monthly or weekly maxima, but this introduces new problems: seasonal variation, autocorrelation of neighboring values, and increased discrepancy between data and fitted distribution.
We introduce a new method to estimate return periods of extreme values of winds at hub height from relatively short time series of winds, simulated
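The conventional fitting step described above can be sketched for the Gumbel case: fit the distribution to annual maxima (here by the method of moments, one of several possible fitting choices) and read off the T-year return level. Function and parameter names are illustrative, not the authors' implementation.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_return_level(annual_maxima, T=50):
    """Fit a Gumbel distribution to annual wind maxima by the method of
    moments and return the T-year return level, i.e. the speed exceeded
    with probability 1/T in any given year."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi      # Gumbel scale parameter
    mu = mean - EULER_GAMMA * beta             # Gumbel location parameter
    # Invert the Gumbel CDF at the non-exceedance probability 1 - 1/T
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

With only 10-15 annual maxima, the sampling error in `beta` propagates strongly into the 50-year level, which is precisely the weakness the new method aims to address.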

  3. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Uk [Samsung Electroics, Suwon (Korea, Republic of); Sun, Ju Young; Won, Mooncheol [Chungnam Nat' l Univ., Daejeon (Korea, Republic of)

    2013-12-15

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
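Turning the detected head-and-shoulder box into a range and bearing follows standard pinhole-camera geometry. The sketch below assumes an illustrative focal length and shoulder width; these are hypothetical values, not the paper's calibration.

```python
import math

def person_relative_position(bbox_width_px, bbox_center_x_px,
                             focal_px=800.0, image_cx=320.0,
                             shoulder_width_m=0.46):
    """Pinhole-camera sketch: range from apparent size, bearing from
    horizontal offset. focal_px and shoulder_width_m are assumed values."""
    distance = focal_px * shoulder_width_m / bbox_width_px        # range [m]
    bearing = math.atan2(bbox_center_x_px - image_cx, focal_px)   # angle [rad]
    return distance, bearing
```

A 92-pixel-wide detection centered on the optical axis, for instance, maps to a range of 4 m at zero bearing under these assumptions.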

  4. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    International Nuclear Information System (INIS)

    Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol

    2013-01-01

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.

  5. A framework for estimating radiation-related cancer risks in Japan from the 2011 Fukushima nuclear accident.

    Science.gov (United States)

    Walsh, L; Zhang, W; Shore, R E; Auvinen, A; Laurier, D; Wakeford, R; Jacob, P; Gent, N; Anspaugh, L R; Schüz, J; Kesminiene, A; van Deventer, E; Tritscher, A; del Rosario Pérez, M

    2014-11-01

    We present here a methodology for health risk assessment adopted by the World Health Organization that provides a framework for estimating risks from the Fukushima nuclear accident after the March 11, 2011 Japanese major earthquake and tsunami. Substantial attention has been given to the possible health risks associated with human exposure to radiation from damaged reactors at the Fukushima Daiichi nuclear power station. Cumulative doses were estimated and applied for each post-accident year of life, based on a reference level of exposure during the first year after the earthquake. A lifetime cumulative dose of twice the first-year dose was estimated for the primary radionuclide contaminants (¹³⁴Cs and ¹³⁷Cs), based on Chernobyl data, relative abundances of cesium isotopes, and cleanup efforts. Risks for particularly radiosensitive cancer sites (leukemia, thyroid and breast cancer), as well as the combined risk for all solid cancers, were considered. The male and female cumulative risks of cancer incidence attributed to radiation doses from the accident, for those exposed at various ages, were estimated in terms of the lifetime attributable risk (LAR). Calculations of LAR were based on recent Japanese population statistics for cancer incidence and current radiation risk models from the Life Span Study of Japanese A-bomb survivors. Cancer risks over an initial period of 15 years after first exposure were also considered. LAR results were also given as a percentage of the lifetime baseline risk (i.e., the cancer risk in the absence of radiation exposure from the accident). The LAR results were based on either a reference first-year dose (10 mGy) or a reference lifetime dose (20 mGy) so that risk assessment may be applied for relocated and non-relocated members of the public, as well as for adult male emergency workers. The results show that the major contribution to LAR from the reference lifetime dose comes from the first-year dose.
For a dose of 10 mGy in

  6. L’estime de soi : un cas particulier d’estime sociale ?

    OpenAIRE

    Santarelli, Matteo

    2016-01-01

    One of the most original features of Axel Honneth's intersubjective theory of recognition is the way it discusses the relation between social esteem and self-esteem. In particular, Honneth presents self-esteem as a reflection of social esteem at the individual level. In this article, I discuss this conception by posing the following question: is self-esteem a particular case of social esteem? To do so, I focus on two crucial problems...

  7. Risk Estimates and Risk Factors Related to Psychiatric Inpatient Suicide—An Overview

    Directory of Open Access Journals (Sweden)

    Trine Madsen

    2017-03-01

    People with mental illness have an increased risk of suicide. The aim of this paper is to provide an overview of suicide risk estimates among psychiatric inpatients based on the body of evidence found in the scientific peer-reviewed literature, focusing primarily on the relative risks, rates, time trends, and socio-demographic and clinical risk factors of suicide in psychiatric inpatients. Psychiatric inpatients have a very high risk of suicide relative to the background population, but it remains challenging for clinicians to identify those patients most likely to die by suicide during admission. Most studies are based on low statistical power, compromising quality and generalisability. The few studies with sufficient statistical power mainly identified non-modifiable risk predictors such as male gender, diagnosis, or recent deliberate self-harm; moreover, the predictive value of these predictors is low. It would be of great benefit if future studies were based on large samples while focusing on modifiable predictors over the course of an admission, such as hopelessness, depressive symptoms, and family/social situation. This would improve our chances of developing better risk assessment tools.

  8. Calculation of prevalence estimates through differential equations: application to stroke-related disability.

    Science.gov (United States)

    Mar, Javier; Sainz-Ezkerra, María; Moler-Cuiral, Jose Antonio

    2008-01-01

    Neurological diseases now make up 6.3% of the global burden of disease mainly because they cause disability. To assess disability, prevalence estimates are needed. The objective of this study is to apply a method based on differential equations to calculate the prevalence of stroke-related disability. On the basis of a flow diagram, a set of differential equations for each age group was constructed. The linear system was solved analytically and numerically. The parameters of the system were obtained from the literature. The model was validated and calibrated by comparison with previous results. The stroke prevalence rate per 100,000 men was 828, and the rate for stroke-related disability was 331. The rates steadily rose with age, but the group between the ages of 65 and 74 years had the highest total number of individuals. Differential equations are useful to represent the natural history of neurological diseases and to make possible the calculation of the prevalence for the various states of disability. In our experience, when compared with the results obtained by Markov models, the benefit of the continuous use of time outweighs the mathematical requirements of our model. (c) 2008 S. Karger AG, Basel.
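The flow-diagram approach described above can be sketched as a small compartment model integrated numerically (the paper also solves its linear system analytically). All rates and names below are illustrative placeholders, not the calibrated parameters from the literature.

```python
def stroke_prevalence(years=50, dt=0.1, incidence=0.008,
                      p_disabled=0.4, mort_healthy=0.02, mort_stroke=0.05):
    """Toy compartment model: healthy (H) -> stroke survivor (S) or
    stroke-related disability (D), each with its own mortality rate.
    Euler integration of dH/dt = -(incidence + mort_healthy) * H, etc.
    Returns (stroke prevalence, disability prevalence) among survivors."""
    H, S, D = 1.0, 0.0, 0.0
    for _ in range(int(years / dt)):
        new_strokes = incidence * H
        dH = -(incidence + mort_healthy) * H
        dS = (1 - p_disabled) * new_strokes - mort_stroke * S
        dD = p_disabled * new_strokes - mort_stroke * D
        H += dH * dt
        S += dS * dt
        D += dD * dt
    alive = H + S + D
    return (S + D) / alive, D / alive
```

Running the system per age group, as the paper does, turns the same equations into age-specific prevalence estimates.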

  9. Grid-search Moment Tensor Estimation: Implementation and CTBT-related Application

    Science.gov (United States)

    Stachnik, J. C.; Baker, B. I.; Rozhkov, M.; Friberg, P. A.; Leifer, J. M.

    2017-12-01

    This abstract presents work related to moment tensor estimation for Expert Technical Analysis at the Comprehensive Nuclear-Test-Ban Treaty Organization. In this context of event characterization, estimation of key source parameters provides important insights into the nature of failure in the earth. For example, if the recovered source parameters are indicative of a shallow source with a large isotropic component, then one conclusion is that it is a human-triggered explosive event. However, an important follow-up question in this application is: does an alternative hypothesis, such as a deeper source with a large double-couple component, explain the data approximately as well as the best solution? Here we address the issue of both finding a most likely source and assessing its uncertainty. Using the uniform moment tensor discretization of Tape and Tape (2015), we exhaustively interrogate and tabulate the source eigenvalue distribution (i.e., the source characterization), tensor orientation, magnitude, and source depth. The benefit of the grid search is that we can quantitatively assess the extent to which model parameters are resolved. This provides a valuable opportunity during the assessment phase to focus interpretation on source parameters that are well resolved. Another benefit of the grid search is that it proves to be a flexible framework where different pieces of information can be easily incorporated. To this end, this work is particularly interested in fitting teleseismic body waves and regional surface waves, as well as incorporating teleseismic first motions when available. Because the moment tensor search methodology is well established, we focus primarily on the implementation and application. We present a highly scalable strategy for systematically inspecting the entire model parameter space. We then focus on application to regional and teleseismic data recorded during a handful of natural and anthropogenic events, report on the grid-search optimum, and

  10. Estimates of relative areas for the disposal in bedded salt of LWR wastes from alternative fuel cycles

    International Nuclear Information System (INIS)

    Lincoln, R.C.; Larson, D.W.; Sisson, C.E.

    1978-01-01

    The relative mine-level areas (land use requirements) which would be required for the disposal of light-water reactor (LWR) radioactive wastes in a hypothetical bedded-salt formation have been estimated. Five waste types from alternative fuel cycles have been considered. The relative thermal response of each of five different site conditions to each waste type has been determined. The fuel cycles considered are the once-through (no recycle), the uranium-only recycle, and the uranium and plutonium recycle. The waste types considered include (1) unreprocessed spent reactor fuel, (2) solidified waste derived from reprocessing uranium oxide fuel, (3) plutonium recovered from reprocessing spent reactor fuel and doped with 1.5% of the accompanying waste from reprocessing uranium oxide fuel, (4) waste derived from reprocessing mixed uranium/plutonium oxide fuel in the third recycle, and (5) unreprocessed spent fuel after three recycles of mixed uranium/plutonium oxide fuels. The relative waste-disposal areas were determined from a calculated value of the maximum thermal energy (MTE) content of the geologic formations. Results are presented for each geologic site condition in terms of area ratios. Disposal area requirements for each waste type are expressed as ratios relative to the smallest area requirement (for waste type No. 2 above). For the reference geologic site condition, the estimated mine-level disposal area ratios are 4.9 for waste type No. 1, 4.3 for No. 3, 2.6 for No. 4, and 11 for No. 5.

  11. Patient absorbed radiation doses estimation related to irradiation anatomy

    International Nuclear Information System (INIS)

    Soares, Flavio Augusto Penna; Soares, Amanda Anastacio; Kahl, Gabrielly Gomes

    2014-01-01

    We developed a direct equation to estimate the absorbed dose to the patient in x-ray examinations, using electrical and geometric parameters and filtration combined with data on the irradiated anatomy. To determine the absorbed dose for each examination, the entrance skin dose (ESD) is adjusted to the thickness of the patient's specific anatomy. ESD is calculated from the estimated air KERMA. Beer-Lambert equations, derived from mass attenuation coefficient data obtained from NIST (USA), were developed for each tissue: bone, muscle, fat and skin. Skin thickness was set at 2 mm, and bone thickness was estimated along the central ray of the site in the anteroposterior view. Because they are similar in density and attenuation coefficients, muscle and fat are treated as a single tissue. For evaluation of the full equations, we chose three different anatomies: chest, hand and thigh. Although complex in form, the equations allow direct determination of the absorbed dose from the characteristics of the equipment and patient. The input data are entered once, and the total absorbed dose (mGy) is calculated instantly. The average error, when compared with available data, is less than 5% for any combination of device data and exams. In calculating the dose for an exam and patient, the operator can choose the variables that will deposit less radiation in the patient through prior analysis of each combination of variables, applying the ALARA principle in the routine diagnostic radiology sector.
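The tissue-attenuation step rests on stacked Beer-Lambert factors. A minimal sketch, assuming linear attenuation coefficients are already in hand for each layer (the paper derives them from NIST mass-attenuation data); the coefficient values in the test are placeholders.

```python
import math

def transmitted_fraction(layers):
    """Stacked Beer-Lambert attenuation: I/I0 = exp(-sum(mu_i * x_i)).

    layers: iterable of (linear attenuation coefficient [1/cm],
    thickness [cm]) pairs, one per tissue layer along the central ray."""
    return math.exp(-sum(mu * x for mu, x in layers))
```

Because the exponentials multiply, layers can be composed in any order, and the dose deposited in a given layer follows from the difference between the fractions entering and leaving it.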

  12. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Science.gov (United States)

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  13. Russia-specific relative risks and their effects on the estimated alcohol-attributable burden of disease.

    Science.gov (United States)

    Shield, Kevin D; Rehm, Jürgen

    2015-05-10

    Alcohol consumption is a major risk factor for the burden of disease globally. This burden is estimated using Relative Risk (RR) functions for alcohol from meta-analyses that use data from all countries; however, for Russia and surrounding countries, country-specific risk data may need to be used. The objective of this paper is to compare the estimated burden of alcohol consumption calculated using Russia-specific alcohol RRs with the estimated burden of alcohol consumption calculated using alcohol RRs from meta-analyses. Data for 2012 on drinking indicators were calculated based on the Global Information System on Alcohol and Health. Data for 2012 on mortality, Years of Life Lost, Years Lived with Disability, and Disability-Adjusted Life Years (DALYs) lost by cause were obtained by country from the World Health Organization. Alcohol Population-Attributable Fractions (PAFs) were calculated based on a risk modelling methodology from Russia. These PAFs were compared to PAFs calculated using methods applied for all other countries. The 95% Uncertainty Intervals (UIs) for the alcohol PAFs were calculated using a Monte Carlo-like method. Using Russia-specific alcohol RR functions, in Russia in 2012 alcohol caused an estimated 231,900 deaths (95% UI: 185,600 to 278,200) (70,800 deaths among women and 161,100 deaths among men) and 13,295,000 DALYs lost (95% UI: 11,242,000 to 15,348,000) (3,670,000 DALYs lost among women and 9,625,000 DALYs lost among men) among people 0 to 64 years of age. This compares to an estimated 165,600 deaths (95% UI: 97,200 to 228,100) (29,700 deaths among women and 135,900 deaths among men) and 10,623,000 DALYs lost (95% UI: 7,265,000 to 13,754,000) (1,783,000 DALYs lost among women and 8,840,000 DALYs lost among men) among people 0 to 64 years of age caused by alcohol when non-Russia-specific alcohol RRs were used. 
Results indicate that if the Russia-specific RRs are used when estimating the health burden attributable to alcohol consumption in
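The population-attributable-fraction computation underlying burden estimates of this kind typically uses the standard categorical formula; the sketch below shows that generic formula, not the Russia-specific risk model itself, and the numbers in the test are illustrative.

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Standard categorical PAF:
    PAF = sum_i p_i (RR_i - 1) / (sum_i p_i (RR_i - 1) + 1),
    where p_i is the prevalence of exposure category i and RR_i its
    relative risk. Attributable deaths = PAF * total deaths."""
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalence, relative_risk))
    return excess / (excess + 1.0)
```

Swapping the RR inputs from meta-analytic to Russia-specific functions changes the PAFs, and hence the attributable deaths and DALYs, which is exactly the comparison the paper performs.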

  14. Fall in hematocrit per 1000 parasites cleared from peripheral blood: a simple method for estimating drug-related fall in hematocrit after treatment of malaria infections.

    Science.gov (United States)

    Gbotosho, Grace Olusola; Okuboyejo, Titilope; Happi, Christian Tientcha; Sowunmi, Akintunde

    2014-01-01

    A simple method to estimate the antimalarial drug-related fall in hematocrit (FIH) after treatment of Plasmodium falciparum infections in the field is described. The method involves numeric estimation of the relative difference in hematocrit between baseline (pretreatment) and the first 1 or 2 days after treatment began as the numerator, and the corresponding relative difference in parasitemia as the denominator, expressed per 1000 parasites cleared from peripheral blood. Using the method showed that FIH per 1000 parasites cleared from peripheral blood (cpb) at 24 or 48 hours was similar in artemether-lumefantrine- and artesunate-amodiaquine-treated children (0.09; 95% confidence interval, 0.052-0.138 vs 0.10; 95% confidence interval, 0.069-0.139; P = 0.75). FIH/1000 parasites cpb differed significantly in patients with higher parasitemias, while FIH/1000 parasites cpb was similar in anemic and nonanemic children. Estimation of FIH/1000 parasites cpb is simple, allows estimation of relatively conserved hematocrit during treatment, and can be used in both observational studies and clinical trials involving antimalarial drugs.
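The ratio can be sketched directly from the abstract's definition: relative fall in hematocrit over relative fall in parasitemia, expressed per 1000 parasites cleared. The exact scaling convention and the function name below are assumptions for illustration, not the authors' published formula.

```python
def fih_per_1000_cleared(hct0, hct1, par0, par1):
    """Hedged sketch of FIH/1000 parasites cleared from peripheral blood.

    hct0, hct1: hematocrit (%) at baseline and at day 1 or 2.
    par0, par1: parasitemia (parasites/uL) at the same time points.
    Scaling per 1000 parasites cleared is assumed, not confirmed."""
    rel_fall_hct = (hct0 - hct1) / hct0      # relative hematocrit fall
    rel_fall_par = (par0 - par1) / par0      # relative parasitemia fall
    cleared = par0 - par1                    # parasites cleared per uL
    return (rel_fall_hct / rel_fall_par) * 1000.0 / cleared
```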

  15. A new relation to estimate nuclear radius

    International Nuclear Information System (INIS)

    Singh, M.; Kumar, Pradeep; Singh, Y.; Gupta, K.K.; Varshney, A.K.; Gupta, D.K.

    2013-01-01

    The uncertainty found in Grodzins' semi-empirical relation may be due to its neglect of asymmetry. In the present work we propose a new relation connecting B(E2; 2₁⁺ → 0₁⁺) and E(2₁⁺) with the asymmetry parameter γ.

  16. Urban energy consumption and related carbon emission estimation: a study at the sector scale

    Science.gov (United States)

    Lu, Weiwei; Chen, Chen; Su, Meirong; Chen, Bin; Cai, Yanpeng; Xing, Tao

    2013-12-01

    With rapid economic development and energy consumption growth, China has become the largest energy consumer in the world. Prompted by extensive international concern, there is an urgent need to analyze the characteristics of energy consumption and related carbon emissions, with the objective of saving energy, reducing carbon emissions, and lessening environmental impact. Focusing on urban ecosystems, the biggest energy consumers, a method for estimating energy consumption and related carbon emissions was established at the urban sector scale in this paper. Based on data for 1996-2010, the proposed method was applied to Beijing in a case study to analyze the consumption of different energy resources (i.e., coal, oil, gas, and electricity) and related carbon emissions in different sectors (i.e., the agriculture, industry, construction, transportation, household, and service sectors). The results showed that coal and oil contributed most to energy consumption and carbon emissions among the different energy resources during the study period, while the industrial sector consumed the most energy and emitted the most carbon among the different sectors. Suggestions were put forward for energy conservation and emission reduction in Beijing. The analysis of energy consumption and related carbon emissions at the sector scale is helpful for practical energy saving and emission reduction in urban ecosystems.
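Sector-scale accounting of this kind multiplies each sector's consumption of each fuel by a fuel-specific carbon emission factor and sums over fuels. A minimal sketch; the structure is generic and the factors in the test are placeholders, not the paper's coefficients.

```python
def sector_carbon_emissions(energy_use, emission_factors):
    """Per-sector carbon emissions: sum over fuels of
    (energy consumed by that fuel) * (fuel-specific emission factor).

    energy_use: {sector: {fuel: energy amount}}
    emission_factors: {fuel: carbon emitted per unit energy}"""
    return {
        sector: sum(amount * emission_factors[fuel]
                    for fuel, amount in fuels.items())
        for sector, fuels in energy_use.items()
    }
```

Summing the per-sector results then gives the city-wide total, and comparing sectors reproduces the kind of ranking reported for Beijing.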

  17. An Improved Cluster Richness Estimator

    Energy Technology Data Exchange (ETDEWEB)

    Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.

    2009-08-03

Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²(ln L_X) = (0.86 ± 0.02)² to σ²(ln L_X) = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.

  18. Data Sources for the Model-based Small Area Estimates of Cancer-Related Knowledge - Small Area Estimates

    Science.gov (United States)

    The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).

  19. Temperature-related mortality estimates after accounting for the cumulative effects of air pollution in an urban area.

    Science.gov (United States)

    Stanišić Stojić, Svetlana; Stanišić, Nemanja; Stojić, Andreja

    2016-07-11

We propose a new method for including the cumulative mid-term effects of air pollution in the traditional Poisson regression model, and compare temperature-related mortality risk estimates before and after including air pollution data. The analysis comprised a total of 56,920 residents aged 65 years or older who died from circulatory and respiratory diseases in Belgrade, Serbia, and daily mean PM10, NO2, SO2 and soot concentrations obtained for the period 2009-2014. After accounting for the cumulative effects of air pollutants, the risk associated with cold temperatures was significantly lower and the overall temperature-attributable risk decreased from 8.80% to 3.00%. Furthermore, the optimum temperature range, within which no excess temperature-related mortality is expected to occur, was very broad, between -5 and 21 °C, which differs from previous findings that most of the attributable deaths were associated with mild temperatures. These results suggest that, in polluted areas of developing countries, most of the mortality risk previously attributed to cold temperatures can be explained by the mid-term effects of air pollution. The results also showed that the estimated relative importance of PM10 was the smallest of the four examined pollutant species; thus, including PM10 data only is clearly not the most effective way to control for the effects of air pollution.
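The attributable-risk bookkeeping behind figures like "8.80% of deaths" is a standard calculation: each day contributes deaths × (RR − 1)/RR, and the overall attributable fraction is that sum divided by total deaths. A hedged sketch with invented daily counts and relative risks (not the Belgrade data):

```python
# Overall attributable fraction from day-specific relative risks, the usual
# arithmetic in temperature-attributable mortality studies. The daily death
# counts and RRs below are made-up illustrations.

def attributable_fraction(daily_deaths, daily_rr):
    """sum_d deaths_d * (RR_d - 1)/RR_d, divided by total deaths."""
    attributable = sum(d * (rr - 1.0) / rr for d, rr in zip(daily_deaths, daily_rr))
    return attributable / sum(daily_deaths)

deaths = [30, 28, 35, 40]
rr = [1.00, 1.05, 1.20, 1.10]  # risk relative to the optimum temperature
af = attributable_fraction(deaths, rr)
```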

  20. Wenatchee River steelhead reproductive success - Estimate the relative reproductive success of hatchery and wild steelhead in the Wenatchee River, WA

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project uses genetic parentage analysis to estimate the relative reproductive success of hatchery and wild steelhead spawning in the Wenatchee River, WA. The...

  1. Use of surveillance data on HIV diagnoses with HIV-related symptoms to estimate the number of people living with undiagnosed HIV in need of antiretroviral therapy.

    Science.gov (United States)

    Lodwick, Rebecca K; Nakagawa, Fumiyo; van Sighem, Ard; Sabin, Caroline A; Phillips, Andrew N

    2015-01-01

It is important to have methods available to estimate the number of people who have undiagnosed HIV and are in need of antiretroviral therapy (ART). The method uses the concept that undiagnosed people with a given CD4 count develop AIDS or other HIV-related clinical symptoms, which lead to presentation for care and hence diagnosis of HIV, at a predictable rate. The method requires surveillance data on the number of new HIV diagnoses with HIV-related symptoms, and the CD4 count at diagnosis. The CD4 count-specific rate at which HIV-related symptoms develop is estimated from cohort data. 95% confidence intervals can be constructed using a simple simulation method. For example, if 13 HIV diagnoses with HIV-related symptoms were made in one year with CD4 counts at diagnosis between 150-199 cells/mm³, then, since the CD4 count-specific rate of HIV-related symptoms is estimated as 0.216 per person-year, the estimated number of person-years lived by people with undiagnosed HIV and a CD4 count of 150-199 cells/mm³ is 13/0.216 = 60 (95% confidence interval: 29-100), which is considered an estimate of the number of people living with undiagnosed HIV in this CD4 count stratum. The method is straightforward to implement within a short period once a surveillance system of all new HIV diagnoses, collecting data on HIV-related symptoms at diagnosis, is in place, and is most suitable for estimating the number of undiagnosed people with low CD4 counts, given the lower rate of HIV-related symptoms at higher CD4 counts. A potential source of bias is under-diagnosis and under-reporting of diagnoses with HIV-related symptoms. Although this method has limitations, as do all approaches, it is important for prompting increased efforts to identify undiagnosed people, particularly those with low CD4 count, and for informing levels of unmet need for ART.
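The worked example (13 diagnoses at a symptom rate of 0.216 per person-year giving 13/0.216 ≈ 60 undiagnosed person-years) can be sketched as follows. The Poisson resampling of the observed count is one plausible reading of the abstract's "simple simulation method", an assumption rather than the authors' exact procedure:

```python
import math
import random

RATE = 0.216    # HIV-related symptom rate per person-year (CD4 150-199), from the abstract
DIAGNOSES = 13  # symptomatic diagnoses observed in one year

def poisson_sample(lam, rng):
    # Knuth's multiplication method; adequate for small lambda
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Point estimate: person-years lived with undiagnosed HIV in this stratum
point = DIAGNOSES / RATE  # ~60

# Assumed interval construction: resample the count as Poisson(13), transform,
# and take the 2.5th/97.5th percentiles.
rng = random.Random(1)
sims = sorted(poisson_sample(DIAGNOSES, rng) / RATE for _ in range(10000))
lo, hi = sims[249], sims[9749]
```

With this reading, the simulated interval lands in the same region as the 29-100 quoted in the abstract.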

  2. Associations between Salivary Testosterone Levels, Androgen‐Related Genetic Polymorphisms, and Self‐Estimated Ejaculation Latency Time

    Directory of Open Access Journals (Sweden)

    Patrick Jern, PhD

    2014-08-01

Conclusions: We were unable to find support for the hypothesis suggesting an association between T levels and ELT, possibly because of the low number of phenotypically extreme cases (the sample used in the present study was population based). Our results concerning genetic associations should be interpreted with caution until replication studies have been conducted. Jern P, Westberg L, Ankarberg-Lindgren C, Johansson A, Gunst A, Sandnabba NK, and Santtila P. Associations between salivary testosterone levels, androgen-related genetic polymorphisms, and self-estimated ejaculation latency time. Sex Med 2014;2:107–114.

  3. Estimating the relative nutrient uptake from different soil depths in Quercus robur, Fagus sylvatica and Picea abies

    DEFF Research Database (Denmark)

    Göransson, Hans; Wallander, Håkan; Ingerslev, Morten

    2006-01-01

The distribution of fine roots and external ectomycorrhizal mycelium of three species of trees was determined down to a soil depth of 55 cm to estimate the relative nutrient uptake capacity of the trees from different soil layers. In addition, a root bioassay was performed to estimate the nutrient...

  4. Non-Dipole Features of the Geomagnetic Field May Persist for Millions of Years

    Science.gov (United States)

    Biasi, J.; Kirschvink, J. L.

    2017-12-01

Here we present paleointensity results from within the South Atlantic Anomaly (SAA), which is a large non-dipole feature of the geomagnetic field. Within the area of the SAA, anomalous declinations, inclinations, and intensities are observed. Our results suggest that the SAA has been present for at least 5 Ma. This is orders of magnitude greater than any previous estimate, and suggests that some non-dipole features do not 'average out' over geologic time, which is a fundamental assumption in all paleodirectional studies. The SAA has been steadily growing in size since the first magnetic measurements were made in the South Atlantic, and it is widely believed to have appeared 400 years ago. Recent studies from South Africa (Tarduno et al. (2015)) and Tristan da Cunha (Shah et al. (2016)) have suggested that the SAA has persisted for 1 ka and 96 ka, respectively. We conducted paleointensity (PI) experiments on basaltic lavas from James Ross Island, on the Antarctic Peninsula. This large shield volcano has been erupting regularly over the last 6+ Ma (dated via Ar/Ar geochronology), and therefore contains the most complete volcanostratigraphic record in the south Atlantic. Our PI experiments used the Thellier-Thellier method, the IZZI protocol, and the same selection criteria as the Lawrence et al. (2009) study of Ross Island lavas (near McMurdo Station), which is the only comparable PI study on the Antarctic continent. We determined an average paleointensity at JRI of 13.8 ± 5.2 μT, which is far lower than what we would expect from a dipole field (55 μT). In addition, this is far lower than the current value over James Ross Island of 36 μT. These results support the following conclusions: (1) the time-averaged field model of Juarez et al. (1998) and Tauxe et al. (2013) is strongly favored by our PI data; (2) the SAA has persisted over James Ross Island for at least 5 Ma, and has not drifted significantly over that time; (3) the strength of non-dipole features such as the SAA
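The "expected from a dipole field" comparison follows from the standard geocentric dipole formula B = (μ0·m / 4πR³)·√(1 + 3 sin²λ). A sketch, where the present-day dipole moment (~8 × 10²² Am²) and a James Ross Island latitude of ~64°S are my assumptions, not values from the abstract:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A
R_EARTH = 6.371e6     # Earth radius, m

def dipole_field_uT(moment_Am2, latitude_deg):
    """Surface intensity of a geocentric axial dipole at a given latitude, in uT."""
    lam = math.radians(latitude_deg)
    b_tesla = (MU0 * moment_Am2 / (4 * math.pi * R_EARTH**3)) \
        * math.sqrt(1 + 3 * math.sin(lam) ** 2)
    return b_tesla * 1e6

# Assumed inputs: dipole moment ~8e22 Am^2, latitude ~64 S
b_jri = dipole_field_uT(8e22, -64.0)
```

Under these assumptions the predicted field comes out in the mid-50s of μT, consistent with the ~55 μT dipole expectation quoted against the measured 13.8 μT.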

  5. Lattice energy calculation - A quick tool for screening of cocrystals and estimation of relative solubility. Case of flavonoids

    Science.gov (United States)

    Kuleshova, L. N.; Hofmann, D. W. M.; Boese, R.

    2013-03-01

    Cocrystals (or multicomponent crystals) have physico-chemical properties that are different from crystals of pure components. This is significant in drug development, since the desired properties, e.g. solubility, stability and bioavailability, can be tailored by binding two substances into a single crystal without chemical modification of an active component. Here, the FLEXCRYST program suite, implemented with a data mining force field, was used to estimate the relative stability and, consequently, the relative solubility of cocrystals of flavonoids vs their pure crystals, stored in the Cambridge Structural Database. The considerable potency of this approach for in silico screening of cocrystals, as well as their relative solubility, was demonstrated.
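The screening logic described reduces to comparing the computed lattice energy of the cocrystal against the summed energies of the pure component crystals: a more stable (lower-energy) cocrystal is predicted to form and to be less soluble than the pure forms. A minimal sketch with hypothetical energies; this is not the FLEXCRYST interface:

```python
# Lattice-energy cocrystal screening: the cocrystal is predicted to form when
# its lattice energy is lower (more negative) than the sum for the pure
# crystals; greater stability implies lower relative solubility. Energies
# below are hypothetical and this is not the FLEXCRYST API.

def screen_cocrystal(e_cocrystal, e_pure_a, e_pure_b):
    """Return the stabilization energy (kJ/mol); negative favors the cocrystal."""
    return e_cocrystal - (e_pure_a + e_pure_b)

delta = screen_cocrystal(e_cocrystal=-245.0, e_pure_a=-120.0, e_pure_b=-110.0)
forms = delta < 0  # cocrystal predicted to be the stable (less soluble) form
```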

  6. Simultaneous State and Parameter Estimation Using Maximum Relative Entropy with Nonhomogenous Differential Equation Constraints

    Directory of Open Access Journals (Sweden)

    Adom Giffin

    2014-09-01

Full Text Available In this paper, we continue our efforts to show how maximum relative entropy (MrE) can be used as a universal updating algorithm. Here, our purpose is to tackle a joint state and parameter estimation problem where our system is nonlinear and in a non-equilibrium state, i.e., perturbed by varying external forces. Traditional parameter estimation can be performed by using filters, such as the extended Kalman filter (EKF). However, as shown with a toy example of a system with first-order non-homogeneous ordinary differential equations, assumptions made by the EKF algorithm (such as the Markov assumption) may not be valid. The problem can be solved with exponential smoothing, e.g., the exponentially weighted moving average (EWMA). Although this has been shown to produce acceptable filtering results in real exponential systems, it still cannot simultaneously estimate both the state and its parameters, and has its own assumptions that are not always valid, for example when jump discontinuities exist. We show that by applying MrE as a filter, we can not only develop the closed-form solutions, but we can also infer the parameters of the differential equation simultaneously with the means. This is useful in real, physical systems, where we want to not only filter the noise from our measurements, but also simultaneously infer the parameters of the dynamics of a nonlinear and non-equilibrium system. Although many assumptions were made throughout the paper to illustrate that EKF and exponential smoothing are special cases of MrE, we are not "constrained" by these assumptions. In other words, MrE is completely general and can be used in broader ways.
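The EWMA baseline mentioned above is the recursion s_t = α·x_t + (1 − α)·s_{t−1}; a minimal sketch (the input series is arbitrary illustration, not data from the paper):

```python
# Exponentially weighted moving average: each smoothed value mixes the new
# observation with the previous smoothed value, s_t = a*x_t + (1-a)*s_{t-1}.

def ewma(xs, alpha):
    smoothed = []
    s = xs[0]  # initialize at the first observation
    for x in xs:
        s = alpha * x + (1.0 - alpha) * s
        smoothed.append(s)
    return smoothed

ys = ewma([1.0, 1.0, 10.0, 1.0], alpha=0.5)  # the spike at 10.0 is damped
```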

  7. Authigenic 10Be/9Be ratio signatures of the cosmogenic nuclide production linked to geomagnetic dipole moment variation since the Brunhes/Matuyama boundary.

    Science.gov (United States)

    Simon, Quentin; Thouveny, Nicolas; Bourlès, Didier L; Valet, Jean-Pierre; Bassinot, Franck; Ménabréaz, Lucie; Guillou, Valéry; Choy, Sandrine; Beaufort, Luc

    2016-11-01

Geomagnetic dipole moment variations associated with polarity reversals and excursions are expressed by large changes of the cosmogenic nuclide beryllium-10 (¹⁰Be) production rates. Authigenic ¹⁰Be/⁹Be ratios (a proxy of atmospheric ¹⁰Be production) from oceanic cores therefore complete the classical information derived from relative paleointensity (RPI) records. This study presents new authigenic ¹⁰Be/⁹Be ratio results obtained from cores MD05-2920 and MD05-2930 collected in the west equatorial Pacific Ocean. ¹⁰Be/⁹Be ratios from cores MD05-2920, MD05-2930 and MD90-0961 have been stacked and averaged. Variations of the authigenic ¹⁰Be/⁹Be ratio are analyzed and compared with the geomagnetic dipole low series reported from global RPI stacks. The largest ¹⁰Be overproduction episodes are related to dipole field collapses (below a threshold of 2 × 10²² Am²) associated with the Brunhes/Matuyama reversal, the Laschamp (41 ka) excursion, and the Iceland Basin event (190 ka). Other significant ¹⁰Be production peaks are correlated to geomagnetic excursions reported in the literature. The record was then calibrated by using absolute dipole moment values drawn from the Geomagia and Pint paleointensity databases. The ¹⁰Be-derived geomagnetic dipole moment record, independent from sedimentary paleomagnetic data, covers the Brunhes-Matuyama transition and the whole Brunhes Chron. It provides new and complementary data on the amplitude and timing of millennial-scale geomagnetic dipole moment variations, particularly on dipole moment collapses triggering polarity instabilities.
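Flagging "dipole field collapses" against the 2 × 10²² Am² threshold used in the abstract is mechanical once a calibrated (age, VADM) series exists. A sketch; the series below is invented for illustration, with ages chosen to echo the events named above:

```python
# Flag dipole-moment lows below the 2e22 Am^2 collapse threshold quoted in
# the abstract. The (age, VADM) series is hypothetical illustration only.

THRESHOLD = 2e22  # Am^2

def dipole_lows(ages_ka, vadm_Am2, threshold=THRESHOLD):
    """Return the ages (ka) at which the dipole moment falls below threshold."""
    return [age for age, m in zip(ages_ka, vadm_Am2) if m < threshold]

ages = [30, 41, 100, 190, 780]
vadm = [6e22, 1.2e22, 7e22, 1.5e22, 0.8e22]
lows = dipole_lows(ages, vadm)  # ages of the flagged collapses
```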

  8. On estimates of the pion-nucleon sigma term by the dispersion relations and taking into account the interrelation between the chiral and scale invariance breaking

    International Nuclear Information System (INIS)

    Efrosinin, V.P.; Zaikin, D.A.

    1983-01-01

Possible reasons for the disagreement between estimates of the pion-nucleon σ term obtained by the method of dispersion relations with extrapolation to the Cheng-Dashen point and by alternative methods making no use of such extrapolation are investigated. One of the reasons may be that the πN amplitude is not analytic in the variable t at ν = 0. A method which is not so strongly influenced by the nonanalyticity is suggested to estimate the σ term, making use of the threshold data for the πN amplitude. The relation between the scale and chiral invariance breakings is discussed and the resulting estimate of the σ term is presented. Both estimates give close results (42 and 34 MeV) which do not contradict one another within the uncertainties of the methods.

  9. Estimating alcohol-related premature mortality in san francisco: use of population-attributable fractions from the global burden of disease study

    Directory of Open Access Journals (Sweden)

    Reiter Randy B

    2010-11-01

Full Text Available Abstract Background: In recent years, national and global mortality data have been characterized in terms of well-established risk factors. In this regard, alcohol consumption has been called the third leading "actual cause of death" (modifiable behavioral risk factor) in the United States, after tobacco use and the combination of poor diet and physical inactivity. Globally and in various regions of the world, alcohol use has been established as a leading contributor to the overall burden of disease and as a major determinant of health disparities, but, to our knowledge, no one has characterized alcohol-related harm in such broad terms at the local level. We asked how alcohol-related premature mortality in San Francisco, measured in years of life lost (YLLs), compares with other well-known causes of premature mortality, such as ischemic heart disease or HIV/AIDS. Methods: We applied sex- and cause-specific population-attributable fractions (PAFs) of years of life lost (YLLs) from the Global Burden of Disease Study to 17 comparable outcomes among San Francisco males and females during 2004-2007. We did this in three ways: Method 1 assumed that all San Franciscans drink like populations in developed economies. These estimates were limited to alcohol-related harm. Method 2 modified these estimates by including several beneficial effects. Method 3 assumed that Latino and Asian San Franciscans drink alcohol like populations in the global regions related to their ethnicity. Results: By any of these three methods, alcohol-related premature mortality accounts for roughly a tenth of all YLLs among males. Alcohol-related YLLs among males are comparable to YLLs for leading causes such as ischemic heart disease and HIV/AIDS, in some instances exceeding them. Latino and black males bear a disproportionate burden of harm. Among females, for whom estimates differed more by method and were smaller than those for males, alcohol-related YLLs are comparable to leading
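Applying population-attributable fractions to YLLs, the core arithmetic of the method described, is a per-cause multiplication followed by a sum. A hedged sketch; the causes and numbers are illustrative, not the San Francisco data:

```python
# Alcohol-attributable YLLs: cause-specific YLLs multiplied by the
# corresponding population-attributable fraction (PAF), then summed.
# All figures below are invented for illustration.

def attributable_ylls(ylls_by_cause, paf_by_cause):
    """Total YLLs attributable to the risk factor across causes."""
    return sum(ylls_by_cause[c] * paf_by_cause[c] for c in ylls_by_cause)

ylls = {"ischemic heart disease": 5000.0, "motor vehicle crashes": 2000.0}
pafs = {"ischemic heart disease": 0.05, "motor vehicle crashes": 0.40}
total = attributable_ylls(ylls, pafs)
```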

  10. Resistance–temperature relation and atom cluster estimation of In–Bi system melts

    International Nuclear Information System (INIS)

    Geng, Haoran; Wang Zhiming; Zhou Yongzhi; Li Cancan

    2012-01-01

Highlights: ► A testing device was adopted to measure the electrical resistivity of In-Bi system melts. ► A basically linear relation exists between the resistivity and temperature of In_xBi_(100−x) melts in the measured temperature range. ► Based on Novakovic's assumption, the content of InBi atomic clusters in In_xBi_(100−x) melts is estimated with the equation ρ ≈ ρ_InBi·x_InBi + ρ_m·(1 − x_InBi). - Abstract: A testing device for the resistivity of high-temperature melts was adopted to measure the resistivity of In-Bi system melts at different temperatures. It can be concluded from the analysis and calculation of the experimental results that the resistivity of In_xBi_(100−x) (x = 0-100) melts is in a linear relationship with temperature within the experimental temperature range. The resistivity of the melt decreases with increasing content of In. Fair consistency of the resistivity of In-Bi system melts is found between the heating and cooling processes. On the basis of Novakovic's assumption, we approximately estimated the content of InBi atom clusters in In_xBi_(100−x) melts from the resistivity data with the equation ρ ≈ ρ_InBi·x_InBi + ρ_m·(1 − x_InBi). In the whole composition interval, the content corresponds well with the mole fraction of InBi clusters calculated by Novakovic in the thermodynamic approach. The mole fraction of InBi-type atom clusters in the melts reaches its maximum at the point of the stoichiometric composition In₅₀Bi₅₀.
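The mixing relation quoted above, ρ ≈ ρ_InBi·x_InBi + ρ_m·(1 − x_InBi), inverts directly to x_InBi = (ρ − ρ_m)/(ρ_InBi − ρ_m). A sketch with hypothetical resistivity values (the study's actual values are not given in the abstract):

```python
# Inverting the linear mixing relation from the abstract,
#   rho = rho_InBi * x + rho_m * (1 - x),
# to estimate the InBi cluster fraction x from a measured resistivity.
# Resistivity values below are hypothetical.

def inbi_cluster_fraction(rho, rho_inbi, rho_m):
    """x_InBi = (rho - rho_m) / (rho_InBi - rho_m)."""
    return (rho - rho_m) / (rho_inbi - rho_m)

x = inbi_cluster_fraction(rho=90.0, rho_inbi=120.0, rho_m=80.0)
```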

  11. Petro- and Paleomagnetic Investigations of Tuzla Section Sediments (Krasnodarsk Territory)

    DEFF Research Database (Denmark)

    Pilipenko, Olga; Abrahamsen, N.; Trubikhin, Valerian

    2006-01-01

in this study display an anomalous direction coinciding in time (~25-35 ka) with an anomalous horizon discovered in rocks of the Roxolany section (Ukraine). According to the world time scale of geomagnetic excursions, the anomalous direction correlates with the Mono Lake excursion. A significant correlation... between the time series NRM_0.015/SIRM (Tuzla section) and NRM_250/KB (Roxolany section) in the interval 50-10 ka and the world composite curves VADM-21 and Sint-800 implies that, in this time interval, the curve NRM_0.015/SIRM reflects the variation in the relative paleointensity of the geomagnetic...

  12. Spatially explicit estimation of heat stress-related impacts of climate change on the milk production of dairy cows in the United Kingdom

    Science.gov (United States)

    Topp, Cairistiona F. E.; Moorby, Jon M.; Pásztor, László; Foyer, Christine H.

    2018-01-01

Dairy farming is one of the most important sectors of United Kingdom (UK) agriculture. It faces major challenges due to climate change, which will have direct impacts on dairy cows as a result of heat stress. In the absence of adaptations, this could potentially lead to considerable milk loss. Using an 11-member climate projection ensemble, as well as an ensemble of 18 milk loss estimation methods, temporal changes in milk production of UK dairy cows were estimated for the 21st century at a 25 km resolution in a spatially-explicit way. While increases in UK temperatures are projected to lead to relatively low average annual milk losses, even for southern UK regions (<180 kg/cow), the 'hottest' 25×25 km grid cell in the hottest year in the 2090s showed an annual milk loss exceeding 1300 kg/cow. This figure represents approximately 17% of the potential milk production of today's average cow. Despite the potential considerable inter-annual variability of annual milk loss, as well as the large differences between the climate projections, the variety of calculation methods is likely to introduce even greater uncertainty into milk loss estimations. To address this issue, a novel, more biologically-appropriate mechanism of estimating milk loss is proposed that provides more realistic future projections. We conclude that South West England is the region most vulnerable to climate change economically, because it is characterised by a high dairy herd density and therefore potentially high heat stress-related milk loss. In the absence of mitigation measures, estimated heat stress-related annual income loss for this region by the end of this century may reach £13.4M in average years and £33.8M in extreme years. PMID:29738581
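Milk-loss estimation methods of the kind ensembled in such studies commonly map daily temperature and humidity to a temperature-humidity index (THI) and charge a loss above a threshold. A hedged sketch using one common THI formulation; the loss slope and threshold are illustrative assumptions, not any of the study's 18 methods:

```python
# Hedged sketch of a THI-threshold milk-loss calculation. The THI formula is
# one common formulation; the slope (kg per THI unit per day) and the
# threshold are illustrative assumptions.

def thi(temp_c, rh_pct):
    """Temperature-humidity index from air temperature (C) and relative humidity (%)."""
    return 0.8 * temp_c + (rh_pct / 100.0) * (temp_c - 14.4) + 46.4

def daily_milk_loss_kg(temp_c, rh_pct, slope=0.2, threshold=70.0):
    """Loss accrues only above the THI threshold; zero on cool days."""
    return max(0.0, slope * (thi(temp_c, rh_pct) - threshold))

loss = daily_milk_loss_kg(temp_c=30.0, rh_pct=60.0)  # a hot, humid day
```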

  13. Spatially explicit estimation of heat stress-related impacts of climate change on the milk production of dairy cows in the United Kingdom.

    Directory of Open Access Journals (Sweden)

    Nándor Fodor

Full Text Available Dairy farming is one of the most important sectors of United Kingdom (UK) agriculture. It faces major challenges due to climate change, which will have direct impacts on dairy cows as a result of heat stress. In the absence of adaptations, this could potentially lead to considerable milk loss. Using an 11-member climate projection ensemble, as well as an ensemble of 18 milk loss estimation methods, temporal changes in milk production of UK dairy cows were estimated for the 21st century at a 25 km resolution in a spatially-explicit way. While increases in UK temperatures are projected to lead to relatively low average annual milk losses, even for southern UK regions (<180 kg/cow), the 'hottest' 25×25 km grid cell in the hottest year in the 2090s showed an annual milk loss exceeding 1300 kg/cow. This figure represents approximately 17% of the potential milk production of today's average cow. Despite the potential considerable inter-annual variability of annual milk loss, as well as the large differences between the climate projections, the variety of calculation methods is likely to introduce even greater uncertainty into milk loss estimations. To address this issue, a novel, more biologically-appropriate mechanism of estimating milk loss is proposed that provides more realistic future projections. We conclude that South West England is the region most vulnerable to climate change economically, because it is characterised by a high dairy herd density and therefore potentially high heat stress-related milk loss. In the absence of mitigation measures, estimated heat stress-related annual income loss for this region by the end of this century may reach £13.4M in average years and £33.8M in extreme years.

  14. Uncertainty in estimating and mitigating industrial related GHG emissions

    International Nuclear Information System (INIS)

    El-Fadel, M.; Zeinati, M.; Ghaddar, N.; Mezher, T.

    2001-01-01

    Global climate change has been one of the challenging environmental concerns facing policy makers in the past decade. The characterization of the wide range of greenhouse gas emissions sources and sinks as well as their behavior in the atmosphere remains an on-going activity in many countries. Lebanon, being a signatory to the Framework Convention on Climate Change, is required to submit and regularly update a national inventory of greenhouse gas emissions sources and removals. Accordingly, an inventory of greenhouse gases from various sectors was conducted following the guidelines set by the United Nations Intergovernmental Panel on Climate Change (IPCC). The inventory indicated that the industrial sector contributes about 29% to the total greenhouse gas emissions divided between industrial processes and energy requirements at 12 and 17%, respectively. This paper describes major mitigation scenarios to reduce emissions from this sector based on associated technical, economic, environmental, and social characteristics. Economic ranking of these scenarios was conducted and uncertainty in emission factors used in the estimation process was emphasized. For this purpose, theoretical and experimental emission factors were used as alternatives to default factors recommended by the IPCC and the significance of resulting deviations in emission estimation is presented. (author)

  15. Paleomagnetic evidence for the persistence or recurrence of the South Atlantic geomagnetic Anomaly

    Science.gov (United States)

    Shah, Jay; Koppers, Anthony A. P.; Leitner, Marko; Leonhardt, Roman; Muxworthy, Adrian R.; Heunemann, Christoph; Bachtadse, Valerian; Ashley, Jack A. D.; Matzka, Jürgen

    2017-04-01

The South Atlantic geomagnetic Anomaly (SAA) is known as a region of the geomagnetic field that is approximately 25 μT in intensity, compared to an expected value of ˜43 μT. Geomagnetic field models do not find evidence for the SAA being a persistent feature of the geomagnetic field; however, these models are constructed from paleomagnetic data that are sparse in the southern hemisphere. We present a full-vector paleomagnetic study of 40Ar/39Ar-dated Late Pleistocene lavas from Tristan da Cunha in the South Atlantic Ocean (Shah et al., 2016; EPSL). Thellier-method paleointensity estimates from eight lava flows yield an average paleointensity of 18 ± 6 μT for the Tristan da Cunha lavas and an average virtual axial dipole moment (VADM) of 3.1 ± 1.2 × 10²² Am². Comparing the VADM of the lava flows against the PADM2M, PINT and SINT-800 databases indicates that the lava flows represent four distinct periods of anomalously weak intensity in the South Atlantic between 43 and 90 ka ago, constrained by newly obtained 40Ar/39Ar ages. This anomalously weak intensity in the Late Pleistocene is similar to the present-day SAA and to the SAA-like anomalous behavior found in the recent archeomagnetic study by Tarduno et al. (2015; Nat. Commun.). Our dataset provides evidence for the persistence or recurrence of geomagnetic main field anomalies in the South Atlantic, and potentially indicates that such anomalies are the geomagnetic field manifestation of the long-existing core-mantle boundary heterogeneity seismically identified as the African Large Low Shear Velocity Province (LLSVP).
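The VADM quoted above follows from inverting the geocentric dipole formula: m = 4πR³B / (μ0·√(1 + 3 sin²λ)). A sketch; the Tristan da Cunha latitude of ~37°S is my assumption, while 18 μT is the mean paleointensity from the abstract:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A
R_EARTH = 6.371e6     # Earth radius, m

def vadm(intensity_T, latitude_deg):
    """Virtual axial dipole moment (Am^2) from a paleointensity at a site latitude."""
    lam = math.radians(latitude_deg)
    return (4 * math.pi * R_EARTH**3 * intensity_T
            / (MU0 * math.sqrt(1 + 3 * math.sin(lam) ** 2)))

# 18 uT mean paleointensity (from the abstract); latitude ~37 S (assumed)
m = vadm(18e-6, -37.0)
```

Under these assumptions the result is ≈3.2 × 10²² Am², consistent with the 3.1 ± 1.2 × 10²² Am² reported.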

  16. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher-order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers than the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
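A minimal illustration of the quantity being estimated: for a single binary exposure, the log-link Poisson score equations reduce to the group means, so the Poisson point estimate of the risk ratio is simply the ratio of outcome proportions. This sketch shows only that point estimate on simulated data; the "robust" sandwich variance, and the paper's full simulation design, are omitted:

```python
import math
import random

# Simulated binary outcome with a single binary exposure and true RR = 2.0.
# With a log link and one binary covariate, the Poisson MLE of the risk
# ratio reduces to mean(y | exposed) / mean(y | unexposed).

rng = random.Random(42)
TRUE_RR = 2.0
P0 = 0.10  # baseline risk in the unexposed group

n = 20000
exposed = [rng.random() < 0.5 for _ in range(n)]
y = [int(rng.random() < (P0 * TRUE_RR if e else P0)) for e in exposed]

n1 = sum(exposed)
mean1 = sum(yi for yi, e in zip(y, exposed) if e) / n1
mean0 = sum(yi for yi, e in zip(y, exposed) if not e) / (n - n1)
rr_hat = mean1 / mean0          # risk-ratio point estimate
log_rr = math.log(rr_hat)       # log-scale coefficient, as a regression would report
```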

  17. A fast and simple method to estimate relative, hyphal tensile-strength of filamentous fungi used to assess the effect of autophagy

    DEFF Research Database (Denmark)

    Quintanilla, Daniela; Chelius, Cynthia; Iambamrung, Sirasa

    2018-01-01

Fungal hyphal strength is an important phenotype which can have a profound impact on bioprocess behavior. Until now, there has been no efficient method that allows its characterization. Currently available methods are very time-consuming, compromising their applicability in strain selection and process development. To overcome this issue, a method for fast and easy, statistically-verified quantification of relative hyphal tensile strength was developed. It involves off-line fragmentation in a high-shear mixer followed by quantification of fragment size using laser diffraction. Particle size distribution (PSD) is determined, with analysis time on the order of minutes. Plots of the PSD 90th percentile versus time allow estimation of the specific fragmentation rate. This novel method is demonstrated by estimating relative hyphal strength during growth in control conditions and rapamycin...
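The "PSD 90th percentile versus time" step amounts to tracking d90 across fragmentation times and fitting a slope, which serves as a relative strength measure (weaker mycelium fragments faster). A sketch with an invented d90 series, using an ordinary least-squares slope as the fitted rate:

```python
# Track the 90th percentile of the fragment-size distribution (d90) over
# fragmentation time and take the fitted slope as a relative measure of
# hyphal strength. The d90 series below is invented for illustration.

def least_squares_slope(ts, ys):
    """Ordinary least-squares slope of ys against ts."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

times_s = [0, 60, 120, 180]            # high-shear mixing time, s
d90_um = [520.0, 410.0, 300.0, 190.0]  # hypothetical laser-diffraction d90, um
rate = least_squares_slope(times_s, d90_um)  # um/s; more negative = weaker hyphae
```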

  18. Estimated cases of blindness and visual impairment from neovascular age-related macular degeneration avoided in Australia by ranibizumab treatment.

    Science.gov (United States)

    Mitchell, Paul; Bressler, Neil; Doan, Quan V; Dolan, Chantal; Ferreira, Alberto; Osborne, Aaron; Rochtchina, Elena; Danese, Mark; Colman, Shoshana; Wong, Tien Y

    2014-01-01

    Intravitreal injections of anti-vascular endothelial growth factor agents, such as ranibizumab, have significantly improved the management of neovascular age-related macular degeneration. This study used patient-level simulation modelling to estimate the number of individuals in Australia who would have been likely to avoid legal blindness or visual impairment due to neovascular age-related macular degeneration over a 2-year period as a result of intravitreal ranibizumab injections. The modelling approach used existing data for the incidence of neovascular age-related macular degeneration in Australia and outcomes from ranibizumab trials. Blindness and visual impairment were defined as visual acuity in the better-seeing eye of worse than 6/60 or 6/12, respectively. In 2010, 14,634 individuals in Australia were estimated to develop neovascular age-related macular degeneration who would be eligible for ranibizumab therapy. Without treatment, 2246 individuals would become legally blind over 2 years. Monthly 0.5 mg intravitreal ranibizumab would reduce incident blindness by 72% (95% simulation interval, 70-74%). Ranibizumab given as needed would reduce incident blindness by 68% (64-71%). Without treatment, 4846 individuals would become visually impaired over 2 years; this proportion would be reduced by 37% (34-39%) with monthly intravitreal ranibizumab, and by 28% (23-33%) with ranibizumab given as needed. These data suggest that intravitreal injections of ranibizumab, given either monthly or as needed, can substantially lower the number of cases of blindness and visual impairment over 2 years after the diagnosis of neovascular age-related macular degeneration.

  19. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    Science.gov (United States)

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
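    Under i.i.d. Gaussian noise, the constrained MLE described above reduces to a least-squares fit of the mixture weights over the probability simplex. The sketch below is an illustrative assumption, not the authors' implementation: function names are hypothetical, and a simple projected-gradient solver stands in for whatever convex-optimization routine the paper uses.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (Duchi et al.)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def estimate_abundances(A, y, n_iter=2000):
    """Constrained least squares (Gaussian MLE) for mixture weights w:
    minimize ||A w - y||^2  subject to  w >= 0, sum(w) = 1,
    where columns of A are basis SAXS intensity profiles and y is measured."""
    k = A.shape[1]
    w = np.full(k, 1.0 / k)                     # start from uniform weights
    step = 1.0 / np.linalg.norm(A.T @ A, 2)     # 1 / Lipschitz constant
    for _ in range(n_iter):
        w = project_simplex(w - step * (A.T @ (A @ w - y)))
    return w
```

    On noiseless synthetic data the recovered weights match the true mixture; with noisy intensities the same routine returns the constrained MLE under the Gaussian model.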

  20. Improving Relative Combat Power Estimation: The Road to Victory

    Science.gov (United States)

    2014-06-13

    was unthinkable before. Napoleon Bonaparte achieved a superior warfighting system compared to his opponents, which resulted in SOF. Napoleon's...observations about combat power estimation and force employment, remain valid. Napoleon also offered thoughts about combat power and superiority when he...force. However, Napoleon did not think one-sidedly about the problem. He also said: "The moral is to the physical as three to one."11 This dual

  1. Milestones of mathematical model for business process management related to cost estimate documentation in petroleum industry

    Science.gov (United States)

    Khamidullin, R. I.

    2018-05-01

    The paper is devoted to milestones of the optimal mathematical model for a business process related to cost estimate documentation compiled during construction and reconstruction of oil and gas facilities. It describes the study and analysis of fundamental issues in the petroleum industry, which are caused by economic instability and deterioration of business strategy. Business process management is presented as business process modeling aimed at improving the studied business process, namely the main criteria of optimization and recommendations for improving the above-mentioned business model.

  2. Estimating the mediating effect of different biomarkers on the relation of alcohol consumption with the risk of type 2 diabetes

    NARCIS (Netherlands)

    Beulens, J.W.J.; Schouw, van der Y.T.; Moons, K.G.M.; Boshuizen, H.C.; A, van der D.L.; Groenwold, R.H.H.

    2013-01-01

    Purpose Moderate alcohol consumption is associated with a reduced type 2 diabetes risk, but the biomarkers that explain this relation are unknown. The most commonly used method to estimate the proportion explained by a biomarker is the difference method. However, influence of alcohol–biomarker

  3. Estimation of the Cloud condensation nuclei concentration(CCN) and aerosol optical depth(AOD) relation in the Arctic region

    Science.gov (United States)

    Jung, C. H.; Yoon, Y. J.; Ahn, S. H.; Kang, H. J.; Gim, Y. T.; Lee, B. Y.

    2017-12-01

    Information on the spatial and temporal variations of cloud condensation nuclei (CCN) concentrations is important in estimating aerosol indirect effects. Generally, CCN concentrations are difficult to estimate using remote sensing methods. Although many CCN measurement data exist, extensive measurements of CCN are not feasible because of the complex nature of the operation and high cost, especially in the Arctic region. Thus, there have been many attempts to estimate CCN concentrations from more easily obtainable parameters such as aerosol optical depth (AOD), because AOD has the advantage of being readily observed by remote sensing from space by several sensors. For example, a correlation was derived between AOD and the number concentration of cloud condensation nuclei (CCN) by comparing results from the AERONET network with CCN measurements (Andreae 2009). In this study, a parameterization of CCN concentration as a function of AOD at 500 nm is given for the Arctic region. CCN data were collected during the period 2007-2013 at the Zeppelin observatory (78.91° N, 11.89° E, 474 masl). The AERONET and MODIS AOD data are compared with ground-based CCN measurements, and the relations between AOD and CCN are parameterized. Seasonal characteristics as well as long-term trends are also considered. The measurements show that CCN concentrations remain high during spring because of aerosol transport from the mid-latitudes, known as Arctic Haze. The lowest CCN number densities were observed during Arctic autumn and early winter, when aerosol long-range transport into the Arctic is not effective and new particle formation ceases. The results show that the AOD-CCN relation takes different parameters depending on the seasonal aerosol and CCN characteristics. This seasonally varying CCN-AOD relation can be interpreted in terms of physico-chemical aerosol properties, including aerosol size distribution and composition. Reference: Andreae, M. O. (2009
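    CCN-AOD parameterizations of this kind are commonly fit as a power law, N_CCN ≈ a · AOD^b, via ordinary least squares in log-log space. A minimal sketch (the function name and the parameter values in the example are illustrative, not the study's fitted Arctic coefficients):

```python
import numpy as np

def fit_power_law(aod, ccn):
    """Fit CCN ~= a * AOD**b by linear regression on log-transformed data.
    Returns (a, b): the prefactor and the power-law exponent."""
    b, log_a = np.polyfit(np.log(aod), np.log(ccn), 1)
    return np.exp(log_a), b
```

    Fitting separate (a, b) pairs per season, as the abstract suggests, is then just a matter of partitioning the AOD/CCN pairs by month before calling the fit.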

  4. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    Science.gov (United States)

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

    There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyze a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which the "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of variance of the estimator of the treatment effect for equal to unequal cluster sizes. We discuss a working correlation structure commonly used in CRTs, the exchangeable structure, and derive simpler formulas of RE for continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size due to efficiency loss. Additionally, we also propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
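    For a continuous outcome under an exchangeable working correlation, the GEE variance of the treatment effect is inversely proportional to Σᵢ nᵢ / (1 + (nᵢ − 1)ρ), so the RE of unequal versus equal cluster sizes follows directly. A sketch under those assumptions only (the paper's derivations also cover binary and count outcomes, which this does not reproduce):

```python
def re_unequal_vs_equal(cluster_sizes, rho):
    """RE = Var(treatment effect | equal sizes) / Var(... | unequal sizes),
    assuming a continuous outcome, exchangeable correlation rho, and the
    same total number of subjects split into len(cluster_sizes) clusters.
    The 'information' contributed by a cluster of size n is n/(1+(n-1)*rho)."""
    k = len(cluster_sizes)
    nbar = sum(cluster_sizes) / k
    info_unequal = sum(n / (1 + (n - 1) * rho) for n in cluster_sizes)
    info_equal = k * nbar / (1 + (nbar - 1) * rho)
    return info_unequal / info_equal  # <= 1: efficiency loss when sizes vary
```

    Because n/(1+(n−1)ρ) is concave in n for ρ > 0, Jensen's inequality gives RE ≤ 1, with equality when all clusters are the same size or ρ = 0; the reciprocal 1/RE is the variance-inflation factor behind the paper's adjusted sample size.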

  5. Estimating internal exposure risks by the relative risk and the National Institute of Health risk models

    International Nuclear Information System (INIS)

    Mehta, S.K.; Sarangapani, R.

    1995-01-01

    This paper presents tabulations of risk (R) and person-years of life lost (PYLL) for acute exposures of individual organs at ages 20 and 40 yrs for the Indian and Japanese populations to illustrate the effect of age at exposure in the two models. Results are also presented for the organ-wise nominal probability coefficients (NPC) and PYLL for individual organs for the age-distributed Indian population by the two models. The results presented show that for all organs the estimates of PYLL and NPC for the Indian population are lower than those for the Japanese population by both models except for oesophagus, breast and ovary by the relative risk (RR) model, where the opposite trend is observed. The results also show that the Indian all-cancer value of NPC averaged over the two models is 2.9 × 10⁻² Sv⁻¹, significantly lower than the world average value of 5 × 10⁻² Sv⁻¹ estimated by the ICRP. (author). 9 refs., 2 figs., 2 tabs

  6. Estimating terrestrial aboveground biomass using lidar remote sensing: a meta-analysis

    Science.gov (United States)

    Zolkos, S. G.; Goetz, S. J.; Dubayah, R.

    2012-12-01

    Estimating biomass of terrestrial vegetation is a rapidly expanding research area, but also a subject of tremendous interest for reducing carbon emissions associated with deforestation and forest degradation (REDD). The accuracy of biomass estimates is important in the context of carbon markets emerging under REDD, since areas with more accurate estimates command higher prices, but also for characterizing uncertainty in estimates of carbon cycling and the global carbon budget. There is particular interest in mapping biomass so that carbon stocks and stock changes can be monitored consistently across a range of scales - from relatively small projects (tens of hectares) to national or continental scales - but also so that other benefits of forest conservation can be factored into decision making (e.g. biodiversity and habitat corridors). We conducted an analysis of reported biomass accuracy estimates from more than 60 refereed articles using different remote sensing platforms (aircraft and satellite) and sensor types (optical, radar, lidar), with a particular focus on lidar since those papers reported the greatest efficacy (lowest errors) when used in a synergistic manner with other coincident multi-sensor measurements. We show systematic differences in accuracy between different types of lidar systems flown on different platforms but, perhaps more importantly, differences between forest types (biomes) and plot sizes used for field calibration and assessment. We discuss these findings in relation to monitoring, reporting and verification under REDD, and also in the context of more systematic assessment of factors that influence accuracy and error estimation.

  7. Autistic disorders and schizophrenia: related or remote? An anatomical likelihood estimation.

    Directory of Open Access Journals (Sweden)

    Charlton Cheung

    Full Text Available Shared genetic and environmental risk factors have been identified for autistic spectrum disorders (ASD) and schizophrenia. Social interaction, communication, emotion processing, sensorimotor gating and executive function are disrupted in both, stimulating debate about whether these are related conditions. Brain imaging studies constitute an informative and expanding resource to determine whether the brain structural phenotype of these disorders is distinct or overlapping. We aimed to synthesize existing datasets characterizing ASD and schizophrenia within a common framework, to quantify their structural similarities. In a novel modification of Anatomical Likelihood Estimation (ALE), 313 foci were extracted from 25 voxel-based studies comprising 660 participants (308 ASD, 352 first-episode schizophrenia) and 801 controls. The results revealed that, compared to controls, lower grey matter volumes within limbic-striato-thalamic circuitry were common to ASD and schizophrenia. Unique features of each disorder included lower grey matter volume in amygdala, caudate, frontal and medial gyrus for schizophrenia and putamen for autism. Thus, in terms of brain volumetrics, ASD and schizophrenia have a clear degree of overlap that may reflect shared etiological mechanisms. However, the distinctive neuroanatomy also mapped in each condition raises the question of how this arises in the context of common etiological pressures.

  8. Estimating diversification rates for higher taxa: BAMM can give problematic estimates of rates and rate shifts.

    Science.gov (United States)

    Meyer, Andreas L S; Wiens, John J

    2018-01-01

    Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macro-evolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet, no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree, and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem-group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
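    For stem-group ages, the method-of-moments estimator referred to above is r̂ = ln[n(1 − ε) + ε] / t, where n is extant species richness, t the stem age, and ε an assumed relative extinction fraction (Magallón & Sanderson 2001); with ε = 0 it reduces to ln(n)/t. A minimal sketch:

```python
import math

def ms_net_diversification_stem(n, t, eps=0.0):
    """Method-of-moments net diversification rate from stem age:
    r_hat = ln(n*(1-eps) + eps) / t, with eps the relative extinction
    fraction (extinction rate / speciation rate) assumed a priori."""
    return math.log(n * (1 - eps) + eps) / t
```

    In practice the estimator is usually evaluated at ε = 0 and a high value such as ε = 0.9 to bracket the unknown extinction fraction; higher ε always lowers the estimated net rate.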

  9. NgsRelate

    DEFF Research Database (Denmark)

    Korneliussen, Thorfinn Sand; Moltke, Ida

    2015-01-01

    ... be called with high certainty. RESULTS: We present a software tool, NgsRelate, for estimating pairwise relatedness from NGS data. It provides maximum likelihood estimates that are based on genotype likelihoods instead of genotypes and thereby takes the inherent uncertainty of the genotypes into account. Using both simulated and real data, we show that NgsRelate provides markedly better estimates for low-depth NGS data than two state-of-the-art genotype-based methods. AVAILABILITY: NgsRelate is implemented in C++ and is available under the GNU license at www.popgen.dk/software. CONTACT: ida...

  10. The relative efficiency of three methods of estimating herbage mass ...

    African Journals Online (AJOL)

    The methods involved were randomly placed circular quadrats; randomly placed narrow strips; and disc meter sampling. Disc meter and quadrat sampling appear to be more efficient than strip sampling. In a subsequent small plot grazing trial the estimates of herbage mass, using the disc meter, had a consistent precision ...

  11. Estimation of environment-related properties of chemicals for design of sustainable processes: Development of group-contribution+ (GC+) models and uncertainty analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Kalakul, Sawitree; Sarup, Bent

    2012-01-01

    The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI)) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated...... property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality......, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox is used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22

  12. Decommissioning Cost Estimating - The "PRICE" Approach

    International Nuclear Information System (INIS)

    Manning, R.; Gilmour, J.

    2002-01-01

    Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects from contaminated car parks to nuclear reactors; provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs

  13. Linearized motion estimation for articulated planes.

    Science.gov (United States)

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
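    The linearized equality constraints described above turn motion estimation into a standard equality-constrained least-squares problem, which a Karush-Kuhn-Tucker (KKT) system solves in one linear solve. A generic sketch of that step (not the authors' full homography pipeline; the constraint matrices specific to articulated planes are omitted):

```python
import numpy as np

def lsq_with_equality_constraints(A, b, C, d):
    """Solve  min ||A x - b||^2  subject to  C x = d  via the KKT system:
        [ 2 A^T A   C^T ] [ x      ]   [ 2 A^T b ]
        [    C       0  ] [ lambda ] = [    d    ]
    """
    n = A.shape[1]
    m = C.shape[0]
    K = np.block([[2.0 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # drop the Lagrange multipliers
```

    In the paper's setting, x stacks the motion parameters of all planes and each articulation contributes rows of C, so all planes are estimated simultaneously, which is what lends stability in low-texture regions.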

  14. Relative Wave Energy based Adaptive Neuro-Fuzzy Inference System model for the Estimation of Depth of Anaesthesia.

    Science.gov (United States)

    Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P

    2018-01-01

    The advancement in medical research and intelligent modeling techniques has led to developments in anaesthesia management. The present study is targeted at estimating the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram signal. The information available in electroencephalogram signals during anaesthesia is drawn out by extracting relative wave energy features from the anaesthetic electroencephalogram signals. Discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and relative wave energy is then computed from the approximate and detail coefficients of the sub-band signals. Relative wave energy is extracted to find the degree of importance of different electroencephalogram frequency bands associated with the different anaesthetic phases: awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate between awake, light anaesthesia, moderate anaesthesia and deep anaesthesia. A novel depth of anaesthesia index is generated by implementing an Adaptive neuro-fuzzy inference system based on a fuzzy c-means clustering algorithm which uses relative wave energy features as inputs. Finally, the generated depth of anaesthesia index is compared with a commercially available depth of anaesthesia monitor, the Bispectral index.
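    Relative wave energy as used above is simply the fraction of total signal energy falling in each wavelet sub-band after a four-level decomposition. A minimal sketch; the abstract does not specify the wavelet family, so the orthonormal Haar wavelet is an illustrative choice, and the function name is hypothetical:

```python
import numpy as np

def haar_dwt_relative_energies(signal, levels=4):
    """Four-level Haar DWT of a 1-D signal; returns the relative wave
    energy of each detail sub-band (D1..D4) and the final approximation
    (A4), normalized so the five fractions sum to 1."""
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        pairs = a[: len(a) // 2 * 2].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # high-pass
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)       # low-pass
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(a ** 2))
    total = sum(energies)
    return [e / total for e in energies]
```

    Because the Haar transform is orthonormal, the five band energies partition the total signal energy; the resulting fractions are the features fed to the Kruskal-Wallis test and the neuro-fuzzy index in the study.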

  15. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
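    The three catch-rate estimators compared above differ only in how per-trip interview data are pooled; excluding short trips from the mean-of-ratios gives the third estimator. A minimal sketch (function names are hypothetical):

```python
def ratio_of_means(catches, hours):
    """ROM catch-rate estimator: total catch divided by total effort."""
    return sum(catches) / sum(hours)

def mean_of_ratios(catches, hours, min_hours=0.0):
    """MOR catch-rate estimator: mean of per-trip catch rates, optionally
    excluding short-duration trips (e.g. min_hours=0.5 drops trips <= 0.5 h)."""
    rates = [c / h for c, h in zip(catches, hours) if h > min_hours]
    return sum(rates) / len(rates)
```

    Multiplying either catch-rate estimate by an independent estimate of total angler effort yields total catch, which is the combination the study found least biased when ROM was used.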

  16. Absolute and estimated values of macular pigment optical density in young and aged Asian participants with or without age-related macular degeneration.

    Science.gov (United States)

    Ozawa, Yoko; Shigeno, Yuta; Nagai, Norihiro; Suzuki, Misa; Kurihara, Toshihide; Minami, Sakiko; Hirano, Eri; Shinoda, Hajime; Kobayashi, Saori; Tsubota, Kazuo

    2017-08-29

    Lutein and zeaxanthin are suggested micronutrient supplements to prevent the progression of age-related macular degeneration (AMD), a leading cause of blindness worldwide. To monitor the levels of lutein/zeaxanthin in the macula, macular pigment optical density (MPOD) is measured. A commercially available device (MPSII®, Elektron Technology, Switzerland), using technology based on heterochromatic flicker photometry, can measure both absolute and estimated values of MPOD. However, whether the estimated value is applicable to Asian individuals and/or AMD patients remains to be determined. The absolute and estimated values of MPOD were measured using the MPSII® device in 77 participants with a best-corrected visual acuity (BCVA) > 0.099 (logMAR score). The studied eyes included 17 young (20-29 years) healthy, 26 aged (>50 years) healthy, 18 aged and AMD-fellow, and 16 aged AMD eyes. The mean BCVA among the groups was not significantly different. Both absolute and estimated values were measurable in all eyes of the young healthy group. However, absolute values were measurable in only 57.7%, 66.7%, and 43.8% of the aged healthy, AMD-fellow, and AMD groups, respectively, and 56.7% of the eyes included in the 3 aged groups. In contrast, the estimated value was measurable in 84.6%, 88.9% and 93.8% of the groups, respectively, and 88.3% of eyes in the pooled aged group. The estimated value was correlated with absolute value in individuals from all groups by Spearman's correlation coefficient analyses (young healthy: R² = 0.885, P = 0.0001; aged healthy: R² = 0.765, P = 0.001; AMD-fellow: R² = 0.851, P = 0.0001; and AMD: R² = 0.860, P = 0.013). Using the estimated value, significantly lower MPOD values were found in aged AMD-related eyes, which included both AMD-fellow and AMD eyes, compared with aged healthy eyes by Student's t-test (P = 0.02). The absolute, in contrast to the estimated, value was measurable in a limited number of aged participants

  17. Associations of hypoosmotic swelling test, relative sperm volume shift, aquaporin7 mRNA abundance and bull fertility estimates.

    Science.gov (United States)

    Kasimanickam, R K; Kasimanickam, V R; Arangasamy, A; Kastelic, J P

    2017-02-01

    Mammalian sperm are exposed to a natural hypoosmotic environment during male-to-female reproductive tract transition; although this activates sperm motility in vivo, excessive swelling can harm sperm structure and function. Aquaporins (AQPs) are a family of membrane-channel proteins implicated in sperm osmoregulation. The objective was to determine associations among relative sperm volume shift, hypoosmotic swelling test (HOST), sperm aquaporin (AQP) 7 mRNA abundances, and sire conception rate (SCR; fertility estimate) in Holstein bulls at a commercial artificial insemination center. Three or four sires for each full point SCR score from -4 to +4 were included. Each SCR estimate for study bulls (N = 30) was based on > 500 services (mean ± SEM, 725 ± 13 services/sire). Sperm from a single collection day (two ejaculates) from these commercial Holstein bulls were used. Relative mRNA expression of AQP7 in sperm was determined by polymerase chain reaction. Mean relative sperm volume shift and percentage of sperm reacted in a HOST (% HOST) were determined (400 sperm per bull) after incubating in isoosmotic (300 mOsm/kg) and hypoosmotic (100 mOsm/kg) solutions for 30 min. There was no correlation between %HOST and SCR (r = 0.28, P > 0.1). However, there was a positive correlation between relative sperm volume shift and SCR (r = 0.65, P 2) fertility sire groups. In conclusion, bulls with higher SCR had significantly greater AQP7 mRNA abundance in frozen-thawed sperm. This plausibly contributed to greater regulation of sperm volume shift, which apparently conferred protection from detrimental swelling and impaired functions. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    International Nuclear Information System (INIS)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2

  19. Estimating differential reproductive success from nests of related individuals, with application to a study of the Mottled Sculpin, Cottus bairdi

    Science.gov (United States)

    Beatrix Jones; Gary D. Grossman; Daniel C.I. Walsh; Brady A. Porter; John C. Avise; Anthony C. Flumera

    2007-01-01

    Understanding how variation in reproductive success is related to demography is a critical component in understanding the life history of an organism. Parentage analysis using molecular markers can be used to estimate the reproductive success of different groups of individuals in natural populations. Previous models have been developed for cases where offspring are...

  20. In Search of a Dipole Field during the Plio-Pleistocene

    Science.gov (United States)

    Asefaw, H. F.; Tauxe, L.; Staudigel, H.; Shaar, R.; Cai, S.; Cromwell, G.; Behar, N.; Koppers, A. A. P.

    2017-12-01

    A geocentric axial dipole (GAD) field accounts for the majority of the modern field and is assumed to be a good first-order approximation for the time-averaged ancient field. A GAD field predicts a latitudinal dependence of intensity. Given this relationship, the intensity of the field measured at the North and South poles should be twice as strong as the intensity recorded at the equator. The current paleointensity database - archived at both http://earth.liv.ac.uk/pint/ and http://earthref.org/MagIC - shows no such dependency over the last 5 Myr (e.g. Lawrence et al., 2009, doi: 10.1029/2008GC002072; Cromwell et al., 2015, doi: 10.1002/2014JB011828). In order to investigate whether better experimental protocol or data selection approaches could resolve the problem, we: 1) applied a new data selection protocol (CCRIT) which has recovered historical field values with high precision and accuracy (Cromwell et al., 2015), 2) re-sampled the fine-grained tops of lava flows in Antarctica (77.9° S) that were previously studied for paleodirections but failed to meet our strict selection criteria, 3) sampled cinder cones in the Golan Heights (33.08° N), and 4) acquired data from lava flows from the HSDP2 drill core in Hawaii (19.71° N). New and published Ar-Ar dates demonstrate that all the samples formed in the last 5 Myr. We conducted IZZI-modified Thellier-Thellier experiments and then calculated paleointensities from the samples that passed a set of strict selection criteria. After applying the CCRIT criteria to our data, we find a time-averaged paleointensity of 35.7 ± 6.86 μT in the Golan Heights, 34.5 μT in Hawaii, and 34.22 ± 3.4 μT in Antarctica. New results from Iceland (64° N), published by Cromwell et al. (2015, doi: 10.1002/2014JB011828), also pass the CCRIT criteria and record an average intensity of 33.1 ± 8.3 μT. The average paleointensities from the Golan Heights, Antarctica, Iceland and Hawaii, that span the last 5 Myr and pass the CCRIT criteria
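    The latitudinal dependence invoked above is B(λ) = B_eq · √(1 + 3 sin²λ) for a GAD field, which gives exactly a factor of two between the equator and the poles. A minimal sketch (the function name is illustrative):

```python
import math

def gad_intensity(b_eq, lat_deg):
    """Surface field intensity predicted by a geocentric axial dipole:
    B(lat) = B_eq * sqrt(1 + 3*sin(lat)^2), with B_eq the equatorial
    intensity (same units in, same units out, e.g. microtesla)."""
    lam = math.radians(lat_deg)
    return b_eq * math.sqrt(1.0 + 3.0 * math.sin(lam) ** 2)
```

    Comparing site means such as the ~34 μT values at 33° N, 64° N and 78° S against this curve is exactly the test of the GAD prediction the abstract describes.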

  1. Edge type affects leaf-level water relations and estimated transpiration of Eucalyptus arenacea.

    Science.gov (United States)

    Wright, Thomas E; Tausz, Michael; Kasel, Sabine; Volkova, Liubov; Merchant, Andrew; Bennett, Lauren T

    2012-03-01

    While edge effects on tree water relations are well described for closed forests, they remain under-examined in more open forest types. Similarly, there has been minimal evaluation of the effects of contrasting land uses on the water relations of open forest types in highly fragmented landscapes. We examined edge effects on the water relations and gas exchange of a dominant tree (Eucalyptus arenacea Marginson & Ladiges) in an open forest type (temperate woodland) of south-eastern Australia. Edge effects in replicate woodlands adjoined by cleared agricultural land (pasture edges) were compared with those adjoined by 7- to 9-year-old eucalypt plantation with a 25 m fire break (plantation edges). Consistent with studies in closed forest types, edge effects were pronounced at pasture edges, where photosynthesis, transpiration and stomatal conductance were greater for edge trees than interior trees (75 m into woodlands), and were related to greater light availability and significantly higher branch water potentials at woodland edges than interiors. Nonetheless, gas exchange values were only ∼50% greater for edge than interior trees, compared with ∼200% previously found in closed forest types. In contrast to woodlands adjoined by pasture, gas exchange in winter was significantly lower for edge than interior trees in woodlands adjoined by plantations, consistent with shading and buffering effects of plantations on edge microclimate. Plantation edge effects were less pronounced in summer, although higher water use efficiency of edge than interior woodland trees indicated possible competition for water between plantation trees and woodland edge trees in the drier months (an effect that might have been more pronounced were there no firebreak between the two land uses). Scaling up of leaf-level water relations to stand transpiration using a Jarvis-type phenomenological model indicated similar differences between edge types. That is, transpiration was greater at pasture than
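
In a Jarvis-type phenomenological model, a maximum stomatal conductance is reduced by multiplicative environmental response functions (light, vapour pressure deficit, temperature). A minimal sketch with invented response forms and parameter values, not those fitted in the study:

```python
import math

def jarvis_conductance(par, vpd, t_air,
                       g_max=0.012, k_par=100.0, k_vpd=0.35,
                       t_opt=25.0, t_width=12.0):
    """Jarvis-type multiplicative stomatal conductance (m/s).

    Each environmental factor scales g_max independently; all parameter
    values here are illustrative assumptions.
    """
    f_par = par / (par + k_par)                        # light saturation
    f_vpd = math.exp(-k_vpd * vpd)                     # vapour pressure deficit
    f_t = math.exp(-((t_air - t_opt) / t_width) ** 2)  # temperature optimum
    return g_max * f_par * f_vpd * f_t

# Edge trees see more light, hence higher conductance and transpiration:
g_edge = jarvis_conductance(par=1500.0, vpd=1.0, t_air=25.0)
g_interior = jarvis_conductance(par=300.0, vpd=1.0, t_air=25.0)
```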

  2. Estimating the clinical benefits of vaccinating boys and girls against HPV-related diseases in Europe

    International Nuclear Information System (INIS)

    Marty, Rémi; Roze, Stéphane; Bresse, Xavier; Largeron, Nathalie; Smith-Palmer, Jayne

    2013-01-01

    HPV is related to a number of cancer types, causing a considerable burden in both genders in Europe. Female vaccination programs can substantially reduce the incidence of HPV-related diseases in women and, to some extent, in men through herd immunity. The objective was to estimate the incremental benefit of vaccinating boys and girls using the quadrivalent HPV vaccine in Europe versus girls-only vaccination. Incremental benefits in terms of reduction in the incidence of HPV 6, 11, 16 and 18-related diseases (including cervical, vaginal, vulvar, anal, penile, and head and neck carcinomas and genital warts) were assessed. The analysis was performed using a model constructed in Microsoft® Excel, based on a previously published dynamic transmission model of HPV vaccination and published European epidemiological data on the incidence of HPV-related diseases. The incremental benefits of vaccinating 12-year-old girls and boys versus girls-only vaccination were assessed (70% vaccine coverage was assumed for both). Sensitivity analyses around vaccine coverage and duration of protection were performed. Compared with screening alone, girls-only vaccination led to an 84% reduction in HPV 16/18-related carcinomas in females and a 61% reduction in males. Vaccination of girls and boys led to a 90% reduction in HPV 16/18-related carcinomas in females and an 86% reduction in males versus screening alone. Relative to a girls-only program, vaccination of girls and boys led to a reduction in female and male HPV-related carcinomas of 40% and 65%, respectively, and a reduction in the incidence of HPV 6/11-related genital warts of 58% for females and 71% for males. In Europe, the vaccination of 12-year-old boys and girls against HPV 6, 11, 16 and 18 would be associated with substantial additional clinical benefits in terms of reduced incidence of HPV-related genital warts and carcinomas versus girls-only vaccination.
The incremental benefits of adding boys vaccination are

  3. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields. Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th
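
For a Gaussian response with identity link, solving the GEE with an exchangeable working correlation amounts to iterated generalized least squares: fit, re-estimate the common within-cluster correlation from residuals, refit. A self-contained numpy sketch (the function name, moment estimator for rho, and iteration count are assumptions of mine, not the book's implementation):

```python
import numpy as np

def gee_exchangeable(y, X, groups, n_iter=10):
    """Gaussian GEE with an exchangeable working correlation.

    y: (n,) responses; X: (n, p) design; groups: (n,) cluster labels.
    Returns the coefficient vector and the estimated common
    within-cluster correlation rho.
    """
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting values
    rho = 0.0
    for _ in range(n_iter):
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - p)
        # Moment estimate of rho from within-cluster residual products.
        num, pairs = 0.0, 0
        for g in np.unique(groups):
            r = resid[groups == g]
            m = len(r)
            num += (r.sum() ** 2 - (r ** 2).sum()) / 2.0
            pairs += m * (m - 1) // 2
        rho = num / (pairs * sigma2)
        # Solve sum_i X_i' V_i^{-1} X_i beta = sum_i X_i' V_i^{-1} y_i
        A, b = np.zeros((p, p)), np.zeros(p)
        for g in np.unique(groups):
            idx = groups == g
            m = int(idx.sum())
            V = sigma2 * ((1 - rho) * np.eye(m) + rho * np.ones((m, m)))
            Vi = np.linalg.inv(V)
            A += X[idx].T @ Vi @ X[idx]
            b += X[idx].T @ Vi @ y[idx]
        beta = np.linalg.solve(A, b)
    return beta, rho
```

Ignoring the correlation (plain OLS) would still give consistent coefficients here, but the working correlation improves efficiency, which is the point of GEE.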

  4. Heritability estimates for yield and related traits in bread wheat

    International Nuclear Information System (INIS)

    Din, R.; Jehan, S.; Ibraullah, A.

    2009-01-01

    A set of 22 experimental wheat lines along with four check cultivars was evaluated in irrigated and unirrigated environments with the objectives of determining genetic and phenotypic variation and heritability estimates for yield and its related traits. The two environments were statistically at par for physiological maturity, plant height, spikes m⁻², spikelets spike⁻¹ and 1000-grain weight. Highly significant genetic variability existed among wheat lines (P < 0.01) in the combined analysis across the two test environments for all traits except 1000-grain weight. Genotype x environment interactions were non-significant for all traits, indicating consistent performance of the lines in the two test environments. However, lines and check cultivars matured two to five days earlier under the unirrigated environment. Plant height, spikes m⁻² and 1000-grain weight were also reduced under the unirrigated environment. Genetic variances were greater than environmental variances for most traits. Heritability estimates were high (0.74 to 0.96) for plant height, medium (0.31 to 0.56) for physiological maturity, spikelets spike⁻¹ (unirrigated) and 1000-grain weight, and low for spikes m⁻². (author)
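
Heritability in the broad sense is the share of phenotypic variance attributable to genotype, H² = V_G / (V_G + V_E). A one-line sketch (the variance components below are invented for illustration, not taken from the paper):

```python
def broad_sense_heritability(v_g, v_e):
    """Broad-sense heritability H^2 = V_G / (V_G + V_E)."""
    return v_g / (v_g + v_e)

# A trait with genotypic variance 48 and environmental variance 16 falls
# in the paper's "high" band (0.74 to 0.96):
h2 = broad_sense_heritability(48.0, 16.0)  # 0.75
```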

  5. Estimation of population dose from all sources in Japan

    International Nuclear Information System (INIS)

    Kusama, Tomoko; Nakagawa, Takeo; Kai, Michiaki; Yoshizawa, Yasuo

    1988-01-01

    The purposes of estimating population doses are to understand the per-caput dose to members of the public from each artificial radiation source and to determine the proportion that each individual source contributes to the dose to the total irradiated population. We divided population doses into two categories: individual-related and source-related population doses. The individual-related population dose is estimated on the basis of maximum assumptions, for use in allocating the dose limits for members of the public. The source-related population dose is estimated both to justify the sources and practices and to optimize radiation protection. The source-related population dose, therefore, should be estimated as realistically as possible. We investigated all sources that caused exposure of the population in Japan from the above points of view.

  6. Estimation of Environment-Related Properties of Chemicals for Design of Sustainable Processes: Development of Group-Contribution+ (GC+) Property Models and Uncertainty Analysis

    Science.gov (United States)

    The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncert...

  7. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    Science.gov (United States)

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
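
ERSA's core idea, that the number and lengths of IBD segments carry information about relationship degree, can be caricatured with a toy likelihood: treat observed segment lengths as exponential with mean 100/m cM for relatives separated by m meioses, and pick the m that maximises the likelihood. This simplification (constant rate, no correction for detection power, invented function name) is mine, not the authors' model:

```python
import math

def estimate_meioses(segment_lengths_cm, max_m=12):
    """Toy maximum-likelihood estimate of the number of meioses m
    separating two relatives, assuming IBD segment lengths (in cM)
    are exponential with mean 100/m.
    """
    n, total = len(segment_lengths_cm), sum(segment_lengths_cm)
    best_m, best_ll = 1, -math.inf
    for m in range(1, max_m + 1):
        rate = m / 100.0
        ll = n * math.log(rate) - rate * total  # exponential log-likelihood
        if ll > best_ll:
            best_m, best_ll = m, ll
    return best_m

# Long shared segments imply a close relationship; short ones, a distant one:
close_pair = estimate_meioses([50.0, 50.0])            # 2 meioses
distant_pair = estimate_meioses([20.0, 25.0, 30.0, 25.0])  # 4 meioses
```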

  8. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
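
The non-parametric flavour of such an assessment can be sketched by subsampling a complete series at a fixed interval and measuring how far the subsample mean strays from the true mean across all sampling phases. This is a toy stand-in for the radar-based evaluation; the function and all parameters are illustrative:

```python
import numpy as np

def sampling_uncertainty(series, every_k):
    """Relative rms error of the time-mean when the series is sampled
    only every `every_k` steps, evaluated over all sampling phases.
    """
    true_mean = series.mean()
    errs = [series[phase::every_k].mean() - true_mean
            for phase in range(every_k)]
    return float(np.sqrt(np.mean(np.square(errs))) / true_mean)

# An intermittent, strongly variable "rainfall" series sampled sparsely:
rng = np.random.default_rng(0)
rain = rng.exponential(1.0, size=720) * (rng.random(720) < 0.3)
u6 = sampling_uncertainty(rain, 6)
```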

  9. Population estimates of extended family structure and size.

    Science.gov (United States)

    Garceau, Anne; Wideroff, Louise; McNeel, Timothy; Dunn, Marsha; Graubard, Barry I

    2008-01-01

    Population-based estimates of biological family size can be useful for planning genetic studies, assessing how distributions of relatives affect disease associations with family history and estimating prevalence of potential family support. Mean family size per person is estimated from a population-based telephone survey (n = 1,019). After multivariate adjustment for demographic variables, older and non-White respondents reported greater mean numbers of total, first- and second-degree relatives. Females reported more total and first-degree relatives, while less educated respondents reported more second-degree relatives. Demographic differences in family size have implications for genetic research. Therefore, periodic collection of family structure data in representative populations would be useful. Copyright 2008 S. Karger AG, Basel.

  10. Body composition estimation from selected slices

    DEFF Research Database (Denmark)

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara

    2017-01-01

    Background Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total...

  11. Estimating the Effect and Economic Impact of Absenteeism, Presenteeism, and Work Environment-Related Problems on Reductions in Productivity from a Managerial Perspective.

    Science.gov (United States)

    Strömberg, Carl; Aboagye, Emmanuel; Hagberg, Jan; Bergström, Gunnar; Lohela-Karlsson, Malin

    2017-09-01

    The aim of this study was to propose wage multipliers that can be used to estimate the costs of productivity loss for employers in economic evaluations, using detailed information from managers. Data were collected in a survey panel of 758 managers from different sectors of the labor market. Based on assumed scenarios of a period of absenteeism due to sickness, presenteeism and work environment-related problem episodes, and specified job characteristics (i.e., explanatory variables), managers assessed their impact on group productivity and cost (i.e., the dependent variable). In an ordered probit model, the extent of productivity loss resulting from job characteristics is predicted. The predicted values are used to derive wage multipliers based on the cost of productivity estimates provided by the managers. The results indicate that job characteristics (i.e., degree of time sensitivity of output, teamwork, or difficulty in replacing a worker) are linked to productivity loss as a result of health-related and work environment-related problems. The impact of impaired performance on productivity differs among various occupations. The mean wage multiplier is 1.97 for absenteeism, 1.70 for acute presenteeism, 1.54 for chronic presenteeism, and 1.72 for problems related to the work environment. This implies that the costs of health-related and work environment-related problems to organizations can exceed the worker's wage. The use of wage multipliers is recommended for calculating the cost of health-related and work environment-related productivity loss to properly account for actual costs. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
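
Applying the reported multipliers is simple arithmetic: the cost of an episode is the wage bill for the lost time scaled by the relevant multiplier. A sketch (the multiplier values come from the abstract; the five-day episode and 200-unit daily wage are invented):

```python
# Mean wage multipliers reported in the abstract:
MULTIPLIERS = {
    "absenteeism": 1.97,
    "acute_presenteeism": 1.70,
    "chronic_presenteeism": 1.54,
    "work_environment": 1.72,
}

def productivity_loss_cost(days, daily_wage, episode):
    """Cost to the employer = lost wage bill scaled by the multiplier."""
    return days * daily_wage * MULTIPLIERS[episode]

# A five-day sickness absence at a daily wage of 200 costs the employer
# nearly twice the wages paid out for that period:
cost = productivity_loss_cost(5, 200.0, "absenteeism")
```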

  12. Noninvasive Vascular Displacement Estimation for Relative Elastic Modulus Reconstruction in Transversal Imaging Planes

    Directory of Open Access Journals (Sweden)

    Chris L. de Korte

    2013-03-01

    Full Text Available Atherosclerotic plaque rupture can initiate stroke or myocardial infarction. Lipid-rich plaques with thin fibrous caps have a higher risk of rupture than fibrotic plaques. Elastic moduli differ for lipid-rich and fibrous tissue and can be reconstructed using tissue displacements estimated from intravascular ultrasound radiofrequency (RF) data acquisitions. This study investigated whether modulus reconstruction is possible for noninvasive RF acquisitions of vessels in transverse imaging planes using an iterative 2D cross-correlation based displacement estimation algorithm. Furthermore, since it is known that displacements can be improved by compounding of displacements estimated at various beam steering angles, we compared the performance of the modulus reconstruction with and without compounding. For the comparison, simulated and experimental RF data were generated for various vessel-mimicking phantoms. Reconstruction errors were less than 10%, which seems adequate for distinguishing lipid-rich from fibrous tissue. Compounding outperformed single-angle reconstruction: the interquartile range of the reconstructed moduli for the various homogeneous phantom layers was approximately two times smaller. Additionally, the estimated lateral displacements were a factor of 2–3 better matched to the displacements corresponding to the reconstructed modulus distribution. Thus, noninvasive elastic modulus reconstruction is possible for transverse vessel cross sections using this cross-correlation method and is more accurate with compounding.
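
The displacement-estimation step rests on 2D cross-correlation: a block from the pre-deformation frame is compared against shifted candidate blocks in the post-deformation frame, and the shift maximising the normalised cross-correlation is taken as the displacement. A single-pass, integer-pixel sketch (the paper's method is iterative and subsample-accurate; this simplified version and its names are mine):

```python
import numpy as np

def block_match(pre, post, y, x, half=4, search=3):
    """Integer displacement of the block centred at (y, x) between frames
    `pre` and `post`, by maximising normalised cross-correlation over a
    +/- `search` pixel window.
    """
    ref = pre[y - half:y + half + 1, x - half:x + half + 1].ravel()
    ref = ref - ref.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = post[y + dy - half:y + dy + half + 1,
                        x + dx - half:x + dx + half + 1].ravel()
            cand = cand - cand.mean()
            denom = np.linalg.norm(ref) * np.linalg.norm(cand)
            score = ref @ cand / denom if denom else -np.inf
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

In practice such integer estimates are refined iteratively and to subsample precision, and, as the study shows, compounded over beam steering angles.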

  13. UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY

    Directory of Open Access Journals (Sweden)

    Jean-Paul Jernot

    2011-05-01

    Full Text Available This paper deals with the estimation of the specific connectivity of a stationary random set in IR^d. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a Boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally, an example from quantitative metallography (the specific connectivity of a population of sintered bronze particles) is given.

  14. Observing expertise-related actions leads to perfect time flow estimations.

    Directory of Open Access Journals (Sweden)

    Yin-Hua Chen

    Full Text Available The estimation of the time of exposure of a picture portraying an action increases as a function of the amount of movement implied in the action represented. This effect suggests that the perceiver creates an internal embodiment of the action observed, as if internally simulating the entire movement sequence. Little is known, however, about the timing accuracy of these internal action simulations, specifically whether they are affected by the level of familiarity and experience that the observer has with the action. In this study we asked professional pianists to reproduce different durations of exposure (shorter or longer than one second) of visual displays both specific (a hand in piano-playing action) and non-specific to their domain of expertise (a hand in finger-thumb opposition, and scrambled pixels) and compared their performance with non-pianists. Pianists outperformed non-pianists independently of the time of exposure of the stimuli; remarkably, the group difference was particularly magnified by the pianists' enhanced accuracy and stability only when observing the hand in the act of playing the piano. These results for the first time provide evidence that through musical training, pianists create a selective and self-determined dynamic internal representation of an observed movement that allows them to estimate its temporal duration precisely.

  15. The age estimation practice related to illegal unaccompanied minors immigration in Italy.

    Science.gov (United States)

    Pradella, F; Pinchi, V; Focardi, M; Grifoni, R; Palandri, M; Norelli, G A

    2017-12-01

    The number of migrants arriving on the Italian coasts in 2016 was 181,436, 18% more than the previous year and 6% more than the highest number ever recorded. An "unaccompanied minor" (UAM) is a third-country national or a stateless person under eighteen years of age who arrives on the territory of a Member State unaccompanied by an adult responsible for him/her, whether by law or by the practice of the Member State concerned, and for as long as he or she is not effectively taken into the care of such a person; the term includes a minor who is left unaccompanied after entering the territory of the Member States. As many as 95,985 UAMs applied for international protection in an EU member country in 2015 alone, almost four times the number registered in the previous year. The UAMs who arrived in Italy numbered 28,283 in 2016; 94% of them were male, 92% unaccompanied and 8% under 15; 53.6% were aged 17, and 82% were aged between 16 and 17. Many of them (50%, or 6,561 in 2016) escaped from the reception facilities, thus avoiding being formally identified and registered in Italy in the attempt to reach northern European countries more easily, since the Dublin Regulations (2003) state that an asylum application should be processed in the EU country of entrance or where the applicant's parents reside. Age assessment procedures can therefore be considered a relevant task that falls on the shoulders of forensic experts, with all the related issues, and the coming of age is the important threshold. In EU laws on asylum, minors are considered one of the groups of vulnerable persons towards whom Member States have specific obligations. A proper common EU formal regulation on age estimation procedures is still lacking. 
    According to the Italian legal framework in the matter, a medical examination should always have been performed, but a new law has completely changed the approach to the procedures for age estimation of migrants (excluding criminal cases) with a better adherence

  16. World Health Organization Estimates of the Relative Contributions of Food to the Burden of Disease Due to Selected Foodborne Hazards: A Structured Expert Elicitation.

    Science.gov (United States)

    Hald, Tine; Aspinall, Willy; Devleesschauwer, Brecht; Cooke, Roger; Corrigan, Tim; Havelaar, Arie H; Gibb, Herman J; Torgerson, Paul R; Kirk, Martyn D; Angulo, Fred J; Lake, Robin J; Speybroeck, Niko; Hoffmann, Sandra

    2016-01-01

    The Foodborne Disease Burden Epidemiology Reference Group (FERG) was established in 2007 by the World Health Organization (WHO) to estimate the global burden of foodborne diseases (FBDs). This estimation is complicated because most of the hazards causing FBD are not transmitted solely by food; most have several potential exposure routes consisting of transmission from animals, by humans, and via environmental routes including water. This paper describes an expert elicitation study conducted by the FERG Source Attribution Task Force to estimate the relative contribution of food to the global burden of diseases commonly transmitted through the consumption of food. We applied structured expert judgment using Cooke's Classical Model to obtain estimates for 14 subregions of the relative contributions of different transmission pathways for eleven diarrheal diseases, seven other infectious diseases and one chemical (lead). Experts were identified through international networks followed by social network sampling. Final selection of experts was based on their experience, including international working experience. Enrolled experts were scored on their ability to judge uncertainty accurately and informatively using a series of subject-matter-specific 'seed' questions whose answers are unknown to the experts at the time they are interviewed. Trained facilitators elicited the 5th, 50th, and 95th percentile responses to seed questions through telephone interviews. Cooke's Classical Model uses responses to the seed questions to weight and aggregate expert responses. After this interview, the experts were asked to provide 5th, 50th, and 95th percentile estimates for the 'target' questions regarding disease transmission routes. A total of 72 experts were enrolled in the study. Ten panels were global, meaning that the experts provided estimates for all 14 subregions, whereas nine panels were subregional, with experts providing estimates for one or more subregions

  17. World Health Organization Estimates of the Relative Contributions of Food to the Burden of Disease Due to Selected Foodborne Hazards: A Structured Expert Elicitation.

    Directory of Open Access Journals (Sweden)

    Tine Hald

    Full Text Available The Foodborne Disease Burden Epidemiology Reference Group (FERG) was established in 2007 by the World Health Organization (WHO) to estimate the global burden of foodborne diseases (FBDs). This estimation is complicated because most of the hazards causing FBD are not transmitted solely by food; most have several potential exposure routes consisting of transmission from animals, by humans, and via environmental routes including water. This paper describes an expert elicitation study conducted by the FERG Source Attribution Task Force to estimate the relative contribution of food to the global burden of diseases commonly transmitted through the consumption of food. We applied structured expert judgment using Cooke's Classical Model to obtain estimates for 14 subregions of the relative contributions of different transmission pathways for eleven diarrheal diseases, seven other infectious diseases and one chemical (lead). Experts were identified through international networks followed by social network sampling. Final selection of experts was based on their experience, including international working experience. Enrolled experts were scored on their ability to judge uncertainty accurately and informatively using a series of subject-matter-specific 'seed' questions whose answers are unknown to the experts at the time they are interviewed. Trained facilitators elicited the 5th, 50th, and 95th percentile responses to seed questions through telephone interviews. Cooke's Classical Model uses responses to the seed questions to weight and aggregate expert responses. After this interview, the experts were asked to provide 5th, 50th, and 95th percentile estimates for the 'target' questions regarding disease transmission routes. A total of 72 experts were enrolled in the study. Ten panels were global, meaning that the experts provided estimates for all 14 subregions, whereas nine panels were subregional, with experts providing estimates for one or more

  18. An improved estimation and focusing scheme for vector velocity estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1999-01-01

    to reduce spatial velocity dispersion. Examples of different velocity vector conditions are shown using the Field II simulation program. A relative accuracy of 10.1 % is obtained for the lateral velocity estimates for a parabolic velocity profile for a flow perpendicular to the ultrasound beam and a signal...

  19. Relating Local to Global Spatial Knowledge: Heuristic Influence of Local Features on Direction Estimates

    Science.gov (United States)

    Phillips, Daniel W.; Montello, Daniel R.

    2015-01-01

    Previous research has examined heuristics--simplified decision-making rules-of-thumb--for geospatial reasoning. This study examined at two locations the influence of beliefs about local coastline orientation on estimated directions to local and distant places; estimates were made immediately or after fifteen seconds. This study goes beyond…

  20. Relative estimation of the mineral ages using uranium migration

    International Nuclear Information System (INIS)

    Danis, A.

    1990-01-01

    Using the uranium fission track micro-mapping technique, the correlation between mineral age and uranium migration from inclusions was studied. It is shown that over geological time, as a function of the mineral, its age and its uranium migration speed, the pattern of the track clusters corresponding to the uranium inclusions acquires a characteristic appearance. Thus, for a bulk polished geological sample it is possible to establish an age succession of the constituent minerals as a function of the track cluster patterns. It is also shown that, knowing the migration speed of uranium in a mineral, it is possible to estimate the age of the mineral by measuring the migration distance on the micro-mapping. (Author)

  1. Estimation of Jump Tails

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Todorov, Victor

    We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes...... the weak assumption of regular variation in the jump tails, along with in-fill asymptotic arguments for uniquely identifying the "large" jumps from the data. The estimation allows for very general dynamic dependencies in the jump tails, and does not restrict the continuous part of the process...... and the temporal variation in the stochastic volatility. On implementing the new estimation procedure with actual high-frequency data for the S&P 500 aggregate market portfolio, we find strong evidence for richer and more complex dynamic dependencies in the jump tails than hitherto entertained in the literature....

  2. Towards a global view of the Laschamp excursion

    Science.gov (United States)

    Laj, C.; Kissel, C.; Leonhardt, R.; Fabian, K.; Winklhofer, M.; Ferk, A.; Ninnemann, U.

    2009-04-01

    A new record of a geomagnetic excursion has been obtained from Core MD07-3128, taken at (75° 34' W; 52° 40' S) off the Pacific coast of southern Chile during the IMAGES XV-MD159-PACHIDERME cruise of the R/V Marion Dufresne (IPEV). Radiocarbon dates currently extend to 36.3 kyr BP at 18 meters. Linear extrapolation of the last two dates downcore gives an age of 40.7 kyr for the middle point of the excursion at 20.5 m. This age is virtually identical to the most precise and reliable dating of the Laschamp excursion (obtained by K/Ar and 40Ar/39Ar) at the type locality at Laschamp, where Norbert Bonhommet first discovered the excursion during his PhD research. We therefore consider that we have obtained a new record of the Laschamp Excursion. Because of the extraordinarily high sediment accumulation rate, the directional excursion is recorded over about 2 meters of sediment (between 19.65 and 21.5 m) and corresponds to a prolonged marked low in the relative paleointensity record. Details of the directional and relative paleointensity changes will be discussed. The high southern latitude at which this new record was obtained and its very detailed nature make it ideal for further constraining the inverse model of the Laschamp Excursion (IMOLEe) (Leonhardt et al., EPSL, in press). The results obtained when these new data are included in the model will be discussed in terms of dipolar versus non-dipolar components of the transitional field, and of the comparison between predicted (modeled) and observed directions at all the sites used for the construction of this latest version of the inverse model (IMOLEf).

  3. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts for judging an estimator from the viewpoint of robust estimation. It is important, as well as interesting, to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents, along with its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  4. Bird mortality related to collisions with ski–lift cables: do we estimate just the tip of the iceberg?

    Directory of Open Access Journals (Sweden)

    Bech, N.

    2012-01-01

    Full Text Available Collisions with ski–lift cables are an important cause of death for grouse species living close to alpine ski resorts. As several biases may reduce the detection probability of bird carcasses, the mortality rates related to these collisions are generally underestimated. The possibility that injured birds may continue flying for some distance after striking cables represents a major source of error, known as crippling bias. Estimating the crippling losses resulting from birds dying far from the ski–lift corridors is difficult, and it is usually done through systematic searches for carcasses on both sides of the ski–lifts. Using molecular tracking, we were able to demonstrate that a rock ptarmigan hen flew up to 600 m after striking a ski–lift cable, a distance preventing its detection by traditional carcass surveys. Given the difficulty of conducting systematic searches over the large areas surrounding the ski–lifts, only an experiment using radio–tagged birds would allow us to estimate the real mortality rate associated with cable collisions.

  5. Magnetism in meteorites. [terminology, principles and techniques

    Science.gov (United States)

    Herndon, J. M.; Rowe, M. W.

    1974-01-01

    An overview of this subject is presented. The paper includes a glossary of magnetism terminology and a discussion of magnetic techniques used in meteorite research. These techniques comprise thermomagnetic analysis, alternating field demagnetization, thermal demagnetization, magnetic anisotropy, low-temperature cycling, and coercive forces, with emphasis on the first method. Limitations on the validity of paleointensity determinations are also discussed.

  6. Weighing up crime: the over estimation of drug-related crime

    OpenAIRE

    Stevens, Alex

    2008-01-01

    Background: It is generally accepted that harms from crime cause a very large part of the total social harm that can be attributed to drug use. For example, crime harms accounted for 70% of the weighting of the British Drug Harm Index in 2004. This paper explores the linkage of criminal harm to drug use and challenges the current overestimation of the proportion of crime that can be causally attributed to drug use. It particularly examines the use of data from arrested drug users to estimate ...

  7. Rough estimate demand of atomic energy-related budget for fiscal year 1996

    International Nuclear Information System (INIS)

    Kitagishi, Tatsuro

    1996-01-01

    The rough estimate demand of the budget for fiscal year 1996 of eight atomic energy-related ministries and agencies was determined at about 494,879 million yen, a 2.4% increase over the previous year. Concretely, the general account is 204,594 million yen (2.2% growth), and the special account is 290,285 million yen (2.6% growth). The budget is 357,060 million yen (3.7% growth) for the Science and Technology Agency, 130,787 million yen (2% decrease) for the Ministry of International Trade and Industry, and 7,032 million yen (29.2% increase) for the other six ministries and agencies. Emphasis is placed on research into upgrading LWRs, including the disassembling of reactors, fuel performance testing, the improvement of reactor technology, and verification testing of practical reactor decommissioning facilities, as well as the research and development of advanced nuclear fuel cycle technology. The technical development of waste treatment and disposal, including high-level radioactive waste, is also funded with 40.3 billion yen. The Atomic Energy Commission exerts efforts toward the development of atomic energy policy for peaceful utilization, the establishment of a coordinated LWR power generation system, the development of nuclear fuel recycling, and the strengthening of basic research on atomic energy. (K.I.)

  8. Estimating Contact Exposure in Football Using the Head Impact Exposure Estimate.

    Science.gov (United States)

    Kerr, Zachary Y; Littleton, Ashley C; Cox, Leah M; DeFreese, J D; Varangis, Eleanna; Lynall, Robert C; Schmidt, Julianne D; Marshall, Stephen W; Guskiewicz, Kevin M

    2015-07-15

    Over the past decade, there has been significant debate regarding the effect of cumulative subconcussive head impacts on short- and long-term neurological impairment. This debate remains unresolved because valid epidemiological estimates of athletes' total contact exposure are lacking. We present a measure to estimate the total hours of contact exposure in football over the majority of an athlete's lifespan. Through a structured oral interview, former football players provided information related to the primary position played and participation in games and practice contacts during the pre-season, regular season, and post-season of each year of their high school, college, and professional football careers. Spring football for college was also included. We calculated contact exposure estimates for 64 former football players (n = 32 college football only, n = 32 professional and college football). The head impact exposure estimate (HIEE) discriminated between individuals who stopped after college football and individuals who played professional football (p < 0.001). The HIEE measure was independent of concussion history (p = 0.82). Estimating total hours of contact exposure may allow for the detection of differences between individuals with variation in subconcussive impacts, regardless of concussion history. This measure is valuable for the surveillance of subconcussive impacts and their associated potential negative effects.
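
    At its core, the HIEE is a tally of contact-hours summed across seasons and levels of play. A minimal sketch, with hypothetical field names and numbers (the paper's interview instrument is far more detailed):

```python
def contact_hours(seasons):
    """Total contact-hours: games plus contact practices, weighted by duration."""
    return sum(s["games"] * s["hours_per_game"]
               + s["contact_practices"] * s["hours_per_practice"]
               for s in seasons)

# Hypothetical career records, one dict per season (illustrative values only).
career = [
    {"level": "high school", "games": 10, "hours_per_game": 1.5,
     "contact_practices": 40, "hours_per_practice": 1.0},
    {"level": "college", "games": 12, "hours_per_game": 1.5,
     "contact_practices": 60, "hours_per_practice": 1.0},
]
total = contact_hours(career)  # 55 + 78 = 133 contact-hours
```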

  9. New developments in state estimation for Nonlinear Systems

    DEFF Research Database (Denmark)

    Nørgård, Peter Magnus; Poulsen, Niels Kjølstad; Ravn, Ole

    2000-01-01

    Based on an interpolation formula, accurate state estimators for nonlinear systems can be derived. The estimators do not require derivative information, which makes them simple to implement. State estimators for nonlinear systems are derived based on polynomial approximations obtained with a mult… …-known estimators, such as the extended Kalman filter (EKF) and its higher-order relatives, in most practical applications.

  10. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimating the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE), and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for comparing these methods, and determined the best estimation method across different parameter values and sample sizes.
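
    For the two-parameter exponential, the maximum likelihood and moment estimators named in this record are simple enough to compare directly by Monte Carlo simulation. A sketch, with illustrative parameter values and sample size (not the paper's experimental design):

```python
import random
import statistics

def mle(x):
    """Maximum likelihood: location = sample minimum, scale = mean - minimum."""
    mu = min(x)
    return mu, statistics.mean(x) - mu

def moments(x):
    """Method of moments: scale = standard deviation, location = mean - scale."""
    s = statistics.pstdev(x)
    return statistics.mean(x) - s, s

def mse(estimator, mu=2.0, sigma=3.0, n=50, reps=2000):
    """Monte Carlo mean squared error of the (location, scale) estimates."""
    e_mu = e_s = 0.0
    for _ in range(reps):
        x = [mu + random.expovariate(1.0 / sigma) for _ in range(n)]
        m, s = estimator(x)
        e_mu += (m - mu) ** 2
        e_s += (s - sigma) ** 2
    return e_mu / reps, e_s / reps

random.seed(1)
mse_mle, mse_mom = mse(mle), mse(moments)
```

    For this model the sample minimum converges to the location at rate 1/n rather than 1/sqrt(n), so the MLE wins decisively on the location parameter.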

  11. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.

    2015-01-01

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  13. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds on the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second-order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e., the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior characteristic of a large class of quantum phase transitions.

  14. Analytical Estimation of Water-Oil Relative Permeabilities through Fractures

    Directory of Open Access Journals (Sweden)

    Saboorian-Jooybari Hadi

    2016-05-01

    Full Text Available Modeling multiphase flow through fractures is a key issue for understanding flow mechanisms and for performance prediction of fractured petroleum reservoirs, geothermal reservoirs, underground aquifers, and carbon-dioxide sequestration. One of the most challenging subjects in the modeling of fractured petroleum reservoirs is quantifying the competition of fluids for flow in the fracture network (relative permeability curves). Unfortunately, there is no standard technique for the experimental measurement of relative permeabilities through fractures, and the existing methods are expensive, time-consuming, and error-prone. Although several formulations have been presented to calculate fracture relative permeability curves in the form of linear and power functions of flowing-fluid saturation, it is still unclear what form of relative permeability curves should be used for proper modeling of flow through fractures and, consequently, accurate reservoir simulation. The classic linear relative permeability (X-type) curves are used in almost all reservoir simulators. In this work, basic fluid flow equations are combined to develop a new simple analytical model for water-oil two-phase flow in a single fracture. The model gives rise to simple analytic formulations for fracture relative permeabilities. The model explicitly shows that water-oil relative permeabilities in a fracture network are functions of fluid saturation, viscosity ratio, fluid densities, inclination of the fracture plane from the horizontal, pressure gradient along the fracture, and rock matrix wettability, whereas they were considered to be functions of saturation only in the classic X-type and power-law (Corey [35] and Honarpour et al. [28, 29]) models. Finally, the validity of the proposed formulations is checked against experimental data from the literature. The proposed fracture relative permeability functions have several advantages over the existing ones.
Firstly, they are explicit functions of the parameters which are known for
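
    The contrast between the classic X-type curves and saturation-power-law (Corey-type) curves mentioned in this record can be sketched as follows; the residual saturations and exponents are illustrative, and this is not the paper's new analytical model:

```python
def kr_x_type(sw):
    """Classic linear ('X-type') fracture curves: krw and kro cross at sw = 0.5."""
    return sw, 1.0 - sw

def kr_corey(sw, swr=0.1, sor=0.2, nw=2.0, no=2.0):
    """Corey-style power-law curves of effective saturation.

    swr/sor are residual water/oil saturations and nw/no the exponents;
    all four values here are illustrative placeholders."""
    se = min(max((sw - swr) / (1.0 - swr - sor), 0.0), 1.0)
    return se ** nw, (1.0 - se) ** no

krw_x, kro_x = kr_x_type(0.5)  # linear curves sum to 1 at every saturation
krw_c, kro_c = kr_corey(0.5)   # power-law curves lie below the X-type lines
```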

  15. Introduction to quantum-state estimation

    CERN Document Server

    Teo, Yong Siah

    2016-01-01

    Quantum-state estimation is an important field in quantum information theory that deals with the characterization of states of affairs for quantum sources. This book begins with background formalism in estimation theory to establish the necessary prerequisites. This basic understanding allows us to explore popular likelihood- and entropy-related estimation schemes that are suitable for an introductory survey on the subject. Discussions on practical aspects of quantum-state estimation ensue, with emphasis on the evaluation of tomographic performances for estimation schemes, experimental realizations of quantum measurements and detection of single-mode multi-photon sources. Finally, the concepts of phase-space distribution functions, which compatibly describe these multi-photon sources, are introduced to bridge the gap between discrete and continuous quantum degrees of freedom. This book is intended to serve as an instructive and self-contained medium for advanced undergraduate and postgraduate students to gra...

  16. Toxoplasma gondii infection in Kyrgyzstan: seroprevalence, risk factor analysis, and estimate of congenital and AIDS-related toxoplasmosis.

    Directory of Open Access Journals (Sweden)

    Gulnara Minbaeva

    Full Text Available BACKGROUND: HIV prevalence, as well as the incidence of zoonotic parasitic diseases such as cystic echinococcosis, has increased in the Kyrgyz Republic due to fundamental socio-economic changes after the breakdown of the Soviet Union. The possible impact on morbidity and mortality caused by Toxoplasma gondii infection, in congenital toxoplasmosis or as an opportunistic infection in the emerging AIDS pandemic, has not been reported from Kyrgyzstan. METHODOLOGY/PRINCIPAL FINDINGS: We screened 1,061 rural and 899 urban people to determine the seroprevalence of T. gondii infection in two representative but epidemiologically distinct populations in Kyrgyzstan. The rural population was from a typical agricultural district where sheep husbandry is a major occupation. The urban population was selected in collaboration with several diagnostic laboratories in Bishkek, the largest city in Kyrgyzstan. We designed a questionnaire that was used on all rural subjects so that a risk-factor analysis could be undertaken. The samples from the urban population were anonymous, and only data on age and gender were available. Estimates of putative cases of congenital and AIDS-related toxoplasmosis in the whole country were made from the results of the serology. Specific antibodies (IgG) against Triton X-100-extracted antigens of T. gondii tachyzoites from in vitro cultures were determined by ELISA. Overall seroprevalence of infection with T. gondii in people living in rural vs. urban areas was 6.2% (95% CI: 4.8-7.8; adjusted seroprevalence based on census figures: 5.1%, 95% CI: 3.9-6.5) and 19.0% (95% CI: 16.5-21.7; adjusted: 16.4%, 95% CI: 14.1-19.3), respectively, without significant gender-specific differences. The seroprevalence increased with age. Independently, low social status increased the risk of Toxoplasma seropositivity, while an increasing number of sheep owned decreased the risk. Water supply, consumption of unpasteurized milk products or undercooked

  17. IMPROVEMENT OF THE RICHNESS ESTIMATES OF maxBCG CLUSTERS

    International Nuclear Information System (INIS)

    Rozo, Eduardo; Rykoff, Eli S.; Koester, Benjamin P.; Hansen, Sarah; Becker, Matthew; Bleem, Lindsey; McKay, Timothy; Hao Jiangang; Evrard, August; Wechsler, Risa H.; Sheldon, Erin; Johnston, David; Annis, James; Scranton, Ryan

    2009-01-01

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched-filter richness estimator and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²_lnLX = (0.86 ± 0.02)² to σ²_lnLX = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the L_X-richness scaling relations and is significantly more robust to photometric and redshift errors. These improvements are largely due to the better treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and we introduce a novel approach for optimizing said aperture which can easily be generalized to other mass tracers.

  18. A statistical estimator for the boiler power and its related parameters

    International Nuclear Information System (INIS)

    Tang, H.

    2001-01-01

    Determining the boiler power accurately is important both for controlling the plant and for maximizing plant productivity. There are two computed boiler powers for each boiler: the steam-based boiler power and the feedwater-based boiler power. The steam-based boiler power is computed from the difference between the boiler steam enthalpy and the feedwater enthalpy, while the feedwater-based boiler power is computed as the enthalpy absorbed by the feedwater. The steam-based boiler power is computed in the RRS program and used in calibrating the measured reactor power, while the feedwater-based boiler power is computed in the CSTAT program and used for indication. Since the steam-based boiler power is used as feedback in the reactor control, it is the one estimated in this work. Because the boiler power calculation employs steam flow, feedwater flow, and feedwater temperature measurements, and because any measurement contains constant or drifting noise and bias, reconciliation and rectification procedures are needed to determine the boiler power more accurately. A statistical estimator is developed to perform data reconciliation, gross error detection, and instrument performance monitoring.
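
    A minimal sketch of the two ingredients described above: the steam-based power calculation and a reconciliation step for redundant measurements. The numbers and the simple inverse-variance blend are illustrative, not the paper's estimator:

```python
def boiler_power(m_dot, h_steam, h_feed):
    """Steam-based boiler power: steam mass flow (kg/s) x enthalpy rise (kJ/kg) -> kW."""
    return m_dot * (h_steam - h_feed)

def reconcile(x1, var1, x2, var2):
    """Inverse-variance reconciliation of two redundant measurements of one quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2)

# Illustrative numbers only (not plant data): the steam-flow and feedwater-flow
# meters disagree slightly, so blend them before computing the power.
flow = reconcile(240.0, 4.0, 244.0, 16.0)   # -> 240.8 kg/s
power = boiler_power(flow, 2780.0, 990.0)   # kW
```

    A full reconciliation scheme would balance all redundant measurements jointly and flag gross errors from the residuals; the weighted average above is the two-measurement special case.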

  19. Economic evaluations of occupational health interventions from a company's perspective: A systematic review of methods to estimate the cost of health-related productivity loss

    NARCIS (Netherlands)

    Uegaki, K.; Bruijne, M.C. de; Beek, A.J. van der; Mechelen, W. van; Tulder, M.W. van

    2011-01-01

    Objectives: To investigate the methods used to estimate the indirect costs of health-related productivity in economic evaluations from a company's perspective. Methods: The primary literature search was conducted in Medline and Embase. Supplemental searches were conducted in the Cochrane NHS

  20. Health-related quality of life among adults 65 years and older in the United States, 2011-2012: a multilevel small area estimation approach.

    Science.gov (United States)

    Lin, Yu-Hsiu; McLain, Alexander C; Probst, Janice C; Bennett, Kevin J; Qureshi, Zaina P; Eberth, Jan M

    2017-01-01

    The purpose of this study was to develop county-level estimates of poor health-related quality of life (HRQOL) among U.S. adults aged 65 years and older and to identify spatial clusters of poor HRQOL using a multilevel, poststratification approach. Multilevel, random-intercept models were fit to HRQOL data (two domains: physical health and mental health) from the 2011-2012 Behavioral Risk Factor Surveillance System. Using a poststratification, small area estimation approach, we generated county-level probabilities of having poor HRQOL for each domain in U.S. adults aged 65 and older, and validated our model-based estimates against state and county direct estimates. County-level estimates of poor HRQOL in the United States ranged from 18.07% to 44.81% for physical health and from 14.77% to 37.86% for mental health. Correlations between model-based and direct estimates were higher for physical than for mental HRQOL. Counties in Arkansas, Kentucky, and Mississippi exhibited the worst physical HRQOL scores, but this pattern did not hold for mental HRQOL, for which the highest probabilities of mentally unhealthy days occurred in Illinois, Indiana, and Vermont. Substantial geographic variation in physical and mental HRQOL scores exists among older U.S. adults. State and local policy makers should consider these local conditions in targeting interventions and policies to counties with high levels of poor HRQOL scores.
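
    The poststratification step can be sketched as follows; the cell probabilities and census counts are hypothetical, standing in for the fitted multilevel model's predictions and a real county's demographics:

```python
# Hypothetical model-based probabilities of poor HRQOL by demographic cell
# (age group, sex). These are placeholders, not the paper's fitted values.
P_POOR = {("65-74", "F"): 0.20, ("65-74", "M"): 0.18,
          ("75+", "F"): 0.30, ("75+", "M"): 0.27}

def poststratified_rate(census):
    """County estimate: cell probabilities weighted by county census counts."""
    total = sum(census.values())
    return sum(P_POOR[cell] * n for cell, n in census.items()) / total

county = {("65-74", "F"): 4000, ("65-74", "M"): 3500,
          ("75+", "F"): 2500, ("75+", "M"): 1500}
rate = poststratified_rate(county)
```

    Weighting by census counts rather than survey counts is what lets a model fit to sparse survey data produce stable small-area estimates.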

  1. Velocity Estimation in Medical Ultrasound [Life Sciences]

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Villagómez Hoyos, Carlos Armando; Holbek, Simon

    2017-01-01

    This article describes the application of signal processing in medical ultrasound velocity estimation. Special emphasis is on the relation among acquisition methods, signal processing, and the estimators employed. The description spans from current clinical systems for one- and two-dimensional (1-D an...

  2. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation

    Directory of Open Access Journals (Sweden)

    Senthil Kumar Murugesan

    2015-01-01

    Full Text Available Software companies are now keen to provide software that is dependable in terms of accuracy and reliability, especially with respect to software effort estimation; therefore, there is a need for a hybrid tool that provides all the necessary features. This paper proposes a hybrid estimator algorithm and model which incorporates quality metrics, a reliability factor, and a security factor with fuzzy-based function point analysis. Initially, the method uses a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended with security and reliability factors in the calculation. Finally, performance metrics are added to the effort estimation for accuracy. Experiments were conducted with different project data sets on the hybrid tool, and the results were compared with existing models. They show that the proposed method not only improves accuracy but also increases the reliability, as well as the security, of the product.
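
    A triangular fuzzy size estimate can be reduced to a crisp value by, for example, centroid defuzzification; this is a common textbook choice and not necessarily the paper's exact scheme, and the numbers are illustrative:

```python
def triangular_estimate(low, mode, high):
    """Centroid of a triangular fuzzy number (low, mode, high):
    a crisp size from an uncertain early-stage estimate."""
    return (low + mode + high) / 3.0

# Hypothetical early-stage size estimate in unadjusted function points.
size_fp = triangular_estimate(80.0, 100.0, 150.0)  # -> 110.0
```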

  3. Estimation of Direct Melanoma-related Costs by Disease Stage and by Phase of Diagnosis and Treatment According to Clinical Guidelines

    Directory of Open Access Journals (Sweden)

    Alessandra Buja

    2017-11-01

    Full Text Available Cutaneous melanoma is a major concern for healthcare systems and economics. The aim of this study was to estimate the direct costs of melanoma by disease stage and by phase of diagnosis and treatment according to the pre-set clinical guidelines drafted by the AIOM (Italian Medical Oncological Association). Based on the AIOM guidelines for malignant cutaneous melanoma, a highly detailed decision-making model was developed describing the patient's pathway from diagnosis through the subsequent phases of disease staging, surgical and medical treatment, and follow-up. The model associates each phase potentially involving medical procedures with a likelihood measure and a cost, thus enabling an estimation of the expected costs by disease stage and by clinical phase of melanoma diagnosis and treatment according to the clinical guidelines. The mean per-patient cost of the whole melanoma pathway (including one year of follow-up) ranged from €149 for stage 0 disease to €66,950 for stage IV disease. The costs relating to each phase of diagnosis and treatment depended on disease stage. It is essential to calculate the direct costs of managing malignant cutaneous melanoma according to clinical guidelines in order to estimate the economic burden of this disease and to enable policy-makers to allocate appropriate resources.
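
    The model's core operation, weighting each procedure's cost by the likelihood it is performed, can be sketched as follows; the procedures, probabilities, and euro amounts are illustrative placeholders, not the AIOM-based figures:

```python
def expected_cost(branch):
    """Expected cost of a care branch: each procedure's cost weighted by the
    probability that it is actually performed."""
    return sum(prob * cost for _, prob, cost in branch)

# Hypothetical stage-II branch of the decision model (illustrative values only).
stage_ii = [
    ("wide excision",         1.00, 1200.0),
    ("sentinel node biopsy",  0.95, 2500.0),
    ("lymphadenectomy",       0.20, 6000.0),
    ("first-year follow-up",  1.00,  800.0),
]
cost = expected_cost(stage_ii)  # 1200 + 2375 + 1200 + 800
```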

  4. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p…

  5. Attitude Estimation in Fractionated Spacecraft Cluster Systems

    Science.gov (United States)

    Hadaegh, Fred Y.; Blackmore, James C.

    2011-01-01

    Attitude estimation was examined for fractionated free-flying spacecraft. Instead of a single, monolithic spacecraft, a fractionated free-flying spacecraft uses multiple spacecraft modules. These modules are connected only through wireless communication links and, potentially, wireless power links. The key advantage of this concept is the ability to respond to uncertainty. For example, if a single spacecraft module in the cluster fails, a new one can be launched at a lower cost and risk than would be incurred with on-orbit servicing or replacement of a monolithic spacecraft. In order to create such a system, however, it is essential to know what the navigation capabilities of the fractionated system are as a function of the capabilities of the individual modules, and to have an algorithm that can estimate the attitudes and relative positions of the modules with fractionated sensing capabilities. Looking specifically at fractionated attitude estimation with star trackers and optical relative attitude sensors, a set of mathematical tools has been developed that specifies the set of sensors necessary to ensure that the attitude of the entire cluster ("cluster attitude") can be observed. Also developed was a navigation filter that can estimate the cluster attitude if these conditions are satisfied. Each module in the cluster may have a star tracker, a relative attitude sensor, or both. An extended Kalman filter can be used to estimate the attitude of all modules. A range of estimation performances can be achieved depending on the sensors used and the topology of the sensing network.

  6. Estimate of $B(K\to\pi\nu\bar{\nu})$

    CERN Document Server

    Kettell, S H; Nguyen, H

    2003-01-01

    We estimate $B(K\to\pi\nu\bar{\nu})$ in the context of the Standard Model by fitting for $\lambda_t \equiv V_{td}V^*_{ts}$ of the 'kaon unitarity triangle' relation. We fit data from $\epsilon_K$, the CP-violating parameter describing $K$ mixing, and $a_{\psi K_S}$, the CP-violating asymmetry in $B\to\psi K_S$ decays. Our estimate is independent of the CKM matrix element $V_{cb}$ and of the ratio of $B$-mixing frequencies $\Delta m_{B_s}/\Delta m_{B_d}$. The measured value of $B(K^+\to\pi^+\nu\bar{\nu})$ can be compared both to this estimate and to predictions made from $\Delta m_{B_s}/\Delta m_{B_d}$.

  7. Global income related health inequalities

    Directory of Open Access Journals (Sweden)

    Jalil Safaei

    2007-01-01

    Full Text Available Income-related health inequalities have been estimated for various groups of individuals at local, state, or national levels. Almost all of these estimates are based on individual data from sample surveys. The lack of consistent individual data worldwide has prevented estimates of international income-related health inequalities. This paper uses the (population-weighted) aggregate data available from many countries around the world to estimate worldwide income-related health inequalities. Since intra-country inequalities are subdued by the aggregate nature of the data, the estimates are those of inter-country, or international, health inequalities. The study also estimates the contribution of major socioeconomic variables to the overall health inequalities. The findings strongly support the existence of worldwide income-related health inequalities that favor the higher-income countries. Decompositions of the health inequalities identify inequalities in both the level and the distribution of income as the main source of health inequality, along with inequalities in education and degree of urbanization as other contributing determinants. Since income-related health inequalities are preventable, policies to reduce the income gaps between poor and rich nations could greatly improve the health of hundreds of millions of people and promote global justice. Keywords: global, income, health inequality, socioeconomic determinants of health

  8. Trial latencies estimation of event-related potentials in EEG by means of genetic algorithms

    Science.gov (United States)

    Da Pelo, P.; De Tommaso, M.; Monaco, A.; Stramaglia, S.; Bellotti, R.; Tangaro, S.

    2018-04-01

    Objective. Event-related potentials (ERPs) are usually obtained by averaging, thus neglecting the trial-to-trial latency variability of cognitive electroencephalography (EEG) responses. As a consequence, the shape and peak amplitude of the averaged ERP are smeared and reduced, respectively, when the single-trial latencies show relevant variability. To date, the majority of methodologies for single-trial latency inference are iterative schemes providing suboptimal solutions, the most commonly used being Woody's algorithm. Approach. In this study, a global approach is developed by introducing a fitness function whose global maximum corresponds to the set of latencies which renders the trial signals as aligned as possible. A suitable genetic algorithm has been implemented to solve the optimization problem, characterized by new genetic operators tailored to the present problem. Main results. The results on simulated trials showed that the proposed algorithm performs better than Woody's algorithm in all conditions, at the cost of increased computational complexity (justified by the improved quality of the solution). Application of the proposed approach to real data trials resulted in an increased correlation between latencies and reaction times with respect to the output of the RIDE method. Significance. These results on simulated and real data indicate that the proposed method, by providing a better estimate of single-trial latencies, will open the way to more accurate study of neural responses as well as to the issue of relating the variability of latencies to the proper cognitive and behavioural correlates.
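
    The idea of a fitness function that is maximal when the estimated lags undo the single-trial latencies can be sketched as follows; for brevity this uses a greedy coordinate search rather than a genetic algorithm, and the synthetic data are noise-free:

```python
import random

def shifted(trial, lag):
    """Shift a trial right by `lag` samples (negative = left), zero-padding."""
    n = len(trial)
    out = [0.0] * n
    for i in range(n):
        j = i - lag
        if 0 <= j < n:
            out[i] = trial[j]
    return out

def fitness(trials, lags):
    """Energy of the average of re-aligned trials: a sharper, better-aligned
    average has more energy than a smeared one."""
    n = len(trials[0])
    avg = [0.0] * n
    for t, l in zip(trials, lags):
        s = shifted(t, -l)
        for i in range(n):
            avg[i] += s[i] / len(trials)
    return sum(v * v for v in avg)

# Synthetic trials: one ERP-like template at random latencies, no noise.
random.seed(3)
template = [0, 0, 0, 1, 3, 6, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0]
true_lags = [random.randint(0, 4) for _ in range(8)]
trials = [shifted(template, l) for l in true_lags]

# Greedy coordinate ascent over the lags -- a simple stand-in for the GA.
lags = [0] * len(trials)
for _ in range(3):
    for k in range(len(trials)):
        lags[k] = max(range(5),
                      key=lambda l: fitness(trials, lags[:k] + [l] + lags[k + 1:]))
```

    The coordinate update never decreases the fitness, and the true lags attain the global maximum; a GA replaces the greedy sweep with population-based global search over the same objective.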

  9. The relative impact of baryons and cluster shape on weak lensing mass estimates of galaxy clusters

    Science.gov (United States)

    Lee, B. E.; Le Brun, A. M. C.; Haq, M. E.; Deering, N. J.; King, L. J.; Applegate, D.; McCarthy, I. G.

    2018-05-01

    Weak gravitational lensing depends on the integrated mass along the line of sight. Baryons contribute to the mass distribution of galaxy clusters and hence to the resulting mass estimates from lensing analysis. We use the cosmo-OWLS suite of hydrodynamic simulations to investigate the impact of baryonic processes on the bias and scatter of weak lensing mass estimates of clusters. These estimates are obtained by fitting NFW profiles to mock data using MCMC techniques. In particular, we examine the difference in estimates between dark matter-only runs and those including various prescriptions for baryonic physics. We find no significant difference in the mass bias when baryonic physics is included, though the overall mass estimates are suppressed when feedback from AGN is included. For the lowest-mass systems for which a reliable mass can be obtained (M_200 ≈ 2 × 10^14 M_⊙), we find a bias of ≈ -10 per cent. The magnitude of the bias tends to decrease for higher-mass clusters, consistent with no bias for the most massive clusters, which have masses comparable to those found in the CLASH and HFF samples. For the lowest-mass clusters, the mass bias is particularly sensitive to the fit radii and the limits placed on the concentration prior, rendering reliable mass estimates difficult. The scatter in mass estimates between the dark matter-only and the various baryonic runs is less than that between different projections of individual clusters, highlighting the importance of triaxiality.

  10. Relative contributions of sampling effort, measuring, and weighing to precision of larval sea lamprey biomass estimates

    Science.gov (United States)

    Slade, Jeffrey W.; Adams, Jean V.; Cuddy, Douglas W.; Neave, Fraser B.; Sullivan, W. Paul; Young, Robert J.; Fodale, Michael F.; Jones, Michael L.

    2003-01-01

    We developed two weight-length models from 231 populations of larval sea lampreys (Petromyzon marinus) collected from tributaries of the Great Lakes: Lake Ontario (21), Lake Erie (6), Lake Huron (67), Lake Michigan (76), and Lake Superior (61). Both models were mixed models, which used population as a random effect and additional environmental factors as fixed effects. We resampled weights and lengths 1,000 times from data collected in each of 14 other populations not used to develop the models, obtaining a weight and length distribution from each resampling. To test model performance, we applied the two weight-length models to the resampled length distributions and calculated the predicted mean weights. We also calculated the observed mean weight for each resampling and for each of the original 14 data sets. When the average of the predicted means was compared to the means from the original data in each stream, inclusion of environmental factors did not consistently improve the performance of the weight-length model. We estimated the variance associated with measures of abundance and mean weight for each of the 14 selected populations and determined that a conservative estimate of the proportional contribution to variance associated with estimating abundance accounted for 32% to 95% of the variance (mean = 66%). Variability in the biomass estimate thus appears to be affected more by variability in estimating abundance than by variability in converting length to weight. Hence, efforts to improve the precision of biomass estimates would be aided most by reducing the variability associated with estimating abundance.
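A simplified, hypothetical sketch of the weight-length idea (a fixed-effects power-law fit rather than the authors' mixed model) shows how a fitted model converts a resampled length distribution into a predicted mean weight:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic larvae: weight follows W = a * L**b with multiplicative noise.
# Parameters and units are illustrative only.
a_true, b_true = 1.2e-6, 3.1
lengths = rng.uniform(40.0, 120.0, 500)                               # mm
weights = a_true * lengths**b_true * np.exp(0.05 * rng.standard_normal(500))

# Fit log(W) = log(a) + b*log(L) by ordinary least squares
# (the study used mixed models with population as a random effect;
# this is a simplified fixed-effects sketch).
b_hat, log_a_hat = np.polyfit(np.log(lengths), np.log(weights), 1)

# Predict the mean weight for a bootstrap-resampled length distribution.
resample = rng.choice(lengths, size=200, replace=True)
mean_w = float(np.mean(np.exp(log_a_hat) * resample**b_hat))

assert abs(b_hat - b_true) < 0.1
assert mean_w > 0.0
```

Repeating the resampling step many times, as in the study, yields a distribution of predicted mean weights from which the length-to-weight contribution to biomass variance can be gauged.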

  11. Variational estimates of point-kinetics parameters

    International Nuclear Information System (INIS)

    Favorite, J.A.; Stacey, W.M. Jr.

    1995-01-01

    Variational estimates of the effect of flux shifts on the integral reactivity parameter of the point-kinetics equations and on regional power fractions were calculated for a variety of localized perturbations in two light water reactor (LWR) model problems representing a small, tightly coupled core and a large, loosely coupled core. For the small core, the flux shifts resulting from even relatively large localized reactivity changes (∼600 pcm) were small, and the standard point-kinetics approximation estimates of reactivity were in error by only ∼10% or less, while the variational estimates were accurate to within ∼1%. For the larger core, significant (>50%) flux shifts occurred in response to local perturbations, leading to errors of the same magnitude in the standard point-kinetics approximation of the reactivity worth. For positive reactivity, the error in the variational estimate of reactivity was only a few percent in the larger core, and the resulting transient power prediction was 1 to 2 orders of magnitude more accurate than with the standard point-kinetics approximation. For a large, local negative reactivity insertion resulting in a large flux shift, the accuracy of the variational estimate broke down. The variational estimate of the effect of flux shifts on reactivity in point-kinetics calculations of transients in LWR cores was found to generally result in greatly improved accuracy relative to the standard point-kinetics approximation, the exception being large negative reactivity insertions with large flux shifts in large, loosely coupled cores.

  12. Attributing death to cancer: cause-specific survival estimation.

    Directory of Open Access Journals (Sweden)

    Mathew A

    2002-10-01

    Full Text Available Cancer survival estimation is an important part of assessing the overall strength of cancer care in a region. Generally, the death of a patient is taken as the end point in estimation of overall survival. When calculating the overall survival, the cause of death is not taken into account. With increasing demand for better survival of cancer patients, it is important for clinicians and researchers to know the survival statistics due to the disease of interest, i.e. net survival. It is also important to choose the best method for estimating net survival. The increased use of computer programmes has made it possible to carry out statistical analysis without guidance from a bio-statistician. This is of prime importance in third-world countries, as there are few trained bio-statisticians to guide clinicians and researchers. The present communication describes current methods used to estimate net survival, such as cause-specific survival and relative survival. The limitations of estimating cause-specific survival, particularly in India, and the usefulness of relative survival are discussed. The various sources for estimating cancer survival are also discussed. As survival estimates are to be projected on to the population at large, it becomes important to measure the variation of the estimates, and thus confidence intervals are used. Rothman's confidence interval gives the most satisfactory result for survival estimates.

  13. METHODOLOGY RELATED TO ESTIMATION OF INVESTMENT APPEAL OF RURAL SETTLEMENTS

    Directory of Open Access Journals (Sweden)

    A. S. Voshev

    2010-03-01

    Full Text Available Conditions for production activity vary considerably from region to region, from area to area, and from settlement to settlement. In this connection, investors are challenged to choose an optimum site for a new enterprise. To make the decision, investors rely on such references as investment potential and risk level; their interrelation determines the investment appeal of a country, region, area, city or rural settlement. At present, Russia faces a problem of «black boxes» represented by a great number of rural settlements. Until now, no effective and suitable techniques have existed for the quantitative estimation of the investment potential and risks of rural settlements, or for systems that make this information accessible to potential investors.

  14. Estimating Risks and Relative Risks in Case-Base Studies under the Assumptions of Gene-Environment Independence and Hardy-Weinberg Equilibrium

    Science.gov (United States)

    Chui, Tina Tsz-Ting; Lee, Wen-Chung

    2014-01-01

    Many diseases result from the interactions between genes and the environment. An efficient method has been proposed for a case-control study to estimate the genetic and environmental main effects and their interactions, which exploits the assumptions of gene-environment independence and Hardy-Weinberg equilibrium. To estimate the absolute and relative risks, one needs to resort to an alternative design: the case-base study. In this paper, the authors show how to analyze a case-base study under the above dual assumptions. This approach is based on a conditional logistic regression of case-counterfactual controls matched data. It can be easily fitted with readily available statistical packages. When the dual assumptions are met, the method is approximately unbiased and has adequate coverage probabilities for confidence intervals. It also results in smaller variances and shorter confidence intervals as compared with a previous method for a case-base study which imposes neither assumption. PMID:25137392

  15. Estimating risks and relative risks in case-base studies under the assumptions of gene-environment independence and Hardy-Weinberg equilibrium.

    Directory of Open Access Journals (Sweden)

    Tina Tsz-Ting Chui

    Full Text Available Many diseases result from the interactions between genes and the environment. An efficient method has been proposed for a case-control study to estimate the genetic and environmental main effects and their interactions, which exploits the assumptions of gene-environment independence and Hardy-Weinberg equilibrium. To estimate the absolute and relative risks, one needs to resort to an alternative design: the case-base study. In this paper, the authors show how to analyze a case-base study under the above dual assumptions. This approach is based on a conditional logistic regression of case-counterfactual controls matched data. It can be easily fitted with readily available statistical packages. When the dual assumptions are met, the method is approximately unbiased and has adequate coverage probabilities for confidence intervals. It also results in smaller variances and shorter confidence intervals as compared with a previous method for a case-base study which imposes neither assumption.

  16. Damage severity estimation from the global stiffness decrease

    International Nuclear Information System (INIS)

    Nitescu, C; Gillich, G R; Manescu, T; Korka, Z I; Abdel Wahab, M

    2017-01-01

    In current damage detection methods, localization and severity estimation can be treated separately. The severity is commonly estimated using a fracture mechanics approach, with the main disadvantage of involving empirically deduced relations. In this paper, a damage severity estimator based on the global stiffness reduction is proposed. This feature is computed from the deflections of the intact and damaged beam, respectively. The damage has its strongest effect when located where the bending moment achieves its maxima. If the damage is positioned elsewhere on the beam, its effect becomes lower, because the stress is produced by a diminished bending moment. It is shown that the global stiffness reduction produced by a crack is the same for all beams with a similar cross-section, regardless of the boundary conditions. Two mathematical relations are derived: one indicating the severity and another indicating the effect of removing the damage from the beam. Measurements on damaged beams with different boundary conditions and cross-sections are carried out, and the location and severity are found using the proposed relations. These comparisons prove that the proposed approach can be used to accurately compute the severity estimator. (paper)

  17. Smoking-attributable medical expenditures by age, sex, and smoking status estimated using a relative risk approach☆

    Science.gov (United States)

    Maciosek, Michael V.; Xu, Xin; Butani, Amy L.; Pechacek, Terry F.

    2015-01-01

    Objective To accurately assess the benefits of tobacco control interventions and to better inform decision makers, knowledge of medical expenditures by age, gender, and smoking status is essential. Method We propose an approach to distribute smoking-attributable expenditures by age, gender, and cigarette smoking status to reflect the known risks of smoking. We distribute hospitalization days for smoking-attributable diseases according to relative risks of smoking-attributable mortality, and use the method to determine national estimates of smoking-attributable expenditures by age, sex, and cigarette smoking status. Sensitivity analyses explored the assumptions of the method. Results Both current and former smokers ages 75 and over have about 12 times the smoking-attributable expenditures of their current and former smoker counterparts 35–54 years of age. Within each age group, the expenditures of former smokers are about 70% lower than those of current smokers. In sensitivity analysis, these results were not robust to large changes to the relative risks of smoking-attributable mortality used in the calculations. Conclusion Sex- and age-group-specific smoking expenditures reflect observed disease-risk differences between current and former cigarette smokers and indicate that about 70% of current smokers’ excess medical care costs are preventable by quitting. PMID:26051203
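The apportionment idea, distributing a smoking-attributable total across groups in proportion to excess risk, can be sketched as follows; all group sizes and relative risks below are hypothetical, not the study's figures:

```python
# Apportion a smoking-attributable total across age/sex/smoking-status
# groups in proportion to each group's excess risk, n_g * (RR_g - 1).
# All numbers are hypothetical, for illustration only.

groups = {
    # group: (population size, relative risk of smoking-attributable mortality)
    "current, 35-54": (1_000_000, 2.0),
    "current, 75+":   (300_000, 8.0),
    "former, 35-54":  (800_000, 1.3),
    "former, 75+":    (400_000, 4.0),
}

total_attributable_days = 1_000_000  # hypothetical hospitalization days

excess = {g: n * (rr - 1.0) for g, (n, rr) in groups.items()}
total_excess = sum(excess.values())
share = {g: total_attributable_days * e / total_excess
         for g, e in excess.items()}

# Shares reproduce the total, and higher-risk groups receive larger shares.
assert abs(sum(share.values()) - total_attributable_days) < 1e-6
assert share["current, 75+"] > share["former, 75+"]
```

Multiplying each group's share of hospitalization days by unit costs would then yield group-specific smoking-attributable expenditures in the spirit of the approach described above.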

  18. Care during labor and birth for the prevention of intrapartum-related neonatal deaths: a systematic review and Delphi estimation of mortality effect

    Science.gov (United States)

    2011-01-01

    Background Our objective was to estimate the effect of various childbirth care packages on neonatal mortality due to intrapartum-related events (“birth asphyxia”) in term babies for use in the Lives Saved Tool (LiST). Methods We conducted a systematic literature review to identify studies or reviews of childbirth care packages as defined by United Nations norms (basic and comprehensive emergency obstetric care, skilled care at birth). We also reviewed Traditional Birth Attendant (TBA) training. Data were abstracted into standard tables and quality assessed by adapted GRADE criteria. For interventions with low quality evidence, but strong GRADE recommendation for implementation, an expert Delphi consensus process was conducted to estimate cause-specific mortality effects. Results We identified evidence for the effect on perinatal/neonatal mortality of emergency obstetric care packages: 9 studies (8 observational, 1 quasi-experimental), and for skilled childbirth care: 10 studies (8 observational, 2 quasi-experimental). Studies were of low quality, but the GRADE recommendation for implementation is strong. Our Delphi process included 21 experts representing all WHO regions and achieved consensus on the reduction of intrapartum-related neonatal deaths by comprehensive emergency obstetric care (85%), basic emergency obstetric care (40%), and skilled birth care (25%). For TBA training we identified 2 meta-analyses and 9 studies reporting mortality effects (3 cRCT, 1 quasi-experimental, 5 observational). There was substantial between-study heterogeneity and the overall quality of evidence was low. Because the GRADE recommendation for TBA training is conditional on the context and region, the effect was not estimated through a Delphi or included in the LiST tool. Conclusion Evidence quality is rated low, partly because of challenges in undertaking RCTs for obstetric interventions, which are considered standard of care. Additional challenges for evidence interpretation

  19. Care during labor and birth for the prevention of intrapartum-related neonatal deaths: a systematic review and Delphi estimation of mortality effect

    Directory of Open Access Journals (Sweden)

    Moran Neil F

    2011-04-01

    Full Text Available Abstract Background Our objective was to estimate the effect of various childbirth care packages on neonatal mortality due to intrapartum-related events (“birth asphyxia”) in term babies for use in the Lives Saved Tool (LiST). Methods We conducted a systematic literature review to identify studies or reviews of childbirth care packages as defined by United Nations norms (basic and comprehensive emergency obstetric care, skilled care at birth). We also reviewed Traditional Birth Attendant (TBA) training. Data were abstracted into standard tables and quality assessed by adapted GRADE criteria. For interventions with low quality evidence, but strong GRADE recommendation for implementation, an expert Delphi consensus process was conducted to estimate cause-specific mortality effects. Results We identified evidence for the effect on perinatal/neonatal mortality of emergency obstetric care packages: 9 studies (8 observational, 1 quasi-experimental), and for skilled childbirth care: 10 studies (8 observational, 2 quasi-experimental). Studies were of low quality, but the GRADE recommendation for implementation is strong. Our Delphi process included 21 experts representing all WHO regions and achieved consensus on the reduction of intrapartum-related neonatal deaths by comprehensive emergency obstetric care (85%), basic emergency obstetric care (40%), and skilled birth care (25%). For TBA training we identified 2 meta-analyses and 9 studies reporting mortality effects (3 cRCT, 1 quasi-experimental, 5 observational). There was substantial between-study heterogeneity and the overall quality of evidence was low. Because the GRADE recommendation for TBA training is conditional on the context and region, the effect was not estimated through a Delphi or included in the LiST tool. Conclusion Evidence quality is rated low, partly because of challenges in undertaking RCTs for obstetric interventions, which are considered standard of care. Additional challenges for

  20. Improving Google Flu Trends estimates for the United States through transformation.

    Directory of Open Access Journals (Sweden)

    Leah J Martin

    Full Text Available Google Flu Trends (GFT) uses Internet search queries in an effort to provide early warning of increases in influenza-like illness (ILI). In the United States, GFT estimates the percentage of physician visits related to ILI (%ILINet) reported by the Centers for Disease Control and Prevention (CDC). However, during the 2012-13 influenza season, GFT overestimated %ILINet by an appreciable amount and estimated the peak in incidence three weeks late. Using data from 2010-14, we investigated the relationship between GFT estimates (%GFT) and %ILINet. Based on the relationship between the relative change in %GFT and the relative change in %ILINet, we transformed %GFT estimates to better correspond with %ILINet values. In 2010-13, our transformed %GFT estimates were within ± 10% of %ILINet values for 17 of the 29 weeks that %ILINet was above the seasonal baseline value determined by the CDC; in contrast, the original %GFT estimates were within ± 10% of %ILINet values for only two of these 29 weeks. Relative to the %ILINet peak in 2012-13, the peak in our transformed %GFT estimates was 2% lower and one week later, whereas the peak in the original %GFT estimates was 74% higher and three weeks later. The same transformation improved %GFT estimates using the recalibrated 2013 GFT model in early 2013-14. Our transformed %GFT estimates can be calculated approximately one week before %ILINet values are reported by the CDC and the transformation equation was stable over the time period investigated (2010-13). We anticipate our results will facilitate future use of GFT.
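The exact published transformation equation is not reproduced here; the sketch below conveys only the underlying idea, propagating %ILINet forward from the relative changes in %GFT via a hypothetical exponent gamma:

```python
import numpy as np

def transform_gft(gft, ili_start, gamma):
    """Propagate an initial %ILINet value forward using relative changes
    in %GFT: ili[t] = ili[t-1] * (gft[t]/gft[t-1])**gamma.
    A hedged sketch; the published transformation may differ."""
    ili = [float(ili_start)]
    for t in range(1, len(gft)):
        ili.append(ili[-1] * (gft[t] / gft[t - 1]) ** gamma)
    return np.array(ili)

# Sanity check: if %GFT is exactly proportional to %ILINet, gamma = 1
# reproduces the true series from its first value (the constant
# overestimation factor cancels in the ratios).
true_ili = np.array([1.0, 1.4, 2.2, 3.0, 2.1, 1.2])
gft = 1.7 * true_ili      # proportional overestimate
est = transform_gft(gft, true_ili[0], gamma=1.0)
assert np.allclose(est, true_ili)
```

Because the update uses only the previous week's %ILINet and the current %GFT, such a transformed estimate is available before the CDC reports the corresponding %ILINet value, matching the one-week lead described above.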

  1. Estimate of the benefits of a population-based reduction in dietary sodium additives on hypertension and its related health care costs in Canada.

    Science.gov (United States)

    Joffres, Michel R; Campbell, Norm R C; Manns, Braden; Tu, Karen

    2007-05-01

    Hypertension is the leading risk factor for mortality worldwide. One-quarter of the adult Canadian population has hypertension, and more than 90% of the population is estimated to develop hypertension if they live an average lifespan. Reductions in dietary sodium additives significantly lower systolic and diastolic blood pressure, and population reductions in dietary sodium are recommended by major scientific and public health organizations. To estimate the reduction in hypertension prevalence and specific hypertension management cost savings associated with a population-wide reduction in dietary sodium additives. Based on data from clinical trials, reducing dietary sodium additives by 1840 mg/day would result in decreases of 5.06 mmHg in systolic and 2.7 mmHg in diastolic blood pressure. Using Canadian Heart Health Survey data, the resulting reduction in hypertension was estimated. Costs of laboratory testing and physician visits were based on 2001 to 2003 Ontario Health Insurance Plan data, and the number of physician visits and costs of medications for patients with hypertension were taken from 2003 IMS Canada. To estimate the reduction in total physician visits and laboratory costs, current estimates of aware hypertensive patients in Canada were used from the Canadian Community Health Survey. Reducing dietary sodium additives may decrease hypertension prevalence by 30%, resulting in one million fewer hypertensive patients in Canada, and almost doubling the treatment and control rate. Direct cost savings related to fewer physician visits, laboratory tests and lower medication use are estimated to be approximately $430 million per year. Physician visits and laboratory costs would decrease by 6.5%, and 23% fewer treated hypertensive patients would require medications for control of blood pressure. Based on these estimates, lowering dietary sodium additives would lead to a large reduction in hypertension prevalence and result in health care cost savings in Canada.

  2. Cost-estimating relationships for space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.

    1992-01-01

    Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed, examining the sources of error in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture, single-system methods; and (2) the Price paradigms, which incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
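A classic parametric CER takes the power-law form cost = a·x^b, fit in log-log space to analogous past programs and then applied to a new design; the sketch below uses entirely hypothetical mass/cost data, not figures from the paper:

```python
import numpy as np

# A classic CER form: cost = a * mass**b, fit in log-log space to
# hypothetical historical program data (mass in kg, cost in $M).
mass = np.array([250.0, 480.0, 900.0, 1500.0, 2600.0])
cost = np.array([40.0, 65.0, 105.0, 150.0, 230.0])

b_hat, log_a_hat = np.polyfit(np.log(mass), np.log(cost), 1)
a_hat = float(np.exp(log_a_hat))

def cer(m):
    """Estimated cost of a new program with mass m, from the fitted CER."""
    return a_hat * m ** b_hat

# For this toy data set the fitted exponent is sub-linear (0 < b < 1),
# i.e. cost grows more slowly than mass.
assert 0.0 < b_hat < 1.0
assert cer(1000.0) > cer(500.0)
```

The cultural-variable paradigms discussed above would add further multiplicative adjustment factors on top of such a physical-parameter baseline.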

  3. A relative vulnerability estimation of flood disaster using data envelopment analysis in the Dongting Lake region of Hunan

    Science.gov (United States)

    Li, C.-H.; Li, N.; Wu, L.-C.; Hu, A.-J.

    2013-07-01

    scale, the occurrence of a vibrating flood vulnerability trend is observed. A different picture is displayed by the disaster driver risk level, the disaster environment stability level, and the disaster bearer sensitivity level. The DEA-based relative flood vulnerability estimation method offers good comparability: it takes the relative efficiency of disaster-system input-output into account, and it portrays a very diverse but consistent picture across varying time steps. Therefore, disaster situations can be compared across different spatial and temporal domains. Additionally, the method overcomes the subjectivity of a comprehensive flood index caused by using an a priori weighting system, which exists in current disaster vulnerability estimation.
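As a sketch of the DEA machinery (the standard input-oriented CCR multiplier model on toy data, not the paper's flood indicators), relative efficiency can be computed with a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (multiplier form) efficiency of DMU o.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs).
    max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0,  u, v >= 0."""
    n, m = X.shape
    _, s = Y.shape
    c = np.concatenate([-Y[o], np.zeros(m)])          # minimize -u.y_o
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

# Toy data: one input and one output per decision-making unit (DMU).
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[4.0], [4.0], [3.0]])
eff = [ccr_efficiency(X, Y, o) for o in range(3)]
assert abs(eff[0] - 1.0) < 1e-6      # best output/input ratio -> efficient
assert eff[1] < 1.0 and eff[2] < 1.0
```

In the flood-vulnerability setting, the inputs and outputs would be the disaster driver, environment, and bearer indicators, and each region-year would be a DMU whose relative efficiency serves as its vulnerability score.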

  4. 24 CFR 3500.7 - Good faith estimate.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  5. Missing reversals in the geomagnetic polarity timescale: Their influence on the analysis and in constraining the process that generates geomagnetic reversals

    Science.gov (United States)

    Marzocchi, W.

    1997-03-01

    A major problem in defining the chronology of geomagnetic reversals is linked to the detection of short polarity intervals, attributable either to short TIBR (time intervals between reversals) or to paleointensity fluctuations. Particular attention is paid to the influence of measurement errors estimated for the most recently published Cenozoic timescale. By following the minimalist philosophy of Occam's razor, which is particularly suitable for studying poorly known processes, the reliability of the simplest model, i.e., the Poisson process which is symmetric in polarity, can be checked. The results indicate the plausibility of a generalized renewal process; the only regularity is relative to the long-term trend, which is probably linked to core-mantle coupling. In detail, a uniform exponential trend in the last 80 Myr is found for the timescale; it is not presently possible to estimate the influence of the inclusion of tiny wiggles because they are well-resolved only in the last 30 Myr, a period in which both series are stationary. The sequences, with and without tiny wiggles, are symmetric in polarity, with no evidence of low-dimensional chaos and memory of past configurations. The empirical statistical distribution of the TIBR departs slightly from a theoretical exponential distribution, i.e., from a Poisson process, which can be explained by a lack of short anomalies, and/or by a generating process with wear-out properties (a more general renewal process). A real exponential distribution is sustainable only if the number of missing short TIBR in the last 30 Myr is larger than the number of tiny wiggles observed in the same period.

  6. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency under a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
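The property underlying neural-network quantile estimation is that the pinball (check) loss is minimized at the target quantile; the sketch below verifies this by grid search rather than with a network, which is a simplification of, not the paper's, method:

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Mean check loss of the constant prediction q at quantile level tau."""
    e = y - q
    return float(np.mean(np.where(e >= 0, tau * e, (tau - 1) * e)))

rng = np.random.default_rng(1)
y = rng.standard_normal(5000)
tau = 0.9

# The constant minimizing the pinball loss is the tau-quantile; a neural
# network trained on this loss learns conditional quantiles the same way.
grid = np.linspace(-3.0, 3.0, 601)
best = grid[np.argmin([pinball_loss(y, q, tau) for q in grid])]
assert abs(best - np.quantile(y, tau)) < 0.05
```

Replacing the constant q with a network output q(x) and minimizing the same loss over the network weights yields an estimator of the conditional tau-quantile of y given x.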

  7. Contributions of national and global health estimates to monitoring health-related sustainable development goals.

    Science.gov (United States)

    Bundhamcharoen, Kanitta; Limwattananon, Supon; Kusreesakul, Khanitta; Tangcharoensathien, Viroj

    2016-01-01

    The millennium development goals triggered an increased demand for data on child and maternal mortalities for monitoring progress. With the advent of the sustainable development goals and growing evidence of an epidemiological transition toward non-communicable diseases, policymakers need data on mortality and disease trends and distribution to inform effective policies and support monitoring progress. Where there are limited capacities to produce national health estimates (NHEs), global health estimates (GHEs) can fill gaps for global monitoring and comparisons. This paper discusses lessons learned from Thailand's burden of disease (BOD) study on capacity development on NHEs and discusses the contributions and limitations of GHEs in informing policies at the country level. Through training and technical support by external partners, capacities are gradually strengthened and institutionalized to enable regular updates of BOD at national and subnational levels. Initially, the quality of cause-of-death reporting in death certificates was inadequate, especially for deaths occurring in the community. Verbal autopsies were conducted, using domestic resources, to determine probable causes of deaths occurring in the community. This method helped to improve the estimation of years of life lost. Since the achievement of universal health coverage in 2002, the quality of clinical data on morbidities has also considerably improved. There are significant discrepancies between the Global Burden of Disease 2010 study estimates for Thailand and the 1999 nationally generated BOD, especially for years of life lost due to HIV/AIDS, and the ranking of priority diseases. National ownership of NHEs and an effective interface between researchers and decision-makers contribute to enhanced country policy responses, whereas subnational data are intended to be used by various subnational partners. Although GHEs contribute to benchmarking country achievement compared with global health

  8. Aboveground Forest Biomass Estimation with Landsat and LiDAR Data and Uncertainty Analysis of the Estimates

    Directory of Open Access Journals (Sweden)

    Dengsheng Lu

    2012-01-01

    Full Text Available Landsat Thematic Mapper (TM) imagery has long been the dominant data source, and recently LiDAR has offered an important new structural data stream for forest biomass estimation. On the other hand, research into forest biomass uncertainty analysis has only recently received sufficient attention due to the difficulty in collecting reference data. This paper provides a brief overview of current forest biomass estimation methods using both TM and LiDAR data. A case study is then presented that demonstrates the forest biomass estimation methods and uncertainty analysis. Results indicate that Landsat TM data can provide adequate biomass estimates for secondary succession but are not suitable for mature forest biomass estimates due to data saturation problems. LiDAR can overcome TM’s shortcoming, providing better biomass estimation performance, but has not been extensively applied in practice due to data availability constraints. The uncertainty analysis indicates that various sources affect the performance of forest biomass/carbon estimation. With that said, the clearly dominant sources of uncertainty are the variation in input sample plot data and the data saturation problem of optical sensors. A possible solution for increasing the confidence in forest biomass estimates is to integrate the strengths of multisensor data.

  9. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.

  10. Refinement on geometry of Matuyama-Brunhes polarity transition from paleomagnetic records

    Science.gov (United States)

    Oda, H.; Fabian, K.; Leonhardt, R.

    2012-04-01

    The Bayesian model of the Matuyama/Brunhes (MB) geomagnetic polarity reversal was extended from the previous model IMMAB4 (Leonhardt and Fabian, 2007), which was based on one volcanic record and three sedimentary records from the Atlantic sector. The essential improvement to the model was achieved by incorporating a new volcanic record from Tahiti (Mochizuki et al., 2011). This record is unique in that it contains important absolute paleointensity data for the Pacific region, which provide new constraints on the global geomagnetic reversal scenario. The full vector development of the transitional geomagnetic field in the central Pacific significantly stabilized the solution in this important region, which was completely missing from the previous model IMMAB4. The high-quality sedimentary record of ODP Site 769 by Oda et al. (2000) was previously used only to check the reliability of the model IMMAB4 by comparing the VGP paths of the model and the data. An integrated sedimentary record of ODP Site 769 was developed from Oda et al. (2000) in combination with the relative paleointensity records provided by Schneider et al. (1992) and Kent & Schneider (1995). This record will also be included in the construction of the new model. Additionally, two records from the Antarctic region (Guyodo et al., 2001; Macri et al., 2010) were found, which might prove useful for further refining the model. To this end, we have also revised the data structure and developed a GUI-based correlation software to simplify refinement of the model and further development of the scheme. In the presentation, we will show the revised morphological development of the Earth's magnetic field during the Matuyama-Brunhes polarity transition.

  11. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides an automated resource and schedule model for software development. Combines several cost models found in the open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment, and difficulty of task.

  12. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    Science.gov (United States)

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
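The BMA machinery reduces to one line of algebra: per-model estimates are averaged with weights proportional to each model's marginal likelihood (evidence). A toy sketch under equal model priors (the risk estimates and log evidences below are invented, not values from the paper):

```python
import math

def bma_posterior_mean(estimates, log_evidences):
    """Average per-model estimates by posterior model probability.

    Equal model priors assumed, so the weights are the softmax of the
    log marginal likelihoods.
    """
    m = max(log_evidences)
    weights = [math.exp(le - m) for le in log_evidences]  # overflow-safe
    total = sum(weights)
    probs = [w / total for w in weights]
    mean = sum(p * e for p, e in zip(probs, estimates))
    return mean, probs

# Three hypothetical models of heat-wave mortality relative risk
rr, probs = bma_posterior_mean([1.05, 1.12, 1.08], [-100.0, -98.5, -99.2])
```

Because the averaged estimate mixes over models, its posterior spread includes model uncertainty, which is exactly why the paper finds larger variance under BMA than under single-model selection.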

  13. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.
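For context, the generalized estimators build on the classical Cavalieri estimator, which multiplies the section spacing by the summed section areas; the paper's contribution concerns the variance when the sampling positions are perturbed. A minimal sketch on a unit sphere (a fixed mid-interval start is used for reproducibility, where practice calls for a uniformly random start in [0, d)):

```python
import math

def cavalieri_volume(section_areas, spacing):
    """Classical Cavalieri estimator: volume ~ spacing x sum of section areas."""
    return spacing * sum(section_areas)

# Unit sphere sectioned along z every d; area(z) = pi * (1 - z^2).
d = 0.01
zs = [-1.0 + d / 2.0 + i * d for i in range(int(2.0 / d))]
areas = [math.pi * max(0.0, 1.0 - z * z) for z in zs]
vol = cavalieri_volume(areas, d)  # close to the true volume 4*pi/3
```

The errors-in-position setting studied in the paper would jitter each z before evaluating the section area; the estimator formula itself is unchanged.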

  14. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    Science.gov (United States)

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides
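The plasma RIR itself is simple arithmetic on a single blood sample. A dose-normalized form is sketched below; this is an illustrative rendering, not the authors' exact formula, and the molar normalization used in the study may differ:

```python
def retinol_isotope_ratio(bc_derived, ref_derived, bc_dose, ref_dose):
    """Dose-normalized plasma retinol isotope ratio (illustrative form).

    Relative bioefficacy ~ tracer retinol derived from labeled
    beta-carotene over tracer retinol derived from the labeled retinyl
    acetate reference, each per unit ingested dose.
    """
    return (bc_derived / bc_dose) / (ref_derived / ref_dose)

# If beta-carotene yields a quarter as much labeled retinol per unit
# dose as the reference, relative bioefficacy is 0.25 (made-up numbers)
bioefficacy = retinol_isotope_ratio(2.0, 8.0, 1.0, 1.0)
```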

  15. State Estimation for Tensegrity Robots

    Science.gov (United States)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
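At the heart of the UKF is the unscented transform: a small set of deterministic sigma points carries a mean and covariance through a nonlinearity. A 1-D sketch with simplified weights (alpha = 1, beta = 0; a full UKF adds the predict/update cycle and fuses the inertial, ranging, and actuator measurements the abstract describes):

```python
import math

def unscented_transform_1d(mean, var, f, kappa=2.0):
    """Propagate a 1-D Gaussian (mean, var) through f with sigma points.

    Simplified weights (alpha = 1, beta = 0); exact for linear f.
    """
    n = 1
    lam = kappa  # alpha^2 * (n + kappa) - n reduces to kappa when alpha = 1
    s = math.sqrt((n + lam) * var)
    points = [mean, mean + s, mean - s]
    w = [lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)]
    ys = [f(p) for p in points]
    ym = sum(wi * y for wi, y in zip(w, ys))
    yv = sum(wi * (y - ym) ** 2 for wi, y in zip(w, ys))
    return ym, yv

# Linear sanity check: y = 2x + 1 maps N(3, 4) to N(7, 16)
ym, yv = unscented_transform_1d(3.0, 4.0, lambda x: 2.0 * x + 1.0)
```

The appeal for tensegrity robots is that the transform needs only forward evaluations of the (highly nonlinear) dynamics, no Jacobians.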

  16. Relative and single particle diffusion estimates determined from smoke plume photographs

    International Nuclear Information System (INIS)

    Nappo, C.J. Jr.

    1978-01-01

    The formula given by Gifford (1959) for obtaining space-varying values of particle dispersion parameters from photographs of smoke puffs and plumes has been applied to high-altitude U-2 photographs of a long smoke plume generated at the Idaho National Engineering Laboratory near Idaho Falls. The turbulence time scale derived from the photographs was found to be in good agreement with estimates obtained within the framework of single- and two-particle diffusion theory applied to wind speed and direction data from a tower near the smoke source
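Gifford's idea can be caricatured in one line: if the plume's crosswind profile is Gaussian and its visible edge lies where concentration drops to some fraction of the centerline value, a measured half-width converts directly to the dispersion parameter sigma_y. The edge ratio of 10 below is an illustrative assumption, not Gifford's calibrated threshold:

```python
import math

def sigma_y_from_plume_width(half_width, edge_ratio=10.0):
    """Convert a visible plume half-width to sigma_y for a Gaussian profile.

    Assumes the visible edge lies where concentration has fallen to
    1/edge_ratio of the centerline value (edge_ratio = 10 is illustrative).
    """
    return half_width / math.sqrt(2.0 * math.log(edge_ratio))

# A 150 m visible half-width on the photograph implies sigma_y near 70 m
sigma_y = sigma_y_from_plume_width(150.0)
```

Repeating this at successive downwind distances along the photographed plume yields the space-varying dispersion values the abstract refers to.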

  17. Challenges Associated with Estimating Utility in Wet Age-Related Macular Degeneration: A Novel Regression Analysis to Capture the Bilateral Nature of the Disease.

    Science.gov (United States)

    Hodgson, Robert; Reason, Timothy; Trueman, David; Wickstead, Rose; Kusel, Jeanette; Jasilek, Adam; Claxton, Lindsay; Taylor, Matthew; Pulikottil-Jacob, Ruth

    2017-10-01

    The estimation of utility values for the economic evaluation of therapies for wet age-related macular degeneration (AMD) is a particular challenge. Previous economic models in wet AMD have been criticized for failing to capture the bilateral nature of the disease by modelling visual acuity (VA) and utility values associated with the better-seeing eye only. Here we present a de novo regression analysis using generalized estimating equations (GEE) applied to a previous dataset of time trade-off (TTO)-derived utility values from a sample of the UK population that wore contact lenses to simulate the visual deterioration of wet AMD. This analysis allows utility values to be estimated as a function of VA in both the better-seeing eye (BSE) and worse-seeing eye (WSE). VAs in both the BSE and WSE were found to be statistically significant predictors of utility. This regression analysis provides a possible source of utility values that allows future economic models to capture the quality-of-life impact of changes in VA in both eyes. Novartis Pharmaceuticals UK Limited.

  18. Borderline features are associated with inaccurate trait self-estimations.

    Science.gov (United States)

    Morey, Leslie C

    2014-01-01

    Many treatments for Borderline Personality Disorder (BPD) are based on the hypothesis that gross distortions in perceptions and attributions related to self and others represent a core mechanism for the enduring difficulties displayed by such patients. However, available experimental evidence of such distortions provides equivocal results, with some studies suggesting that BPD is related to inaccuracy in such perceptions and others indicating enhanced accuracy in some judgments. The current study uses a novel methodology to explore whether individuals with BPD features are less accurate in estimating their levels of universal personality characteristics relative to community norms. One hundred and four students received course instruction on the Five Factor Model of personality and were then asked to estimate their levels of these five traits relative to community norms. They then completed the NEO Five-Factor Inventory and the Personality Assessment Inventory-Borderline Features scale (PAI-BOR). Accuracy of estimates was calculated by computing squared differences between self-estimated trait levels and norm-referenced standardized scores on the NEO-FFI. There was a moderately strong relationship between PAI-BOR score and inaccuracy of trait-level estimates. In particular, high-BOR individuals dramatically overestimated their levels of Agreeableness and Conscientiousness, estimating themselves to be slightly above average on each of these characteristics while actually scoring well below average on both. The accuracy of estimates of levels of Neuroticism was unrelated to BOR scores, despite the fact that BOR scores were highly correlated with Neuroticism. These findings support the hypothesis that a key feature of BPD involves marked perceptual distortions of various aspects of self in relation to others. However, the results also indicate that this is not a global perceptual deficit, as high BOR scorers accurately estimated that their emotional

  19. Project schedule and cost estimate report

    International Nuclear Information System (INIS)

    1988-03-01

    All cost tables represent obligation dollars, at both a constant FY 1987 level and an estimated escalation level, and are based on the FY 1989 DOE Congressional Budget submittal of December 1987. The cost tables display the total UMTRA Project estimated costs, which include both Federal and state funding. The Total Estimated Cost (TEC) for the UMTRA Project is approximately $992.5 million (in 1987 escalated dollars). Project schedules have been developed that provide for Project completion by September 1994, subject to Congressional approval extending DOE's authorization under Public Law 95-604. The report contains site-specific demographic data, conceptual design assumptions, preliminary cost estimates, and site schedules. A general project overview is also presented, which includes a discussion of the basis for the schedule and cost estimates, contingency assumptions, work breakdown structure, and potential project risks. The schedules and cost estimates will be revised as necessary to reflect appropriate decisions relating to relocation of certain tailings piles, or other special design considerations or circumstances (such as revised EPA groundwater standards), and changes in the Project mission. 27 figs., 97 tabs.

  20. Estimation of the simple correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of simple correlation coefficient. Although Pearson's r is biased, except for limited situations, and the minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications, because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
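One of the prominent correction formulas in this literature is the Olkin-Pratt-style expansion, which inflates r by a factor depending on n. A sketch is shown below; the exact series compared in the article may include higher-order terms:

```python
def approx_unbiased_r(r, n):
    """Approximately unbiased correlation estimate (Olkin-Pratt-style).

    Pearson's r slightly underestimates |rho|; the leading-order
    correction inflates it by 1 + (1 - r^2) / (2 * (n - 3)).
    """
    if n <= 3:
        raise ValueError("need n > 3")
    return r * (1.0 + (1.0 - r * r) / (2.0 * (n - 3)))

# The correction shrinks as the sample grows
r_small = approx_unbiased_r(0.5, 23)     # modest inflation at n = 23
r_large = approx_unbiased_r(0.5, 1003)   # nearly no inflation at n = 1003
```

The article's point is that in mean-squared-error terms the uncorrected r can still win, because the bias correction adds variance.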

  1. Parameter Estimation of a Reliability Model of Demand-Caused and Standby-Related Failures of Safety Components Exposed to Degradation by Demand Stress and Ageing That Undergo Imperfect Maintenance

    Directory of Open Access Journals (Sweden)

    S. Martorell

    2017-01-01

    Full Text Available One can find many reliability, availability, and maintainability (RAM) models proposed in the literature. However, such models become more complex day after day, as there is an attempt to capture equipment performance in a more realistic way, such as, explicitly addressing the effect of component ageing and degradation, surveillance activities, and corrective and preventive maintenance policies. Then, there is a need to fit the best model to real data by estimating the model parameters using an appropriate tool. This problem is not easy to solve in some cases since the number of parameters is large and the available data is scarce. This paper considers two main failure models commonly adopted to represent the probability of failure on demand (PFD) of safety equipment: (1) demand-caused and (2) standby-related failures. It proposes a maximum likelihood estimation (MLE) approach for parameter estimation of a reliability model of demand-caused and standby-related failures of safety components exposed to degradation by demand stress and ageing that undergo imperfect maintenance. The case study considers real failure, test, and maintenance data for a typical motor-operated valve in a nuclear power plant. The results of the parameters estimation and the adoption of the best model are discussed.
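In the simplest combined form, the two failure modes give a per-demand failure probability of roughly PFD(t) = rho + lambda*t, where rho is the demand-caused probability and lambda*t the standby-related contribution at time t since the last test. A toy MLE sketch over synthetic grouped demand data follows; the paper's actual model adds ageing and imperfect-maintenance terms not shown here, and a real analysis would use a numerical optimizer rather than a grid search:

```python
import math

def neg_log_lik(rho, lam, data):
    """Bernoulli negative log-likelihood for PFD(t) = rho + lam*t.

    data: list of (time_since_test, n_demands, n_failures) tuples.
    """
    nll = 0.0
    for t, n, k in data:
        p = min(max(rho + lam * t, 1e-12), 1.0 - 1e-12)
        nll -= k * math.log(p) + (n - k) * math.log(1.0 - p)
    return nll

def fit_grid(data, n_grid=60):
    """Crude grid-search MLE over (rho, lam)."""
    best = None
    for i in range(n_grid + 1):
        for j in range(n_grid + 1):
            rho = 0.05 * i / n_grid
            lam = 0.001 * j / n_grid
            v = neg_log_lik(rho, lam, data)
            if best is None or v < best[0]:
                best = (v, rho, lam)
    return best[1], best[2]

# Synthetic data consistent with rho ~ 0.026, lam ~ 4.4e-4 (invented)
data = [(10.0, 1000, 30), (100.0, 1000, 70)]
rho_hat, lam_hat = fit_grid(data)
```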

  2. Criticality accident in uranium fuel processing plant. The estimation of the total number of fissions with related reactor physics parameters

    International Nuclear Information System (INIS)

    Nishina, Kojiro; Oyamatsu, Kazuhiro; Kondo, Shunsuke; Sekimoto, Hiroshi; Ishitani, Kazuki; Yamane, Yoshihiro; Miyoshi, Yoshinori

    2000-01-01

    This accident occurred when workers, contrary to the established procedure, manually poured a uranium solution into a precipitation tank whose cylindrical diameter and total uranium mass both exceeded their limiting values. As a result, the fission chain reaction in the solution not only reached a self-sustaining 'criticality' state but momentarily exceeded criticality, further increasing the number of fissions. The accident took place not in a reactor but in a uranium fuel processing facility, a place that must never be allowed to reach 'criticality'. Because the mechanism of the chain reaction is involved, knowledge of reactor physics is naturally required; it is also necessary to understand the chemical reactions in the process and the functions of the tanks, valves, and pumps installed there. To determine the critical volume of the accident uranium solution by nuclear-physics methods, information was needed on the uranium enrichment, the atomic densities of the nuclides that strongly affect the chain reaction (such as uranium and hydrogen), the shape, internal structure, and size of the solution container, and the solution's temperature and total volume. Described here are the estimation of the energy release in the JCO accident, estimates based on analytical results for neutrons and the solution, and calculations of various nuclear-physics properties of the JCO precipitation tank performed at JAERI. (G.K.)

  3. Can genetic estimators provide robust estimates of the effective number of breeders in small populations?

    Directory of Open Access Journals (Sweden)

    Marion Hoehn

    Full Text Available The effective population size (Ne) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata to compare genetic and demographic estimators of Ne. Because geckos have overlapping generations, our goal was to demographically estimate NbI, the inbreeding effective number of breeders, and to calculate the NbI/Na ratio (Na = number of adults) for four populations. Demographically estimated NbI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (NbI/Na) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to reduction of this ratio. We used four methods to obtain genetically based estimates of the inbreeding effective number of breeders, NbI(gen), and the variance effective population size, NeV(gen), from the genotype data. Two of these methods - a temporal moment-based approach (MBT) and a likelihood-based approach (TM3) - require at least two samples in time, while the other two are single-sample estimators - the linkage disequilibrium method with bias correction (LDNe) and the program ONeSAMP. The genetically based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those estimates whose upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, temporal methods suffered from large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate NbI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.
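The linkage-disequilibrium family of estimators rests on a simple relation: in a small population, drift generates excess squared correlation (r^2) between unlinked loci beyond the 1/S expected from sampling S individuals. A simplified Hill-style form is sketched below; the LDNe program cited in the abstract applies further bias corrections not reproduced here:

```python
def ld_ne(mean_r2, sample_size):
    """Linkage-disequilibrium estimate of Ne (simplified Hill-style form).

    Uses the approximation E[r^2] ~ 1/(3*Ne) + 1/S for sample size S,
    giving Ne ~ 1 / (3 * (r^2 - 1/S)).
    """
    adjusted = mean_r2 - 1.0 / sample_size
    if adjusted <= 0.0:
        return float("inf")  # sampling noise swamps the drift signal
    return 1.0 / (3.0 * adjusted)

# With Ne = 10 and S = 50, the expected mean r^2 is 1/30 + 1/50
ne_hat = ld_ne(1.0 / 30.0 + 1.0 / 50.0, 50)
```

The infinite-estimate branch mirrors the abstract's observation that some genetic estimates carry uninformative upper confidence bounds.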

  4. Structural estimation of jump-diffusion processes in macroeconomics

    DEFF Research Database (Denmark)

    Posch, Olaf

    2009-01-01

    This paper shows how to solve and estimate a continuous-time dynamic stochastic general equilibrium (DSGE) model with jumps. It also shows that a continuous-time formulation can make it simpler (relative to its discrete-time version) to compute and estimate the deep parameters using the likelihood...

  5. Estimating regional methane surface fluxes: the relative importance of surface and GOSAT mole fraction measurements

    Directory of Open Access Journals (Sweden)

    A. Fraser

    2013-06-01

    Full Text Available We use an ensemble Kalman filter (EnKF), together with the GEOS-Chem chemistry transport model, to estimate regional monthly methane (CH4) fluxes for the period June 2009–December 2010 using proxy dry-air column-averaged mole fractions of methane (XCH4) from GOSAT (Greenhouse gases Observing SATellite) and/or NOAA ESRL (Earth System Research Laboratory) and CSIRO GASLAB (Global Atmospheric Sampling Laboratory) CH4 surface mole fraction measurements. Global posterior estimates using GOSAT and/or surface measurements are between 510–516 Tg yr−1, which is less than, though within the uncertainty of, the prior global flux of 529 ± 25 Tg yr−1. We find larger differences between regional prior and posterior fluxes, with the largest changes in monthly emissions (75 Tg yr−1) occurring in Temperate Eurasia. In non-boreal regions the error reductions for inversions using the GOSAT data are at least three times larger (up to 45%) than if only surface data are assimilated, a reflection of the greater spatial coverage of GOSAT, with the two exceptions of latitudes >60° (associated with a data filter) and over Europe, where the surface network adequately describes fluxes on our model spatial and temporal grid. We use CarbonTracker and GEOS-Chem XCO2 model output to investigate model error in quantifying proxy GOSAT XCH4 (involving model XCO2) and in inferring methane flux estimates from surface mole fraction data, and show similar resulting fluxes, with differences reflecting initial differences in the proxy value. Using a series of observing system simulation experiments (OSSEs) we characterize the posterior flux error introduced by non-uniform atmospheric sampling by GOSAT. We show that clear-sky measurements can theoretically reproduce fluxes within 10% of true values, with the exception of tropical regions where, due to a large seasonal cycle in the number of measurements because of clouds and aerosols, fluxes are within 15% of true fluxes. We evaluate our

  6. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    Science.gov (United States)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards over relatively broad regions of the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and, generally, their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historical storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  7. Marine sediments and Beryllium-10 record of the geomagnetic moment variations during the Brunhes period.

    Science.gov (United States)

    Ménabréaz, Lucie; Thouveny, Nicolas; Bourlès, Didier; Demory, François

    2010-05-01

    Over millennial time scales, the atmospheric production of the cosmonuclide 10Be (half-life 1.387 ± 0.012 Ma [Shmeleff et al., 2009; Korschinek et al., 2009]) is modulated by the geomagnetic field strength, following a negative power law (e.g. Lal, 1988; Masarik and Beer, 2009). With respect to paleomagnetic reconstructions, 10Be-derived paleointensity records can therefore constitute an alternative, global, and independent reading of dipole moment variations. In recent years, efforts have been made to extract a geomagnetic signal from single and stacked 10Be records in natural archives such as ice and marine sediments (e.g. Carcaillet et al., 2004; Christl et al., 2007; Muscheler et al., 2005). In marine sediments, the 10Be concentration results from the complex interplay of several processes: cosmogenic production, adsorption on sediment particles, redistribution by fluvial and oceanic transport, and deposition. Therefore, a correction procedure is required to account for both sediment redistribution and enhanced scavenging, which can alter the primary signatures. To reconstruct the succession of field intensity lows accompanying excursions during the Brunhes chron, we investigated authigenic 10Be/9Be records of marine sequences also studied for paleomagnetism and oxygen isotopes. Mid- and low-latitude sites were preferred in order to benefit from the most efficient modulation by the magnetospheric shielding. We present a high-resolution authigenic 10Be/9Be record of the last 50 ka recovered from the Portuguese Margin, which deciphers the 10Be overproduction created by the geomagnetic dipole low associated with the Laschamp excursion. This record is compared to other proxy records of geomagnetic field variations for the same time interval: (1) the relative paleointensity (RPI) reconstructed from the same sediments and the GLOPIS-75 record (Laj et al., 2004), and (2) the absolute VDM record based on absolute paleointensities measured on lava flows
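The negative power law can be inverted to read a relative dipole moment off a 10Be production anomaly. The exponent below is a rough illustrative value, not the published Lal (1988) or Masarik and Beer (2009) transfer function, which is nonlinear and latitude-dependent:

```python
def relative_dipole_moment(production_ratio, exponent=0.5):
    """Invert a power-law production modulation P/P0 = (M/M0)**(-exponent).

    exponent = 0.5 is a rough illustrative value only; real transfer
    functions from the literature should be used for actual records.
    """
    return production_ratio ** (-1.0 / exponent)

# Under this toy law, a doubling of 10Be production implies M/M0 = 0.25
m_ratio = relative_dipole_moment(2.0)
```

This is the sense in which the 10Be/9Be record provides an independent reading of the dipole lows seen in the RPI and absolute VDM records.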

  8. Estimating changes in unrecorded alcohol consumption in Norway using indicators of harm.

    Science.gov (United States)

    Norström, T

    1998-10-01

    To assess the value of using indicators of alcohol-related harm to estimate changes in unrecorded per capita consumption of alcohol. Unrecorded consumption was estimated from the discrepancy between the observed changes in a number of alcohol-related harm indicators and the changes that would be expected from changes in recorded consumption. The results were compared with estimates of unrecorded consumption from survey data. Four indicators of alcohol-related harm were used: alcohol-related mortality, assaults, drunken driving, and suicide. Estimates of unrecorded consumption from survey data for five different years were used as benchmarks. The best performing indicators were alcohol-related mortality, suicide and assaults, in that order. Combining these indicators yielded a prediction error averaging 12% in comparison with the benchmarks. The method seems worthy of further applications, but it should be regarded as a supplement rather than as a substitute for other approaches.

  9. A relative vulnerability estimation of flood disaster using data envelopment analysis in the Dongting Lake region of Hunan

    Directory of Open Access Journals (Sweden)

    C.-H. Li

    2013-07-01

    On a temporal scale, an oscillating trend in flood vulnerability is observed, while the disaster driver risk level, disaster environment stability level, and disaster bearer sensitivity level each display a different picture. The DEA-based relative flood vulnerability estimation method is characterized by good comparability: it takes the relative efficiency of disaster-system input-output into account and portrays a diverse but consistent picture across varying time steps. Therefore, disaster situations can be compared across different spatial and temporal domains on the basis of the same disaster measures. Additionally, the method overcomes the subjectivity of comprehensive flood indices caused by a priori weighting systems, which affects current disaster vulnerability estimation.

  10. Numerical estimation in individuals with Down syndrome.

    Science.gov (United States)

    Lanfranchi, Silvia; Berteletti, Ilaria; Torrisi, Erika; Vianello, Renzo; Zorzi, Marco

    2014-10-31

    We investigated numerical estimation in children with Down syndrome (DS) in order to assess whether their pattern of performance is tied to experience (age) or overall cognitive level, or is specifically impaired. Siegler and Opfer's (2003) number-to-position task, which requires translating a number into a spatial position on a number line, was administered to a group of 21 children with DS and to two control groups of typically developing (TD) children, matched for mental and chronological age. Results suggest that numerical estimation and the developmental transition between logarithmic and linear patterns of estimates in children with DS are more similar to those of children with the same mental age than to children with the same chronological age. Moreover, linearity was related to cognitive level in DS, while in TD children it was related to experience. Copyright © 2014. Published by Elsevier Ltd.
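Operationally, Siegler and Opfer's log-versus-linear comparison is a model-fit contest: regress each child's number-line placements on the target number and on its logarithm, and compare fits. A self-contained sketch with a synthetic "logarithmic responder" (the placement data are invented for illustration):

```python
import math

def ols(xs, ys):
    """Ordinary least squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def r_squared(xs, ys, predict):
    """Coefficient of determination for a fitted prediction function."""
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    ss_res = sum((y - predict(x)) ** 2 for x, y in zip(xs, ys))
    return 1.0 - ss_res / ss_tot

# A purely logarithmic responder on a 1-100 number line
xs = [1, 2, 5, 10, 20, 50, 100]
ys = [100.0 * math.log(x) / math.log(100.0) for x in xs]

a_lin, b_lin = ols(xs, ys)
a_log, b_log = ols([math.log(x) for x in xs], ys)
r2_lin = r_squared(xs, ys, lambda x: a_lin + b_lin * x)
r2_log = r_squared(xs, ys, lambda x: a_log + b_log * math.log(x))
```

For this responder the logarithmic model fits essentially perfectly while the linear model does not; the developmental transition the abstract describes is a shift in which model wins.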

  11. Estimating Torque Imparted on Spacecraft Using Telemetry

    Science.gov (United States)

    Lee, Allan Y.; Wang, Eric K.; Macala, Glenn A.

    2013-01-01

    There have been a number of missions with spacecraft flying by planetary moons with atmospheres, and there will be future missions with similar flybys. When a spacecraft such as Cassini flies by a moon with an atmosphere, it experiences an atmospheric torque. This torque could be used to determine the density of the atmosphere, because the relation between the atmospheric torque vector and the atmospheric density can be established analytically using the mass properties of the spacecraft, the known drag coefficient of objects in free-molecular flow, and the spacecraft velocity relative to the moon. The density estimated in this way could be used to check results measured by science instruments. Since the proposed methodology can estimate disturbance torques as small as 0.02 N-m, it could also be used to estimate disturbance torques imparted on the spacecraft during high-altitude flybys.
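The inversion at the core of this idea can be sketched for the simplest geometry: a single drag force applied at a known offset from the center of mass. All numbers and the single-surface assumption below are illustrative; the spacecraft-level calculation sums contributions over surfaces and solves for the torque vector:

```python
def density_from_torque(torque, drag_coeff, area, speed, moment_arm):
    """Invert the free-molecular drag-torque relation for density (sketch).

    Assumes one drag force F = 0.5 * rho * Cd * A * v^2 acting at a known
    offset from the center of mass, so torque = F * moment_arm.
    """
    return 2.0 * torque / (drag_coeff * area * speed ** 2 * moment_arm)

# Round trip with made-up flyby numbers (Cd, area, speed, arm invented)
rho_true = 1e-10                                          # kg/m^3
tau = 0.5 * rho_true * 2.2 * 20.0 * 6000.0 ** 2 * 1.5     # implied N-m
rho_est = density_from_torque(tau, 2.2, 20.0, 6000.0, 1.5)
```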

  12. Lower Bounds to the Reliabilities of Factor Score Estimators.

    Science.gov (United States)

    Hessen, David J

    2016-10-06

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.

  13. An In-vivo investigation of transverse flow estimation

    DEFF Research Database (Denmark)

    Udesen, Jesper; Jensen, Jørgen Arendt

    2004-01-01

    , and 1.4 seconds of data is acquired. Using 2 parallel receive beamformers a transverse oscillation is introduced with an oscillation period 1.2 mm. The velocity estimation is performed using an extended autocorrelation algorithm. The volume flow can be estimated with a relative standard deviation of 13...

  14. Estimating cancer risk in relation to tritium exposure from routine operation of a nuclear-generating station in Pickering, Ontario.

    Science.gov (United States)

    Wanigaratne, S; Holowaty, E; Jiang, H; Norwood, T A; Pietrusiak, M A; Brown, P

    2013-09-01

    Evidence suggests that current levels of tritium emissions from CANDU reactors in Canada are not related to adverse health effects. However, these studies lack tritium-specific dose data and have small numbers of cases. The purpose of our study was to determine whether tritium emitted from a nuclear-generating station during routine operation is associated with risk of cancer in Pickering, Ontario. A retrospective cohort was formed through linkage of Pickering and north Oshawa residents (1985) to incident cancer cases (1985-2005). We examined all sites combined, leukemia, lung, thyroid and childhood cancers (6-19 years) for males and females as well as female breast cancer. Tritium estimates were based on an atmospheric dispersion model, incorporating characteristics of annual tritium emissions and meteorology. Tritium concentration estimates were assigned to each cohort member based on exact location of residence. Person-years analysis was used to determine whether observed cancer cases were higher than expected. Cox proportional hazards regression was used to determine whether tritium was associated with radiation-sensitive cancers in Pickering. Person-years analysis showed female childhood cancer cases to be significantly higher than expected (standardized incidence ratio [SIR] = 1.99, 95% confidence interval [CI]: 1.08-3.38). The issue of multiple comparisons is the most likely explanation for this finding. Cox models revealed that female lung cancer was significantly higher in Pickering versus north Oshawa (HR = 2.34, 95% CI: 1.23-4.46) and that tritium was not associated with increased risk. The improved methodology used in this study adds to our understanding of cancer risks associated with low-dose tritium exposure. Tritium estimates were not associated with increased risk of radiation-sensitive cancers in Pickering.
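The SIR arithmetic in a person-years analysis is observed over expected cases with a Poisson confidence interval. Byar's approximation gives a closed-form interval without special functions; this is a sketch with invented counts, and the study may have used exact Poisson limits instead:

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio with Byar's approximate Poisson CI."""
    sir = observed / expected
    o = observed
    lower = o * (1.0 - 1.0 / (9.0 * o)
                 - z / (3.0 * math.sqrt(o))) ** 3 / expected
    o1 = observed + 1
    upper = o1 * (1.0 - 1.0 / (9.0 * o1)
                  + z / (3.0 * math.sqrt(o1))) ** 3 / expected
    return sir, lower, upper

# 15 observed cases against 10.0 expected (illustrative numbers)
sir, lo, hi = sir_with_ci(15, 10.0)
```

An interval excluding 1.0, as for the female childhood cancer SIR in the abstract, flags a significant excess, subject to the multiple-comparisons caveat the authors raise.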

  15. Estimation of morbidity effects

    International Nuclear Information System (INIS)

    Ostro, B.

    1994-01-01

    Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such analysis would include a careful exploration of the primary hypothesis as well as possible examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates that were judged as most representative of the overall findings were those summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10

  16. Estimates of Fermilab Tevatron collider performance

    International Nuclear Information System (INIS)

    Dugan, G.

    1991-09-01

    This paper describes a model which has been used to estimate the average luminosity performance of the Tevatron collider. In the model, the average luminosity is related quantitatively to various performance parameters of the Fermilab Tevatron collider complex. The model is useful in allowing estimates to be developed for the improvements in average collider luminosity to be expected from changes in the fundamental performance parameters as a result of upgrades to various parts of the accelerator complex.

  17. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  18. An Inertial Sensor-Based Method for Estimating the Athlete's Relative Joint Center Positions and Center of Mass Kinematics in Alpine Ski Racing

    Directory of Open Access Journals (Sweden)

    Benedikt Fasel

    2017-11-01

    Full Text Available For the purpose of gaining a deeper understanding of the relationship between external training load and health in competitive alpine skiing, an accurate and precise estimation of the athlete's kinematics is an essential methodological prerequisite. This study proposes an inertial sensor-based method to estimate the athlete's relative joint center positions and center of mass (CoM) kinematics in alpine skiing. Eleven inertial sensors were fixed to the lower and upper limbs, trunk, and head. The relative positions of the ankle, knee, hip, shoulder, elbow, and wrist joint centers, as well as the athlete's CoM kinematics were validated against a marker-based optoelectronic motion capture system during indoor carpet skiing. For all joint centers analyzed, position accuracy (mean error) was below 110 mm and precision (error standard deviation) was below 30 mm. CoM position accuracy and precision were 25.7 and 6.7 mm, respectively. Both the accuracy and precision of the system to estimate the distance between the ankle of the outside leg and CoM (a measure quantifying the skier's overall vertical motion) were found to be below 11 mm. Somewhat poorer accuracy and precision values (below 77 mm) were observed for the athlete's fore-aft position (i.e., the projection of the outer ankle-CoM vector onto the line corresponding to the projection of the ski's longitudinal axis on the snow surface). In addition, the system was found to be sensitive enough to distinguish between different types of turns (wide/narrow). Thus, the method proposed in this paper may also provide a useful, pervasive way to monitor and control adverse external loading patterns that occur during regular on-snow training. Moreover, as demonstrated earlier, such an approach might have a certain potential to quantify competition time, movement repetitions and/or the accelerations acting on the different segments of the human body. However, prior to becoming feasible for applications in daily training

  19. An Inertial Sensor-Based Method for Estimating the Athlete's Relative Joint Center Positions and Center of Mass Kinematics in Alpine Ski Racing.

    Science.gov (United States)

    Fasel, Benedikt; Spörri, Jörg; Schütz, Pascal; Lorenzetti, Silvio; Aminian, Kamiar

    2017-01-01

    For the purpose of gaining a deeper understanding of the relationship between external training load and health in competitive alpine skiing, an accurate and precise estimation of the athlete's kinematics is an essential methodological prerequisite. This study proposes an inertial sensor-based method to estimate the athlete's relative joint center positions and center of mass (CoM) kinematics in alpine skiing. Eleven inertial sensors were fixed to the lower and upper limbs, trunk, and head. The relative positions of the ankle, knee, hip, shoulder, elbow, and wrist joint centers, as well as the athlete's CoM kinematics were validated against a marker-based optoelectronic motion capture system during indoor carpet skiing. For all joint centers analyzed, position accuracy (mean error) was below 110 mm and precision (error standard deviation) was below 30 mm. CoM position accuracy and precision were 25.7 and 6.7 mm, respectively. Both the accuracy and precision of the system to estimate the distance between the ankle of the outside leg and CoM (measure quantifying the skier's overall vertical motion) were found to be below 11 mm. Somewhat poorer accuracy and precision values (below 77 mm) were observed for the athlete's fore-aft position (i.e., the projection of the outer ankle-CoM vector onto the line corresponding to the projection of ski's longitudinal axis on the snow surface). In addition, the system was found to be sensitive enough to distinguish between different types of turns (wide/narrow). Thus, the method proposed in this paper may also provide a useful, pervasive way to monitor and control adverse external loading patterns that occur during regular on-snow training. Moreover, as demonstrated earlier, such an approach might have a certain potential to quantify competition time, movement repetitions and/or the accelerations acting on the different segments of the human body. However, prior to becoming feasible for applications in daily training, future studies
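    The whole-body CoM in segment-based methods like the one above is conventionally the mass-weighted average of per-segment CoM positions. A minimal sketch of that step; the equal mass fractions in the test are illustrative placeholders, not the paper's anthropometric values:

```python
def body_com(segment_coms, mass_fractions):
    """Whole-body centre of mass as the mass-fraction-weighted mean of
    per-segment CoM positions (fractions must sum to 1)."""
    assert abs(sum(mass_fractions) - 1.0) < 1e-9
    dims = len(segment_coms[0])
    return tuple(
        sum(f * pos[d] for pos, f in zip(segment_coms, mass_fractions))
        for d in range(dims)
    )
```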

  20. Estimating carbon stock in secondary forests

    DEFF Research Database (Denmark)

    Breugel, Michiel van; Ransijn, Johannes; Craven, Dylan

    2011-01-01

    …is the use of allometric regression models to convert forest inventory data to estimates of aboveground biomass (AGB). The use of allometric models implies decisions on the selection of extant models or the development of a local model, the predictor variables included in the selected model, and the number of trees and species for destructive biomass measurements. We assess uncertainties associated with these decisions using data from 94 secondary forest plots in central Panama and 244 harvested trees belonging to 26 locally abundant species. AGB estimates from species-specific models were used to assess relative errors of estimates from multispecies models. To reduce uncertainty in the estimation of plot AGB, including wood specific gravity (WSG) in the model was more important than the number of trees used for model fitting. However, decreasing the number of trees increased uncertainty of landscape…

  1. ROAD TRAFFIC ESTIMATION USING BLUETOOTH SENSORS

    Directory of Open Access Journals (Sweden)

    Monika N. BUGDOL

    2017-09-01

    Full Text Available The Bluetooth standard is a low-cost, very popular communication protocol offering a wide range of applications in many fields. In this paper, a novel system for road traffic estimation using Bluetooth sensors is presented. The system consists of three main modules: filtration, statistical analysis of historical data, and traffic estimation and prediction. The filtration module is responsible for the classification of road users and for detecting measurements that should be removed. Traffic estimation was performed on the basis of the data collected by Bluetooth measuring devices and information on external conditions (e.g., temperature), all of which were gathered in the city of Bielsko-Biala (Poland). The obtained results are very promising. The smallest average relative error between the number of cars estimated by the model and the actual traffic was less than 10%.
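    The reported accuracy figure is an average relative error between model counts and observed traffic; a minimal sketch of that metric (the input counts are illustrative):

```python
def mean_relative_error(estimated, observed):
    """Average relative error between estimated and actual traffic counts."""
    return sum(abs(e - o) / o for e, o in zip(estimated, observed)) / len(observed)
```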

  2. Estimating Population Turnover Rates by Relative Quantification Methods Reveals Microbial Dynamics in Marine Sediment.

    Science.gov (United States)

    Kevorkian, Richard; Bird, Jordan T; Shumaker, Alexander; Lloyd, Karen G

    2018-01-01

    The difficulty involved in quantifying biogeochemically significant microbes in marine sediments limits our ability to assess interspecific interactions, population turnover times, and niches of uncultured taxa. We incubated surface sediments from Cape Lookout Bight, North Carolina, USA, anoxically at 21°C for 122 days. Sulfate decreased until day 68, after which methane increased, with hydrogen concentrations consistent with the predicted values of an electron donor exerting thermodynamic control. We measured turnover times using two relative quantification methods, quantitative PCR (qPCR) and the product of 16S gene read abundance and total cell abundance (FRAxC, which stands for "fraction of read abundance times cells"), to estimate the population turnover rates of uncultured clades. Most 16S rRNA reads were from deeply branching uncultured groups, and ∼98% of 16S rRNA genes did not abruptly shift in relative abundance when sulfate reduction gave way to methanogenesis. Uncultured Methanomicrobiales and Methanosarcinales increased at the onset of methanogenesis with population turnover times estimated from qPCR at 9.7 ± 3.9 and 12.6 ± 4.1 days, respectively. These were consistent with FRAxC turnover times of 9.4 ± 5.8 and 9.2 ± 3.5 days, respectively. Uncultured Syntrophaceae, which are possibly fermentative syntrophs of methanogens, and uncultured Kazan-3A-21 archaea also increased at the onset of methanogenesis, with FRAxC turnover times of 14.7 ± 6.9 and 10.6 ± 3.6 days. Kazan-3A-21 may therefore either perform methanogenesis or form a fermentative syntrophy with methanogens. Three genera of sulfate-reducing bacteria, Desulfovibrio, Desulfobacter, and Desulfobacterium, increased in the first 19 days before declining rapidly during sulfate reduction. We conclude that population turnover times on the order of days can be measured robustly in organic-rich marine sediment, and the transition from sulfate-reducing to methanogenic conditions stimulates
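    The FRAxC proxy above is a simple product, and a turnover (doubling) time follows from two abundance measurements if exponential growth is assumed between them. A toy sketch of both steps (the numbers in the test are illustrative, not the study's data):

```python
import math

def fraxc(read_fraction, total_cells):
    """FRAxC proxy abundance: 16S rRNA gene read fraction times total cell count."""
    return read_fraction * total_cells

def turnover_time(n_start, n_end, t_start, t_end):
    """Apparent population turnover (doubling) time, assuming exponential
    growth between two abundance measurements."""
    return (t_end - t_start) * math.log(2) / math.log(n_end / n_start)
```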

  3. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per each 1-m² plot, if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of data to probability distribution, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if the plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of population density of cyst nematodes.
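    The precision measure used above is a coefficient of variation over repeated bulk-sample counts from one plot; a one-line sketch (the counts in the test are illustrative):

```python
import statistics

def cv_percent(cyst_counts):
    """Sampling precision as a coefficient of variation (%), computed over
    repeated bulk-sample cyst counts from a single plot."""
    return 100.0 * statistics.stdev(cyst_counts) / statistics.mean(cyst_counts)
```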

  4. Real-Time Head Pose Estimation on Mobile Platforms

    Directory of Open Access Journals (Sweden)

    Jianfeng Ren

    2010-06-01

    Full Text Available Many computer vision applications such as augmented reality require head pose estimation. As far as the real-time implementation of head pose estimation on relatively resource-limited mobile platforms is concerned, it is required to satisfy real-time constraints while maintaining reasonable head pose estimation accuracy. The head pose estimation approach introduced in this paper is an attempt to meet this objective. The approach consists of the following components: Viola-Jones face detection, color-based face tracking using an online calibration procedure, and head pose estimation using Hu moment features and Fisher linear discriminant. Experimental results running on an actual mobile device are reported, exhibiting both the real-time and accuracy aspects of the developed approach.
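    Hu moment features of the kind named above are built from scale-normalized central image moments. A minimal sketch of the first invariant only (η20 + η02), with the full seven-invariant set and the Fisher discriminant omitted:

```python
import numpy as np

def hu1(image):
    """First Hu moment invariant, eta20 + eta02, of a grayscale image.
    Translation-invariant by construction (central moments) and
    scale-normalized by dividing by m00**2."""
    y, x = np.mgrid[: image.shape[0], : image.shape[1]]
    m00 = image.sum()
    xbar = (x * image).sum() / m00
    ybar = (y * image).sum() / m00
    mu20 = ((x - xbar) ** 2 * image).sum()
    mu02 = ((y - ybar) ** 2 * image).sum()
    return (mu20 + mu02) / m00 ** 2
```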

  5. Estimation of peginesatide utilization requires patient-level data

    Directory of Open Access Journals (Sweden)

    Alex Yang

    2012-06-01

    Due to the nonlinear dose relationship between peginesatide and epoetin, facilities with similar epoetin use (<2% relative difference) had up to 35% difference in estimated peginesatide use. For accurate estimation of peginesatide utilization, it is important to base conversions on epoetin dose distribution rather than mean epoetin dose.

  6. Estimating the re-identification risk of clinical data sets

    Directory of Open Access Journals (Sweden)

    Dankar Fida

    2012-07-01

    Full Text Available Abstract Background De-identification is a common way to protect patient privacy when disclosing clinical data for secondary purposes, such as research. One type of attack that de-identification protects against is linking the disclosed patient data with public and semi-public registries. Uniqueness is a commonly used measure of re-identification risk under this attack. If uniqueness can be measured accurately then the risk from this kind of attack can be managed. In practice, it is often not possible to measure uniqueness directly, therefore it must be estimated. Methods We evaluated the accuracy of uniqueness estimators on clinically relevant data sets. Four candidate estimators were identified because they were evaluated in the past and found to have good accuracy or because they were new and not evaluated comparatively before: the Zayatz estimator, slide negative binomial estimator, Pitman's estimator, and mu-argus. A Monte Carlo simulation was performed to evaluate the uniqueness estimators on six clinically relevant data sets. We varied the sampling fraction and the uniqueness in the population (the value being estimated). The median relative error and inter-quartile range of the uniqueness estimates were measured across 1000 runs. Results There was no single estimator that performed well across all of the conditions. We developed a decision rule which selected between the Pitman, slide negative binomial and Zayatz estimators depending on the sampling fraction and the difference between estimates. This decision rule had the best consistent median relative error across multiple conditions and data sets. Conclusion This study identified an accurate decision rule that can be used by health privacy researchers and disclosure control professionals to estimate uniqueness in clinical data sets. The decision rule provides a reliable way to measure re-identification risk.
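    Sample uniqueness, as opposed to the population quantity the estimators above target, can be computed directly from the disclosed data. A toy sketch, with made-up quasi-identifier tuples:

```python
from collections import Counter

def sample_uniqueness(quasi_identifiers):
    """Fraction of records whose quasi-identifier combination occurs
    exactly once in the disclosed sample."""
    counts = Counter(quasi_identifiers)
    return sum(1 for q in quasi_identifiers if counts[q] == 1) / len(quasi_identifiers)
```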

  7. An estimation of COPD cases and respiratory mortality related to Ground-Level Ozone in the metropolitan Ahvaz during 2011

    Directory of Open Access Journals (Sweden)

    Sahar Geravandi

    2016-02-01

    Full Text Available Background & Aims of the Study: Ground-Level Ozone (GLO) is a pollutant of great concern that threatens human health in developing as well as developed countries. GLO mainly enters the body through respiration and can cause pulmonary complications, eye burning, shortness of breath, coughing, failure of immune defense, decreased forced vital capacity, reduced lung function and an increased rate of mortality. Ahvaz, with high emissions of air pollutants from its numerous industries, is one of Iran's polluted metropolitan areas. The aim of this study is to evaluate Chronic Obstructive Pulmonary Disease (COPD) and respiratory mortality related to GLO in the air of metropolitan Ahvaz during 2011. Materials & Methods: We used the generalized additive Air Q model to estimate COPD and respiratory mortality attributed to the GLO pollutant. GLO data were collected at four monitoring stations of the Ahvaz Department of Environment. Raw data were processed in Excel and then converted into an input file for the Air Q model to estimate the number of COPD cases and respiratory mortality. Results: According to the results of this study, the Naderi and Havashenasi stations had the highest and the lowest GLO concentrations, respectively. The results showed that the cumulative cases of COPD and respiratory mortality related to GLO were 34 and 30 persons, respectively. Findings also showed that approximately 11% of COPD and respiratory mortality occurred when the GLO concentration was more than 20 μg/m³. Conclusions: Exposure to GLO pollution has strong effects on human health in Ahvaz. Findings showed a significant relationship between GLO concentration and COPD and respiratory mortality; higher ozone pollutant values can therefore indicate mismanagement of urban air quality.
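    AirQ-style impact assessments rest on an attributable-proportion calculation over concentration bins, AP = Σ((RR(c) − 1)·p(c)) / Σ(RR(c)·p(c)), with excess cases as AP times baseline incidence times population. The sketch below shows only that arithmetic; the relative risks and population fractions in the test are illustrative, not the study's inputs:

```python
def attributable_cases(rr_by_bin, pop_fraction_by_bin, baseline_incidence, population):
    """AirQ-style excess case count from binned relative risks:
    AP = sum((RR_c - 1) * p_c) / sum(RR_c * p_c), then scaled by the
    baseline incidence rate and the exposed population size."""
    num = sum((rr - 1.0) * p for rr, p in zip(rr_by_bin, pop_fraction_by_bin))
    den = sum(rr * p for rr, p in zip(rr_by_bin, pop_fraction_by_bin))
    ap = num / den
    return ap * baseline_incidence * population
```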

  8. Parameter Estimation of Nonlinear Models in Forestry.

    OpenAIRE

    Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

    1999-01-01

    Partial derivatives of the negative exponential, monomolecular, Mitcherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and the Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...

  9. Statistical methods of estimating mining costs

    Science.gov (United States)

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
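    Cost-capacity relations like the "Taylor's Rule" mentioned above are power laws, conventionally fit by ordinary least squares in log-log space. A generic sketch of that fit; the variables and data in the test are illustrative, not the USGS models:

```python
import math

def fit_power_law(x, y):
    """Fit y = a * x**b by ordinary least squares in log-log space,
    the usual form of cost-capacity relations; returns (a, b)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b
```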

  10. Towards a Model Climatology of Relative Humidity in the Upper Troposphere for Estimation of Contrail and Contrail-Induced Cirrus

    Science.gov (United States)

    Selkirk, Henry B.; Manyin, M.; Ott, L.; Oman, L.; Benson, C.; Pawson, S.; Douglass, A. R.; Stolarski, R. S.

    2011-01-01

    The formation of contrails and contrail cirrus is very sensitive to the relative humidity of the upper troposphere. To reduce uncertainty in an estimate of the radiative impact of aviation-induced cirrus, a model must therefore be able to reproduce the observed background moisture fields with reasonable and quantifiable fidelity. Here we present an upper tropospheric moisture climatology from a 26-year ensemble of simulations using the GEOS CCM. We compare this free-running model's moisture fields to those obtained from the MLS and AIRS satellite instruments, our most comprehensive observational databases for upper tropospheric water vapor. Published comparisons have shown a substantial wet bias in GEOS-5 assimilated fields with respect to MLS water vapor and ice water content. This tendency is clear as well in the GEOS CCM simulations. The GEOS-5 moist physics in the GEOS CCM uses a saturation adjustment that prevents supersaturation, which is unrealistic when compared to in situ moisture observations from MOZAIC aircraft and balloon sondes, as we will show. Further, the large-scale satellite datasets also consistently underestimate supersaturation when compared to the in situ observations. We place these results in the context of estimates of contrail and contrail cirrus frequency.

  11. Estimation of the Relative Contribution of Postprandial Glucose Exposure to Average Total Glucose Exposure in Subjects with Type 2 Diabetes

    Directory of Open Access Journals (Sweden)

    Bo Ahrén

    2016-01-01

    Full Text Available We hypothesized that the relative contribution of fasting plasma glucose (FPG) versus postprandial plasma glucose (PPG) to glycated haemoglobin (HbA1c) could be calculated using an algorithm developed by the A1c-Derived Average Glucose (ADAG) study group to make HbA1c values more clinically relevant to patients. The algorithm estimates average glucose (eAG) exposure, which can be used to calculate apparent PPG (aPPG) by subtracting FPG. The hypothesis was tested in a large dataset (comprising 17 studies) from the vildagliptin clinical trial programme. We found that 24 weeks of treatment with vildagliptin monotherapy (n=2523) reduced the relative contribution of aPPG to eAG from 8.12% to 2.95% (by 64%, p<0.001). In contrast, when vildagliptin was added to metformin (n=2752), the relative contribution of aPPG to eAG insignificantly increased from 1.59% to 2.56%. In conclusion, glucose peaks, which are often prominent in patients with type 2 diabetes, provide a small contribution to the total glucose exposure assessed by HbA1c, and the ADAG algorithm is not robust enough to assess this small relative contribution in patients receiving combination therapy.
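    The ADAG conversion behind eAG is the published regression eAG (mg/dL) = 28.7 × HbA1c (%) − 46.7, and the apparent PPG contribution is then eAG minus fasting glucose. A worked sketch of those two steps (the HbA1c and FPG values in the test are illustrative):

```python
def eag_mgdl(hba1c_percent):
    """ADAG study regression: eAG (mg/dL) = 28.7 * HbA1c(%) - 46.7."""
    return 28.7 * hba1c_percent - 46.7

def apparent_ppg(hba1c_percent, fpg_mgdl):
    """Apparent postprandial glucose contribution: eAG minus fasting glucose."""
    return eag_mgdl(hba1c_percent) - fpg_mgdl
```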

  12. USING COLORS TO IMPROVE PHOTOMETRIC METALLICITY ESTIMATES FOR GALAXIES

    International Nuclear Information System (INIS)

    Sanders, N. E.; Soderberg, A. M.; Levesque, E. M.

    2013-01-01

    There is a well-known correlation between the mass and metallicity of star-forming galaxies. Because mass is correlated with luminosity, this relation is often exploited, when spectroscopy is not available, to estimate galaxy metallicities based on single-band photometry. However, we show that galaxy color is typically more effective than luminosity as a predictor of metallicity. This is a consequence of the correlation between color and the galaxy mass-to-light ratio and the recently discovered correlation between star formation rate (SFR) and residuals from the mass-metallicity relation. Using Sloan Digital Sky Survey spectroscopy of ∼180,000 nearby galaxies, we derive 'LZC relations', empirical relations between metallicity (in seven common strong line diagnostics), luminosity, and color (in 10 filter pairs and four methods of photometry). We show that these relations allow photometric metallicity estimates, based on luminosity and a single optical color, that are ∼50% more precise than those made based on luminosity alone; galaxy metallicity can be estimated to within ∼0.05-0.1 dex of the spectroscopically derived value depending on the diagnostic used. Including color information in photometric metallicity estimates also reduces systematic biases for populations skewed toward high or low SFR environments, as we illustrate using the host galaxy of the supernova SN 2010ay. This new tool will lend more statistical power to studies of galaxy populations, such as supernova and gamma-ray burst host environments, in ongoing and future wide-field imaging surveys
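    An empirical luminosity-color relation of this kind can be fit by least squares; the linear form Z = a + b·M + c·(color) below is an illustrative assumption, not the paper's published parameterization:

```python
import numpy as np

def fit_lzc(metallicity, magnitude, color):
    """Least-squares fit of an assumed linear 'LZC relation'
    Z = a + b * M + c * color; returns the coefficients (a, b, c)."""
    X = np.column_stack([np.ones_like(magnitude), magnitude, color])
    coeffs, *_ = np.linalg.lstsq(X, metallicity, rcond=None)
    return coeffs
```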

  13. Estimation of nuclear power plant aircraft hazards

    International Nuclear Information System (INIS)

    Gottlieb, P.

    1978-01-01

    The standard procedures for estimating aircraft risk to nuclear power plants provide a conservative estimate, which is adequate for most sites that are not close to airports or heavily traveled air corridors. For those sites which are close to facilities handling large numbers of aircraft movements (airports or corridors), a more precise estimate of aircraft impact frequency can be obtained as a function of aircraft size. In many instances the very large commercial aircraft can be shown to have an acceptably small impact frequency, while the very small general aviation aircraft will not produce a sufficiently serious impact to impair the safety-related functions. This paper examines the in-between aircraft: primarily twin-engine aircraft used for business, pleasure, and air taxi operations. For this group of aircraft the total impact frequency was found to be approximately once in one million years, the threshold above which further consideration of specific safety-related consequences would be required.
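    An annual impact frequency like the "once in one million years" figure above translates into a lifetime probability under the usual Poisson assumption for rare independent events. A toy sketch (the 40-year plant life in the test is an illustrative assumption):

```python
import math

def impact_probability(annual_frequency, years):
    """Probability of at least one aircraft impact over an operating life,
    assuming impacts follow a homogeneous Poisson process."""
    return 1.0 - math.exp(-annual_frequency * years)
```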

  14. Risk estimates for the health effects of alpha radiation

    International Nuclear Information System (INIS)

    Thomas, D.C.; McNeill, K.G.

    1981-09-01

    This report provides risk estimates for various health effects of alpha radiation. Human and animal data have been used to characterize the shapes of dose-response relations and the effects of various modifying factors, but quantitative risk estimates are based solely on human data: for lung cancer, on miners in the Colorado plateau, Czechoslovakia, Sweden, Ontario and Newfoundland; for bone and head cancers, on radium dial painters and radium-injected patients. Slopes of dose-response relations for lung cancer show a tendency to decrease with increasing dose. Linear extrapolation is unlikely to underestimate the excess risk at low doses by more than a factor of 1.5. Under the linear cell-killing model, our best estimate

  15. Time variations in geomagnetic intensity

    Science.gov (United States)

    Valet, Jean-Pierre

    2003-03-01

    After many years spent by paleomagnetists studying the directional behavior of the Earth's magnetic field at all possible timescales, detailed measurements of field intensity are now needed to document the variations of the entire vector and to analyze the time evolution of the field components. A significant step has been achieved by combining intensity records derived from archeological materials and from lava flows in order to extract the global field changes over the past 12 kyr. A second significant step was due to the emergence of coherent records of relative paleointensity using the remanent magnetization of sediments to retrace the evolution of the dipole field. A third step was the juxtaposition of these signals with those derived from cosmogenic isotopes. Contemporaneous with the acquisition of records, new techniques have been developed to constrain the geomagnetic origin of the signals. Much activity has also been devoted to improving the quality of determinations of absolute paleointensity from volcanic rocks with new materials, proper selection of samples, and investigations of complex changes in magnetization during laboratory experiments. Altogether these developments brought us from a situation where the field changes were restricted to the past 40 kyr to the emergence of a coherent picture of the changes in the geomagnetic dipole moment for at least the past 1 Myr. On longer timescales the field variability and its average behavior are relatively well documented for the past 400 Myr. Section 3 gives a summary of most methods and techniques that are presently used to track the field intensity changes in the past. In each case, current limits and potential promises are discussed. Section 4 describes the field variations measured so far over various timescales covered by the archeomagnetic and the paleomagnetic records. Preference has always been given to composite records and databases in order to extract and discuss major and global geomagnetic

  16. R2 TRI facilities with 1999-2011 risk related estimates throughout the census blockgroup

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset delineates the distribution of estimate risk from the TRI facilities for 1999 - 2011 throughout the census blockgroup of the region using Office of...

  17. Accuracy of prehospital transport time estimation.

    Science.gov (United States)

    Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W

    2014-01-01

    Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to the system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
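    The linear arc method evaluated above reduces to a great-circle distance divided by an assumed travel speed. A sketch using the haversine formula; the 50 km/h default speed is an illustrative assumption, not a parameter from the study:

```python
import math

def linear_arc_minutes(lat1, lon1, lat2, lon2, speed_kmh=50.0):
    """Great-circle ('linear arc') distance via the haversine formula,
    converted to a transport-time estimate at an assumed average speed."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    km = 2 * r * math.asin(math.sqrt(a))
    return 60.0 * km / speed_kmh
```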

  18. Distance estimation experiment for aerial minke whale surveys

    Directory of Open Access Journals (Sweden)

    Lars Witting

    2009-09-01

A comparative study between aerial cue-counting and digital photography surveys for minke whales, conducted in Faxaflói Bay in September 2003, is used to check the perpendicular distances estimated by the cue-counting observers. The study involved 2 aircraft, with the photo plane at 1,700 feet flying above the cue-counting plane at 750 feet. The observer-based distance estimates were calculated from head angles estimated by angle-boards and declination angles estimated by declinometers. These distances were checked against image-based estimates of the perpendicular distance to the same whale. The 2 independent distance estimates were obtained for 21 sightings of minke whale, and there was good agreement between the 2 types of estimates. The relative absolute deviations between the 2 estimates were on average 23% (se: 6%), with the errors in the observer-based distance estimates resembling a log-normal distribution. The linear regression of the observer-based estimates (Obs) on the image-based estimates (Img) was Obs = 1.1·Img (R² = 0.85), with the intercept fixed at zero. There was no evidence of a distance estimation bias that could generate a positive bias in the absolute abundance estimated by cue-counting.
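The two summary statistics reported here, the mean relative absolute deviation and a zero-intercept regression slope, are easy to reproduce; a minimal sketch with hypothetical distance pairs:

```python
def relative_abs_deviations(observed, image_based):
    """Per-sighting |obs - img| / img, i.e. deviation relative to the image-based value."""
    return [abs(o, ) if False else abs(o - i) / i for o, i in zip(observed, image_based)]

def slope_through_origin(x, y):
    """Least-squares slope b for y = b*x with the intercept fixed at zero:
    b = sum(x*y) / sum(x*x)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
```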

  19. Parameter estimation in tree graph metabolic networks

    Directory of Open Access Journals (Sweden)

    Laura Astola

    2016-09-01

We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis–Menten kinetics. In reality, the catalytic rates, which are affected by kinetic constants and enzyme concentrations among other factors, change over time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to commonly applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.

  20. Parameter estimation in tree graph metabolic networks.

    Science.gov (United States)

    Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J

    2016-01-01

We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected by kinetic constants and enzyme concentrations among other factors, change over time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to commonly applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time-series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.
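The contrast drawn above between constant and time-varying rates can be made concrete on the smallest tree-shaped network, a two-step chain integrated with forward Euler (the rate functions and values below are hypothetical, not fitted tomato-pathway rates):

```python
def simulate_tree(k1, k2, s0=1.0, t_end=10.0, dt=0.01):
    """Forward-Euler simulation of a two-edge tree S -k1(t)-> M1 -k2(t)-> M2,
    where k1 and k2 are callables returning (possibly time-varying) rates."""
    s, m1, m2 = s0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        f1 = k1(t) * s   # flux along the first edge
        f2 = k2(t) * m1  # flux along the second edge
        s -= f1 * dt
        m1 += (f1 - f2) * dt
        m2 += f2 * dt
        t += dt
    return s, m1, m2
```

Because each flux leaves one node and enters exactly one child, total mass is conserved step by step, which is the property that makes rate estimation on tree networks tractable edge by edge.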

  1. Using groundwater levels to estimate recharge

    Science.gov (United States)

    Healy, R.W.; Cook, P.G.

    2002-01-01

Accurate estimation of groundwater recharge is extremely important for proper management of groundwater systems. Many different approaches exist for estimating recharge. This paper presents a review of methods that are based on groundwater-level data. The water-table fluctuation method may be the most widely used technique for estimating recharge; it requires knowledge of specific yield and changes in water levels over time. Advantages of this approach include its simplicity and an insensitivity to the mechanism by which water moves through the unsaturated zone. Uncertainty in estimates generated by this method relates to the limited accuracy with which specific yield can be determined and to the extent to which assumptions inherent in the method are valid. Other methods that use water levels (mostly based on the Darcy equation) are also described. The theory underlying the methods is explained. Examples from the literature are used to illustrate applications of the different methods.
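The water-table fluctuation method reduces to multiplying each water-level rise by specific yield; a minimal sketch under the standard assumption that declines reflect drainage rather than negative recharge (the level series and Sy value in the test are illustrative):

```python
def recharge_from_hydrograph(levels, specific_yield):
    """Water-table fluctuation method: recharge = Sy * (sum of water-level rises).
    levels: chronological water-level readings in meters; declines are ignored,
    per the usual WTF assumption that they represent drainage, not recharge."""
    total = 0.0
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev:
            total += specific_yield * (cur - prev)
    return total
```

Note that the result is only as good as the specific-yield value, which, as the review stresses, is the dominant source of uncertainty.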

  2. Incorporating Satellite Precipitation Estimates into a Radar-Gauge Multi-Sensor Precipitation Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Yuxiang He

    2018-01-01

This paper presents a new and enhanced fusion module for the Multi-Sensor Precipitation Estimator (MPE) that objectively blends real-time satellite quantitative precipitation estimates (SQPE) with radar and gauge estimates. This module consists of a preprocessor that mitigates systematic bias in SQPE, and a two-way blending routine that statistically fuses adjusted SQPE with radar estimates. The preprocessor not only corrects systematic bias in SQPE, but also improves the spatial distribution of precipitation based on SQPE and makes it closely resemble that of radar-based observations. It uses a more sophisticated radar-satellite merging technique to blend preprocessed datasets, and provides a better overall QPE product. The performance of the new satellite-radar-gauge blending module is assessed using independent rain gauge data over a five-year period from 2003 to 2007, and the assessment evaluates the accuracy of the newly developed satellite-radar-gauge (SRG) blended products versus that of radar-gauge products (which represent the MPE algorithm currently used in NWS (National Weather Service) operations) over two regions: (I) inside radar effective coverage and (II) immediately outside radar coverage. The outcomes of the evaluation indicate that (a) ingesting SQPE over areas within effective radar coverage improves the quality of QPE by mitigating the errors in radar estimates in region I; and (b) blending radar, gauge, and satellite estimates over region II reduces errors relative to bias-corrected SQPE. In addition, the new module alleviates the discontinuities along the boundaries of radar effective coverage otherwise seen when SQPE is used directly to fill the areas outside of effective radar coverage.
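The two stages described, bias-correcting SQPE and then blending it with radar, can be sketched generically; the multiplicative mean-field correction and fixed-weight blend below are common stand-ins, not reproductions of the module's actual preprocessor or blending routine:

```python
def mean_field_bias_correct(sqpe, gauge_sat_pairs):
    """Multiplicative mean-field bias correction: scale every satellite value
    by sum(gauge) / sum(satellite) over collocated (gauge, satellite) pairs."""
    g = sum(gv for gv, sv in gauge_sat_pairs)
    s = sum(sv for gv, sv in gauge_sat_pairs)
    return [v * g / s for v in sqpe]

def blend(radar, satellite, w_radar=0.7):
    """Fixed-weight linear blend of radar and satellite QPE fields
    (the 0.7 weight is an assumed placeholder)."""
    return [w_radar * r + (1.0 - w_radar) * s for r, s in zip(radar, satellite)]
```

In an operational scheme the blending weight would typically vary with distance from the radar, which is what smooths the discontinuity at the edge of radar coverage.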

  3. Information and crystal structure estimation

    International Nuclear Information System (INIS)

    Wilkins, S.W.; Commonwealth Scientific and Industrial Research Organization, Clayton; Varghese, J.N.; Steenstrup, S.

    1984-01-01

The conceptual foundations of a general information-theoretic approach to X-ray structure estimation are reexamined with a view to clarifying some of the subtleties inherent in the approach and to enhancing the scope of the method. More particularly, general reasons for choosing the minimum of the Shannon-Kullback measure of information as the criterion for inference are discussed, and it is shown that the minimum information (or maximum entropy) principle enters the present treatment of the structure estimation problem in at least two quite separate ways, and that three formally similar but conceptually quite different expressions for relative information appear at different points in the theory. One of these is the general Shannon-Kullback expression, while the second is a derived form pertaining only under the restrictive assumptions of the present stochastic model for allowed structures, and the third is a measure of the additional information involved in accepting a fluctuation relative to an arbitrary mean structure. (orig.)

  4. Estimation After a Group Sequential Trial.

    Science.gov (United States)

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why
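The behavior of the ordinary sample average under a data-dependent stopping rule can be explored by simulation; a toy sketch (the stopping threshold, look sizes, and true mean are hypothetical, and this illustrates the setting rather than the authors' likelihood derivation):

```python
import random
import statistics

def sequential_trial(stop_rule, look_sizes, rng, mu=0.3, sigma=1.0):
    """Accrue normal data up to each look size in turn; stop at the first look
    whose running sample mean satisfies stop_rule, else at the final look."""
    data = []
    for n in look_sizes:
        while len(data) < n:
            data.append(rng.gauss(mu, sigma))
        if stop_rule(statistics.mean(data)):
            break
    return data

def marginal_sample_average(reps=3000, seed=7):
    """Average of the ordinary sample mean, marginalized over the random
    stopping time (hypothetical rule: stop when |running mean| > 0.5)."""
    rng = random.Random(seed)
    means = [statistics.mean(sequential_trial(lambda m: abs(m) > 0.5,
                                              [10, 20, 40], rng))
             for _ in range(reps)]
    return statistics.mean(means)
```

With a true mean of 0.3, the marginal average stays in the neighborhood of the truth, consistent with the sample average being a justifiable, asymptotically unbiased estimator despite the data-dependent sample size.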

  5. Evaluation of sampling strategies to estimate crown biomass

    Directory of Open Access Journals (Sweden)

    Krishna P Poudel

    2015-01-01

Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies to estimate crown biomass and evaluate the effect of sample size in estimating crown biomass. Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results: Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was the best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced approximately similar results to simple random sampling, but it further decreased RMSE when information on branch diameter is used in the design and estimation phases. Conclusions: Use of
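The probability-proportional-to-size idea that performs well for small branch samples can be sketched with a Hansen-Hurwitz estimator; the branch data and the use of squared diameter as the size measure are hypothetical:

```python
import random

def pps_estimate(branches, m, rng):
    """Hansen-Hurwitz estimator of total crown biomass: draw m branches with
    replacement, selection probability proportional to diameter^2 (a size
    proxy), and average y_i / p_i over the draws.
    branches: list of (diameter, biomass) pairs."""
    sizes = [d * d for d, _ in branches]
    total_size = sum(sizes)
    picks = rng.choices(range(len(branches)), weights=sizes, k=m)
    # y_i / p_i = biomass_i * total_size / size_i
    return sum(branches[i][1] * total_size / sizes[i] for i in picks) / m
```

When biomass is exactly proportional to the size measure, every draw returns the true total, which is why a well-chosen auxiliary variable makes PPS efficient at small sample sizes.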

  6. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
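The core computation described, screening products to within ±50% of the base estimate and then taking the standard deviation s and relative error s/m of what remains, can be sketched as follows (the numbers in the test are illustrative, not GPCP values):

```python
import statistics

def bias_error(base, others):
    """Estimated bias error for a grid cell: keep the base (GPCP-style)
    estimate plus any other product within +/-50% of it, then return the
    standard deviation s of the included set and the relative error s/m."""
    included = [base] + [v for v in others if 0.5 * base <= v <= 1.5 * base]
    s = statistics.stdev(included)
    m = statistics.mean(included)
    return s, s / m
```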

  7. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum mean squared error (LMMSE) estimator, when the elements of x are statistically white.
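The general shape of the approach, a ridge-type regularized solve whose regularization parameter is updated iteratively from a perturbation bound, can be sketched as below; the update rule shown is a generic hypothetical fixed-point iteration, not the paper's exact BDU recursion:

```python
import numpy as np

def regularized_ls(A, y, lam):
    """Ridge-type solution x = (A^T A + lam*I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def ils_estimate(A, y, eta=0.1, iters=50):
    """Sketch of an iteratively chosen regularization parameter: lam is tied
    to an assumed perturbation bound eta and the current residual.  The
    specific update is illustrative, not the authors' derivation."""
    lam = eta
    for _ in range(iters):
        x = regularized_ls(A, y, lam)
        residual = np.linalg.norm(y - A @ x)
        lam = eta * residual / max(np.linalg.norm(x), 1e-12)
    return x
```

On consistent, well-conditioned data the iteration drives lam toward zero and the estimate back to plain LS; the regularization matters precisely in the noisy, low-SNR regime the abstract targets.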

  8. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2015-01-01

This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum mean squared error (LMMSE) estimator, when the elements of x are statistically white.

  9. Orbital Noise of the Earth Causes Intensity Fluctuation in the Geomagnetic Field

    Science.gov (United States)

    Liu, Han-Shou; Kolenkiewicz, R.; Wade, C., Jr.

    2003-01-01

Orbital noise of Earth's obliquity can provide insight into the core of the Earth that causes intensity fluctuations in the geomagnetic field. Here we show that the noise spectrum of the obliquity frequency reveals a series of spectral peaks centered at periods of 250, 100, 50, 41, 30, and 26 kyr, which are almost identical to the observed spectral peaks from the composite curve of 33 records of relative paleointensity spanning the past 800 kyr (Sint-800 data). A continuous record for the past two million years also reveals the presence of the major 100 kyr periodicity in both obliquity noise and geomagnetic intensity fluctuations. These correlations suggest that obliquity noise may power the dynamo, located in the liquid outer core of the Earth, which generates the geomagnetic field.

  10. Estimates of US biofuels consumption, 1990

    International Nuclear Information System (INIS)

    1991-10-01

This report is the sixth in the series of publications developed by the Energy Information Administration to quantify the amount of biofuel-derived primary energy used by the US economy. It provides preliminary estimates of 1990 US biofuels energy consumption by sector and by biofuels energy resource type. The objective of this report is to provide updated annual estimates of biofuels energy consumption for use by Congress, federal and state agencies, and other groups involved in activities related to the use of biofuels. 5 figs., 10 tabs

  11. Estimates of US biofuels consumption, 1990

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-01

This report is the sixth in the series of publications developed by the Energy Information Administration to quantify the amount of biofuel-derived primary energy used by the US economy. It provides preliminary estimates of 1990 US biofuels energy consumption by sector and by biofuels energy resource type. The objective of this report is to provide updated annual estimates of biofuels energy consumption for use by Congress, federal and state agencies, and other groups involved in activities related to the use of biofuels. 5 figs., 10 tabs.

  12. Estimating waste disposal quantities from raw waste samples

    International Nuclear Information System (INIS)

Negin, C.A.; Urland, C.S.; Hitz, C.G. (GPU Nuclear Corp., Middletown, PA)

    1985-01-01

Estimating the disposal quantity of waste resulting from stabilization of radioactive sludge is complex because of the many factors relating to sample analysis results, radioactive decay, allowable disposal concentrations, and options for disposal containers. To facilitate this estimation, a microcomputer spreadsheet template was created. The spreadsheet has saved considerable engineering hours. 1 fig., 3 tabs

  13. Estimation of plasma cortisol by radiocompetition or radioimmunoassay. Use of commercial kits

    International Nuclear Information System (INIS)

    Rymer, J.C.

    1978-01-01

The estimation of plasma cortisol is carried out in daily practice by numerous laboratories. The use of commercial kits permits a specific, reliable, and relatively precise estimation. Two kits are studied; the results obtained are compared with those given by a method of measurement of fluorescence. Economic assessment, together with estimation of the risks due to manipulation of radioactive substances, permits one to judge the relative values of these methods of analysis. [fr]

  14. Proposed Method for Estimating Health-Promoting Glucosinolates and Hydrolysis Products in Broccoli (Brassica oleracea var. italica) Using Relative Transcript Abundance.

    Science.gov (United States)

    Becker, Talon M; Jeffery, Elizabeth H; Juvik, John A

    2017-01-18

    Due to the importance of glucosinolates and their hydrolysis products in human nutrition and plant defense, optimizing the content of these compounds is a frequent breeding objective for Brassica crops. Toward this goal, we investigated the feasibility of using models built from relative transcript abundance data for the prediction of glucosinolate and hydrolysis product concentrations in broccoli. We report that predictive models explaining at least 50% of the variation for a number of glucosinolates and their hydrolysis products can be built for prediction within the same season, but prediction accuracy decreased when using models built from one season's data for prediction of an opposing season. This method of phytochemical profile prediction could potentially allow for lower phytochemical phenotyping costs and larger breeding populations. This, in turn, could improve selection efficiency for phase II induction potential, a type of chemopreventive bioactivity, by allowing for the quick and relatively cheap content estimation of phytochemicals known to influence the trait.

  15. Frequency Estimator Performance for a Software-Based Beacon Receiver

    Science.gov (United States)

    Zemba, Michael J.; Morse, Jacquelynne Rose; Nessel, James A.; Miranda, Felix

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a QV-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.
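One widely used estimator of the class discussed here refines the FFT peak by parabolic interpolation of the log-magnitude spectrum; the sketch below is offered as an example of sub-bin frequency estimation in general, not as one of the six estimators evaluated in the study:

```python
import numpy as np

def estimate_frequency(x, fs):
    """Estimate the primary frequency of x (sampled at fs Hz) by locating the
    FFT magnitude peak and refining it with quadratic (parabolic)
    interpolation on the log-magnitude of the three bins around the peak."""
    n = len(x)
    X = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(X[1:-1])) + 1  # skip the DC and Nyquist bins
    a, b, c = (np.log(X[k - 1] + 1e-30),
               np.log(X[k] + 1e-30),
               np.log(X[k + 1] + 1e-30))
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)  # sub-bin offset in [-0.5, 0.5]
    return (k + delta) * fs / n
```

This recovers the tone frequency far more finely than the FFT bin spacing alone, which is the resolution gain the abstract refers to.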

  16. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
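The kind of coverage-and-bias check advocated here can be illustrated with a generic Monte Carlo experiment on a normal-theory interval; this is a simplified stand-in for the idea, not a reproduction of the paper's RML/AGLS/Bayesian comparison:

```python
import random
import statistics

def interval_check(n=20, mu=0.0, sigma=1.0, reps=2000, z=1.96, seed=1):
    """Monte Carlo coverage of the interval mean +/- z*SE, plus how the
    misses split into intervals lying entirely above vs below mu."""
    rng = random.Random(seed)
    hits = above = below = 0
    for _ in range(reps):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        m = statistics.mean(xs)
        half = z * statistics.stdev(xs) / n ** 0.5
        if m - half <= mu <= m + half:
            hits += 1
        elif m - half > mu:
            above += 1   # interval misses high
        else:
            below += 1   # interval misses low
    return hits / reps, above, below
```

For small n the z-based interval undercovers its nominal 95% (a t-quantile would be needed), which is exactly the kind of behavior that examining only standard errors would miss.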

  17. Thermodynamic estimation: Ionic materials

    International Nuclear Information System (INIS)

    Glasser, Leslie

    2013-01-01

Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
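The single-ion additivity idea can be sketched as a table lookup and sum; the ion contribution values below are made-up placeholders for illustration, not fitted values from this paper:

```python
# Hypothetical single-ion additive contributions to standard entropy, J/(K·mol).
# These numbers are placeholders chosen only to demonstrate the summation scheme.
ION_CONTRIBUTION = {"Na+": 36.0, "K+": 46.0, "Cl-": 43.0, "Br-": 55.0}

def estimate_property(formula):
    """Additive single-ion estimate: sum contribution * stoichiometric count.
    formula: mapping of ion symbol to count, e.g. {"Na+": 1, "Cl-": 1}."""
    return sum(ION_CONTRIBUTION[ion] * count for ion, count in formula.items())
```

The same summation works for "double salts" by adding the contributions of the constituent simple salts instead of single ions.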

  18. 32 CFR 1800.14 - Fee estimates (pre-request option).

    Science.gov (United States)

    2010-07-01

32 CFR § 1800.14 (2010-07-01), Fee estimates (pre-request option): In order to avoid unanticipated or potentially large...

  19. Robust Parameter and Signal Estimation in Induction Motors

    DEFF Research Database (Denmark)

    Børsting, H.

This thesis deals with theories and methods for robust parameter and signal estimation in induction motors. The project originates in industrial interests concerning sensor-less control of electrical drives. During the work, some general problems concerning estimation of signals and parameters in nonlinear systems have been exposed. The main objectives of this project are: - analysis and application of theories and methods for robust estimation of parameters in a model structure, obtained from knowledge of the physics of the induction motor. - analysis and application of theories and methods for robust estimation of the rotor speed and driving torque of the induction motor based only on measurements of stator voltages and currents. Only continuous-time models have been used, which means that physically related signals and parameters are estimated directly and not indirectly by some discrete...

  20. Estimating the cost of a smoking employee.

    Science.gov (United States)

    Berman, Micah; Crane, Rob; Seiber, Eric; Munur, Mehmet

    2014-09-01

We attempted to estimate the excess annual costs that a US private employer may attribute to employing an individual who smokes tobacco, as compared to a non-smoking employee. Reviewing and synthesising previous literature estimating certain discrete costs associated with smoking employees, we developed a cost estimation approach that approximates the total of such costs for US employers. We examined absenteeism, presenteeism, smoking breaks, healthcare costs and pension benefits for smokers. Our best estimate of the annual excess cost to employ a smoker is $5816. This estimate should be taken as a general indicator of the extent of excess costs, not as a predictive point value. Employees who smoke impose significant excess costs on private employers. The results of this study may help inform employer decisions about tobacco-related policies.

  1. Comprehensive analysis of proton range uncertainties related to patient stopping-power-ratio estimation using the stoichiometric calibration

    Science.gov (United States)

    Yang, Ming; Zhu, X. Ronald; Park, Peter C.; Titt, Uwe; Mohan, Radhe; Virshup, Gary; Clayton, James E.; Dong, Lei

    2012-07-01

    The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0-3.4%, primarily because soft tissue is the dominant tissue type in the human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction.
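
    As a sketch of the composite-margin arithmetic described above, the following combines independent per-category SPR uncertainties in quadrature and scales to an approximate 95% margin. The category names follow the abstract's five-group idea, but all numerical values here are hypothetical, not the study's estimates:

```python
import math

# Hypothetical per-category SPR uncertainties (1-sigma, %) for one tissue group.
# The five-category grouping mirrors the abstract; the numbers are illustrative only.
categories = {
    "CT imaging": 0.8,
    "calibration fit": 0.5,
    "mean excitation energy": 1.0,
    "tissue composition variation": 1.2,
    "mean CT number": 0.6,
}

# Independent sources combine in quadrature.
combined_1sigma = math.sqrt(sum(u * u for u in categories.values()))
margin_95 = 1.96 * combined_1sigma  # ~95th percentile, assuming normality
print(f"combined 1-sigma = {combined_1sigma:.2f}%, 95% margin = {margin_95:.2f}%")
```

    A beam-specific margin, as suggested in the abstract, would repeat this per tissue group and weight by the water-equivalent thickness of each tissue along the beam.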

  2. 32 CFR 1900.14 - Fee estimates (pre-request option).

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Fee estimates (pre-request option). 1900.14 Section 1900.14 National Defense Other Regulations Relating to National Defense CENTRAL INTELLIGENCE... § 1900.14 Fee estimates (pre-request option). In order to avoid unanticipated or potentially large fees...

  3. Estimation of on-farm interventions to control Campylobacter

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Borck Høg, Birgitte; Rosenquist, Hanne

    2015-01-01

    Before making risk management decisions to control Campylobacter prevalence in broiler flocks, it is useful to identify effective interventions. A given risk factor may seem to have a large effect, but in practice interventions related to this risk factor may have only limited effect due...... to the relatively small proportion of farms that can actually be intervened on for the given risk factors. We present a novel tool for risk assessors to obtain such estimates of the effect of interventions before they are implemented at the farms. A statistical method was developed in order to estimate the flock...... population. In the present study risk factor estimates from a European study were used, and the reference population consisted of data from the risk factor study plus extra data from a large questionnaire survey to improve the representativeness of the reference population. The results showed that some...

  4. Comparative analysis estimates the relative frequencies of co-divergence and cross-species transmission within viral families.

    Directory of Open Access Journals (Sweden)

    Jemma L Geoghegan

    2017-02-01

    Full Text Available The cross-species transmission of viruses from one host species to another is responsible for the majority of emerging infections. However, it is unclear whether some virus families have a greater propensity to jump host species than others. If related viruses have an evolutionary history of co-divergence with their hosts there should be evidence of topological similarities between the virus and host phylogenetic trees, whereas host jumping generates incongruent tree topologies. By analyzing co-phylogenetic processes in 19 virus families and their eukaryotic hosts we provide a quantitative and comparative estimate of the relative frequency of virus-host co-divergence versus cross-species transmission among virus families. Notably, our analysis reveals that cross-species transmission is a near universal feature of the viruses analyzed here, with virus-host co-divergence occurring less frequently and always on a subset of viruses. Despite the overall high topological incongruence among virus and host phylogenies, the Hepadnaviridae, Polyomaviridae, Poxviridae, Papillomaviridae and Adenoviridae, all of which possess double-stranded DNA genomes, exhibited more frequent co-divergence than the other virus families studied here. At the other extreme, the virus and host trees for all the RNA viruses studied here, particularly the Rhabdoviridae and the Picornaviridae, displayed high levels of topological incongruence, indicative of frequent host switching. Overall, we show that cross-species transmission plays a major role in virus evolution, with all the virus families studied here having the potential to jump host species, and that increased sampling will likely reveal more instances of host jumping.

  5. Graphs to estimate an individualized risk of breast cancer.

    Science.gov (United States)

    Benichou, J; Gail, M H; Mulvihill, J J

    1996-01-01

    Clinicians who counsel women about their risk for developing breast cancer need a rapid method to estimate individualized risk (absolute risk), as well as the confidence limits around that point. The Breast Cancer Detection Demonstration Project (BCDDP) model (sometimes called the Gail model) assumes no genetic model and simultaneously incorporates five risk factors, but involves cumbersome calculations and interpolations. This report provides graphs to estimate the absolute risk of breast cancer from the BCDDP model. The BCDDP recruited 280,000 women from 1973 to 1980 who were monitored for 5 years. From this cohort, 2,852 white women developed breast cancer and 3,146 controls were selected, all with complete risk-factor information. The BCDDP model, previously developed from these data, was used to prepare graphs that relate a specific summary relative-risk estimate to the absolute risk of developing breast cancer over intervals of 10, 20, and 30 years. Once a summary relative risk is calculated, the appropriate graph is chosen that shows the 10-, 20-, or 30-year absolute risk of developing breast cancer. A separate graph gives the 95% confidence limits around the point estimate of absolute risk. Once a clinician rules out a single gene trait that predisposes to breast cancer and elicits information on age and four risk factors, the tables and figures permit an estimation of a woman's absolute risk of developing breast cancer in the next three decades. These results are intended to be applied to women who undergo regular screening. They should be used only in a formal counseling program to maximize a woman's understanding of the estimates and their proper use.
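
    The graphs described above map a summary relative risk to an absolute risk over an interval. A toy version of that conversion can be sketched as follows; the baseline hazard value is hypothetical and competing mortality is ignored (the actual BCDDP graphs account for it):

```python
import math

# Illustrative baseline breast-cancer hazard per year (hypothetical value,
# not from the BCDDP data).
BASELINE_HAZARD = 0.004

def absolute_risk(rr, years):
    """Toy absolute risk over an interval: 1 - exp(-RR * cumulative baseline hazard)."""
    return 1.0 - math.exp(-rr * BASELINE_HAZARD * years)

for rr in (1.0, 2.0, 4.0):
    print(f"RR={rr}: 10y {absolute_risk(rr, 10):.1%}, 30y {absolute_risk(rr, 30):.1%}")
```

    The qualitative point survives the simplification: absolute risk grows sub-linearly with relative risk and with the length of the projection interval.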

  6. Satisfaction and self-estimated performance in relation to indoor environmental parameters and building features

    DEFF Research Database (Denmark)

    Wargocki, Pawel; Frontczak, Monika; Schiavon, Stefano

    2012-01-01

    The paper examines how satisfaction with indoor environmental parameters and building features affects satisfaction and self-estimated job performance. The analyses used subjective responses from around 50,000 occupants collected mainly in US office buildings using a web-based survey administered...

  7. Estimation of relative biological effectiveness for low energy protons using cytogenetic end points in mammalian cells

    International Nuclear Information System (INIS)

    Bhat, N.N.; Nairy, Rajesh; Chaurasia, Rajesh; Desai, Utkarsha; Shirsath, K.B.; Anjaria, K.B.; Sreedevi, B.

    2013-01-01

    A facility has been designed and developed to facilitate irradiation of biological samples with a proton beam using the folded tandem ion accelerator (FOTIA) at BARC. The primary proton beam from the accelerator was diffused using a gold foil and channelled through a drift tube. The scattered beam was monitored and calibrated. Uniformity and dosimetry studies were conducted to calibrate the setup for precise irradiation of mammalian cells. Irradiation conditions and geometry were optimized for mammalian cells and other biological samples in thin layers. The irradiation facility is housed in a clean-air laminar flow unit to allow exposure of samples under aseptic conditions. The setup has been used for studying various radiobiological endpoints in many biological model systems. CHO, MCF-7, A-549 and INT-407 cell lines were studied in the present investigation using micronucleus (MN) induction as an indicator of radiation damage. The mammalian cells, grown on petri plates to about 40% confluence (log phase), were exposed to known proton doses in the range of 0.1 to 2 Gy. The dose estimation was based on specific ionization in the cell medium. Studies were also conducted using ⁶⁰Co gamma radiation to compare the results. A linear-quadratic response was observed for all the cell lines when exposed to ⁶⁰Co gamma radiation. In contrast, a linear response was observed for the proton beam. In addition, a very significant increase in the MN yield was observed for the proton beam compared to ⁶⁰Co gamma radiation. The estimated α and β values for CHO cells are 0.02±0.003 Gy⁻¹ and 0.042±0.006 Gy⁻², respectively, for ⁶⁰Co gamma radiation. For the proton beam, the estimated α for the linear fit is 0.37±0.011 Gy⁻¹. The estimated RBE was found to be in the range of 4-8 for all the cell lines and dose ranges studied. In conclusion, the proton irradiation facility developed for mammalian cells has helped to study various radiobiological endpoints. In this presentation, facility description, MN as
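
    Taking the quoted linear-quadratic (LQ) parameters for CHO cells at face value, an iso-effect RBE (the gamma dose producing the same MN yield, divided by the proton dose) can be recomputed as a rough cross-check. This is an illustration of the arithmetic only, not the authors' analysis; note that the iso-effect RBE computed this way rises toward low doses, where it approaches the reported range:

```python
import math

# LQ parameters quoted in the record (CHO cells), treated as exact here.
A_G, B_G = 0.02, 0.042   # gamma: E = A_G*D + B_G*D**2   (units Gy^-1, Gy^-2)
A_P = 0.37               # proton, linear fit: E = A_P*D  (units Gy^-1)

def iso_effect_rbe(d_proton):
    """Gamma dose producing the same MN yield as d_proton, divided by d_proton."""
    effect = A_P * d_proton
    # positive root of B_G*D^2 + A_G*D - effect = 0
    d_gamma = (-A_G + math.sqrt(A_G ** 2 + 4.0 * B_G * effect)) / (2.0 * B_G)
    return d_gamma / d_proton

for d in (0.1, 0.5, 1.0, 2.0):
    print(f"{d:.1f} Gy protons -> iso-effect RBE ~ {iso_effect_rbe(d):.2f}")
```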

  8. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  9. Estimation of the uncertainties considered in NPP PSA level 2

    International Nuclear Information System (INIS)

    Kalchev, B.; Hristova, R.

    2005-01-01

    The main approaches to uncertainty analysis are presented. The sources of uncertainty which should be considered in a level 2 PSA for a WWER reactor are defined, such as: uncertainties propagated from the level 1 PSA; uncertainties in input parameters; uncertainties related to the modelling of physical phenomena during the accident progression; and uncertainties related to the estimation of source terms. The methods for estimation of these uncertainties are also discussed in this paper.

  10. A Method for A Priori Implementation Effort Estimation for Hardware Design

    DEFF Research Database (Denmark)

    Abildgren, Rasmus; Diguet, Jean-Philippe; Gogniat, Guy

    2008-01-01

    This paper presents a metric-based approach for estimating the hardware implementation effort (in terms of time) for an application in relation to the number of independent paths of its algorithms. We define a metric which exploits the relation between the number of independent paths in an algori...... facilitating designers' and managers' needs for estimating the time-to-market schedule.

  11. Estimating least-developed countries’ vulnerability to climate-related extreme events over the next 50 years

    Science.gov (United States)

    Patt, Anthony G.; Tadross, Mark; Nussbaumer, Patrick; Asante, Kwabena; Metzger, Marc; Rafael, Jose; Goujon, Anne; Brundrit, Geoff

    2010-01-01

    When will least developed countries be most vulnerable to climate change, given the influence of projected socio-economic development? The question is important, not least because current levels of international assistance to support adaptation lag more than an order of magnitude below what analysts estimate to be needed, and scaling up support could take many years. In this paper, we examine this question using an empirically derived model of human losses to climate-related extreme events, as an indicator of vulnerability and the need for adaptation assistance. We develop a set of 50-year scenarios for these losses in one country, Mozambique, using high-resolution climate projections, and then extend the results to a sample of 23 least-developed countries. Our approach takes into account both potential changes in countries’ exposure to climatic extreme events, and socio-economic development trends that influence countries’ own adaptive capacities. Our results suggest that the effects of socio-economic development trends may begin to offset rising climate exposure in the second quarter of the century, and that it is in the period between now and then that vulnerability will rise most quickly. This implies an urgency to the need for international assistance to finance adaptation. PMID:20080585

  12. Estimating least-developed countries' vulnerability to climate-related extreme events over the next 50 years.

    Science.gov (United States)

    Patt, Anthony G; Tadross, Mark; Nussbaumer, Patrick; Asante, Kwabena; Metzger, Marc; Rafael, Jose; Goujon, Anne; Brundrit, Geoff

    2010-01-26

    When will least developed countries be most vulnerable to climate change, given the influence of projected socio-economic development? The question is important, not least because current levels of international assistance to support adaptation lag more than an order of magnitude below what analysts estimate to be needed, and scaling up support could take many years. In this paper, we examine this question using an empirically derived model of human losses to climate-related extreme events, as an indicator of vulnerability and the need for adaptation assistance. We develop a set of 50-year scenarios for these losses in one country, Mozambique, using high-resolution climate projections, and then extend the results to a sample of 23 least-developed countries. Our approach takes into account both potential changes in countries' exposure to climatic extreme events, and socio-economic development trends that influence countries' own adaptive capacities. Our results suggest that the effects of socio-economic development trends may begin to offset rising climate exposure in the second quarter of the century, and that it is in the period between now and then that vulnerability will rise most quickly. This implies an urgency to the need for international assistance to finance adaptation.

  13. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced main control rooms (MCRs), various automation systems are applied to enhance human performance and reduce human errors in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it is necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The estimation of the effectiveness of automation is achieved by calculating the failure probability of human performance related to the cognitive activities.

  14. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality (when, in expectation, each comparison set of that cardinality occurs the same number of times), for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory, using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
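
    A minimal maximum-likelihood fit for the Luce (softmax) special case on synthetic top-1 data can be sketched as follows; one strength parameter is pinned to zero for identifiability, and all names, sizes and step counts are illustrative, not from the paper:

```python
import math
import random

random.seed(0)
true_theta = [0.0, 1.0, 2.0]  # latent item strengths (Luce / softmax model)

def sample_choice(items):
    """Draw a top-1 winner from a comparison set under the true model."""
    w = [math.exp(true_theta[i]) for i in items]
    r = random.random() * sum(w)
    for i, wi in zip(items, w):
        r -= wi
        if r <= 0:
            return i
    return items[-1]

# top-1 observations over comparison sets of cardinality 3
data = [((0, 1, 2), sample_choice((0, 1, 2))) for _ in range(4000)]

# maximum likelihood by plain gradient ascent; theta[0] pinned for identifiability
theta = [0.0, 0.0, 0.0]
for _ in range(300):
    grad = [0.0, 0.0, 0.0]
    for s, winner in data:
        w = [math.exp(theta[i]) for i in s]
        z = sum(w)
        for i, wi in zip(s, w):
            grad[i] += (1.0 if i == winner else 0.0) - wi / z
    for i in (1, 2):  # item 0 is the reference
        theta[i] += 0.5 * grad[i] / len(data)

print([round(t, 2) for t in theta])
```

    With enough observations the recovered strengths approach (0, 1, 2); shrinking the sample size makes the mean squared error discussed in the abstract directly visible.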

  15. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  16. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  17. Estimation of relative order tensors, and reconstruction of vectors in space using unassigned RDC data and its application

    Science.gov (United States)

    Miao, Xijiang; Mukhopadhyay, Rishi; Valafar, Homayoun

    2008-10-01

    Advances in NMR instrumentation and pulse sequence design have resulted in easier acquisition of Residual Dipolar Coupling (RDC) data. However, computational and theoretical analysis of this type of data has continued to challenge the international community of investigators because of their complexity and rich information content. Contemporary use of RDC data has required a priori assignment, which significantly increases the overall cost of structural analysis. This article introduces a novel algorithm that utilizes unassigned RDC data acquired from multiple alignment media (nD-RDC, n ⩾ 3) for simultaneous extraction of the relative order tensor matrices and reconstruction of the interacting vectors in space. Estimation of the relative order tensors and reconstruction of the interacting vectors can be invaluable in a number of endeavors. An example application has been presented where the reconstructed vectors have been used to quantify the fitness of a template protein structure to the unknown protein structure. This work has other important direct applications such as verification of the novelty of an unknown protein and validation of the accuracy of an available protein structure model in drug design. More importantly, the presented work has the potential to bridge the gap between experimental and computational methods of structure determination.

  18. Unbiased estimators for spatial distribution functions of classical fluids

    Science.gov (United States)

    Adib, Artur B.; Jarzynski, Christopher

    2005-01-01

    We use a statistical-mechanical identity closely related to the familiar virial theorem to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.

  19. A Note On the Estimation of the Poisson Parameter

    Directory of Open Access Journals (Sweden)

    S. S. Chitgopekar

    1985-01-01

    distribution when there are errors in observing the zeros and ones and obtains both the maximum likelihood and moments estimates of the Poisson mean and the error probabilities. It is interesting to note that neither method gives unique estimates of these parameters unless the error probabilities are functionally related. However, it is equally interesting to observe that the estimate of the Poisson mean does not depend on the functional relationship between the error probabilities.

  20. Covariance matrix estimation for stationary time series

    OpenAIRE

    Xiao, Han; Wu, Wei Biao

    2011-01-01

    We obtain a sharp convergence rate for banded covariance matrix estimates of stationary processes. A precise order of magnitude is derived for the spectral radius of sample covariance matrices. We also consider a thresholded covariance matrix estimator that can better characterize sparsity if the true covariance matrix is sparse. As our main tool, we implement Toeplitz's [Math. Ann. 70 (1911) 351–376] idea and relate eigenvalues of covariance matrices to the spectral densities or Fourier transforms...
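
    The banding idea can be illustrated on a simulated AR(1) series: estimate the sample autocovariances and zero out every lag beyond a chosen band. This pure-Python sketch (all parameters illustrative) shows the banded first row of the Toeplitz covariance estimate; the thresholded variant mentioned above would instead zero entries whose magnitude falls below a cutoff:

```python
import random

random.seed(1)

# simulate a stationary AR(1): X_t = 0.5*X_{t-1} + eps_t, true gamma(h) = 0.5^h / 0.75
n, phi = 5000, 0.5
x, xs = 0.0, []
for _ in range(n):
    x = phi * x + random.gauss(0, 1)
    xs.append(x)

mean = sum(xs) / n

def sample_autocov(h):
    """Standard biased sample autocovariance at lag h."""
    return sum((xs[t] - mean) * (xs[t + h] - mean) for t in range(n - h)) / n

def banded_cov_row0(band, max_lag=10):
    """First row of the banded Toeplitz covariance estimate:
    keep gamma(h) for h <= band, set larger lags to zero."""
    return [sample_autocov(h) if h <= band else 0.0 for h in range(max_lag)]

row = banded_cov_row0(band=3)
print([round(g, 2) for g in row])
```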

  1. Non-destructive estimation of Oecophylla smaragdina colony biomass

    DEFF Research Database (Denmark)

    Pinkalski, Christian Alexander Stidsen; Offenberg, Joachim; Jensen, Karl-Martin Vagn

    in mango plantations in Darwin, Australia. The total nest volume of O. smaragdina colonies in a tree was related to the activity of the ants (R²=0.85), estimated as the density of ant trails in the tree. Subsequently, the relation between nest volume and ant biomass (R²=0.70) was added to enable...... a prediction of ant biomass directly from ant activity. With this combined regression, the ant biomass in a tree equaled 244.5 g fresh mass × ant activity. Similarly, the number of workers in trees was estimated using the relationship between nest volume and worker numbers (R²=0.84). Based on the model, five O...
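
    The combined regression quoted above reduces to a one-line point prediction (illustrative use only; the activity-index values below are made up):

```python
# Slope of the combined regression reported in the record:
# fresh ant biomass (g) per tree ~ 244.5 * ant-activity index (trail density).
SLOPE_G_PER_ACTIVITY = 244.5

def colony_biomass_g(activity_index):
    """Non-destructive biomass prediction from the trail-density activity index."""
    return SLOPE_G_PER_ACTIVITY * activity_index

for a in (0.2, 0.5, 1.0):
    print(f"activity {a:.1f} -> {colony_biomass_g(a):.1f} g fresh mass")
```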

  2. Best-Estimates in Bond Markets with Reinvestment Risk

    Directory of Open Access Journals (Sweden)

    Anne MacKay

    2015-07-01

    Full Text Available The concept of best-estimate, prescribed by regulators to value insurance liabilities for accounting and solvency purposes, has recently been discussed extensively in the industry and related academic literature. To differentiate hedgeable and non-hedgeable risks in a general case, recent literature defines best-estimates using orthogonal projections of a claim on the space of replicable payoffs. In this paper, we apply this concept of best-estimate to long-maturity claims in a market with reinvestment risk, since in this case the total liability cannot easily be separated into hedgeable and non-hedgeable parts. We assume that a limited number of short-maturity bonds are traded, and derive the best-estimate price of bonds with longer maturities, thus obtaining a best-estimate yield curve. We therefore use the multifactor Vasicek model and derive within this framework closed-form expressions for the best-estimate prices of long-term bonds.
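
    For orientation, the one-factor Vasicek special case already has a closed-form zero-coupon bond price (the paper works with the multifactor version and with best-estimate prices; this standard textbook formula, with arbitrary illustrative parameters, is shown only to fix notation):

```python
import math

# One-factor Vasicek short-rate model: dr = a*(b - r) dt + sigma dW.
a, b, sigma, r0 = 0.5, 0.03, 0.01, 0.03   # illustrative parameters

def price(T):
    """Closed-form zero-coupon bond price P(0, T) under one-factor Vasicek."""
    B = (1.0 - math.exp(-a * T)) / a
    lnA = (B - T) * (a * a * b - sigma * sigma / 2.0) / (a * a) \
          - sigma * sigma * B * B / (4.0 * a)
    return math.exp(lnA - B * r0)

for T in (1, 5, 10, 30):
    y = -math.log(price(T)) / T   # continuously compounded yield
    print(f"T={T:2d}y  P={price(T):.4f}  yield={y:.4%}")
```

    The paper's contribution is what replaces this formula when long-maturity bonds are not traded: the best-estimate price is the projection on the payoffs replicable by rolling the short-maturity bonds.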

  3. The relative validity and repeatability of an FFQ for estimating intake of zinc and its absorption modifiers in young and older Saudi adults.

    Science.gov (United States)

    Alsufiani, Hadeil M; Yamani, Fatmah; Kumosani, Taha A; Ford, Dianne; Mathers, John C

    2015-04-01

    To assess the relative validity and repeatability of a sixty-four-item FFQ for estimating dietary intake of Zn and its absorption modifiers in Saudi adults. In addition, we used the FFQ to investigate the effect of age and gender on these intakes. To assess validity, all participants completed the FFQ (FFQ1) and a 3 d food record. After 1 month, the FFQ was administered a second time (FFQ2) to assess repeatability. Jeddah, Saudi Arabia. One hundred males and females aged 20-30 years and 60-70 years participated. Mean intakes of Zn and protein from FFQ1 were significantly higher than those from the food record, while there were no detectable differences between tools for measurement of phytic acid intake. Estimated intakes of Zn, protein and phytate by both approaches were strongly correlated across the range of intakes, while for Zn and phytic acid the difference between tools increased with increasing mean intake. Zn and protein intakes from FFQ1 and FFQ2 were highly correlated (r>0·68). Older adults consumed less Zn and protein than young adults. Intakes of all dietary components were lower in females than in males. The FFQ developed and tested in the current study demonstrated reasonable relative validity and high repeatability and was capable of detecting differences in intakes between age and gender groups.

  4. Dutch diabetes prevalence estimates (DUDE-1)

    NARCIS (Netherlands)

    Kleefstra, Nanne; Landman, Gijsw. D.; Van Hateren, Kornelis J. J.; Meulepas, Marianne; Romeijnders, Arnold; Rutten, Guy E. H.; Klomp, Maarten; Houweling, Sebastiaan T.; Bilo, Henk J. G.

    2016-01-01

    Background: Recent decades have seen a constant upward projection in the prevalence of diabetes. Attempts to estimate diabetes prevalence rates based on relatively small population samples quite often result in underestimation. The aim of the present study was to investigate whether the Dutch

  5. Dutch diabetes prevalence estimates (DUDE-1)

    NARCIS (Netherlands)

    Kleefstra, Nanne; Landman, Gijsw. D.; Van Hateren, Kornelis J. J.; Meulepas, Marianne; Romeijnders, Arnold; Rutten, Guy E. H.; Klomp, Maarten; Houweling, Sebastiaan T.; Bilo, Henk J. G.

    Background: Recent decades have seen a constant upward projection in the prevalence of diabetes. Attempts to estimate diabetes prevalence rates based on relatively small population samples quite often result in underestimation. The aim of the present study was to investigate whether the Dutch

  6. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient

  7. Estimated Bathymetry of the Puerto Rico shelf

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This classification of estimated depth represents the relative bathymetry of Puerto Rico's shallow waters based on Landsat imagery for NOAA's Coastal Centers for...

  8. Ant-inspired density estimation via random walks.

    Science.gov (United States)

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in a few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
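
    The encounter-rate mechanism can be illustrated with a small simulation: lazy random walkers on a torus, with density estimated from how often agents share a grid cell. This is a toy version of the setting (grid size, agent count, step count and the "same cell" encounter definition are all arbitrary choices, not the paper's model):

```python
import random
from collections import Counter

random.seed(2)
W = 20                       # torus side length; true density = agents / cells
cells, agents = W * W, 40
pos = [(random.randrange(W), random.randrange(W)) for _ in range(agents)]

steps, pairs = 2000, 0
moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]   # lazy random walk
for _ in range(steps):
    new = []
    for x, y in pos:
        dx, dy = random.choice(moves)
        new.append(((x + dx) % W, (y + dy) % W))
    pos = new
    # count co-located pairs this step
    pairs += sum(k * (k - 1) // 2 for k in Counter(pos).values())

# average number of other agents sharing a cell with a given agent, per step
density_estimate = 2 * pairs / (agents * steps)
print(f"true density {agents / cells:.3f}, estimate {density_estimate:.3f}")
```

    The estimate concentrates near the true density even though nearby walkers collide repeatedly, which is the dependence the paper's analysis controls.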

  9. Estimating the relation between groundwater and river water by measuring the concentration of Rn-222

    Energy Technology Data Exchange (ETDEWEB)

    Yoneda, Minoru; Morisawa, Shinsuke [Kyoto Univ. (Japan). Faculty of Engineering

    1997-02-01

    This study aimed to estimate the relationship between groundwater in the shallow layer and river water by determining the concentrations of ²²²Rn and nitric nitrogen along with water temperature. A region of ca. 20 km along river A in a certain basin was chosen as the test area. The Rn concentration of groundwater was determined by extracting Rn with toluene and counting in a liquid scintillation counter, whereas for river water it was determined by the activated-charcoal passive collector method developed by the authors, in which the amount of Rn adsorbed on activated charcoal was estimated with a Ge solid-state detector. In addition, water temperature and nitric nitrogen concentration were measured at various points in the test area. Thus, a distribution map of the three parameters was made on the basis of the data obtained in December 1989. Since the Rn concentration is generally higher in groundwater than in river water, and the water temperature in December is higher in the former, it seems likely that the concentrations of Rn and nitric nitrogen would become higher in areas where groundwater soaks into the river. Thus, the directions of groundwater flow at the respective sites along river A were estimated from the data regarding the properties of groundwater. (M.N.)

  10. Effect of the Absorbed Photosynthetically Active Radiation Estimation Error on Net Primary Production Estimation - A Study with MODIS FPAR and TOMS Ultraviolet Reflective Products

    International Nuclear Information System (INIS)

    Kobayashi, H.; Matsunaga, T.; Hoyano, A.

    2002-01-01

    Absorbed photosynthetically active radiation (APAR), defined as the downward solar radiation in the 400-700 nm band absorbed by vegetation, is one of the key variables for Net Primary Production (NPP) estimation from satellite data. To reduce the uncertainties in global NPP estimation, it is necessary to clarify the accuracy of APAR. In this paper, we first propose an improved PAR estimation method based on Eck and Dye's method, in which ultraviolet (UV) reflectivity data derived from the Total Ozone Mapping Spectrometer (TOMS) at the top of the atmosphere are used to estimate cloud transmittance. The proposed method accounts for the variable effect of land-surface UV reflectivity on the satellite-observed UV data. Comparisons of monthly mean PAR between satellite-derived and ground-based data at various meteorological stations in Japan indicate that the improved method reduces the bias errors in the summer season. Assuming a relative error of 10% in the fraction of PAR (FPAR) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), we estimate APAR relative errors of 10-15%. Annual NPP is calculated using APAR derived from MODIS FPAR and the improved PAR estimation method. Random and bias errors of annual NPP in a 1 km resolution pixel are shown to be less than 4% and 6%, respectively. The APAR bias errors due to the PAR bias errors also affect the estimated total NPP. We estimated the most probable total annual NPP in Japan by subtracting the PAR bias errors; it amounts to about 248 MtC/yr. Total annual NPP computed with the improved PAR estimation method and with Eck and Dye's method differs from this most probable value by 4% and 9%, respectively. A previous intercomparison among fifteen NPP models (4) showed that global NPP estimates vary between 44.4 and 66.3 GtC/yr (coefficient of variation = 14%). Hence we conclude that the NPP estimation uncertainty due to APAR estimation error is small.
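
    The quoted APAR error range is consistent with simple error propagation for the product APAR = FPAR x PAR. The snippet below assumes (our assumption, not stated in the abstract) that the FPAR and PAR relative errors are independent and therefore combine in quadrature:

```python
import math

def apar_relative_error(rel_fpar, rel_par):
    """Relative error of APAR = FPAR * PAR, assuming independent
    multiplicative errors that combine in quadrature."""
    return math.sqrt(rel_fpar**2 + rel_par**2)

# A 10% FPAR error combined with PAR errors of 0-11% spans
# roughly the 10-15% APAR range quoted in the abstract.
print(apar_relative_error(0.10, 0.00))
print(apar_relative_error(0.10, 0.11))
```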

  11. Stable Parameter Estimation for Autoregressive Equations with Random Coefficients

    Directory of Open Access Journals (Sweden)

    V. B. Goryainov

    2014-01-01

    Full Text Available In recent years there has been growing interest in non-linear time series models. They are more flexible than traditional linear models and allow a more adequate description of real data. Among these models, the autoregressive model with random coefficients plays an important role. It is widely used in various fields of science and technology, for example in physics, biology, economics and finance. The model parameters are the mean values of the autoregressive coefficients; their estimation is the main task of model identification. The basic estimation method is still the least squares method, which gives good results for Gaussian time series but is quite sensitive to even small departures from the assumption of Gaussian observations. In this paper we propose estimates that generalize the least squares estimate in the sense that the quadratic objective function is replaced by an arbitrary convex, even function. A reasonable choice of objective function preserves the benefits of the least squares estimate while eliminating its shortcomings: in particular, the estimates can be made almost as efficient as the least squares estimate in the Gaussian case, yet lose almost no accuracy under small deviations of the observation distribution from the Gaussian. The main result is a proof of consistency and asymptotic normality of the proposed estimates in the particular case of the one-parameter model describing a stationary process with finite variance. Another important result is the derivation of the asymptotic relative efficiency of the proposed estimates with respect to the least squares estimate, which allows the two estimates to be compared as a function of the probability distributions of the innovation process and of the autoregressive coefficients. The results can be used to identify autoregressive processes, especially those of non-Gaussian nature, and/or autoregressive processes observed with gross errors.
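
    The central idea, replacing the quadratic objective of least squares with another convex, even function, can be sketched for a first-order random-coefficient autoregression. The Huber function below is our illustrative choice, not necessarily the paper's; all simulation parameters are invented:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Simulate RCA(1): X_t = (beta + eta_t) X_{t-1} + eps_t, with beta = 0.5
n, beta = 4000, 0.5
eta = 0.1 * rng.standard_normal(n)
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = (beta + eta[t]) * x[t - 1] + eps[t]

def m_estimate(x, rho):
    """Generalized least squares: minimize the sum of rho(residual) over b."""
    obj = lambda b: np.sum(rho(x[1:] - b * x[:-1]))
    return minimize_scalar(obj, bounds=(-0.99, 0.99), method="bounded").x

def huber(r, c=1.345):
    """Convex, even objective: quadratic near zero, linear in the tails."""
    return np.where(np.abs(r) <= c, 0.5 * r**2, c * np.abs(r) - 0.5 * c**2)

b_ls = m_estimate(x, lambda r: r**2)   # classical least squares
b_hub = m_estimate(x, huber)           # robust convex even objective
print(b_ls, b_hub)
```

    On clean Gaussian data both estimates land near the true mean coefficient 0.5; the robust objective pays little efficiency for its protection against heavy-tailed innovations or gross errors.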

  12. Costs of regulatory compliance: categories and estimating techniques

    International Nuclear Information System (INIS)

    Schulte, S.C.; McDonald, C.L.; Wood, M.T.; Cole, R.M.; Hauschulz, K.

    1978-10-01

    Use of the categorization scheme and cost estimating approaches presented in this report can make cost estimates of regulation-required compliance activities valuable to policy makers. The report describes a uniform assessment framework that, when used, would assure that cost studies are generated on an equivalent basis. Such normalization would make comparisons of different compliance activity cost estimates more meaningful, thus enabling the relative merits of different regulatory options to be judged more effectively. The framework establishes uniform cost reporting accounts and cost estimating approaches for use in assessing the costs of complying with regulatory actions. The framework was specifically developed for use in a current study at Pacific Northwest Laboratory; however, use of the procedures for other applications is also appropriate.

  13. Flat-Top Realized Kernel Estimation of Quadratic Covariation with Non-Synchronous and Noisy Asset Prices

    DEFF Research Database (Denmark)

    Varneskov, Rasmus T.

    These transformations are all shown to inherit the desirable asymptotic properties of the generalized flat-top realized kernels. A simulation study shows that the class of estimators has a superior finite-sample tradeoff between bias and root mean squared error relative to competing estimators. Lastly, two small empirical applications to high-frequency stock market data illustrate the bias reduction relative to competing estimators in estimating correlations, realized betas, and mean-variance frontiers, as well as the use of the new estimators in the dynamics of hedging.

  14. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  15. Bayesian estimation of isotopic age differences

    International Nuclear Information System (INIS)

    Curl, R.L.

    1988-01-01

    Isotopic dating is subject to uncertainties arising from counting statistics and experimental errors. These uncertainties are additive when an isotopic age difference is calculated. If large, they can lead to no significant age difference by classical statistics. In many cases, relative ages are known because of stratigraphic order or other clues. Such information can be used to establish a Bayes estimate of age difference which will include prior knowledge of age order. Age measurement errors are assumed to be log-normal and a noninformative but constrained bivariate prior for two true ages in known order is adopted. True-age ratio is distributed as a truncated log-normal variate. Its expected value gives an age-ratio estimate, and its variance provides credible intervals. Bayesian estimates of ages are different and in correct order even if measured ages are identical or reversed in order. For example, age measurements on two samples might both yield 100 ka with coefficients of variation of 0.2. Bayesian estimates are 22.7 ka for age difference with a 75% credible interval of [4.4, 43.7] ka
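
    The worked example in the abstract (two measured ages of 100 ka, each with a coefficient of variation of 0.2) can be checked with a small rejection-sampling sketch. We assume log-normal measurement errors and a prior that is flat in log age, truncated to the known order, which is one concrete reading of the "noninformative but constrained bivariate prior":

```python
import numpy as np

rng = np.random.default_rng(2)

def bayes_age_difference(m1, m2, cv, n=1_000_000):
    """Monte Carlo sketch: sample true ages from the log-normal likelihoods,
    reject pairs violating the stratigraphic order t2 >= t1, and summarize
    the posterior age difference with its mean and a 75% credible interval."""
    sigma = np.sqrt(np.log(1.0 + cv**2))       # log-normal shape from CV
    t1 = rng.lognormal(np.log(m1), sigma, n)
    t2 = rng.lognormal(np.log(m2), sigma, n)
    d = (t2 - t1)[t2 >= t1]                    # enforce known age order
    return d.mean(), np.percentile(d, [12.5, 87.5])

mean_d, ci = bayes_age_difference(100.0, 100.0, 0.2)
print(mean_d, ci)
```

    Under these assumptions the sketch reproduces the abstract's numbers to within Monte Carlo and modeling error: a posterior mean age difference near 23 ka with a 75% credible interval of roughly [4, 44] ka, even though the two measured ages are identical.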

  16. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…

  17. Parameter estimation in stochastic differential equations

    CERN Document Server

    Bishwal, Jaya P N

    2008-01-01

    Parameter estimation in stochastic differential equations and stochastic partial differential equations is the science, art and technology of modelling complex phenomena and making beautiful decisions. The subject has attracted researchers from several areas of mathematics and other related fields like economics and finance. This volume presents the estimation of the unknown parameters in the corresponding continuous models based on continuous and discrete observations and examines extensively maximum likelihood, minimum contrast and Bayesian methods. Useful because of the current availability of high frequency data is the study of refined asymptotic properties of several estimators when the observation time length is large and the observation time interval is small. Also space time white noise driven models, useful for spatial data, and more sophisticated non-Markovian and non-semimartingale models like fractional diffusions that model the long memory phenomena are examined in this volume.

  18. Statistical significant change versus relevant or important change in (quasi) experimental design : some conceptual and methodological problems in estimating magnitude of intervention-related change in health services research

    NARCIS (Netherlands)

    Middel, Berrie; van Sonderen, Eric

    2002-01-01

    This paper aims to identify problems in estimating and the interpretation of the magnitude of intervention-related change over time or responsiveness assessed with health outcome measures. Responsiveness is a problematic construct and there is no consensus on how to quantify the appropriate index to

  19. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
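
    For readers unfamiliar with the ingredients, the following is a generic sketch (not the paper's estimation charts) of an empirical semivariogram computed in the vertical direction, together with the spherical model, one of the three semivariogram models the paper considers. The synthetic well-log series is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def spherical(h, c, a):
    """Spherical semivariogram model with sill c and range a."""
    h = np.asarray(h, dtype=float)
    g = c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, c)

def empirical_semivariogram(z, max_lag):
    """Empirical semivariogram of a 1-D (e.g. vertical well-log) series:
    gamma(h) = 0.5 * mean of (z[i+h] - z[i])^2."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

# Synthetic 'vertical' series with short-range correlation (10-sample window)
z = np.convolve(rng.standard_normal(5000), np.ones(10) / 10, mode="valid")
gamma = empirical_semivariogram(z, 30)
print(gamma[:5], gamma[-1])
```

    The empirical curve rises from near zero at short lags to a sill equal to the series variance (0.1 here) once lags exceed the correlation range, which is the behavior the model curves are fitted to.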

  20. Contributions of national and global health estimates to monitoring health-related Sustainable Development Goals in Thailand.

    Science.gov (United States)

    Bundhamcharoen, Kanitta; Limwattananon, Supon; Kusreesakul, Khanitta; Tangcharoensathien, Viroj

    2017-01-01

    The Millennium Development Goals (MDGs) triggered increased demand for data on child and maternal mortality for monitoring progress. With the advent of the Sustainable Development Goals (SDGs) and growing evidence of an epidemiological transition towards non-communicable diseases, policy makers need data on mortality and disease trends and distribution to inform effective policies and support monitoring progress. Where there are limited capacities to produce national health estimates (NHEs), global health estimates (GHEs) can fill gaps for global monitoring and comparisons. This paper draws lessons learned from Thailand's burden of disease study (BOD) on capacity development for NHEs, and discusses the contributions and limitation of GHEs in informing policies at country level. Through training and technical support by external partners, capacities are gradually strengthened and institutionalized to enable regular updates of BOD at national and sub-national levels. Initially, the quality of cause of death reporting in the death certificates was inadequate, especially for deaths occurring in the community. Verbal autopsies were conducted, using domestic resources, to determine probable causes of deaths occurring in the community. This helped improve the estimation of years of life lost. Since the achievement of universal health coverage in 2002, the quality of clinical data on morbidities has also considerably improved. There are significant discrepancies between the 2010 Global Burden of Diseases (GBD) estimates for Thailand and the 1999 nationally generated BOD, especially for years of life lost due to HIV/AIDS, and the ranking of priority diseases. National ownership of NHEs and effective interfaces between researchers and decision makers contribute to enhanced country policy responses, while sub-national data are intended to be used by various sub-national-level partners. Though GHEs contribute to benchmarking country achievement compared with global health

  1. Cost estimates to guide manufacturing of composite waved beam

    International Nuclear Information System (INIS)

    Ye Jinrui; Zhang Boming; Qi Haiming

    2009-01-01

    A cost estimation model based on the manufacturing process is presented. In the model, the effects of material, labor, tooling and equipment are discussed and the corresponding formulas provided. A method of selecting estimation variables is described, based on a case study of a composite waved beam using autoclave cure. The model parameters related to the process time estimation of the lay-up procedure were analyzed and modified for different part configurations. Comparison of the estimated process time with the actual one shows little error. The model is thus verified as applicable for guiding the design and manufacturing of the composite material.

  2. Developmental and individual differences in pure numerical estimation.

    Science.gov (United States)

    Booth, Julie L; Siegler, Robert S

    2006-01-01

    The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1, kindergartners and 1st, 2nd, and 3rd graders were presented problems involving the numbers 0-100; in Experiment 2, 2nd and 4th graders were presented problems involving the numbers 0-1,000. Parallel developmental trends, involving increasing reliance on linear representations of numbers and decreasing reliance on logarithmic ones, emerged across different types of estimation. Consistent individual differences across tasks were also apparent, and all types of estimation skill were positively related to math achievement test scores. Implications for understanding of mathematics learning in general are discussed. Copyright 2006 APA, all rights reserved.

  3. Paleosecular variation analysis of high-latitude paleomagnetic data from the volcanic island of Jan Mayen

    Science.gov (United States)

    Cromwell, G.; Tauxe, L.; Staudigel, H.; Pedersen, L. R.; Constable, C.; Pedersen, R.; Duncan, R. A.; Staudigel, P.

    2009-12-01

    Recent investigation of high-latitude paleomagnetic data from the Erebus Volcanic Province (EVP), Antarctica shows a departure from magnetic dipole predictions for paleointensity data for the period 0-5 Ma. The average EVP paleointensity (31.5 +/- 2.4 μT) is equivalent to low-latitude measurements (1) or approximately half the strength predicted for a dipole at high-latitude. Also, paleosecular variation models (e.g., 2,3) predict dispersions of directions that are much lower than the high latitude observations. Observed low intensity values may be the result of reduced convective flow inside the tangent cylinder of the Earth’s core or insufficient temporal sampling (1). More high-latitude paleomagnetic data are necessary in order to investigate the cause of the depressed intensity values and to provide better geographic and temporal resolution for future statistical paleosecular variation models. To address this, we carried out two field seasons, one in Spitzbergen (79°N, 14°E) and one on the young volcanic island of Jan Mayen (71°N, 8°W). The latter sampling effort was guided by age analyses of samples obtained by P. Imsland (unpublished and 4). We will present new paleodirectional and paleointensity data from a total of 25 paleomagnetic sites. These data enhance the temporal resolution of global paleomagnetic data and allow for a more complete evaluation of the time-averaged magnetic field from 0-5 Ma. We will present a new analysis of paleosecular variation based on our new data, in combination with other recently published data sets. (1) Lawrence, K.P., L.Tauxe, H. Staudigel, C.G. Constable, A. Koppers, W. MacIntosh, C.L. Johnson, Paleomagnetic field properties at high southern latitude. Geochemistry Geophysics Geosystems 10 (2009). (2) McElhinny, M.W., P.L. McFadden, Paleosecular variation over the past 5 Myr based on a new generalized database. Geophysics Journal International 131 (1997), 240-252. (3) Tauxe, L., Kent, D.V., A simplified statistical

  4. Consequences of alternative tree-level biomass estimation procedures on U.S. forest carbon stock estimates

    Science.gov (United States)

    Grant M. Domke; Christopher W. Woodall; James E. Smith; James A. Westfall; Ronald E. McRoberts

    2012-01-01

    Forest ecosystems are the largest terrestrial carbon sink on earth and their management has been recognized as a relatively cost-effective strategy for offsetting greenhouse gas emissions. Forest carbon stocks in the U.S. are estimated using data from the USDA Forest Service, Forest Inventory and Analysis (FIA) program. In an attempt to balance accuracy with...

  5. Estimating the Reference Incremental Cost-Effectiveness Ratio for the Australian Health System.

    Science.gov (United States)

    Edney, Laura Catherine; Haji Ali Afzali, Hossein; Cheng, Terence Chai; Karnon, Jonathan

    2018-02-01

    Spending on new healthcare technologies increases net population health when the benefits of a new technology are greater than their opportunity costs: the benefits of the best alternative use of the additional resources required to fund a new technology. The objective of this study was to estimate the expected incremental cost per quality-adjusted life-year (QALY) gained of increased government health expenditure as an empirical estimate of the average opportunity costs of decisions to fund new health technologies. The estimated incremental cost-effectiveness ratio (ICER) is proposed as a reference ICER to inform value-based decision making in Australia. Empirical top-down approaches were used to estimate the QALY effects of government health expenditure with respect to reduced mortality and morbidity. Instrumental variable two-stage least-squares regression was used to estimate the elasticity of mortality-related QALY losses to a marginal change in government health expenditure. Regression analysis of longitudinal survey data representative of the general population was used to isolate the effects of increased government health expenditure on morbidity-related QALY gains. Clinical judgement informed the duration of the health-related quality-of-life improvement from the annual increase in government health expenditure. The base-case reference ICER was estimated at AUD28,033 per QALY gained. Parametric uncertainty associated with the estimation of mortality- and morbidity-related QALYs generated a 95% confidence interval of AUD20,758-37,667. Recent public summary documents suggest new technologies with ICERs above AUD40,000 per QALY gained are recommended for public funding. The empirical reference ICER reported in this article suggests more QALYs could be gained if resources were allocated to other forms of health spending.

  6. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.

  8. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    International Nuclear Information System (INIS)

    R.E. Sweeney

    2001-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update incorporating information from the Viability Assessment (VA) , License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  9. Monitored Geologic Repository Life Cycle Cost Estimate Assumptions Document

    International Nuclear Information System (INIS)

    Sweeney, R.

    2000-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  10. Estimation of contact resistance in proton exchange membrane fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lianhong; Liu, Ying; Song, Haimin; Wang, Shuxin [School of Mechanical Engineering, Tianjin University, 92 Weijin Road, Nankai District, Tianjin 300072 (China); Zhou, Yuanyuan; Hu, S. Jack [Department of Mechanical Engineering, The University of Michigan, Ann Arbor, MI 48109-2125 (United States)

    2006-11-22

    The contact resistance between the bipolar plate (BPP) and the gas diffusion layer (GDL) is an important factor contributing to the power loss in proton exchange membrane (PEM) fuel cells. At present there is still not a well-developed method to estimate such contact resistance. This paper proposes two effective methods for estimating the contact resistance between the BPP and the GDL based on an experimental contact resistance-pressure constitutive relation. The constitutive relation was obtained by experimentally measuring the contact resistance between the GDL and a flat plate of the same material and processing conditions as the BPP under stated contact pressure. In the first method, which was a simplified prediction, the contact area and contact pressure between the BPP and the GDL were analyzed with a simple geometrical relation and the contact resistance was obtained by the contact resistance-pressure constitutive relation. In the second method, the contact area and contact pressure between the BPP and GDL were analyzed using FEM and the contact resistance was computed for each contact element according to the constitutive relation. The total contact resistance was then calculated by considering all contact elements in parallel. The influence of load distribution on contact resistance was also investigated. Good agreement was demonstrated between experimental results and predictions by both methods. The simplified prediction method provides an efficient approach to estimating the contact resistance in PEM fuel cells. The proposed methods for estimating the contact resistance can be useful in modeling and optimizing the assembly process to improve the performance of PEM fuel cells. (author)
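
    The paper's second method reduces to evaluating the measured resistance-pressure relation at each FEM contact element and combining the element resistances in parallel. The sketch below uses a hypothetical power-law constitutive relation and made-up contact pressures and areas standing in for real FEM output:

```python
import numpy as np

def contact_resistance(pressures, areas, r_of_p):
    """Per-element contact resistances from an area-specific
    resistance-pressure constitutive relation r_of_p, combined in parallel
    (all contact elements conduct side by side)."""
    r_elems = r_of_p(np.asarray(pressures)) / np.asarray(areas)
    return 1.0 / np.sum(1.0 / r_elems)

# Hypothetical fit of measured data: R*A [mOhm*cm^2] vs contact pressure [MPa]
r_of_p = lambda p: 2.5 * p ** -0.8

pressures = [0.5, 1.0, 1.5, 2.0]   # MPa at four contact elements (FEM output)
areas = [0.2, 0.3, 0.3, 0.2]       # contact areas, cm^2

total = contact_resistance(pressures, areas, r_of_p)
print(total)                        # mOhm
```

    Because the elements act in parallel, the total is dominated by the best-loaded contacts; non-uniform clamping load therefore changes the total resistance even at the same overall force, which is the load-distribution effect the paper investigates.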

  11. Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    H. Vincent Poor

    2008-05-01

    Full Text Available In cooperative localization systems, wireless nodes need to exchange accurate position-related information such as time-of-arrival (TOA and angle-of-arrival (AOA, in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by considering a hypothesis testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. The simulation results are presented to analyze the performance of the estimator.
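
    The two-step structure, a coarse block-energy search followed by a fine search for the first arriving path, can be illustrated with a toy baseband signal. The threshold rule below is a simplified stand-in for the paper's hypothesis test, and the waveform, noise level and block size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated received waveform: noise plus a multipath pulse cluster at true_toa
n, true_toa, sigma = 4096, 1000, 0.1
r = sigma * rng.standard_normal(n)
r[true_toa : true_toa + 8] += np.array([1.0, 0.8, -0.6, 0.5,
                                        -0.4, 0.3, 0.2, 0.1])

def two_step_toa(r, block=64, k=20.0, noise_len=256):
    """Step 1 (coarse): pick the maximum-energy block from low-rate energy
    sums. Step 2 (fine): starting one block earlier, flag the first sample
    whose energy exceeds a noise-derived threshold (a simple stand-in for a
    hypothesis test on the first arriving path)."""
    e = r ** 2
    sigma2 = e[:noise_len].mean()              # assumed signal-free prefix
    nb = len(e) // block
    blocks = e[: nb * block].reshape(nb, block).sum(axis=1)
    start = max(int(np.argmax(blocks)) - 1, 0) * block
    fine = np.nonzero(e[start:] > k * sigma2)[0]
    return start + int(fine[0])

print(two_step_toa(r))
```

    The coarse step only needs low-rate block energies, which is what makes the scheme practical; the fine step then works on a short window rather than the whole observation interval.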

  12. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
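
    The reported advantage of systematic over simple random sampling of angling effort can be reproduced in a toy Monte Carlo. The daily effort curve below is hypothetical, and the sketch assumes angler counts are observed without error and that catch rate is known, as in the study's simulations:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical fishing day: angler counts for 36 count periods with a
# diurnal trend; total effort is the sum over all periods.
periods = 36
counts = 40 * np.sin(np.linspace(0, np.pi, periods)) + rng.poisson(5, periods)
total = counts.sum()

def estimate_total(sample_idx):
    """Expand the mean sampled count to all count periods."""
    return counts[sample_idx].mean() * periods

m, reps = 6, 5000
srs_err, sys_err = [], []
for _ in range(reps):
    srs = rng.choice(periods, size=m, replace=False)   # simple random sample
    start = rng.integers(0, periods // m)
    sys = start + (periods // m) * np.arange(m)        # systematic sample
    srs_err.append(estimate_total(srs) - total)
    sys_err.append(estimate_total(sys) - total)

print(np.mean(np.square(srs_err)), np.mean(np.square(sys_err)))
```

    Both designs are unbiased, but the systematic sample always spreads its count periods across the diurnal trend, so its mean squared error is far smaller, the same qualitative ordering the simulations in the study report.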

  13. Preoperative Estimation of Future Remnant Liver Function Following Portal Vein Embolization Using Relative Enhancement on Gadoxetic Acid Disodium-Enhanced Magnetic Resonance Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Yozo [Department of Radiology, Aichi Medical University, Aichi 480-1195 (Japan); Department of Diagnostic and Interventional Radiology, Aichi Cancer Center Hospital, Nagoya 464-8681 (Japan); Matsushima, Shigeru; Inaba, Yoshitaka [Department of Diagnostic and Interventional Radiology, Aichi Cancer Center Hospital, Nagoya 464-8681 (Japan); Sano, Tsuyoshi [Department of Gastroenterological Surgery, Aichi Cancer Center Hospital, Nagoya 464-8681 (Japan); Yamaura, Hidekazu; Kato, Mina [Department of Diagnostic and Interventional Radiology, Aichi Cancer Center Hospital, Nagoya 464-8681 (Japan); Shimizu, Yasuhiro; Senda, Yoshiki [Department of Gastroenterological Surgery, Aichi Cancer Center Hospital, Nagoya 464-8681 (Japan); Ishiguchi, Tsuneo [Department of Radiology, Aichi Medical University, Aichi 480-1195 (Japan)

    2015-11-01

    To retrospectively evaluate relative enhancement (RE) in the hepatobiliary phase of gadoxetic acid disodium-enhanced magnetic resonance (MR) imaging as a preoperative estimate of future remnant liver (FRL) function in patients who underwent portal vein embolization (PVE). In 53 patients, the correlation between the indocyanine green clearance (ICG-K) and RE imaging was analyzed before hepatectomy (first analysis). Twenty-three of the 53 patients underwent PVE followed by a repeat RE imaging and ICG test before an extended hepatectomy and their results were further analyzed (second analysis). Whole liver function and FRL function were calculated on the MR imaging as follows: RE x total liver volume (RE Index) and FRL-RE x FRL volume (Rem RE Index), respectively. Regarding clinical outcome, posthepatectomy liver failure (PHLF) was evaluated in patients undergoing PVE. Indocyanine green clearance correlated with the RE Index (r = 0.365, p = 0.007), and ICG-K of FRL (ICG-Krem) strongly correlated with the Rem RE Index (r = 0.738, p < 0.001) in the first analysis. Both the ICG-Krem and the Rem RE Index were significantly correlated after PVE (r = 0.508, p = 0.013) at the second analysis. The rate of improvement of the Rem RE Index from before PVE to after PVE was significantly higher than that of ICG-Krem (p = 0.014). Patients with PHLF had a significantly lower Rem RE Index than patients without PHLF (p = 0.023). Relative enhancement imaging can be used to estimate FRL function after PVE.

  14. Fast skin dose estimation system for interventional radiology.

    Science.gov (United States)

    Takata, Takeshi; Kotoku, Jun'ichi; Maejima, Hideyuki; Kumagai, Shinobu; Arai, Norikazu; Kobayashi, Takenori; Shiraishi, Kenshiro; Yamamoto, Masayoshi; Kondo, Hiroshi; Furui, Shigeru

    2018-03-01

    To minimise the radiation dermatitis related to interventional radiology (IR), rapid and accurate dose estimation has been sought for all procedures. We propose a technique for estimating the patient skin dose rapidly and accurately using Monte Carlo (MC) simulation with a graphics processing unit (GPU, GTX 1080; Nvidia Corp.). The skin dose distribution is simulated for fluoroscopic conditions based on an individual patient's computed tomography (CT) dataset after the dataset has been segmented into air, water and bone based on pixel values. The skin is assumed to be a single layer at the outer surface of the body. Fluoroscopic conditions are obtained from a log file of a fluoroscopic examination. Estimating the absorbed skin dose distribution requires calibration of the dose simulated by our system. For this purpose, a linear function was used to approximate the relation between the simulated dose and the dose measured with radiophotoluminescence (RPL) glass dosimeters in a water-equivalent phantom. Differences in maximum skin dose between our system and the Particle and Heavy Ion Transport code System (PHITS) were at most 6.1%. The relative statistical error (2σ) of the dose simulated by our system was ≤3.5%. Using the GPU, the simulation on a chest CT dataset targeting the heart took 3.49 s on average: the GPU is 122 times faster than a CPU (Core i7-7700K; Intel Corp.). Our system (using the GPU, the log file, and the CT dataset) estimates the skin dose more rapidly and more accurately than conventional methods.
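
    The linear calibration step described above can be sketched as a least-squares fit of measured to simulated dose. The dose pairs below are invented for illustration; they are not the paper's RPL dosimeter measurements.

```python
import numpy as np

# Hypothetical calibration data: dose simulated by the MC system vs. dose
# measured with RPL glass dosimeters in a water-equivalent phantom (mGy).
simulated = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
measured = np.array([11.8, 23.1, 45.9, 91.7, 183.2])

# Fit the linear calibration function: measured ≈ a * simulated + b.
a, b = np.polyfit(simulated, measured, 1)

def calibrate(dose_sim):
    """Map a raw simulated dose to an absorbed-dose estimate."""
    return a * dose_sim + b

print(f"calibration: a={a:.3f}, b={b:.3f}")
```

    Once fitted, `calibrate` is applied to every simulated skin-dose value before reporting it as an absorbed dose.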

  15. Improved Garvey-Kelson Relations

    International Nuclear Information System (INIS)

    Collis, W. J. F.; Vinko, J. D.; Tripodi, P.

    2009-01-01

    In this paper we develop improved methods of estimating atomic weights and ground-state binding energies, similar to the local relations of the Garvey-Kelson formulae. We show (Fig. 1) that both the longitudinal (GKL) and transverse (GKT) Garvey-Kelson relations [2] can be derived mathematically from a simpler relationship between 4 nuclides (C1). This base relation can in turn be used to construct more accurate relations to predict the properties of nuclei far from stability. Historically, the Garvey-Kelson local relations have been among the most accurate estimators of atomic mass. However, their utility is limited by a progressive loss of accuracy when predicting nuclei far from stability, where it may also be impossible to find the appropriate neighbours. The scheme presented in this paper requires the known masses of only 3 neighbours and is more accurate. (author)

  16. Approximated EU GHG inventory: Early estimates for 2011

    Energy Technology Data Exchange (ETDEWEB)

    Herold, A. [Oeko-Institut (Oeko), Freiburg (Germany); Fernandez, R. [European Environment Agency (EEA), Copenhagen (Denmark)

    2012-10-15

    The objective of this report is to provide an early estimate of greenhouse gas (GHG) emissions in the EU-15 and EU-27 for the year 2011. The official submission of 2011 data to the United Nations Framework Convention on Climate Change (UNFCCC) will occur in 2013. In recent years, the EEA and its European Topic Centre on Air Pollution and Climate Change Mitigation have developed a methodology to estimate GHG emissions using a bottom up approach - based on data or estimates for individual countries, sectors and gases - to derive EU GHG estimates in the preceding year (t-1). For transparency, this report shows the country-level GHG estimates from which the EU estimates have been derived. The 2011 estimates are based on the latest activity data available at country level and assume no change in emission factors or methodologies as compared to the official 2012 submissions to UNFCCC (which relate to emissions in 2010). Some Member States estimate and publish their own early estimates of GHG emissions for the preceding year. Where such estimates exist they are clearly referenced in this report in order to ensure complete transparency regarding the different GHG estimates available. Member State early estimates were also used for quality assurance and quality control of the EEA's GHG early estimates for 2011. Finally, the EEA has also used the early estimates of 2011 GHG emissions produced by EEA member countries to assess progress towards the Kyoto targets in its annual trends and projections report (due to be published alongside the present report). In that report, the EEA's early estimates for 2011 were only used for countries that lack their own early estimates to track progress towards national and EU targets. (LN)

  17. Approximated EU GHG inventory: Early estimates for 2010

    Energy Technology Data Exchange (ETDEWEB)

    Herold, A.; Busche, J.; Hermann, H.; Joerss, W.; Scheffler, M. (OEko-Institut, Freiburg (Germany))

    2011-10-15

    The objective of this report is to provide an early estimate of greenhouse gas (GHG) emissions in the EU-15 and EU-27 for the year 2010. The official submission of 2010 data to the United Nations Framework Convention on Climate Change (UNFCCC) will occur in 2012. In recent years, the EEA and its European Topic Centre on Air Pollution and Climate Change Mitigation have developed a methodology to estimate GHG emissions using a bottom up approach - based on data or estimates for individual countries, sectors and gases - to derive EU GHG estimates in the preceding year (t-1). For transparency, this report shows the country-level GHG estimates from which the EU estimates have been derived. The 2010 estimates are based on the latest activity data available at country level and assume no change in emission factors or methodologies as compared to the official 2011 submissions to UNFCCC (which relate to emissions in 2009). Some Member States estimate and publish their own early estimates of GHG emissions for the preceding year. Where such estimates exist they are clearly referenced in this report in order to ensure complete transparency regarding the different GHG estimates available. Member State early estimates were also used for quality assurance and quality control of the EEA's GHG early estimates for 2010. Finally, the EEA has also used the early estimates of 2010 GHG emissions produced by EEA member countries to assess progress towards the Kyoto targets in its annual trends and projections report (due to be published alongside the present report). In that report, the EEA's early estimates for 2010 were only used for countries that lack their own early estimates to track progress towards national and EU targets. (Author)

  18. Control and estimation of piecewise affine systems

    CERN Document Server

    Xu, Jun

    2014-01-01

    As a powerful tool to study nonlinear systems and hybrid systems, piecewise affine (PWA) systems have been widely applied to mechanical systems. Control and Estimation of Piecewise Affine Systems presents several research findings relating to the control and estimation of PWA systems in one unified view. Chapters in this title discuss stability results of PWA systems, using piecewise quadratic Lyapunov functions and piecewise homogeneous polynomial Lyapunov functions. Explicit necessary and sufficient conditions for the controllability and reachability of a class of PWA systems are
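
    A discrete-time PWA system switches its affine dynamics according to a polyhedral partition of the state space. A minimal sketch with two invented regions (the matrices are illustrative, not taken from the book):

```python
import numpy as np

# Piecewise affine system: x[k+1] = A_i x[k] + a_i, where region i is chosen
# by a polyhedral partition of the state space (here: sign of the first state).
A1 = np.array([[0.5, 0.1], [0.0, 0.6]]); a1 = np.array([0.0, 0.0])
A2 = np.array([[0.7, -0.2], [0.1, 0.5]]); a2 = np.array([0.1, 0.0])

def step(x):
    # Region selection: a very simple polyhedral partition.
    if x[0] >= 0:
        return A1 @ x + a1
    return A2 @ x + a2

x = np.array([1.0, -1.0])
for _ in range(50):
    x = step(x)
# Both affine maps here are contractive, so the state stays bounded near the origin.
print(x)
```

    Stability analysis of such systems is exactly where the piecewise quadratic and piecewise polynomial Lyapunov functions mentioned above come in, since no single quadratic Lyapunov function needs to exist for the switched dynamics.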

  19. Towards More Comprehensive Projections of Urban Heat-Related Mortality: Estimates for New York City Under Multiple Population, Adaptation, and Climate Scenarios

    Science.gov (United States)

    Petkova, Elisaveta P.; Vink, Jan K.; Horton, Radley M.; Gasparrini, Antonio; Bader, Daniel A.; Francis, Joe D.; Kinney, Patrick L.

    2016-01-01

    High temperatures have substantial impacts on mortality and, with growing concerns about climate change, numerous studies have developed projections of future heat-related deaths around the world. Projections of temperature-related mortality are often limited by insufficient information necessary to formulate hypotheses about population sensitivity to high temperatures and future demographics. This study has derived projections of temperature-related mortality in New York City by taking into account future patterns of adaptation or demographic change, both of which can have profound influences on future health burdens. We adopt a novel approach to modeling heat adaptation by incorporating an analysis of the observed population response to heat in New York City over the course of eight decades. This approach projects heat-related mortality until the end of the 21st century based on observed trends in adaptation over a substantial portion of the 20th century. In addition, we incorporate a range of new scenarios for population change until the end of the 21st century. We then estimate future heat-related deaths in New York City by combining the changing temperature-mortality relationship and population scenarios with downscaled temperature projections from the 33 global climate models (GCMs) and two Representative Concentration Pathways (RCPs). The median number of projected annual heat-related deaths across the 33 GCMs varied greatly by RCP and adaptation and population change scenario, ranging from 167 to 3331 in the 2080s compared to 638 heat-related deaths annually between 2000 and 2006. These findings provide a more complete picture of the range of potential future heat-related mortality risks across the 21st century in New York, and highlight the importance of both demographic change and adaptation responses in modifying future risks.

  20. Gaussian particle filter based pose and motion estimation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Determination of relative three-dimensional (3D) position, orientation, and relative motion between two reference frames is an important problem in robotic guidance, manipulation, and assembly as well as in other fields such as photogrammetry. A solution to the pose and motion estimation problem that uses two-dimensional (2D) intensity images from a single camera is desirable for real-time applications. The difficulty in performing this measurement is that the process of projecting 3D object features to 2D images is a nonlinear transformation. In this paper, the 3D transformation is modeled as a nonlinear stochastic system with the state estimation providing six degrees-of-freedom motion and position values, using line features in the image plane as measurement inputs and dual quaternions to represent both rotation and translation in a unified notation. A filtering method called the Gaussian particle filter (GPF), based on the particle filtering concept, is presented for 3D pose and motion estimation of a moving target from monocular image sequences. The method has been implemented with simulated data, and simulation results are provided along with comparisons to the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) to show the relative advantages of the GPF. Simulation results showed that the GPF is a superior alternative to the EKF and UKF.
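
    The GPF loop — propagate particles, weight them by the measurement likelihood, then refit a single Gaussian to the weighted cloud — can be sketched on a standard scalar toy system. This is not the paper's six-degrees-of-freedom pose/dual-quaternion model, just the filtering recursion in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar nonlinear toy system, commonly used to exercise particle filters.
def f(x):
    return 0.5 * x + 25.0 * x / (1.0 + x ** 2)

def h(x):
    return x ** 2 / 20.0

N, T = 500, 30                 # particles, time steps
q_std, r_std = 1.0, 1.0        # process and measurement noise
mean, std = 0.0, 2.0           # Gaussian approximation of the posterior
x_true, truth, est = 0.1, [], []

for _ in range(T):
    x_true = f(x_true) + rng.normal(0.0, q_std)
    y = h(x_true) + rng.normal(0.0, r_std)
    truth.append(x_true)

    # Sample from the Gaussian prior, propagate, weight by the likelihood.
    particles = f(rng.normal(mean, std, N)) + rng.normal(0.0, q_std, N)
    w = np.exp(-0.5 * ((y - h(particles)) / r_std) ** 2) + 1e-300
    w /= w.sum()

    # GPF step: refit a single Gaussian to the weighted particle cloud
    # (instead of resampling individual particles as a bootstrap PF would).
    mean = float(np.sum(w * particles))
    std = float(np.sqrt(np.sum(w * (particles - mean) ** 2))) + 1e-9
    est.append(mean)

rmse = float(np.sqrt(np.mean((np.array(est) - np.array(truth)) ** 2)))
```

    Refitting a Gaussian each step is what distinguishes the GPF from a bootstrap particle filter: it avoids resampling degeneracy at the cost of a unimodal posterior approximation.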

  1. Development of cancer risk estimates from epidemiologic studies

    International Nuclear Information System (INIS)

    Webster, E.W.

    1983-01-01

    Radiation risk estimates may be made for an increase in mortality from, or for an increase in incidence of, particular types of disease. For both endpoints, two numerical systems of risk expression are used: the absolute risk system (usually the excess deaths or cases per million persons per year per rad), and the relative risk system (usually excess deaths or cases per year per rad expressed as a percentage of those normally expected). Risks may be calculated for specific age groups or for a general population. An alternative in both risk systems is the estimation of cumulative or lifetime risk rather than annual risk (e.g. in excess deaths per million per rad over a specified long period including the remainder of lifespan). The derivation of both absolute and relative risks is illustrated by examples. The effects on risk estimates of latent period, follow-up time, age at exposure and age standardization within dose groups are illustrated. The dependence of the projected cumulative (lifetime) risk on the adoption of a constant absolute risk or constant relative risk is noted. The use of life-table data in the adjustment of cumulative risk for normal mortality following single or annual doses is briefly discussed
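
    The two systems of risk expression reduce to simple arithmetic. The baseline mortality and excess-risk coefficient below are invented round numbers for illustration, not values from the chapter.

```python
# Illustrative numbers: a baseline cancer mortality of 2000 deaths per million
# persons per year, and an absolute excess risk of 2 deaths per million
# persons per year per rad.
baseline = 2000.0         # deaths per 10^6 persons per year
absolute_excess = 2.0     # excess deaths per 10^6 persons per year per rad
dose = 10.0               # rad

# Absolute-risk system: excess deaths per million persons per year.
excess = absolute_excess * dose
# Relative-risk system: the same excess expressed as a % of expected deaths.
relative_pct = 100.0 * excess / baseline

print(excess, relative_pct)  # → 20.0 1.0
```

    The same 10-rad exposure is thus reported either as 20 excess deaths per million per year (absolute) or as a 1% increase over the expected rate (relative); projecting either figure over the remaining lifespan gives the cumulative (lifetime) risk discussed above.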

  2. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size) and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
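
    The three-step procedure described above (critical F value, noncentrality parameter, noncentral F) can be sketched with SciPy. The degrees of freedom and noncentrality below are illustrative placeholders, not a worked example from the article.

```python
from scipy.stats import f as f_dist, ncf

# Step 0: significance level and the df of the F approximation to the chosen
# MANOVA statistic (illustrative values; in practice df1/df2 and the
# noncentrality come from the effect size, group count and sample size).
alpha = 0.05
df1, df2 = 6, 40
nc = 12.0  # noncentrality parameter

# Step 1: critical value from the central F distribution.
f_crit = f_dist.ppf(1 - alpha, df1, df2)

# Step 2: power = P(F' > f_crit) under the noncentral F distribution.
power = ncf.sf(f_crit, df1, df2, nc)

print(round(power, 3))
```

    The same recipe applies to any of the three statistics; only the mapping from effect size to (df1, df2, nc) differs.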

  3. Unbiased multi-fidelity estimate of failure probability of a free plane jet

    Science.gov (United States)

    Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin

    2017-11-01

    Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
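
    A toy sketch of the multi-fidelity importance-sampling idea on a one-dimensional stand-in problem (not the jet model): the low-fidelity surrogate is used only to locate the failure region and define the biasing distribution, while the high-fidelity indicator is reweighted by the density ratio so the estimator stays unbiased.

```python
import numpy as np

rng = np.random.default_rng(1)

def g_hi(x): return x          # "expensive" high-fidelity model (trivial here)
def g_lo(x): return x + 0.1    # cheap, slightly biased low-fidelity surrogate

thr = -3.0                     # failure event: g(x) < thr, rare under N(0, 1)

# Step 1: explore cheaply with the low-fidelity model to find failure samples.
x_lo = rng.normal(0.0, 1.0, 20000)
fail_lo = x_lo[g_lo(x_lo) < thr]
mu_b, sd_b = fail_lo.mean(), fail_lo.std() + 0.5   # biasing distribution

# Step 2: few draws from the biasing distribution, evaluated with the
# high-fidelity model and reweighted by the density ratio phi(x)/phi_b(x).
n = 2000
x = rng.normal(mu_b, sd_b, n)
log_w = -0.5 * x**2 + 0.5 * ((x - mu_b) / sd_b) ** 2 + np.log(sd_b)
p_hat = np.mean((g_hi(x) < thr) * np.exp(log_w))

print(p_hat)  # should be close to P[N(0,1) < -3] ≈ 1.35e-3
```

    With several low-fidelity models, each yields its own biasing distribution and thus its own unbiased estimator; the optimal-fusion step in the abstract combines them with weights chosen to minimize the variance of the pooled estimate.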

  4. 3-D Vector Flow Estimation With Row-Column-Addressed Arrays.

    Science.gov (United States)

    Holbek, Simon; Christiansen, Thomas Lehrmann; Stuart, Matthias Bo; Beers, Christopher; Thomsen, Erik Vilain; Jensen, Jorgen Arendt

    2016-11-01

    Simulation and experimental results from 3-D vector flow estimations for a 62 + 62 2-D row-column (RC) array with integrated apodization are presented. A method for implementing a 3-D transverse oscillation (TO) velocity estimator on a 3-MHz RC array is developed and validated. First, a parametric simulation study is conducted, where flow direction, ensemble length, number of pulse cycles, steering angles, transmit/receive apodization, and TO apodization profiles and spacing are varied, to find the optimal parameter configuration. The performance of the estimator is evaluated with respect to relative mean bias ~B and mean standard deviation ~σ . Second, the optimal parameter configuration is implemented on the prototype RC probe connected to the experimental ultrasound scanner SARUS. Results from measurements conducted in a flow-rig system containing a constant laminar flow and a straight-vessel phantom with a pulsating flow are presented. Both an M-mode and a steered transmit sequence are applied. The 3-D vector flow is estimated in the flow rig for four representative flow directions. In the setup with 90° beam-to-flow angle, the relative mean bias across the entire velocity profile is (-4.7, -0.9, 0.4)% with a relative standard deviation of (8.7, 5.1, 0.8)% for ( v x , v y , v z ). The estimated peak velocity is 48.5 ± 3 cm/s giving a -3% bias. The out-of-plane velocity component perpendicular to the cross section is used to estimate volumetric flow rates in the flow rig at a 90° beam-to-flow angle. The estimated mean flow rate in this setup is 91.2 ± 3.1 L/h corresponding to a bias of -11.1%. In a pulsating flow setup, flow rate measured during five cycles is 2.3 ± 0.1 mL/stroke giving a negative 9.7% bias. It is concluded that accurate 3-D vector flow estimation can be obtained using a 2-D RC-addressed array.

  5. [Methods and Applications to estimate the conversion factor of Resource-Based Relative Value Scale for nurse-midwife's delivery service in the national health insurance].

    Science.gov (United States)

    Kim, Jinhyun; Jung, Yoomi

    2009-08-01

    This paper analyzed alternative methods of calculating the conversion factor for nurse-midwife's delivery services in the national health insurance and estimated the optimal reimbursement level for the services. A cost accounting model and Sustainable Growth Rate (SGR) model were developed to estimate the conversion factor of Resource-Based Relative Value Scale (RBRVS) for nurse-midwife's services, depending on the scope of revenue considered in financial analysis. The data and sources from the government and the financial statements from nurse-midwife clinics were used in analysis. The cost accounting model and SGR model showed a 17.6-37.9% increase and 19.0-23.6% increase, respectively, in nurse-midwife fee for delivery services in the national health insurance. The SGR model measured an overall trend of medical expenditures rather than an individual financial status of nurse-midwife clinics, and the cost analysis properly estimated the level of reimbursement for nurse-midwife's services. Normal vaginal delivery in nurse-midwife clinics is considered cost-effective in terms of insurance financing. Upon a declining share of health expenditures on midwife clinics, designing a reimbursement strategy for midwife's services could be an opportunity as well as a challenge when it comes to efficient resource allocation.

  6. Estimation of delays and other parameters in nonlinear functional differential equations

    Science.gov (United States)

    Banks, H. T.; Lamm, P. K. D.

    1983-01-01

    A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.

  7. Factors Related to Significant Improvement of Estimated Glomerular Filtration Rates in Chronic Hepatitis B Patients Receiving Telbivudine Therapy

    Directory of Open Access Journals (Sweden)

    Te-Fu Lin

    2017-01-01

    Background and Aim. The improvement of estimated glomerular filtration rates (eGFRs in chronic hepatitis B (CHB patients receiving telbivudine therapy is well known. The aim of this study was to clarify the kinetics of eGFRs and to identify the significant factors related to the improvement of eGFRs in telbivudine-treated CHB patients in a real-world setting. Methods. Serial eGFRs were calculated every 3 months using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI equation. The patients were classified as CKD-1, -2, or -3 according to a baseline eGFR of ≥90, 60–89, or <60 mL/min/1.73 m2, respectively. A significant improvement of eGFR was defined as a more than 10% increase from the baseline. Results. A total of 129 patients were enrolled, of whom 36% had significantly improved eGFRs. According to a multivariate analysis, diabetes mellitus (DM (p=0.028 and CKD-3 (p=0.043 were both significantly related to such improvement. The rates of significant improvement of eGFR were about 73% and 77% in patients with DM and CKD-3, respectively. Conclusions. Telbivudine is an alternative drug of choice for the treatment of hepatitis B patients for whom renal safety is a concern, especially patients with DM and CKD-3.
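
    The CKD-EPI (2009) creatinine equation used for the serial eGFR calculations can be sketched as follows. The equation itself is published; the inputs below are illustrative, and the black-race coefficient (1.159) of the original equation is omitted in this sketch.

```python
def ckd_epi_egfr(scr, age, female):
    """CKD-EPI 2009 creatinine equation (race coefficient omitted).

    scr: serum creatinine in mg/dL; age in years.
    Returns eGFR in mL/min/1.73 m^2.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr / kappa, 1.0) ** alpha
            * max(scr / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

# Illustrative patient: creatinine 1.0 mg/dL, age 50.
print(round(ckd_epi_egfr(1.0, 50, female=False), 1))
```

    In the study's terms, a baseline result ≥90 would classify the patient as CKD-1, 60–89 as CKD-2, and <60 as CKD-3.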

  8. Estimated doses related to 222Rn concentration in bunker for radiotherapy and storage of radioisotopes

    International Nuclear Information System (INIS)

    Mestre, Freddy; Carrizales-Silva, Lila; Sajo-Bohus, Laszlo; Diaz, Cruz

    2013-01-01

    A survey was conducted in underground radiotherapy services of hospitals and clinics in Venezuela and Paraguay in order to estimate radon concentrations and their possible consequences for the occupational exposure of workers. Passive nuclear track dosimeters (NTD, type CR-39®) were used. The 222Rn concentration is determined from the track density using a calibration coefficient of 1 tr/cm² equivalent to 0.434 Bq·m⁻³ per month of exposure. Assuming the most likely environmental conditions and a dose conversion factor of 9.0 × 10⁻⁶ mSv·h⁻¹ per Bq·m⁻³, the average values were determined and the possible health risks estimated, which are on average 3.0 mSv·a⁻¹ and a cancer micro-risk of 150
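
    The dose arithmetic implied above is simply concentration × occupancy time × dose conversion factor. The occupancy hours and radon concentration below are assumed values, chosen only to reproduce the order of magnitude reported.

```python
# Annual effective dose from radon exposure:
#   dose = concentration * occupancy hours * dose conversion factor.
dcf = 9.0e-6           # mSv per hour per (Bq/m^3), as quoted above
hours = 2000.0         # assumed occupational hours per year
concentration = 170.0  # Bq/m^3, illustrative workplace concentration

annual_dose = concentration * hours * dcf
print(round(annual_dose, 2))  # → 3.06 (mSv per year, the order reported above)
```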

  9. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks

    DEFF Research Database (Denmark)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl

    2018-01-01

    This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) of in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditi...... in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species....

  10. An overall estimation of losses caused by diseases in the Brazilian fish farms.

    Science.gov (United States)

    Tavares-Dias, Marcos; Martins, Maurício Laterça

    2017-12-01

    Parasitic and infectious diseases are common in finfish, but their economic impact on production is difficult to estimate accurately in a country of large dimensions like Brazil. The aim of this study was to estimate the costs of economic losses due to fish mortality from diseases in Brazil. A model for estimating the costs related to parasitic and bacterial diseases in farmed fish and an estimate of these economic impacts are presented. We used official production and mortality data for finfish for a rough estimation of economic losses. The losses presented here relate to direct and indirect economic costs for freshwater farmed fish, estimated at US$ 84 million per year. Finally, it was possible to establish for the first time an estimate of overall losses in finfish production in Brazil using available production data. This estimate should help researchers and policy makers to approximate the economic costs of diseases for the fish farming industry, as well as to develop public policies on disease control measures and priority research lines.

  11. Internal Medicine residents use heuristics to estimate disease probability

    Directory of Open Access Journals (Sweden)

    Sen Phang

    2015-12-01

    Conclusions: Our findings suggest that despite previous exposure to the use of Bayesian reasoning, residents use heuristics, such as the representative heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning or perhaps residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing.

  12. Estimating relations between temperature and relative humidity as independent variables and selected water quality parameters in Lake Manzala, Egypt

    Directory of Open Access Journals (Sweden)

    Gehan A.H. Sallam

    2018-03-01

    In Egypt, Lake Manzala is the largest and most productive of the northern coastal lakes. In this study, continuous measurement data from the Real Time Water Quality Monitoring stations in Lake Manzala were statistically analyzed to measure the regional and seasonal variations of selected water quality parameters in relation to changes in air temperature and relative humidity. Simple formulas are developed using the DataFit software to predict selected water quality parameters of the lake, including pH, Dissolved Oxygen (DO), Electrical Conductivity (EC), Total Dissolved Solids (TDS), Turbidity, and Chlorophyll, as functions of air temperature, relative humidity, and the quantities and qualities of the drainage water discharged into the lake. Empirical positive relations were found between air temperature and relative humidity on the one hand and pH, EC, and TDS on the other, and a negative relation with DO. No significant effect was found for the other two parameters, turbidity and chlorophyll.
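
    An empirical model of the kind DataFit produces can be sketched as a least-squares fit of a water-quality parameter on air temperature and relative humidity. The data points below are synthetic, not Lake Manzala measurements.

```python
import numpy as np

# Synthetic observations: air temperature (deg C), relative humidity (%),
# and electrical conductivity (mS/cm) as the predicted parameter.
T = np.array([15.0, 20.0, 25.0, 30.0, 35.0])
RH = np.array([70.0, 68.0, 60.0, 52.0, 50.0])
EC = np.array([2.1, 2.4, 2.8, 3.1, 3.5])

# Least-squares fit of EC = c0 + c1*T + c2*RH.
X = np.column_stack([np.ones_like(T), T, RH])
coef, *_ = np.linalg.lstsq(X, EC, rcond=None)

def predict(t, rh):
    return float(coef @ np.array([1.0, t, rh]))

print(coef)
```

    DataFit additionally searches nonlinear model forms; the linear model here only illustrates the "parameter as a function of temperature and humidity" structure the abstract describes.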

  13. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that uses a statistical face shape model, with pose parameters represented by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses; shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  14. Spatial Bias in Field-Estimated Unsaturated Hydraulic Properties

    Energy Technology Data Exchange (ETDEWEB)

    HOLT,ROBERT M.; WILSON,JOHN L.; GLASS JR.,ROBERT J.

    2000-12-21

    Hydraulic property measurements often rely on non-linear inversion models whose errors vary between samples. In non-linear physical measurement systems, bias can be directly quantified and removed using calibration standards. In hydrologic systems, field calibration is often infeasible and bias must be quantified indirectly. We use a Monte Carlo error analysis to indirectly quantify spatial bias in the saturated hydraulic conductivity, K_s, and the exponential relative permeability parameter, α, estimated using a tension infiltrometer. Two types of observation error are considered, along with one inversion-model error resulting from poor contact between the instrument and the medium. Estimates of spatial statistics, including the mean, variance, and variogram-model parameters, show significant bias across a parameter space representative of poorly- to well-sorted silty sand to very coarse sand. When only observation errors are present, spatial statistics for both parameters are best estimated in materials with high hydraulic conductivity, like very coarse sand. When simple contact errors are included, the nature of the bias changes dramatically. Spatial statistics are poorly estimated, even in highly conductive materials. Conditions that permit accurate estimation of the statistics for one of the parameters prevent accurate estimation for the other; accurate regions for the two parameters do not overlap in parameter space. False cross-correlation between estimated parameters is created because estimates of K_s also depend on estimates of α and both parameters are estimated from the same data.

  15. Failing to Estimate the Costs of Offshoring

    DEFF Research Database (Denmark)

    Møller Larsen, Marcus

    2016-01-01

    This article investigates cost estimation errors in the context of offshoring. It is argued that an imprecise estimation of the costs related to implementing a firm activity in a foreign location has a negative impact on the process performance of that activity. Performance is deterred...... as operations are likely to be disrupted by managerial distraction and resource misallocation. It is also argued that this relationship is mitigated by the extent to which firms use modularity to coordinate the activity but worsened by the extent to which ongoing communication is used. The results, based...

  16. Estimation of diffuse from measured global solar radiation

    International Nuclear Information System (INIS)

    Moriarty, W.W.

    1991-01-01

    A data set of quality controlled radiation observations from stations scattered throughout Australia was formed and further screened to remove residual doubtful observations. It was then divided into groups by solar elevation, and used to find average relationships for each elevation group between relative global radiation (clearness index - the measured global radiation expressed as a proportion of the radiation on a horizontal surface at the top of the atmosphere) and relative diffuse radiation. Clear-cut relationships were found, which were then fitted by polynomial expressions giving the relative diffuse radiation as a function of relative global radiation and solar elevation. When these expressions were used to estimate the diffuse radiation from the global, the results had a slightly smaller spread of errors than those from an earlier technique given by Spencer. It was found that the errors were related to cloud amount, and further relationships were developed giving the errors as functions of global radiation, solar elevation, and the fraction of sky obscured by high cloud and by opaque (low and middle level) cloud. When these relationships were used to adjust the first estimates of diffuse radiation, there was a considerable reduction in the number of large errors
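
    Moriarty's elevation-dependent polynomial expressions are not reproduced in the abstract, so as a stand-in the sketch below uses the well-known Erbs correlation, which has the same structure: relative diffuse radiation as a piecewise polynomial in relative global radiation (the clearness index).

```python
def diffuse_fraction(kt):
    """Diffuse fraction of global radiation vs. clearness index kt.

    Erbs correlation, used here as an illustrative stand-in for the
    elevation-dependent polynomials fitted in the paper.
    """
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt ** 2
                - 16.638 * kt ** 3 + 12.336 * kt ** 4)
    return 0.165

# Moderately cloudy conditions: roughly two-thirds of the global radiation
# arrives as diffuse radiation.
print(round(diffuse_fraction(0.5), 3))
```

    The paper's refinement is to make the coefficients of such a polynomial depend on solar elevation, and then to correct the first estimate using cloud amount.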

  17. How brain response and eating habits modulate food energy estimation.

    Science.gov (United States)

    Mengotti, P; Aiello, M; Terenzi, D; Miniussi, C; Rumiati, R I

    2018-05-01

    The estimates we make of the energy content of different foods tend to be inaccurate, depending on several factors. The elements influencing such evaluation relate to differences in the portion size of the foods shown and their energy density (kcal/g), but also to individual differences among the estimators, such as body-mass index (BMI) or eating habits. In this context, the contribution of brain regions involved in food-related decisions to the energy estimation process is still poorly understood. Here, normal-weight and overweight/obese women with restrained or non-restrained eating habits received anodal transcranial direct current stimulation (AtDCS) to modulate the activity of the left dorsolateral prefrontal cortex (dlPFC) while they performed a food energy estimation task. Participants were asked to judge the energy content of food images, unaware that all foods, for the quantity presented, shared the same energy content. Results showed that food energy density was a reliable predictor of their energy content estimates, suggesting that participants relied on their knowledge of food energy density as a proxy for estimating food energy content. The neuromodulation of the dlPFC interacted with individual differences in restrained eating, increasing the precision of the energy content estimates in participants with higher scores on the restrained eating scale. Our study highlights the importance of eating habits, such as restrained eating, in modulating the activity of the left dlPFC during food appraisal. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. VERTICAL ACTIVITY ESTIMATION USING 2D RADAR

    African Journals Online (AJOL)

    hennie

    estimates on aircraft vertical behaviour from a single 2D radar track. ... Fortunately, the problem of detecting relative vertical motion using a single 2D ..... awareness tools in scenarios where aerial activity sensing is typically limited to 2D.

  19. Single event upset threshold estimation based on local laser irradiation

    International Nuclear Information System (INIS)

    Chumakov, A.I.; Egorov, A.N.; Mavritsky, O.B.; Yanenko, A.V.

    1999-01-01

    An approach for estimation of ion-induced SEU threshold based on local laser irradiation is presented. Comparative experiment and software simulation research were performed at various pulse duration and spot size. Correlation of single event threshold LET to upset threshold laser energy under local irradiation was found. The computer analysis of local laser irradiation of IC structures was developed for SEU threshold LET estimation. The correlation of local laser threshold energy with SEU threshold LET was shown. Two estimation techniques were suggested. The first one is based on the determination of local laser threshold dose taking into account the relation of sensitive area to local irradiated area. The second technique uses the photocurrent peak value instead of this relation. The agreement between the predicted and experimental results demonstrates the applicability of this approach. (authors)

  20. Third molar mineralization in relation to chronologic age estimation of the Han in central southern China.

    Science.gov (United States)

    Liu, Ying; Geng, Kun; Chu, Yanhao; Xu, Mindi; Zha, Lagabaiyila

    2018-03-03

    The purpose of this study is to provide forensic reference data for estimating chronologic age by evaluating third molar mineralization in the Han population of central southern China. The mineralization degree of third molars was assessed by Demirjian's classification, with modification, for 2519 digital orthopantomograms (1190 males, 1329 females; age 8-23 years). The mean ages of initial mineralization and crown completion of third molars were around 9.66 and 13.88 years in males and 9.52 and 14.09 years in females. The minimum ages of apical closure were around 16 years in both sexes. Tooth 28 at stages C and G, and teeth 38 and 48 at stage F, occurred earlier in males than in females. There was no significant difference between maxillary and mandibular teeth in males or females, except for stage C in males. Two formulas were devised to estimate age based on mineralization stage and sex. In Hunan Province, a person will probably be over age 14 when a third molar reaches stage G. The results of the study could provide a reference for age estimation in forensic cases and clinical dentistry.

  1. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L.; DuFrain, R.J.

    1986-01-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
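The maximum likelihood step for an acute exposure can be sketched as follows. This is a hedged, self-contained example, not Frome and DuFrain's code: dicentric counts per cell are simulated from the standard linear-quadratic yield Y(D) = a + bD + cD², and the parameters are recovered by minimizing the Poisson negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate dicentric counts for 100 cells at each of six acute doses (Gy),
# with assumed (illustrative) linear-quadratic coefficients.
rng = np.random.default_rng(1)
dose = np.repeat(np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0]), 100)
a_true, b_true, c_true = 0.001, 0.03, 0.06
counts = rng.poisson(a_true + b_true * dose + c_true * dose**2)

def neg_log_lik(p):
    """Poisson negative log-likelihood of the linear-quadratic yield."""
    lam = p[0] + p[1] * dose + p[2] * dose**2
    if np.any(lam <= 0):
        return np.inf                      # keep the mean yield positive
    return np.sum(lam - counts * np.log(lam))

fit = minimize(neg_log_lik, x0=[0.01, 0.01, 0.01], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
a_hat, b_hat, c_hat = fit.x
```

In practice this fit would be done with Poisson regression software, which also yields the hypothesis tests and regression diagnostics mentioned in the abstract; the split-dose and continuous-exposure models replace the D² term with a time-dependent factor.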

  2. Uncertainty estimation and risk prediction in air quality

    International Nuclear Information System (INIS)

    Garaud, Damien

    2011-01-01

    This work is about uncertainty estimation and risk prediction in air quality. Firstly, we build a multi-model ensemble of air quality simulations which can take into account all uncertainty sources related to air quality modeling. Ensembles of photochemical simulations at continental and regional scales are automatically generated. Then, these ensembles are calibrated with a combinatorial optimization method. It selects a sub-ensemble which is representative of uncertainty or shows good resolution and reliability for probabilistic forecasting. This work shows that it is possible to estimate and forecast uncertainty fields related to ozone and nitrogen dioxide concentrations or to improve the reliability of threshold exceedance predictions. The approach is compared with Monte Carlo simulations, calibrated or not. The Monte Carlo approach appears to be less representative of the uncertainties than the multi-model approach. Finally, we quantify the observational error, the representativeness error and the modeling errors. The work is applied to the impact of thermal power plants, in order to quantify the uncertainty on the impact estimates. (author)

  3. TOTAL INFRARED LUMINOSITY ESTIMATION OF RESOLVED AND UNRESOLVED GALAXIES

    International Nuclear Information System (INIS)

    Boquien, M.; Calzetti, D.; Bendo, G.; Dale, D.; Engelbracht, C.; Kennicutt, R.; Lee, J. C.; Van Zee, L.; Moustakas, J.

    2010-01-01

    The total infrared (TIR) luminosity from galaxies can be used to examine both star formation and dust physics. We provide here new relations to estimate the TIR luminosity from various Spitzer bands, in particular from the 8 μm and 24 μm bands. To do so, we use data for 45'' subregions within a subsample of nearby face-on spiral galaxies from the Spitzer Infrared Nearby Galaxies Survey (SINGS) that have known oxygen abundances as well as integrated galaxy data from the SINGS, the Local Volume Legacy survey (LVL), and Engelbracht et al. samples. Taking into account the oxygen abundances of the subregions, the star formation rate intensity, and the relative emission of the polycyclic aromatic hydrocarbons at 8 μm, the warm dust at 24 μm, and the cold dust at 70 μm and 160 μm, we derive new relations to estimate the TIR luminosity from just one or two of the Spitzer bands. We also show that the metallicity and the star formation intensity must be taken into account when estimating the TIR luminosity from two wave bands, especially when data longward of 24 μm are not available.
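A two-band calibration of the generic form used here can be illustrated with a least-squares fit. The coefficients and data below are entirely synthetic, not the relations of Boquien et al., which additionally fold in metallicity and star formation intensity.

```python
import numpy as np

# Illustrative fit of L_TIR ~ a * L_8 + b * L_24 (synthetic luminosities
# in arbitrary units; assumed "true" coefficients a=2, b=4).
rng = np.random.default_rng(2)
L8 = rng.uniform(1.0, 10.0, 200)    # 8 micron band luminosity
L24 = rng.uniform(1.0, 10.0, 200)   # 24 micron band luminosity
Ltir = 2.0 * L8 + 4.0 * L24 + rng.normal(0, 0.1, 200)

# Ordinary least squares on the two bands.
A = np.column_stack([L8, L24])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, Ltir, rcond=None)
```

The abstract's point is that such two-band coefficients are not universal: they drift with oxygen abundance and star formation rate intensity, so those quantities enter the published relations as extra terms.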

  4. Minimax estimation of qubit states with Bures risk

    Science.gov (United States)

    Acharya, Anirudh; Guţă, Mădălin

    2018-04-01

    The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/√n for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies based on separate adaptive measurements, and collective measurements, that achieve 1/n scalings for the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimator eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques which allows us to derive upper and lower bounds to its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n⁻¹ log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates, in particular the ‘standard’ rate n⁻¹.
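The two-stage local scheme can be sketched as a toy simulation. This is an assumption-laden illustration of the idea only, not the paper's optimal protocol: a fraction of the n copies is spent estimating the Bloch direction from Pauli expectation values, and the remaining copies estimate the Bloch-vector length along the estimated axis.

```python
import numpy as np

rng = np.random.default_rng(6)
r_true = np.array([0.6, 0.0, 0.7])   # assumed true Bloch vector (|r| < 1)
n = 30000                            # total copies available
n_dir = n // 10                      # fraction spent on direction estimation

def measure(component, shots):
    """Simulate +/-1 outcomes of a Pauli measurement whose mean is `component`."""
    p_up = (1.0 + component) / 2.0
    ups = rng.binomial(shots, p_up)
    return (2.0 * ups - shots) / shots

# Stage 1: crude direction estimate from equal splits along x, y, z.
est = np.array([measure(c, n_dir // 3) for c in r_true])
direction = est / np.linalg.norm(est)

# Stage 2: measure along the estimated axis to estimate the length.
length = measure(float(direction @ r_true), n - n_dir)
r_hat = length * direction
err = np.linalg.norm(r_hat - r_true)
```

The paper's analysis concerns how the Bures risk of such schemes scales with n, particularly near the boundary of the Bloch sphere; this sketch only shows the resource-splitting structure.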

  5. Child mortality estimation: consistency of under-five mortality rate estimates using full birth histories and summary birth histories.

    Directory of Open Access Journals (Sweden)

    Romesh Silva

    Full Text Available Given the lack of complete vital registration data in most developing countries, for many countries it is not possible to accurately estimate under-five mortality rates from vital registration systems. Heavy reliance is often placed on direct and indirect methods for analyzing data collected from birth histories to estimate under-five mortality rates. Yet few systematic comparisons of these methods have been undertaken. This paper investigates whether analysts should use both direct and indirect estimates from full birth histories, and under what circumstances indirect estimates derived from summary birth histories should be used. Using Demographic and Health Surveys data from West Africa, East Africa, Latin America, and South/Southeast Asia, I quantify the differences between direct and indirect estimates of under-five mortality rates, analyze data quality issues, note the relative effects of these issues, and test whether these issues explain the observed differences. I find that indirect estimates are generally consistent with direct estimates, after adjustment for fertility change and birth transference, but don't add substantial additional insight beyond direct estimates. However, choice of direct or indirect method was found to be important in terms of both the adjustment for data errors and the assumptions made about fertility. Although adjusted indirect estimates are generally consistent with adjusted direct estimates, some notable inconsistencies were observed for countries that had experienced either a political or economic crisis or stalled health transition in their recent past. This result suggests that when a population has experienced a smooth mortality decline or only short periods of excess mortality, both adjusted methods perform equally well. 
However, the inconsistencies identified suggest that the indirect method is particularly prone to bias resulting from violations of its strong assumptions about recent mortality

  6. Turbidity-controlled suspended sediment sampling for runoff-event load estimation

    Science.gov (United States)

    Jack Lewis

    1996-01-01

    Abstract - For estimating suspended sediment concentration (SSC) in rivers, turbidity is generally a much better predictor than water discharge. Although it is now possible to collect continuous turbidity data even at remote sites, sediment sampling and load estimation are still conventionally based on discharge. With frequent calibration the relation of turbidity to...
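The turbidity-based approach can be sketched in a few lines. This is a generic illustration with synthetic numbers and an assumed linear SSC-turbidity rating, not Lewis's sampling design: lab samples calibrate the rating curve, which is then applied to the continuous turbidity record to integrate the event load.

```python
import numpy as np

rng = np.random.default_rng(3)

# Calibration samples: paired turbidity (NTU) and lab SSC (mg/L) values.
turb_cal = rng.uniform(5, 500, 50)
ssc_cal = 1.8 * turb_cal + rng.normal(0, 10, 50)   # assumed linear rating

# Fit the rating curve SSC = slope * turbidity + intercept.
slope, intercept = np.polyfit(turb_cal, ssc_cal, 1)

# Apply it to a continuous 10-minute record (one day) with discharge Q.
turb_cont = rng.uniform(5, 500, 24 * 6)            # NTU
q_cont = rng.uniform(1.0, 20.0, turb_cont.size)    # discharge, m^3/s
ssc_cont = slope * turb_cont + intercept           # mg/L == g/m^3

# Event load: sum of C * Q * dt over the record (dt = 600 s), in kg.
load_kg = float(np.sum(ssc_cont * q_cont * 600.0) / 1000.0)
```

The abstract's point is that triggering physical samples from turbidity thresholds, rather than from discharge, concentrates calibration data where they matter most for the load estimate.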

  7. Value drivers: an approach for estimating health and disease management program savings.

    Science.gov (United States)

    Phillips, V L; Becker, Edmund R; Howard, David H

    2013-12-01

    Health and disease management (HDM) programs have faced challenges in documenting savings related to their implementation. The objective of this study was to describe OptumHealth's (Optum) methods for estimating anticipated savings from HDM programs using Value Drivers. Optum's general methodology was reviewed, along with details of 5 high-use Value Drivers. The results showed that the Value Driver approach offers an innovative method for estimating savings associated with HDM programs. The authors demonstrated how real-time savings can be estimated for 5 Value Drivers commonly used in HDM programs: (1) use of beta-blockers in treatment of heart disease, (2) discharge planning for high-risk patients, (3) decision support related to chronic low back pain, (4) obesity management, and (5) securing transportation for primary care. The validity of savings estimates is dependent on the type of evidence used to gauge the intervention effect, generating changes in utilization and, ultimately, costs. The savings estimates derived from the Value Driver method are generally reasonable to conservative and provide a valuable framework for estimating financial impacts from evidence-based interventions.

  8. Estimation of in-vivo pulses in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1994-01-01

    An algorithm for the estimation of one-dimensional in-vivo ultrasound pulses is derived. The routine estimates a set of ARMA parameters describing the pulse and uses data from a number of adjacent rf lines. Using multiple lines results in a decrease in variance on the estimated parameters and significantly reduces the risk of terminating the algorithm at a local minimum. Examples from use on synthetic data confirm the reduction in variance and increased chance of successful minimization termination. Simulations are also reported indicating the relation between the one-dimensional pulse and the three-dimensional, attenuated ultrasound field for a concave transducer. Pulses are estimated from in-vivo liver data showing good resemblance to a pulse measured as the response from a planar reflector and then properly attenuated. The main application for the algorithm is to function...

  9. Accuracy of Travel Time Estimation using Bluetooth Technology

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Skoven Pedersen, Kristian; Tørholm Christensen, Lars

    2012-01-01

    Short-term travel time information plays a critical role in Advanced Traffic Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS). In this context, the need for accurate and reliable travel time information sources is becoming increasingly important. Bluetooth Technology (BT......) has been used as a relatively new cost-effective source of travel time estimation. However, due to low sampling rate of BT compared to other sensor technologies, existence of outliers may significantly affect the accuracy and reliability of the travel time estimates obtained using BT. In this study......, the concept of outliers and corresponding impacts on travel time accuracy are discussed. Four different estimators named Min-BT, Max-BT, Med-BT and Avg-BT with different outlier detection logic are presented in this paper. These methods are used to estimate travel times using a BT derived dataset. In order...
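The four estimators can be sketched on a single Bluetooth-derived sample. The estimator names are from the paper, but the outlier screen below is an assumed median-absolute-deviation rule, not necessarily the authors' detection logic.

```python
import numpy as np

def travel_time_estimates(tt, k=3.0):
    """Return (Min-BT, Max-BT, Med-BT, Avg-BT) after a MAD outlier screen."""
    tt = np.asarray(tt, dtype=float)
    med = np.median(tt)
    mad = np.median(np.abs(tt - med)) or 1.0   # guard against zero MAD
    kept = tt[np.abs(tt - med) <= k * 1.4826 * mad]
    return kept.min(), kept.max(), np.median(kept), kept.mean()

# Matched BT detections over one link, in seconds; 600 s is a device that
# stopped en route, i.e. the kind of outlier the abstract describes.
samples = [118, 121, 119, 600, 122, 120, 117]
mn, mx, md, av = travel_time_estimates(samples)
```

Because BT sampling rates are low, a single such outlier can dominate Avg-BT if it is not removed, which is why the choice of estimator and filtering rule matters for accuracy.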

  10. Traveltime approximations and parameter estimation for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-30

    Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have derived Dix-type formulas for an orthorhombic medium to convert the effective parameters to their interval values.
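The Shanks transform mentioned above is a standard series-acceleration device: S(Aₙ) = (Aₙ₊₁Aₙ₋₁ - Aₙ²)/(Aₙ₊₁ + Aₙ₋₁ - 2Aₙ). The sketch below demonstrates it on partial sums of ln 2 = 1 - 1/2 + 1/3 - ..., a generic example rather than the traveltime expansion itself.

```python
import numpy as np

def shanks(a):
    """One pass of the Shanks transform over a sequence of partial sums."""
    a = np.asarray(a, dtype=float)
    num = a[2:] * a[:-2] - a[1:-1] ** 2
    den = a[2:] + a[:-2] - 2.0 * a[1:-1]
    return num / den

n = np.arange(1, 11)
partial = np.cumsum((-1.0) ** (n + 1) / n)   # slowly converging partial sums
accel = shanks(partial)

err_raw = abs(partial[-1] - np.log(2.0))     # error of the raw 10-term sum
err_acc = abs(accel[-1] - np.log(2.0))       # error after acceleration
```

With only ten terms the accelerated value is orders of magnitude closer to ln 2 than the raw partial sum, which is the same effect the authors exploit to sharpen their truncated perturbation expansion.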

  11. Permeability Estimation of Rock Reservoir Based on PCA and Elman Neural Networks

    Science.gov (United States)

    Shi, Ying; Jian, Shaoyong

    2018-03-01

    An intelligent method based on fuzzy neural networks with a PCA algorithm is proposed to estimate the permeability of rock reservoirs. First, dimensionality reduction is applied to the parameters by the principal component analysis method. Then, the mapping relationship between rock-slice characteristic parameters and permeability is found through fuzzy neural networks. The estimation validity and reliability of this method were tested with practical data from the Yan’an region in the Ordos Basin. The results showed that the average relative error of permeability estimation for this method is 6.25%, and that this method has better convergence speed and higher accuracy than others. Therefore, by using cheap rock-slice information, the permeability of a rock reservoir can be estimated efficiently and accurately, with high reliability, practicability and application prospects.
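The PCA preprocessing step can be sketched with numpy alone. The data are synthetic stand-ins for rock-slice parameters, and the network stage is omitted; this only shows how the dimensionality reduction that feeds the model is computed.

```python
import numpy as np

# Synthetic stand-in for rock-slice characteristic parameters: 8 features,
# one of which nearly duplicates another (a redundancy PCA will absorb).
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 8))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=200)

# PCA via SVD of the mean-centered data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                  # variance fraction per PC

# Keep the smallest number of components explaining 95% of the variance.
k = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
scores = Xc @ Vt[:k].T                           # reduced model inputs
```

The `scores` matrix, rather than the raw correlated features, would then be passed to the neural network, which both shrinks the network and removes redundant inputs.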

  12. ASYMPTOTIC COMPARISONS OF U-STATISTICS, V-STATISTICS AND LIMITS OF BAYES ESTIMATES BY DEFICIENCIES

    OpenAIRE

    Toshifumi, Nomachi; Hajime, Yamato; Graduate School of Science and Engineering, Kagoshima University:Miyakonojo College of Technology; Faculty of Science, Kagoshima University

    2001-01-01

    As estimators of estimable parameters, we consider three statistics which are U-statistic, V-statistic and limit of Bayes estimate. This limit of Bayes estimate, called LB-statistic in this paper, is obtained from Bayes estimate of estimable parameter based on Dirichlet process, by letting its parameter tend to zero. For the estimable parameter with non-degenerate kernel, the asymptotic relative efficiencies of LB-statistic with respect to U-statistic and V-statistic and that of V-statistic w...

  13. Estimates of the Economic Effects of Sea Level Rise

    International Nuclear Information System (INIS)

    Darwin, R.F.; Tol, R.S.J.

    2001-01-01

    Regional estimates of direct cost (DC) are commonly used to measure the economic damages of sea level rise. Such estimates suffer from three limitations: (1) values of threatened endowments are not well known, (2) loss of endowments does not affect consumer prices, and (3) international trade is disregarded. Results in this paper indicate that these limitations can significantly affect economic assessments of sea level rise. Current uncertainty regarding endowment values (as reflected in two alternative data sets), for example, leads to a 17 percent difference in coastal protection, a 36 percent difference in the amount of land protected, and a 36 percent difference in DC globally. Also, global losses in equivalent variation (EV), a welfare measure that accounts for price changes, are 13 percent higher than DC estimates. Regional EV losses may be up to 10 percent lower than regional DC, however, because international trade tends to redistribute losses from regions with relatively high damages to regions with relatively low damages. 43 refs

  14. Estimating and Testing the Sources of Evoked Potentials in the Brain.

    Science.gov (United States)

    Huizenga, Hilde M.; Molenaar, Peter C. M.

    1994-01-01

    The source of an event-related brain potential (ERP) is estimated from multivariate measures of ERP on the head under several mathematical and physical constraints on the parameters of the source model. Statistical aspects of estimation are discussed, and new tests are proposed. (SLD)

  15. Landsat Imagery-Based Above Ground Biomass Estimation and Change Investigation Related to Human Activities

    Directory of Open Access Journals (Sweden)

    Chaofan Wu

    2016-02-01

    Full Text Available Forest biomass is a significant indicator for substance accumulation and forest succession, and a spatiotemporal biomass map would provide valuable information for forest management and scientific planning. In this study, Landsat imagery and field data, combined with a random forest regression approach, were used to estimate spatiotemporal Above Ground Biomass (AGB) in Fuyang County, Zhejiang Province of East China. The AGB retrieval showed an increasing trend for the past decade, from 74.24 ton/ha in 2004 to 99.63 ton/ha in 2013. Topography and forest management were investigated to find their relationships with the spatial distribution change of biomass. In general, the simulated AGB increases with higher elevation, especially in the range of 80–200 m, where AGB shows the highest rate of increase. Moreover, the forest policy of ecological forest has a positive effect on the AGB increase, particularly within the national-level ecological forest. The results of this study demonstrate that human activities have a great impact on biomass distribution and change tendency. Furthermore, Landsat image-based biomass estimates would provide illuminating information for forest policy-making and sustainable development.

  16. The Source Signature Estimator - System Improvements and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sabel, Per; Brink, Mundy; Eidsvig, Seija; Jensen, Lars

    1998-12-31

    This presentation relates briefly to the first part of the joint project on post-survey analysis of shot-by-shot based source signature estimation. The improvements of a Source Signature Estimator system are analysed. The notional source method can give suboptimal results when not inputting the real array geometry, i.e. actual separations between the sub-arrays of an air gun array, to the notional source algorithm. This constraint has been addressed herein and was implemented for the first time in the field in summer 1997. The second part of this study will show the potential advantages for interpretation when the signature estimates are then to be applied in the data processing. 5 refs., 1 fig.

  17. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    Science.gov (United States)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
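The core smoothing operation is easy to demonstrate: average the raw periodogram over windows of w contiguous frequencies, which reduces the variance of each spectral estimate by roughly a factor of w. This sketch uses unit-variance white noise, whose spectrum should be flat; the window width and normalization are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=4096)                  # unit-variance white noise

# Raw periodogram: for this normalization, E[raw] = 1 for white noise.
X = np.fft.rfft(x)
f = np.fft.rfftfreq(x.size, d=1.0)
raw = (np.abs(X) ** 2) / x.size

# Smoothed spectrum: average over windows of w contiguous frequencies.
w = 32
nblocks = raw.size // w
smoothed = raw[: nblocks * w].reshape(nblocks, w).mean(axis=1)
fsm = f[: nblocks * w].reshape(nblocks, w).mean(axis=1)   # window centers
```

The raw periodogram bins scatter wildly about the flat level of 1, while the smoothed estimate hugs it; the price, as the abstract notes, is bias when the true spectrum curves within a window, which is why low frequencies need care.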

  18. Comparison of relative efficiency of genomic SSR and EST-SSR markers in estimating genetic diversity in sugarcane.

    Science.gov (United States)

    Parthiban, S; Govindaraj, P; Senthilkumar, S

    2018-03-01

    Twenty-five primer pairs developed from genomic simple sequence repeats (SSRs) were compared with 25 expressed sequence tag (EST) SSRs to evaluate the efficiency of these two sets of primers using 59 sugarcane genetic stocks. The mean polymorphism information content (PIC) of the genomic SSRs was higher (0.72) than the PIC value recorded by the EST-SSR markers (0.62). The relatively low level of polymorphism in EST-SSR markers may be due to their location in more conserved and expressed sequences, compared to genomic sequences, which are spread throughout the genome. Dendrograms based on the genomic SSR and EST-SSR marker data showed differences in the grouping of genotypes. The 59 sugarcane accessions were grouped into 6 and 4 clusters using genomic SSRs and EST-SSRs, respectively. The highly efficient genomic SSRs could subcluster the genotypes of some of the clusters formed by EST-SSR markers. The difference in dendrograms observed was probably due to the variation in the number of markers produced by genomic SSRs and EST-SSRs and the different portions of the genome amplified by the two marker types. The combined dendrogram (genomic SSR and EST-SSR) showed the genetic relationships among the sugarcane genotypes more clearly by forming four clusters. The mean genetic similarity (GS) value obtained using EST-SSRs among the 59 sugarcane accessions was 0.70, whereas the mean GS obtained using genomic SSRs was 0.63. Although a relatively lower level of polymorphism was displayed by the EST-SSR markers, the genetic diversity shown by the EST-SSRs was found to be promising, as they are functional markers. The high PIC and low genetic similarity values of genomic SSRs may be more useful in DNA fingerprinting, selection of true hybrids, identification of variety-specific markers and genetic diversity analysis. Identification of diverse parents based on cluster analysis can be effectively done with EST-SSRs, as the genetic similarity estimates are based on functional attributes related to
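The PIC statistic used to compare the two marker sets is computed per locus from allele frequencies. Both the simplified form, PIC = 1 - Σpᵢ², and the fuller Botstein et al. form, which subtracts an extra pairwise term, are shown; the allele frequencies are illustrative, not from the sugarcane panel.

```python
def pic_simple(freqs):
    """Simplified PIC (expected heterozygosity): 1 - sum of squared allele freqs."""
    return 1.0 - sum(p * p for p in freqs)

def pic_botstein(freqs):
    """Botstein et al. PIC: 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    s = sum(p * p for p in freqs)
    pairs = sum(2.0 * freqs[i] ** 2 * freqs[j] ** 2
                for i in range(len(freqs))
                for j in range(i + 1, len(freqs)))
    return 1.0 - s - pairs

# Example locus with four equally frequent alleles.
alleles = [0.25, 0.25, 0.25, 0.25]
```

For this locus `pic_simple` gives 0.75 and `pic_botstein` a slightly smaller value; averaging such per-locus values over all loci yields the 0.72 and 0.62 means the abstract compares.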

  19. Wegner estimate for sparse and other generalized alloy type potentials

    Indian Academy of Sciences (India)

    (1) or close relatives. Moreover, all known proofs of localization in multidimensional ... The Wegner estimate is also related to the integrated density of states (IDS). ...... operator with surface potential, Rev. Math. Phys. 12(4) (2000) 561–573.

  20. Bayesian phylogenetic estimation of fossil ages.

    Science.gov (United States)

    Drummond, Alexei J; Stadler, Tanja

    2016-07-19

    Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth-death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high with credible intervals seldom excluding the true age and median relative error in the two datasets of 5.7% and 13.2%, respectively. The median relative standard error (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. 
We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the 'morphological clock', and uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses. This article is part of the themed issue 'Dating species divergences using