WorldWideScience

Sample records for quantifying herbage mass

  1. The relative efficiency of three methods of estimating herbage mass ...

    African Journals Online (AJOL)

    The methods involved were randomly placed circular quadrats; randomly placed narrow strips; and disc meter sampling. Disc meter and quadrat sampling appear to be more efficient than strip sampling. In a subsequent small plot grazing trial the estimates of herbage mass, using the disc meter, had a consistent precision ...

  2. Incorporating a prediction of postgrazing herbage mass into a whole-farm model for pasture-based dairy systems.

    Science.gov (United States)

    Gregorini, P; Galli, J; Romera, A J; Levy, G; Macdonald, K A; Fernandez, H H; Beukes, P C

    2014-07-01

    The DairyNZ whole-farm model (WFM; DairyNZ, Hamilton, New Zealand) consists of a framework that links component models for animal, pastures, crops, and soils. The model was developed to assist with analysis and design of pasture-based farm systems. New (this work) and revised (e.g., cow, pasture, crops) component models can be added to the WFM, keeping the model flexible and up to date. Nevertheless, the WFM does not account for plant-animal relationships determining herbage-depletion dynamics. The user has to preset the maximum allowable level of herbage depletion [i.e., postgrazing herbage mass (residuals)] throughout the year. Because residuals have a direct effect on herbage regrowth, the WFM in its current form does not dynamically simulate the effect of grazing pressure on herbage depletion and the consequent effect on herbage regrowth. The management of grazing pressure is a key component of pasture-based dairy systems. Thus, the main objective of the present work was to develop a new version of the WFM able to predict residuals, and thereby simulate related effects of grazing pressure dynamically at the farm scale. This objective was accomplished by incorporating a new component model into the WFM. This model represents plant-animal relationships, for example sward structure and herbage intake rate, and the resulting level of herbage depletion. The sensitivity of the new version of the WFM was evaluated, and then the new WFM was tested against an experimental data set previously used to evaluate the WFM, to illustrate the adequacy and improvement of the model development. Key output variables of the new version pertinent to this work (milk production, herbage dry matter intake, intake rate, harvesting efficiency, and residuals) responded acceptably to a range of input variables. The relative prediction errors for monthly and mean annual residual predictions were 20 and 5%, respectively. 
Monthly predictions of residuals had a line bias (1.5%), with a proportion
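
    The relative prediction errors quoted for the residual predictions are RMSE values expressed as a percentage of the observed mean; a minimal sketch of that calculation (function name and data values are illustrative, not from the paper):

```python
import math

def relative_prediction_error(observed, predicted):
    """Relative prediction error: RMSE expressed as a percentage of the observed mean."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

# hypothetical monthly postgrazing herbage mass (kg DM/ha): observed vs. modelled
observed = [1500, 1600, 1550, 1700]
predicted = [1450, 1680, 1500, 1760]
rpe = relative_prediction_error(observed, predicted)
```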

  3. Effect of pre-grazing herbage mass on dairy cow performance, grass dry matter production and output from perennial ryegrass (Lolium perenne L.) pastures.

    Science.gov (United States)

    Wims, C M; Delaby, L; Boland, T M; O'Donovan, M

    2014-01-01

    A grazing study was undertaken to examine the effect of maintaining three levels of pre-grazing herbage mass (HM) on dairy cow performance, grass dry matter (DM) production and output from perennial ryegrass (Lolium perenne L.) pastures. Cows were randomly assigned to one of three pre-grazing HM treatments: 1150 - Low HM (L), 1400 - Medium HM (M) or 2000 kg DM/ha - High HM (H). Herbage accumulation under grazing was lowest (P < …) … pastures required more grass silage supplementation during the grazing season (+73 kg DM/cow) to overcome pasture deficits due to lower pasture growth rates (P < …) … pasture intake, although cows grazing the L pastures had to graze a greater daily area (P < …) … pasture reduces pasture DM production and at a system level may increase the requirement for imported feed.

  4. Competition between herbage plants

    NARCIS (Netherlands)

    Wit, de C.T.; Bergh, van den J.P.

    1965-01-01

    Starting from work with annuals, a model of competition between herbage plants is discussed. It is shown that their mutual interference can only be described adequately if they are grown both in mixture and in monoculture.

  5. Effect of the type of silage on milk yield, intake and rumen metabolism of dairy cows grazing swards with low herbage mass.

    Science.gov (United States)

    Ruiz-Albarrán, Miguel; Balocchi, Oscar A; Noro, Mirela; Wittwer, Fernando; Pulido, Rubén G

    2016-07-01

    The aim of this study was to evaluate the effect of herbage allowance (HA) and type of silage supplemented (TS) on milk yield, dry matter intake (DMI) and metabolism of dairy cows in early lactation. Thirty-six Holstein-Friesian dairy cows were allocated to four treatments derived from an arrangement of two HA (LHA = 17 or HHA = 25 kg of DM/cow/day) and two TS (grass (GS) or maize (MS)). Herbage allowance had no effect on DMI or milk yield. Rumen pH and NH3-N concentration were not affected by HA. The efficiency of microbial protein synthesis in the rumen (microbial protein (MP)) was affected by HA, with 21.5 and 23.9 g microbial nitrogen per kg ruminal digestible organic matter for LHA and HHA, respectively (P < …) … content by 0.10 % (P < 0.023) and herbage DMI by 2.2 kg/cow/day, and showed lower values for milk urea compared to GS (P < 0.001). These results suggest that TS had a greater effect on milk yield, total feed intake and energy intake than the increase in herbage allowance; however, the increase in HA had a greater effect on MP than TS. © 2015 Japanese Society of Animal Science.

  6. Relations between soil factors and herbage yields of natural ...

    African Journals Online (AJOL)

    Keywords: Cation exchange capacity; Correlation matrix; Nitrogen supplies; Root mass; Root measurements; Soil acidity; Soil variables; Soil water content; Soil water measurements; Yield measurements; nitrogen supply; ph; herbage yield; grassland; soils; productivity; soil depth; dry matter yield; grasses; water content; n; ...

  7. Improvement of herbage by heavy ion beams

    International Nuclear Information System (INIS)

    Xie Hongmei; Hao Jifang; Wei Zengquan; Xie Zhongkui; Li Fengqin; Wang Yajun

    2004-01-01

    Herbage seeds of a legume and a grass were irradiated in penetration by 80 MeV/u 20Ne10+ ions. The results of field tests and observations of the root-tip cells showed that seedling growth was markedly weakened with increasing dose. Frequencies of chromosomal aberration and micronuclei increased significantly with increasing dose. According to the field growth tests, the radiation sensitivity of the grass herbage to heavy ion beams was much higher than that of the leguminous herbage, and the suitable heavy ion irradiation doses for the grass and leguminous herbage are 20-30 Gy and 150 Gy, respectively.

  8. Herbage intake by grazing dairy cows

    NARCIS (Netherlands)

    Meijs, J.A.C.

    1981-01-01

    An extensive review of the literature is given of
    - nine possible methods for estimating herbage intake by grazing ruminants, with special attention to the sward-cutting and indirect animal methods
    - the factors determining the herbage intake by grazing ruminants.

    The

  9. Effect of herbage composition on the digestibility and voluntary feed ...

    African Journals Online (AJOL)

    Excluding the mineral fractions, only three of the chemical components of the herbage emerged as important, namely, the DM content of the herbage as fed, accounting for 32% of the variance in DMD, the NPN content of the herbage accounting for only 12.2% of the variance and the ash content of the herbage accounting ...

  10. Herbage intake of dairy cows in mixed sequential grazing with breeding ewes as followers.

    Science.gov (United States)

    Jiménez-Rosales, Juan Daniel; Améndola-Massiotti, Ricardo Daniel; Burgueño-Ferreira, Juan Andrés; Ramírez-Valverde, Rodolfo; Topete-Pelayo, Pedro; Huerta-Bravo, Maximino

    2018-03-01

    This study aimed to evaluate the hypothesis that mixed sequential grazing of dairy cows and breeding ewes is beneficial. During the seasons of spring-summer 2013 and autumn-winter 2013-2014, 12 (spring-summer) and 16 (autumn-winter) Holstein Friesian cows and 24 gestating (spring-summer) and lactating (autumn-winter) Pelibuey ewes grazed on six (spring-summer) and nine (autumn-winter) paddocks of alfalfa and orchard grass mixed pastures. The treatments "single species cow grazing" (CowG) and "mixed sequential grazing with ewes as followers of cows" (MixG) were evaluated, under a completely randomized design with two replicates per paddock. Herbage mass on offer (HO) and residual herbage mass (RH) were estimated by cutting samples. The estimate of herbage intake (HI) of cows was based on the use of internal and external markers; the apparent HI of ewes was calculated as the difference between HO (RH of cows) and RH. Even though HO was higher in CowG, the HI of cows was higher in MixG during spring-summer and similar in both treatments during autumn-winter, implying that in MixG the effects on the cows' HI of a higher alfalfa proportion and herbage accumulation rate, arising from lower residual herbage mass in the previous cycle, counteracted that of a higher HO in CowG. The HI of ewes was sufficient to enable satisfactory performance as breeding ewes. Thus, the benefits of mixed sequential grazing arose from higher herbage accumulation, positive changes in botanical composition, and the achievement of sheep production without negative effects on the herbage intake of cows.
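
    The apparent herbage intake of the ewes described above is a simple mass-balance difference; a small sketch following the definitions stated in the abstract (numeric values are hypothetical):

```python
def apparent_herbage_intake(on_offer, residual):
    """Apparent herbage intake as herbage mass on offer minus residual herbage mass.

    For ewes grazing as followers, 'on_offer' is the cows' residual herbage
    mass. Units: kg DM/ha (the values used below are hypothetical).
    """
    if residual > on_offer:
        raise ValueError("residual herbage cannot exceed herbage on offer")
    return on_offer - residual

# ewes enter after the cows: the cows' residual (2100) is the ewes' herbage on offer
ewes_hi = apparent_herbage_intake(2100, 1650)  # kg DM/ha removed by the ewes
```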

  11. Estrutura do pasto disponível e do resíduo pós-pastejo em pastagens de capim-cameroon e capim-marandu Pasture structure and post-grazing herbage mass in pastures of elephantgrass cv. Cameroon and palisadegrass cv. Marandu

    Directory of Open Access Journals (Sweden)

    Cláudia de Paula Rezende

    2008-10-01

    Two experiments evaluated the structural characteristics and crude protein concentration of Pennisetum purpureum cv. Cameroon and Brachiaria brizantha cv. Marandu in pastures under four rotational stocking rates. During the rainy period, the stocking rates imposed on both pastures were 3, 4, 5 and 6 steers/ha, and in the dry period 2, 3, 4 and 5 steers/ha. In both experiments, elephantgrass cv. Cameroon produced greater total herbage mass and green herbage mass, which decreased with the increase in defoliation imposed by the stocking rates, mainly during the dry period. For elephantgrass, the increase in stocking rate had its greatest negative effect on the green leaf lamina dry matter fraction. In the rainy period, the stocking rates that provided the greatest total herbage mass, green herbage and leaf lamina fraction on offer were 5 and 4 steers/ha for elephantgrass cv. Cameroon and palisadegrass cv. Marandu, respectively; in the dry period the corresponding values were 4 and 3 steers/ha. Stocking rate did not affect the crude protein concentration of the grasses, which was highest in the green leaf lamina component of the plant.

  12. Quantifying mass balance processes on the Southern Patagonia Icefield

    DEFF Research Database (Denmark)

    Schaefer, M.; Machguth, Horst; Falvey, M.

    2015-01-01

    We present surface mass balance simulations of the Southern Patagonia Icefield (SPI) driven by downscaled reanalysis data. The simulations were evaluated and interpreted using geodetic mass balances, measured point balances and a complete velocity field of the icefield for spring 2004. The high m...

  13. Changes in nutritive value and herbage yield during extended growth intervals in grass-legume mixtures

    DEFF Research Database (Denmark)

    Elgersma, Anjo; Søegaard, Karen

    2018-01-01

    … Perennial ryegrass was sown with each of four legumes: red clover, white clover, lucerne and birdsfoot trefoil, and white clover was sown with hybrid ryegrass, meadow fescue and timothy. Effects of species composition on herbage yield, contents of N, neutral detergent fibre (NDF), acid detergent fibre (ADF) … in quality parameters differed among species and functional groups, i.e., grasses and legumes. Results are discussed in the context of quantifying the impact of delaying the harvest date of grass–legume mixtures and relationships between productivity and components of feed quality.

  14. Responses of herbage and browse production to six range management strategies.

    Science.gov (United States)

    H. Reed Sanderson; Thomas M. Quigley; Arthur R. Tiedemann

    1990-01-01

    From 1977 through 1986, herbage and browse production was sampled on 619 sites representing 10 ecosystems and 51 resource units on the Oregon Range Evaluation study area. We determined the effects of six range management strategies and cultural treatments on combined herbage and browse production. Mean herbage and browse production on the forest ecosystems was 145...

  15. Comparison of techniques for estimating herbage intake by grazing dairy cows

    NARCIS (Netherlands)

    Smit, H.J.; Taweel, H.Z.; Tas, B.M.; Tamminga, S.; Elgersma, A.

    2005-01-01

    For estimating herbage intake during grazing, the traditional sward cutting technique was compared in grazing experiments in 2002 and 2003 with the recently developed n-alkanes technique and with the net energy method. The first method estimates herbage intake by the difference between the herbage

  16. Monitoring and assessment of ingestive chewing sounds for prediction of herbage intake rate in grazing cattle.

    Science.gov (United States)

    Galli, J R; Cangiano, C A; Pece, M A; Larripa, M J; Milone, D H; Utsumi, S A; Laca, E A

    2018-05-01

    Accurate measurement of herbage intake rate is critical to advance knowledge of the ecology of grazing ruminants. This experiment tested the integration of behavioral and acoustic measurements of chewing and biting to estimate herbage dry matter intake (DMI) in dairy cows offered micro-swards of contrasting plant structure. Micro-swards constructed with plastic pots were offered to three lactating Holstein cows (608±24.9 kg of BW) in individual grazing sessions (n=48). Treatments were a factorial combination of two forage species (alfalfa and fescue) and two plant heights (tall=25±3.8 cm and short=12±1.9 cm) and were offered on a gradient of increasing herbage mass (10 to 30 pots) and number of bites (~10 to 40 bites). During each grazing session, sounds of biting and chewing were recorded with a wireless microphone placed on the cows' foreheads and a digital video camera to allow synchronized audio and video recordings. Dry matter intake rate was higher in tall alfalfa than in the other three treatments (32±1.6 v. 19±1.2 g/min). A high proportion of jaw movements in every grazing session (23 to 36%) were compound jaw movements (chew-bites) that appeared to be a key component of chewing and biting efficiency and of the ability of cows to regulate intake rate. Dry matter intake was accurately predicted based on easily observable behavioral and acoustic variables. Chewing sound energy measured as energy flux density (EFD) was linearly related to DMI, with 74% of EFD variation explained by DMI. Total chewing EFD, number of chew-bites and plant height (tall v. short) were the most important predictors of DMI. The best model explained 91% of the variation in DMI with a coefficient of variation of 17%. Ingestive sounds integrate valuable information to remotely monitor feeding behavior and predict DMI in grazing cows.
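
    The reported linear relation between chewing sound energy flux density (EFD) and DMI can be illustrated with an ordinary least-squares fit; a sketch with invented data (the study's actual measurements are not reproduced here):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a + b*x, returning (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# hypothetical chewing-sound EFD (arbitrary energy units) vs. measured DMI (g)
efd = [10, 15, 20, 25, 30]
dmi = [210, 300, 390, 500, 610]
a, b, r2 = fit_line(efd, dmi)  # r2 plays the role of the variance explained
```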

  17. Herbage Dynamics and Soils of two Different Sites of Calotropis ...

    African Journals Online (AJOL)

    To determine herbage dynamics, herbs in each quadrat of the experimental sites were harvested, sorted according to species, counted and identified in the herbarium. Simpson's index, D = ∑Pi², was used to obtain relative frequencies and abundance of species. Soil samples were derived from the quadrats and ...
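
    The Simpson's index quoted above, D = ∑Pi², can be computed directly from species counts; a short sketch (species names and counts are hypothetical):

```python
from collections import Counter

def simpson_index(counts):
    """Simpson's dominance index D = sum(p_i^2) over species proportions p_i."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# hypothetical quadrat harvest: individuals counted per species
quadrat = Counter({"Cyperus": 40, "Eragrostis": 35, "Sida": 25})
d = simpson_index(quadrat.values())
```

Lower D indicates a more even community; D approaches 1 when a single species dominates the quadrat.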

  18. A model to quantify the resilience of mass railway transportation systems

    International Nuclear Information System (INIS)

    Adjetey-Bahun, Kpotissan; Birregah, Babiga; Châtelet, Eric; Planchet, Jean-Luc

    2016-01-01

    Traditional risk management approaches focus on the likelihood of perturbation events and their consequences. However, recent events show that not all perturbation events can be foreseen. The concept of resilience has been introduced to measure not only a system's ability to absorb perturbations, but also its ability to recover from them rapidly. In this work, we propose a simulation-based model for quantifying resilience in mass railway transportation systems, using passenger delay and passenger load as the system's performance indicators. We integrate all subsystems that make up mass railway transportation systems (transportation, power, telecommunication and organisation subsystems) and their interdependencies. The model is applied to the Paris mass railway transportation system. The model's results show that because trains continue running within the system, albeit at reduced speed, the system remains resilient. During normal operation of the system as well as during perturbation, the model shows similarities with reality. The perturbation management plan considered in this work consists of setting up temporary train services on part of the impacted line while the failed system component is repaired. We also assess the extent to which some of the system's resilience capacities (i.e. absorption, adaptation and recovery) can increase the resilience of the system. - Highlights: • The need for resilience quantification models in sociotechnical systems. • We propose a simulation-based model. • The model is applied to the Paris mass railway transportation system.
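
    A common discrete-time way to quantify resilience of the kind described (a generic formulation, not necessarily the authors' exact model) is the ratio of delivered to nominal performance over the perturbation window:

```python
def resilience(performance, nominal):
    """Resilience as the ratio of delivered to nominal performance over a
    perturbation window (discrete-time approximation of the performance integral)."""
    return sum(performance) / (nominal * len(performance))

# hypothetical performance indicator (e.g. passengers served per time step)
# during a perturbation: drop, absorption, then recovery to nominal
profile = [100, 60, 50, 70, 90, 100]
r = resilience(profile, nominal=100)  # 1.0 would mean no performance loss
```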

  19. Diferentes massas de forragem sobre as variáveis morfogênicas e estruturais de azevém anual Different herbage masses on morphogenetic and structural traits of Italian ryegrass

    Directory of Open Access Journals (Sweden)

    Anna Carolina Cerato Confortin

    2013-03-01

    Morphogenetic and structural characteristics of Italian ryegrass (Lolium multiflorum Lam.) grazed by ewe lambs were evaluated under different herbage masses (HM): "High", "Medium" and "Low", corresponding to 1,800-2,000, 1,400-1,600 and 1,000-1,200 kg ha-1 of dry matter (DM), respectively. The experimental design was completely randomized, with three treatments and two area replicates. The grazing method was continuous stocking with a variable number of animals. Data were subjected to correlation analysis and polynomial regression. Pseudostem height, the length of intact and defoliated laminae and the number of senescing ryegrass leaves increased linearly with herbage mass. The number of green leaves fitted a quadratic regression model; the number of expanding leaves and the tiller population density fitted no regression model. In Italian ryegrass pasture, management with herbage masses between 1,100 and 1,800 kg ha-1 of DM does not alter the morphogenetic characteristics of this grass, but does cause differences in the structural characteristics of the sward. When Italian ryegrass is managed at 1,460 kg ha-1 of DM, its tillers maintain a greater number of green leaves; at 1,800 kg ha-1 of DM there are more, and longer, senescing leaf laminae.

  20. Comparison of methods for estimating herbage intake in grazing dairy cows

    DEFF Research Database (Denmark)

    Hellwing, Anne Louise Frydendahl; Lund, Peter; Weisbjerg, Martin Riis

    2015-01-01

    Estimation of herbage intake is a challenge under both practical and experimental conditions. The aim of this study was to estimate herbage intake with different methods for cows grazing 7 h daily on either spring or autumn pastures. In order to generate variation between cows, the 20 cows per … season, and the herbage intake was estimated twice during each season. Cows were on pasture from 8:00 until 15:00, and were subsequently housed inside and fed a mixed ration (MR) based on maize silage ad libitum. Herbage intake was estimated with nine different methods: (1) animal performance, (2) intake …

  1. Relationships of 35 lower limb muscles to height and body mass quantified using MRI.

    Science.gov (United States)

    Handsfield, Geoffrey G; Meyer, Craig H; Hart, Joseph M; Abel, Mark F; Blemker, Silvia S

    2014-02-07

    Skeletal muscle is the most abundant tissue in the body and serves various physiological functions including the generation of movement and support. Whole body motor function requires adequate quantity, geometry, and distribution of muscle. This raises the question: how do muscles scale with subject size in order to achieve similar function across humans? While much of the current knowledge of human muscle architecture is based on cadaver dissection, modern medical imaging avoids limitations of old age, poor health, and limited subject pool, allowing for muscle architecture data to be obtained in vivo from healthy subjects ranging in size. The purpose of this study was to use novel fast-acquisition MRI to quantify volumes and lengths of 35 major lower limb muscles in 24 young, healthy subjects and to determine if muscle size correlates with bone geometry and subject parameters of mass and height. It was found that total lower limb muscle volume scales with mass (R²=0.85) and with the height-mass product (R²=0.92). Furthermore, individual muscle volumes scale with total muscle volume (median R²=0.66), with the height-mass product (median R²=0.61), and with mass (median R²=0.52). Muscle volume scales with bone volume (R²=0.75), and muscle length relative to bone length is conserved (median s.d.=2.1% of limb length). These relationships allow for an arbitrary subject's individual muscle volumes to be estimated from mass or mass and height, while muscle lengths may be estimated from limb length. The dataset presented here can further be used as a normative standard to compare populations with musculoskeletal pathologies. © 2013 Published by Elsevier Ltd.
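
    Scaling relationships such as muscle volume vs. the height-mass product are often fitted as proportionalities; a sketch of a least-squares fit through the origin (the data are invented, not the study's):

```python
def fit_through_origin(x, y):
    """Least-squares slope for y = b*x (no intercept), plus R^2 against the mean."""
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    my = sum(y) / len(y)
    ss_res = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b, 1 - ss_res / ss_tot

# hypothetical subjects: height*mass product (m*kg) vs. total muscle volume (cm^3)
hm_product = [110, 130, 150, 170]
muscle_vol = [6100, 7000, 8300, 9200]
slope, r2 = fit_through_origin(hm_product, muscle_vol)
```

Given such a fit, a new subject's total muscle volume can be estimated as slope times their height-mass product.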

  2. The effect of winter burning and mowing on seasonal herbage yield ...

    African Journals Online (AJOL)

    The response of Eragrostis curvula cv. Ermelo to three levels of fertilizer (no nitrogen or phosphorus; 53 kg nitrogen + 16 kg phosphorus per ha; and 106 kg nitrogen + 32 kg phosphorus per ha), three times of removing unutilized herbage (July, August and September) and three methods of herbage removal (annual mowing ...

  3. The effects of different nitrogen doses on herbage and seed yields of ...

    African Journals Online (AJOL)

    The effects of different nitrogen doses on herbage and seed yields of annual ... 250, 270 and 290 kg ha-1) of and some agricultural characteristics of annual ryegrass cv. ... doses are observed to be important for all properties of herbage yield and ... It was obtained for the seed production that the highest number of tiller (626 ...

  4. Quantifying Protein-Carbohydrate Interactions Using Liquid Sample Desorption Electrospray Ionization Mass Spectrometry

    Science.gov (United States)

    Yao, Yuyu; Shams-Ud-Doha, Km; Daneshfar, Rambod; Kitova, Elena N.; Klassen, John S.

    2015-01-01

    The application of liquid sample desorption electrospray ionization mass spectrometry (liquid sample DESI-MS) for quantifying protein-carbohydrate interactions in vitro is described. Association constants for the interactions between lysozyme and β-D-GlcNAc-(1 → 4)-β-D-GlcNAc-(1 → 4)-D-GlcNAc and β-D-GlcNAc-(1 → 4)-β-D-GlcNAc-(1 → 4)-β-D-GlcNAc-(1 → 4)-D-GlcNAc, and between a single chain antibody and α-D-Galp-(1 → 2)-[α-D-Abep-(1 → 3)]-α-D-Manp-OCH3 and β-D-Glcp-(1 → 2)-[α-D-Abep-(1 → 3)]-α-D-Manp-OCH3, measured using liquid sample DESI-MS were found to be in good agreement with values measured by isothermal titration calorimetry and the direct ESI-MS assay. The reference protein method, which was originally developed to correct ESI mass spectra for the occurrence of nonspecific ligand-protein binding, was shown to reliably correct liquid sample DESI mass spectra for nonspecific binding. The suitability of liquid sample DESI-MS for quantitative binding measurements carried out using solutions containing high concentrations of the nonvolatile biological buffer phosphate buffered saline (PBS) was also explored. Binding of lysozyme to β-D-GlcNAc-(1 → 4)-β-D-GlcNAc-(1 → 4)-D-GlcNAc in aqueous solutions containing up to 1× PBS was successfully monitored using liquid sample DESI-MS; with ESI-MS the binding measurements were limited to concentrations less than 0.02× PBS.
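
    In the direct MS assay mentioned above, the association constant for a 1:1 protein-ligand complex is commonly derived from the bound-to-free protein ion intensity ratio R; a sketch under that standard assumption (the formula is the generic 1:1 binding expression, and all numeric values are hypothetical):

```python
def assoc_constant(r, p0, l0):
    """Association constant Ka (M^-1) for a 1:1 complex from the MS
    bound/free intensity ratio R, via Ka = R / (L0 - R/(1+R) * P0).

    p0, l0: initial protein and ligand concentrations (mol/L).
    """
    free_ligand = l0 - (r / (1 + r)) * p0
    if free_ligand <= 0:
        raise ValueError("ligand depleted; increase L0 or check R")
    return r / free_ligand

# hypothetical: R = 1.0, P0 = 10 uM, L0 = 20 uM
ka = assoc_constant(1.0, 10e-6, 20e-6)  # Ka in M^-1
```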

  5. Quantifying in-stream retention of nitrate at catchment scales using a practical mass balance approach.

    Science.gov (United States)

    Schwientek, Marc; Selle, Benny

    2016-02-01

    As field data on in-stream nitrate retention is scarce at catchment scales, this study aimed at quantifying net retention of nitrate within the entire river network of a fourth-order stream. For this purpose, a practical mass balance approach combined with a Lagrangian sampling scheme was applied and seasonally repeated to estimate daily in-stream net retention of nitrate for a 17.4 km long, agriculturally influenced, segment of the Steinlach River in southwestern Germany. This river segment represents approximately 70% of the length of the main stem and about 32% of the streambed area of the entire river network. Sampling days in spring and summer were biogeochemically more active than in autumn and winter. Results obtained for the main stem of Steinlach River were subsequently extrapolated to the stream network in the catchment. It was demonstrated that, for baseflow conditions in spring and summer, in-stream nitrate retention could sum up to a relevant term of the catchment's nitrogen balance if the entire stream network was considered.
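
    The practical mass balance approach amounts to differencing nitrate loads along the sampled segment; a minimal sketch (discharges, concentrations and the lateral input term are invented for illustration):

```python
def load_kg_per_day(discharge_m3_s, conc_mg_l):
    """Nitrate load (kg/day) from discharge (m^3/s) and concentration (mg/L)."""
    return discharge_m3_s * conc_mg_l * 86400 / 1000

def net_retention(load_up, load_down, lateral_inputs=0.0):
    """Daily in-stream net retention (kg/day) as a mass balance:
    upstream load plus lateral inputs minus downstream load."""
    return load_up + lateral_inputs - load_down

# hypothetical Lagrangian sampling at the two ends of a river segment
upstream = load_kg_per_day(0.5, 4.0)     # kg/day entering the segment
downstream = load_kg_per_day(0.55, 3.4)  # kg/day leaving (includes lateral water)
retained = net_retention(upstream, downstream, lateral_inputs=10.0)
```

A positive result indicates net retention within the segment; a negative result would indicate net release.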

  6. The effects of different nitrogen doses on herbage and seed yields of ...

    African Journals Online (AJOL)

    2011-10-05

  7. Herbage availability as a stress factor on grazed Coastcross II ...

    African Journals Online (AJOL)

    ) relationships for Coastcross II Bermuda grass grazed for four consecutive summer periods by young growing beef cattle. Stocking rate affected the daily LWG/animal through its influence on herbage availability. Rotational grazing showed a ...

  8. Nutritive value of some herbages for dromedary camel in Iran.

    Science.gov (United States)

    Towhidi, A

    2007-01-01

    To prepare standard tables of the chemical composition of feedstuffs and to determine the digestibility and palatability of different plant species for the dromedary camel, this research considered the most consumed herbages of Iranian desert ranges. The plant species included Atriplex lentiformis, Alhagi persarum, Seidlitzia rosmarinus, Saueda fruticosa, Haloxylon ammodendron, Tamarix kotschyi, Hammada salicornica, Salsola yazdiana, Salsola tomentosa, Tamarix aphylla and Artemisia sieberi. Thirty samples of the browsed parts were collected from the rangelands of Yazd province in autumn. The chemical composition of the samples, including Dry Matter (DM), Crude Protein (CP), Crude Fiber (CF), Neutral Detergent Fiber (NDF), Acid Detergent Fiber (ADF), Ether Extract (EE), Total Ash (TA), macro elements (Ca, P, Mg, K), micro elements (Fe, Mn, Cu, Zn) and gross energy (GE), was analyzed. In vitro digestibility was determined with camel rumen liquor by the Tilley and Terry method. Palatability of the plants was measured with three mature camels in cafeteria trials; the camels fed voluntarily on the 11 plant species for one hour daily over six days. Data were analyzed by the GLM procedure in SAS software. The highest CP (18.3%) and the lowest NDF (40.4%) and ADF (35.4%) were found in Tamarix aphylla; the lowest CP (5.5%) and the highest NDF (72.8%) and ADF (59.6%) in Artemisia sieberi. The highest organic matter digestibility in dry matter was found for Haloxylon ammodendron. The results also indicated that Atriplex lentiformis, Alhagi persarum, Seidlitzia rosmarinus, Saueda fruticosa, Haloxylon ammodendron, Salsola tomentosa, Hammada salicornica, Tamarix kotschyi, Salsola yazdiana, Tamarix aphylla and Artemisia sieberi were the more palatable feeds, in that order. No correlation was observed between %DOMD and chemical composition, and there was no consistent relationship between the palatability of the herbages and %DOMD or chemical composition.

  9. Caracterização da estrutura da vegetação numa pastagem natural do Bioma Pampa submetida a diferentes estratégias de manejo da oferta de forragem Structural characterization of a natural pasture vegetation from Pampa Biome under different herbage allowance management strategies

    Directory of Open Access Journals (Sweden)

    Fabio Pereira Neves

    2009-09-01

    The objectives of this study were to describe and investigate the spatio-temporal dynamics of feeding sites across strata of herbage mass and sward height, as well as the percentage of effectively grazed area, the herbage accumulation rate and the dry matter yield of a natural pasture of the Pampa Biome. A randomized block design with two replicates was used, with three fixed herbage allowances (8, 12 and 16%) and three herbage allowances varied over the year (8-12%, 12-8% and 16-12%, the first value corresponding to spring). At the fixed 8% herbage allowance, the percentage of effectively grazed area was greatest, but mean sward height and herbage mass were lower than at the other allowances. The highest allowances, 16 and 16-12%, showed greater mean sward height (9.0 cm) and a herbage mass of 2,000 kg/ha of dry matter, but an effectively grazed area smaller than that observed at the 8 and 8-12% allowances. Even under such distinct management strategies, approximately 60 to 70% of feeding sites occurred in strata considered limiting to the herbage intake potential of cattle, except in the 16% and 16-12% treatments, where feeding sites were less frequent in the stratum below 6.0 cm of height. Feeding sites, in general, were concentrated in strata with height …

  10. How does the suppression of energy supplementation affect herbage intake, performance and parasitism in lactating saddle mares?

    Science.gov (United States)

    Collas, C; Fleurance, G; Cabaret, J; Martin-Rosset, W; Wimel, L; Cortet, J; Dumont, B

    2014-08-01

Agroecology opens up new perspectives for the design of sustainable farming systems by using the stimulation of natural processes to reduce the inputs needed for production. In horse farming systems, the challenge is to maximize the proportion of forages in the diet, and to develop alternatives to synthetic chemical drugs for controlling gastrointestinal nematodes. Lactating saddle mares, with high nutritional requirements, are commonly supplemented with concentrates at pasture, although the influence of energy supplementation on voluntary intake, performance and immune response against parasites has not yet been quantified. In a 4-month study, 16 lactating mares experimentally infected with cyathostome larvae either received a daily supplement of barley (60% of energy requirements for lactation) or were non-supplemented. The mares were rotationally grazed on permanent pastures over three vegetation cycles. All the mares met their energy requirements and maintained a body condition score above 3. In both treatments, they produced foals with a satisfactory growth rate (cycle 1: 1293 g/day; cycle 2: 1029 g/day; cycle 3: 559 g/day) and conformation (according to measurements of height at withers and cannon bone width at 11 months). Parasite egg excretion by mares increased in both groups during the grazing season (from 150 to 2011 epg), independently of whether they were supplemented or not. This suggests that energy supplementation did not improve the mares' ability to regulate parasite burden. Under unlimited herbage conditions, grass dry matter intake by supplemented mares remained stable around 22.6 g DM/kg LW per day (i.e. 13.5 kg DM/al per day), whereas non-supplemented mares increased voluntary intake from 22.6 to 28.0 g DM/kg LW per day (13.5 to 17.2 kg DM/al per day) between mid-June and the end of August. Hence total digestible dry matter intake and net energy intake did not significantly differ between supplemented and non-supplemented mares during the

  11. Anthropometry profiles of elite rugby players: quantifying changes in lean mass.

    Science.gov (United States)

    Duthie, G M; Pyne, D B; Hopkins, W G; Livingstone, S; Hooper, S L

    2006-03-01

To demonstrate the utility of a practical measure of lean mass for monitoring changes in the body composition of athletes. Between 1999 and 2003 body mass and sum of seven skinfolds were recorded for 40 forwards and 32 backs from one Super 12 rugby union franchise. Players were assessed on 13 (7) occasions (mean (SD)) over 1.9 (1.3) years. Mixed modelling of log transformed variables provided a lean mass index (LMI) of the form mass/skinfolds^x, for monitoring changes in mass controlled for changes in skinfold thickness. Mean effects of phase of season and time in programme were modelled as percentage changes. Effects were standardised for interpretation of magnitudes. The exponent x was 0.13 for forwards and 0.14 for backs (90% confidence limits +/-0.03). The forwards had a small decrease in skinfolds (5.3%, 90% confidence limits +/-2.2%) between preseason and competition phases, and a small increase (7.8%, 90% confidence limits +/-3.1%) during the club season. A small decrease in LMI (approximately 1.5%) occurred after one year in the programme for forwards and backs, whereas increases in skinfolds for forwards became substantial (4.3%, 90% confidence limits +/-2.2%) after three years. Individual variation in body composition was small within a season (within subject SD: body mass, 1.6%; skinfolds, 6.8%; LMI, 1.1%) and somewhat greater for body mass (2.1%) and LMI (1.7%) between seasons. Despite a lack of substantial mean changes, there was substantial individual variation in lean mass within and between seasons. An index of lean mass based on body mass and skinfolds is a potentially useful tool for assessing body composition of athletes.
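The index has a simple closed form. As a minimal sketch (the function name and the example body mass and skinfold values are hypothetical; the exponent x ≈ 0.13 for forwards is taken from the abstract):

```python
def lean_mass_index(body_mass_kg, sum7_skinfolds_mm, x=0.13):
    """Lean mass index of the form mass / skinfolds**x.

    x was fitted at ~0.13 for forwards and ~0.14 for backs in the study;
    the input values used below are purely illustrative.
    """
    return body_mass_kg / (sum7_skinfolds_mm ** x)

# A forward whose skinfolds drop ~5% while body mass holds steady:
pre = lean_mass_index(105.0, 80.0)
post = lean_mass_index(105.0, 76.0)
pct_change = 100.0 * (post / pre - 1.0)  # small positive shift in LMI
```

Because the skinfold term enters with a small exponent, even a 5% skinfold change moves the index by well under 1%, which matches the small LMI changes reported.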

  12. Quantifying the resolution level where the GRACE satellites can separate Greenland's glacial mass balance from surface mass balance

    Science.gov (United States)

    Bonin, J. A.; Chambers, D. P.

    2015-09-01

Mass change over Greenland can be caused by either changes in the glacial dynamic mass balance (DMB) or the surface mass balance (SMB). The GRACE satellite gravity mission cannot directly separate the two physical causes because it measures the sum of the entire mass column with limited spatial resolution. We demonstrate one theoretical way to indirectly separate cumulative SMB from DMB with GRACE, using a least squares inversion technique with knowledge of the location of the glaciers. However, we find that the limited 60 × 60 spherical harmonic representation of current GRACE data does not provide sufficient resolution to adequately accomplish the task. We determine that at a maximum degree/order of 90 × 90 or above, a noise-free gravity measurement could theoretically separate the SMB from DMB signals. However, current GRACE satellite errors are too large to separate the signals. A noise reduction of a factor of 10 at a resolution of 90 × 90 would provide the accuracy needed for the interannual cumulative SMB and DMB to be accurately separated.
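A common rule of thumb links a spherical-harmonic truncation degree to spatial resolution: the half wavelength is roughly π · R_earth / lmax. A minimal sketch of what the 60 × 60 versus 90 × 90 truncations imply (the rule of thumb is standard, but exact values depend on convention):

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def sh_half_wavelength_km(lmax):
    """Approximate half-wavelength spatial resolution of a spherical
    harmonic field truncated at degree lmax: pi * R_earth / lmax."""
    return math.pi * R_EARTH_KM / lmax

res_60 = sh_half_wavelength_km(60)  # standard GRACE fields, roughly 330 km
res_90 = sh_half_wavelength_km(90)  # truncation the abstract identifies, roughly 220 km
```

This makes concrete why a 90 × 90 expansion can resolve glacier-scale structure that a 60 × 60 expansion smears out.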

  13. Quantifying the effect of medium composition on the diffusive mass transfer of hydrophobic organic chemicals through unstirred boundary layers

    DEFF Research Database (Denmark)

    Mayer, Philipp; Karlson, U.; Christensen, P.S.

    2005-01-01

    Unstirred boundary layers (UBLs) often act as a bottleneck for the diffusive transport of hydrophobic organic compounds (HOCs) in the environment. Therefore, a microscale technique was developed for quantifying mass transfer through a 100-μm thin UBL, with the medium composition of the UBL...... as the controllable factor. The model compound fluoranthene had to (1) partition from a contaminated silicone disk (source) into the medium, (2) then diffuse through 100 μm of medium (UBL), and finally (3) partition into a clean silicone layer (sink). The diffusive mass transfer from source to sink was monitored over...... of magnitude. These results demonstrate that medium constituents, which normally are believed to bind hydrophobic organic chemicals, actually can enhance the diffusive mass transfer of HOCs in the vicinity of a diffusion source (e.g., contaminated soil particles). The technique can be used to evaluate...

  14. Milk production, grazing behavior and nutritional status of dairy cows grazing two herbage allowances during winter

    Directory of Open Access Journals (Sweden)

    Miguel Ruiz-Albarran

    2016-03-01

Full Text Available Winter grazing provides a useful means of increasing the proportion of grazed herbage in the annual diet of dairy cows. The season is characterized by low herbage growth rates, low herbage allowance and low herbage intake, and hence a greater need for supplements to meet the requirements of lactating dairy cows. The aim of this study was to determine the influence of the herbage allowance (HA) offered to autumn-calving dairy cows grazing winter herbage on milk production, nutritional status, and grazing behavior. The study lasted 63 d and used 32 multiparous Holstein-Friesian dairy cows. Before the experimental treatments, milk production averaged 20.2 ± 1.7 kg d-1, body weight was 503 ± 19 kg, and days in milk were 103 ± 6. Experimental animals were randomly assigned to two treatments according to the HA offered above ground level: low (17 kg DM cow-1 d-1) vs. high HA (25 kg DM cow-1 d-1). All cows were supplemented daily with grass silage (6.25 kg DM) and concentrate (4.6 kg DM; commercial concentrate plus high-moisture corn). Decreasing HA positively influenced milk production (+25%), milk protein (+20 kg) and milk fat (+17 kg) per hectare; however, no effects on milk production per cow or on energy metabolic status were observed. In conclusion, a low HA was the most influential factor on milk and milk solids production per hectare in dairy cows grazing restricted winter herbage and supplemented with grass silage and concentrate, but no effect on milk production per cow was found.

  15. Herbage intake regulation and growth of rabbits raised on grasslands: back to basics and looking forward.

    Science.gov (United States)

    Martin, G; Duprat, A; Goby, J-P; Theau, J-P; Roinsard, A; Descombes, M; Legendre, H; Gidenne, T

    2016-10-01

Organic agriculture is developing worldwide, and organic rabbit production has developed within this context. It entails raising rabbits in moving cages or paddocks, which enables them to graze grasslands. As organic farmers currently lack basic technical information, the objective of this article is to characterize herbage intake, feed intake and the growth rate of rabbits raised on grasslands in different environmental and management contexts (weather conditions, grassland type and complete feed supplementation). Three experiments were performed with moving cages at an experimental station. From weaning, rabbits grazed a natural grassland, a tall fescue grassland and a sainfoin grassland in experiments 1, 2 and 3, respectively. Rabbit diets were supplemented with a complete pelleted feed limited to 69 g dry matter (DM)/rabbit per day in experiment 1 and 52 g DM/rabbit per day in experiments 2 and 3. Herbage allowance and fiber, DM and protein contents, as well as rabbit intake and live weight, were measured weekly. Mean herbage DM intake per rabbit per day differed significantly (P<0.001) between experiments. It was highest in experiment 1 (78.5 g DM/day) and was 43.9 and 51.2 g DM/day in experiments 2 and 3, respectively. Herbage allowance was the most significant determinant of herbage DM intake during grazing, followed by rabbit metabolic weight (live weight^0.75) and herbage protein and fiber contents. Across experiments, a 10 g DM increase in herbage allowance and a 100 g increase in rabbit metabolic weight corresponded to a mean increase of 6.8 and 9.6 g of herbage DM intake, respectively. When including complete feed, daily mean DM intakes differed significantly among experiments (P<0.001), ranging from 96.1 g DM/rabbit per day in experiment 2 to 163.6 g DM/rabbit per day in experiment 1. Metabolic weight of rabbits raised on grasslands increased linearly over time in all three experiments, yielding daily mean growth rates of 26.2, 19.2 and 28.5 g/day in
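The reported marginal effects can be turned into a small intake-change predictor. A sketch using the abstract's coefficients (+6.8 g DM intake per +10 g DM allowance; +9.6 g per +100 g metabolic weight); treating the effects as linear outside the measured ranges is an assumption, and the example deltas are illustrative:

```python
def metabolic_weight(live_weight):
    """Metabolic weight, defined in the study as live weight ** 0.75."""
    return live_weight ** 0.75

def intake_change_g(d_allowance_g_dm, d_metabolic_weight_g):
    """Predicted change in herbage DM intake (g/day) from the abstract's
    marginal effects: +6.8 g per +10 g DM allowance and +9.6 g per
    +100 g metabolic weight."""
    return 6.8 * (d_allowance_g_dm / 10.0) + 9.6 * (d_metabolic_weight_g / 100.0)

# Allowance raised by 20 g DM and metabolic weight up by 50 g:
delta = intake_change_g(20.0, 50.0)  # 6.8*2 + 9.6*0.5 = 18.4 g DM/day
```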

  16. Development of a bedside viable ultrasound protocol to quantify appendicular lean tissue mass.

    Science.gov (United States)

    Paris, Michael T; Lafleur, Benoit; Dubin, Joel A; Mourtzakis, Marina

    2017-10-01

Ultrasound is a non-invasive and readily available tool that can be prospectively applied at the bedside to assess muscle mass in clinical settings. The four-site protocol, which images two anatomical sites on each quadriceps, may be a viable bedside method, but its ability to predict musculature has not been compared against whole-body reference methods. Our primary objectives were to (i) compare the four-site protocol's ability to predict appendicular lean tissue mass from dual-energy X-ray absorptiometry; (ii) optimize the predictability of the four-site protocol with additional anatomical muscle thicknesses and easily obtained covariates; and (iii) assess the ability of the optimized protocol to identify individuals with low lean tissue mass. This observational cross-sectional study recruited 96 university and community dwelling adults. Participants underwent ultrasound scans for assessment of muscle thickness and whole-body dual-energy X-ray absorptiometry scans for assessment of appendicular lean tissue. Ultrasound protocols included (i) the nine-site protocol, which images nine anterior and posterior muscle groups in supine and prone positions, and (ii) the four-site protocol, which images two anterior sites on each quadriceps muscle group in a supine position. The four-site protocol was strongly associated (R2 = 0.72) with appendicular lean tissue mass, but Bland-Altman analysis displayed wide limits of agreement (-5.67, 5.67 kg). Incorporating the anterior upper arm muscle thickness, and covariates age and sex, alongside the four-site protocol, improved the association (R2 = 0.91) with appendicular lean tissue and displayed narrower limits of agreement (-3.18, 3.18 kg). The optimized protocol demonstrated a strong ability to identify low lean tissue mass (area under the curve = 0.89). The four-site protocol can be improved with the addition of the anterior upper arm muscle thickness, sex, and age when predicting appendicular lean tissue mass

  17. Herbage intake and animal performance of cattle grazing dwarf elephant grass with two access times to a forage peanut area

    Directory of Open Access Journals (Sweden)

    Diego Melo de Liz

    2014-12-01

Full Text Available Relatively short grazing periods in a pure legume pasture can be an alternative for increasing animal performance in medium-quality tropical pastures. Thus, the aim was to evaluate the herbage intake and animal performance of steers grazing dwarf elephant grass (Pennisetum purpureum Schum. cv. BRS Kurumi) with two access times [2 h (07:00-09:00) and 6 h (07:00-13:00)] to an area of forage peanut (Arachis pintoi cv. Amarillo). Twelve steers (219 ± 28.8 kg LW) were divided into four groups and assessed during three consecutive grazing cycles, from January to March 2013. The crude protein and neutral detergent fiber contents were 158 and 577 g/kg dry matter (DM) for dwarf elephant grass and 209 and 435 g/kg DM for forage peanut, respectively. The pre-grazing height and leaf mass of dwarf elephant grass and forage peanut were 94 cm and 2782 kg DM/ha and 15 cm and 1751 kg DM/ha, respectively. The herbage intake (mean = 2.7 ± 0.06% LW) and average daily weight gain (mean = 1.16 ± 0.31 kg/day) were similar for both treatments. However, animals with 2-h access to the legume paddock grazed for 71% of the time, whereas those with 6-h access grazed for 48% of the time. The performance of the steers that were allowed to graze forage peanut pasture for 2 h is similar to that of those that were allowed to graze the legume pasture for 6 h.

  18. Quantifying biological samples using Linear Poisson Independent Component Analysis for MALDI-ToF mass spectra

    Science.gov (United States)

    Deepaisarn, S; Tar, P D; Thacker, N A; Seepujak, A; McMahon, A W

    2018-01-01

Motivation: Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI) facilitates the analysis of large organic molecules. However, the complexity of biological samples and MALDI data acquisition leads to high levels of variation, making reliable quantification of samples difficult. We present a new analysis approach that we believe is well suited to the properties of MALDI mass spectra, based upon an Independent Component Analysis derived for Poisson sampled data. Simple analyses have been limited to studying small numbers of mass peaks, via peak ratios, which is known to be inefficient. Conventional PCA and ICA methods have also been applied, which extract correlations between any number of peaks but, we argue, make inappropriate assumptions regarding data noise (i.e. uniform and Gaussian). Results: We provide evidence that the Gaussian assumption is incorrect, motivating the need for our Poisson approach. The method is demonstrated by making proportion measurements from lipid-rich binary mixtures of lamb brain and liver, and also goat and cow milk. These allow our measurements and error predictions to be compared to ground truth. Availability and implementation: Software is available via the open source image analysis system TINA Vision, www.tina-vision.net. Contact: paul.tar@manchester.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID: 29091994

  19. Rapid Quadrupole-Time-of-Flight Mass Spectrometry Method Quantifies Oxygen-Rich Lignin Compound in Complex Mixtures

    Science.gov (United States)

    Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.

    2018-03-01

Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
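Internal-standard quantitation of the kind described reduces to reading a concentration off a linear calibration of the analyte-to-internal-standard response ratio. A generic sketch (the function name, peak areas and calibration slope are illustrative, not from the paper):

```python
def conc_from_is_ratio(area_analyte, area_is, slope, intercept=0.0):
    """Back-calculate concentration from a linear internal-standard
    calibration of the form: ratio = slope * conc + intercept."""
    ratio = area_analyte / area_is
    return (ratio - intercept) / slope

# With a hypothetical calibration slope of 1e-3 mL/ng:
conc = conc_from_is_ratio(area_analyte=50_000.0, area_is=100_000.0, slope=1e-3)
# ratio 0.5 -> 500 ng/mL, inside the 300-2500 ng/mL validated range
```

Ratioing against the internal standard is what cancels run-to-run variation in ionization efficiency, which is why the method can dispense with chromatographic separation.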

  20. A single-run liquid chromatography mass spectrometry method to quantify neuroactive kynurenine pathway metabolites in rat plasma.

    Science.gov (United States)

    Orsatti, Laura; Speziale, Roberto; Orsale, Maria Vittoria; Caretti, Fulvia; Veneziano, Maria; Zini, Matteo; Monteagudo, Edith; Lyons, Kathryn; Beconi, Maria; Chan, Kelvin; Herbst, Todd; Toledo-Sherman, Leticia; Munoz-Sanjuan, Ignacio; Bonelli, Fabio; Dominguez, Celia

    2015-03-25

Neuroactive metabolites in the kynurenine pathway of tryptophan catabolism are associated with neurodegenerative disorders. Tryptophan is transported across the blood-brain barrier and converted via the kynurenine pathway to N-formyl-L-kynurenine, which is further degraded to L-kynurenine. This metabolite can then generate a group of metabolites called kynurenines, most of which have neuroactive properties. The association of tryptophan catabolic pathway alterations with various central nervous system (CNS) pathologies has raised interest in analytical methods to accurately quantify kynurenines in body fluids. Here we describe a rapid and sensitive reverse-phase HPLC-MS/MS method to quantify L-kynurenine (KYN), kynurenic acid (KYNA), 3-hydroxy-L-kynurenine (3HK) and anthranilic acid (AA) in rat plasma. Our goal was to quantify these metabolites in a single run; given their different physico-chemical properties, major efforts were devoted to developing a chromatography suitable for all metabolites. It involves plasma protein precipitation with acetonitrile followed by chromatographic separation by C18 RP chromatography, detected by electrospray mass spectrometry. The quantitation range was 0.098-100 ng/ml for 3HK, 9.8-20,000 ng/ml for KYN, and 0.49-1000 ng/ml for KYNA and AA. The method was linear (r>0.9963) and validation parameters were within the acceptance range (calibration standards and QC accuracy within ±30%).

  1. Multi-isotope imaging mass spectrometry quantifies stem cell division and metabolism.

    Science.gov (United States)

    Steinhauser, Matthew L; Bailey, Andrew P; Senyo, Samuel E; Guillermier, Christelle; Perlstein, Todd S; Gould, Alex P; Lee, Richard T; Lechene, Claude P

    2012-01-15

Mass spectrometry with stable isotope labels has been seminal in discovering the dynamic state of living matter, but is limited to bulk tissues or cells. We developed multi-isotope imaging mass spectrometry (MIMS) that allowed us to view and measure stable isotope incorporation with submicrometre resolution. Here we apply MIMS to diverse organisms, including Drosophila, mice and humans. We test the 'immortal strand hypothesis', which predicts that during asymmetric stem cell division chromosomes containing older template DNA are segregated to the daughter destined to remain a stem cell, thus ensuring lifetime genetic stability. After labelling mice with (15)N-thymidine from gestation until post-natal week 8, we find no (15)N label retention by dividing small intestinal crypt cells after a four-week chase. In adult mice administered (15)N-thymidine pulse-chase, we find that proliferating crypt cells dilute the (15)N label, consistent with random strand segregation. We demonstrate the broad utility of MIMS with proof-of-principle studies of lipid turnover in Drosophila and translation to the human haematopoietic system. These studies show that MIMS provides high-resolution quantification of stable isotope labels that cannot be obtained using other techniques and that is broadly applicable to biological and medical research.

  2. Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty

    Science.gov (United States)

    Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
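The root-sum-square combination described above is straightforward to implement. A minimal sketch (the error-budget component values are illustrative only, not from the method):

```python
import math

def total_uncertainty_rss(components):
    """Combine independent uncertainty components by root-sum-square."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical leak-detector error budget (std cc/s of helium):
# resolution, repeatability, hysteresis, drift, calibration standard.
u_total = total_uncertainty_rss([0.5e-9, 1.2e-9, 0.4e-9, 0.8e-9, 1.0e-9])
```

Note that the combined value always exceeds the largest single component, which is exactly the shortfall of reporting instrument resolution alone.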

  3. Herbage Production and Quality of Shrub Indigofera Treated by Different Concentration of Foliar Fertilizer

    Directory of Open Access Journals (Sweden)

    L. Abdullah

    2010-12-01

Full Text Available A field experiment on the fodder legume Indigofera sp. was conducted to investigate the effects of foliar fertilizer concentration on forage yield and quality, and to identify the optimum concentration among the fertilizer treatments for herbage yield, chemical composition (CP, NDF, ADF, minerals), and in vitro dry matter (IVDMD) and organic matter (IVOMD) digestibility in the goat's rumen. A randomized block design with three replicates was used for six fertilizer concentrations: control, 10, 20, 30, 40, and 50 g/10 l. Leaves were sprayed with foliar fertilizer at 30, 34, 38, and 42 days after harvest. Samples were collected at two harvest times with a 60-day cutting interval. Application of the foliar fertilizer up to 30 g/10 l significantly increased herbage DM yield, twig numbers, tannin, saponin, Ca and P content, as well as herbage digestibility (IVDMD and IVOMD). Lower and higher concentrations of foliar fertilizer resulted in lower values for these parameters, but NDF and ADF contents showed the opposite pattern. The optimum foliar fertilizer level for herbage yield and quality was 30 g/10 l, and for in vitro digestibility and Ca concentration it was 20 g/10 l.

  4. Cultivar effects of perennial ryegrass on herbage intake by grazing dairy cows

    NARCIS (Netherlands)

    Smit, H.J.

    2006-01-01

    Perennial ryegrass is the most abundant grass species in temperate climates. An increased herbage intake of dairy cows by breeding new cultivars could have a large potential impact on agriculture. The effects of cultivars on sward structure, nutritive value, physical characteristics and disease

  5. Perennial ryegrass for dairy cows: effects of cultivar on herbage intake during grazing

    NARCIS (Netherlands)

    Smit, H.J.

    2005-01-01

Keywords: Perennial ryegrass, Lolium perenne, sward morphology, sward cutting, n-alkanes, herbage intake, selection, preference. Perennial ryegrass (Lolium perenne L.) is the most important species for feeding dairy cows. The majority of the farmers in the Netherlands graze their

  6. Chemical composition of lamina and sheath of Lolium perenne as affected by herbage management

    NARCIS (Netherlands)

    Hoekstra, N.J.; Struik, P.C.; Lantinga, E.A.; Schulte, R.P.O.

    2007-01-01

The quality of grass in terms of form and relative amounts of energy and protein affects both animal production per unit of intake and nitrogen (N) utilization. Quality can be manipulated by herbage management and choice of cultivar. The effects of N application rate (0, 90 or 390 kg N ha-1 year-1),

  7. Effect of fertilizer type on cadmium and fluorine concentrations in clover herbage

    International Nuclear Information System (INIS)

    McLaughlin, M.J.

    2002-01-01

This study investigated whether changing phosphatic fertilizer type affects the accumulation of cadmium (Cd) and fluorine (F) in pasture herbage. North Carolina phosphate rock and partially acidulated fertilizers derived from this rock generally have higher Cd and F concentrations compared to single superphosphate currently manufactured in Australia. Clover herbage from sites of the National Reactive Phosphate Rock (RPR) trial was collected and analysed for concentrations of Cd (11 sites) and F (4 sites). A comparison was made between pastures fertilized with 4 rates of single superphosphate, North Carolina phosphate rock, and partially acidulated phosphate rock having Cd concentrations of 283, 481, and 420 mg Cd/kg P respectively, and 170, 271, and 274 g F/kg P respectively. One site used Hemrawein (Egypt) phosphate rock (HRP) having Cd and F concentrations of 78 mg Cd/kg P and 256 g F/kg P respectively. To help identify differences in herbage Cd concentrations between sites, unfertilised soils from each site were analysed for total and extractable Cd contents. At one site, Cd concentrations in bulk herbage (clover, grasses and weeds) were related to infestation of the pasture by capeweed (Arctotheca calendula L. Levyns). There were no significant differences between F in herbage from plots fertilized with single superphosphate, partially acidulated phosphate rock or North Carolina phosphate rock, or between sites. Concentrations of F in herbage were low, generally less than 10 mg F/kg. However, there were large differences in Cd concentrations in herbage between sites, while differences between fertilizer treatments were small in comparison. The site differences were only weakly related to total or extractable (0.01 mol/L CaCl2) Cd concentrations in soil. Significant differences in Cd concentrations in clover due to fertilizer type were found at 5 sites. North Carolina phosphate rock treatments had significantly higher Cd concentrations in clover compared to

  8. Precursors predicted by artificial neural networks for mass balance calculations: Quantifying hydrothermal alteration in volcanic rocks

    Science.gov (United States)

    Trépanier, Sylvain; Mathieu, Lucie; Daigneault, Réal; Faure, Stéphane

    2016-04-01

This study proposes an artificial neural networks-based method for predicting the unaltered (precursor) chemical compositions of hydrothermally altered volcanic rock. The method aims to predict the precursor's major-element contents (SiO2, FeOT, MgO, CaO, Na2O, and K2O). The prediction is based on ratios of elements generally immobile during alteration processes, i.e. Zr, TiO2, Al2O3, Y, Nb, Th, and Cr, which are provided as inputs to the neural networks. Multi-layer perceptron neural networks were trained on a large dataset of least-altered volcanic rock samples that document a wide range of volcanic rock types, tectonic settings and ages. The precursors thus predicted are then used to perform mass balance calculations. Various statistics were calculated to validate the predictions of the precursors' major elements, which indicate that, overall, the predictions are precise and accurate. For example, rank-based correlation coefficients were calculated to compare predicted and analysed values from a least-altered test dataset that had not been used to train the networks. Coefficients over 0.87 were obtained for all components, except for Na2O (0.77), indicating that predictions for alkalis may be less reliable. Predictions also perform well for most volcanic rock compositions, except for ultra-K rocks. The proposed method provides an easy and rapid solution to the often difficult task of determining appropriate precursor compositions for rocks modified by hydrothermal alteration. It is intended for large volcanic rock databases and is most useful, for example, for mineral exploration performed in complex or poorly known volcanic settings. The method is implemented as a simple C++ console program.
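Once a precursor composition is predicted, a single-precursor immobile-element mass balance (in the spirit of MacLean-type calculations; the exact formulation used in the paper may differ) yields absolute gains and losses. A sketch with Zr as the immobile monitor and illustrative compositions:

```python
def component_mass_change(c_altered, c_precursor, zr_altered, zr_precursor):
    """Gain/loss of a component (same units as the inputs, e.g. wt%)
    relative to the precursor, using an immobile monitor element (Zr):
    rescale the altered concentration back to precursor mass via the
    Zr ratio, then subtract the precursor concentration."""
    return c_altered * (zr_precursor / zr_altered) - c_precursor

# Illustrative values (not from the paper): altered rock with 2.0 wt% Na2O
# and 120 ppm Zr versus an ANN-predicted precursor with 4.0 wt% Na2O
# and 100 ppm Zr.
d_na2o = component_mass_change(2.0, 4.0, 120.0, 100.0)  # negative -> Na2O lost
```

The Zr ratio corrects for the overall mass gain or loss of the rock, which is why raw concentration differences alone would misstate the alteration.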

  9. Quantifying Uncertainties in Mass-Dimensional Relationships Through a Comparison Between CloudSat and SPartICus Reflectivity Factors

    Science.gov (United States)

    Mascio, J.; Mace, G. G.

    2015-12-01

CloudSat and CALIPSO, two of the satellites in the A-Train constellation, use algorithms, such as the T-matrix method, to calculate the scattering properties of small cloud particles. Ice clouds (i.e. cirrus) cause problems for these cloud property retrieval algorithms because of their variability in ice mass as a function of particle size. Assumptions regarding the microphysical properties, such as mass-dimensional (m-D) relationships, are often necessary in retrieval algorithms for simplification, but these assumptions create uncertainties of their own. Therefore, ice cloud property retrieval uncertainties can be substantial and are often not well known. To investigate these uncertainties, reflectivity factors measured by CloudSat are compared to those calculated from particle size distributions (PSDs) to which different m-D relationships are applied. These PSDs are from data collected in situ during three flights of the Small Particles in Cirrus (SPartICus) campaign. We find that no specific habit emerges as preferred and instead conclude that the microphysical characteristics of ice crystal populations tend to be distributed over a continuum and, therefore, cannot be categorized easily. To quantify the uncertainties in the mass-dimensional relationships, an optimal estimation inversion was run to retrieve the m-D relationship per SPartICus flight, as well as to calculate uncertainties of the m-D power law.

  10. Fatty acid, tocopherol and carotenoid content in herbage and milk affected by sward composition and season of grazing

    DEFF Research Database (Denmark)

    Larsen, Mette Krogh; Fretté, Xavier; Kristensen, Troels

    2012-01-01

    BACKGROUND: The aim of the present work was to study to what extent grazing large amounts of white clover (WCL), red clover (RCL), lucerne (LUC) or chicory (CHI) was suitable for production of bovine milk with a high milk fat content of tocopherols, carotenoids, α-linolenic acid and conjugated ...), carotenoids (6 μg g−1) and α-tocopherol (21 μg g−1 milk fat). There were minor differences between herbage types and periods, but multivariate analysis of these data showed no clear grouping. Chemical composition of herbage varied with species as well as period, but it was not possible to relate milk and feed ... contents of specific fatty acids, carotenoids or tocopherols. CONCLUSION: All four herbages tested were suitable for production of milk with a high content of beneficial compounds. Thus any of these herbages could be used in production of such differentiated milk based on a large proportion of grazing ...

  11. 13C- and 15N-Labeling Strategies Combined with Mass Spectrometry Comprehensively Quantify Phospholipid Dynamics in C. elegans.

    Directory of Open Access Journals (Sweden)

    Blair C R Dancy

    Full Text Available Membranes define cellular and organelle boundaries, a function that is critical to all living systems. Like other biomolecules, membrane lipids are dynamically maintained, but current methods are extremely limited for monitoring lipid dynamics in living animals. We developed novel strategies in C. elegans combining 13C and 15N stable isotopes with mass spectrometry to directly quantify the replenishment rates of the individual fatty acids and intact phospholipids of the membrane. Using multiple measurements of phospholipid dynamics, we found that the phospholipid pools are replaced rapidly and at rates nearly double the turnover measured for neutral lipid populations. In fact, our analysis shows that the majority of membrane lipids are replaced each day. Furthermore, we found that stearoyl-CoA desaturases (SCDs, critical enzymes in polyunsaturated fatty acid production, play an unexpected role in influencing the overall rates of membrane maintenance as SCD depletion affected the turnover of nearly all membrane lipids. Additionally, the compromised membrane maintenance as defined by LC-MS/MS with SCD RNAi resulted in active phospholipid remodeling that we predict is critical to alleviate the impact of reduced membrane maintenance in these animals. Not only have these combined methodologies identified new facets of the impact of SCDs on the membrane, but they also have great potential to reveal many undiscovered regulators of phospholipid metabolism.

  12. Quantifying Freshwater Mass Balance in the Central Tibetan Plateau by Integrating Satellite Remote Sensing, Altimetry, and Gravimetry

    Directory of Open Access Journals (Sweden)

    Kuo-Hsin Tseng

    2016-05-01

    Full Text Available The Tibetan Plateau (TP has been observed by satellite optical remote sensing, altimetry, and gravimetry for a variety of geophysical parameters, including water storage change. However, each of these sensors has its respective limitation in the parameters observed, accuracy and spatial-temporal resolution. Here, we utilized an integrated approach to combine remote sensing imagery, digital elevation model, and satellite radar and laser altimetry data, to quantify freshwater storage change in a twin lake system named Chibuzhang Co and Dorsoidong Co in the central TP, and compared that with independent observations including mass changes from the Gravity Recovery and Climate Experiment (GRACE data. Our results show that this twin lake, located within the Tanggula glacier system, remained almost steady during 1973–2000. However, Dorsoidong Co has experienced a significant lake level rise since 2000, especially during 2000–2005, that resulted in the plausible connection between the two lakes. The contemporary increasing lake level signal at a rate of 0.89 ± 0.05 cm·yr−1, in a 2° by 2° grid equivalent water height since 2002, is higher than the GRACE observed trend at 0.41 ± 0.17 cm·yr−1 during the same time span. Finally, a down-turning trend or inter-annual variability shown in the GRACE signal is observed after 2012, while the lake level is still rising at a consistent rate.

  13. Ultra-high-performance liquid chromatography-Time-of-flight high resolution mass spectrometry to quantify acidic drugs in wastewater.

    Science.gov (United States)

    Becerra-Herrera, Mercedes; Honda, Luis; Richter, Pablo

    2015-12-04

    A novel analytical approach involving an improved rotating-disk sorptive extraction (RDSE) procedure and ultra-high-performance liquid chromatography (UHPLC) coupled to an ultraspray electrospray ionization source (UESI) and time-of-flight mass spectrometry (TOF/MS), in trap mode, was developed to identify and quantify four non-steroidal anti-inflammatory drugs (NSAIDs) (naproxen, ibuprofen, ketoprofen and diclofenac) and two anti-cholesterol drugs (ACDs) (clofibric acid and gemfibrozil) that are widely used and typically found in water samples. The method reduced the amount of both sample and reagents used and also the time required for the whole analysis, resulting in a reliable and green analytical strategy. The analytical eco-scale was calculated, showing that this methodology is an excellent green analysis, increasing its ecological worth. The detection limits (LOD) and precision (%RSD) were lower than 90 ng/L and 10%, respectively. Matrix effects and recoveries were studied using samples from the influent of a wastewater treatment plant (WWTP). All the compounds exhibited suppression of their signals due to matrix effects, and the recoveries were approximately 100%. The applicability and reliability of this methodology were confirmed through the analysis of influent and effluent samples from a WWTP in Santiago, Chile, obtaining concentrations ranging from 1.1 to 20.5 μg/L and from 0.5 to 8.6 μg/L, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Quantifying Contaminant Mass for the Feasibility Study of the DuPont Chambers Works FUSRAP Site - 13510

    Energy Technology Data Exchange (ETDEWEB)

    Young, Carl; Rahman, Mahmudur; Johnson, Ann; Owe, Stephan [Cabrera Services Inc., 1106 N. Charles St., Ste. 300, Baltimore, MD 21201 (United States)

    2013-07-01

    The U.S. Army Corps of Engineers (USACE) - Philadelphia District is conducting an environmental restoration at the DuPont Chambers Works in Deepwater, New Jersey under the Formerly Utilized Sites Remedial Action Program (FUSRAP). Discrete locations are contaminated with natural uranium, thorium-230 and radium-226. The USACE is proposing a preferred remedial alternative consisting of excavation and offsite disposal to address soil contamination, followed by monitored natural attenuation to address residual groundwater contamination. Methods were developed to quantify the error associated with contaminant volume estimates and to use mass balance calculations of the uranium plume to estimate the removal efficiency of the proposed alternative. During the remedial investigation, the USACE collected approximately 500 soil samples at various depths. As the first step of contaminant mass estimation, soil analytical data were segmented into several depth intervals. Second, using contouring software, analytical data for each depth interval were contoured to determine the lateral extent of contamination. Six different contouring algorithms were used to generate alternative interpretations of the lateral extent of the soil contamination. Finally, geographical information system software was used to produce a three-dimensional model presenting both the lateral and vertical extent of the soil contamination and to estimate the volume of impacted soil for each depth interval. The average soil volume from all six contouring methods was used to determine the estimated volume of impacted soil. This method also allowed an estimate of the standard deviation of the waste volume estimate. It was determined that the margin of error for the method was plus or minus 17% of the waste volume, which is within the acceptable construction contingency for cost estimation. USACE collected approximately 190 groundwater samples from 40 monitoring wells. It is expected that excavation and disposal of
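
    The volume-averaging step described above can be illustrated with a short sketch: take the volume estimate produced by each of the six contouring algorithms, average them, and express the spread across methods as a percentage margin of error. The volumes below are hypothetical placeholders, not the site's actual estimates:

```python
import numpy as np

# Hypothetical impacted-soil volume estimates from six contouring algorithms
# (placeholder values in cubic metres, not the site's actual numbers).
volumes = np.array([4800.0, 5200.0, 5100.0, 4600.0, 5400.0, 4900.0])

mean_volume = volumes.mean()                   # estimated waste volume
std_volume = volumes.std(ddof=1)               # sample SD across methods
margin_pct = 100.0 * std_volume / mean_volume  # margin of error, % of mean
```

    A margin expressed this way can then be checked against the acceptable construction contingency, as was done for the 17% figure reported above.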

  15. Non-intrusive measurement of tritium activity in waste drums by modelling a 3He leak quantified by mass spectrometry

    International Nuclear Information System (INIS)

    Demange, D.

    2002-01-01

    This study deals with a new method that makes it possible to measure very low tritium quantities inside radioactive waste drums. This indirect method is based on measuring the decay product, 3He, and requires a study of its behaviour inside the drum. Our model considers 3He as totally free and treats its leak through the polymeric joint of the drum as two distinct phenomena: permeation and laminar flow. The numerical simulations show that a pseudo-stationary state takes place. Thus, the 3He leak corresponds to the tritium activity inside the drum, although the leak peaks when atmospheric pressure variations induce an overpressure in the drum. Nevertheless, confining a drum in a tight chamber makes it possible to quantify the 3He leak. This is a non-intrusive measurement of its activity, which was experimentally checked by using reduced models representing the drum and its confinement chamber. The drum's confinement was optimised to obtain a reproducible 3He leak measurement. The gaseous samples taken from the chamber were purified using selective adsorption onto activated charcoals at 77 K to remove the tritium and pre-concentrate the 3He. The samples were measured using a leak-detector mass spectrometer. The adaptation of the signal acquisition and the optimisation of the analysis parameters made it possible to reach the stability of external calibrations using standard gases, with a 3He detection limit of 0.05 ppb. Repeated confinement of the reference drums demonstrated the accuracy of this method. The uncertainty of this non-intrusive measurement of the tritium activity in 200-litre drums is 15% and the detection limit is about 1 GBq after a 24 h confinement. These results led to the definition of an automated tool able to systematically measure the tritium activity of all stored waste drums. (authors)
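
    The physical basis of the method above is that each tritium decay yields exactly one 3He atom, so at the pseudo-stationary state the 3He leak rate maps directly onto the tritium activity. A minimal sketch of that conversion follows (idealised steady state, illustrative leak rate, not the paper's calibration):

```python
# At pseudo-steady state, every tritium decay (1 Bq = 1 decay/s) releases
# one 3He atom, so the 3He leak rate in atoms/s equals the activity in Bq.
AVOGADRO = 6.022e23  # atoms per mole

def tritium_activity_bq(he3_leak_mol_per_s):
    """Tritium activity implied by a measured 3He leak rate (mol/s),
    assuming all generated 3He escapes the drum at steady state."""
    return he3_leak_mol_per_s * AVOGADRO

# e.g. a hypothetical leak of 1.66e-15 mol/s of 3He implies roughly 1 GBq:
activity = tritium_activity_bq(1.66e-15)
```

    In the real measurement the leak rate is inferred from the 3He concentration buildup in the confinement chamber, and the permeation and laminar-flow contributions modelled above correct for departures from this idealised picture.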

  16. Rumen degradation characteristics of ryegrass herbage and ryegrass silage are affected by interactions between stage of maturity and nitrogen fertilisation rate

    NARCIS (Netherlands)

    Heeren, J.A.H.; Podesta, S.C.; Hatew, B.; Klop, G.; Laar, van H.; Bannink, A.; Warner, D.; Jonge, de L.H.; Dijkstra, J.

    2014-01-01

    The objective of this experiment was to evaluate interaction effects between stage of maturity and N fertilization rate on rumen degradation characteristics determined with nylon bag incubations of ryegrass herbages and ryegrass silage. Grass herbage (n = 4) was cut after 3 or 5 weeks of regrowth

  17. Effects of green manure herbage management and its digestate from biogas production on barley yield, N recovery, soil structure and earthworm populations

    DEFF Research Database (Denmark)

    Frøseth, Randi Berland; Bakken, Anne Kjersti; Bleken, Marina Azzaroli

    2014-01-01

    In repeatedly mown and mulched green manure leys, the mulched herbage contains substantial amounts of nitrogen (N), which may only slightly contribute to the following crops' nutrient demand. The objective of the present work was to evaluate the effect of alternative strategies for green manure management on the yield and N recovery of a subsequent spring barley crop, and their short term effects on soil structure and earthworm populations. A field trial was run from 2008 to 2011 at four sites with contrasting soils under cold climate conditions. We compared several options for on-site herbage management and the application of anaerobically digested green manure herbage. Depending on the site, removal of green manure herbage reduced the barley grain yield by 0% to 33% compared to leaving it on-site. Applying digestate, containing 45% of the N in harvested herbage, as fertilizer for barley gave ...

  18. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    Science.gov (United States)

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

    Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four randomly located plots of 0.16 m² with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance

  19. STANDING HERBAGE BIOMASS UNDER DIFFERENT TREE SPECIES DISPERSED IN PASTURES OF CATTLE FARMS

    Directory of Open Access Journals (Sweden)

    Humberto Esquivel-Mimenza

    2013-08-01

    Full Text Available The study, conducted in a tropical dry ecosystem at Cañas, Guanacaste, Costa Rica (10°11′ N, 84°15′ W), measured the standing herbage biomass (SHB) availability and quality under six isolated tree species of different canopy architecture dispersed in active Brachiaria brizantha pastures and compared it to that growing in full sunlight. Standing herbage biomass harvesting and photosynthetically active radiation (PAR) readings were taken at three different periods in a paired sample scheme. Of the six tree species studied, Enterolobium cyclocarpum had the largest mean crown cover while Acrocomia aculeata had the smallest. Significant differences were observed between species (P = 0.0002) and seasons (P < 0.008) for the percentage of PAR transmitted under the canopy, but PAR levels obtained under all species were consistent throughout seasons since the interaction between species and season was not significant (P = 0.98). Lower PAR readings (<50%) were taken under the canopies of E. cyclocarpum and Guazuma ulmifolia (21.7 and 33.7%, respectively). Standing herbage biomass harvested under the crown of isolated mature trees was significantly lower (P < 0.001) than in open pasture areas for all tree species except A. aculeata, but SHB crude protein content was higher underneath all tree canopies. It can be concluded that light reduction caused by tree canopies reduces SHB availability and increases its quality compared to areas of full sun, but this varies according to tree species and season.

  20. [Jaundice after a Herbage Walking Tour in a 44-Year-Old Man].

    Science.gov (United States)

    Sawatzki, Mikael; Haller, Christoph; Henz, Samuel

    2015-06-03

    We report on a 44-year-old patient with severe acute hepatitis E after a herbage walking tour. Transmission occurred through ingestion of contaminated herbs. Symptoms were jaundice, dark urine, rheumatic pains and pronounced fatigue. We documented a benign, self-limiting course under regular clinical controls. Hepatitis E is a common cause of acute hepatitis with jaundice worldwide. In Switzerland, this autochthonous infection is acquired by consumption of pork and venison (seroprevalence up to 22%). Infection can be asymptomatic but can also result in acute liver failure. Extrahepatic symptoms are not uncommon.

  1. Effect of supplement level on herbage intake and feeding behaviour of Italian Brown cows grazing on Alpine pasture

    Directory of Open Access Journals (Sweden)

    D. Villa

    2010-01-01

    Full Text Available Summer grazing of dairy cows on mountain pastures often leads to a fall in production or in body condition when the pasture is not adequately supplemented with concentrate feeds (Malossini et al., 1992; Bovolenta et al., 1998). An abundant use of concentrates may result in a reduction of herbage intake according to a substitution rate mechanism (Faverdin et al., 1991). The aim of this trial was to evaluate the effect of the supplementation level on herbage intake, milk yield and feeding behaviour (time spent grazing and ruminating) of dairy cows at pasture, combining the use of an electronic bitemeter and a double marker method for the estimation of intake.

  2. Contributions of natural and anthropogenic radiative forcing to mass loss of Northern Hemisphere mountain glaciers and quantifying their uncertainties.

    Science.gov (United States)

    Hirabayashi, Yukiko; Nakano, Kazunari; Zhang, Yong; Watanabe, Satoshi; Tanoue, Masahiro; Kanae, Shinjiro

    2016-07-20

    Observational evidence indicates that a number of glaciers have lost mass in the past. Given that glaciers are highly impacted by the surrounding climate, human-influenced global warming may be partly responsible for mass loss. However, previous research studies have been limited to analyzing the past several decades, and it remains unclear whether past glacier mass losses are within the range of natural internal climate variability. Here, we apply an optimal fingerprinting technique to observed and reconstructed mass losses as well as multi-model general circulation model (GCM) simulations of mountain glacier mass to detect and attribute past glacier mass changes. An 8,800-year control simulation of glaciers enabled us to evaluate detectability. The results indicate that human-induced increases in greenhouse gases have contributed to the decreased area-weighted average masses of 85 analyzed glaciers. The effect was larger than the mass increase caused by natural forcing, although the contributions of natural and anthropogenic forcing to decreases in mass varied at the local scale. We also showed that the detection of anthropogenic or natural influences could not be fully attributed when natural internal climate variability was taken into account.

  4. Phyto-oestrogens in herbage and milk from cows grazing whiteclover, red clover, lucerne or chicory-rich pastures

    DEFF Research Database (Denmark)

    Andersen, C; Nielsen, T S; Purup, S

    2009-01-01

    A grazing experiment was carried out to study the concentration of phyto-oestrogens in herbage for cattle and in milk during two periods (May and June). Forty-eight Danish Holstein cows were divided into four groups with four treatment diets; white clover, red clover, lucerne and chicory-rich pas...

  5. Soil restoration under pasture after lignite mining - management effects on soil biochemical properties and their relationships with herbage yields

    Energy Technology Data Exchange (ETDEWEB)

    Ross, D.J.; Speir, T.W.; Cowling, J.C.; Feltham, C.W. (DSIR, Lower Hutt (New Zealand))

    1992-01-01

    The recovery of soil biochemical properties under grazed, grass-clover pasture after simulated lignite mining was studied over a 5-year period in a mesic Typic Dystrochrept soil at Waimumu, Southland, New Zealand. Restoration procedures involved four replacement treatments, after A, B and C horizon materials had been separately removed from all except the control and stockpiled for 2-3 weeks. Replacement treatment markedly influenced the recovery of herbage production and soil organic C and total N contents, N mineralization, microbial biomass (as indicated by mineral-N flush) and invertase and sulphatase activities. The effectiveness of replacement treatments decreased in the order: (1) control (no stripping or replacement); (2) A, B and C horizon materials replaced in the same order; (3) A, B and C horizon materials each mixed with an equal amount of siltstone overburden and replaced in order; (4) A and B horizon materials mixed before replacing over C horizon materials. Ripping increased herbage production, net N mineralization and microbial biomass. Fertilizer N also stimulated herbage production but depressed clover growth. Increases in soil invertase and, to a lesser extent, sulphatase activity were closely related to changes in herbage production. Microbial biomass increased more rapidly than soil organic C in the early stages of the trial. Rates of net N mineralization suggest that N availability would have limited pasture growth.

  6. Effect of time of maize silage supplementation on herbage intake, milk production, and nitrogen excretion of grazing dairy cows.

    Science.gov (United States)

    Al-Marashdeh, O; Gregorini, P; Edwards, G R

    2016-09-01

    The objective of this study was to evaluate the effect of feeding maize silage at different times before a short grazing bout on dry matter (DM) intake, milk production, and N excretion of dairy cows. Thirty-six Friesian × Jersey crossbred lactating dairy cows were blocked in 9 groups of 4 cows by milk solids (sum of protein and fat) production (1.26 ± 0.25 kg/d), body weight (466 ± 65 kg), body condition score (4 ± 0.48), and days in milk (197 ± 15). Groups were then randomly assigned to 1 of 3 replicates of 3 treatments: control (herbage only); supplemented with 3 kg of DM/cow of maize silage after morning milking, approximately 9 h before pasture allocation (9BH); or supplemented with 3 kg of DM/cow of maize silage before afternoon milking, approximately 2 h before pasture allocation (2BH). Herbage allowance (above ground level) was 22 kg of DM/cow per day for all groups of cows. Cows were allocated to pasture from 1530 to 2030 h. Maize silage DM intake did not differ between treatments, averaging 3 kg of DM/cow per day. Herbage DM intake was greater for control than 2BH and 9BH, and greater for 9BH than 2BH (11.1, 10.1, and 10.9 kg of DM/cow per day for control, 2BH, and 9BH, respectively). The substitution rate (kilograms of herbage DM per kilogram of maize silage DM) was greater for 2BH (0.47) than 9BH (0.19). Milk solids production was similar between treatments (overall mean 1.2 kg/cow per day). Body weight loss tended to be less for supplemented than control cows (-0.95, -0.44, and -0.58 kg/cow per day for control, 2BH, and 9BH, respectively). Nitrogen concentration in urine was not affected by supplementation or time of supplementation, but estimated urinary N excretion tended to be greater for control than supplemented cows when urinary N excretion was estimated using plasma or milk urea N. At the time of the herbage meal, nonesterified fatty acid concentration was greater for control than supplemented cows and greater for 9BH than 2BH (0.58, 0.14, and 0.26 mmol/L for
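
    The substitution rate reported above is, in its simplest textbook form, the reduction in herbage DM intake per kilogram of supplement DM consumed. The sketch below implements that simple definition using the mean intakes from the abstract; note that the study's reported values (0.47 and 0.19) were presumably derived within its statistical model, so this naive calculation is illustrative only and does not reproduce them:

```python
def substitution_rate(herbage_dmi_control, herbage_dmi_suppl, supplement_dmi):
    """Kilograms of herbage DM displaced per kilogram of supplement DM eaten."""
    return (herbage_dmi_control - herbage_dmi_suppl) / supplement_dmi

# Mean intakes reported in the abstract (kg DM/cow per day):
sr_2bh = substitution_rate(11.1, 10.1, 3.0)  # 2BH treatment
sr_9bh = substitution_rate(11.1, 10.9, 3.0)  # 9BH treatment
```

    Whatever the estimation details, the ordering is preserved: feeding silage closer to pasture allocation (2BH) displaces more herbage per kilogram of silage than feeding it earlier in the day (9BH).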

  7. The role of Monte Carlo burnup calculations in quantifying plutonium mass in spent fuel assemblies with non-destructive assay

    Energy Technology Data Exchange (ETDEWEB)

    Galloway, Jack D.; Tobin, Stephen J.; Trellue, Holly R.; Fensin, Michael L. [Los Alamos National Laboratory, Los Alamos, (United States)

    2011-12-15

    The Next Generation Safeguards Initiative (NGSI) of the United States Department of Energy has funded a multi-laboratory/university collaboration to quantify plutonium content in spent fuel (SF) with non-destructive assay (NDA) techniques and to quantify the capability of these NDA techniques to detect pin diversions from SF assemblies. The first Monte Carlo based spent fuel library (SFL) developed for the NGSI program contained information for 64 different types of SF assemblies (four initial enrichments, burnups, and cooling times). It modelled a 17x17 Westinghouse pressurized water reactor (PWR) fuel assembly with four regions per fuel pin and the maximum number of fission products that could still be represented; the number of fission products tracked was limited by the available memory. Studies have since indicated that additional fission product inclusion and asymmetric burning of the assembly are desired. Thus, an updated SFL has been developed using an enhanced version of MCNPX, more powerful computing resources, and the Monte Carlo-based burnup code Monteburns, which links MCNPX to a depletion code and models a representative 1/8 core geometry containing one region per fuel pin in the assemblies of interest, including a majority of the fission products with available cross sections. Often in safeguards, the limiting factor in the accuracy of NDA instruments is the quality of the working standard used in calibration. In the case of SF this is anticipated to also be true, particularly for several of the neutron techniques. The fissile isotopes of interest are co-mingled with neutron absorbers that alter the measured count rate. This paper will quantify how well working standards can be generated for PWR spent fuel assemblies and also describe the spatial plutonium distribution across an assembly. More specifically, we will demonstrate how Monte Carlo gamma measurement simulations and a Monte Carlo burnup code can be used to characterize the emitted gamma

  8. Effects of herbage intake on goat performance in the mediterranean type natural pastures.

    Science.gov (United States)

    Hakyemez, Basri H; Gokkus, Ahmet; Savas, Turker; Yurtman, Ismail Y

    2009-02-01

    This study aimed at identifying changes in natural pastures during the grazing season and investigating the effects of these changes on pasture feeding potential for high yielding dairy goats. During the study, 12 dairy goats were grazed on a 1.5 ha natural pasture for three months from April to June in 2003, 2004 and 2005. The goats were fed 0.5 kg/day of concentrate as a supplement during the grazing season. Botanical composition, herbage production and intake, crude protein (CP), neutral detergent fiber (NDF) and acid detergent fiber (ADF) contents of the pasture were determined. Live weight, milk yield, milk dry matter (DM) and fat content of the goats were monitored. The data were analyzed using a linear model, which evaluated the effects of grazing seasons in each year. Based on the three-year average, 87% of the pasture was herbaceous plants and the remainder was shrubs on a DM basis, with Cistus creticus, Quercus ithaburensis, Pistacia atlantica and Asparagus acutifolius being the major shrub species. The herbage yield in June was significantly lower than in other months in all years (P = 0.001). In all experimental years, the CP content of the pasture decreased but the structural carbohydrates increased as the grazing season proceeded. While live weight was not affected by grazing periods except in 2004 (P = 0.001), milk yield significantly decreased with advancing grazing period (P = 0.001). The results of the present study indicate that natural pasture has a supportive effect in April and May on the milk yield of lactating goats in mid-lactation, and suggest that supplementary feeding is required in subsequent grazing periods.

  9. Quantifying Energy and Mass Fluxes Controlling Godthåbsfjord Freshwater Input in a 5-km Simulation (1991–2012)

    DEFF Research Database (Denmark)

    Langen, P.L.; Mottram, R.H.; Christensen, J.H.

    2015-01-01

    Freshwater runoff to fjords with marine-terminating glaciers along the Greenland Ice Sheet margin has an impact on fjord circulation and potentially ice sheet mass balance through increasing heat transport to the glacier front. Here, the authors use the high-resolution (5.5 km) HIRHAM5 regional ... with observations (typically >0.9), there are biases that impact the results. In particular, overestimated albedo leads to underestimation of melt and runoff at low elevations. In the model simulation (1991–2012), the ice sheet experiences increasing energy input from the surface turbulent heat flux (up to elevations of 2000 m) and shortwave radiation (at all elevations). Southerly wind anomalies and declining cloudiness due to an increase in atmospheric pressure over north Greenland contribute to increased summer melt. This results in declining surface mass balance (SMB), increasing surface runoff, and upward ...

  10. The African American Women and Mass Media (AAMM) campaign in Georgia: quantifying community response to a CDC pilot campaign.

    Science.gov (United States)

    Hall, Ingrid J; Johnson-Turbes, Ashani; Berkowitz, Zahava; Zavahir, Yasmine

    2015-05-01

    To evaluate whether a culturally appropriate campaign using "Black radio" and print media increased awareness and utilization of local mammography screening services provided by the Centers for Disease Control and Prevention's National Breast and Cervical Cancer Early Detection Program among African American women. The evaluation used a quasi-experimental design involving data collection during and after campaign implementation in two intervention sites in GA (Savannah with radio and print media and Macon with radio only) and one comparison site (Columbus, GA). We used descriptive statistics to compare mammography uptake for African American women during the initial months of the campaign (8/08-1/09) with the latter months (2/09-8/09) and a post-campaign (9/09-12/09) period in each of the study sites. Comparisons of monthly mammogram uptake between cities were performed with multinomial logistic regression, with a p value <0.05 considered significant. Mammogram uptake increased from the initial months of the campaign to the later period. However, the increase did not persist in the post-campaign period. Analysis comparing monthly mammogram uptake in Savannah and Macon with Columbus showed a significant increase in uptake from the first to the second period in Savannah only (OR 1.269, 95% CI (1.005-1.602), p = 0.0449). Dissemination of health promotion messages via a culturally appropriate, multicomponent campaign using Black radio and print media was effective in increasing mammogram uptake in Savannah among low-income, African American women. Additional research is needed to quantify the relative contribution of the campaign's radio, print media, and community components to sustain increased mammography uptake.

  11. Quantifying Contribution of Synthrophic Acetate Oxidation to Methane Production in Thermophilic Anaerobic Reactors by Membrane Inlet Mass Spectrometry

    DEFF Research Database (Denmark)

    Mulat, Daniel Girma; Ward, Alastair James; Adamsen, Anders Peter S.

    2014-01-01

    A unique method was developed and applied for monitoring methanogenesis pathways based on isotope labeled substrates combined with online membrane inlet quadrupole mass spectrometry (MIMS). In our study, a fermentation sample from a full-scale biogas plant fed with pig and cattle manure, maize...... silage, and deep litter was incubated with 100 mM of [2-13C] sodium acetate under thermophilic anaerobic conditions. MIMS was used to measure the isotopic distribution of dissolved CO2 and CH4 during the degradation of acetate, while excluding interference from water by applying a cold trap. After 6 days...... a new approach for online quantification of the relative contribution of methanogenesis pathways to methane production with a time resolution shorter than one minute. The observed contribution of SAO-HM to methane production under the tested conditions challenges the current widely accepted anaerobic...

  12. Quantifying the Role of Circulating Unconjugated Estradiol in Mediating the Body Mass Index-Breast Cancer Association.

    Science.gov (United States)

    Schairer, Catherine; Fuhrman, Barbara J; Boyd-Morin, Jennifer; Genkinger, Jeanine M; Gail, Mitchell H; Hoover, Robert N; Ziegler, Regina G

    2016-01-01

    Higher body mass index (BMI) and circulating estrogen levels each increase postmenopausal breast cancer risk, particularly estrogen receptor-positive (ER+) tumors. Higher BMI also increases estrogen production. We estimated the proportion of the BMI-ER+ breast cancer association mediated through estrogen in a case-control study nested within the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial. Participants included 143 women with invasive ER+ breast cancer and 268 matched controls, all postmenopausal and never having used hormone therapy at baseline. We used liquid chromatography-tandem mass spectrometry to measure 15 estrogens and estrogen metabolites in baseline serum. We calculated BMI from self-reported height and weight at baseline. We estimated the mediating effect of unconjugated estradiol on the BMI-ER+ breast cancer association using Aalen additive hazards and Cox regression models. All estrogens and estrogen metabolites were statistically significantly correlated with BMI, with unconjugated estradiol most strongly correlated [Pearson correlation (r) = 0.45]. Approximately 7% to 10% of the effect of overweight, 12% to 15% of the effect of obesity, and 19% to 20% of the effect of a 5 kg/m2 BMI increase on ER+ breast cancer risk was mediated through unconjugated estradiol. The BMI-breast cancer association, once adjusted for unconjugated estradiol, was not modified by further adjustment for two metabolic ratios statistically significantly associated with both breast cancer and BMI. Circulating unconjugated estradiol levels partially mediate the BMI-breast cancer association, but other potentially important estrogen mediators (e.g., bioavailable estradiol) were not evaluated. Further research is required to identify mechanisms underlying the BMI-breast cancer association. ©2015 American Association for Cancer Research.
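
    The record estimates that roughly 7% to 20% of the BMI effect is mediated through unconjugated estradiol. The simplest illustration of a proportion-mediated calculation is the "difference method" shown below; the study itself used Aalen additive hazards and Cox models, and the coefficients here are hypothetical:

```python
def proportion_mediated(total_effect, direct_effect):
    """Difference-method estimate: the share of the total exposure
    effect that flows through the mediator (indirect / total)."""
    indirect_effect = total_effect - direct_effect
    return indirect_effect / total_effect

# hypothetical effects of a 5 kg/m2 BMI increase on breast cancer risk:
# total effect 0.20; direct effect 0.16 with estradiol held fixed
pm = proportion_mediated(0.20, 0.16)  # ~0.20, i.e. ~20% mediated
```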

  13. Proteome remodelling during development from blood to insect-form Trypanosoma brucei quantified by SILAC and mass spectrometry

    Directory of Open Access Journals (Sweden)

    Gunasekera Kapila

    2012-10-01

    Background: Trypanosoma brucei is the causative agent of human African sleeping sickness and Nagana in cattle. In addition to being an important pathogen, T. brucei has developed into a model system in cell biology. Results: Using Stable Isotope Labelling of Amino acids in Cell culture (SILAC) in combination with mass spectrometry we determined the abundance of >1600 proteins in the long slender (LS) and short stumpy (SS) mammalian bloodstream form stages relative to the procyclic (PC) insect-form stage. In total we identified 2645 proteins, corresponding to ~30% of the total proteome, and for the first time present a comprehensive overview of relative protein levels in three life stages of the parasite. Conclusions: We can show the extent of pre-adaptation in the SS cells, especially at the level of the mitochondrial proteome. The comparison to a previously published report on monomorphic in vitro grown bloodstream and procyclic T. brucei indicates a loss of stringent regulation, particularly of mitochondrial proteins, in these cells when compared to the pleomorphic in vivo situation. In order to better understand the different levels of gene expression regulation in this organism, we compared mRNA steady-state abundance with the relative protein abundance changes and detected moderate but significant correlation, indicating that trypanosomes possess a significant repertoire of translational and posttranslational mechanisms to regulate protein abundance.

  14. Quantifying Uranium Isotope Ratios Using Resonance Ionization Mass Spectrometry: The Influence of Laser Parameters on Relative Ionization Probability

    Energy Technology Data Exchange (ETDEWEB)

    Isselhardt, Brett H. [Univ. of California, Berkeley, CA (United States)

    2011-09-01

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure relative uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process to provide a distinction between uranium atoms and potential isobars without the aid of chemical purification and separation. We explore the laser parameters critical to the ionization process and their effects on the measured isotope ratio. Specifically, the use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of 235U/238U ratios to decrease laser-induced isotopic fractionation. By broadening the bandwidth of the first laser in a 3-color, 3-photon ionization process from a bandwidth of 1.8 GHz to about 10 GHz, the variation in sequential relative isotope abundance measurements decreased from >10% to less than 0.5%. This procedure was demonstrated for the direct interrogation of uranium oxide targets with essentially no sample preparation. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variation in laser parameters on the measured isotope ratio. This work demonstrates that RIMS can be used for the robust measurement of uranium isotope ratios.

  15. Herbage intake, methane emissions and animal performance of steers grazing dwarf elephant grass v. dwarf elephant grass and peanut pastures.

    Science.gov (United States)

    Andrade, E A; Almeida, E X; Raupp, G T; Miguel, M F; de Liz, D M; Carvalho, P C F; Bayer, C; Ribeiro-Filho, H M N

    2016-10-01

    Management strategies for increasing ruminant legume consumption and mitigating methane emissions from tropical livestock production systems require further study. The aim of this work was to evaluate the herbage intake, animal performance and enteric methane emissions of cattle grazing dwarf elephant grass (DEG) (Pennisetum purpureum cv. BRS Kurumi) alone or DEG with peanut (Arachis pintoi cv. Amarillo). The experimental treatments were the following: DEG pastures receiving nitrogen fertilization (150 kg N/ha as ammonium nitrate) and DEG intercropped with peanut plus an adjacent area of peanut that was accessible to grazing animals for 5 h/day (from 0700 to 1200 h). The animals grazing legume pastures showed greater average daily gain and herbage intake, and shorter morning and total grazing times. Daily methane emissions were greater from the animals grazing legume pastures, whereas methane emissions per unit of herbage intake did not differ between treatments. Allowing animals access to an exclusive area of legumes in a tropical grass-pasture-based system can improve animal performance without increasing methane production per kg of dry matter intake.

  16. Avoiding the pitfalls when quantifying thyroid hormones and their metabolites using mass spectrometric methods: The role of quality assurance.

    Science.gov (United States)

    Richards, Keith; Rijntjes, Eddy; Rathmann, Daniel; Köhrle, Josef

    2017-12-15

    This short review aims to assess the application of basic quality assurance (QA) principles in published thyroid hormone bioanalytical methods using mass spectrometry (MS). The use of tandem MS, in particular linked to liquid chromatography, has become an essential bioanalytical tool for the thyroid hormone research community. Although basic research laboratories do not usually work within the constraints of a quality management system and regulated environment, all of the reviewed publications, to a lesser or greater extent, document the application of QA principles to the MS methods described. After a brief description of the history of MS in thyroid hormone analysis, the article reviews the application of QA to published bioanalytical methods from the perspective of selectivity, accuracy, precision, recovery, instrument calibration, matrix effects, sensitivity and sample stability. During the last decade the emphasis has shifted from developing methods for the determination of L-thyroxine (T4) and 3,3',5-triiodo-L-thyronine (T3), present in blood serum/plasma in the 1-100 nM concentration range, to metabolites such as 3-iodo-L-thyronamine (3-T1AM), 3,5-diiodo-L-thyronine (3,5-T2) and 3,3'-diiodo-L-thyronine (3,3'-T2). These metabolites seem likely to be present in low pM concentrations; consequently, QA parameters such as selectivity and sensitivity become more critical. The authors conclude that improvements, particularly in the areas of analyte selectivity, matrix effect measurement/documentation and analyte recovery, would be beneficial. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Educational differences in postmenopausal breast cancer--quantifying indirect effects through health behaviors, body mass index and reproductive patterns.

    Directory of Open Access Journals (Sweden)

    Ulla Arthur Hvidtfeldt

    Studying mechanisms underlying social inequality in postmenopausal breast cancer is important in order to develop prevention strategies. Standard methods for investigating indirect effects, by comparing crude models to adjusted, are often biased. We applied a new method enabling the decomposition of the effect of educational level on breast cancer incidence into indirect effects through reproductive patterns (parity and age at first birth), body mass index and health behavior (alcohol consumption, physical inactivity, and hormone therapy use). The study was based on a pooled cohort of 6 studies from the Copenhagen area including 33,562 women (1,733 breast cancer cases) aged 50-70 years at baseline. The crude absolute rate of breast cancer was 399 cases per 100,000 person-years. A high educational level compared to low was associated with 74 (95% CI 22-125) extra breast cancer cases per 100,000 person-years at risk. Of these, 26% (95% CI 14%-69%) could be attributed to alcohol consumption. Similar effects were observed for age at first birth (32%; 95% CI 10%-257%), parity (19%; 95% CI 10%-45%), and hormone therapy use (10%; 95% CI 6%-18%). Educational level modified the effect of physical activity on breast cancer. In conclusion, this analysis suggests that a substantial number of the excess postmenopausal breast cancer events among women with a high educational level compared to a low can be attributed to differences in alcohol consumption, use of hormone therapy, and reproductive patterns. Women of high educational level may be more vulnerable to physical inactivity compared to women of low educational level.

  18. Educational Differences in Postmenopausal Breast Cancer – Quantifying Indirect Effects through Health Behaviors, Body Mass Index and Reproductive Patterns

    Science.gov (United States)

    Hvidtfeldt, Ulla Arthur; Lange, Theis; Andersen, Ingelise; Diderichsen, Finn; Keiding, Niels; Prescott, Eva; Sørensen, Thorkild I. A.; Tjønneland, Anne; Rod, Naja Hulvej

    2013-01-01

    Studying mechanisms underlying social inequality in postmenopausal breast cancer is important in order to develop prevention strategies. Standard methods for investigating indirect effects, by comparing crude models to adjusted, are often biased. We applied a new method enabling the decomposition of the effect of educational level on breast cancer incidence into indirect effects through reproductive patterns (parity and age at first birth), body mass index and health behavior (alcohol consumption, physical inactivity, and hormone therapy use). The study was based on a pooled cohort of 6 studies from the Copenhagen area including 33,562 women (1,733 breast cancer cases) aged 50–70 years at baseline. The crude absolute rate of breast cancer was 399 cases per 100,000 person-years. A high educational level compared to low was associated with 74 (95% CI 22–125) extra breast cancer cases per 100,000 person-years at risk. Of these, 26% (95% CI 14%–69%) could be attributed to alcohol consumption. Similar effects were observed for age at first birth (32%; 95% CI 10%–257%), parity (19%; 95%CI 10%–45%), and hormone therapy use (10%; 95% CI 6%–18%). Educational level modified the effect of physical activity on breast cancer. In conclusion, this analysis suggests that a substantial number of the excess postmenopausal breast cancer events among women with a high educational level compared to a low can be attributed to differences in alcohol consumption, use of hormone therapy, and reproductive patterns. Women of high educational level may be more vulnerable to physical inactivity compared to women of low educational level. PMID:24205296

  19. Gradual Accumulation of Heavy Metals in an Industrial Wheat Crop from Uranium Mine Soil and the Potential Use of the Herbage

    Directory of Open Access Journals (Sweden)

    Gerhard Gramss

    2016-10-01

    Testing the quality of heavy-metal (HM) excluder plants from non-remediable metalliferous soils could help to meet the growing demands for food, forage, and industrial crops. Field cultures of the winter wheat cv. JB Asano were therefore established on re-cultivated uranium mine soil (A) and the adjacent non-contaminated soil (C). Twenty elements were determined by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) from soils and plant sections of post-winter seedlings, anthesis-state, and mature plants to record within-plant levels of essential and toxic minerals during ripening and to estimate the (re)use of the soil-A herbage in husbandry and in HM-sensitive fermentations. Non-permissible HM loads (mg∙kg−1∙DW) of soil A in Cd, Cu, and Zn of 40.4, 261, and 2890, respectively, initiated the corresponding phytotoxic concentrations in roots and of Zn in shoots from the seedling state to maturity, as well as of Cd in the foliage of seedlings. At anthesis, shoot concentrations in Ca, Cd, Fe, Mg, Mn, and Zn and in As, Cr, Pb, and U had fallen to a mean of 20%, to increase to 46% during maturation. The respective shoot concentrations in C-grown plants diminished from anthesis (50%) to maturity (27%). They were drastically up/down-regulated at the rachis-grain interface to compose the genetically determined metallome of the grain during mineral relocations from adjacent sink tissues. Soil A caused yield losses of straw and grain down to 47.7% and 39.5%, respectively. Nevertheless, pronounced HM excluder properties made Cd concentrations of 1.6–3.08 mg∙kg−1∙DW in straw and 1.2 in grains the only factors that violated hygiene guidelines for forage (1 mg∙kg−1∙DW). It is estimated that grains and the less-contaminated green herbage from soil A may serve as a forage supplement. Applying soil-A grains of up to 3 and 12 mg∙kg−1∙DW in Cd and Cu, respectively, and the mature straw as bioenergy feedstock could impair the efficacy of ethanol fermentation by Saccharomyces cerevisiae.

  20. Net herbage accumulation rate and crude protein content of Urochloa brizantha cultivars under shade intensities

    Directory of Open Access Journals (Sweden)

    Paulo Roberto de Lima Meirelles

    2013-12-01

    The use of silvopastoral systems is a sustainable alternative for animal production in various regions of Brazil. However, to obtain satisfactory results in these systems, forage species that grow well in the shade must be selected. Tolerance of plants to light restriction, and the correct choice of species considering their nutritional value under these conditions, are of great importance. The study of artificial shading for forage production helps clarify issues related to the behavior of plants under reduced light prior to their use in integrations with forests. The aim of the study was to evaluate the net herbage accumulation rate (HAR) and crude protein content (CP) of Urochloa brizantha cultivars (Marandu and Piatã) under natural light and shading of 30 and 60%. The experiment was conducted at FMVZ - UNESP, Botucatu. The experimental design was a randomized block in a 3 x 2 factorial arrangement (three shading levels: 0, 30 and 60%; two cultivars: Marandu and Piatã) with three replications and repeated measures (3 cuts). Sample collection occurred when the cultivars reached 35 cm in height. The treatments with shading showed shorter cutting intervals than those under full sunlight, because they reached the cut-off criterion in less time (means of 37, 45 and 61 days for the 60% reduction, the 30% reduction and full sun, respectively). A significant (P<0.05) cultivar x shade x cut interaction was observed for the net herbage accumulation rate (HAR). The highest HAR (P<0.05) was observed for cv. Marandu under the 60% light reduction (127 kg/ha/day), due to increased stem production during the first growing cycle. The lowest HAR also occurred for Marandu, but under natural light in the third cut (34 kg/ha/day), due to adverse weather conditions during the growth interval. Shading and cutting significantly (P<0.05) affected CP. The CP percentage of the cultivars showed the highest values (average of 9.27%) under 60% shading.

  1. Comparison of two techniques used for the recovery of third-stage strongylid nematode larvae from herbage.

    Science.gov (United States)

    Krecek, R C; Maingi, N

    2004-07-14

    A laboratory trial to determine the efficacy of two methods in recovering known numbers of third-stage (L3) strongylid nematode larvae from herbage was carried out. Herbage samples consisting almost entirely of star grass (Cynodon aethiopicus) that had no L3 nematode parasitic larvae were collected at Onderstepoort, South Africa. Two-hundred-gram samples were placed in fibreglass fly gauze bags and seeded with third-stage strongylid nematode larvae at 11 different levels of herbage infectivity ranging from 50 to 8000 L3/kg. Eight replicates were prepared for each of the 11 levels of herbage infectivity. Four of these were processed using a modified automatic Speed Queen heavy-duty washing machine at a regular normal cycle, followed by isolation of larvae through centrifugation-flotation in saturated sugar solution. Larvae in the other four samples were recovered by soaking the herbage in water overnight and isolating the larvae with the Baermann technique. There was a strong correlation between the number of larvae recovered using both methods and the number of larvae in the seeded samples, indicating that the two methods give a good indication of changes in the numbers of larvae on pasture if applied in epidemiological studies. The washing machine method recovered higher numbers of larvae than the soaking and Baermann method at all levels of pasture seeding, probably because the machine washed the samples more thoroughly and a sugar centrifugation-flotation step was used. Larval suspensions obtained using the washing machine method were therefore cleaner and thus easier to examine under the microscope. In contrast, the soaking and Baermann method may be more suitable in field-work, especially in places where resources and equipment are scarce, as it is less costly in equipment and less labour intensive. Neither method recovered all the larvae from the seeded samples. The recovery rates for the washing machine method ranged from 18 to 41% while

  2. Toward Quantifying the Mass-Based Hygroscopicity of Individual Submicron Atmospheric Aerosol Particles with STXM/NEXAFS and SEM/EDX

    Science.gov (United States)

    Yancey Piens, D.; Kelly, S. T.; OBrien, R. E.; Wang, B.; Petters, M. D.; Laskin, A.; Gilles, M. K.

    2014-12-01

    The hygroscopic behavior of atmospheric aerosols influences their optical and cloud-nucleation properties, and therefore affects climate. Although changes in particle size as a function of relative humidity have often been used to quantify the hygroscopic behavior of submicron aerosol particles, it has been noted that calculations of hygroscopicity based on size contain error due to particle porosity, non-ideal volume additivity and changes in surface tension. We will present a method to quantify the hygroscopic behavior of submicron aerosol particles based on changes in mass, rather than size, as a function of relative humidity. This method results from a novel experimental approach combining scanning transmission x-ray microscopy with near-edge x-ray absorption fine spectroscopy (STXM/NEXAFS), as well as scanning electron microscopy with energy dispersive x-ray spectroscopy (SEM/EDX) on the same individual particles. First, using STXM/NEXAFS, our methods are applied to aerosol particles of known composition ‒ for instance ammonium sulfate, sodium bromide and levoglucosan ‒ and validated by theory. Then, using STXM/NEXAFS and SEM/EDX, these methods are extended to mixed atmospheric aerosol particles collected in the field at the DOE Atmospheric Radiation Measurement (ARM) Climate Research Facility at the Southern Great Plains sampling site in Oklahoma, USA. We have observed and quantified a range of hygroscopic behaviors which are correlated to the composition and morphology of individual aerosol particles. These methods will have implications for parameterizing aerosol mixing state and cloud-nucleation activity in atmospheric models.
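
    Mass-based measurements of water uptake can be converted into the single-parameter hygroscopicity κ of κ-Köhler theory. A sketch under the simplifying assumptions that the Kelvin (curvature) term is negligible and that volumes are additive; the growth factor and densities below are illustrative, loosely resembling an ammonium-sulfate-like particle, not values from the abstract:

```python
def kappa_from_mass_growth(gf_mass, rh_frac, rho_solute, rho_water=1000.0):
    """Hygroscopicity parameter kappa from a mass growth factor
    gf_mass = m_wet / m_dry, neglecting the Kelvin term so the water
    activity a_w equals the fractional relative humidity.
    Densities are in kg/m3; kappa = (1/a_w - 1) * V_water / V_solute."""
    m_water = gf_mass - 1.0            # water mass per unit dry mass
    v_water = m_water / rho_water      # volumes per unit dry mass
    v_solute = 1.0 / rho_solute
    return (1.0 / rh_frac - 1.0) * v_water / v_solute

# illustrative: mass growth factor 3.70 at 90% RH, solute density 1770 kg/m3
kappa = kappa_from_mass_growth(gf_mass=3.70, rh_frac=0.90, rho_solute=1770.0)
```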

  3. Development of QuEChERS-based extraction and liquid chromatography-tandem mass spectrometry method for quantifying flumethasone residues in beef muscle.

    Science.gov (United States)

    Park, Ki Hun; Choi, Jeong-Heui; Abd El-Aty, A M; Cho, Soon-Kil; Park, Jong-Hyouk; Kwon, Ki Sung; Park, Hee Ra; Kim, Hyung Soo; Shin, Ho-Chul; Kim, Mi Ra; Shim, Jae-Han

    2012-12-01

    A rapid, specific, and sensitive method based on liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS) in the positive ion mode using multiple reaction monitoring (MRM) was developed and validated to quantify flumethasone residues in beef muscle. The original and the EN versions of the quick, easy, cheap, effective, rugged, and safe (QuEChERS) extraction were compared. Good linearity was achieved at concentration levels of 5-30 μg/kg. Estimated recovery rates at spiking levels of 5 and 10 μg/kg ranged from 72.1 to 84.6%, with acceptable relative standard deviations (RSDs); limits of detection and quantification were estimated at signal-to-noise ratios (S/Ns) of 3 and 10, respectively. The method was successfully applied to analyze real samples obtained from large markets throughout the Korean Peninsula. The method proved to be sensitive and reliable and thus renders an appropriate means for residue analysis studies. Copyright © 2012 Elsevier Ltd. All rights reserved.
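
    Recovery and RSD figures like those above are computed from replicate analyses of spiked samples. A minimal sketch with hypothetical replicate values (not the paper's data):

```python
from statistics import mean, stdev

def recovery_and_rsd(measured, spike_level):
    """Mean recovery (%) and relative standard deviation (%) for
    replicate analyses of samples spiked at a known concentration."""
    recoveries = [100 * m / spike_level for m in measured]
    avg = mean(recoveries)
    rsd = 100 * stdev(recoveries) / avg
    return avg, rsd

# hypothetical replicate results for a 10 ug/kg flumethasone spike
avg_recovery, rsd = recovery_and_rsd([8.1, 7.9, 8.4, 8.0], 10.0)
```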

  4. A simple and sensitive approach to quantify methyl farnesoate in whole arthropods by matrix-solid phase dispersion and gas chromatography-mass spectrometry.

    Science.gov (United States)

    Montes, Rosa; Rodil, Rosario; Neuparth, Teresa; Santos, Miguel M; Cela, Rafael; Quintana, José Benito

    2017-07-28

    Methyl farnesoate (MF) is an arthropod hormone that plays a key role in the physiology of several arthropod classes, being implicated in biological processes such as molting and reproduction. The development of an analytical technique to quantify the levels of this compound in biological tissues can be of major importance for the field of aquaculture/apiculture conservation and in endocrine disruption studies. Therefore, the aim of this study was to develop a simple and sensitive method to measure native levels of MF in the tissue of three representative species from different arthropod classes with environmental and/or economic importance. Thus, a new approach using whole organisms and the combination of matrix solid-phase dispersion with gas chromatography coupled to mass spectrometry was developed. This method allows quantifying endogenous MF at low levels (LOQs in the 1.2-3.1 ng/g range) in three arthropod species, and could be expanded to additional arthropod classes. The levels found ranged between 2 and 12 ng/g depending on the studied species and gender. The overall recovery of the method was evaluated and ranged between 69 and 96%. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu [University of Michigan, Department of Astronomy, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States)

    2016-11-20

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
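
    The image-plane rms diagnostic used above is simply the root-mean-square offset between model-predicted and observed positions of the multiple images. A sketch with hypothetical positions:

```python
import math

def image_plane_rms(predicted, observed):
    """Root-mean-square offset between model-predicted and observed
    image positions, each given as a list of (x, y) pairs in the
    same units (e.g. arcsec)."""
    sq_offsets = [(px - ox) ** 2 + (py - oy) ** 2
                  for (px, py), (ox, oy) in zip(predicted, observed)]
    return math.sqrt(sum(sq_offsets) / len(sq_offsets))

# hypothetical positions for two multiple images
rms = image_plane_rms([(0.0, 0.0), (1.0, 1.0)], [(0.3, 0.4), (1.0, 1.0)])
```

    As the abstract cautions, a small rms on the images used as constraints does not by itself guarantee that the model predicts new, unseen multiple images well.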

  6. Quantifying Matter

    CERN Document Server

    Angelo, Joseph A

    2011-01-01

    Quantifying Matter explains how scientists learned to measure matter and quantify some of its most fascinating and useful properties. It presents many of the most important intellectual achievements and technical developments that led to the scientific interpretation of substance. Complete with full-color photographs, this exciting new volume describes the basic characteristics and properties of matter. Chapters include: Exploring the Nature of Matter; The Origin of Matter; The Search for Substance; Quantifying Matter During the Scientific Revolution; Understanding Matter's Electromagnet

  7. Effects of species diversity on seasonal variation in herbage yield and nutritive value of seven binary grass-legume mixtures and pure grass under cutting

    DEFF Research Database (Denmark)

    Elgersma, Anjo; Søegaard, Karen

    2016-01-01

    Intensively managed sown temperate grasslands are generally of low species diversity, although swards based on grass-legume mixtures may have superior productivity and herbage quality to grass-only swards. We conducted a cutting experiment over two years to test the effect of species composition...... and diversity on herbage yield, contents of N, neutral detergent fibre (NDF) and in vitro organic matter digestibility (IVOMD). Perennial ryegrass (PR, Lolium perenne) was sown alone and with each of four forage legumes: red clover (RC, Trifolium pratense), lucerne (LU, Medicago sativa), birdsfoot trefoil (BT......, Lotus corniculatus) and white clover (WC, Trifolium repens); WC was also sown with hybrid ryegrass (HR, Lolium × boucheanum), meadow fescue (MF, Festuca pratensis) and timothy (TI, Phleum pratense). Herbage productivity was lowest in pure PR followed by PR/BT, and highest in PR/RC; this mixture had...

  8. Effect of various copper supplements to feed of laying hens on cu content in eggs, liver, excreta, soil, and herbage.

    Science.gov (United States)

    Skrivan, M; Skrivanová, V; Marounek, M

    2006-02-01

    Copper is often added to poultry diets as an antimicrobial agent at doses greatly exceeding the nutritional requirement. In this study, the basal diet of laying hens containing 9.2 mg Cu/kg was supplemented with CuSO4·5H2O at 0, 25, 65, 115, and 240 mg Cu/kg dry matter (DM). At a Cu dietary concentration just below the level permitted by the European Union (35 mg/kg), the Cu content was significantly (p < 0.05) increased in the egg yolk, eggshell, and liver. When the Cu concentration in the diet was doubled, the effect of Cu on Cu content in eggshell and liver was statistically significant as well. In no liver sample was the hygienic limit of Cu content (80 mg/kg) exceeded. Supplementation of diets with Cu increased Cu concentration in excreta linearly from 25.3 to 396.8 mg/kg DM. Dried excreta were used for fertilization of grassland at 21 g N/m2. Three months later, soil and herbage were sampled and analyzed. The Cu concentration in soil increased from 25.3 to only 46.4 mg/kg DM when dietary Cu concentration rose from 9.2 to 243.7 mg Cu/kg DM. Corresponding Cu concentrations in herbage were 6.8 and 19.2 mg/kg DM. It can be concluded that the deposition of Cu in eggs and liver of hens fed Cu-supplemented diets does not represent a hygienic risk. The accumulation of Cu in soil fertilized with excreta of Cu-fed hens and in herbage was limited.

  9. Validation of an ultra-high-performance liquid chromatography-tandem mass spectrometry method to quantify illicit drug and pharmaceutical residues in wastewater using accuracy profile approach.

    Science.gov (United States)

    Hubert, Cécile; Roosen, Martin; Levi, Yves; Karolak, Sara

    2017-06-02

    The analysis of biomarkers in wastewater has become a common approach to assess community behavior. This method is an interesting way to estimate illicit drug consumption in a given population: by using a back-calculation method, it is possible to quantify the amount of a specific drug used in a community and to assess the consumption variation at different times and locations. Such a method needs reliable analytical data, since the determination of a concentration in the ng L-1 range in a complex matrix is difficult and not easily reproducible. The best analytical method is liquid chromatography-mass spectrometry after solid-phase extraction or on-line pre-concentration. Quality criteria are not specifically defined for this kind of determination. In this context, it was decided to develop an UHPLC-MS/MS method to analyze 10 illicit drugs and pharmaceuticals in wastewater treatment plant influent or effluent using a pre-concentration on-line system. A validation process was then carried out using the accuracy profile concept as an innovative tool to estimate the probability of getting prospective results within specified acceptance limits. Influent and effluent samples were spiked with known amounts of the 10 compounds and analyzed three times a day for three days in order to estimate intra-day and inter-day variations. The matrix effect was estimated for each compound. The developed method can provide at least 80% of results within ±25% limits except for compounds that are degraded in influent. Copyright © 2017 Elsevier B.V. All rights reserved.
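
    The back-calculation mentioned above converts a biomarker concentration in raw wastewater into a per-capita consumption estimate. A sketch of the standard wastewater-based-epidemiology formula; the excretion fraction, molar-mass ratio, and input values below are illustrative assumptions, not values from the paper:

```python
def consumption_mg_per_day_per_1000(conc_ng_l, flow_l_day, population,
                                    excretion_frac, mw_ratio=1.0):
    """Back-calculate drug consumption (mg/day per 1000 inhabitants)
    from a urinary biomarker concentration in raw wastewater.
    mw_ratio = molar mass of parent drug / molar mass of biomarker."""
    load_mg_day = conc_ng_l * flow_l_day * 1e-6         # ng -> mg
    parent_mg_day = load_mg_day * mw_ratio / excretion_frac
    return parent_mg_day / (population / 1000.0)

# illustrative inputs: 500 ng/L biomarker, 2e7 L/day plant inflow,
# 100,000 inhabitants, 45% of the dose excreted as this biomarker
est = consumption_mg_per_day_per_1000(500, 2e7, 100_000, 0.45, mw_ratio=1.05)
```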

  10. Quantifying the mediating effect of body mass index in the relationship between a Mediterranean diet and development of maternal pregnancy complications: Australian Longitudinal Study on Women’s Health

    NARCIS (Netherlands)

    Schoenaker, D.A.J.M.; Soedamah-Muthu, S.S.; Mishra, G.D.

    2016-01-01

    Background: The contribution of body mass index (BMI) to the observed associations between dietary patterns and risk of gestational diabetes mellitus (GDM) and hypertensive disorders of pregnancy (HDP) remains unclear. Objective: The objective of this study was to formally quantify the mediating

  11. e-Cow: an animal model that predicts herbage intake, milk yield and live weight change in dairy cows grazing temperate pastures, with and without supplementary feeding.

    Science.gov (United States)

    Baudracco, J; Lopez-Villalobos, N; Holmes, C W; Comeron, E A; Macdonald, K A; Barry, T N; Friggens, N C

    2012-06-01

    This animal simulation model, named e-Cow, represents a single dairy cow at grazing. The model integrates algorithms from three previously published models: a model that predicts herbage dry matter (DM) intake by grazing dairy cows, a mammary gland model that predicts potential milk yield and a body lipid model that predicts genetically driven live weight (LW) and body condition score (BCS). Both nutritional and genetic drives are accounted for in the prediction of energy intake and its partitioning. The main inputs are herbage allowance (HA; kg DM offered/cow per day), metabolisable energy and NDF concentrations in herbage and supplements, supplements offered (kg DM/cow per day), type of pasture (ryegrass or lucerne), days in milk, days pregnant, lactation number, BCS and LW at calving, breed or strain of cow and genetic merit, that is, potential yields of milk, fat and protein. Separate equations are used to predict herbage intake, depending on the cutting heights at which HA is expressed. The e-Cow model is written in Visual Basic programming language within Microsoft Excel®. The model predicts whole-lactation performance of dairy cows on a daily basis, and the main outputs are the daily and annual DM intake, milk yield and changes in BCS and LW. In the e-Cow model, neither herbage DM intake nor milk yield or LW change are needed as inputs; instead, they are predicted by the e-Cow model. The e-Cow model was validated against experimental data for Holstein-Friesian cows with both North American (NA) and New Zealand (NZ) genetics grazing ryegrass-based pastures, with or without supplementary feeding and for three complete lactations, divided into weekly periods. The model was able to predict animal performance with satisfactory accuracy, with concordance correlation coefficients of 0.81, 0.76 and 0.62 for herbage DM intake, milk yield and LW change, respectively. Simulations performed with the model showed that it is sensitive to genotype by feeding environment

  12. Quantifying Transmission.

    Science.gov (United States)

    Woolhouse, Mark

    2017-07-01

    Transmissibility is the defining characteristic of infectious diseases. Quantifying transmission matters for understanding infectious disease epidemiology and designing evidence-based disease control programs. Tracing individual transmission events can be achieved by epidemiological investigation coupled with pathogen typing or genome sequencing. Individual infectiousness can be estimated by measuring pathogen loads, but few studies have directly estimated the ability of infected hosts to transmit to uninfected hosts. Individuals' opportunities to transmit infection are dependent on behavioral and other risk factors relevant given the transmission route of the pathogen concerned. Transmission at the population level can be quantified through knowledge of risk factors in the population or phylogeographic analysis of pathogen sequence data. Mathematical model-based approaches require estimation of the per capita transmission rate and basic reproduction number, obtained by fitting models to case data and/or analysis of pathogen sequence data. Heterogeneities in infectiousness, contact behavior, and susceptibility can have substantial effects on the epidemiology of an infectious disease, so estimates of only mean values may be insufficient. For some pathogens, super-shedders (infected individuals who are highly infectious) and super-spreaders (individuals with more opportunities to transmit infection) may be important. Future work on quantifying transmission should involve integrated analyses of multiple data sources.
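
    As a minimal illustration of the model-fitting route described above, the sketch below estimates the epidemic growth rate r from a toy case series and applies the textbook approximation R0 ≈ 1 + r·Tg; the case counts and generation time are invented, and real estimates would fit a full transmission model to data:

```python
import math

def growth_rate(cases):
    """Least-squares slope of log(cases) against day: the epidemic's
    exponential growth rate r (per day)."""
    days = range(len(cases))
    logs = [math.log(c) for c in cases]
    n = len(cases)
    mean_t = sum(days) / n
    mean_y = sum(logs) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in zip(days, logs))
    den = sum((t - mean_t) ** 2 for t in days)
    return num / den

def r0_from_growth(r, generation_time):
    """Classic approximation R0 ~ 1 + r * Tg for an SIR-type process."""
    return 1 + r * generation_time

cases = [2, 4, 8, 16, 32]            # doubling daily -> r = ln 2
r = growth_rate(cases)
print(round(r0_from_growth(r, generation_time=5.0), 2))  # prints 4.47
```

    Because this uses only mean quantities, it ignores exactly the heterogeneities (super-shedders, super-spreaders) the abstract warns about.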

  13. Mass

    International Nuclear Information System (INIS)

    Quigg, Chris

    2007-01-01

    In the classical physics we inherited from Isaac Newton, mass does not arise, it simply is. The mass of a classical object is the sum of the masses of its parts. Albert Einstein showed that the mass of a body is a measure of its energy content, inviting us to consider the origins of mass. The protons we accelerate at Fermilab are prime examples of Einsteinian matter: nearly all of their mass arises from stored energy. Missing mass led to the discovery of the noble gases, and a new form of missing mass leads us to the notion of dark matter. Starting with a brief guided tour of the meanings of mass, the colloquium will explore the multiple origins of mass. We will see how far we have come toward understanding mass, and survey the issues that guide our research today.

  14. Effect of four plant species on soil 15N-access and herbage yield in temporary agricultural grasslands

    DEFF Research Database (Denmark)

    Pirhofter-Walzl, Karin; Eriksen, Jørgen; Rasmussen, Jim

    2013-01-01

    …access to greater amounts of soil 15N compared with a shallow-rooting binary mixture, and if leguminous plants affect herbage yield and soil 15N-access. Methods: 15N-enriched ammonium-sulphate was placed at three different soil depths (0.4, 0.8 and 1.2 m) to determine the depth-dependent soil 15N… This positive plant diversity effect could not be explained by complementary soil 15N-access of the different plant species from 0.4, 0.8 and 1.2 m soil depths, even though deep-rooting chicory acquired relatively large amounts of deep soil 15N and shallow-rooting perennial ryegrass when grown in a mixture…

  15. Effects of Grazing Abandoned Grassland on Herbage Production and Utilization, and Sheep Preference and Performance

    Directory of Open Access Journals (Sweden)

    Håvard Steinshamn

    2018-05-01

    Full Text Available Large areas of farmland have been abandoned in Norway, which for various reasons is regarded as undesirable. Loss of farmland may have negative implications for biodiversity, ecosystem function and food production potential. The objectives of this study were to assess forage mass production and utilization, botanical composition, lamb performance, and grazing distribution pattern when reintroducing livestock grazing to an abandoned grassland. The study area, located in Central Norway, had been unmanaged for 12 years. Sheep grazed the area for 10 weeks in 2013, and for 4 weeks in spring and autumn, respectively, in 2014 and 2015. During the summers of 2014 and 2015, the area was subjected to the following replicated treatments: (1) no grazing, (2) grazing with heifers, and (3) grazing with ewes and their offspring. The stocking rate was similar in the grazed treatments. Forage biomass production and animal intake were estimated using grazing exclosure cages, and botanical composition by visual assessment. The effect on lamb performance was evaluated by live weight gain and slaughter traits in sheep subjected to three treatments: (1) common farm procedure with summer range pasturing, (2) spring grazing period extended by 1 month on the abandoned grassland before summer range pasturing, and (3) spring and summer grazing on the abandoned grassland. Grazing distribution patterns were studied using GPS position collars on ewes. Total annual biomass production was on average 72% higher with summer grazing than without. Annual consumption and utilization were on average 218 g DM/m² and 70% when summer grazed, and 25 g DM/m² and 18% without grazing, respectively. Botanical composition did not differ between treatments. Live weight gain was higher in lambs subjected to an extended spring grazing period (255 g/d) compared to common farm practice (228 g/d) and spring and summer grazing on the abandoned grassland (203 g/d), and carcass value was 14% higher in lambs on extended spring

  16. Herbage Production, Nutritive Value and Grazing Preference of Diploid and Tetraploid Perennial Ryegrass Cultivars (Lolium perenne L.)

    Directory of Open Access Journals (Sweden)

    Oscar A Balocchi

    2009-09-01

    The objective of this study was to determine, under the soil and climatic conditions of southern Chile, the effect of the ploidy of perennial ryegrass (Lolium perenne L.) cultivars on herbage production, nutritive value, grazing preference and utilization of the pasture produced. The study was conducted in southern Chile, Valdivia Province, and was evaluated for 3 years. The tetraploid cultivars used were Quartet (4n), Gwendal (4n), Pastoral (4n) and Napoleon (4n); the diploid cultivars were Anita (2n), Jumbo (2n), Aries (2n), and Yatsyn 1 (2n). When the average sward height reached 20 cm, all plots were simultaneously grazed by dairy cows for a period of 24 h. Before and after grazing, sward height, dry matter availability and nutritive value were evaluated. Grazing preference was visually assessed every 5 min for a period of 2.5 h after the afternoon milking. During the 3-year period, 20 grazing events were evaluated. A randomized complete block design, with eight cultivars and three replicates, was used. Diploid cultivars showed greater herbage mass accumulation than tetraploid cultivars (P ≤ 0.05). No significant differences were obtained in the annual average crude protein content. Nevertheless, tetraploid cultivars showed a greater D value than diploid cultivars, except during the third year, when the difference was not statistically significant. Dairy cows grazed for more time on tetraploid cultivars. Considering, additionally, the residual herbage mass after grazing and the percentage of pasture utilization, diploid cultivars were less intensively grazed, suggesting a lower consumption by the cows.

  17. A novel approach to quantifying the sensitivity of current and future cosmological datasets to the neutrino mass ordering through Bayesian hierarchical modeling

    Science.gov (United States)

    Gerbino, Martina; Lattanzi, Massimiliano; Mena, Olga; Freese, Katherine

    2017-12-01

    We present a novel approach to derive constraints on neutrino masses, as well as on other cosmological parameters, from cosmological data, while taking into account our ignorance of the neutrino mass ordering. We derive constraints from a combination of current as well as future cosmological datasets on the total neutrino mass Mν and on the mass fractions fν,i = mi/Mν (where the index i = 1, 2, 3 indicates the three mass eigenstates) carried by each of the mass eigenstates mi, after marginalizing over the (unknown) neutrino mass ordering, either normal ordering (NH) or inverted ordering (IH). The bounds on all the cosmological parameters, including those on the total neutrino mass, therefore take into account the uncertainty related to our ignorance of the mass hierarchy that is actually realized in nature. This novel approach is carried out in the framework of Bayesian analysis of a typical hierarchical problem, where the distribution of the parameters of the model depends on further parameters, the hyperparameters. In this context, the choice of the neutrino mass ordering is modeled via the discrete hyperparameter h_type, which we introduce in the usual Markov chain analysis. The preference from cosmological data for either the NH or the IH scenario is then simply encoded in the posterior distribution of the hyperparameter itself. Current cosmic microwave background (CMB) measurements assign equal odds to the two hierarchies, and are thus unable to distinguish between them. However, after the addition of baryon acoustic oscillation (BAO) measurements, a weak preference for the normal hierarchical scenario appears, with odds of 4:3 from Planck temperature and large-scale polarization in combination with BAO (3:2 if small-scale polarization is also included). Concerning next-generation cosmological experiments, forecasts suggest that the combination of upcoming CMB (COrE) and BAO surveys (DESI) may determine the neutrino mass hierarchy at a high statistical
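
    A toy version of the discrete-hyperparameter idea can be sketched as follows: the evidence for each ordering is obtained by marginalizing a likelihood for the total mass over a prior whose lower edge is that ordering's oscillation floor (~0.06 eV for NH, ~0.10 eV for IH). All numbers below (data value, width, prior range) are invented for illustration and are not the paper's analysis:

```python
import numpy as np

def evidence(m_min, m_data=0.08, sigma=0.05, m_max=1.0, n=200_000):
    """Marginal likelihood of a Gaussian 'measurement' of M_nu under a
    flat prior on [m_min, m_max], via a simple Riemann-sum integral."""
    m = np.linspace(m_min, m_max, n)
    prior = 1.0 / (m_max - m_min)                    # flat prior density
    like = np.exp(-0.5 * ((m - m_data) / sigma) ** 2)
    return np.sum(like * prior) * (m[1] - m[0])

z_nh = evidence(0.06)   # normal ordering: prior starts at ~0.06 eV
z_ih = evidence(0.10)   # inverted ordering: prior starts at ~0.10 eV
# Posterior odds NH:IH under equal prior odds on the hyperparameter.
print(round(z_nh / z_ih, 2))
```

    With equal prior odds on the two orderings, the posterior probability of each is just its evidence divided by the sum, which is what the posterior of the discrete hyperparameter encodes.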

  18. Effect of sprouted barley grain supplementation of an herbage-based or haylage-based diet on ruminal fermentation and methane output in continuous culture.

    Science.gov (United States)

    Hafla, A N; Soder, K J; Brito, A F; Rubano, M D; Dell, C J

    2014-12-01

    A 4-unit dual-flow continuous-culture fermentor system was used to assess the effect of supplementing 7-d sprouted barley (SB) or barley grain (BG) with an herbage-based or haylage-based diet on nutrient digestibility, volatile fatty acid (VFA) profiles, bacterial protein synthesis, and methane (CH4) output. Treatments were randomly assigned to fermentors in a 4 × 4 Latin square design with a 2 × 2 factorial arrangement, using 7 d for diet adaptation and 3 d for sample collection. Experimental diets were (1) 55.5 g of herbage dry matter (DM) + 4.5 g of SB DM, (2) 56.0 g of herbage DM + 4.0 g of BG DM, (3) 55.5 g of haylage DM + 4.5 g of SB DM, and (4) 56.0 g of haylage DM + 4.0 g of BG DM. Forages were fed at 0730, 1030, 1400, and 1900 h, whereas SB and BG were fed at 0730 and 1400 h. Gas samples for CH4 analysis were collected at 0725, 0900, 1000, 1355, 1530, and 1630 h on d 8, 9, and 10. Fluid samples were taken once daily on d 8, 9, and 10 for pH measurements and for ammonia-N and VFA analyses, and effluents were analyzed for DM, organic matter, crude protein, neutral detergent fiber, and acid detergent fiber for determination of nutrient digestibilities and estimation of bacterial protein synthesis. Orthogonal contrasts were used to compare the effect of forage source (haylage vs. herbage), supplement (BG vs. SB), and the forage × supplement interaction. Apparent and true DM and organic matter digestibilities, as well as apparent crude protein digestibility, were not affected by forage source. However, true DM digestibility was greatest for diets supplemented with SB. Apparent neutral and acid detergent fiber digestibilities of herbage-based diets were higher than those of haylage-based diets, but fiber digestibility was not affected by supplement. Diets supplemented with SB had higher mean and minimum pH than BG; however, maximum pH was not affected by diet. Supplementation with BG produced a greater concentration of total VFA compared with diets supplemented with SB. Haylage

  19. Prognosis of the accumulation of 137Cs and 90Sr in the herbage of the main types of the Belarus Polesje meadows utilizing agrochemical properties

    International Nuclear Information System (INIS)

    Podolyak, A.G.; Bogdevich, I.M.; Ivashkova, I.I.

    2007-01-01

    On the basis of long-term stationary experiments it was established that the minimum accumulation of 137Cs and 90Sr in the herbage of the dry-valley, marshed and flood types of the Belarus Polesje meadows contaminated by Chernobyl radionuclides is seen when the optimum basic agrochemical soil properties are achieved through the application of scientifically justified protective measures. It was demonstrated that, in the remote period after the accident, it is advisable to predict the radionuclide content of natural and cultivated meadow herbage using transfer factors (TFa, (Bq/kg)/(kBq/m²)) based on complex agrochemical parameters - the base saturation degree (V, %) and the agrochemical soil cultivation index (Iac) - which take several soil characteristics into account simultaneously. This article provides the equations of linear and multiple regressions that can be used to calculate the transfer factors for 137Cs and 90Sr uptake and the herbage contamination degree for the main types of meadows of the region, which will allow one to reduce the volume of forage production (hay, green bulk) that does not comply with the established permissible levels: the Republican allowable levels for the content of cesium-137 and strontium-90 in agricultural raw material and forages

  20. Forecasting of the accumulation of 137Cs and 90Sr in the herbage of the main types of the Belarus Palessje meadows utilizing agrochemical soil properties

    International Nuclear Information System (INIS)

    Podolyak, A.G.; Bogdevich, I.M.; Ivashkova, I.I.

    2007-01-01

    On the basis of long-term stationary experiments it was established that the minimum accumulation of 137Cs and 90Sr in the herbage of the dry-valley, marshed and flood types of the Belarus Palessje meadows contaminated by Chernobyl radionuclides is seen when the optimum basic agrochemical soil properties are achieved through the application of scientifically justified protective measures. It was demonstrated that, in the remote period after the accident, it is advisable to predict the radionuclide content of natural and cultivated meadow herbage using transfer factors (TFa, (Bq/kg)/(kBq/m²)) based on complex agrochemical parameters - the base saturation degree (V, %) and the agrochemical soil cultivation index (Iac) - which take several soil characteristics into account simultaneously. This article provides the equations of linear and multiple regressions that can be used to calculate the transfer factors for 137Cs and 90Sr uptake and the herbage contamination degree for the main types of meadows of the region, which will allow one to reduce the volume of forage production (hay, green bulk) that does not comply with the established permissible levels: the Republican allowable levels for the content of cesium-137 and strontium-90 in agricultural raw material and forages. (authors)
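
    Applying an aggregated transfer factor of this kind is a single multiplication of deposition density by TFa, followed by a comparison against a permissible level; a minimal sketch with invented TFa, deposition and limit values (not taken from the article):

```python
def herbage_activity(tfa, deposition_kbq_m2):
    """Herbage contamination (Bq/kg) from an aggregated transfer factor
    TFa ((Bq/kg)/(kBq/m2)) and the soil deposition density (kBq/m2)."""
    return tfa * deposition_kbq_m2

# Hypothetical numbers: TFa = 1.8 for 137Cs on an unimproved meadow,
# deposition 185 kBq/m2, and an illustrative feed limit of 1300 Bq/kg.
activity = herbage_activity(1.8, 185)
print(round(activity, 1), activity <= 1300)
```

    The regressions in the article refine TFa as a function of V and Iac; the multiplication step itself stays the same.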

  1. A novel approach to quantifying the sensitivity of current and future cosmological datasets to the neutrino mass ordering through Bayesian hierarchical modeling

    Directory of Open Access Journals (Sweden)

    Martina Gerbino

    2017-12-01

    We present a novel approach to derive constraints on neutrino masses, as well as on other cosmological parameters, from cosmological data, while taking into account our ignorance of the neutrino mass ordering. We derive constraints from a combination of current as well as future cosmological datasets on the total neutrino mass Mν and on the mass fractions fν,i = mi/Mν (where the index i = 1, 2, 3 indicates the three mass eigenstates) carried by each of the mass eigenstates mi, after marginalizing over the (unknown) neutrino mass ordering, either normal ordering (NH) or inverted ordering (IH). The bounds on all the cosmological parameters, including those on the total neutrino mass, therefore take into account the uncertainty related to our ignorance of the mass hierarchy that is actually realized in nature. This novel approach is carried out in the framework of Bayesian analysis of a typical hierarchical problem, where the distribution of the parameters of the model depends on further parameters, the hyperparameters. In this context, the choice of the neutrino mass ordering is modeled via the discrete hyperparameter h_type, which we introduce in the usual Markov chain analysis. The preference from cosmological data for either the NH or the IH scenario is then simply encoded in the posterior distribution of the hyperparameter itself. Current cosmic microwave background (CMB) measurements assign equal odds to the two hierarchies, and are thus unable to distinguish between them. However, after the addition of baryon acoustic oscillation (BAO) measurements, a weak preference for the normal hierarchical scenario appears, with odds of 4:3 from Planck temperature and large-scale polarization in combination with BAO (3:2 if small-scale polarization is also included). Concerning next-generation cosmological experiments, forecasts suggest that the combination of upcoming CMB (COrE) and BAO surveys (DESI) may determine the neutrino mass hierarchy at a high

  2. Methodology to detect and quantify the presence of recycled PET in bottle-grade PET blends: mass spectrometry (MALDI-TOF) and X-ray fluorescence

    International Nuclear Information System (INIS)

    Romao, Wanderson; Franco, Marcos F.; Gozzo, Fabio C.; Iglesias, Amadeu H.; Sanvido, Gustavo B.; Eberlin, Marcos N.; Bueno, Maria I.M.S.; Maretto, Danilo A.; Poppi, Ronei J.; Paoli, Marco-Aurelio de

    2009-01-01

    New methodologies were developed to detect and to quantify the presence of bottle-grade post-consumption PET (PETpc-btg) in bottle-grade virgin PET (PETv-btg), preventing fraud and illegal use of recycled PETpc-btg. MALDI-MS results together with PCA (principal component analysis) were used to classify the samples into several groups: intrinsic viscosity changes; processed or not submitted to an industrial process; wt% PETpc-btg in the PETv-btg; synthesis process change (manufacturer). From these results, it was possible to create a calibration model that differentiated between PETv-btg and PETpc-btg resins. XRF results show that some manufacturers use one or more catalysts for PETv-btg synthesis, so our prediction model is valid only when the studied resin is known. We also observed that the Fe concentration in PET increases as a function of the recycling process. Therefore, this variable could be used in future work to create chemometric models including a higher number of variables. (author)

  3. Supercritical fluid chromatography coupled with tandem mass spectrometry: A high-efficiency detection technique to quantify Taxane drugs in whole-blood samples.

    Science.gov (United States)

    Jin, Chan; Guan, Jibin; Zhang, Dong; Li, Bing; Liu, Hongzhuo; He, Zhonggui

    2017-10-01

    We present a technique to rapidly determine taxanes in blood samples by supercritical fluid chromatography coupled with mass spectrometry. The aim of this study was to develop a supercritical fluid chromatography with mass spectrometry method for the analysis of paclitaxel, cabazitaxel, and docetaxel in whole-blood samples of rats. Liquid-dry matrix spot extraction was selected as the sample preparation procedure. Supercritical fluid chromatography separation of paclitaxel, cabazitaxel, docetaxel, and glyburide (internal standard) was accomplished within 3 min using a gradient mobile phase consisting of methanol as the compensation solvent and carbon dioxide at a flow rate of 1.0 mL/min. The method was validated regarding specificity, the lower limit of quantification, repeatability and reproducibility of quantification, extraction recovery, and matrix effects. The lower limit of quantification was found to be 10 ng/mL, since it exhibited acceptable precision and accuracy at the corresponding level. All interday accuracies and precisions were within the accepted criteria of ±15% of the nominal value, and within ±20% at the lower limit of quantification, implying that the method was reliable and reproducible. In conclusion, this method is a promising tool to support and improve preclinical or clinical pharmacokinetic studies of taxane anticancer drugs. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
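
    The ±15% (±20% at the LLOQ) acceptance criterion quoted above can be expressed as a simple accuracy check on each QC level; a sketch with invented replicate measurements:

```python
def passes_validation(nominal, measured, is_lloq=False):
    """Accept a QC level if the mean accuracy is within +/-15% of the
    nominal value (+/-20% at the LLOQ), the criterion in the abstract."""
    limit = 0.20 if is_lloq else 0.15
    mean = sum(measured) / len(measured)
    bias = abs(mean - nominal) / nominal
    return bias <= limit

# Invented replicates at the LLOQ (10 ng/mL) and at a mid QC (50 ng/mL).
print(passes_validation(10.0, [8.4, 9.1, 8.3], is_lloq=True))   # bias ~14% -> True
print(passes_validation(50.0, [60.1, 59.4, 61.0]))              # bias ~20% -> False
```

    A full validation would also test precision (CV of the replicates) per level, but the accept/reject logic has this same shape.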

  4. In sacco dry matter disappearance of herbage and maize meal from ...

    African Journals Online (AJOL)

    protein (fish meal or non-protein nitrogen) via rumen fistulae evinced no consistent trends in diet selection, feed intake or body mass changes of Dorper and Merino wethers grazed on native pasture (veld) at Glen. Subsequently, De Waal & Biel (1989a; 1989b; 1989c) studied the effects of supplementary energy and/or crude ...

  5. A Bayesian approach to quantifying the effects of mass poultry vaccination upon the spatial and temporal dynamics of H5N1 in Northern Vietnam.

    Directory of Open Access Journals (Sweden)

    Patrick G T Walker

    2010-02-01

    Outbreaks of H5N1 in poultry in Vietnam continue to threaten the livelihoods of those reliant on poultry production, while simultaneously posing a severe public health risk given the high mortality associated with human infection. Authorities have invested significant resources in order to control these outbreaks. Of particular interest is the decision, following a second wave of outbreaks, to move from a "stamping out" approach to the implementation of a nationwide mass vaccination campaign. Outbreaks which occurred around this shift in policy provide a unique opportunity to evaluate the relative effectiveness of these approaches and to help other countries make informed judgements when developing control strategies. Here we use Bayesian Markov chain Monte Carlo (MCMC) data augmentation techniques to derive the first quantitative estimates of the impact of the vaccination campaign on the spread of outbreaks of H5N1 in northern Vietnam. We find a substantial decrease in the transmissibility of infection between communes following vaccination, coupled with a significant increase in the time from infection to detection of the outbreak. Using a cladistic approach, we estimated that, according to the posterior mean effect of pruning the reconstructed epidemic tree, two thirds of the outbreaks in 2007 could be attributed to this decrease in the rate of reporting. The net impact of these two effects was a less intense but longer-lasting wave and, whilst not sufficient to prevent the sustained spread of outbreaks, an overall reduction in the likelihood of the transmission of infection between communes. These findings highlight the need for more effectively targeted surveillance, in order to help ensure that the effective coverage achieved by mass vaccination is converted into a reduction in the likelihood of outbreaks occurring that is sufficient to control the spread of H5N1 in Vietnam.

  6. H0LiCOW - III. Quantifying the effect of mass along the line of sight to the gravitational lens HE 0435-1223 through weighted galaxy counts★

    Science.gov (United States)

    Rusu, Cristian E.; Fassnacht, Christopher D.; Sluse, Dominique; Hilbert, Stefan; Wong, Kenneth C.; Huang, Kuang-Han; Suyu, Sherry H.; Collett, Thomas E.; Marshall, Philip J.; Treu, Tommaso; Koopmans, Leon V. E.

    2017-06-01

    Based on spectroscopy and multiband wide-field observations of the gravitationally lensed quasar HE 0435-1223, we determine the probability distribution function of the external convergence κext for this system. We measure the under/overdensity of the line of sight towards the lens system and compare it to the average line of sight throughout the Universe, determined by using the CFHTLenS (The Canada France Hawaii Lensing Survey) as a control field. Aiming to constrain κext as tightly as possible, we determine under/overdensities using various combinations of relevant informative weighting schemes for the galaxy counts, such as projected distance to the lens, redshift and stellar mass. We then convert the measured under/overdensities into a κext distribution, using ray-tracing through the Millennium Simulation. We explore several limiting magnitudes and apertures, and account for systematic and statistical uncertainties relevant to the quality of the observational data, which we further test through simulations. Our most robust estimate of κext has a median value κext^med = 0.004 and a standard deviation σκ = 0.025. The measured σκ corresponds to 2.5 per cent relative uncertainty on the time delay distance, and hence the Hubble constant H0 inferred from this system. The median κext^med value varies by ~0.005 with the adopted aperture radius, limiting magnitude and weighting scheme, as long as the latter incorporates galaxy number counts, the projected distance to the main lens and a prior on the external shear obtained from mass modelling. This corresponds to just ~0.5 per cent systematic impact on H0. The availability of a well-constrained κext makes HE 0435-1223 a valuable system for measuring cosmological parameters using strong gravitational lens time delays.

  7. Isotopic evidence and mass balance approach for quantifying paleorecharge condition to the pleistocene aquifer system of Wadi El assiuti basin,Egypt

    International Nuclear Information System (INIS)

    Elewa, H.H.; Abd EI Samie, S.G.

    2007-01-01

    Re-evaluation of the groundwater resources of the Pleistocene aquifer in the Wadi El Assiuti area, by integrating the hydrogeological information with stable and radioactive isotopes, ion concentrations, and a mass balance program, could change the old hypotheses of the renewability of the aquifer's water from the River Nile. The new data obtained confirm that paleogroundwater constitutes the main bulk of the aquifer water. The chemical constituents (ion species, ion ratios, saturation indices) indicate a marine origin of the water at the center of the basin, due to the presence of MgCl₂, whereas a meteoric water origin prevails at the boundary of the basin (Na₂SO₄). Saturation indices indicate that the water is saturated with respect to calcite and dolomite, whereas anhydrite, gypsum and halite are below saturation level. The ion distributions were constrained to give a chemical evolution trend along the flow path from the NE to the SW direction, allowing for the local variability in each well. The isotopic results for δ¹⁸O and δD showed high depletion, close to the isotopic signature of the Western Desert Nubian Sandstone water, in most water samples extracted from the center of the basin. In the northeastern part of the basin the water acquires slight enrichment, by about 2.5‰ in δ¹⁸O. On the other hand, water in the northwest direction showed gradual enrichment toward the value of the Nile water. The carbon-14 radioactive isotope affirmed the long age of water in the center of the basin (about 25,000 yBP) and an age of about 10,000 yBP for water in the northeastern part of the basin near the highly mountainous front. The difference in water age between the center and the eastern boundary of the basin indicates a relative recharge from floodwater over the high-altitude area. Based on the isotopic mass balance equations through the NETPATH model, the estimated percentage of paleowater reaches about 80% in the center of the basin and about 72% in the NE direction. Variable amounts of
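
    The paleowater percentages quoted above follow from a two-end-member isotope mass balance; a minimal sketch with hypothetical δ18O end-member values (not the study's data):

```python
def mixing_fraction(delta_sample, delta_end1, delta_end2):
    """Two-end-member isotope mass balance:
    delta_sample = f*delta_end1 + (1 - f)*delta_end2, solved for f,
    the fraction of end member 1 in the sample."""
    return (delta_sample - delta_end2) / (delta_end1 - delta_end2)

# Hypothetical d18O values (per mil): paleowater end member -11.0,
# modern Nile-derived water -1.0, mixed sample -9.0.
f_paleo = mixing_fraction(-9.0, -11.0, -1.0)
print(round(f_paleo, 2))  # prints 0.8
```

    Models such as NETPATH solve the same balance simultaneously for several tracers and reactions, but each tracer contributes an equation of this form.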

  8. Updated stomatal flux and flux-effect models for wheat for quantifying effects of ozone on grain yield, grain mass and protein yield.

    Science.gov (United States)

    Grünhage, Ludger; Pleijel, Håkan; Mills, Gina; Bender, Jürgen; Danielsson, Helena; Lehmann, Yvonne; Castell, Jean-Francois; Bethenod, Olivier

    2012-06-01

    Field measurements and open-top chamber experiments using nine current European winter wheat cultivars provided a data set that was used to revise and improve the parameterisation of a stomatal conductance model for wheat, including a revised value for maximum stomatal conductance and new functions for phenology and soil moisture. For the calculation of stomatal conductance for ozone, a diffusivity ratio between O₃ and H₂O in air of 0.663 was applied, based on a critical review of the literature. By applying the improved parameterisation for stomatal conductance, new flux-effect relationships for grain yield, grain mass and protein yield were developed for use in ozone risk assessments, including effects on food security. An example of application of the flux model at the local scale in Germany shows that negative effects of ozone on wheat grain yield were likely in each year, and on protein yield in most years, since the mid 1980s. Copyright © 2012 Elsevier Ltd. All rights reserved.
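
    Using the O3/H2O diffusivity ratio of 0.663 quoted in the abstract, a stomatal ozone flux can be sketched as conductance times concentration. The conductance and concentration values below are invented, and the unit handling is a simplified illustration rather than the paper's full flux model:

```python
def stomatal_o3_flux(gsto_h2o, o3_conc, d_ratio=0.663):
    """Stomatal O3 flux (nmol m-2 s-1, simplified) from the stomatal
    conductance to water vapour gsto_h2o (mmol m-2 s-1) and the O3
    concentration at canopy height (ppb ~ nmol/mol), using the O3/H2O
    diffusivity ratio of 0.663."""
    gsto_o3 = (gsto_h2o / 1000.0) * d_ratio   # conductance to O3, mol m-2 s-1
    return gsto_o3 * o3_conc

# Invented inputs: gsto = 450 mmol m-2 s-1, ambient O3 = 60 ppb.
print(round(stomatal_o3_flux(450.0, 60.0), 2))
```

    Flux-effect relationships of the kind the paper develops then accumulate this instantaneous flux above a threshold over the growing season.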

  9. Herbage intake and milk yield in Comisana ewes as effect of 4 vs 7 h of grazing during late lactation.

    Science.gov (United States)

    Valenti, Bernardo; Marletta, Donata; De Angelis, Anna; Di Paola, Fortunato; Bordonaro, Salvatore; Avondo, Marcella

    2017-06-01

    Thirty-two Comisana ewes in late lactation were used in two trials carried out during late spring in 2 consecutive years, with the aim of evaluating the effect of the duration of grazing on herbage intake and performance. In each trial, 16 pluriparous Comisana lactating ewes were equally divided into two groups, which grazed in two separate areas of natural pasture from 11:00 to 15:00 h (group 4H) or from 10:00 to 17:00 h (group 7H). A concentrate mixture (500 g/day) was also offered to each ewe. The mean maximum temperature was 23.5 ± 3.8 °C during experiment 1 and 27.0 ± 3.1 °C during experiment 2. Probably as a consequence of the differences in climatic conditions, the results on herbage intake and milk production differed between the two trials. Herbage dry matter intake was not affected by the duration of grazing during trial 1, whereas during trial 2 it was significantly lower in the 4H group than in the 7H group (0.67 vs 1.02 kg/day). Under the milder temperatures of trial 1, ewes were able to reach good intake levels despite grazing during the hottest hours; with higher temperatures throughout trial 2, the 4H ewes reduced ingestion. Milk production was higher in the 4H group during trial 1 (778 vs 707 g/day; P = 0.006), whereas it was not affected by the number of hours of grazing during trial 2, despite the higher intake levels reached by the 7H group. In conclusion, the benefit of 3 extra hours of grazing for ewes in late lactation on a low-quality pasture may be nullified in terms of yield response.

  10. Metabolic load in dairy cows kept in herbage-based feeding systems and suitability of potential markers for compromised well-being.

    Science.gov (United States)

    Zbinden, R S; Falk, M; Münger, A; Dohme-Meier, F; van Dorland, H A; Bruckmaier, R M; Gross, J J

    2017-08-01

    Herbage feeding with only little input of concentrate plays an important role in milk production in grassland-dominated countries like Switzerland. The objective was to investigate the effects of a solely herbage-based diet and of the level of milk production on performance, and on variables related to metabolic, endocrine and inflammatory status, to estimate the stress imposed on dairy cows. Twenty-five multiparous Holstein cows were divided into a control (C+, n = 13) and a treatment group (C−, n = 12) according to their previous lactation yield (4679-10 808 kg), from week 3 ante partum until week 8 post-partum (p.p.). While C+ received fresh herbage plus additional concentrate, no concentrate was fed to C− throughout the experiment. Within C+ and C−, the median of the preceding lactation yields (7752 kg) was used to split cows into high- (HYC+, HYC−) and low-yielding (LYC+, LYC−) subgroups. Throughout the study, HYC+ had a higher milk yield (35.9 kg/d) than the other subgroups (27.2-31.7 kg/d). High-yielding cows without supplementary concentrate experienced a high metabolic load, resulting in reduced performance compared to cows of similar potential fed accordingly. Low-yielding cows performed well without concentrate supplementation. Interestingly, the selected markers for inflammation and stress, such as cortisol, Hp, SAA, BE and AP, gave no indication that the metabolic load was translated into compromised well-being. Journal of Animal Physiology and Animal Nutrition © 2016 Blackwell Verlag GmbH.

  11. Quantifying canal leakage rates using a mass-balance approach and heat-based hydraulic conductivity estimates in selected irrigation canals, western Nebraska, 2007 through 2009

    Science.gov (United States)

    Hobza, Christopher M.; Andersen, Michael J.

    2010-01-01

    The water supply in areas of the North Platte River Basin in the Nebraska Panhandle has been designated as fully appropriated or overappropriated by the Nebraska Department of Natural Resources (NDNR). Enacted legislation (Legislative Bill 962) requires the North Platte Natural Resources District (NPNRD) and the NDNR to develop an Integrated Management Plan (IMP) to balance groundwater and surface-water supply and demand in the NPNRD. A clear understanding of the groundwater and surface-water systems is critical for the development of a successful IMP. The primary source of groundwater recharge in parts of the NPNRD is irrigation canal leakage. Because canal leakage constitutes a large part of the hydrologic budget, spatially distributing canal leakage to the groundwater system is important to any management strategy. Surface geophysical data collected along selected reaches of irrigation canals have allowed the spatial distribution of leakage on a relative basis; however, the actual magnitude of leakage remains poorly defined. To address this need, the U.S. Geological Survey, in cooperation with the NPNRD, established streamflow-gaging stations at the upstream and downstream ends of two selected canal reaches to allow a mass-balance approach to be used to calculate daily leakage rates. Water-level and sediment temperature data were collected and simulated at three temperature monitoring sites to allow the use of heat as a tracer to estimate the hydraulic conductivity of canal-bed sediment. Canal-leakage rates were estimated by applying Darcy's law to modeled vertical hydraulic conductivity and either the estimated or measured hydraulic gradient. This approach will improve the understanding of the spatial and temporal variability of canal leakage in the varying geologic settings identified in capacitively coupled resistivity surveys. The high-leakage-potential study reach of the Tri-State Canal had two streamflow-gaging stations and two temperature monitoring sites.
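The two estimation approaches in this report can each be sketched in a line of arithmetic: a mass balance differencing gaged flows at the ends of a reach, and Darcy's law applied to the heat-derived vertical hydraulic conductivity. The function names and all numbers below are illustrative assumptions, not values from the study:

```python
def mass_balance_leakage(q_upstream, q_downstream, q_diversions=0.0):
    """Canal leakage as the flow lost between two gaging stations
    (consistent units in and out, e.g. m3/s)."""
    return q_upstream - q_downstream - q_diversions

def darcy_leakage(k_v, head_gradient, wetted_area):
    """Darcy's law: specific discharge K*i times the canal's wetted area."""
    return k_v * head_gradient * wetted_area

# Illustrative numbers: 10 m3/s enters the reach, 8.5 m3/s leaves.
print(mass_balance_leakage(10.0, 8.5))                # -> 1.5
# Hypothetical K_v = 1e-5 m/s from heat-as-a-tracer modelling,
# vertical gradient 1.2, wetted canal-bed area 5000 m2.
print(round(darcy_leakage(1e-5, 1.2, 5000.0), 6))     # -> 0.06
```

Comparing the two estimates is what lets the gage-based mass balance anchor the absolute magnitude of leakage while the Darcy calculation distributes it spatially.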

  12. Effect of Repeated Application of Manure on Herbage Yield, Quality and Wintering Ability during Cropping of Dwarf Napiergrass with Italian Ryegrass in Hilly Southern Kyushu, Japan

    Directory of Open Access Journals (Sweden)

    Renny Fatmyah Utamy

    2018-03-01

    The effects of two levels of manure application (184 and 275 kg N ha−1 year−1) on herbage yield, quality, and wintering ability during the cropping of a dwarf genotype of late-heading (DL) Napiergrass (Pennisetum purpureum Schumach.) oversown with Italian ryegrass (IR; Lolium multiflorum Lam.) were examined and compared with chemical fertilizer application (234 kg N ha−1 year−1) for 4 years, to determine a sustainable and environmentally harmonized herbage production system in a hilly area (340 m above sea level). No significant (p > 0.05) differences in the growth attributes of plant height, tiller density, percentage of leaf blade, or dry matter yield appeared in either DL Napiergrass or IR among the moderate levels (184-275 kg N ha−1 year−1) of manure and the chemical fertilizer treatment. IR exhibited no significant detrimental effect on spring regrowth of DL Napiergrass, which showed a high wintering ability in all treatments. In vitro dry matter digestibility of DL Napiergrass tended to increase with increasing manure application, especially at the first defoliation in the first three years. Manure application improved soil chemical properties and total nitrogen and carbon content. The results suggested that the lower manure application rate of 184 kg N ha−1 year−1 would be suitable, and would be a good substitute for chemical fertilizer application, with an equilibrium nitrogen budget, for sustainable DL Napiergrass and IR cropping in the hilly region of southern Kyushu.

  13. Quantifiers and working memory

    NARCIS (Netherlands)

    Szymanik, J.; Zajenkowski, M.

    2010-01-01

    The paper presents a study examining the role of working memory in quantifier verification. We created situations similar to the span task to compare numerical quantifiers of low and high rank, parity quantifiers and proportional quantifiers. The results enrich and support the data obtained

  15. The transfer of 137Cs and 90Sr to dairy cattle fed fresh herbage collected 3.5 km from the Chernobyl nuclear power plant

    International Nuclear Information System (INIS)

    Beresford, N.A.; Gashchak, S.; Lasarev, N; Arkhipov, A.; Chyorny, Y.; Astasheva, N.; Arkhipov, N.; Mayes, R.W.; Howard, B.J.; Baglay, G.; Loginova, L.; Burov, N.

    2000-01-01

    A study conducted during summer 1993 to determine the bioavailability and transfer of 137Cs and 90Sr to dairy cattle from herbage collected from a pasture contaminated by particulate fallout is described. The study pasture was located 3.5 km from the Chernobyl nuclear power plant. The true absorption coefficient (At) determined for 137Cs (0.23) was considerably lower than previous estimates for radiocaesium incorporated into vegetation by root uptake. It is likely that the low dry matter digestibility of the diet and the potential presence of 137Cs associated with adherent soil-associated fuel particles contributed to this low bioavailability. The At value determined for 90Sr (0.27) did not indicate a reduced bioavailability. It is suggested that the current and previous calcium status of the animals was the controlling influence on the transfer of 90Sr from the diet to milk.

  16. The use of remotely-sensed wildland fire radiation to infer the fates of carbon during biomass combustion - the need to understand and quantify a fire's mass and energy budget

    Science.gov (United States)

    Dickinson, M. B.; Dietenberger, M.; Ellicott, E. A.; Hardy, C.; Hudak, A. T.; Kremens, R.; Mathews, W.; Schroeder, W.; Smith, A. M.; Strand, E. K.

    2016-12-01

    Few measurement techniques offer broad-scale insight into the extent and characteristics of biomass combustion during wildland fires. Remotely-sensed radiation is one of these techniques, but its measurement suffers from several limitations and, once quantified, its use to derive the variables of real interest depends on an understanding of the fire's mass and energy budget. In this talk, we will review certain assumptions of wildland fire radiation measurement and explore the use of those measurements to infer the fates of biomass and the dissipation of combustion energy. Recent measurements show that the perspective of the sensor (nadir vs oblique) matters for estimates of fire radiated power. Other considerations for producing accurate estimates of fire radiation from remote sensing include obscuration by an intervening forest canopy and the extent to which measurements based on the assumption of graybody/blackbody behavior underestimate fire radiation. Fire radiation measurements are generally a means of quantifying other variables and are often not of interest in and of themselves. Their use as a means of inference currently relies on correlations with variables of interest such as biomass consumption and sensible and latent heat and emissions fluxes. Radiation is an imperfect basis for these correlations in that it accounts for a minority of combustion energy (roughly 15-30%) and is not a constant, as is often assumed. Measurements suggest that fire convective energy accounts for the majority of combustion energy and is followed (after radiation) by latent energy, soil heating, and pyrolysis energy, more or less in that order. Combustion energy, moreover, does not reach its potential maximum: it is reduced to an effective heat of combustion by combustion inefficiency and by the work done to pyrolyze fuel (important in char production) and to vaporize moisture. The effective heat of combustion is often on the order of 65% of its potential maximum.
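The energy-budget argument above implies a simple correction when inverting fire radiated power (FRP) to a total combustion energy-release rate: FRP is divided by an assumed radiative fraction, which the talk argues is itself variable rather than constant. A toy sketch, with purely illustrative numbers:

```python
def total_release_rate(frp_mw, radiative_fraction):
    """Invert fire radiated power to a total energy-release rate,
    assuming radiation carries a fixed share of combustion energy."""
    if not 0.0 < radiative_fraction <= 1.0:
        raise ValueError("radiative fraction must be in (0, 1]")
    return frp_mw / radiative_fraction

# The same hypothetical 3 MW FRP observation implies very different fire
# intensities across the ~15-30% range of radiative fractions cited above:
print(round(total_release_rate(3.0, 0.15), 1))  # -> 20.0
print(round(total_release_rate(3.0, 0.30), 1))  # -> 10.0
```

The factor-of-two spread between these two answers is exactly why the talk stresses quantifying the full mass and energy budget rather than assuming a constant radiative fraction.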

  17. Quantifiers for quantum logic

    OpenAIRE

    Heunen, Chris

    2008-01-01

    We consider categorical logic on the category of Hilbert spaces. More generally, in fact, any pre-Hilbert category suffices. We characterise closed subobjects, and prove that they form orthomodular lattices. This shows that quantum logic is just an incarnation of categorical logic, enabling us to establish an existential quantifier for quantum logic, and conclude that there cannot be a universal quantifier.

  18. Repurposing Mass-produced Internal Combustion Engines: Quantifying the Value and Use of Low-cost Internal Combustion Piston Engines for Modular Applications in Energy and Chemical Engineering Industries

    Science.gov (United States)

    L'Heureux, Zara E.

    This thesis proposes that internal combustion piston engines can help clear the way for a transformation in the energy, chemical, and refining industries that is akin to the transition computer technology experienced with the shift from large mainframes to small personal computers and large farms of individually small, modular processing units. This thesis provides a mathematical foundation, multi-dimensional optimizations, experimental results, an engine model, and a techno-economic assessment, all working towards quantifying the value of repurposing internal combustion piston engines for new applications in modular, small-scale technologies, particularly for energy and chemical engineering systems. Many chemical engineering and power generation industries have focused on increasing individual unit sizes and centralizing production. This "bigger is better" concept makes it difficult to evolve and incorporate change. Large systems are often designed with long lifetimes, incorporate innovation slowly, and necessitate high upfront investment costs. Breaking away from this cycle is essential for promoting change, especially change happening quickly in the energy and chemical engineering industries. The ability to evolve during a system's lifetime provides a competitive advantage in a field dominated by large and often very old equipment that cannot respond to technology change. This thesis specifically highlights the value of small, mass-manufactured internal combustion piston engines retrofitted to participate in non-automotive system designs. The applications are unconventional and stem first from the observation that, when normalized by power output, internal combustion engines are one hundred times less expensive than conventional, large power plants. This cost disparity motivated a look at scaling laws to determine if scaling across both individual unit size and number of units produced would predict the two order of magnitude difference seen here. 
For the first

  19. Connected Car: Quantified Self becomes Quantified Car

    Directory of Open Access Journals (Sweden)

    Melanie Swan

    2015-02-01

    The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors, such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber). The connected car means that vehicles are now part of the connected world: continuously Internet-connected, generating and transmitting data which, on the one hand, can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but which also raises security and privacy concerns. This paper explores the automotive connected world and describes five killer QS (Quantified Self)-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals, like heart rate) and automotive sensors (sensors that measure driver and passenger biometrics, or quantitative automotive performance metrics like speed and braking activity). The applications are fatigue detection; real-time assistance for parking and accidents; anger management and stress reduction; keyless authentication and digital identity verification; and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected-world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.

  20. Fatty acid profile in the ruminal fluid and in the m. longissimus dorsi of lambs fed herbage or concentrate with or without tannins

    Directory of Open Access Journals (Sweden)

    Alessandro Priolo

    2010-01-01

    Twenty-eight male lambs were divided into two groups at age 45 d. Fourteen lambs were given fresh herbage (vetch); the remaining lambs were fed a concentrate-based diet. Within each treatment, seven lambs received a supplementation of quebracho tannins. At slaughter (age 105 d) the ruminal content and the longissimus dorsi muscle (LD) were collected. Ruminal fluid and LD fatty acid composition was determined by gas chromatography. Among the concentrate-fed lambs, tannin supplementation reduced (P < 0.05) the concentration of C18:0 (−49%) and increased vaccenic acid (VA; +69%) in the ruminal fluid. When tannins were included in the concentrate, the LD contained double the level of rumenic acid (RA) compared to the LD of the lambs fed the tannin-free concentrate (0.96 vs. 0.46% of total extracted fatty acids, respectively; P < 0.05). The concentration of PUFA was higher (P < 0.05) and that of SFA lower (P < 0.01) in the LD from lambs fed the tannin diets compared to the animals receiving the tannin-free diets. In conclusion, tannins reduce the biohydrogenation of PUFA in the rumen. This implies that tannin supplementation could be a strategy to increase the RA and PUFA content and to reduce the SFA content of ruminant meats.

  1. Effect of timing and type of supplementary grain on herbage intake, nitrogen utilization and milk production in dairy cows grazed on perennial ryegrass pasture from evening to morning.

    Science.gov (United States)

    Ueda, Koichiro; Mitani, Tomohiro; Kondo, Seiji

    2017-01-01

    The present study aimed to clarify the effect of the timing and type of supplementary grain on herbage dry matter intake (HDMI), nitrogen utilization and milk production in grazing dairy cows. Eight lactating cows were allowed to graze from evening to morning during three seasonal periods (spring, summer, autumn). They were randomly allocated to four treatments (timing: pre-grazing (Pre) or post-grazing (Post) for the large grain allotment consisting of 75% of the daily grain offered; grain type: barley or corn) in 4 × 4 Latin square designs in each period. In the spring period, HDMI was greater for cows fed corn than for those fed barley (P = 0.005), whereas cows in the Pre treatment had a similar HDMI, a higher (P = 0.049) urinary purine derivative concentration and a greater (P = 0.004) milk yield compared with cows in the Post treatment. In the summer and autumn periods, the timing treatments did not affect HDMI, nitrogen utilization or milk production, but cows supplemented with barley had a higher urinary purine derivative concentration. Overall, offering the large grain allotment before grazing supported milk production without reducing HDMI, regardless of grain type. © 2016 Japanese Society of Animal Science.

  2. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time-predictable, i.e., architectures where it is possible to statically derive a tight bound on the worst-case execution time. To compare different approaches we would like to quantify time predictability; that means we need to measure it. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.

  3. Thermosensory reversal effect quantified

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2008-01-01

    At room temperature, some materials feel colder than others due to differences in thermal conductivity, heat capacity and geometry. When the ambient temperature is well above skin temperature, the roles of 'cold' and 'warm' materials are reversed. In this paper, this effect is quantified by

  5. Quantifying requirements volatility effects

    NARCIS (Netherlands)

    Kulk, G.P.; Verhoef, C.

    2008-01-01

    In an organization operating in the bancassurance sector we identified a low-risk IT subportfolio of 84 IT projects, together comprising 16,500 function points, each project varying in size and duration, for which we were able to quantify its requirements volatility. This representative portfolio

  6. The quantified relationship

    NARCIS (Netherlands)

    Danaher, J.; Nyholm, S.R.; Earp, B.

    2018-01-01

    The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this

  7. Quantifying IT estimation risks

    NARCIS (Netherlands)

    Kulk, G.P.; Peters, R.J.; Verhoef, C.

    2009-01-01

    A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be

  8. Quantifying drug-protein binding in vivo

    International Nuclear Information System (INIS)

    Buchholz, B; Bench, G; Keating III, G; Palmblad, M; Vogel, J; Grant, P G; Hillegonds, D

    2004-01-01

    Accelerator mass spectrometry (AMS) provides precise quantitation of isotope-labeled compounds that are bound to biological macromolecules such as DNA or proteins. The sensitivity is high enough to allow for sub-pharmacological ("micro-") dosing to determine macromolecular targets without inducing toxicities or altering the system under study, whether it is healthy or diseased. We demonstrated an application of AMS in quantifying the physiologic effects of one dosed chemical compound upon the binding level of another compound in vivo at sub-toxic doses [4]. We are using tissues left from this study to develop protocols for quantifying specific binding to isolated and identified proteins. We also developed a new technique to quantify nanogram to milligram amounts of isolated protein at precisions that are comparable to those for quantifying the bound compound by AMS.

  9. Quantifying light pollution

    International Nuclear Information System (INIS)

    Cinzano, P.; Falchi, F.

    2014-01-01

    In this paper we review new available indicators useful to quantify and monitor light pollution, defined as the alteration of the natural quantity of light in the night environment due to introduction of manmade light. With the introduction of recent radiative transfer methods for the computation of light pollution propagation, several new indicators become available. These indicators represent a primary step in light pollution quantification, beyond the bare evaluation of the night sky brightness, which is an observational effect integrated along the line of sight and thus lacking the three-dimensional information. - Highlights: • We review new available indicators useful to quantify and monitor light pollution. • These indicators are a primary step in light pollution quantification. • These indicators allow to improve light pollution mapping from a 2D to a 3D grid. • These indicators allow carrying out a tomography of light pollution. • We show an application of this technique to an Italian region

  10. Monitoring PCDD/Fs in soil and herbage samples collected in the neighborhood of a hazardous waste incinerator after five years of operation

    Energy Technology Data Exchange (ETDEWEB)

    Nadal, M.; Bocio, A.; Schuhmacher, M.; Liobet, J.M.; Domingo, J.L. [Rovira i Virgili Univ., Reus (Spain); Diaz-Ferrero, J. [Inst. Quimic de Sarria, Barcelona (Spain)

    2004-09-15

    Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) are among the most dangerous environmental pollutants, usually generated during combustion processes. Until recently, waste incineration was widely referenced as one of the most important sources of PCDD/F release to the atmosphere. In 1999, the only hazardous waste incinerator (HWI) in Spain began regular operations. This facility is located in Tarragona, Catalonia. The presence of this HWI, as well as that of a municipal solid waste incinerator (MSWI) a few kilometers away, increased public concern about potential toxic emissions, especially those of metals and PCDD/Fs, which could affect the health of the population living in the area. Prior to regular operations (1996), the baseline levels of PCDD/Fs in soil and vegetation samples collected near the HWI were determined. A second survey was carried out two years later (1998) to establish the temporal variation in PCDD/F concentrations in soil and vegetation samples taken at the same sampling points. Vegetation is considered an adequate short-term environmental monitor for PCDD/Fs. Therefore, in the surveillance program of the facility (1999-2003), herbage samples (40) were collected annually at the same sampling points at which the baseline samples had been taken. Moreover, considering soil a suitable long-term monitor for PCDD/Fs, 40 soil samples were again collected in 2001 and 2003 to examine the temporal variation of PCDD/F levels in the area. In the present study, we present the concentrations of PCDD/Fs in soil and vegetation samples collected in the vicinity of the HWI after 5 years of regular operations.

  11. Quantifying linguistic coordination

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Tylén, Kristian

    …task (Bahrami et al 2010, Fusaroli et al. 2012) we extend to linguistic coordination dynamical measures of recurrence employed in the analysis of sensorimotor coordination, such as heart-rate (Konvalinka et al 2011), postural sway (Shockley 2005) and eye-movements (Dale, Richardson and Kirkham 2012). We employ nominal recurrence analysis (Orsucci et al 2005, Dale et al 2011) on the decision-making conversations between the participants. We report strong correlations between various indexes of recurrence and collective performance. We argue this method allows us to quantify the qualities…

  12. Non-intrusive measurement of tritium activity in waste drums by modelling a 3He leak quantified by mass spectrometry; Mesure non intrusive de l'activite de futs de dechets trities par modelisation d'une fuite 3He et sa quantification par spectrometrie de masse

    Energy Technology Data Exchange (ETDEWEB)

    Demange, D

    2002-07-03

    This study deals with a new method that makes it possible to measure very low tritium quantities inside radioactive waste drums. This indirect method is based on measuring the decay product, 3He, and requires a study of its behaviour inside the drum. Our model considers 3He as totally free and its leak through the polymeric joint of the drum as two distinct phenomena: permeation and laminar flow. The numerical simulations show that a pseudo-stationary state takes place. Thus, the 3He leak corresponds to the tritium activity inside the drum; it appears, however, that the leak peaks when atmospheric pressure variations induce an overpressure in the drum. Nevertheless, the confinement of a drum in a tight chamber makes it possible to quantify the 3He leak. This is a non-intrusive measurement of its activity, which was experimentally checked using reduced-scale models representing the drum and its confinement chamber. The drum's confinement was optimised to obtain a reproducible 3He leak measurement. The gaseous samples taken from the chamber were purified using selective adsorption onto activated charcoals at 77 K to remove the tritium and pre-concentrate the 3He. The samples were measured using a leak-detector mass spectrometer. The adaptation of the signal acquisition and the optimisation of the analysis parameters made it possible to reach the stability of the external calibrations using standard gases, with a 3He detection limit of 0.05 ppb. Repeated confinement of the reference drums demonstrated the accuracy of this method. The uncertainty of this non-intrusive measurement of the tritium activity in 200-liter drums is 15% and the detection limit is about 1 GBq after a 24 h confinement. These results led to the definition of an automated tool able to systematically measure the tritium activity of all storage waste drums. (authors)
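At the pseudo-stationary state described above, each tritium decay produces one free 3He atom, and the 3He leak out of the drum balances this production, so the measured leak in atoms per second is numerically the tritium activity in becquerels. A back-of-the-envelope sketch of this inversion (illustrative only, ignoring the pressure-driven leak transients the model actually simulates; numbers are assumptions, not the study's data):

```python
AVOGADRO = 6.022e23  # atoms per mole

def leak_from_chamber(rise_ppb, chamber_moles, hours):
    """3He leak rate (atoms/s) from the concentration rise measured in the
    tight confinement chamber over the confinement time."""
    atoms = rise_ppb * 1e-9 * chamber_moles * AVOGADRO
    return atoms / (hours * 3600.0)

def tritium_activity_gbq(he3_leak_atoms_per_s):
    """At steady state, 3He production (one atom per decay, i.e. the tritium
    activity in Bq) equals the 3He leak rate; convert to GBq."""
    return he3_leak_atoms_per_s / 1e9

# A drum leaking 1e9 3He atoms/s corresponds to ~1 GBq of tritium,
# consistent in order of magnitude with the quoted detection limit.
print(tritium_activity_gbq(1e9))  # -> 1.0
```

The real method then combines this steady-state balance with the permeation and laminar-flow leak model to correct for atmospheric-pressure effects.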

  14. Quantifying global exergy resources

    International Nuclear Information System (INIS)

    Hermann, Weston A.

    2006-01-01

Exergy is used as a common currency to assess and compare the reservoirs of theoretically extractable work we call energy resources. Resources consist of matter or energy with properties different from the predominant conditions in the environment. These differences can be classified as physical, chemical, or nuclear exergy. This paper identifies the primary exergy reservoirs that supply exergy to the biosphere and quantifies the intensive and extensive exergy of their derivative secondary reservoirs, or resources. The interconnecting accumulations and flows among these reservoirs are illustrated to show the path of exergy through the terrestrial system from input to its eventual natural or anthropogenic destruction. The results are intended to assist in evaluation of current resource utilization, help guide fundamental research to enable promising new energy technologies, and provide a basis for comparing the resource potential of future energy options that is independent of technology and cost.

  15. Quantifying the Adaptive Cycle.

    Directory of Open Access Journals (Sweden)

    David G Angeler

Full Text Available The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  16. Quantifying Anthropogenic Dust Emissions

    Science.gov (United States)

    Webb, Nicholas P.; Pierre, Caroline

    2018-02-01

    Anthropogenic land use and land cover change, including local environmental disturbances, moderate rates of wind-driven soil erosion and dust emission. These human-dust cycle interactions impact ecosystems and agricultural production, air quality, human health, biogeochemical cycles, and climate. While the impacts of land use activities and land management on aeolian processes can be profound, the interactions are often complex and assessments of anthropogenic dust loads at all scales remain highly uncertain. Here, we critically review the drivers of anthropogenic dust emission and current evaluation approaches. We then identify and describe opportunities to: (1) develop new conceptual frameworks and interdisciplinary approaches that draw on ecological state-and-transition models to improve the accuracy and relevance of assessments of anthropogenic dust emissions; (2) improve model fidelity and capacity for change detection to quantify anthropogenic impacts on aeolian processes; and (3) enhance field research and monitoring networks to support dust model applications to evaluate the impacts of disturbance processes on local to global-scale wind erosion and dust emissions.

  17. Quantifying loopy network architectures.

    Directory of Open Access Journals (Sweden)

    Eleni Katifori

Full Text Available Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and of the vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees, and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight), and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.
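As a minimal illustration of the tree-based metrics mentioned in the abstract (not the authors' code), Strahler orders for a binary tree can be computed recursively, and the bifurcation ratios are then ratios of branch counts between consecutive orders:

```python
from collections import Counter

def strahler_orders(tree):
    """Return (Strahler order of the root, Counter of orders over all nodes).

    A tree is a nested tuple (left, right); a leaf is None. Leaves have
    order 1; an internal node takes the maximum of its children's orders,
    plus one when both children share the same order."""
    if tree is None:                                     # leaf
        return 1, Counter({1: 1})
    (oa, ca), (ob, cb) = (strahler_orders(child) for child in tree)
    order = oa + 1 if oa == ob else max(oa, ob)
    counts = ca + cb
    counts[order] += 1
    return order, counts

leaf = None
tree = ((leaf, leaf), (leaf, leaf))      # a fully nested depth-3 tree
order, counts = strahler_orders(tree)    # order 3; counts {1: 4, 2: 2, 3: 1}

# Strahler bifurcation ratios between consecutive orders, N_k / N_(k+1):
ratios = [counts[k] / counts[k + 1] for k in range(1, order)]
```

For this perfectly nested tree both bifurcation ratios equal 2, the signature of a fully hierarchical architecture.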

  18. Quantifying the vitamin D economy.

    Science.gov (United States)

    Heaney, Robert P; Armas, Laura A G

    2015-01-01

    Vitamin D enters the body through multiple routes and in a variety of chemical forms. Utilization varies with input, demand, and genetics. Vitamin D and its metabolites are carried in the blood on a Gc protein that has three principal alleles with differing binding affinities and ethnic prevalences. Three major metabolites are produced, which act via two routes, endocrine and autocrine/paracrine, and in two compartments, extracellular and intracellular. Metabolic consumption is influenced by physiological controls, noxious stimuli, and tissue demand. When administered as a supplement, varying dosing schedules produce major differences in serum metabolite profiles. To understand vitamin D's role in human physiology, it is necessary both to identify the foregoing entities, mechanisms, and pathways and, specifically, to quantify them. This review was performed to delineate the principal entities and transitions involved in the vitamin D economy, summarize the status of present knowledge of the applicable rates and masses, draw inferences about functions that are implicit in these quantifications, and point out implications for the determination of adequacy. © The Author(s) 2014. Published by Oxford University Press on behalf of the International Life Sciences Institute. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Integrated cosmological probes: concordance quantified

    Energy Technology Data Exchange (ETDEWEB)

    Nicola, Andrina; Amara, Adam; Refregier, Alexandre, E-mail: andrina.nicola@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch [Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, CH-8093 Zürich (Switzerland)

    2017-10-01

Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV, as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure of consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance under the choice of parameters. When consistent model assumptions are used, the data sets considered in this work all agree with each other and with ΛCDM, without evidence for tensions.
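The building block of the Surprise statistic is the relative entropy (Kullback-Leibler divergence) between two posterior distributions, which has a closed form when the posteriors are approximated as Gaussians. A minimal sketch with invented toy constraints, not the paper's data:

```python
import numpy as np

def gaussian_relative_entropy(mu1, cov1, mu2, cov2):
    """Relative entropy D(P1 || P2) in nats for two multivariate Gaussians."""
    mu1 = np.atleast_1d(np.asarray(mu1, dtype=float))
    mu2 = np.atleast_1d(np.asarray(mu2, dtype=float))
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    k = mu1.size
    inv2 = np.linalg.inv(cov2)
    d = mu2 - mu1
    return 0.5 * (np.trace(inv2 @ cov1) + d @ inv2 @ d - k
                  + np.log(np.linalg.det(cov2) / np.linalg.det(cov1)))

# Toy 2-parameter constraints (think Omega_m, sigma_8) from two probes with
# the same covariance but shifted best fits; with equal covariances the
# divergence reduces to half the squared Mahalanobis distance of the shift.
cov = np.array([[0.04, 0.01], [0.01, 0.09]])
d_kl = gaussian_relative_entropy([0.30, 0.80], cov, [0.32, 0.78], cov)
```

The Surprise then compares the observed relative entropy with its expectation over data realizations; a large positive Surprise flags tension between the two constraints.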

  20. The Fallacy of Quantifying Risk

    Science.gov (United States)

    2012-09-01

Defense AT&L: September–October 2012, 18. The Fallacy of Quantifying Risk. David E. Frick, Ph.D. Frick is a 35-year veteran of the Department of... a key to risk analysis was "choosing the right technique" of quantifying risk. The weakness in this argument stems not from the assertion that one... of information about the enemy), yet achieving great outcomes. Attempts at quantifying risk are not, in and of themselves, objectionable. Prudence

  1. Multidominance, ellipsis, and quantifier scope

    NARCIS (Netherlands)

    Temmerman, Tanja Maria Hugo

    2012-01-01

    This dissertation provides a novel perspective on the interaction between quantifier scope and ellipsis. It presents a detailed investigation of the scopal interaction between English negative indefinites, modals, and quantified phrases in ellipsis. One of the crucial observations is that a negative

  2. Quantifiers in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.; Paperno, D.; Keenan, E.L.

    2017-01-01

    After presenting some basic genetic, historical and typological information about Russian Sign Language, this chapter outlines the quantification patterns it expresses. It illustrates various semantic types of quantifiers, such as generalized existential, generalized universal, proportional,

  3. Quantified Self in de huisartsenpraktijk

    NARCIS (Netherlands)

    de Groot, Martijn; Timmers, Bart; Kooiman, Thea; van Ittersum, Miriam

    2015-01-01

Quantified Self stands for the self-measuring human. The number of people walking into the care process with self-generated health data will grow in the coming years. Various kinds of activity trackers and health applications for the smartphone make it relatively easy to collect personal

  4. Quantifying Stellar Mass Loss with High Angular Resolution Imaging

    Science.gov (United States)

    2009-02-19

Howell (NOAO), Don Hutter (USNO), Margarita Karovska (Harvard-Smithsonian CfA), Sam Ragland (Keck Observatory), Ed Wishnow (U California Berkeley)

  5. Avaliação de pastagem diferida de Brachiaria decumbens Stapf. 2. Disponibilidade de forragem e desempenho animal durante a seca Evaluation of a signalgrass (Brachiaria decumbens Stapf postponed pasture. 2. Availability of herbage and animal performance, during the dry season

    Directory of Open Access Journals (Sweden)

    Eduardo Destéfani Guimarães Santos

    2004-02-01

Full Text Available The availability of total herbage, green and dead herbage, and of the components green leaf, green stem, dry leaf and dry stem was measured in a deferred Brachiaria decumbens pasture. The pasture was closed to animals from December 1996 to June 1997 and evaluated under continuous grazing during the dry season, from July to October 1997. Correlations between sward characteristics and the weight gain of Limousin-Nelore young bulls (19 months old, 374 kg live weight) were also studied. Deferring the Brachiaria decumbens pasture yielded average availabilities of total herbage (DMST) of 7,568, green herbage (DMSV) of 3,834 and dead herbage (DMSM) of 3,734 kg of dry matter (DM)/ha in July, before the grazing period. Continuous use of the deferred pasture during the dry period, at a stocking rate of 0.75 AU/ha, did not affect DMST (mean 7,902 kg/ha) or DMSM (mean 4,637 kg/ha), but did affect DMSV and the availability of green leaves (DMSFV). DMSV and DMSFV increased in July and October and decreased at increasing rates in August and September. At the end of September the pastures showed the lowest DMSV, 2,540 kg/ha. DMSFV and the proportion of green leaves in the sward were highest at the beginning of August (1,517 kg/ha and 18.5%, respectively) and lowest at the end of September (480 kg/ha and 5.7%, respectively). Deferring the pasture allowed maintenance of the animals and only a small weight gain during the dry season, averaging 104 g/day. In September the animals on the pastures lost weight. Mean daily live weight gain was linearly and negatively correlated with DMSM, and linearly and positively correlated with the ratios [DMSV/DMSM] and [DMSFV/(DMSM + DMSCV)], where DMSCV is the green stem dry matter availability. No correlations were found between weight gain and DMSV, DMSFV, grazing pressure or daily herbage allowance.

  6. Quantifying the uncertainty in heritability.

    Science.gov (United States)

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
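The Bayesian approach can be illustrated with a toy grid posterior for narrow-sense heritability h² under the standard variance-components model y ~ N(0, h²K + (1−h²)I). Everything here (the simulated kinship-like matrix K, sample size, flat prior) is an invented illustration of the idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed model: y ~ N(0, h2*K + (1 - h2)*I), total variance fixed at 1.
# K is a simulated kinship-like PSD matrix, not real genotype data.
n = 200
G = rng.standard_normal((n, 500))
K = G @ G.T / 500
lam, Q = np.linalg.eigh(K)               # K = Q diag(lam) Q^T

h2_true = 0.5
d_true = h2_true * lam + (1 - h2_true)
y = Q @ (np.sqrt(d_true) * rng.standard_normal(n))   # draw from the model

z2 = (Q.T @ y) ** 2                      # rotated data; cov is diagonal
h2_grid = np.linspace(0.001, 0.999, 999)

def loglik(h2):
    d = h2 * lam + (1 - h2)              # eigenvalues of h2*K + (1-h2)*I
    return -0.5 * (np.log(d).sum() + (z2 / d).sum())

ll = np.array([loglik(h) for h in h2_grid])
post = np.exp(ll - ll.max())
post /= post.sum()                       # flat-prior posterior on the grid
h2_map = h2_grid[post.argmax()]
```

A frequentist analogue would report the grid maximum (the MLE) with a curvature-based standard error; the posterior instead summarizes the full uncertainty, which matters when it is wide or skewed, as the abstract notes.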

  7. Choice of grazed herbage or maize silage by lactating dairy cows: influence of sward height and concentrate level Preferência por pastagem ou silagem de milho por vacas leiteiras em lactação: influência da altura do pasto e do nível de concentrado

    Directory of Open Access Journals (Sweden)

    O. Hernandez-Mendo

    2010-10-01

Full Text Available The preference of lactating dairy cows for grazed herbage or maize silage (MS), offered simultaneously ad libitum in the field, was examined at two sward heights (SH; 4-6 and 8-10 cm) and two concentrate levels (CL; 0 and 6 kg day-1) in a 2x2 factorial arrangement within a completely randomised experimental design. The experiment lasted 35 days and was carried out in spring using 24 multiparous Holstein Friesian cows. On average, the cows proportionately spent more time grazing than eating MS (0.85:0.15), and even though the rate of intake (RI) of dry matter (DM) was higher for MS than for grazed herbage (76 versus 26 g DM min-1), the proportion of total DM intake taken as herbage was higher than that taken as MS (0.56:0.44). The higher crude protein and lower fibre content of grazed herbage appeared to have a higher priority of choice than RI, as the cows chose to graze for longer (grazing time 385 min versus an MS feeding time of 67 min) despite the lower RI of herbage. The low proportion of MS intake indicated that RI was a secondary factor of choice. Concentrate supplementation had a greater depressing effect on herbage intake than on MS intake. These results suggest that the animals reduce the intake of the feed with the lower RI when the labour associated with eating is decreased. The factors influencing the choice of herbage over maize silage remain unclear.

  8. The intake of lead and associated metals by sheep grazing mining-contaminated floodplain pastures in mid-Wales, UK: I. Soil ingestion, soil-metal partitioning and potential availability to pasture herbage and livestock

    International Nuclear Information System (INIS)

    Smith, K.M.; Abrahams, P.W.; Dagleish, M.P.; Steigmajer, J.

    2009-01-01

This paper first evaluates the relative importance of the soil-plant-animal and soil-animal pathways of Zn, Cu and (especially) Pb investigated over a 15-month study period at 12 floodplain sites located within and downstream of the mineralised and historic mining area of mid-Wales, and secondly considers the implications of a sequential extraction procedure (SEP) undertaken on soils of varying particle size sampled from the study locations. Generally, very good agreement was found between the chemical partitioning of the three metals for each of the physical soil fractions subjected to the SEP. The availability of Pb to pasture vegetation, especially at the contaminated sites, is indicated by its association with the more soluble (i.e. exchangeable and Fe/Mn oxide) soil phases, yet soil and/or plant barriers effectively restrict above-ground herbage concentrations of this metal. Consequently, with sheep ingesting soil at rates varying according to season from 0.1% to 44% or more of dry matter intake, the soil-animal pathway accounts for the majority of Pb consumption through most of the year, and at moderately and highly contaminated sites significant quantities of relatively soluble soil-Pb can be ingested at rates exceeding safety threshold limits.

  9. Quantifying and simulating human sensation

    DEFF Research Database (Denmark)

Quantifying and simulating human sensation – relating science and technology of indoor climate research. In his doctoral thesis from 1970, civil engineer Povl Ole Fanger proposed that the understanding of indoor climate should focus on the comfort of the individual rather than averaged... this understanding of human sensation was adjusted to technology. I will look into the construction of the equipment, what it measures and the relationship between theory, equipment and tradition.

  10. Quantifying emissions from spontaneous combustion

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-09-01

    Spontaneous combustion can be a significant problem in the coal industry, not only due to the obvious safety hazard and the potential loss of valuable assets, but also with respect to the release of gaseous pollutants, especially CO2, from uncontrolled coal fires. This report reviews methodologies for measuring emissions from spontaneous combustion and discusses methods for quantifying, estimating and accounting for the purpose of preparing emission inventories.

  11. Consumo e tempo diário de pastejo por novilhos Nelore em pastagem de capim-tanzânia sob diferentes ofertas de forragem Effects of herbage allowance on the intake and grazing time of Nellore steers grazing tanzâniagrass pasture

    Directory of Open Access Journals (Sweden)

    Miguel Marques Gontijo Neto

    2006-02-01

allowance (HA; kg of leaf blade/100 kg animal live weight/day, %) were: 6.1 ± 0.59; 11.1 ± 0.77; 18.0 ± 1.24 and 23.9 ± 1.15%. Eight Nelore animals averaging 229.0 and 249.5 kg grazed each paddock in the first and second sampling periods, respectively. A completely randomized block design was used. Grazing time, leaf dry matter availability, leaf:stem ratio and canopy height were highly correlated with forage intake and can be used to develop prediction models of forage intake and performance of the grazing animal. Studies on intake and grazing animal performance in relation to forage allowance should consider the pasture structural traits for data interpretation and comparison. Tanzaniagrass forage allowances induced changes in the pasture structural characteristics and had a quadratic effect on the daily grazing time and on the forage intake by Nelore steers. The shortest grazing time and highest forage intake were observed on pasture with a herbage allowance of about 22.5 kg leaf blade/100 kg BW, which corresponded to a post-grazing mass of 4,323.2 kg/ha of dry matter, 2,887.6 kg/ha of green dry matter and an average canopy height of 64 cm.

  12. Quantifying Quantum-Mechanical Processes.

    Science.gov (United States)

    Hsieh, Jen-Hsiang; Chen, Shih-Hsuan; Li, Che-Ming

    2017-10-19

The act of describing how a physical process changes a system is the basis for understanding observed phenomena. For quantum-mechanical processes in particular, the effect of processes on quantum states profoundly advances our knowledge of the natural world, from understanding counter-intuitive concepts to the development of wholly quantum-mechanical technology. Here, we show that quantum-mechanical processes can be quantified using a generic classical-process model through which any classical strategies of mimicry can be ruled out. We demonstrate the success of this formalism using fundamental processes postulated in quantum mechanics, the dynamics of open quantum systems, quantum-information processing, the fusion of entangled photon pairs, and the energy transfer in a photosynthetic pigment-protein complex. Since our framework does not depend on any specifics of the states being processed, it reveals a new class of correlations in the hierarchy between entanglement and Einstein-Podolsky-Rosen steering and paves the way for the elaboration of a generic method for quantifying physical processes.

  13. Quantifying Evaporation in a Permeable Pavement System

    Science.gov (United States)

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  14. Quantifying sound quality in loudspeaker reproduction

    NARCIS (Netherlands)

    Beerends, John G.; van Nieuwenhuizen, Kevin; van den Broek, E.L.

    2016-01-01

    We present PREQUEL: Perceptual Reproduction Quality Evaluation for Loudspeakers. Instead of quantifying the loudspeaker system itself, PREQUEL quantifies the overall loudspeakers' perceived sound quality by assessing their acoustic output using a set of music signals. This approach introduces a

  15. Quantifier Scope in Categorical Compositional Distributional Semantics

    Directory of Open Access Journals (Sweden)

    Mehrnoosh Sadrzadeh

    2016-08-01

    Full Text Available In previous work with J. Hedges, we formalised a generalised quantifiers theory of natural language in categorical compositional distributional semantics with the help of bialgebras. In this paper, we show how quantifier scope ambiguity can be represented in that setting and how this representation can be generalised to branching quantifiers.

  16. Quantify the complexity of turbulence

    Science.gov (United States)

    Tao, Xingtian; Wu, Huixuan

    2017-11-01

Many researchers have used Reynolds stress, power spectrum and Shannon entropy to characterize a turbulent flow, but few of them have measured the complexity of turbulence. Yet as this study shows, conventional turbulence statistics and Shannon entropy have limits when quantifying the flow complexity. Thus, it is necessary to introduce new complexity measures, such as topology complexity and excess information, to describe turbulence. Our test flow is a classic turbulent cylinder wake at Reynolds number 8100. Along the stream-wise direction, the flow becomes more isotropic and the magnitudes of the normal Reynolds stresses decrease monotonically. These trends seem to indicate that the flow dynamics becomes simpler downstream. However, the Shannon entropy keeps increasing along the flow direction and the dynamics seems to become more complex, because the large-scale vortices cascade to small eddies, and the flow is less correlated and more unpredictable. In fact, these two contradictory observations each describe the complexity of a turbulent wake only partially. Our measurements (up to 40 diameters downstream of the cylinder) show that the flow's degree of complexity actually increases at first and then becomes constant (or drops slightly) along the stream-wise direction. University of Kansas General Research Fund.
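The limitation the abstract points to is easy to see in a sketch: histogram-based Shannon entropy measures unpredictability of the amplitudes, not structural or topological complexity. A toy comparison (invented signals, not the wake data), using a shared set of bins so the two entropies are directly comparable:

```python
import numpy as np

def shannon_entropy(signal, edges):
    """Shannon entropy (bits) of a signal's amplitude histogram."""
    counts, _ = np.histogram(signal, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 20000)
coherent = np.sin(t)                                    # regular, vortex-like
broken = np.sin(t) + 0.5 * rng.standard_normal(t.size)  # less correlated

edges = np.linspace(-3, 3, 65)   # shared bins for a fair comparison
h_coherent = shannon_entropy(coherent, edges)
h_broken = shannon_entropy(broken, edges)
# The noisier, less predictable signal has the higher histogram entropy,
# even though nothing about the flow's spatial structure has been measured.
```

Both signals are equally "simple" to describe structurally, yet their entropies differ, which is exactly why the authors argue for complexity measures beyond Shannon entropy.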

  17. Quantifying Cancer Risk from Radiation.

    Science.gov (United States)

    Keil, Alexander P; Richardson, David B

    2017-12-06

    Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions. © 2017 Society for Risk Analysis.

  18. Quantifying China's regional economic complexity

    Science.gov (United States)

    Gao, Jian; Zhou, Tao

    2018-02-01

China has experienced an outstanding economic expansion during the past decades; however, literature on non-monetary metrics that reveal the status of China's regional economic development is still lacking. In this paper, we fill this gap by quantifying the economic complexity of China's provinces through analyzing 25 years of firm data. First, we estimate the regional economic complexity index (ECI), and show that the overall time evolution of provinces' ECI is relatively stable and slow. Then, after linking ECI to economic development and income inequality, we find that the explanatory power of ECI is positive for the former but negative for the latter. Next, we compare different measures of economic diversity and explore their relationships with monetary macroeconomic indicators. Results show that the ECI index and the non-linear iteration based Fitness index are comparable, and both have stronger explanatory power than other benchmark measures. Further multivariate regressions suggest the robustness of our results after controlling for other socioeconomic factors. Our work takes a step towards better understanding China's regional economic development through non-monetary macroeconomic indicators.
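The ECI mentioned above is conventionally computed with the Hidalgo-Hausmann eigenvector construction; here is a minimal sketch on an invented, perfectly nested region-by-product matrix (illustrative data, not the Chinese firm data):

```python
import numpy as np

# M[r, p] = 1 if region r makes product p (with revealed comparative
# advantage). An invented, perfectly nested toy matrix: complex regions
# make many products, including rare ones; simple regions make only
# ubiquitous products.
M = np.array([
    [1, 1, 1, 1],   # most diversified region, makes the rare product p3
    [1, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],   # makes only the most ubiquitous product
], dtype=float)

diversity = M.sum(axis=1)   # k_r,0: products per region
ubiquity = M.sum(axis=0)    # k_p,0: regions per product

# Region-region matrix M~ = D^-1 M U^-1 M^T; it is row-stochastic, so its
# largest eigenvalue is 1 and the eigenvector of the second-largest
# eigenvalue defines the ECI.
M_tilde = (M / diversity[:, None]) @ (M.T / ubiquity[:, None])

eigvals, eigvecs = np.linalg.eig(M_tilde)
order = np.argsort(eigvals.real)[::-1]
k2 = eigvecs[:, order[1]].real
eci = (k2 - k2.mean()) / k2.std()
if np.corrcoef(eci, diversity)[0, 1] < 0:   # fix the arbitrary eigen-sign
    eci = -eci
```

For this nested toy matrix the eigenvalues come out as 1, 1/4, 1/9 and 1/16, and the resulting ECI ranks the regions exactly by diversification.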

  19. Quantifying and Reducing Light Pollution

    Science.gov (United States)

    Gokhale, Vayujeet; Caples, David; Goins, Jordan; Herdman, Ashley; Pankey, Steven; Wren, Emily

    2018-06-01

We describe the current level of light pollution in and around Kirksville, Missouri and around Anderson Mesa near Flagstaff, Arizona. We quantify the amount of light that is projected up towards the sky, instead of the ground, using Unihedron sky quality meters installed at various locations. We also present results from DSLR photometry of several standard stars, and compare the photometric quality of the data collected at locations with varying levels of light pollution. Presently, light fixture shields and ‘warm-colored’ lights are being installed on Truman State University’s campus in order to reduce light pollution. We discuss the experimental procedure we use to test the effectiveness of the different light fixture shields in a controlled setting inside the Del and Norma Robison Planetarium. Apart from negatively affecting the quality of the night sky for astronomers, light pollution adversely affects migratory patterns of some animals and sleep patterns in humans, increases our carbon footprint, and wastes resources and money. This problem threatens to become particularly acute with the increasing use of outdoor LED lamps. We conclude with a call to action to all professional and amateur astronomers to act against the growing nuisance of light pollution.

  20. Quantifying meniscal kinematics in dogs.

    Science.gov (United States)

    Park, Brian H; Banks, Scott A; Pozzi, Antonio

    2017-11-06

The dog has been used extensively as an experimental model to study meniscal treatments such as meniscectomy, meniscal repair, transplantation, and regeneration. However, there is very little information on meniscal kinematics in the dog. This study used MR imaging to quantify in vitro meniscal kinematics in loaded dog knees in four distinct poses: extension, flexion, internal rotation, and external rotation. A new method was used to track the meniscal poses along the convex and posteriorly tilted tibial plateau. Meniscal displacements were large, displacing 13.5 and 13.7 mm posteriorly on average for the lateral and medial menisci during flexion (p = 0.90). The medial anterior horn and lateral posterior horn were the most mobile structures, showing average translations of 15.9 and 15.1 mm, respectively. Canine menisci are highly mobile and exhibit movements that correlate closely with the relative tibiofemoral positions. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.

  1. Quantifying the invasiveness of species

    Directory of Open Access Journals (Sweden)

    Robert Colautti

    2014-04-01

Full Text Available The success of invasive species has been explained by two contrasting but non-exclusive views: (i) intrinsic factors make some species inherently good invaders; (ii) species become invasive as a result of extrinsic ecological and genetic influences such as release from natural enemies, hybridization or other novel ecological and evolutionary interactions. These viewpoints are rarely distinguished but hinge on distinct mechanisms leading to different management scenarios. To improve tests of these hypotheses of invasion success we introduce a simple mathematical framework to quantify the invasiveness of species along two axes: (i) interspecific differences in performance among native and introduced species within a region, and (ii) intraspecific differences between populations of a species in its native and introduced ranges. Applying these equations to a sample dataset of occurrences of 1,416 plant species across Europe, Argentina, and South Africa, we found that many species are common in their native range but become rare following introduction; only a few introduced species become more common. Biogeographical factors limiting spread (e.g. biotic resistance, time of invasion) therefore appear more common than those promoting invasion (e.g. enemy release). Invasiveness, as measured by occurrence data, is better explained by inter-specific variation in invasion potential than by biogeographical changes in performance. We discuss how applying these comparisons to more detailed performance data would improve hypothesis testing in invasion biology and potentially lead to more efficient management strategies.
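The two axes described in the abstract can be illustrated with a toy occurrence-based calculation. The species names and all numbers below are invented placeholders, and the log-ratio form is an assumption of this sketch, not necessarily the paper's exact equations:

```python
import math

# occ[species][range] = fraction of survey plots occupied; numbers invented.
occ = {
    "Lythrum salicaria": {"native": 0.20, "introduced": 0.45},
    "Centaurea diffusa": {"native": 0.30, "introduced": 0.25},
}
native_community_mean = 0.15   # assumed mean occupancy of resident natives

results = {}
for sp, o in occ.items():
    # Axis (i), interspecific: performance in the introduced range relative
    # to the resident native community there.
    inter = math.log(o["introduced"] / native_community_mean)
    # Axis (ii), intraspecific: introduced-range vs. native-range performance.
    intra = math.log(o["introduced"] / o["native"])
    results[sp] = (inter, intra)
```

A species positive on both axes outperforms both the resident natives and its own native-range populations; a species positive on axis (i) but negative on axis (ii) was simply common everywhere to begin with, the distinction the framework is designed to draw.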

  2. QUANTIFYING LIFE STYLE IMPACT ON LIFESPAN

    Directory of Open Access Journals (Sweden)

    Antonello Lorenzini

    2012-12-01

Full Text Available A healthy diet, physical activity and avoiding dangerous habits such as smoking are effective ways of increasing health and lifespan. Although a significant portion of the world's population still suffers from malnutrition, especially children, the most common cause of death in the world today is non-communicable disease. Overweight and obesity significantly increase the relative risk for the most relevant non-communicable diseases: cardiovascular disease, type II diabetes and some cancers. Childhood overweight also seems to increase the likelihood of disease in adulthood through epigenetic mechanisms. This worrisome trend, now termed "globesity", will deeply impact society unless preventive strategies are put into effect. Researchers of the basic biology of aging have clearly established that animals with short lifespans live longer when their diet is calorie restricted. Although similar experiments carried out on rhesus monkeys, a longer-lived species more closely related to humans, yielded mixed results, overall the available scientific data suggest that keeping the body mass index in the "normal" range increases the chances of living a longer, healthier life. This can be successfully achieved both by maintaining a healthy diet and by engaging in physical activity. In this review we will try to quantify the relative impact of lifestyle choices on lifespan.

  3. Neural basis for generalized quantifier comprehension.

    Science.gov (United States)

    McMillan, Corey T; Clark, Robin; Moore, Peachie; Devita, Christian; Grossman, Murray

    2005-01-01

    Generalized quantifiers like "all cars" are semantically well understood, yet we know little about their neural representation. Our model of quantifier processing includes a numerosity device, operations that combine number elements and working memory. Semantic theory posits two types of quantifiers: first-order quantifiers identify a number state (e.g. "at least 3") and higher-order quantifiers additionally require maintaining a number state actively in working memory for comparison with another state (e.g. "less than half"). We used BOLD fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. Our findings showed that first-order and higher-order quantifiers both recruit right inferior parietal cortex, suggesting that a numerosity component contributes to quantifier comprehension. Moreover, only probes of higher-order quantifiers recruited right dorsolateral prefrontal cortex, suggesting involvement of executive resources like working memory. We also observed activation of thalamus and anterior cingulate that may be associated with selective attention. Our findings are consistent with a large-scale neural network centered in frontal and parietal cortex that supports comprehension of generalized quantifiers.

  4. Acúmulo de forragem durante a rebrotação de capim-xaraés submetido a três estratégias de desfolhação Herbage accumulation during regrowth of Xaraés palisadegrass submitted to rotational stocking strategies

    Directory of Open Access Journals (Sweden)

    Bruno Carneiro e Pedreira

    2009-04-01

Full Text Available Objetivou-se comparar a dinâmica do acúmulo de forragem em pastos de capim-xaraés [Brachiaria brizantha (A. Rich.) Stapf. cv. Xaraés] submetidos a três estratégias de desfolhação intermitente, uma baseada no calendário (pastejo a cada 28 dias) e duas na interceptação luminosa (IL), aos 95 ou 100% de interceptação luminosa. A massa de forragem (MF) pré-pastejo foi maior na estratégia de desfolhação aos 100% de interceptação de luz. Os piquetes pastejados aos 95% de interceptação de luz e a cada 28 dias apresentaram menores massas de forragem e não diferiram entre si na primavera. O pastejo aos 95% de interceptação de luz aumentou a proporção de folhas, apesar das menores massas pré-pastejo. O pastejo aos 100% de interceptação de luz resultou na menor porcentagem de folhas na massa de forragem, indicando que a maior produção total de forragem foi ocasionada pelo maior alongamento de colmos, o que está associado à competição por luz entre as plantas no interior do dossel. O prolongamento do período de descanso para além dos 95% de interceptação da luz incidente aumenta a massa de forragem na entrada dos animais no momento do pastejo (100% IL) ou a cada 28 dias durante o verão, porém, esse aumento é resultado do acúmulo de colmos e material morto e pode afetar negativamente o valor nutritivo da forragem produzida e o desempenho animal.The objective of this research was to describe comparatively the dynamics of herbage accumulation in Xaraés palisadegrass pastures [Brachiaria brizantha (A. Rich.) Stapf cv. Xaraés] submitted to rotational stocking managements, defined either by pre-graze light interception (LI) by the canopy (95% or 100% LI) or calendar days (28 d). Pre-graze forage mass (FM) was higher for 100% LI pastures. Pastures grazed at 95% LI and 28-d resulted in similar pre-graze FM in the spring, both lower than that of the 100%-LI treatment. Grazing at 95% LI resulted in higher leaf percentage in pre

  5. Quantifying Urban Groundwater in Environmental Field Observatories

    Science.gov (United States)

    Welty, C.; Miller, A. J.; Belt, K.; Smith, J. A.; Band, L. E.; Groffman, P.; Scanlon, T.; Warner, J.; Ryan, R. J.; Yeskis, D.; McGuire, M. P.

    2006-12-01

    Despite the growing footprint of urban landscapes and their impacts on hydrologic and biogeochemical cycles, comprehensive field studies of urban water budgets are few. The cumulative effects of urban infrastructure (buildings, roads, culverts, storm drains, detention ponds, leaking water supply and wastewater pipe networks) on temporal and spatial patterns of groundwater stores, fluxes, and flowpaths are poorly understood. The goal of this project is to develop expertise and analytical tools for urban groundwater systems that will inform future environmental observatory planning and that can be shared with research teams working in urban environments elsewhere. The work plan for this project draws on a robust set of information resources in Maryland provided by ongoing monitoring efforts of the Baltimore Ecosystem Study (BES), USGS, and the U.S. Forest Service working together with university scientists and engineers from multiple institutions. A key concern is to bridge the gap between small-scale intensive field studies and larger-scale and longer-term hydrologic patterns using synoptic field surveys, remote sensing, numerical modeling, data mining and visualization tools. Using the urban water budget as a unifying theme, we are working toward estimating the various elements of the budget in order to quantify the influence of urban infrastructure on groundwater. 
Efforts include: (1) comparison of base flow behavior from stream gauges in a nested set of watersheds at four different spatial scales from 0.8 to 171 km2, with diverse patterns of impervious cover and urban infrastructure; (2) synoptic survey of well water levels to characterize the regional water table; (3) use of airborne thermal infrared imagery to identify locations of groundwater seepage into streams across a range of urban development patterns; (4) use of seepage transects and tracer tests to quantify the spatial pattern of groundwater fluxes to the drainage network in selected subwatersheds; (5

  6. The Use of Gas Chromatography and Mass Spectrometry to Introduce General Chemistry Students to Percent Mass and Atomic Mass Calculations

    Science.gov (United States)

    Pfennig, Brian W.; Schaefer, Amy K.

    2011-01-01

    A general chemistry laboratory experiment is described that introduces students to instrumental analysis using gas chromatography-mass spectrometry (GC-MS), while simultaneously reinforcing the concepts of mass percent and the calculation of atomic mass. Working in small groups, students use the GC to separate and quantify the percent composition…
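The atomic-mass calculation this lab reinforces is an abundance-weighted average over an element's stable isotopes. A minimal sketch, using chlorine with isotope masses and abundances taken from standard isotope tables:

```python
# Atomic mass of chlorine as the abundance-weighted average of its
# two stable isotopes (values from standard isotope tables).

isotopes = [
    (34.9689, 0.7576),  # 35Cl: isotopic mass (u), natural abundance
    (36.9659, 0.2424),  # 37Cl
]

atomic_mass = sum(mass * abundance for mass, abundance in isotopes)
print(f"{atomic_mass:.2f} u")  # 35.45 u
```

The same weighted-average idea underlies the mass-percent part of the exercise, with GC peak areas standing in for component fractions.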

  7. Quantifying human vitamin kinetics using AMS

    Energy Technology Data Exchange (ETDEWEB)

    Hillegonds, D; Dueker, S; Ognibene, T; Buchholz, B; Lin, Y; Vogel, J; Clifford, A

    2004-02-19

    Tracing vitamin kinetics at physiologic concentrations has been hampered by a lack of quantitative sensitivity for chemically equivalent tracers that could be used safely in healthy people. Instead, elderly or ill volunteers were sought for studies involving pharmacologic doses with radioisotopic labels. These studies fail to be relevant in two ways: vitamins are inherently micronutrients, whose biochemical paths are saturated and distorted by pharmacological doses; and while vitamins remain important for health in the elderly or ill, their greatest effects may be in preventing slow and cumulative diseases by proper consumption throughout youth and adulthood. Neither the target dose nor the target population are available for nutrient metabolic studies through decay counting of radioisotopes at high levels. Stable isotopic labels are quantified by isotope ratio mass spectrometry at levels that trace physiologic vitamin doses, but the natural background of stable isotopes severely limits the time span over which the tracer is distinguishable. Indeed, study periods seldom ranged over a single biological mean life of the labeled nutrients, failing to provide data on the important final elimination phase of the compound. Kinetic data for the absorption phase is similarly rare in micronutrient research because the phase is rapid, requiring many consecutive plasma samples for accurate representation. However, repeated blood samples of sufficient volume for precise stable or radio-isotope quantitations consume an indefensible amount of the volunteer's blood over a short period. Thus, vitamin pharmacokinetics in humans has often relied on compartmental modeling based upon assumptions and tested only for the short period of maximal blood circulation, a period that poorly reflects absorption or final elimination kinetics except for the most simple models.

  8. Quantifying Sentiment and Influence in Blogspaces

    Energy Technology Data Exchange (ETDEWEB)

    Hui, Peter SY; Gregory, Michelle L.

    2010-07-25

The weblog, or blog, has become a popular form of social media, through which authors can write posts, which can in turn generate feedback in the form of user comments. When considered in totality, a collection of blogs can thus be viewed as an informal repository of mass sentiment and opinion. An obvious task of interest is to mine this collection to obtain some gauge of public sentiment over the wide variety of topics contained therein. However, the sheer size of the so-called blogosphere, combined with the fact that the subjects of posts can vary over a practically limitless number of topics, poses serious challenges for any meaningful analysis. Namely, the fact that largely anyone with access to the Internet can author their own blog raises the issue of credibility: should some blogs be considered more influential than others, and consequently, when gauging sentiment with respect to a topic, should some blogs be weighted more heavily than others? In addition, as new posts and comments can be made on an almost constant basis, any blog analysis algorithm must be able to handle such updates efficiently. In this paper, we give a formalization of the blog model. We give formal methods of quantifying sentiment and influence with respect to a hierarchy of topics, with the specific aim of facilitating the computation of a per-topic, influence-weighted sentiment measure. Finally, as efficiency is a specific end goal, we give upper bounds on the time required to update these values with new posts, showing that our analysis and algorithms are scalable.
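The influence-weighted sentiment measure described above can be sketched as a normalized weighted average. This is an illustrative sketch, not the paper's formalization; the weighting scheme and names are assumptions.

```python
# Sketch of a per-topic, influence-weighted sentiment score: each blog's
# sentiment contribution is weighted by that blog's influence.

def weighted_sentiment(posts):
    """posts: list of (sentiment in [-1, 1], influence >= 0) pairs."""
    total_influence = sum(w for _, w in posts)
    if total_influence == 0:
        return 0.0
    return sum(s * w for s, w in posts) / total_influence

# Two highly influential positive blogs outweigh two low-influence
# negative ones:
posts = [(0.8, 10.0), (0.6, 8.0), (-0.5, 1.0), (-0.7, 1.0)]
print(round(weighted_sentiment(posts), 2))  # 0.58
```

An unweighted average of the same sentiments would be 0.05, illustrating why the credibility question in the abstract matters for the final measure.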

  9. Quantifying forecast quality of IT business value

    NARCIS (Netherlands)

    Eveleens, J.L.; van der Pas, M.; Verhoef, C.

    2012-01-01

    This article discusses how to quantify the forecasting quality of IT business value. We address a common economic indicator often used to determine the business value of project proposals, the Net Present Value (NPV). To quantify the forecasting quality of IT business value, we develop a generalized
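The NPV indicator the article builds on is the standard discounted sum of cash flows. A minimal sketch with illustrative numbers:

```python
# Net Present Value: discount each cash flow back to t = 0 and sum.

def npv(rate, cash_flows):
    """cash_flows[0] occurs at t = 0 (e.g. the initial outlay, negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A project costing 100 with three yearly returns of 40, discounted at 10%:
print(round(npv(0.10, [-100, 40, 40, 40]), 2))  # -0.53
```

Forecast quality in the article's sense is then a matter of how closely such ex-ante NPV estimates track realized outcomes.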

  10. QUANTIFYING THE BIASES OF SPECTROSCOPICALLY SELECTED GRAVITATIONAL LENSES

    International Nuclear Information System (INIS)

    Arneson, Ryan A.; Brownstein, Joel R.; Bolton, Adam S.

    2012-01-01

    Spectroscopic selection has been the most productive technique for the selection of galaxy-scale strong gravitational lens systems with known redshifts. Statistically significant samples of strong lenses provide a powerful method for measuring the mass-density parameters of the lensing population, but results can only be generalized to the parent population if the lensing selection biases are sufficiently understood. We perform controlled Monte Carlo simulations of spectroscopic lens surveys in order to quantify the bias of lenses relative to parent galaxies in velocity dispersion, mass axis ratio, and mass-density profile. For parameters typical of the SLACS and BELLS surveys, we find (1) no significant mass axis ratio detection bias of lenses relative to parent galaxies; (2) a very small detection bias toward shallow mass-density profiles, which is likely negligible compared to other sources of uncertainty in this parameter; (3) a detection bias toward smaller Einstein radius for systems drawn from parent populations with group- and cluster-scale lensing masses; and (4) a lens-modeling bias toward larger velocity dispersions for systems drawn from parent samples with sub-arcsecond mean Einstein radii. This last finding indicates that the incorporation of velocity-dispersion upper limits of non-lenses is an important ingredient for unbiased analyses of spectroscopically selected lens samples. In general, we find that the completeness of spectroscopic lens surveys in the plane of Einstein radius and mass-density profile power-law index is quite uniform, up to a sharp drop in the region of large Einstein radius and steep mass-density profile, and hence that such surveys are ideally suited to the study of massive field galaxies.

  11. Bare quantifier fronting as contrastive topicalization

    Directory of Open Access Journals (Sweden)

    Ion Giurgea

    2015-11-01

Full Text Available I argue that indefinites (in particular bare quantifiers such as ‘something’, ‘somebody’, etc.), which are neither existentially presupposed nor in the restriction of a quantifier over situations, can undergo topicalization in a number of Romance languages (Catalan, Italian, Romanian, Spanish), but only if the sentence contains “verum” focus, i.e. focus on a high degree of certainty of the sentence. I analyze these indefinites as contrastive topics, using Büring’s (1999) theory (where the term ‘S-topic’ is used for what I call ‘contrastive topic’). I propose that the topic is evaluated in relation to a scalar set including generalized quantifiers such as {λP ∃x P(x), λP MANYx P(x), λP MOSTx P(x), λP ∀x P(x)} or {λP ∃x P(x), λP P(a), λP P(b), …}, and that the contrastive topic is the weakest generalized quantifier in this set. The verum focus, which is part of the “comment” that co-occurs with the “Topic”, introduces a set of alternatives including degrees of certainty of the assertion. The speaker asserts that his claim is certainly true or highly probable, contrasting it with stronger claims for which the degree of probability is unknown. This explains the observation that in downward-entailing contexts the fronted quantified DPs are headed by ‘all’ or ‘many’, whereas ‘some’, small numbers or ‘at least n’ appear in upward-entailing contexts. Unlike other cases of non-specific topics, which are property topics, these are quantifier topics: the topic part is a generalized quantifier, the comment is a property of generalized quantifiers. This explains the narrow scope of the fronted quantified DP.

  12. Quantifying solute transport processes: are chemically "conservative" tracers electrically conservative?

    Science.gov (United States)

    Singha, Kamini; Li, Li; Day-Lewis, Frederick D.; Regberg, Aaron B.

    2012-01-01

    The concept of a nonreactive or conservative tracer, commonly invoked in investigations of solute transport, requires additional study in the context of electrical geophysical monitoring. Tracers that are commonly considered conservative may undergo reactive processes, such as ion exchange, thus changing the aqueous composition of the system. As a result, the measured electrical conductivity may reflect not only solute transport but also reactive processes. We have evaluated the impacts of ion exchange reactions, rate-limited mass transfer, and surface conduction on quantifying tracer mass, mean arrival time, and temporal variance in laboratory-scale column experiments. Numerical examples showed that (1) ion exchange can lead to resistivity-estimated tracer mass, velocity, and dispersivity that may be inaccurate; (2) mass transfer leads to an overestimate in the mobile tracer mass and an underestimate in velocity when using electrical methods; and (3) surface conductance does not notably affect estimated moments when high-concentration tracers are used, although this phenomenon may be important at low concentrations or in sediments with high and/or spatially variable cation-exchange capacity. In all cases, colocated groundwater concentration measurements are of high importance for interpreting geophysical data with respect to the controlling transport processes of interest.
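The quantities evaluated in these experiments (tracer mass, mean arrival time, temporal variance) are the temporal moments of a concentration breakthrough curve. A generic sketch on synthetic data, not the authors' code:

```python
# Temporal moments of a tracer breakthrough curve C(t):
#   zeroth moment  ~ tracer mass
#   first (normalized) moment = mean arrival time
#   second central moment = temporal variance
import math

def trapz(y, x):
    """Trapezoidal integration over sampled points."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

def temporal_moments(t, c):
    m0 = trapz(c, t)
    mean = trapz([ti * ci for ti, ci in zip(t, c)], t) / m0
    var = trapz([(ti - mean) ** 2 * ci for ti, ci in zip(t, c)], t) / m0
    return m0, mean, var

# Synthetic Gaussian breakthrough centered at t = 4 with unit spread:
t = [i * 0.01 for i in range(1001)]
c = [math.exp(-0.5 * (ti - 4.0) ** 2) for ti in t]
m0, mean, var = temporal_moments(t, c)
print(round(mean, 2), round(var, 2))  # 4.0 1.0
```

The abstract's point is that moments computed from electrical-resistivity data can diverge from those computed from concentration data like this when ion exchange or mass transfer is active.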

  13. Entropy generation method to quantify thermal comfort

    Science.gov (United States)

    Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.

    2001-01-01

The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves developing an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and entropy generation. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. To verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well-established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite-element-based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining human thermal responses to different environmental conditions. The simulation output (human thermal responses) and the input data (environmental conditions) are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The OTCI values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values; the PMV values are generated by substituting the air temperatures and vapor pressures used in the computer simulation into the regression equation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions.
However, an experimental study

  14. Zero G Mass Measurement Device (ZGMMD), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The Zero G Mass Measurement Device (ZGMMD) will provide the ability to quantify the mass of objects up to 2,000 grams, including live animal specimens in a zero G...

  15. Quantify Risk to Manage Cost and Schedule

    National Research Council Canada - National Science Library

    Raymond, Fred

    1999-01-01

    Too many projects suffer from unachievable budget and schedule goals, caused by unrealistic estimates and the failure to quantify and communicate the uncertainty of these estimates to managers and sponsoring executives...

  16. New frontiers of quantified self 3

    DEFF Research Database (Denmark)

    Rapp, Amon; Cena, Federica; Kay, Judy

    2017-01-01

The Quantified Self (QS) field needs to start thinking about how situated needs may affect the use of self-tracking technologies. In this workshop we will focus on the idiosyncrasies of specific categories of users....

  17. Quantifying in-stream nitrate reaction rates using continuously-collected water quality data

    Science.gov (United States)

    Matthew Miller; Anthony Tesoriero; Paul Capel

    2016-01-01

    High frequency in situ nitrate data from three streams of varying hydrologic condition, land use, and watershed size were used to quantify the mass loading of nitrate to streams from two sources – groundwater discharge and event flow – at a daily time step for one year. These estimated loadings were used to quantify temporally-variable in-stream nitrate processing ...

  18. Mass discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Broeckman, A. [Rijksuniversiteit Utrecht (Netherlands)

    1978-12-15

In thermal ionization mass spectrometry, the phenomenon of mass discrimination has led to the use of a correction factor for isotope-ratio measurements. The correction factor is defined as the measured ratio divided by the true or accepted value of this ratio. In fact, this factor corrects for systematic errors of the whole procedure; however, mass discrimination is often attributed to the mass spectrometer alone.
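The correction described above can be sketched directly from its definition: calibrate the factor K on a certified standard, then divide subsequent measured ratios by K. The isotope-ratio values below are illustrative.

```python
# Mass-discrimination correction: K = measured ratio / true ratio,
# calibrated on a standard and applied to later measurements.

def correction_factor(measured_ratio, true_ratio):
    return measured_ratio / true_ratio

def correct(measured_ratio, k):
    return measured_ratio / k

# Calibrate K on a certified standard (here a 0.4% bias is simulated),
# then correct a sample measurement carrying the same bias:
k = correction_factor(measured_ratio=0.7219 * 1.004, true_ratio=0.7219)
print(round(correct(0.7300 * 1.004, k), 4))  # recovers 0.73
```

As the abstract notes, K absorbs systematic errors of the whole procedure, not only fractionation inside the spectrometer, so it is only valid for measurements made under the same conditions as the calibration.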

  19. Negative mass

    International Nuclear Information System (INIS)

    Hammond, Richard T

    2015-01-01

    Some physical aspects of negative mass are examined. Several unusual properties, such as the ability of negative mass to penetrate any armor, are analysed. Other surprising effects include the bizarre system of negative mass chasing positive mass, naked singularities and the violation of cosmic censorship, wormholes, and quantum mechanical results as well. In addition, a brief look into the implications for strings is given. (paper)

  20. Nominal Mass?

    Science.gov (United States)

    Attygalle, Athula B; Pavlov, Julius

    2017-08-01

The current IUPAC-recommended definition of the term "nominal mass," based on the most abundant naturally occurring stable isotope of an element, is flawed. We propose that nominal mass should be defined as the sum of the integer masses of the protons and neutrons in any chemical species. In this way, all isotopes and isotopologues can be assigned a definitive identifier.
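The proposed definition reduces to summing mass numbers (protons plus neutrons) over all atoms in a species; the example species below are illustrative.

```python
# Nominal mass under the proposed definition: the sum of mass numbers
# (integer proton + neutron counts) over all atoms in the species.

def nominal_mass(atoms):
    """atoms: list of (mass_number, count) pairs for a chemical species."""
    return sum(a * n for a, n in atoms)

# CO2 built from 12C and 16O:
print(nominal_mass([(12, 1), (16, 2)]))  # 44
# The same rule assigns a distinct value to the isotopologue 13C16O2:
print(nominal_mass([(13, 1), (16, 2)]))  # 45
```

Because every isotopologue gets its own integer, the ambiguity the authors criticize in the abundance-based definition does not arise.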

  1. Componentes da Produção de Forragem em Pastagens dos Capins Tanzânia e Mombaça Adubadas com Quatro Doses de NPK Components of Herbage Production of Tanzania and Mombaça Pastures Fertilized with Four Doses of NPK

    Directory of Open Access Journals (Sweden)

    Danilo Gusmão de Quadros

    2002-06-01

    height of residue of 30 cm. A complete randomized block design was used with treatments arranged in a 2 x 4 factorial with three field replications. The fertilization doses corresponded to the reduction of 30 % and the increase of 30 and 60% in relation to a standard dose of 145; 21.6; and 180 kg/ha of N, P2O5, and K2O, respectively (assuming the contents of 1.2, 0.08, and 1.2% of N, P, and K in DM, to reach an estimated DM production of 12000 kg/ha. There was a linear effect of fertilization doses on green DM (GDM before and after grazing. The cv. Mombaça exhibited higher herbage mass before and after grazing (9183 and 5279 kg/ha of GDM, respectively than the cv. Tanzania (6275 and 3808 kg/ha of GDM, respectively. The proportion of leaf blade in the GDM available was lower in the cv. Tanzania (51% than in the cv. Mombaça (54 %. The tiller density was not affected by the fertilization doses. However, the increase in tiller weight due to fertilizer doses was responsible for the higher GDM production. The senesced DM did not vary between cultivars, with a mean value of 3108 kg/ha. In general, higher rates of fertilization resulted in greater GDM accumulation rate and higher losses of GDM by trampling. The cv. Mombaça showed a greater response potential to fertilization than the cv. Tanzania with stocking rates of 6.2 and 4.0 UA/ha, respectively.

  2. Quantifying graininess of glossy food products

    DEFF Research Database (Denmark)

    Møller, Flemming; Carstensen, Jens Michael

The sensory quality of yoghurt can be altered when changing the milk composition or processing conditions. Part of the sensory quality may be assessed visually. A non-contact method for quantifying surface gloss and grains in yoghurt is described. It was found that the standard

  3. Quantifying antimicrobial resistance at veal calf farms

    NARCIS (Netherlands)

    Bosman, A.B.; Wagenaar, J.A.; Stegeman, A.; Vernooij, H.; Mevius, D.J.

    2012-01-01

    This study was performed to determine a sampling strategy to quantify the prevalence of antimicrobial resistance on veal calf farms, based on the variation in antimicrobial resistance within and between calves on five farms. Faecal samples from 50 healthy calves (10 calves/farm) were collected. From

  4. QS Spiral: Visualizing Periodic Quantified Self Data

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Cuttone, Andrea; Jørgensen, Sune Lehmann

    2013-01-01

    In this paper we propose an interactive visualization technique QS Spiral that aims to capture the periodic properties of quantified self data and let the user explore those recurring patterns. The approach is based on time-series data visualized as a spiral structure. The interactivity includes ...

  5. Quantifying recontamination through factory environments - a review

    NARCIS (Netherlands)

    Asselt-den Aantrekker, van E.D.; Boom, R.M.; Zwietering, M.H.; Schothorst, van M.

    2003-01-01

    Recontamination of food products can be the origin of foodborne illnesses and should therefore be included in quantitative microbial risk assessment (MRA) studies. In order to do this, recontamination should be quantified using predictive models. This paper gives an overview of the relevant

  6. Quantifying quantum coherence with quantum Fisher information.

    Science.gov (United States)

    Feng, X N; Wei, L F

    2017-11-14

Quantum coherence is an old but important concept in quantum mechanics, and it is now regarded as a necessary resource for quantum information processing and quantum metrology. However, the question of how to quantify quantum coherence has received attention only recently (see, e.g., Baumgratz et al., Phys. Rev. Lett. 113, 140401 (2014)). In this paper we verify that the well-known quantum Fisher information (QFI) can be utilized to quantify quantum coherence, as it satisfies monotonicity under the typical incoherent operations and convexity under the mixing of quantum states. Differing from most purely axiomatic methods, quantifying quantum coherence by QFI is experimentally testable, as the bound of the QFI is practically measurable. The validity of our proposal is demonstrated with the typical phase-damping and depolarizing evolution processes of a generic single-qubit state, and also by comparison with other quantifying methods proposed previously.

  7. Interbank exposures: quantifying the risk of contagion

    OpenAIRE

    C. H. Furfine

    1999-01-01

    This paper examines the likelihood that failure of one bank would cause the subsequent collapse of a large number of other banks. Using unique data on interbank payment flows, the magnitude of bilateral federal funds exposures is quantified. These exposures are used to simulate the impact of various failure scenarios, and the risk of contagion is found to be economically small.

  8. Quantifying Productivity Gains from Foreign Investment

    NARCIS (Netherlands)

    C. Fons-Rosen (Christian); S. Kalemli-Ozcan (Sebnem); B.E. Sorensen (Bent); C. Villegas-Sanchez (Carolina)

    2013-01-01

We quantify the causal effect of foreign investment on total factor productivity (TFP) using a new global firm-level database. Our identification strategy relies on exploiting the difference in the amount of foreign investment by financial and industrial investors and simultaneously

  9. Power Curve Measurements, quantify the production increase

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    The purpose of this report is to quantify the production increase on a given turbine with respect to another given turbine. The used methodology is the “side by side” comparison method, provided by the client. This method involves the use of two neighboring turbines and it is based...

  10. Quantifying capital goods for waste landfilling

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Stentsøe, Steen; Willumsen, Hans Christian

    2013-01-01

    Materials and energy used for construction of a hill-type landfill of 4 million m3 were quantified in detail. The landfill is engineered with a liner and leachate collections system, as well as a gas collection and control system. Gravel and clay were the most common materials used, amounting...

  11. Quantifying interspecific coagulation efficiency of phytoplankton

    DEFF Research Database (Denmark)

    Hansen, J.L.S.; Kiørboe, Thomas

    1997-01-01

. nordenskjoeldii. Mutual coagulation between Skeletonema costatum and the non-sticky cells of Ditylum brightwellii also proceeded with half the efficiency of S. costatum alone. The latex beads were suitable to be used as 'standard particles' to quantify the ability of phytoplankton to prime aggregation...

  12. New frontiers of quantified self 2

    DEFF Research Database (Denmark)

    Rapp, Amon; Cena, Federica; Kay, Judy

    2016-01-01

While the Quantified Self (QS) community is described in terms of "self-knowledge through numbers", people are increasingly demanding value and meaning. In this workshop we aim at refocusing the QS debate on the value of data for providing new services....

  13. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  14. Reliability-How to Quantify and Improve?

    Indian Academy of Sciences (India)

Reliability - How to Quantify and Improve? - Improving the Health of Products. N K Srinivasan. General Article. Resonance – Journal of Science Education, Volume 5, Issue 5, May 2000, pp. 55-63.

  15. NEUTRINO MASS

    OpenAIRE

    Kayser, Boris

    1988-01-01

This is a review article about the most recent developments in the field of neutrino mass. The first part of the review introduces the idea of neutrino masses and mixing angles, summarizes the most recent experimental data, then discusses the experimental prospects and challenges in this area. The second part of the review discusses the implications of these results for particle physics and cosmology, including the origin of neutrino mass, the see-saw mechanism and sequential dominance, and la...

  16. Neutrino mass

    International Nuclear Information System (INIS)

    Robertson, R.G.H.

    1992-01-01

    Despite intensive experimental work since the neutrino's existence was proposed by Pauli 60 years ago, and its first observation by Reines and Cowan almost 40 years ago, the neutrino's fundamental properties remain elusive. Among those properties are the masses of the three known flavors, properties under charge conjugation, parity and time-reversal, and static and dynamic electromagnetic moments. Mass is perhaps the most fundamental, as it constrains the other properties. The present status of the search for neutrino mass is briefly reviewed

  17. Spatially quantifying the leadership effectiveness in collective behavior

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Haitao [State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074 (China); Wang Ning [Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Chen, Michael Z Q [Department of Mechanical Engineering, University of Hong Kong, Pok Fu Lam Road, Hong Kong (Hong Kong); Su Riqi; Zhou Tao [Department of Modern Physics, University of Science and Technology of China, Hefei 230026 (China); Zhou Changsong, E-mail: zht@mail.hust.edu.cn, E-mail: cszhou@hkbu.edu.hk, E-mail: zhutou@ustc.edu [Department of Physics, Centre for Nonlinear Studies, and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Hong Kong Baptist University, Kowloon Tong (Hong Kong)

    2010-12-15

    Among natural biological flocks/swarms or mass social activities, when the collective behavior of the followers has been dominated by the direction or opinion of one leader group, it seems difficult for later-coming leaders to reverse the orientation of the mass followers, especially when they are in the quantitative minority. This paper, however, reports a counter-intuitive phenomenon, Following the Later-coming Minority, provided that the later-comers obey a favorable distribution pattern that enables them to spread their influence to as many followers as possible within a given time, and that they are dense enough to govern the local followers they can influence directly from the beginning. We introduce a discriminant index to quantify the whole group's orientation under competing leaderships, with which the eventual orientation of the mass followers can be predicted before launching the real dynamical procedure. From the application point of view, this leadership effectiveness index also helps us to design an economical way for the minority of later-coming leaders to defeat the dominating majority leaders solely by optimizing their spatial distribution pattern, provided that the premeditated goal is available. Our investigation provides insights into effective leadership in biological systems, with meaningful implications for social and industrial applications.

  18. Spatially quantifying the leadership effectiveness in collective behavior

    International Nuclear Information System (INIS)

    Zhang Haitao; Wang Ning; Chen, Michael Z Q; Su Riqi; Zhou Tao; Zhou Changsong

    2010-01-01

    Among natural biological flocks/swarms or mass social activities, when the collective behavior of the followers has been dominated by the direction or opinion of one leader group, it seems difficult for later-coming leaders to reverse the orientation of the mass followers, especially when they are in the quantitative minority. This paper, however, reports a counter-intuitive phenomenon, Following the Later-coming Minority, provided that the later-comers obey a favorable distribution pattern that enables them to spread their influence to as many followers as possible within a given time, and that they are dense enough to govern the local followers they can influence directly from the beginning. We introduce a discriminant index to quantify the whole group's orientation under competing leaderships, with which the eventual orientation of the mass followers can be predicted before launching the real dynamical procedure. From the application point of view, this leadership effectiveness index also helps us to design an economical way for the minority of later-coming leaders to defeat the dominating majority leaders solely by optimizing their spatial distribution pattern, provided that the premeditated goal is available. Our investigation provides insights into effective leadership in biological systems, with meaningful implications for social and industrial applications.

  19. Mass Society

    DEFF Research Database (Denmark)

    Borch, Christian

    2017-01-01

    Mass society is a societal diagnosis that emphasizes – usually in a pejorative, modernity-critical manner – a series of traits allegedly associated with modern society, such as the leveling of individuality, moral decay, alienation, and isolation. As such, the notion of mass society generalizes the negative features usually ascribed by late nineteenth-century crowd psychology to spontaneous crowds, and attributes these to the entire social fabric. However, in contrast to crowd psychology, theorists of mass society often place greater emphasis on how capitalism, technological advances, or demographic developments condition such negative features, and some theorists argue that mass society produces a propensity to totalitarianism. Discussions of mass society culminated in the early and mid-twentieth century.

  20. Quantifying volume loss from ice cliffs on debris-covered glaciers using high-resolution terrestrial and aerial photogrammetry

    NARCIS (Netherlands)

    Brun, Fanny; Buri, Pascal; Miles, Evan S.; Wagnon, Patrick; Steiner, J.F.; Berthier, Etienne; Ragettli, S.; Kraaijenbrink, P.D.A.; Immerzeel, W.W.; Pellicciotti, Francesca

    Mass losses originating from supraglacial ice cliffs at the lower tongues of debris-covered glaciers are a potentially large component of the mass balance, but have rarely been quantified. In this study, we develop a method to estimate ice cliff volume losses based on high-resolution topographic

  1. Planck and the local Universe: quantifying the tension

    CERN Document Server

    Verde, Licia; Protopapas, Pavlos

    2013-01-01

    We use the latest Planck constraints, in particular the constraints on the derived parameters (the Hubble constant and the age of the Universe) for the local universe, and compare them with local measurements of the same quantities. We propose a way to quantify whether cosmological parameter constraints from two different experiments are in tension. Our statistic, T, is an evidence ratio and can therefore be interpreted on the widely used Jeffreys scale. We find that, in the framework of the LCDM model, the Planck-inferred two-dimensional joint posterior distribution for the Hubble constant and age of the Universe is in "strong" tension with the local measurements; the odds are ~1:50. We explore several possibilities for explaining this tension and examine the consequences both in terms of unknown errors and deviations from the LCDM model. In some one-parameter LCDM model extensions the tension is reduced, whereas in other extensions it is instead increased. In particular, small total neutrino masses ...

  2. Quantifying the global cellular thiol-disulfide status

    DEFF Research Database (Denmark)

    Hansen, Rosa E; Roth, Doris; Winther, Jakob R

    2009-01-01

    It is widely accepted that the redox status of protein thiols is of central importance to protein structure and folding and that glutathione is an important low-molecular-mass redox regulator. However, the total cellular pools of thiols and disulfides and their relative abundance have never been determined. In this study, we have assembled a global picture of the cellular thiol-disulfide status in cultured mammalian cells. We have quantified the absolute levels of protein thiols, protein disulfides, and glutathionylated protein (PSSG) in all cellular protein, including membrane proteins. These data ... cell types. However, when cells are exposed to a sublethal dose of the thiol-specific oxidant diamide, PSSG levels increase to >15% of all protein cysteine. Glutathione is typically characterized as the "cellular redox buffer"; nevertheless, our data show that protein thiols represent a larger active...

  3. Quantifying Stock Return Distributions in Financial Markets.

    Science.gov (United States)

    Botta, Federico; Moat, Helen Susannah; Stanley, H Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power-law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales.
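
The two ingredients of the analysis described above — logarithmic price changes and the empirical tail of their distribution — can be sketched as follows. This is a minimal illustration on a synthetic random-walk price series, not the authors' dataset or their power-law fitting procedure:

```python
import math
import random

def log_returns(prices):
    """Logarithmic price changes r_t = ln(p_t / p_{t-1})."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def tail_survival(returns, thresholds):
    """Empirical survival function P(|r| > x) of the absolute returns."""
    abs_r = [abs(r) for r in returns]
    n = len(abs_r)
    return [sum(1 for r in abs_r if r > x) / n for x in thresholds]

# synthetic price series for illustration only (Gaussian log-returns)
random.seed(0)
prices = [100.0]
for _ in range(1000):
    prices.append(prices[-1] * math.exp(random.gauss(0, 0.01)))

r = log_returns(prices)
surv = tail_survival(r, [0.0, 0.01, 0.02])  # monotonically decreasing
```

Plotting `surv` against the thresholds on log-log axes is the usual way to eyeball power-law versus exponential tail decay.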

  4. Quantifying Short-Chain Chlorinated Paraffin Congener Groups.

    Science.gov (United States)

    Yuan, Bo; Bogdal, Christian; Berger, Urs; MacLeod, Matthew; Gebbink, Wouter A; Alsberg, Tomas; de Wit, Cynthia A

    2017-09-19

    Accurate quantification of short-chain chlorinated paraffins (SCCPs) poses an exceptional challenge to analytical chemists. SCCPs are complex mixtures of chlorinated alkanes with variable chain length and chlorination level; congeners with a fixed chain length (n) and number of chlorines (m) are referred to as a "congener group" C_nCl_m. Recently, we resolved individual C_nCl_m by mathematically deconvolving soft-ionization high-resolution mass spectra of SCCP mixtures. Here we extend the method to quantifying C_nCl_m by introducing C_nCl_m-specific response factors (RFs) that are calculated from 17 SCCP chain-length standards with a single carbon chain length and variable chlorination level. The signal pattern of each standard is measured on APCI-QTOF-MS. RFs of each C_nCl_m are obtained by pairwise optimization of the normal distribution's fit to the signal patterns of the 17 chain-length standards. The method was verified by quantifying SCCP technical mixtures and spiked environmental samples with accuracies of 82-123% and 76-109%, respectively. The absolute differences between calculated and manufacturer-reported chlorination degrees were -0.9 to 1.0 %Cl for SCCP mixtures of 49-71 %Cl. The quantification method has been replicated with ECNI magnetic sector MS and ECNI-Q-Orbitrap-MS. C_nCl_m concentrations determined with the three instruments were highly correlated (R^2 > 0.90) with each other.

  5. A masking index for quantifying hidden glitches

    OpenAIRE

    Berti-Equille, Laure; Loh, J. M.; Dasu, T.

    2015-01-01

    Data glitches are errors in a dataset. They are complex entities that often span multiple attributes and records. When they co-occur in data, the presence of one type of glitch can hinder the detection of another type of glitch. This phenomenon is called masking. In this paper, we define two important types of masking and propose a novel, statistically rigorous indicator called masking index for quantifying the hidden glitches. We outline four cases of masking: outliers masked by missing valu...

  6. How are the catastrophical risks quantifiable

    International Nuclear Information System (INIS)

    Chakraborty, S.

    1985-01-01

    For the assessment and evaluation of industrial risks, the question must be asked: how are catastrophic risks quantifiable? Typical real catastrophic risks and risk assessments based on modelling assumptions are set against each other in order to put the risks into proper perspective. However, society is risk-averse when a large-scale industrial facility has the potential for severe accidents, even if their probability of occurrence is extremely low. (orig.) [de]

  7. Quantifying Distributional Model Risk via Optimal Transport

    OpenAIRE

    Blanchet, Jose; Murthy, Karthyek R. A.

    2016-01-01

    This paper deals with the problem of quantifying the impact of model misspecification when computing general expected values of interest. The methodology that we propose is applicable in great generality, in particular, we provide examples involving path dependent expectations of stochastic processes. Our approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured in terms ...

  8. Quantifying Anthropogenic Stress on Groundwater Resources

    OpenAIRE

    Ashraf, Batool; AghaKouchak, Amir; Alizadeh, Amin; Mousavi Baygi, Mohammad; R. Moftakhari, Hamed; Mirchi, Ali; Anjileli, Hassan; Madani, Kaveh

    2017-01-01

    This study explores a general framework for quantifying anthropogenic influences on groundwater budget based on normalized human outflow (hout) and inflow (hin). The framework is useful for sustainability assessment of groundwater systems and allows investigating the effects of different human water abstraction scenarios on the overall aquifer regime (e.g., depleted, natural flow-dominated, and human flow-dominated). We apply this approach to selected regions in the USA, Germany and Iran to e...

  9. Mass hysteria

    CERN Document Server

    Hellemans, Alexander

    2004-01-01

    Considerable research is being undertaken to identify the Higgs particle that is believed to give things their mass. According to the standard model, what we call mass is really an indication of how strongly particles interact with an invisible syrupy substance called the Higgs field. Quantum mechanics says that the mass-giving field can also be thought of as a sea of electrically neutral Higgs particles that should be dislodged in collisions between subatomic particles with high enough energies. Particle physicists expect the Higgs to exist only for a fleeting moment before decaying into other particles, which are caught in a detector. (Edited abstract).

  10. Quantifier spreading: children misled by ostensive cues

    Directory of Open Access Journals (Sweden)

    Katalin É. Kiss

    2017-04-01

    This paper calls attention to a methodological problem of acquisition experiments. It shows that the economy of the stimulus employed in child language experiments may lend an increased ostensive effect to the message communicated to the child. Thus, when the visual stimulus in a sentence-picture matching task is a minimal model abstracting away from the details of the situation, children often regard all the elements of the stimulus as ostensive clues to be represented in the corresponding sentence. The use of such minimal stimuli is mistaken when the experiment aims to test whether or not a certain element of the stimulus is relevant for the linguistic representation or interpretation. The paper illustrates this point with an experiment involving quantifier spreading. It is claimed that children find a universally quantified sentence like 'Every girl is riding a bicycle' to be a false description of a picture showing three girls riding bicycles and a solo bicycle because they are misled to believe that all the elements in the visual stimulus are relevant, hence all of them are to be represented by the corresponding linguistic description. When the iconic drawings were replaced by photos taken in a natural environment rich in accidental details, the occurrence of quantifier spreading was radically reduced. It is shown that an extra object in the visual stimulus can lead to the rejection of the sentence also in the case of sentences involving no quantification, which gives further support to the claim that the source of the problem is not (or not only) the grammatical or cognitive difficulty of quantification but the unintended ostensive effect of the extra object. This article is part of the special collection: Acquisition of Quantification

  11. Can herbage nitrogen fractionation in Lolium perenne be improved by herbage management?

    NARCIS (Netherlands)

    Hoekstra, N.J.; Struik, P.C.; Lantinga, E.A.; Amburgh, van M.E.; Schulte, R.P.O.

    2008-01-01

    The high degradability of grass protein is an important factor in the low nitrogen (N) utilization of grazing bovines in intensive European grassland systems. We tested the hypothesis that protein degradability as measured by the Cornell Net Carbohydrate and Protein System (CNCPS) protein

  12. Quantifying information leakage of randomized protocols

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Malacaria, Pasquale

    2015-01-01

    The quantification of information leakage provides a quantitative evaluation of the security of a system. We propose the usage of Markovian processes to model deterministic and probabilistic systems. By using a methodology generalizing the lattice of information approach, we model refined attackers capable of observing the internal behavior of the system, and quantify the information leakage of such systems. We also use our method to obtain an algorithm for the computation of channel capacity from our Markovian models. Finally, we show how to use the method to analyze timed and non-timed attacks...

  13. Characterization of autoregressive processes using entropic quantifiers

    Science.gov (United States)

    Traversaro, Francisco; Redelico, Francisco O.

    2018-01-01

    The aim of this contribution is to introduce a novel information plane, the causal-amplitude informational plane. As previous works seem to indicate, the Bandt and Pompe methodology for estimating entropy does not allow one to distinguish between probability distributions, which could be fundamental for simulation or probability analysis purposes. Once a time series is identified as stochastic by the causal complexity-entropy informational plane, the novel causal-amplitude plane gives a deeper understanding of the time series, quantifying both the autocorrelation strength and the probability distribution of the data extracted from the generating processes. Two examples are presented, one from a climate change model and the other from financial markets.
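
The Bandt and Pompe estimator mentioned above maps short windows of a time series to ordinal patterns and takes the Shannon entropy of the pattern frequencies. A minimal sketch (the normalization and tie handling here are simplifying assumptions):

```python
import math
import random
from collections import Counter

def permutation_entropy(series, order=3):
    """Normalized Bandt-Pompe permutation entropy in [0, 1]."""
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern: argsort of the window (ties broken by position)
        patterns[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))  # divide by maximum entropy

random.seed(0)
h_monotone = permutation_entropy(list(range(200)))                 # one pattern only
h_noise = permutation_entropy([random.random() for _ in range(2000)])
```

A monotone series produces a single ordinal pattern (entropy 0), while white noise spreads mass over all order! patterns (entropy near 1), which is exactly the axis the complexity-entropy plane is built on.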

  14. Quantifying Heuristic Bias: Anchoring, Availability, and Representativeness.

    Science.gov (United States)

    Richie, Megan; Josephson, S Andrew

    2018-01-01

    Construct: Authors examined whether a new vignette-based instrument could isolate and quantify heuristic bias. Heuristics are cognitive shortcuts that may introduce bias and contribute to error. There is no standardized instrument available to quantify heuristic bias in clinical decision making, limiting future study of educational interventions designed to improve calibration of medical decisions. This study presents validity data to support a vignette-based instrument quantifying bias due to the anchoring, availability, and representativeness heuristics. Participants completed questionnaires requiring assignment of probabilities to potential outcomes of medical and nonmedical scenarios. The instrument randomly presented scenarios in one of two versions: Version A, encouraging heuristic bias, and Version B, worded neutrally. The primary outcome was the difference in probability judgments for Version A versus Version B scenario options. Of 167 participants recruited, 139 enrolled. Participants assigned significantly higher mean probability values to Version A scenario options (M = 9.56, SD = 3.75) than Version B (M = 8.98, SD = 3.76), t(1801) = 3.27, p = .001. This result remained significant analyzing medical scenarios alone (Version A, M = 9.41, SD = 3.92; Version B, M = 8.86, SD = 4.09), t(1204) = 2.36, p = .02. Analyzing medical scenarios by heuristic revealed a significant difference between Version A and B for availability (Version A, M = 6.52, SD = 3.32; Version B, M = 5.52, SD = 3.05), t(404) = 3.04, p = .003, and representativeness (Version A, M = 11.45, SD = 3.12; Version B, M = 10.67, SD = 3.71), t(396) = 2.28, p = .02, but not anchoring. Stratifying by training level, students maintained a significant difference between Version A and B medical scenarios (Version A, M = 9.83, SD = 3.75; Version B, M = 9.00, SD = 3.98), t(465) = 2.29, p = .02, but not residents or attendings. Stratifying by heuristic and training level, availability maintained

  15. An index for quantifying flocking behavior.

    Science.gov (United States)

    Quera, Vicenç; Herrando, Salvador; Beltran, Francesc S; Salas, Laura; Miñano, Meritxell

    2007-12-01

    One of the classic research topics in adaptive behavior is the collective displacement of groups of organisms such as flocks of birds, schools of fish, herds of mammals, and crowds of people. However, most agent-based simulations of group behavior do not provide a quantitative index for determining the point at which the flock emerges. An index of the aggregation of moving individuals was developed, and an example was provided of how it can be used to quantify the degree to which a group of moving individuals actually forms a flock.
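
The abstract does not reproduce the index itself; as an illustration of the same idea — a single number for how aggregated a set of agents is — here is a Clark-Evans-style nearest-neighbour ratio, a standard aggregation measure and not necessarily the authors' index:

```python
import math
import random

def aggregation_index(positions, area):
    """Clark-Evans ratio: observed mean nearest-neighbour distance divided
    by the value expected for a uniform random scatter over `area`.
    Values well below 1 indicate clustering (a flock-like aggregation)."""
    n = len(positions)
    mean_nn = sum(
        min(math.dist(p, q) for q in positions if q is not p)
        for p in positions
    ) / n
    expected = 0.5 / math.sqrt(n / area)  # expectation for a Poisson pattern
    return mean_nn / expected

random.seed(2)
# 50 agents packed into a 1x1 patch of a 100x100 arena -> strongly clustered
flock = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(50)]
index = aggregation_index(flock, area=100 * 100)  # well below 1
```

Tracking such an index over simulation time gives a curve whose drop below some threshold can serve as the "point at which the flock emerges".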

  16. A Generalizable Methodology for Quantifying User Satisfaction

    Science.gov (United States)

    Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

    Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction, and discuss how they can be exploited to improve user experience and optimize resource allocation.
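
A survival-analysis treatment of session times can be sketched with the nonparametric Kaplan-Meier estimator; the paper's actual models (regressions on performance factors) are richer, so this shows only the underlying idea. Sessions still running at measurement time are treated as censored:

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier estimate of the survival curve S(t) from session
    lengths. observed[i] is False if session i was still running
    when measurement stopped (right-censored)."""
    events = sorted(set(d for d, o in zip(durations, observed) if o))
    s, curve = 1.0, []
    for t in events:
        at_risk = sum(1 for d in durations if d >= t)          # still in session
        ends = sum(1 for d, o in zip(durations, observed) if d == t and o)
        s *= 1 - ends / at_risk
        curve.append((t, s))
    return curve

# toy data: session lengths in minutes; False = censored (still running)
sessions = [5, 8, 8, 12, 20, 20]
ended = [True, True, False, True, True, False]
curve = kaplan_meier(sessions, ended)
```

The estimated survival probability drops step-wise at each observed session end; comparing curves under different network conditions is how session times proxy for satisfaction.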

  17. Quantifying the efficiency of river regulation

    Directory of Open Access Journals (Sweden)

    R. Rödel

    2005-01-01

    Dam-affected hydrologic time series give rise to uncertainties when they are used for calibrating large-scale hydrologic models or for analysing runoff records. It is therefore necessary to identify and to quantify the impact of impoundments on runoff time series. Two different approaches were employed. The first, classic approach compares the volume of the dams that are located upstream from a station with the annual discharge. The catchment areas of the stations are calculated and then related to geo-referenced dam attributes. The paper introduces a data set of geo-referenced dams linked with 677 gauging stations in Europe. Second, the intensity of the impoundment impact on runoff time series can be quantified more exactly and directly when long-term runoff records are available. Dams cause a change in the variability of flow regimes. This effect can be measured using the model of linear single storage. The dam-caused storage change ΔS can be assessed through the volume of the emptying process between two flow regimes. As an example, the storage change ΔS is calculated for regulated long-term series of the Luleälven in northern Sweden.
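
The linear single-storage model assumes storage proportional to outflow, S = k·Q, which gives an exponential recession Q(t) = Q0·exp(−t/k). A sketch of how k and the storage change ΔS between two flow regimes could then be estimated (illustrative only, not the paper's calibration procedure):

```python
import math

def recession_k(q0, q1, dt):
    """Storage constant k of a linear reservoir S = k*Q, fitted from two
    discharges q0 -> q1 observed dt apart on a recession limb,
    since Q(t) = Q0 * exp(-t/k)."""
    return dt / math.log(q0 / q1)

def storage_change(k, q_regime_1, q_regime_2):
    """Storage change between two flow regimes: dS = k * (Q1 - Q2)."""
    return k * (q_regime_1 - q_regime_2)

# discharge halving in 10 days implies k = 10 / ln 2 (about 14.4 days)
k = recession_k(100.0, 50.0, 10.0)
delta_s = storage_change(k, 80.0, 60.0)
```

Units follow from the inputs: with Q in m^3/s and dt in days, k is in days and ΔS in (m^3/s)·days.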

  18. Quantifying meta-correlations in financial markets

    Science.gov (United States)

    Kenett, Dror Y.; Preis, Tobias; Gur-Gershgoren, Gitit; Ben-Jacob, Eshel

    2012-08-01

    Financial markets are modular multi-level systems, in which the relationships between the individual components are not constant in time. Sudden changes in these relationships significantly affect the stability of the entire system, and vice versa. Our analysis is based on historical daily closing prices of the 30 components of the Dow Jones Industrial Average (DJIA) from March 15th, 1939 until December 31st, 2010. We quantify the correlation among these components by determining Pearson correlation coefficients, to investigate whether the mean correlation of the entire portfolio can be used as a precursor for changes in the index return. To this end, we quantify the meta-correlation - the correlation of mean correlation and index return. We find that changes in index returns are significantly correlated with changes in mean correlation. Furthermore, we study the relationship between the index return and correlation volatility - the standard deviation of correlations for a given time interval. This parameter provides further evidence of the effect of the index on market correlations and their fluctuations. Our empirical findings provide new information and quantification of the index leverage effect, and have implications for risk management, portfolio optimization, and the increased stability of financial markets.
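
The two building blocks — mean pairwise correlation over a rolling window, and the correlation of that series with the index return (the meta-correlation) — can be sketched in plain Python; the windowing conventions below are assumptions, not the exact choices of the study:

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def mean_correlation(stock_returns, window, t):
    """Mean pairwise correlation of all stocks over the window ending at t."""
    cols = [series[t - window:t] for series in stock_returns]
    pairs = [(i, j) for i in range(len(cols)) for j in range(i + 1, len(cols))]
    return statistics.fmean(pearson(cols[i], cols[j]) for i, j in pairs)

def meta_correlation(stock_returns, index_returns, window):
    """Correlation between the rolling mean correlation and the rolling
    mean index return (this pairing is an illustrative assumption)."""
    ts = range(window, len(index_returns) + 1)
    mean_c = [mean_correlation(stock_returns, window, t) for t in ts]
    idx = [statistics.fmean(index_returns[t - window:t]) for t in ts]
    return pearson(mean_c, idx)
```

On real data the interesting quantity is how `mean_correlation` co-moves with index returns; correlation volatility would be the windowed standard deviation of the pairwise correlations instead of their mean.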

  19. Neutrino mass?

    International Nuclear Information System (INIS)

    Kayser, B.

    1992-01-01

    After arguing that we should be looking for evidence of neutrino mass, we illustrate the possible consequences of neutrino mass and mixing. We then turn to the question of whether neutrinos are their own antiparticles, and to the process which may answer this question: neutrinoless double beta decay. Next, we review the proposed Mikheyev-Smirnov-Wolfenstein solution to the solar neutrino problem, and discuss models which can generate neutrino electromagnetic moments large enough to play a role in the sun. Finally, we consider how the possible 17 keV neutrino, if real, would fit in with everything we know about neutrinos. (orig.)

  20. Mass metrology

    CERN Document Server

    Gupta, S V

    2012-01-01

    This book presents the practical aspects of mass measurements. Concepts of gravitational, inertial and conventional mass and details of the variation of the acceleration of gravity are described. The Metric Convention, the International Prototype Kilogram and the BIPM standards are described. The effect of a change of gravity on the indication of electronic balances is derived with respect to latitude, altitude and earth topography. The classification of weights by OIML is discussed. Maximum permissible errors in different categories of weights prescribed by national and international organizations are p

  1. Quantifying climate risk - the starting point

    International Nuclear Information System (INIS)

    Fairweather, Helen; Luo, Qunying; Liu, De Li; Wiles, Perry

    2007-01-01

    Full text: All natural systems have evolved to their current state as a result, inter alia, of the climate in which they developed. Similarly, man-made systems (such as agricultural production) have developed to suit the climate experienced over the last 100 or so years. The capacity of different systems to adapt to changes in climate that are outside those experienced previously is largely unknown. This results in considerable uncertainty when predicting climate change impacts. However, it is possible to quantify the relative probabilities of a range of potential impacts of climate change. Quantifying current climate risks is an effective starting point for analysing the probable impacts of future climate change and guiding the selection of appropriate adaptation strategies. For a farming system to be viable within the current climate, its profitability must be sustained; possible adaptation strategies therefore need to be tested for continued viability in a changed climate. The methodology outlined in this paper examines historical patterns of key climate variables (rainfall and temperature) across the season and their influence on the productivity of wheat growing in NSW. This analysis is used to identify the time of year at which the system is most vulnerable to climate variation, within the constraints of the current climate. Wheat yield is used as a measure of productivity, which is also assumed to be a surrogate for profitability. A time series of wheat yields is sorted into ascending order and categorised into five percentile groupings (i.e. 20th, 40th, 60th and 80th percentiles) for each shire across NSW (~100 years). Five time series of climate data (which are aggregated daily data from the years in each percentile) are analysed to determine the period that provides the greatest climate risk to the production system. Once this period has been determined, this risk is quantified in terms of the degree of separation of the time series

  2. Mass spectrometry

    DEFF Research Database (Denmark)

    Nyvang Hartmeyer, Gitte; Jensen, Anne Kvistholm; Böcher, Sidsel

    2010-01-01

    Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) is currently being introduced for the rapid and accurate identification of bacteria. We describe 2 MALDI-TOF MS identification cases - 1 directly on spinal fluid and 1 on grown bacteria. Rapidly obtained...

  3. How to quantify conduits in wood?

    Science.gov (United States)

    Scholz, Alexander; Klepsch, Matthias; Karimi, Zohreh; Jansen, Steven

    2013-01-01

    Vessels and tracheids represent the most important xylem cells with respect to long distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environment parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology.

  4. Towards Quantifying a Wider Reality: Shannon Exonerata

    Directory of Open Access Journals (Sweden)

    Robert E. Ulanowicz

    2011-10-01

    In 1872 Ludwig von Boltzmann derived a statistical formula to represent the entropy (an apophasis) of a highly simplistic system. In 1948 Claude Shannon independently formulated the same expression to capture the positivist essence of information. Such contradictory thrusts engendered decades of ambiguity concerning exactly what is conveyed by the expression. Resolution of the widespread confusion is possible by invoking the third law of thermodynamics, which requires that entropy be treated in a relativistic fashion. Doing so parses the Boltzmann expression into separate terms that segregate apophatic entropy from positivist information. Possibly more importantly, the decomposition itself portrays a dialectic-like agonism between constraint and disorder that may provide a more appropriate description of the behavior of living systems than is possible using conventional dynamics. By quantifying the apophatic side of evolution, the Shannon approach to information achieves what no other treatment of the subject affords: it opens the window on a more encompassing perception of reality.
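
For reference, the Boltzmann/Shannon expression at issue, and one standard form of the decomposition the abstract alludes to: for joint events, the entropy splits into average mutual information A (the constraint term) and conditional entropy Φ (the residual disorder), with H = A + Φ. The notation below is a common convention and may differ from the article's:

```latex
% Boltzmann/Shannon index of a distribution p_i:
H = -\sum_{i} p_i \log p_i .

% For joint probabilities p_{ij} (marginals p_{i\cdot}, p_{\cdot j}),
% H = -\sum_{i,j} p_{ij} \log p_{ij} separates as H = A + \Phi, where
A = \sum_{i,j} p_{ij} \log \frac{p_{ij}}{p_{i\cdot}\, p_{\cdot j}},
\qquad
\Phi = -\sum_{i,j} p_{ij} \log \frac{p_{ij}^{2}}{p_{i\cdot}\, p_{\cdot j}} .
```

Adding the two terms cancels the marginals, leaving −Σ p_ij log p_ij, which confirms that A quantifies the constrained (informational) part and Φ the apophatic remainder.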

  5. Message passing for quantified Boolean formulas

    International Nuclear Information System (INIS)

    Zhang, Pan; Ramezanpour, Abolfazl; Zecchina, Riccardo; Zdeborová, Lenka

    2012-01-01

    We introduce two types of message passing algorithms for quantified Boolean formulas (QBF). The first is a message-passing-based heuristic that can prove unsatisfiability of a QBF by assigning the universal variables in such a way that the remaining formula is unsatisfiable. In the second, we use message passing to guide the branching heuristics of a Davis–Putnam–Logemann–Loveland (DPLL) complete solver. Numerical experiments show that on random QBFs our branching heuristics give a robust exponential efficiency gain with respect to state-of-the-art solvers. We also manage to solve some previously unsolved benchmarks from the QBFLIB library. Apart from this, our study sheds light on using message passing in small systems and as subroutines in complete solvers.
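
    For orientation, the semantics that such solvers decide can be sketched with a naive recursive evaluator (exponential, illustration only; the function name and encoding are this note's assumptions, not the paper's code):

```python
def evaluate_qbf(prefix, clauses, assignment=None):
    """Naive recursive QBF evaluation (exponential; illustration only).

    prefix  -- list of (quantifier, variable) pairs, e.g. [('A', 1), ('E', 2)]
    clauses -- CNF matrix as a list of lists of signed ints (DIMACS-style literals)
    """
    if assignment is None:
        assignment = {}
    if not prefix:
        # All variables bound: the formula holds iff every clause
        # contains at least one satisfied literal.
        return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses)
    quantifier, var = prefix[0]
    branches = (evaluate_qbf(prefix[1:], clauses, {**assignment, var: value})
                for value in (True, False))
    # A universal variable must work for both values, an existential for one.
    return all(branches) if quantifier == 'A' else any(branches)

# forall x exists y: (x <-> y) -- true: choose y = x.
print(evaluate_qbf([('A', 1), ('E', 2)], [[-1, 2], [1, -2]]))   # True
# exists y forall x: (x <-> y) -- false: no single y matches both x values.
print(evaluate_qbf([('E', 2), ('A', 1)], [[-1, 2], [1, -2]]))   # False
```

    A DPLL-style solver explores the same tree but prunes it; the message-passing heuristics in the paper decide which variable and value to branch on first.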

  6. Quantifying decoherence in continuous variable systems

    Energy Technology Data Exchange (ETDEWEB)

    Serafini, A [Dipartimento di Fisica 'ER Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy)]; Paris, M G A [Dipartimento di Fisica and INFM, Universita di Milano, Milan (Italy)]; Illuminati, F [Dipartimento di Fisica 'ER Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy)]; De Siena, S [Dipartimento di Fisica 'ER Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, Gruppo Collegato Salerno, Via S Allende, 84081 Baronissi, SA (Italy)]

    2005-04-01

    We present a detailed report on the decoherence of quantum states of continuous variable systems under the action of a quantum optical master equation resulting from the interaction with general Gaussian uncorrelated environments. The rate of decoherence is quantified by relating it to the decay rates of various, complementary measures of the quantum nature of a state, such as the purity, some non-classicality indicators in phase space, and, for two-mode states, entanglement measures and total correlations between the modes. Different sets of physically relevant initial configurations are considered, including one- and two-mode Gaussian states, number states, and coherent superpositions. Our analysis shows that, generally, the use of initially squeezed configurations does not help to preserve the coherence of Gaussian states, whereas it can be effective in protecting coherent superpositions of both number states and Gaussian wavepackets. (review article)
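
    One of the measures mentioned, purity, has a closed form for Gaussian states. The sketch below assumes the convention in which the vacuum covariance matrix is diag(1/2, 1/2) (texts differ by factors of 2), and all numbers are illustrative:

```python
import math

def gaussian_purity(sigma):
    """Purity of a one-mode Gaussian state from its 2x2 covariance matrix.

    Convention (an assumption; normalizations vary between texts): the vacuum
    has sigma = diag(1/2, 1/2), and purity mu = 1 / (2 * sqrt(det sigma)).
    """
    det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
    return 1.0 / (2.0 * math.sqrt(det))

vacuum = [[0.5, 0.0], [0.0, 0.5]]          # pure state: mu = 1
thermal = [[1.5, 0.0], [0.0, 1.5]]         # mean photon number 1: mu = 1/3
squeezed = [[0.5 * math.exp(-2), 0.0],     # pure squeezed vacuum (r = 1):
            [0.0, 0.5 * math.exp(2)]]      # determinant unchanged, mu = 1
print(gaussian_purity(vacuum), gaussian_purity(thermal), gaussian_purity(squeezed))
```

    Squeezing leaves the purity of a pure Gaussian state untouched; coupling to a thermal environment raises det sigma and hence degrades mu, which is the decay the abstract tracks.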

  7. Quantifying decoherence in continuous variable systems

    International Nuclear Information System (INIS)

    Serafini, A; Paris, M G A; Illuminati, F; De Siena, S

    2005-01-01

    We present a detailed report on the decoherence of quantum states of continuous variable systems under the action of a quantum optical master equation resulting from the interaction with general Gaussian uncorrelated environments. The rate of decoherence is quantified by relating it to the decay rates of various, complementary measures of the quantum nature of a state, such as the purity, some non-classicality indicators in phase space, and, for two-mode states, entanglement measures and total correlations between the modes. Different sets of physically relevant initial configurations are considered, including one- and two-mode Gaussian states, number states, and coherent superpositions. Our analysis shows that, generally, the use of initially squeezed configurations does not help to preserve the coherence of Gaussian states, whereas it can be effective in protecting coherent superpositions of both number states and Gaussian wavepackets. (review article)

  8. Crowdsourcing for quantifying transcripts: An exploratory study.

    Science.gov (United States)

    Azzam, Tarek; Harman, Elena

    2016-02-01

    This exploratory study attempts to demonstrate the potential utility of crowdsourcing as a supplemental technique for quantifying transcribed interviews. Crowdsourcing is the harnessing of the abilities of many people to complete a specific task or set of tasks. In this study, multiple samples of crowdsourced individuals were asked to rate and select supporting quotes from two different transcripts. The findings indicate that the different crowdsourced samples produced nearly identical ratings of the transcripts and were able to consistently select the same supporting text from the transcripts. These findings suggest that crowdsourcing, with further development, can potentially be used as a mixed-method tool offering a supplemental perspective on transcribed interviews. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Animal biometrics: quantifying and detecting phenotypic appearance.

    Science.gov (United States)

    Kühl, Hjalmar S; Burghardt, Tilo

    2013-07-01

    Animal biometrics is an emerging field that develops quantified approaches for representing and detecting the phenotypic appearance of species, individuals, behaviors, and morphological traits. It operates at the intersection between pattern recognition, ecology, and information sciences, producing computerized systems for phenotypic measurement and interpretation. Animal biometrics can benefit a wide range of disciplines, including biogeography, population ecology, and behavioral research. Currently, real-world applications are gaining momentum, augmenting the quantity and quality of ecological data collection and processing. However, to advance animal biometrics will require integration of methodologies among the scientific disciplines involved. Such efforts will be worthwhile because the great potential of this approach rests with the formal abstraction of phenomics, to create tractable interfaces between different organizational levels of life. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Quantifying capital goods for waste incineration

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Riber, C.; Christensen, Thomas Højlund

    2013-01-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MWh. In terms of the environmental burden, the assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO2 per tonne of waste combusted.

  11. Pendulum Underwater - An Approach for Quantifying Viscosity

    Science.gov (United States)

    Leme, José Costa; Oliveira, Agostinho

    2017-12-01

    The purpose of the experiment presented in this paper is to quantify the viscosity of a liquid. Viscous effects are important in the flow of fluids in pipes, in the bloodstream, in the lubrication of engine parts, and in many other situations. In the present paper, the authors explore the oscillations of a physical pendulum in the form of a long, lightweight wire carrying a ball at its lower end, totally immersed in water, so as to determine the water's viscosity. The system is a viscously damped pendulum, and we tried different theoretical models to describe it. The experimental part of the paper is based on a very simple, low-cost image-capturing apparatus that can easily be replicated in a physics classroom. Data on the pendulum's amplitude as a function of time were acquired using digital video analysis with the open source software Tracker.
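
    The simplest of the candidate damping models, exponential amplitude decay A(t) = A0 exp(-gamma t), can be fitted to the Tracker amplitude data by ordinary least squares on ln A versus t. The sketch below uses synthetic data with made-up values of A0 and gamma; real pendulum data may instead need a velocity-squared drag term:

```python
import math

def fit_exponential_decay(times, amplitudes):
    """Estimate A0 and gamma in A(t) = A0 * exp(-gamma * t)
    by linear least squares on ln(A) versus t."""
    n = len(times)
    ys = [math.log(a) for a in amplitudes]
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
             / sum((t - t_mean) ** 2 for t in times))
    return math.exp(y_mean - slope * t_mean), -slope  # (A0, gamma)

# Synthetic data: A0 = 0.12 m, gamma = 0.25 s^-1, one sample every 2 s.
ts = [2.0 * k for k in range(10)]
amps = [0.12 * math.exp(-0.25 * t) for t in ts]
a0, gamma = fit_exponential_decay(ts, amps)
print(round(a0, 3), round(gamma, 3))   # 0.12 0.25
```

    The fitted gamma, together with the geometry of the ball, is what a drag model (for example Stokes drag) would then convert into a viscosity estimate.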

  12. Quantifying gait patterns in Parkinson's disease

    Science.gov (United States)

    Romero, Mónica; Atehortúa, Angélica; Romero, Eduardo

    2017-11-01

    Parkinson's disease (PD) is characterized by a set of motor symptoms, namely tremor, rigidity, and bradykinesia, which are usually described but not quantified. This work proposes an objective characterization of PD gait patterns by approximating the single stance phase as a single grounded pendulum. This model estimates, from gait data, the force generated during single support. This force describes the motion pattern at different stages of the disease. The model was validated using recorded videos of 8 young control subjects, 10 old control subjects and 10 subjects with Parkinson's disease in different stages. The estimated force showed differences among stages of Parkinson's disease, with a decrease of the estimated force in the advanced stages of the illness.
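
    For orientation, one standard grounded (inverted) pendulum form for the stance leg, with body mass m concentrated at leg length L and lean angle theta from vertical, gives the force along the leg as:

```latex
F(t) = m\left(g\,\cos\theta(t) \;-\; L\,\dot{\theta}(t)^{2}\right)
```

    This is the textbook inverted-pendulum relation, not necessarily the paper's exact formulation; the point is that the estimated force varies with the lean angle and angular velocity extracted from the gait videos, which is what differs between disease stages.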

  13. Quantifying brain microstructure with diffusion MRI

    DEFF Research Database (Denmark)

    Novikov, Dmitry S.; Jespersen, Sune N.; Kiselev, Valerij G.

    2016-01-01

    …the potential to quantify the relevant length scales for neuronal tissue, such as the packing correlation length for neuronal fibers, the degree of neuronal beading, and compartment sizes. The second avenue corresponds to the long-time limit, when the observed signal can be approximated as a sum of multiple non-exchanging anisotropic Gaussian components. Here the challenge lies in parameter estimation and in resolving its hidden degeneracies. The third avenue employs multiple diffusion encoding techniques, able to access information not contained in the conventional diffusion propagator. We conclude with our outlook on the future research directions which can open exciting possibilities for developing markers of pathology and development based on methods of studying mesoscopic transport in disordered systems.

  14. Quantifying Temporal Genomic Erosion in Endangered Species.

    Science.gov (United States)

    Díez-Del-Molino, David; Sánchez-Barreiro, Fatima; Barnes, Ian; Gilbert, M Thomas P; Dalén, Love

    2018-03-01

    Many species have undergone dramatic population size declines over the past centuries. Although stochastic genetic processes during and after such declines are thought to elevate the risk of extinction, comparative analyses of genomic data from several endangered species suggest little concordance between genome-wide diversity and current population sizes. This is likely because species-specific life-history traits and ancient bottlenecks overshadow the genetic effect of recent demographic declines. Therefore, we advocate that temporal sampling of genomic data provides a more accurate approach to quantify genetic threats in endangered species. Specifically, genomic data from predecline museum specimens will provide valuable baseline data that enable accurate estimation of recent decreases in genome-wide diversity, increases in inbreeding levels, and accumulation of deleterious genetic variation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the  Deutsche Forschungsgemeinschaft (DFG) approved the  Priority Program 1324 “Mathematical Methods for Extracting Quantifiable Information from Complex Systems.” This volume presents a comprehensive overview of the most important results obtained over the course of the program.   Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance.  Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges.   Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w...

  16. Quantifying the evolution of individual scientific impact.

    Science.gov (United States)

    Sinatra, Roberta; Wang, Dashun; Deville, Pierre; Song, Chaoming; Barabási, Albert-László

    2016-11-04

    Despite the frequent use of numerous quantitative indicators to gauge the professional impact of a scientist, little is known about how scientific impact emerges and evolves in time. Here, we quantify the changes in impact and productivity throughout a career in science, finding that impact, as measured by influential publications, is distributed randomly within a scientist's sequence of publications. This random-impact rule allows us to formulate a stochastic model that uncouples the effects of productivity, individual ability, and luck and unveils the existence of universal patterns governing the emergence of scientific success. The model assigns a unique individual parameter Q to each scientist, which is stable during a career, and it accurately predicts the evolution of a scientist's impact, from the h-index to cumulative citations, and independent recognitions, such as prizes. Copyright © 2016, American Association for the Advancement of Science.
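
    The random-impact rule lends itself to a quick simulation. The sketch below is a loose illustration only: the Q value, the lognormal "luck" draw, and all numbers are this note's assumptions, not the paper's fitted model:

```python
import random

def career_citations(Q, n_papers, rng):
    """Random-impact sketch: each paper's impact is c = Q * p, with luck p
    drawn i.i.d. lognormal, so impact is independent of when in the career
    the paper appears."""
    return [Q * rng.lognormvariate(0.0, 1.0) for _ in range(n_papers)]

rng = random.Random(7)
# Position of the highest-impact paper across many simulated 30-paper careers:
positions = [max(range(30), key=career_citations(5.0, 30, rng).__getitem__)
             for _ in range(2000)]
# Under the random-impact rule the early, middle and late thirds of a career
# are equally likely to contain the biggest hit.
thirds = [sum(1 for p in positions if lo <= p < lo + 10) for lo in (0, 10, 20)]
print(thirds)
```

    The three counts come out roughly equal, which is the signature of the random-impact rule: the career-long parameter Q sets the scale of impact, while timing is luck.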

  17. Quantifying creativity: can measures span the spectrum?

    Science.gov (United States)

    Simonton, Dean Keith

    2012-03-01

    Because cognitive neuroscientists have become increasingly interested in the phenomenon of creativity, the issue arises of how creativity is to be optimally measured. Unlike intelligence, which can be assessed across the full range of intellectual ability, creativity measures tend to concentrate on different sections of the overall spectrum. After first defining creativity in terms of the three criteria of novelty, usefulness, and surprise, this article provides an overview of the available measures. Not only do these instruments vary according to whether they focus on the creative process, person, or product, but they also differ regarding whether they tap into "little-c" versus "Big-C" creativity; only productivity and eminence measures reach into genius-level manifestations of the phenomenon. The article closes by discussing whether various alternative assessment techniques can be integrated into a single measure that quantifies creativity across the full spectrum.

  18. Mass transport by groundwater

    International Nuclear Information System (INIS)

    Ledoux, E.; Goblet, P.; Jamet, Ph.; De Marsily, G.; Des Orres, P.E.; Lewi, J.

    1991-01-01

    The first analyses of the safety of radioactive waste disposal, published in the 1970s, were mostly of a generic type using models of radionuclide migration in the geosphere. These simply constructed models gave way to more sophisticated techniques that better represent the complexity and diversity of geological media. This article reviews the various concepts used to quantify radionuclide migration and the evolution of their incorporation into models. First, it examines how the type of discontinuity occurring in geological media affects the choice of a representative model. The principles of transport in the subsurface are reviewed, and the effect that coupled processes exert on groundwater flow and mass migration is discussed. The processes that act directly to cause groundwater flow are distinguished. The method of validating such models by comparing results with natural geochemical systems is explained. (K.I.)
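
    The transport concepts reviewed here (advection, hydrodynamic dispersion, linear equilibrium sorption, radioactive decay) are commonly collected, in one dimension, into the advection-dispersion equation. This textbook form is added for orientation and is not quoted from the article:

```latex
R\,\frac{\partial C}{\partial t}
  = D\,\frac{\partial^{2} C}{\partial x^{2}}
  - v\,\frac{\partial C}{\partial x}
  - \lambda R C,
\qquad R = 1 + \frac{\rho_b K_d}{\theta}
```

    Here C is the solute concentration, v the mean pore-water velocity, D the dispersion coefficient, lambda the radioactive decay constant, and R the retardation factor built from the bulk density rho_b, the distribution coefficient K_d and the porosity theta. Fracture-network and dual-porosity models generalize this continuum picture to the discontinuous media discussed above.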

  19. A compact clinical instrument for quantifying suppression.

    Science.gov (United States)

    Black, Joanne M; Thompson, Benjamin; Maehara, Goro; Hess, Robert F

    2011-02-01

    We describe a compact and convenient clinical apparatus for measuring suppression, based on a previously reported laboratory-based approach. In addition, we report and validate a novel, rapid psychophysical method for measuring suppression with this apparatus, which makes the technique more applicable to clinical practice. Using a Z800 dual pro head-mounted display driven by a Mac laptop, we provide dichoptic stimulation. Global motion stimuli composed of arrays of moving dots are presented to each eye. One set of dots moves in a coherent direction (termed signal) whereas another set moves in random directions (termed noise). To quantify performance, we measure the signal/noise ratio corresponding to a direction-discrimination threshold. Suppression is quantified by assessing the extent to which it matters which eye sees the signal and which eye sees the noise. A space-saving, head-mounted display using current video technology offers an ideal solution for clinical practice. In addition, our optimized psychophysical method provided results in agreement with those produced by the original technique. We measured suppression in a group of nine adult amblyopic participants using this apparatus with both the original and the new psychophysical paradigms. All participants had measurable suppression, ranging from mild to severe. The two methods gave a strong correlation for the strength of suppression (rho = -0.83, p = 0.006). Combining the new apparatus and the new psychophysical method creates a convenient and rapid technique for parametric measurement of interocular suppression. The apparatus also constitutes an ideal platform for training patients with suppression to combine information between their eyes in a similar way to binocularly normal observers, providing a convenient way for clinicians to implement the newly proposed binocular treatment of amblyopia based on antisuppression training.
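
    The "does eye assignment matter" logic can be formalized as a ratio of the two dichoptic coherence thresholds. The index below is a plausible formalization for illustration, not the paper's exact formula, and the numbers are invented:

```python
import math

def suppression_index(threshold_signal_amblyopic, threshold_signal_fellow):
    """Interocular suppression as the log ratio of two motion-coherence
    thresholds: signal dots shown to the amblyopic eye (noise to the fellow
    eye) versus the reverse arrangement. 0 means eye assignment does not
    matter (no suppression); larger values mean stronger suppression."""
    return math.log10(threshold_signal_amblyopic / threshold_signal_fellow)

print(suppression_index(0.8, 0.2))    # much higher threshold via the amblyopic eye
print(suppression_index(0.21, 0.2))   # near zero: the two eyes are balanced
```

    A clinical tool would estimate each threshold with an adaptive staircase before forming the ratio; the index itself is what the antisuppression training aims to drive toward zero.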

  20. Quantifying capital goods for waste incineration

    International Nuclear Information System (INIS)

    Brogaard, L.K.; Riber, C.; Christensen, T.H.

    2013-01-01

    Highlights: • Materials and energy used for the construction of waste incinerators were quantified. • The data was collected from five incineration plants in Scandinavia. • Included were six main materials, electronic systems, cables and all transportation. • The capital goods contributed 2–3% compared to the direct emissions impact on GW. - Abstract: Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7–14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO2 per tonne of waste combusted.

  1. Quantifying structural states of soft mudrocks

    Science.gov (United States)

    Li, B.; Wong, R. C. K.

    2016-05-01

    In this paper, a cm model is proposed to quantify the structural states of soft mudrocks, which depend on clay fraction and porosity. Physical properties of natural and reconstituted soft mudrock samples are used to derive the two parameters of the cm model. With the cm model, a simplified homogenization approach is proposed to estimate the geomechanical properties and fabric orientation distributions of soft mudrocks based on mixture theory. Soft mudrocks are treated as a mixture of nonclay minerals and clay-water composites. Nonclay minerals have a high stiffness and serve as the structural framework of the mudrock when their volume fraction is high. Clay-water composites occupy the void space among nonclay minerals and serve as an in-fill matrix. With increasing volume fraction of clay-water composites, there is a transition in structural state from framework supported to matrix supported. The decreases in shear strength and pore size, as well as the increases in compressibility and fabric anisotropy, are quantitatively related to this transition. The new homogenization approach based on the proposed cm model performs better than common effective-medium modeling approaches because the interactions among nonclay minerals and clay-water composites are considered. Using wireline logging data, the cm model is applied to quantify the structural states of the Colorado shale formations at different depths in the Cold Lake area, Alberta, Canada. Key geomechanical parameters are estimated with the proposed homogenization approach, and critical intervals with low-strength shale formations are identified.
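
    The framework-supported versus matrix-supported transition can be illustrated with the standard Voigt and Reuss mixture bounds on effective stiffness. This is generic mixture theory, not the paper's cm-model homogenization, and the stiffness values are invented:

```python
def voigt_reuss_bounds(f_clay, E_clay, E_nonclay):
    """Upper (Voigt, uniform strain) and lower (Reuss, uniform stress)
    bounds on the effective stiffness of a two-phase mixture of a
    clay-water composite and a nonclay framework."""
    f_non = 1.0 - f_clay
    voigt = f_clay * E_clay + f_non * E_nonclay
    reuss = 1.0 / (f_clay / E_clay + f_non / E_nonclay)
    return voigt, reuss

# Stiff nonclay grains (30 GPa) versus a soft clay-water matrix (1 GPa):
for f in (0.2, 0.5, 0.8):
    print(f, [round(x, 2) for x in voigt_reuss_bounds(f, 1.0, 30.0)])
```

    As the clay-water fraction grows, even the upper bound collapses toward the soft matrix stiffness, which is the qualitative transition the cm model quantifies; accounting for phase interactions is what lets the paper's approach land between these wide bounds.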

  2. Dynamics of Variable Mass Systems

    Science.gov (United States)

    Eke, Fidelis O.

    1998-01-01

    This report presents the results of an investigation of the effects of mass loss on the attitude behavior of spinning bodies in flight. The principal goal is to determine whether there are circumstances under which the motion of variable mass systems can become unstable in the sense that their transverse angular velocities become unbounded. Results from a study of this kind find immediate application in the aerospace field. The first part of the study features a complete and mathematically rigorous derivation of a set of equations that govern both the translational and rotational motions of general variable mass systems. The remainder of the study is devoted to applying these equations to a systematic investigation of the effect of various mass loss scenarios on the dynamics of increasingly complex models of variable mass systems. It is found that mass loss can have a major impact on the dynamics of mechanical systems, including a possible change in the system's stability picture. Factors such as nozzle geometry, combustion chamber geometry, the propellant's initial shape, size and relative mass, and propellant location can all have important influences on the system's dynamic behavior. The relative importance of these parameters to system motion is quantified in a way that is useful for design purposes.

  3. Quantifying uncertainty in nuclear analytical measurements

    International Nuclear Information System (INIS)

    2004-07-01

    The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to the publication of a general publication, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from the actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration

  4. Quantifying the effect of sorption and bioavailability of hydrophobic organic contaminants

    International Nuclear Information System (INIS)

    Zhang, W.; Bouwer, E.; Cunningham, A.

    1994-01-01

    In-situ bioremediation has been applied successfully at a few sites, but several restrictions presently exist that could greatly limit the effectiveness of this promising technology. Hydrophobic organic contaminants tend to sorb onto soil, whereas microorganisms are most effective in utilizing substrates from the aqueous phase. Sorption thus removes contaminants from the direct contact with microorganisms necessary for biodegradation to occur. A series of experiments, representing scenarios with fast sorption/desorption, slow sorption/desorption, mass transfer across a boundary layer, and mass transfer within attached microorganisms (biofilm), was conducted to demonstrate the concentration effect and the mass transfer effect. A method has been developed to quantify the bioavailability of organic contaminants in aquatic environments. The Bioavailability Factor (Bf), a dimensionless parameter derived from mathematical models and verified by experimental results, has been formulated to describe the impact of equilibrium sorption, nonequilibrium sorption, and mass transfer processes on the rate and extent of biodegradation of petroleum hydrocarbons.
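
    The fast- versus slow-desorption scenarios can be sketched with a two-compartment model in which only the aqueous fraction is degraded. This is an illustrative sketch with made-up rate constants, not the study's fitted model or its Bf formulation:

```python
def simulate_sorption_biodegradation(c_aq0, c_s0, k_att, k_det, k_bio,
                                     dt=0.01, t_end=50.0):
    """Explicit-Euler sketch: contaminant partitions between an aqueous
    (bioavailable) phase and a sorbed phase; biodegradation acts on the
    aqueous phase only. Returns final (aqueous, sorbed) concentrations."""
    c_aq, c_s, t = c_aq0, c_s0, 0.0
    while t < t_end:
        exchange = k_att * c_aq - k_det * c_s   # net sorption flux
        c_aq += dt * (-exchange - k_bio * c_aq)
        c_s += dt * exchange
        t += dt
    return c_aq, c_s

# Fast desorption keeps the pool bioavailable; slow desorption protects it.
fast = simulate_sorption_biodegradation(1.0, 1.0, k_att=1.0, k_det=1.0, k_bio=0.5)
slow = simulate_sorption_biodegradation(1.0, 1.0, k_att=1.0, k_det=0.01, k_bio=0.5)
print(sum(fast), sum(slow))   # far more mass remains when desorption is slow
```

    In the slow-desorption run most of the contaminant survives the simulation in the sorbed phase, which is exactly the mass-transfer limitation a bioavailability factor is meant to capture.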

  5. Quantifying postfire aeolian sediment transport using rare earth element tracers

    Science.gov (United States)

    Dukes, David; Gonzales, Howell B.; Ravi, Sujith; Grandstaff, David E.; Van Pelt, R. Scott; Li, Junran; Wang, Guan; Sankey, Joel B.

    2018-01-01

    Grasslands, which provide fundamental ecosystem services in many arid and semiarid regions of the world, are undergoing rapid increases in fire activity and are highly susceptible to postfire-accelerated soil erosion by wind. A quantitative assessment of the physical processes that integrates fire-wind erosion feedbacks is therefore needed relative to vegetation change, soil biogeochemical cycling, air quality, and landscape evolution. We investigated the applicability of a novel tracer technique, the use of multiple rare earth elements (REE), to quantify soil transport by wind and to identify sources and sinks of wind-blown sediments in burned and unburned areas of a shrub-grass transition zone in the Chihuahuan Desert, NM, USA. Results indicate that the horizontal mass flux of wind-borne sediment increased approximately threefold following the fire. The REE tracer analysis of wind-borne sediments shows that the horizontal mass flux in the unburned site was derived from bare microsites (88.5%), while in the burned site it was primarily sourced from shrub (42.3%) and bare (39.1%) microsites. Vegetated microsites, which were predominantly sinks of aeolian sediments in the unburned areas, became sediment sources following the fire. The burned areas showed a spatial homogenization of sediment tracers, highlighting a potential negative feedback on landscape heterogeneity induced by shrub encroachment into grasslands. Though fires are known to increase aeolian sediment transport, the accompanying changes in the sources and sinks of wind-borne sediments may influence biogeochemical cycling and land degradation dynamics. Furthermore, our experiment demonstrated that REEs can be used as reliable tracers for field-scale aeolian studies.
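
    The source-attribution step behind percentages like the 42.3% shrub contribution can be illustrated with the simplest linear mixing model: two end members and one conservative tracer. The concentrations below are invented for illustration, not the study's measurements:

```python
def two_source_fraction(c_mix, c_source_a, c_source_b):
    """Fraction of sediment contributed by source A under a two-source
    linear mixing model with one conservative tracer:
        c_mix = f * c_a + (1 - f) * c_b
    Assumes the REE label is conservative during transport."""
    return (c_mix - c_source_b) / (c_source_a - c_source_b)

# Hypothetical: shrub microsites tagged at 400 ppm, bare microsites at 50 ppm;
# sediment caught in the trap measures 198 ppm of that REE.
f_shrub = two_source_fraction(198.0, 400.0, 50.0)
print(round(f_shrub, 3))   # 0.423
```

    With more microsite types and multiple REE labels, the same idea becomes an overdetermined linear system solved by least squares, which is why tagging each microsite with a different rare earth element is so useful.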

  6. Quantifying Sediment Transport in a Premontane Transitional Cloud Forest

    Science.gov (United States)

    Waring, E. R.; Brumbelow, J. K.

    2013-12-01

    Quantifying sediment transport is a difficult task in any watershed, and relatively little direct measurement has occurred in tropical, mountainous watersheds. The Howler Monkey Watershed (2.2 hectares) is located in a premontane transitional cloud forest in San Isidro de Peñas Blancas, Costa Rica. In June 2012, a V-notch stream-gaging weir was built in the catchment with an 8 ft by 6 ft by 4 ft concrete stilling basin. Sediment captured by the weir was left untouched for an 11 month period. To collect the contents of the weir, the stream was rerouted and the weir was drained. The stilling basin contents were systematically sampled, and samples were taken to a lab and characterized using sieve and hydrometer tests. The wet volume of the remaining sediment was obtained, and dry mass was estimated. Particle size distributions of the samples were obtained from lab tests, with 96% of the sediment trapped by the weir being sand or coarser. The efficiency of the weir as a sediment collector was evaluated by comparing particle fall velocities to the residence time of water in the weir under baseflow conditions. Under these assumptions, only two to three percent of the total mass of soil transported in the stream is thought to have been suspended in the water and lost over the V-notch. Data were compared to the Universal Soil Loss Equation (USLE), a widely accepted method for predicting soil loss in agricultural watersheds. As expected, application of the USLE to a tropical rainforest was problematic, with uncertainty in parameters yielding soil loss estimates varying by a factor of 50. Continued monitoring of sediment transport should yield data for improved methods of soil loss estimation applicable to tropical mountainous forests.
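
    The settling-velocity-versus-residence-time screening used to judge trap efficiency can be sketched with Stokes' law. The basin depth and residence time below are hypothetical stand-ins, not the study's measured values, and Stokes' law itself only holds for fine particles (Re << 1):

```python
def stokes_settling_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Stokes terminal settling velocity (m/s) for a sphere of diameter d (m)
    in water: quartz density 2650 kg/m^3, viscosity 1e-3 Pa.s."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

def is_trapped(d, basin_depth, residence_time):
    """A particle is retained if it can settle through the basin depth
    within the mean residence time of water in the stilling basin."""
    return stokes_settling_velocity(d) * residence_time >= basin_depth

# Hypothetical geometry: 1.2 m settling depth, 10 min residence at baseflow.
for d_um in (10, 62.5, 250):   # clay/silt, silt/sand boundary, fine sand
    print(d_um, is_trapped(d_um * 1e-6, basin_depth=1.2, residence_time=600.0))
```

    Under such numbers sand settles out while the finest silt and clay escape over the V-notch, which is consistent with the weir retaining almost exclusively sand-sized and coarser material.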

  7. Quantifying and Mapping Global Data Poverty.

    Science.gov (United States)

    Leidig, Mathias; Teeuw, Richard M

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction.
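
    Composite indices of this kind are typically built by normalizing each indicator and averaging. The sketch below assumes equal weights and min-max normalization, which is a common construction but not necessarily the DPI's published recipe, and the country values are toy data:

```python
def min_max_normalize(values):
    """Rescale a list of indicator values to the [0, 1] interval."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators):
    """indicators: dict mapping indicator name -> list of country values
    (aligned by country). Returns one equally weighted score per country."""
    normalized = [min_max_normalize(vals) for vals in indicators.values()]
    n = len(normalized)
    return [sum(col) / n for col in zip(*normalized)]

# Toy data for three hypothetical countries (not real DPI inputs).
scores = composite_index({
    "internet_speed_mbps": [2.0, 25.0, 80.0],
    "internet_users_pct":  [10.0, 60.0, 95.0],
    "mobile_coverage_pct": [40.0, 85.0, 99.0],
})
print([round(s, 2) for s in scores])   # lowest score = most data-poor
```

    Real index construction adds choices this sketch hides, such as how to weight indicators and handle missing country data, which is why the annually updated source datasets matter.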

  8. Stimfit: quantifying electrophysiological data with Python

    Directory of Open Access Journals (Sweden)

    Segundo Jose Guzman

    2014-02-01

    Full Text Available Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals.
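    The kind of spontaneous-event detection described above can be illustrated with a minimal sketch. This is not Stimfit's own algorithm or API (Stimfit offers more sophisticated template-matching detectors); it simply flags upward threshold crossings in a trace, with a refractory period to avoid double-counting one event:

```python
# Minimal event-detection sketch (not Stimfit code): an event is an upward
# threshold crossing, and subsequent crossings within the refractory
# window are ignored.
def detect_events(trace, threshold, refractory):
    events, last = [], -refractory
    for i in range(1, len(trace)):
        if trace[i] >= threshold > trace[i - 1] and i - last >= refractory:
            events.append(i)
            last = i
    return events

# Synthetic trace: two triangular "events" rising to 1.0 at samples 20 and 60.
trace = [0.0] * 100
for onset in (20, 60):
    for k in range(10):
        trace[onset + k] = (k + 1) / 10.0          # linear rise
        trace[onset + 10 + k] = 1.0 - (k + 1) / 10.0  # linear decay
event_indices = detect_events(trace, threshold=0.5, refractory=15)
```

    In a real analysis one would then measure kinetics (rise time, decay constant, latency) around each detected index, which is the kind of quantity Stimfit's built-in routines report.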

  9. Quantifying capital goods for waste incineration.

    Science.gov (United States)

    Brogaard, L K; Riber, C; Christensen, T H

    2013-06-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000-240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used amounting to 19,000-26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000-5000 MW h. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7-14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2-3% with respect to kg CO2 per tonne of waste combusted. Copyright © 2013 Elsevier Ltd. All rights reserved.
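    The reported figure of 7-14 kg CO2 per tonne of waste is a simple amortisation of construction emissions over lifetime throughput. A back-of-envelope version, with invented inputs chosen only to land inside the reported range:

```python
# Amortise embodied construction emissions over the plant's lifetime
# throughput. All three inputs below are illustrative assumptions.
def co2_per_tonne_waste(construction_t_co2, tonnes_per_year, lifetime_years):
    """kg CO2 attributable to capital goods per tonne of waste combusted."""
    return construction_t_co2 * 1000.0 / (tonnes_per_year * lifetime_years)

# e.g. 30,000 t CO2 embodied, a 150,000 t/yr plant, 20-year lifetime
kg_per_tonne = co2_per_tonne_waste(30000, 150000, 20)
```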

  10. Fluorescence imaging to quantify crop residue cover

    Science.gov (United States)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long wave ultraviolet (UV) radiation and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 (absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
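    The thresholding step is straightforward to sketch: residue cover is the fraction of pixels whose fluorescence gray level exceeds a threshold separating soil from residue. The tiny 8-bit "image" and threshold below are synthetic:

```python
# Residue cover = fraction of pixels brighter than a soil/residue threshold.
def residue_cover(image, threshold):
    pixels = [p for row in image for p in row]
    return sum(p > threshold for p in pixels) / len(pixels)

image = [[10, 12, 200, 220],
         [11, 180, 240, 13],
         [9, 150, 11, 210]]   # soil ~10; fluorescing residue is much brighter
cover = residue_cover(image, threshold=100)
```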

  11. Quantifying Potential Groundwater Recharge In South Texas

    Science.gov (United States)

    Basant, S.; Zhou, Y.; Leite, P. A.; Wilcox, B. P.

    2015-12-01

    Groundwater in South Texas is heavily relied on for human consumption and irrigation of food crops. As in much of the southwestern US, woody encroachment has altered the grassland ecosystems here. While brush removal has been widely implemented in Texas with the objective of increasing groundwater recharge, the linkage between vegetation and groundwater recharge in South Texas is still unclear. Studies have been conducted to understand plant-root-water dynamics at the scale of individual plants. However, little work has been done to quantify the changes in soil water and deep percolation at the landscape scale. Modeling water flow through soil profiles can provide an estimate of the total water flowing into deep percolation. These models are especially powerful when parameterized and calibrated with long-term soil water data. In this study we parameterize the HYDRUS soil water model using long-term soil water data collected in Jim Wells County in South Texas. Soil water was measured at 20 cm intervals up to a depth of 200 cm. The parameterized model will be used to simulate soil water dynamics under a variety of precipitation regimes ranging from well above normal to severe drought conditions. The results from the model will be compared with the changes in soil moisture profile observed in response to vegetation cover and treatments from a study in a similar setting. Comparative studies like this can be used to build new and strengthen existing hypotheses regarding deep percolation and the role of soil texture and vegetation in groundwater recharge.
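    The core idea, that water exceeding the soil profile's storage capacity becomes deep percolation, can be caricatured with a tipping-bucket sketch. This is emphatically not HYDRUS (which solves the Richards equation); storage, capacity, and the monthly forcing values are invented:

```python
# Tipping-bucket sketch of deep percolation (NOT HYDRUS). Units: mm.
# Initial storage and field capacity are assumed illustrative values.
def deep_percolation(rain_mm, et_mm, storage=100.0, capacity=150.0):
    """Accumulate water in the profile; anything above capacity percolates."""
    percolation = 0.0
    for rain, et in zip(rain_mm, et_mm):
        storage = max(storage + rain - et, 0.0)
        if storage > capacity:
            percolation += storage - capacity
            storage = capacity
    return percolation

wet_year = deep_percolation([80, 120, 90], [40, 30, 35])
dry_year = deep_percolation([10, 20, 15], [40, 45, 35])
```

    Even this toy model reproduces the qualitative point of the study design: deep percolation is episodic and concentrated in wet periods, which is why simulating a range of precipitation regimes matters.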

  12. Quantifying Anthropogenic Stress on Groundwater Resources.

    Science.gov (United States)

    Ashraf, Batool; AghaKouchak, Amir; Alizadeh, Amin; Mousavi Baygi, Mohammad; R Moftakhari, Hamed; Mirchi, Ali; Anjileli, Hassan; Madani, Kaveh

    2017-10-10

    This study explores a general framework for quantifying anthropogenic influences on groundwater budget based on normalized human outflow (h_out) and inflow (h_in). The framework is useful for sustainability assessment of groundwater systems and allows investigating the effects of different human water abstraction scenarios on the overall aquifer regime (e.g., depleted, natural flow-dominated, and human flow-dominated). We apply this approach to selected regions in the USA, Germany and Iran to evaluate the current aquifer regime. We subsequently present two scenarios of changes in human water withdrawals and return flow to the system (individually and combined). Results show that approximately one-third of the selected aquifers in the USA, and half of the selected aquifers in Iran are dominated by human activities, while the selected aquifers in Germany are natural flow-dominated. The scenario analysis results also show that reduced human withdrawals could help with regime change in some aquifers. For instance, in two of the selected USA aquifers, a decrease in anthropogenic influences by ~20% may change the condition of depleted regime to natural flow-dominated regime. We specifically highlight a trending threat to the sustainability of groundwater in northwest Iran and California, and the need for more careful assessment and monitoring practices as well as strict regulations to mitigate the negative impacts of groundwater overexploitation.
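    Classifying an aquifer regime from normalized human outflow and inflow can be sketched as below. The dominance threshold and branch logic are assumed for illustration; the paper's exact regime boundaries are not reproduced here:

```python
# Sketch with an assumed dominance threshold (not the paper's boundaries).
# h_out, h_in: human outflow and return flow, normalised to total
# aquifer outflow/inflow.
def aquifer_regime(h_out, h_in, dominance=0.5):
    if h_out > h_in and h_out > dominance:
        return "depleted, human-dominated"
    if max(h_out, h_in) > dominance:
        return "human flow-dominated"
    return "natural flow-dominated"

regime_a = aquifer_regime(h_out=0.7, h_in=0.2)  # heavy pumping, little return
regime_b = aquifer_regime(h_out=0.2, h_in=0.1)
```

    A ~20% cut in h_out can move a case like the first across the threshold, which is the kind of regime change the scenario analysis describes.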

  13. Quantifying Supply Risk at a Cellulosic Biorefinery

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Jason K [Idaho National Laboratory; Jacobson, Jacob Jordan [Idaho National Laboratory; Cafferty, Kara Grace [Idaho National Laboratory; Lamers, Patrick [Idaho National Laboratory; Roni, MD S [Idaho National Laboratory

    2015-03-01

    In order to increase the sustainability and security of the nation’s energy supply, the U.S. Department of Energy through its Bioenergy Technology Office has set a vision for one billion tons of biomass to be processed for renewable energy and bioproducts annually by the year 2030. The Renewable Fuels Standard limits the amount of corn grain that can be used in ethanol conversion sold in the U.S., which is already at its maximum. Therefore, making the DOE’s vision a reality requires significant growth in the advanced biofuels industry, where currently three cellulosic biorefineries convert cellulosic biomass to ethanol. Risk mitigation is central to growing the industry beyond its infancy to a level necessary to achieve the DOE vision. This paper focuses on reducing the supply risk that faces a firm that owns a cellulosic biorefinery. It uses risk theory and simulation modeling to build a risk assessment model based on causal relationships of underlying, uncertain, supply driving variables. Using the model the paper quantifies supply risk reduction achieved by converting the supply chain from a conventional supply system (bales and trucks) to an advanced supply system (depots, pellets, and trains). Results imply that the advanced supply system reduces supply system risk, defined as the probability of a unit cost overrun, from 83% in the conventional system to 4% in the advanced system. Reducing cost risk in this nascent industry improves the odds of realizing desired growth.
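    The risk metric itself, the probability of a unit cost overrun, can be sketched with a Monte Carlo simulation. The cost distributions, means, and spreads below are invented stand-ins, not the paper's supply-driving variables; the only point is that a tighter advanced-system cost distribution yields a much lower overrun probability:

```python
# Supply risk = P(simulated unit cost > target). Distributions are
# illustrative assumptions, not the paper's model.
import random

def cost_overrun_probability(sample_cost, target, n=100_000, seed=42):
    rng = random.Random(seed)
    overruns = sum(sample_cost(rng) > target for _ in range(n))
    return overruns / n

conventional = lambda rng: rng.gauss(90, 15)  # $/tonne, wide spread (bales/trucks)
advanced = lambda rng: rng.gauss(80, 5)       # tighter (depots/pellets/trains)
risk_conv = cost_overrun_probability(conventional, target=85)
risk_adv = cost_overrun_probability(advanced, target=85)
```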

  15. Quantifying and Mapping Global Data Poverty.

    Directory of Open Access Journals (Sweden)

    Mathias Leidig

    Full Text Available Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction.

  16. Data Used in Quantified Reliability Models

    Science.gov (United States)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding and identifying reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered how reported data from various federal datasets and technical reports compare with similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data were derived, and interpreting the information and details associated with them, is as important as the data itself.

  17. Mass Sports

    Directory of Open Access Journals (Sweden)

    Elena Grigoryeva

    2017-03-01

    Fitness has become one of the most popular kinds of the mass sport and has completely replaced the traditional “physical culture”. Dozens of variations of fitness and millions of participants pose a great challenge to contemporary architecture. The articles of our issue show the present and the future of architecture for fitness. We present a topical collection with a wide geographical range, including the Irkutsk Agglomeration, Tomsk, Krasnodar, sports in the Moscow Palace of Young Pioneers, and the anthology of the top foreign sports venues.

  18. Critical Mass

    CERN Multimedia

    AUTHOR|(CDS)2070299

    2017-01-01

    Critical Mass is a cycling event typically held on the last Friday of every month; its purpose is not usually formalized beyond the direct action of meeting at a set location and time and traveling as a group through city or town streets on bikes. The event originated in 1992 in San Francisco; by the end of 2003, the event was being held in over 300 cities around the world. At CERN it is held once a year in conjunction with the Swiss national campaign "Bike to work".

  19. Using Mass Spectrometry to Quantify Rituximab and Perform Individualized Immunoglobulin Phenotyping in ANCA-Associated Vasculitis

    NARCIS (Netherlands)

    Mills, John R.; Cornec, Divi; Dasari, Surendra; Ladwig, Paula M.; Hummel, Amber M.; Cheu, Melissa; Murray, David L.; Willrich, Maria A.; Snyder, Melissa R.; Hoffman, Gary S.; Kallenberg, Cees G. M.; Langford, Carol A.; Merkel, Peter A.; Monach, Paul A.; Seo, Philip; Spiera, Robert F.; St Cair, E. William; Stone, John H.; Specks, Ulrich; Barnidge, David R.

    2016-01-01

    Therapeutic monoclonal immunoglobulins (mAbs) are used to treat patients with a wide range of disorders including autoimmune diseases. As pharmaceutical companies bring more fully humanized therapeutic mAb drugs to the healthcare market analytical platforms that perform therapeutic drug monitoring

  20. Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach

    Science.gov (United States)

    Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan

    2017-05-01

    Dynamical regular response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type and governed by a set of differential-algebraic equations (DAEs) is studied. The classical approach of the multiple scales method (MSM) in time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e. with one- and two degrees-of-freedom. The approximate analytical solutions have been verified by numerical simulations.

  1. Survey and assessment of techniques used to quantify the potential for rock mass instability.

    CSIR Research Space (South Africa)

    Brink, AVZ

    2000-03-01

    Full Text Available ...recommended for the assessment of the seismic hazard as the maximum event magnitude expected, and the return period for a given magnitude or larger. Having defined the hazard, the report describes the effect of coupling between the source and the working...

  2. A virtual trial framework for quantifying the detectability of masses in breast tomosynthesis projection data

    International Nuclear Information System (INIS)

    Young, Stefano; Bakic, Predrag R.; Myers, Kyle J.; Jennings, Robert J.; Park, Subok

    2013-01-01

    Purpose: Digital breast tomosynthesis (DBT) is a promising breast cancer screening tool that has already begun making inroads into clinical practice. However, there is ongoing debate over how to quantitatively evaluate and optimize these systems, because different definitions of image quality can lead to different optimal design strategies. Powerful and accurate tools are desired to extend our understanding of DBT system optimization and validate published design principles. Methods: The authors developed a virtual trial framework for task-specific DBT assessment that uses digital phantoms, open-source x-ray transport codes, and a projection-space, spatial-domain observer model for quantitative system evaluation. The authors considered evaluation of reconstruction algorithms as a separate problem and focused on the information content in the raw, unfiltered projection images. Specifically, the authors investigated the effects of scan angle and number of angular projections on detectability of a small (3 mm diameter) signal embedded in randomly-varying anatomical backgrounds. Detectability was measured by the area under the receiver-operating characteristic curve (AUC). Experiments were repeated for three test cases where the detectability-limiting factor was anatomical variability, quantum noise, or electronic noise. The authors also juxtaposed the virtual trial framework with other published studies to illustrate its advantages and disadvantages. Results: The large number of variables in a virtual DBT study make it difficult to directly compare different authors’ results, so each result must be interpreted within the context of the specific virtual trial framework. The following results apply to 25% density phantoms with 5.15 cm compressed thickness and 500 μm³ voxels (larger 500 μm² detector pixels were used to avoid voxel-edge artifacts): 1. For raw, unfiltered projection images in the anatomical-variability-limited regime, AUC appeared to remain constant or increase slightly with scan angle. 2. In the same regime, when the authors fixed the scan angle, AUC increased asymptotically with the number of projections. The threshold number of projections for asymptotic AUC performance depended on the scan angle. In the quantum- and electronic-noise dominant regimes, AUC behaviors as a function of scan angle and number of projections sometimes differed from the anatomy-limited regime. For example, with a fixed scan angle, AUC generally decreased with the number of projections in the electronic-noise dominant regime. These results are intended to demonstrate the capabilities of the virtual trial framework, not to be used as optimization rules for DBT. Conclusions: The authors have demonstrated a novel simulation framework and tools for evaluating DBT systems in an objective, task-specific manner. This framework facilitates further investigation of image quality tradeoffs in DBT.
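    The figure of merit used above, AUC, has a convenient nonparametric form: it equals the Mann-Whitney statistic, i.e. the probability that a signal-present score exceeds a signal-absent score (ties counted half). A minimal sketch with synthetic observer scores:

```python
# AUC via the Mann-Whitney identity: fraction of (signal, noise) score
# pairs where the signal-present score wins (ties count 0.5).
def auc(signal_scores, noise_scores):
    wins = sum((s > n) + 0.5 * (s == n)
               for s in signal_scores for n in noise_scores)
    return wins / (len(signal_scores) * len(noise_scores))

auc_value = auc([0.9, 0.8, 0.6], [0.7, 0.5, 0.4])  # synthetic scores
```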

  3. Quantifying the impact of mass vaccination programmes on notified cases in the Netherlands.

    NARCIS (Netherlands)

    van Wijhe, M; Tulen, A D; Korthals Altes, H; McDonald, S A; de Melker, H E; Postma, M J; Wallinga, J

    2018-01-01

    Vaccination programmes are considered a main contributor to the decline of infectious diseases over the 20th century. In recent years, the national vaccination coverage in the Netherlands has been declining, highlighting the need for continuous monitoring and evaluation of vaccination programmes.

  4. Quantifying food losses and the potential for reduction in Switzerland.

    Science.gov (United States)

    Beretta, Claudio; Stoessel, Franziska; Baier, Urs; Hellweg, Stefanie

    2013-03-01

    A key element in making our food systems more efficient is the reduction of food losses across the entire food value chain. Nevertheless, food losses are often neglected. This paper quantifies food losses in Switzerland at the various stages of the food value chain (agricultural production, postharvest handling and trade, processing, food service industry, retail, and households), identifies hotspots and analyses the reasons for losses. Twenty-two food categories are modelled separately in a mass and energy flow analysis, based on data from 31 companies within the food value chain, and from public institutions, associations, and from the literature. The energy balance shows that 48% of the total calories produced (edible crop yields at harvest time and animal products, including slaughter waste) is lost across the whole food value chain. Half of these losses would be avoidable given appropriate mitigation measures. Most avoidable food losses occur at the household, processing, and agricultural production stage of the food value chain. Households are responsible for almost half of the total avoidable losses (in terms of calorific content). Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Quantifying the provenance of aeolian sediments using multiple composite fingerprints

    Science.gov (United States)

    Liu, Benli; Niu, Qinghe; Qu, Jianjun; Zu, Ruiping

    2016-09-01

    We introduce a new fingerprinting method that uses multiple composite fingerprints for studies of aeolian sediment provenance. We used this method to quantify the provenance of sediments on both sides of the Qinghai-Tibetan Railway (QTR) in the Cuona Lake section of the Tibetan Plateau (TP), in an environment characterized by aeolian and fluvial interactions. The method involves repeatedly solving a linear mixing model based on mass conservation; the model is not limited to spatial scale or transport types and uses all the tracer groups that passed the range check, Kruskal-Wallis H-test, and a strict analytical solution screening. The proportional estimates that result from using different composite fingerprints are highly variable; however, the average of these fingerprints has a greater accuracy and certainty than any single fingerprint. The results show that sand from the lake beach, hilly surface, and gullies contribute, respectively, 48%, 31% and 21% to the western railway sediments and 43%, 33% and 24% to the eastern railway sediments. The difference between contributions from various sources on either side of the railway, which may increase in the future, was clearly related to variations in local transport characteristics, a conclusion that is supported by grain size analysis. The construction of the QTR changed the local cycling of materials, and the difference in provenance between the sediments that are separated by the railway reflects the changed sedimentary conditions on either side of the railway. The effectiveness of this method suggests that it will be useful in other studies of aeolian sediments.
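    The mass-conservation mixing idea, and the observation that averaging across fingerprints is more stable than relying on any single one, can be sketched for the simplest case of two sources and several tracers. The tracer values below are synthetic, and a real application (as in the study) would use more sources and composite tracer groups screened by the tests named above:

```python
# Two-source mixing sketch: each conservative tracer gives one estimate of
# the proportion p of source A via  mixture = p*a + (1-p)*b,
# and the estimates are averaged (cf. averaging composite fingerprints).
def source_proportion(mixture, source_a, source_b):
    """Solve mixture = p*a + (1-p)*b for p."""
    return (mixture - source_b) / (source_a - source_b)

tracers = [  # (sediment value, source A value, source B value), synthetic
    (6.0, 10.0, 2.0),
    (5.2, 8.0, 1.0),
    (3.5, 5.0, 2.0),
]
estimates = [source_proportion(m, a, b) for m, a, b in tracers]
p_a = sum(estimates) / len(estimates)  # averaged contribution of source A
```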

  6. Quantifying Riverscape Connectivity with Graph Theory

    Science.gov (United States)

    Carbonneau, P.; Milledge, D.; Sinha, R.; Tandon, S. K.

    2013-12-01

    Fluvial catchments convey fluxes of water, sediment, nutrients and aquatic biota. At continental scales, crustal topography defines the overall path of channels whilst at local scales depositional and/or erosional features generally determine the exact path of a channel. Furthermore, constructions such as dams, for either water abstraction or hydropower, often have a significant impact on channel networks. The concept of 'connectivity' is commonly invoked when conceptualising the structure of a river network. This concept is easy to grasp but there have been uneven efforts across the environmental sciences to actually quantify connectivity. Currently there have only been a few studies reporting quantitative indices of connectivity in river sciences, notably, in the study of avulsion processes. However, the majority of current work describing some form of environmental connectivity in a quantitative manner is in the field of landscape ecology. Driven by the need to quantify habitat fragmentation, landscape ecologists have returned to graph theory. Within this formal setting, landscape ecologists have successfully developed a range of indices which can model connectivity loss. Such formal connectivity metrics are currently needed for a range of applications in fluvial sciences. One of the most urgent needs relates to dam construction. In the developed world, hydropower development has generally slowed and in many countries, dams are actually being removed. However, this is not the case in the developing world where hydropower is seen as a key element to low-emissions power-security. For example, several dam projects are envisaged in Himalayan catchments in the next 2 decades. This region is already under severe pressure from climate change and urbanisation, and a better understanding of the network fragmentation which can be expected in this system is urgently needed. In this paper, we apply and adapt connectivity metrics from landscape ecology. We then examine the
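    One simple graph-theoretic fragmentation measure in the landscape-ecology tradition can be sketched as the fraction of reach pairs that remain mutually reachable; severing an edge (a dam between two reaches) lowers it. The tiny network and the specific index choice below are illustrative, not necessarily the metrics the paper adapts:

```python
# Connectivity as the fraction of node pairs in the same component,
# computed with a small union-find. Network and index are illustrative.
from itertools import combinations

def connected_pairs_fraction(n_nodes, edges):
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    total = n_nodes * (n_nodes - 1) // 2
    same = sum(find(a) == find(b) for a, b in combinations(range(n_nodes), 2))
    return same / total

river = [(0, 1), (1, 2), (2, 3), (2, 4)]            # small dendritic network
before = connected_pairs_fraction(5, river)
after = connected_pairs_fraction(5, [(0, 1), (2, 3), (2, 4)])  # dam severs 1-2
```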

  7. Quantifying antimicrobial resistance at veal calf farms.

    Directory of Open Access Journals (Sweden)

    Angela B Bosman

    Full Text Available This study was performed to determine a sampling strategy to quantify the prevalence of antimicrobial resistance on veal calf farms, based on the variation in antimicrobial resistance within and between calves on five farms. Faecal samples from 50 healthy calves (10 calves/farm were collected. From each individual sample and one pooled faecal sample per farm, 90 selected Escherichia coli isolates were tested for their resistance against 25 mg/L amoxicillin, 25 mg/L tetracycline, 0.5 mg/L cefotaxime, 0.125 mg/L ciprofloxacin and 8/152 mg/L trimethoprim/sulfamethoxazole (tmp/s) by replica plating. From each faecal sample another 10 selected E. coli isolates were tested for their resistance by broth microdilution as a reference. Logistic regression analysis was performed to compare the odds of testing an isolate resistant between both test methods (replica plating vs. broth microdilution and to evaluate the effect of pooling faecal samples. Bootstrap analysis was used to investigate the precision of the estimated prevalence of resistance to each antimicrobial obtained by several simulated sampling strategies. Replica plating showed similar odds of E. coli isolates tested resistant compared to broth microdilution, except for ciprofloxacin (OR 0.29, p ≤ 0.05. Pooled samples showed in general lower odds of an isolate being resistant compared to individual samples, although these differences were not significant. Bootstrap analysis showed that within each antimicrobial the various compositions of a pooled sample provided consistent estimates for the mean proportion of resistant isolates. Sampling strategies should be based on the variation in resistance among isolates within faecal samples and between faecal samples, which may vary by antimicrobial. In our study, the optimal sampling strategy from the perspective of precision of the estimated levels of resistance and practicality consists of a pooled faecal sample from 20 individual animals, of which
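    The bootstrap step used to gauge the precision of a prevalence estimate can be sketched as resampling isolates with replacement and reading off percentile confidence limits. The 0/1 resistance calls and sample size below are synthetic:

```python
# Percentile-bootstrap CI for the proportion of resistant isolates.
# Resistance calls are synthetic 0/1 values, not study data.
import random

def bootstrap_ci(calls, n_boot=10_000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    stats = sorted(
        sum(rng.choice(calls) for _ in calls) / len(calls)
        for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

calls = [1] * 30 + [0] * 70  # 30% resistant among 100 isolates
lo, hi = bootstrap_ci(calls)
```

    Comparing interval widths across simulated sampling strategies (pooled vs. individual, different numbers of animals) is exactly the kind of precision comparison the abstract describes.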

  8. Quantifying the Clinical Significance of Cannabis Withdrawal

    Science.gov (United States)

    Allsop, David J.; Copeland, Jan; Norberg, Melissa M.; Fu, Shanlin; Molnar, Anna; Lewis, John; Budney, Alan J.

    2012-01-01

    Background and Aims Questions over the clinical significance of cannabis withdrawal have hindered its inclusion as a discrete cannabis-induced psychiatric condition in the Diagnostic and Statistical Manual of Mental Disorders (DSM IV). This study aims to quantify functional impairment to normal daily activities from cannabis withdrawal, and looks at the factors predicting functional impairment. In addition the study tests the influence of functional impairment from cannabis withdrawal on cannabis use during and after an abstinence attempt. Methods and Results A volunteer sample of 49 non-treatment seeking cannabis users who met DSM-IV criteria for dependence provided daily withdrawal-related functional impairment scores during a one-week baseline phase and two weeks of monitored abstinence from cannabis, with a one-month follow-up. Functional impairment from withdrawal symptoms was strongly associated with symptom severity (p = 0.0001). Participants with more severe cannabis dependence before the abstinence attempt reported greater functional impairment from cannabis withdrawal (p = 0.03). Relapse to cannabis use during the abstinence period was associated with greater functional impairment from a subset of withdrawal symptoms in high dependence users. Higher levels of functional impairment during the abstinence attempt predicted higher levels of cannabis use at one-month follow-up (p = 0.001). Conclusions Cannabis withdrawal is clinically significant because it is associated with functional impairment to normal daily activities, as well as relapse to cannabis use. Sample size in the relapse group was small and the use of a non-treatment seeking population requires findings to be replicated in clinical samples. Tailoring treatments to target withdrawal symptoms contributing to functional impairment during a quit attempt may improve treatment outcomes. PMID:23049760

  9. Quantifying seasonal velocity at Khumbu Glacier, Nepal

    Science.gov (United States)

    Miles, E.; Quincey, D. J.; Miles, K.; Hubbard, B. P.; Rowan, A. V.

    2017-12-01

    While the low-gradient debris-covered tongues of many Himalayan glaciers exhibit low surface velocities, quantifying ice flow and its variation through time remains a key challenge for studies aimed at determining the long-term evolution of these glaciers. Recent work has suggested that glaciers in the Everest region of Nepal may show seasonal variability in surface velocity, with ice flow peaking during the summer as monsoon precipitation provides hydrological inputs and thus drives changes in subglacial drainage efficiency. However, satellite and aerial observations of glacier velocity during the monsoon are greatly limited due to cloud cover. Those that do exist do not span the period over which the most dynamic changes occur, and consequently short-term (i.e. daily) changes in flow, as well as the evolution of ice dynamics through the monsoon period, remain poorly understood. In this study, we combine field and remote (satellite image) observations to create a multi-temporal, 3D synthesis of ice deformation rates at Khumbu Glacier, Nepal, focused on the 2017 monsoon period. We first determine net annual and seasonal surface displacements for the whole glacier based on Landsat-8 (OLI) panchromatic data (15m) processed with ImGRAFT. We integrate inclinometer observations from three boreholes drilled by the EverDrill project to determine cumulative deformation at depth, providing a 3D perspective and enabling us to assess the role of basal sliding at each site. We additionally analyze high-frequency on-glacier L1 GNSS data from three sites to characterize variability within surface deformation at sub-seasonal timescales. Finally, each dataset is validated against repeat-dGPS observations at gridded points in the vicinity of the boreholes and GNSS dataloggers. 
These datasets complement one another to infer thermal regime across the debris-covered ablation area of the glacier, and emphasize the seasonal and spatial variability of ice deformation for glaciers in High

  10. Quantifying collective attention from tweet stream.

    Directory of Open Access Journals (Sweden)

    Kazutoshi Sasahara

    Full Text Available Online social media are increasingly facilitating our social interactions, thereby making available a massive "digital fossil" of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of "collective attention" on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or "tweets." Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attention, including one related to the Tohoku-oki earthquake. "Retweet" networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era.
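    The burst-detection step above hinges on comparing a regular (circadian) tweet-rate distribution against the observed one with the Jensen-Shannon divergence. A minimal sketch of that comparison, using made-up hourly tweet counts; the actual method's smoothing windows, key-term extraction, and thresholds are not shown:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2, bounded by 1) between two
    discrete distributions given as count vectors."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) terms contribute nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hourly tweet counts: a regular circadian day vs. a day with a morning burst.
regular = [2, 1, 1, 1, 2, 4, 7, 9, 10, 11, 10, 9,
           9, 10, 10, 11, 12, 13, 14, 13, 11, 8, 5, 3]
burst   = [2, 1, 1, 1, 2, 4, 7, 9, 60, 55, 30, 15,
           9, 10, 10, 11, 12, 13, 14, 13, 11, 8, 5, 3]

print(round(js_divergence(regular, regular), 4))  # 0.0 for identical profiles
print(round(js_divergence(regular, burst), 4))    # larger value flags the burst
```

In the paper's framing, the divergence between the expected circadian profile and the observed stream serves as the intensity of collective attention at that moment.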

  11. Quantifying the clinical significance of cannabis withdrawal.

    Directory of Open Access Journals (Sweden)

    David J Allsop

    Full Text Available Questions over the clinical significance of cannabis withdrawal have hindered its inclusion as a discrete cannabis-induced psychiatric condition in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). This study aims to quantify functional impairment to normal daily activities from cannabis withdrawal, and examines the factors predicting functional impairment. In addition, the study tests the influence of functional impairment from cannabis withdrawal on cannabis use during and after an abstinence attempt. A volunteer sample of 49 non-treatment-seeking cannabis users who met DSM-IV criteria for dependence provided daily withdrawal-related functional impairment scores during a one-week baseline phase and two weeks of monitored abstinence from cannabis, with a one-month follow-up. Functional impairment from withdrawal symptoms was strongly associated with symptom severity (p = 0.0001). Participants with more severe cannabis dependence before the abstinence attempt reported greater functional impairment from cannabis withdrawal (p = 0.03). Relapse to cannabis use during the abstinence period was associated with greater functional impairment from a subset of withdrawal symptoms in high-dependence users. Higher levels of functional impairment during the abstinence attempt predicted higher levels of cannabis use at one-month follow-up (p = 0.001). Cannabis withdrawal is clinically significant because it is associated with functional impairment to normal daily activities, as well as relapse to cannabis use. The sample size in the relapse group was small, and the use of a non-treatment-seeking population requires the findings to be replicated in clinical samples. Tailoring treatments to target withdrawal symptoms contributing to functional impairment during a quit attempt may improve treatment outcomes.

  12. Quantifying motion for pancreatic radiotherapy margin calculation

    International Nuclear Information System (INIS)

    Whitfield, Gillian; Jain, Pooja; Green, Melanie; Watkins, Gillian; Henry, Ann; Stratford, Julie; Amer, Ali; Marchant, Thomas; Moore, Christopher; Price, Patricia

    2012-01-01

    Background and purpose: Pancreatic radiotherapy (RT) is limited by uncertain target motion. We quantified 3D patient/organ motion during pancreatic RT and calculated required treatment margins. Materials and methods: Cone-beam computed tomography (CBCT) and orthogonal fluoroscopy images were acquired post-RT delivery from 13 patients with locally advanced pancreatic cancer. Bony setup errors were calculated from CBCT. Inter- and intra-fraction fiducial (clip/seed/stent) motion was determined from CBCT projections and orthogonal fluoroscopy. Results: Using an off-line CBCT correction protocol, systematic (random) setup errors were 2.4 (3.2), 2.0 (1.7) and 3.2 (3.6) mm laterally (left–right), vertically (anterior–posterior) and longitudinally (cranio-caudal), respectively. Fiducial motion varied substantially. Random inter-fractional changes in mean fiducial position were 2.0, 1.6 and 2.6 mm; 95% of intra-fractional peak-to-peak fiducial motion was up to 6.7, 10.1 and 20.6 mm, respectively. Calculated clinical to planning target volume (CTV–PTV) margins were 1.4 cm laterally, 1.4 cm vertically and 3.0 cm longitudinally for 3D conformal RT, reduced to 0.9, 1.0 and 1.8 cm, respectively, if using 4D planning and online setup correction. Conclusions: Commonly used CTV–PTV margins may inadequately account for target motion during pancreatic RT. Our results indicate better immobilisation, individualised allowance for respiratory motion, online setup error correction and 4D planning would improve targeting.
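    The abstract reports systematic and random error components per axis. As an illustration of how such components can be turned into a CTV-PTV margin, the sketch below applies the van Herk recipe (M = 2.5Σ + 0.7σ) with independent components combined in quadrature; this is a common convention, not necessarily the exact calculation used in the study, and the extra motion terms in the second call are illustrative placeholders:

```python
import math

def van_herk_margin(systematic_mm, random_mm):
    """CTV-PTV margin (mm) from the van Herk recipe M = 2.5*Sigma + 0.7*sigma.

    systematic_mm / random_mm: lists of independent error components (mm),
    each combined in quadrature before applying the recipe.
    """
    Sigma = math.sqrt(sum(s ** 2 for s in systematic_mm))
    sigma = math.sqrt(sum(r ** 2 for r in random_mm))
    return 2.5 * Sigma + 0.7 * sigma

# Lateral setup errors from the study: systematic 2.4 mm, random 3.2 mm.
print(round(van_herk_margin([2.4], [3.2]), 1))            # → 8.2 (setup alone)
# Adding hypothetical fiducial-motion components (placeholders):
print(round(van_herk_margin([2.4, 2.0], [3.2, 3.0]), 1))  # → 10.9
```

The gap between such setup-only margins and the 1.4-3.0 cm margins reported above reflects the substantial fiducial (organ) motion the study measured.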

  13. A Methodological Approach to Quantifying Plyometric Intensity.

    Science.gov (United States)

    Jarvis, Mark M; Graham-Smith, Phil; Comfort, Paul

    2016-09-01

    Jarvis, MM, Graham-Smith, P, and Comfort, P. A Methodological approach to quantifying plyometric intensity. J Strength Cond Res 30(9): 2522-2532, 2016. In contrast to other methods of training, the quantification of plyometric exercise intensity is poorly defined. The purpose of this study was to evaluate the suitability of a range of neuromuscular and mechanical variables to describe the intensity of plyometric exercises. Seven male recreationally active subjects performed a series of 7 plyometric exercises. Neuromuscular activity was measured using surface electromyography (SEMG) at vastus lateralis (VL) and biceps femoris (BF). Surface electromyography data were divided into concentric (CON) and eccentric (ECC) phases of movement. Mechanical output was measured by ground reaction forces and processed to provide peak impact ground reaction force (PF), peak eccentric power (PEP), and impulse (IMP). Statistical analysis was conducted to assess the reliability (intraclass correlation coefficient) and sensitivity (smallest detectable difference) of all variables. Mean values of SEMG demonstrated high reliability (r ≥ 0.82), excluding ECC VL during a 40-cm drop jump (r = 0.74). PF, PEP, and IMP demonstrated high reliability (r ≥ 0.85). Statistical power for force variables was excellent (power = 1.0), and good for SEMG (power ≥ 0.86) excluding CON BF (power = 0.57). There was no significant difference (p > 0.05) in CON SEMG between exercises. Eccentric-phase SEMG only distinguished between exercises involving a landing and those that did not (percentage of maximal voluntary isometric contraction [%MVIC]: no landing, 65 ± 5; landing, 140 ± 8). Peak eccentric power, PF, and IMP all distinguished between exercises. In conclusion, CON neuromuscular activity does not appear to vary when intent is maximal, whereas ECC activity is dependent on the presence of a landing. Force characteristics provide a reliable and sensitive measure enabling precise description of intensity
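    Reliability here is summarized by an intraclass correlation coefficient (ICC) and sensitivity by a smallest detectable difference. One common way to derive the latter from the former (an assumption for illustration; the paper's exact computation is not given in the abstract) is SDD = 1.96 · √2 · SEM, with SEM = SD · √(1 − ICC):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd * math.sqrt(1.0 - icc)

def sdd(sd, icc):
    """Smallest detectable difference at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Hypothetical peak-force data: ICC = 0.85 (the reliability floor reported
# for the force variables) and a between-subject SD of 150 N.
print(round(sem(150.0, 0.85), 1))   # SEM in newtons
print(round(sdd(150.0, 0.85), 1))   # smallest detectable difference in newtons
```

A change between sessions smaller than the SDD cannot be distinguished from measurement noise, which is why the abstract pairs reliability with sensitivity.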

  14. Quantifying Permafrost Characteristics with DCR-ERT

    Science.gov (United States)

    Schnabel, W.; Trochim, E.; Munk, J.; Kanevskiy, M. Z.; Shur, Y.; Fortier, R.

    2012-12-01

    Geophysical methods offer an efficient means of quantifying permafrost characteristics for Arctic road design and engineering. In the Alaskan Arctic, construction and maintenance of roads requires accounting for permafrost: ground that remains below 0 °C for two or more years. Features such as ice content and temperature are critical for understanding current and future ground conditions for planning, design and evaluation of engineering applications. This study focused on the proposed Foothills West Transportation Access project corridor, where the purpose is to construct a new all-season road connecting the Dalton Highway to Umiat. Four major areas were chosen that represented a range of conditions, including gravel bars, alluvial plains, tussock tundra (both unburned and burned conditions), high- and low-centered ice-wedge polygons and an active thermokarst feature. Direct-current resistivity using galvanic contact (DCR-ERT) was applied over transects. Complementary site data, including boreholes, active-layer depths, vegetation descriptions and site photographs, were obtained. The boreholes provided information on soil morphology, ice texture and gravimetric moisture content. Horizontal and vertical resolutions in the DCR-ERT were varied to determine the presence or absence of ground ice; subsurface heterogeneity; and the depth to groundwater (if present). The four main DCR-ERT configurations used were: 84 electrodes with 2 m spacing; 42 electrodes with 0.5 m spacing; 42 electrodes with 2 m spacing; and 84 electrodes with 1 m spacing. In terms of identifying ground-ice characteristics, the higher-horizontal-resolution DCR-ERT transects, with either 42 or 84 electrodes and 0.5 or 1 m spacing, were best able to differentiate wedge ice. This evaluation is based on a combination of both borehole stratigraphy and surface characteristics. Simulated apparent resistivity values for permafrost areas varied from a low of 4582 Ω m to a high of 10034 Ω m. Previous
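    Electrical resistivity surveys such as DCR-ERT convert a measured voltage and injected current into an apparent resistivity through an array-specific geometric factor. A sketch for a hypothetical Wenner array (electrode spacing a, geometric factor 2πa); the survey's actual array geometries and inversion steps may differ:

```python
import math

def wenner_apparent_resistivity(a_m, voltage_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner array with electrode
    spacing a: rho_a = 2 * pi * a * (V / I)."""
    return 2.0 * math.pi * a_m * voltage_v / current_a

# Hypothetical reading at 2 m spacing: 0.8 V across the potential
# electrodes for a 1 mA injected current.
rho = wenner_apparent_resistivity(2.0, 0.8, 0.001)
print(round(rho))  # → 10053 ohm-m, near the high end reported for permafrost
```

High apparent resistivities like this are what make ice-rich permafrost stand out against thawed or groundwater-bearing zones in the transect sections.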

  15. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full Text Available Abstract Background: The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods: We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results: Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
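    The displacement simulation described above can be sketched as a small Monte Carlo experiment. Here a square grid stands in for census-tract polygons, and error distances are drawn from an exponential distribution matched to a 100 m median; both are simplifying assumptions, since the study used real Census 2000 tracts and the empirical error distribution:

```python
import math
import random

random.seed(42)

TRACT_SIZE = 500.0  # metres; square grid cells stand in for census tracts

def tract_of(x, y):
    """Grid cell containing the point (a stand-in for census-tract lookup)."""
    return (int(x // TRACT_SIZE), int(y // TRACT_SIZE))

def simulate(n_geocodes=5000, median_error_m=100.0):
    """Displace each geocode at a random angle by a random distance and
    count how often the displaced point lands in a different cell."""
    scale = median_error_m / math.log(2)  # exponential with the given median
    moved = 0
    for _ in range(n_geocodes):
        x = random.uniform(0, 50 * TRACT_SIZE)
        y = random.uniform(0, 50 * TRACT_SIZE)
        theta = random.uniform(0, 2 * math.pi)
        d = random.expovariate(1.0 / scale)
        x2, y2 = x + d * math.cos(theta), y + d * math.sin(theta)
        moved += tract_of(x, y) != tract_of(x2, y2)
    return moved / n_geocodes

print(f"{simulate():.1%} of geocodes resolved into a different cell")
```

With irregular real tract boundaries the misclassification rate differs, but the structure of the experiment (displace, re-assign, tally) is the same.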

  16. Quantifying emission reduction contributions by emerging economies

    Energy Technology Data Exchange (ETDEWEB)

    Moltmann, Sara; Hagemann, Markus; Eisbrenner, Katja; Hoehne, Niklas [Ecofys GmbH, Koeln (Germany); Sterk, Wolfgang; Mersmann, Florian; Ott, Hermann E.; Watanabe, Rie [Wuppertal Institut (Germany)

    2011-04-15

    Further action is needed that goes far beyond what has been agreed so far under the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol to 'prevent dangerous anthropogenic interference with the climate system', the ultimate objective of the UNFCCC. It is beyond question that developed countries (Annex I countries) will have to take a leading role. They will have to commit to substantial emission reductions and financing commitments due to their historical responsibility and their financial capability. However, the stabilisation of the climate system will require global emissions to peak within the next decade and decline to well below half of current levels by the middle of the century. It is hence a global issue and thus depends on the participation of as many countries as possible. This report provides a comparative analysis of the greenhouse gas (GHG) emissions, including the national climate plans, of the major emitting developing countries Brazil, China, India, Mexico, South Africa and South Korea. It includes an overview of emissions and economic development and of existing national climate change strategies; uses a consistent methodology for estimating emission reduction potential and the costs of mitigation options; provides an estimate of the reductions to be achieved through the national climate plans; and finally compares the results to the allocation of emission rights according to different global effort-sharing approaches. In addition, the report discusses possible nationally appropriate mitigation actions (NAMAs) the six countries could take based on the analysis of mitigation options. This report is an output of the project 'Proposals for quantifying emission reduction contributions by emerging economies' by Ecofys and the Wuppertal Institute for the Federal Environment Agency in Dessau. It builds upon the earlier joint work 'Proposals for contributions of emerging economies to the climate

  17. Quantifying the impacts of global disasters

    Science.gov (United States)

    Jones, L. M.; Ross, S.; Wilson, R. I.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Hansen, R. A.; Johnson, L. A.; Kirby, S. H.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Thio, H. K.; Wein, A. M.; Whitmore, P.; Wood, N. J.

    2012-12-01

    The US Geological Survey, National Oceanic and Atmospheric Administration, California Geological Survey, and other entities are developing a Tsunami Scenario, depicting a realistic outcome of a hypothetical but plausible large tsunami originating in the eastern Aleutian Arc, affecting the west coast of the United States, including Alaska and Hawaii. The scenario includes earth-science effects, damage and restoration of the built environment, and social and economic impacts. Like the earlier ShakeOut and ARkStorm disaster scenarios, the purpose of the Tsunami Scenario is to apply science to quantify the impacts of natural disasters in a way that can be used by decision makers in the affected sectors to reduce the potential for loss. Most natural disasters are local. A major hurricane can destroy a city or damage a long swath of coastline while mostly sparing inland areas. The largest earthquake on record caused strong shaking along 1500 km of Chile, but left the capital relatively unscathed. Previous scenarios have used the local nature of disasters to focus interaction with the user community. However, the capacity for global disasters is growing with the interdependency of the global economy. Earthquakes have disrupted global computer chip manufacturing and caused stock market downturns. Tsunamis, however, can be global in their extent and direct impact. Moreover, the vulnerability of seaports to tsunami damage can increase the global consequences. The Tsunami Scenario is trying to capture the widespread effects while maintaining the close interaction with users that has been one of the most successful features of the previous scenarios. The scenario tsunami occurs in the eastern Aleutians with a source similar to the 2011 Tohoku event. Geologic similarities support the argument that a Tohoku-like source is plausible in Alaska. 
It creates a major nearfield tsunami in the Aleutian arc and peninsula, a moderate tsunami in the US Pacific Northwest, large but not the

  18. A method to quantify the "cone of economy".

    Science.gov (United States)

    Haddas, Ram; Lieberman, Isador H

    2018-05-01

    A non-randomized, prospective, concurrent-control cohort study. The purpose of this study is to develop and evaluate a method to quantify the dimensions of the cone of economy (COE) and the energy expenditure associated with maintaining a balanced posture within the COE in a group of adult degenerative scoliosis patients, and to compare them with matched non-scoliotic controls. Balance is defined as the ability of the human body to maintain its center of mass (COM) within the base of support with minimal postural sway. The cone of economy refers to the stable region of upright standing posture. The underlying assumption is that deviating outside one's individual cone challenges the balance mechanisms. Adult degenerative scoliosis (ADS) patients exhibit a variety of postural changes within their COE, involving the spine, pelvis and lower extremities, in their effort to compensate for the altered posture. Ten ADS patients and ten non-scoliotic volunteers performed a series of functional balance tests. The dimensions of the COE and the energy expenditure related to maintaining balance within the COE were measured using a human motion video capture system and dynamic surface electromyography. ADS patients presented more COM sway in the sagittal (ADS: 1.59 cm vs. H: 0.61 cm; p = 0.049) and coronal (ADS: 2.84 cm vs. H: 1.72 cm; p = 0.046) directions in comparison to the non-scoliotic controls. ADS patients presented with more COM (ADS: 33.30 cm vs. H: 19.13 cm; p = 0.039) and head (ADS: 31.06 cm vs. H: 19.13 cm; p = 0.013) displacements in comparison to the non-scoliotic controls. Scoliosis patients expended more muscle activity to maintain static standing, as manifested by increased muscle activity in their erector spinae (ADS: 37.16 mV vs. H: 20.31 mV; p = 0.050) and gluteus maximus (ADS: 33.12 mV vs. H: 12.09 mV; p = 0.001) muscles. We were able to develop and evaluate a method that quantifies the COE boundaries, COM displacement, and amount of sway within the COE
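    The COM sway figures quoted above are peak-to-peak displacements along each anatomical axis. A minimal sketch of that calculation from hypothetical motion-capture samples (the study's filtering and segmentation steps are not shown):

```python
def sway_range(samples_cm):
    """Peak-to-peak sway (cm) along one axis from COM position samples."""
    return max(samples_cm) - min(samples_cm)

# Hypothetical COM traces (cm, relative to starting position) during
# quiet standing; each entry is one motion-capture sample.
sagittal = [0.0, 0.4, 0.9, 1.2, 0.7, -0.3, -0.4, 0.1]
coronal  = [0.0, -0.8, -1.5, -0.9, 0.3, 1.2, 0.9, 0.2]

print(round(sway_range(sagittal), 2))  # → 1.6
print(round(sway_range(coronal), 2))   # → 2.7
```

Applying the same reduction to head displacement and to each EMG channel yields the per-group summary values the abstract compares between ADS patients and controls.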

  19. Quantifier spreading in child eye movements: A case of the Russian quantifier kazhdyj ‘every'

    Directory of Open Access Journals (Sweden)

    Irina A. Sekerina

    2017-07-01

    Full Text Available Extensive cross-linguistic work has documented that children up to the age of 9–10 make errors when performing a sentence-picture verification task that pairs spoken sentences with the universal quantifier 'every' and pictures with entities in partial one-to-one correspondence. These errors stem from children's difficulties in restricting the domain of a universal quantifier to the appropriate noun phrase and are referred to in the literature as 'quantifier-spreading' ('q'-spreading). We adapted the task to be performed in conjunction with eye-movement recordings using the Visual World Paradigm. Russian-speaking 5-to-6-year-old children (N = 31) listened to sentences like 'Kazhdyj alligator lezhit v vanne' ('Every alligator is lying in a bathtub') and viewed pictures with three alligators, each in a bathtub, and two extra empty bathtubs. Non-spreader children (N = 12) were adult-like in their accuracy, whereas q-spreading ones (N = 19) were only 43% correct in interpreting such sentences compared to the control sentences. Eye movements of q-spreading children revealed that more looks to the extra containers (two empty bathtubs) correlated with higher error rates, reflecting the processing pattern of q-spreading. In contrast, more looks to the distractors in control sentences did not lead to errors in interpretation. We argue that q-spreading errors are caused by interference from the extra entities in the visual context, and our results support the processing-difficulty account of the acquisition of quantification. Interference results in cognitive overload as children have to integrate multiple sources of information, i.e., the visual context with salient extra entities and the spoken sentence in which these entities are mentioned, in real-time processing. This article is part of the special collection: Acquisition of Quantification

  20. Increasing mass loss from Greenland's Mittivakkat Gletscher

    DEFF Research Database (Denmark)

    Hasholt, Bent; Mernild, S.H.; Knudsen, N.T.

    2011-01-01

    Warming in the Arctic during the past several decades has caused glaciers to thin and retreat, and recent mass loss from the Greenland Ice Sheet is well documented. Local glaciers peripheral to the ice sheet are also retreating, but few mass-balance observations are available to quantify that ret...... a local phenomenon, but are indicative of glacier changes in the broader region. Mass-balance observations for the MG therefore provide unique documentation of the general retreat of Southeast Greenland's local glaciers under ongoing climate warming....

  1. Quantifying the 'naturalness' of the curvaton model

    International Nuclear Information System (INIS)

    Lerner, Rose N.; Melville, Scott

    2014-02-01

    We investigate the probability of obtaining an observable curvature perturbation, using as an example the minimal curvaton-Higgs (MCH) model. We determine 'probably observable' and 'probably excluded' regions of parameter space assuming generic initial conditions and applying a stochastic approach for the curvaton's evolution during inflation. Inflation is assumed to last longer than the N_obs ≈ 55 observable e-folds, and the total number of e-folds of inflation determines the particular ranges of parameters that are probable. For the MCH model, these 'probably observable' regions always lie within the range 8 × 10^4 GeV ≤ m_σ ≤ 2 × 10^7 GeV, where m_σ is the curvaton mass, and the Hubble scale at horizon exit is chosen as H_* = 10^10 GeV. Because the 'probably observable' region depends on the total duration of inflation, information on parameters in the Lagrangian from particle physics and from precision CMB observables can therefore provide information about the total duration of inflation, not just the last N_obs e-folds. This method could also be applied to any model that contains additional scalar fields, to determine the probability that these scalar fields contribute to the curvature perturbation.

  2. Use of Computer-Aided Tomography (CT) Imaging for Quantifying Coarse Roots, Rhizomes, Peat, and Particle Densities in Marsh Soils

    Science.gov (United States)

    Computer-aided Tomography (CT) imaging was utilized to quantify wet mass of coarse roots, rhizomes, and peat in cores collected from organic-rich (Jamaica Bay, NY) and mineral (North Inlet, SC) Spartina alterniflora soils. Calibration rods composed of materials with standard dens...

  3. On the contrast between Germanic and Romance negated quantifiers

    OpenAIRE

    Robert Cirillo

    2009-01-01

    Universal quantifiers can be stranded in the manner described by Sportiche (1988), Giusti (1990) and Shlonsky (1991) in both the Romance and Germanic languages, but a negated universal quantifier can only be stranded in the Germanic languages. The goal of this paper is to show that this contrast between the Romance and the Germanic languages can be explained if one adapts the theory of sentential negation in Zeijlstra (2004) to constituent (quantifier) negation. According to Zeijlstra’s theor...

  4. Assessment of rock mass decay in artificial slopes

    NARCIS (Netherlands)

    Huisman, M.

    2006-01-01

    This research investigates the decay of rock masses underlying slopes, and seeks to quantify the relations of such decay with time and geotechnical parameters of the slope and rock mass. Decay can greatly affect the geotechnical properties of rocks within engineering timescales, and may induce a

  5. Certain Verbs Are Syntactically Explicit Quantifiers

    Directory of Open Access Journals (Sweden)

    Anna Szabolcsi

    2010-12-01

    Full Text Available Quantification over individuals, times, and worlds can in principle be made explicit in the syntax of the object language, or left to the semantics and spelled out in the meta-language. The traditional view is that quantification over individuals is syntactically explicit, whereas quantification over times and worlds is not. But a growing body of literature proposes a uniform treatment. This paper examines the scopal interaction of aspectual raising verbs (begin), modals (can), and intensional raising verbs (threaten) with quantificational subjects in Shupamem, Dutch, and English. It appears that aspectual raising verbs and at least modals may undergo the same kind of overt or covert scope-changing operations as nominal quantifiers; the case of intensional raising verbs is less clear. Scope interaction is thus shown to be a new potential diagnostic of object-linguistic quantification, and the similarity in the scope behavior of nominal and verbal quantifiers supports the grammatical plausibility of ontological symmetry, explored in Schlenker (2006). References: Ben-Shalom, D. 1996. Semantic Trees. Ph.D. thesis, UCLA. Bittner, M. 1993. Case, Scope, and Binding. Dordrecht: Reidel. Cresswell, M. 1990. Entities and Indices. Dordrecht: Kluwer. Cresti, D. 1995. 'Extraction and reconstruction'. Natural Language Semantics 3: 79–122. http://dx.doi.org/10.1007/BF01252885 Curry, H. B. & Feys, R. 1958. Combinatory Logic I. Amsterdam: North-Holland. Dowty, D. R. 1988. 'Type raising, functional composition, and non-constituent conjunction'. In Richard T. Oehrle, Emmon W. Bach & Deirdre Wheeler (eds.), 'Categorial Grammars and Natural Language Structures', 153–197. Dordrecht: Reidel. Fox, D. 2002. 'On Logical Form'. In Randall Hendrick (ed.), 'Minimalist Syntax', 82–124. Oxford: Blackwell. Gallin, D. 1975. Intensional and higher-order modal logic: with applications to Montague semantics. North-Holland Pub. Co.; American Elsevier Pub. Co., Amsterdam

  6. Identifying and quantifying recurrent novae masquerading as classical novae

    International Nuclear Information System (INIS)

    Pagnotta, Ashley; Schaefer, Bradley E.

    2014-01-01

    Recurrent novae (RNe) are cataclysmic variables with two or more nova eruptions within a century. Classical novae (CNe) are similar systems with only one such eruption. Many of the so-called CNe are actually RNe for which only one eruption has been discovered. Since RNe are candidate Type Ia supernova progenitors, it is important to know whether there are enough in our Galaxy to provide the supernova rate, and therefore to know how many RNe are masquerading as CNe. To quantify this, we collected all available information on the light curves and spectra of a Galactic, time-limited sample of 237 CNe and the 10 known RNe, as well as exhaustive discovery efficiency records. We recognize RNe as having (1) outburst amplitude smaller than 14.5 − 4.5 × log(t_3), (2) orbital period > 0.6 days, (3) infrared colors of J − H > 0.7 mag and H − K > 0.1 mag, (4) FWHM of Hα > 2000 km s^-1, (5) high-excitation lines, such as Fe X or He II, near peak, (6) eruption light curves with a plateau, and (7) white dwarf mass greater than 1.2 M_☉. Using these criteria, we identify V1721 Aql, DE Cir, CP Cru, KT Eri, V838 Her, V2672 Oph, V4160 Sgr, V4643 Sgr, V4739 Sgr, and V477 Sct as strong RN candidates. We evaluate the RN fraction among the known CNe using three methods to get 24% ± 4%, 12% ± 3%, and 35% ± 3%. With roughly a quarter of the 394 known Galactic novae actually being RNe, there should be approximately a hundred such systems masquerading as CNe.
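    The seven indicators can be read as a simple checklist. The sketch below encodes the thresholds exactly as listed in the abstract (log taken as base-10, an assumption) for a hypothetical candidate; it is an illustration, not the authors' actual classification pipeline:

```python
import math

def recurrent_nova_flags(amplitude_mag, t3_days, p_orb_days, j_h, h_k,
                         fwhm_halpha_kms, has_high_excitation, has_plateau,
                         wd_mass_msun):
    """Boolean flags for the seven RN indicators listed in the abstract."""
    return {
        "small_amplitude": amplitude_mag < 14.5 - 4.5 * math.log10(t3_days),
        "long_period": p_orb_days > 0.6,
        "red_ir_colors": j_h > 0.7 and h_k > 0.1,
        "broad_halpha": fwhm_halpha_kms > 2000.0,
        "high_excitation": has_high_excitation,
        "plateau": has_plateau,
        "massive_wd": wd_mass_msun > 1.2,
    }

# Hypothetical candidate: amplitude 9 mag, t3 = 10 d, orbital period 0.9 d,
# J-H = 0.8, H-K = 0.2, FWHM(Halpha) = 3500 km/s, high-excitation lines and
# a plateau present, white dwarf mass 1.3 solar masses.
flags = recurrent_nova_flags(9.0, 10.0, 0.9, 0.8, 0.2, 3500.0, True, True, 1.3)
print(sum(flags.values()), "of 7 indicators satisfied")
```

For this made-up system the amplitude criterion evaluates to 9 < 14.5 − 4.5 × log10(10) = 10, so all seven flags come out true.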

  7. Quantifying commuter exposures to volatile organic compounds

    Science.gov (United States)

    Kayne, Ashleigh

    Motor-vehicles can be a predominant source of air pollution in cities. Traffic-related air pollution is often unavoidable for people who live in populous areas. Commuters may have high exposures to traffic-related air pollution as they are close to vehicle tailpipes. Volatile organic compounds (VOCs) are one class of air pollutants of concern because exposure to VOCs carries risk for adverse health effects. Specific VOCs of interest for this work include benzene, toluene, ethylbenzene, and xylenes (BTEX), which are often found in gasoline and combustion products. Although methods exist to measure time-integrated personal exposures to BTEX, there are few practical methods to measure a commuter's time-resolved BTEX exposure which could identify peak exposures that could be concealed with a time-integrated measurement. This study evaluated the ability of a photoionization detector (PID) to measure commuters' exposure to BTEX using Tenax TA samples as a reference and quantified the difference in BTEX exposure between cyclists and drivers with windows open and closed. To determine the suitability of two measurement methods (PID and Tenax TA) for use in this study, the precision, linearity, and limits of detection (LODs) for both the PID and Tenax TA measurement methods were determined in the laboratory with standard BTEX calibration gases. Volunteers commuted from their homes to their work places by cycling or driving while wearing a personal exposure backpack containing a collocated PID and Tenax TA sampler. Volunteers completed a survey and indicated if the windows in their vehicle were open or closed. Comparing pairs of exposure data from the Tenax TA and PID sampling methods determined the suitability of the PID to measure the BTEX exposures of commuters. The difference between BTEX exposures of cyclists and drivers with windows open and closed in Fort Collins was determined. 
Both the PID and Tenax TA measurement methods were precise and linear when evaluated in the
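    Characterizing precision, linearity, and limits of detection from calibration gases typically involves a least-squares fit of response against concentration and a blank-based LOD. The sketch below uses hypothetical calibration data and the common LOD = 3 × SD(blanks) / slope convention, which the abstract does not specify:

```python
import statistics

# Hypothetical benzene calibration: concentration (ppb) vs. detector response.
conc = [0.0, 5.0, 10.0, 20.0, 40.0]
resp = [0.1, 10.4, 20.2, 40.5, 80.3]

# Ordinary least-squares slope/intercept for y = slope * x + intercept.
mx, my = statistics.fmean(conc), statistics.fmean(resp)
sxx = sum((x - mx) ** 2 for x in conc)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
slope = sxy / sxx
intercept = my - slope * mx

# Coefficient of determination as the linearity check.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, resp))
ss_tot = sum((y - my) ** 2 for y in resp)
r2 = 1.0 - ss_res / ss_tot

# LOD = 3 * SD of replicate blank responses / slope (a common convention).
blanks = [0.08, 0.12, 0.10, 0.09, 0.11]
lod = 3.0 * statistics.stdev(blanks) / slope

print(f"slope={slope:.3f}, R^2={r2:.4f}, LOD={lod:.3f} ppb")
```

The same calibration machinery applies to both the PID and the Tenax TA method; comparing their slopes, R² values, and LODs is what establishes whether the PID is suitable for time-resolved commuter exposures.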

  8. Quantifying the Economic and Cultural Biases of Social Media through Trending Topics.

    Science.gov (United States)

    Carrascosa, Juan Miguel; Cuevas, Ruben; Gonzalez, Roberto; Azcorra, Arturo; Garcia, David

    2015-01-01

    Online social media has recently emerged as the latest major venue for the propagation of news and cultural content, competing with traditional mass media and allowing citizens to access new sources of information. In this paper, we study collectively filtered news and popular content in Twitter, known as Trending Topics (TTs), to quantify the extent to which they show biases similar to those known for mass media. We use two datasets collected in 2013 and 2014, including more than 300,000 TTs from 62 countries. The existing patterns of leader-follower relationships among countries reveal systemic biases known for mass media: countries concentrate their attention on small groups of other countries, generating a pattern of centralization in which TTs follow the gradient of wealth across countries. At the same time, we find subjective biases within language communities linked to the cultural similarity of countries, in which countries with closer cultures and shared languages tend to follow each other's TTs. Moreover, using a novel methodology based on the Google News service, we study the influence of mass media in TTs for four countries. We find that roughly half of the TTs in Twitter overlap with news reported by mass media, and that the rest of the TTs are more likely to spread internationally within Twitter. Our results confirm that online social media have the power to independently spread content beyond mass media, but at the same time social media content follows economic incentives and is subject to cultural factors and language barriers.

  9. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    rainfall datasets available over Africa on monthly, daily and sub-daily time scales as appropriate to quantify spatial and temporal differences between the datasets. We find regional wet and dry biases between datasets (using the ensemble mean as a reference), with generally larger biases in reanalysis products. Rainfall intensity is poorly represented in some datasets, which demonstrates that those datasets should not be used for rainfall intensity analyses. Using 10 CORDEX models we show that in east Africa the spread between observed datasets is often similar to the spread between models. We recommend that specific observational rainfall datasets be used for specific investigations, and also that where many datasets are applicable to an investigation, a probabilistic view be adopted for rainfall studies over Africa. Endris, H. S., P. Omondi, S. Jain, C. Lennard, B. Hewitson, L. Chang'a, J. L. Awange, A. Dosio, P. Ketiem, G. Nikulin, H-J. Panitz, M. Büchner, F. Stordal, and L. Tazalika (2013) Assessment of the Performance of CORDEX Regional Climate Models in Simulating East African Rainfall. J. Climate, 26, 8453-8475. DOI: 10.1175/JCLI-D-12-00708.1 Gbobaniyi, E., A. Sarr, M. B. Sylla, I. Diallo, C. Lennard, A. Dosio, A. Diedhiou, A. Kamga, N. A. B. Klutse, B. Hewitson, and B. Lamptey (2013) Climatology, annual cycle and interannual variability of precipitation and temperature in CORDEX simulations over West Africa. Int. J. Climatol., DOI: 10.1002/joc.3834 Hernández-Díaz, L., R. Laprise, L. Sushama, A. Martynov, K. Winger, and B. Dugas (2013) Climate simulation over CORDEX Africa domain using the fifth-generation Canadian Regional Climate Model (CRCM5). Clim. Dyn. 40, 1415-1433. DOI: 10.1007/s00382-012-1387-z Kalognomou, E., C. Lennard, M. Shongwe, I. Pinto, A. Favre, M. Kent, B. Hewitson, A. Dosio, G. Nikulin, H. Panitz, and M. Büchner (2013) A diagnostic evaluation of precipitation in CORDEX models over southern Africa. Journal of Climate, 26, 9477-9506. 
DOI:10
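
The bias calculation described in the record above, each dataset compared against the ensemble mean as a reference, can be sketched directly. Dataset names and rainfall values below are invented for illustration:

```python
import numpy as np

# Hypothetical monthly rainfall (mm) over one region from four datasets.
datasets = {
    "OBS-A": np.array([120.0, 95.0, 60.0, 30.0]),
    "OBS-B": np.array([110.0, 100.0, 55.0, 35.0]),
    "REANALYSIS-A": np.array([140.0, 120.0, 80.0, 50.0]),
    "REANALYSIS-B": np.array([100.0, 85.0, 50.0, 25.0]),
}

# Ensemble mean across datasets serves as the reference.
ens_mean = np.mean(list(datasets.values()), axis=0)

# Mean bias of each dataset relative to the ensemble mean; by
# construction the biases average to zero across datasets.
biases = {name: float(np.mean(vals - ens_mean)) for name, vals in datasets.items()}

print(biases)
```

Because the reference is the ensemble mean itself, a dataset with a positive bias here is wet relative to the ensemble, mirroring the "regional wet and dry biases" framing of the abstract.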

  10. Quantifying biodiversity and asymptotics for a sequence of random strings.

    Science.gov (United States)

    Koyano, Hitoshi; Kishino, Hirohisa

    2010-06-01

    We present a methodology for quantifying biodiversity at the sequence level by developing the probability theory on a set of strings. Further, we apply our methodology to the problem of quantifying the population diversity of microorganisms in several extreme environments and digestive organs and reveal the relation between microbial diversity and various environmental parameters.

  11. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    Science.gov (United States)

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…

  12. Gender Differences in Knee Joint Congruity Quantified from MRI

    DEFF Research Database (Denmark)

    Tummala, Sudhakar; Schiphof, Dieuwke; Byrjalsen, Inger

    2018-01-01

    was located and quantified using Euclidean distance transform. Furthermore, the CI was quantified over the contact area by assessing agreement of the first- and second-order general surface features. Then, the gender differences between CA and CI values were evaluated at different stages of radiographic OA...

  13. Anatomy of Alternating Quantifier Satisfiability (Work in progress)

    DEFF Research Database (Denmark)

    Dung, Phan Anh; Bjørner, Nikolaj; Monniaux, David

    We report on work in progress to generalize an algorithm recently introduced in [10] for checking satisfiability of formulas with quantifier alternation. The algorithm uses two auxiliary procedures: a procedure for producing a candidate formula for quantifier elimination and a procedure for elimi...

  14. The Role of Quantifier Alternations in Cut Elimination

    DEFF Research Database (Denmark)

    Gerhardy, Philipp

    2005-01-01

    Extending previous results on the complexity of cut elimination for the sequent calculus LK, we discuss the role of quantifier alternations and develop a measure to describe the complexity of cut elimination in terms of quantifier alternations in cut formulas and contractions on such formulas...

  15. Características do pasto e acúmulo de forragem em capim-tanzânia submetido a alturas de manejo do pasto Sward characteristics and herbage accumulation of Tanzania grass submitted to sward heights

    Directory of Open Access Journals (Sweden)

    Marcos Weber do Canto

    2008-03-01

    Full Text Available The objective of this experiment was to evaluate sward management heights (20, 40, 60 and 80 cm) in Tanzania grass (Panicum maximum Jacq.) pastures under continuous stocking, with respect to sward characteristics and dry matter accumulation. The animals used were Nellore steers (Bos indicus), and sward height was controlled with a variable stocking rate (put-and-take technique). Evaluations were made of forage mass, green leaf lamina mass, leaf:stem ratio, morphological composition and dry matter accumulation rate. The experimental design was completely randomized with two replications. Forage mass increased linearly with sward height, averaging 2,767, 3,105, 3,657 and 4,436 kg ha-1 at sward heights of 20, 40, 60 and 80 cm, respectively. Dry matter accumulation rates at 20, 40, 60 and 80 cm were 104, 108, 90 and 81 kg ha-1 per day, respectively, indicating that accumulation decreased as sward height increased. The leaf:stem ratio decreased linearly with increasing sward height. Tanzania grass pastures under continuous stocking in late spring and summer should be managed at between 40 and 60 cm of sward height.
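
The linear response of forage mass to sward height reported in this abstract can be reproduced with a simple least-squares fit to the four treatment means quoted above:

```python
import numpy as np

# Treatment means taken from the abstract:
# sward height (cm) vs forage (herbage) mass (kg/ha).
height = np.array([20.0, 40.0, 60.0, 80.0])
mass = np.array([2767.0, 3105.0, 3657.0, 4436.0])

# Least-squares line: mass = slope*height + intercept
slope, intercept = np.polyfit(height, mass, 1)

print(f"about {slope:.1f} kg/ha of additional forage mass per cm of sward height")
```

The fitted slope (roughly 28 kg ha-1 per cm over this range) is consistent with the abstract's statement that forage mass increased linearly with sward height.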

  16. Quantifying the effect of riming on snowfall using ground-based observations

    Science.gov (United States)

    Moisseev, Dmitri; von Lerber, Annakaisa; Tiira, Jussi

    2017-04-01

    Ground-based observations of ice particle size distribution and ensemble mean density are used to quantify the effect of riming on snowfall. The rime mass fraction is derived from these measurements by following the approach that is used in a single ice-phase category microphysical scheme proposed for use in numerical weather prediction models. One of the characteristics of the proposed scheme is that the prefactor of a power law relation that links mass and size of ice particles is determined by the rime mass fraction, while the exponent does not change. To derive the rime mass fraction, a mass-dimensional relation representative of unrimed snow is also determined. To check the validity of the proposed retrieval method, the derived rime mass fraction is converted to the effective liquid water path, which is compared to microwave radiometer observations. Since dual-polarization radar observations are often used to detect riming, the impact of riming on dual-polarization radar variables is studied for differential reflectivity measurements. It is shown that the relation between rime mass fraction and differential reflectivity is ambiguous; other factors, such as change in median volume diameter, also need to be considered. Given the current interest in the sensitivity of precipitation to aerosol pollution, which could inhibit riming, the importance of riming for surface snow accumulation is investigated. It is found that riming is responsible for 5% to 40% of snowfall mass. The study is based on data collected at the University of Helsinki field station in Hyytiälä during the U.S. Department of Energy Biogenic Aerosols Effects on Clouds and Climate (BAECC) field campaign and the winter 2014/2015. In total 22 winter storms were analyzed, and detailed analysis of two events is presented to illustrate the study.
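
The abstract's retrieval idea, a fixed exponent in the mass-dimensional power law m(D) = a·D^b with a prefactor that grows with riming, can be sketched as below. The specific convention a = a_unrimed / (1 - f_rim) is an assumption for illustration, not necessarily the paper's exact formulation, and the prefactor values are invented:

```python
# Sketch of a rime-mass-fraction retrieval from mass-dimensional prefactors.
# Assumed convention: m(D) = a * D**b with exponent b fixed, and the
# prefactor growing with riming as a = a_unrimed / (1 - f_rim),
# hence f_rim = 1 - a_unrimed / a_observed.

def rime_mass_fraction(a_observed: float, a_unrimed: float) -> float:
    """Fraction of particle mass contributed by rime (0 = unrimed)."""
    if a_observed < a_unrimed:
        return 0.0  # observed particles lighter than the unrimed reference
    return 1.0 - a_unrimed / a_observed

# Illustrative prefactors (invented values): doubling the prefactor
# relative to unrimed snow implies half of the mass is rime.
print(rime_mass_fraction(a_observed=0.02, a_unrimed=0.01))  # -> 0.5
```

Under this convention a retrieved f_rim of 0.05 to 0.40 corresponds directly to the abstract's finding that riming contributes 5% to 40% of snowfall mass.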

  17. Quantifying multiple trace elements in uranium ore concentrates. An interlaboratory comparison

    International Nuclear Information System (INIS)

    Buerger, S.; Boulyga, S.F.; Penkin, M.V.; Jovanovic, S.; Lindvall, R.; Rasmussen, G.; Riciputi, L.

    2014-01-01

    An intercomparison was organized, with six laboratories tasked to quantify sixty-nine impurities in two uranium materials. The main technique employed for analysis was inductively coupled plasma mass spectrometry in combination with matrix-matched external calibration. The results presented highlight the current state-of-the-practice; lessons learned include previously unaccounted polyatomic interferences, issues related to sample dissolution, blank correction and calibration, and the challenge of estimating measurement uncertainties. The exercise yielded consensus values for the two analysed materials, suitable for use as laboratory standards to partially fill a gap in the availability of uranium reference materials characterized for impurities. (author)

  18. Data’s Intimacy: Machinic Sensibility and the Quantified Self

    Directory of Open Access Journals (Sweden)

    2016-09-01

    Full Text Available Today, machines observe, record the world – not just for us, but sometimes instead of us (in our stead), and even indifferently to us humans. And yet, we remain human. Correlationism may not be up to a comprehensive ontology, but the ways in which we encounter, and struggle to make some kind of sense of, machinic sensibility matters. The nature of that encounter is not instrumentality, or even McLuhanian extension, but a full-blown ‘relationship’ in which the terms by which machines ‘experience’ the world, and communicate with each other, parametrise the conditions for our own experience. This essay plays out one such relationship currently in the making: the boom in self-tracking technologies, and the attendant promise of data’s intimacy. This essay proceeds in three sections, all of which draw on a larger research project into self-tracking and contemporary data epistemologies. It thus leverages observations from close reading of self-tracking’s publicisation in the mass media between 2007 and 2016; analysis of over fifty self-tracking products, some of it through self-experimentation; and interviews and ethnographic observation, primarily of the ‘Quantified Self’ connoisseur community. The first section examines the dominant public presentations of self-tracking in early twenty-first century discourse. This discourse embraces a vision of automated and intimate self-surveillance, which is then promised to deliver superior control and objective knowledge over the self. Next, I link these promises to the recent theoretical turns towards the agency of objects and the autonomous sensory capacities of new media to consider the implications of such theories – and the technological shifts they address – for the phenomenology of the new media subject. Finally, I return to self-tracking discourse to consider its own idealisation of such a subject – what I call ‘data-sense’. I conclude by calling for a more explicit public and

  19. Quantified Effects of Late Pregnancy and Lactation on the Osmotic ...

    African Journals Online (AJOL)

    Quantified Effects of Late Pregnancy and Lactation on the Osmotic Stability of ... in the composition of erythrocyte membranes associated with the physiologic states. Keywords: Erythrocyteosmotic stability, osmotic fragility, late pregnancy, ...

  20. Study Quantifies Physical Demands of Yoga in Seniors

    Science.gov (United States)

    A recent NCCAM-funded study measured the ... performance of seven standing poses commonly taught in senior yoga classes: Chair, Wall Plank, Tree, Warrior II, ...

  1. Quantifying the economic water savings benefit of water hyacinth ...

    African Journals Online (AJOL)

    Quantifying the economic water savings benefit of water hyacinth ... Value Method was employed to estimate the average production value of irrigation water, ... invasions of this nature, as they present significant costs to the economy and ...

  2. Analyzing complex networks evolution through Information Theory quantifiers

    International Nuclear Information System (INIS)

    Carpi, Laura C.; Rosso, Osvaldo A.; Saco, Patricia M.; Ravetti, Martin Gomez

    2011-01-01

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Nino/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
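
The first quantifier named in this abstract, the square root of the Jensen-Shannon divergence between two probability distributions, is straightforward to compute. A minimal implementation (base-2 logarithms, so the value lies in [0, 1]):

```python
import numpy as np

def sqrt_jsd(p, q):
    """Square root of the Jensen-Shannon divergence between two
    probability distributions (base-2 logs, so the result is in [0, 1])."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # convention: 0 * log(0) = 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return float(np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m)))

print(sqrt_jsd([0.5, 0.5], [0.5, 0.5]))  # identical distributions -> 0.0
print(sqrt_jsd([1.0, 0.0], [0.0, 1.0]))  # disjoint distributions  -> 1.0
```

Unlike the raw Kullback-Leibler divergence, this quantity is symmetric and a true metric, which is what makes it suitable for comparing network states along an evolution process as described above.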

  3. Analyzing complex networks evolution through Information Theory quantifiers

    Energy Technology Data Exchange (ETDEWEB)

    Carpi, Laura C., E-mail: Laura.Carpi@studentmail.newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Rosso, Osvaldo A., E-mail: rosso@fisica.ufmg.b [Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte (31270-901), MG (Brazil); Chaos and Biology Group, Instituto de Calculo, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellon II, Ciudad Universitaria, 1428 Ciudad de Buenos Aires (Argentina); Saco, Patricia M., E-mail: Patricia.Saco@newcastle.edu.a [Civil, Surveying and Environmental Engineering, University of Newcastle, University Drive, Callaghan NSW 2308 (Australia); Departamento de Hidraulica, Facultad de Ciencias Exactas, Ingenieria y Agrimensura, Universidad Nacional de Rosario, Avenida Pellegrini 250, Rosario (Argentina); Ravetti, Martin Gomez, E-mail: martin.ravetti@dep.ufmg.b [Departamento de Engenharia de Producao, Universidade Federal de Minas Gerais, Av. Antonio Carlos, 6627, Belo Horizonte (31270-901), MG (Brazil)

    2011-01-24

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed, the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease and a climate network for the Tropical Pacific region to study the El Nino/Southern Oscillation (ENSO) dynamic. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.

  4. On the contrast between Germanic and Romance negated quantifiers

    Directory of Open Access Journals (Sweden)

    Robert Cirillo

    2009-01-01

    Full Text Available Universal quantifiers can be stranded in the manner described by Sportiche (1988, Giusti (1990 and Shlonsky (1991 in both the Romance and Germanic languages, but a negated universal quantifier can only be stranded in the Germanic languages. The goal of this paper is to show that this contrast between the Romance and the Germanic languages can be explained if one adapts the theory of sentential negation in Zeijlstra (2004 to constituent (quantifier negation. According to Zeijlstra’s theory, a negation marker in the Romance languages is the head of a NegP that dominates vP, whereas in the Germanic languages a negation marker is a maximal projection that occupies the specifier position of a verbal phrase. I will show that the non-occurrence of stranded negated quantifiers in the Romance languages follows from the fact that negation markers in the Romance languages are highly positioned syntactic heads.

  5. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun; Ding, Yu; Xie, Le; Genton, Marc G.

    2014-01-01

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade

  6. Quantifying Functional Reuse from Object Oriented Requirements Specifications

    NARCIS (Netherlands)

    Condori-Fernandez, Nelly; Condori-Fernández, N.; Pastor, O; Daneva, Maia; Abran, A.; Castro, J.; Quer, C.; Carvallo, J. B.; Fernandes da Silva, L.

    2008-01-01

    Software reuse is essential in improving efficiency and productivity in the software development process. This paper analyses reuse within requirements engineering phase by taking and adapting a standard functional size measurement method, COSMIC FFP. Our proposal attempts to quantify reusability

  7. User guide : process for quantifying the benefits of research.

    Science.gov (United States)

    2017-07-01

    The Minnesota Department of Transportation Research Services has adopted a process for quantifying the monetary benefits of research projects, such as the dollar value of particular ideas when implemented across the state's transportation system. T...

  8. Mass Customization Services

    DEFF Research Database (Denmark)

    Friedrich, Gerhard

    Topics of the IMCM’08 & PETO’08 and this book are: Mass customization in service, mass customizing financial services, mass customization in supply networks, implementation issues in logistics, product life cycle and mass customization. The research field of mass customization is more than 15 years...

  9. How to Quantify Deterrence and Reduce Critical Infrastructure Risk

    OpenAIRE

    Taquechel, Eric F.; Lewis, Ted G.

    2012-01-01

    This article appeared in Homeland Security Affairs (August 2012), v.8, article 12 "We propose a definition of critical infrastructure deterrence and develop a methodology to explicitly quantify the deterrent effects of critical infrastructure security strategies. We leverage historical work on analyzing deterrence, game theory and utility theory. Our methodology quantifies deterrence as the extent to which an attacker's expected utility from an infrastructure attack changes after a defende...
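
The quantification idea in this record, deterrence measured as the change in an attacker's expected utility after a defensive investment, can be sketched with a toy model. The specific utility form and all numbers below are assumptions for illustration, not the authors' actual formulation:

```python
# Toy deterrence quantification: deterrence is measured here as the
# relative drop in an attacker's expected utility after a defense is added.
# Assumed utility model: EU = p_success * payoff - attack_cost.

def expected_utility(p_success: float, payoff: float, attack_cost: float) -> float:
    return p_success * payoff - attack_cost

# Invented scenario: a defense lowers the attack's success probability.
eu_before = expected_utility(p_success=0.6, payoff=100.0, attack_cost=10.0)
eu_after = expected_utility(p_success=0.2, payoff=100.0, attack_cost=10.0)

deterrence = (eu_before - eu_after) / eu_before
print(f"deterrence effect: {deterrence:.0%}")
```

A larger relative drop in expected utility would, in this framing, indicate a stronger deterrent effect of the security strategy.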

  10. Quantitative mass imaging of single biological macromolecules.

    Science.gov (United States)

    Young, Gavin; Hundt, Nikolas; Cole, Daniel; Fineberg, Adam; Andrecka, Joanna; Tyler, Andrew; Olerinyova, Anna; Ansari, Ayla; Marklund, Erik G; Collier, Miranda P; Chandler, Shane A; Tkachenko, Olga; Allen, Joel; Crispin, Max; Billington, Neil; Takagi, Yasuharu; Sellers, James R; Eichmann, Cédric; Selenko, Philipp; Frey, Lukas; Riek, Roland; Galpin, Martin R; Struwe, Weston B; Benesch, Justin L P; Kukura, Philipp

    2018-04-27

    The cellular processes underpinning life are orchestrated by proteins and their interactions. The associated structural and dynamic heterogeneity, despite being key to function, poses a fundamental challenge to existing analytical and structural methodologies. We used interferometric scattering microscopy to quantify the mass of single biomolecules in solution with 2% sequence mass accuracy, up to 19-kilodalton resolution, and 1-kilodalton precision. We resolved oligomeric distributions at high dynamic range, detected small-molecule binding, and mass-imaged proteins with associated lipids and sugars. These capabilities enabled us to characterize the molecular dynamics of processes as diverse as glycoprotein cross-linking, amyloidogenic protein aggregation, and actin polymerization. Interferometric scattering mass spectrometry allows spatiotemporally resolved measurement of a broad range of biomolecular interactions, one molecule at a time. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  11. Body mass index and smoking: cross-sectional study of a representative sample of adolescents in Denmark

    DEFF Research Database (Denmark)

    Dhariwal, Mukesh; Rasmussen, Mette; Holstein, Bjørn Evald

    2010-01-01

    To quantify the association between body mass index (BMI) and smoking (at all and daily smoking) stratified by gender, family social class, and ethnicity among adolescents aged between 13 and 15.

  12. Galaxy Clusters: Substructure and Mass Systematics

    Science.gov (United States)

    Zhang, Yu-Ying

    2010-07-01

    We calibrate the X-ray measured hydrostatic equilibrium (H.E.) mass and assess the origin of the H.E. mass systematics using 2-D spectrally measured X-ray properties. We found that the average X-ray mass derived from H.E. using XMM-Newton data is lower than the weak lensing mass from Subaru data for relaxed clusters in a sample of 12 clusters at z~0.2. This is consistent with expectations from numerical simulations, given the non-thermal pressure support due to turbulence and bulk motions. The gas mass to weak lensing mass ratio shows no dependence on cluster morphology, which indicates that the gas mass may be a good mass proxy regardless of the cluster dynamical state. To understand the origin of the systematics of the H.E. mass, we investigated 4 nearby clusters, for which substructure is quantified by the radial fluctuations in the spectrally measured 2-D maps using a cumulative/differential scatter profile relative to the mean profile within/at a given radius. The amplitude of and the discontinuity in the scatter complement 2-D substructure diagnostics, e.g. indicating the most disturbed radial range. There is a tantalizing link between the substructure identified using the scatter of the entropy and pressure fluctuations and the deviation of the H.E. mass relative to the expected mass based on the representative scaling relation, e.g., M-Mgas, particularly at r500, the radius within which the overdensity, Δ, is 500 with respect to the critical density. This indicates that at larger radii, the systematic error of the H.E. mass may well be caused by substructure.
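
The differential scatter profile described above, radial fluctuations of a 2-D map relative to the mean profile at each radius, can be sketched on a toy map. The radii, mean profile, and sector values below are invented stand-ins for spectrally measured X-ray quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D "map": for each radial annulus, a set of azimuthal sector values.
radii = np.array([0.25, 0.5, 0.75, 1.0])       # radius in units of r500
mean_profile = np.array([8.0, 5.0, 3.0, 2.0])  # assumed mean quantity per annulus

# 12 azimuthal sectors per annulus with ~10% fractional fluctuations.
sectors = mean_profile[:, None] * (1 + 0.1 * rng.standard_normal((4, 12)))

# Differential scatter at each radius: rms fractional deviation of the
# sector values from the mean profile at that radius.
scatter = np.sqrt(np.mean((sectors - mean_profile[:, None]) ** 2, axis=1)) / mean_profile

print(scatter)
```

A jump or excess in this profile at some radius would flag that annulus as disturbed, which is how the abstract uses the scatter to localize substructure.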

  13. Quantifying Ion Transport in Polymers Using Electrochemical Quartz Crystal Microbalance with Dissipation

    Science.gov (United States)

    Lutkenhaus, Jodie; Wang, Shaoyang

    For polymers in energy systems, one of the most common means of quantifying ion transport is that of electrochemical impedance spectroscopy, in which an alternating electric field is applied and the resultant impedance response is recorded. While useful, this approach misses subtle details in transient film swelling, effects of hydration or solvent shells around the transporting ion, and changes in mechanical properties of the polymer. Here we present electrochemical quartz crystal microbalance with dissipation (EQCMD) monitoring as a means to quantify ion transport, dynamic swelling, and mechanical properties of polymers during electrochemical interrogation. We focus upon EQCMD characterization of the redox-active nitroxide radical polymer, poly(2,2,6,6-tetramethylpiperidinyloxy methacrylate) (PTMA). Upon oxidation, PTMA becomes positively charged, which requires the transport of a complementary anion into the polymer for electroneutrality. By EQCMD, we quantify anion transport and resultant swelling upon oxidation, as well as decoupling of contributions attributed to the ion and the solvent. We explore the effect of different lithium electrolyte salts in which each salt gives different charge storage and mass transport behavior. This is attributed to varied polymer-dopant and dopant-solvent interactions. The work was supported by the Grant DE-SC0014006 funded by the U.S. Department of Energy, Office of Science.
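
For rigid films, QCM frequency shifts map to areal mass via the Sauerbrey relation, Δm = -C·Δf/n; large dissipation signals that this rigid-film assumption fails, which is precisely why QCM-D matters for swollen polymers like PTMA. A minimal sketch, using the standard sensitivity constant of about 17.7 ng cm⁻² Hz⁻¹ for a 5 MHz crystal (the example shift is invented):

```python
# Sauerbrey mass estimate for a rigid film on a 5 MHz quartz crystal.
# C is the mass sensitivity constant (~17.7 ng cm^-2 Hz^-1 at 5 MHz).
# The relation underestimates mass for soft, dissipative films, which is
# why dissipation monitoring (QCM-D) is needed for swollen polymers.
C_SAUERBREY = 17.7  # ng cm^-2 Hz^-1

def sauerbrey_mass(delta_f_hz: float, overtone: int = 3) -> float:
    """Areal mass change (ng/cm^2) from a frequency shift at a given overtone."""
    return -C_SAUERBREY * delta_f_hz / overtone

# An invented 30 Hz decrease at the 3rd overtone corresponds to mass uptake
# of roughly 177 ng/cm^2 (anion plus solvent entering the film).
print(sauerbrey_mass(-30.0, overtone=3))
```

In an EQCM-D experiment of the kind described, such mass changes would be tracked alongside the electrochemical charge to decouple ion transport from solvent swelling.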

  14. Ecosystem site description - an approach to quantify transport and accumulation of matter in a drainage area

    International Nuclear Information System (INIS)

    Soderback, B.; Kautsky, U.; Lindborg, T.

    2004-01-01

    The Swedish Nuclear Fuel and Waste Management Co. (SKB) presently performs site investigations at two sites in Sweden for a future repository of spent nuclear fuel. The safety assessment of a potential repository will, among other methods, use an approach where transport and accumulation of radionuclides is modelled by quantifying the pathways of carbon/nitrogen/phosphorous in the ecosystem. Since water is the most important medium for transportation of matter, the obvious delimitation of an area for quantification of matter transport is the drainage area. This study describes how site-specific data on surface water chemistry and hydrology, measured at several points along the flow paths of a drainage area, can be used to describe and quantify the flow of matter in terms of transport or accumulation. The approach was applied to the drainage area of Lake Eckarfjaerden, investigated as part of the site investigation programme at Forsmark in central Sweden. By using data from the inlet and outlet of the lake, together with data from the lake itself, we quantified the flow of matter in the drainage area, and also developed mass-balance budgets for important elements. The results were used to validate process-oriented terrestrial and aquatic ecosystem models, developed for the same drainage area in parallel to the present study. In conclusion, applying this approach will contribute substantially to our understanding of the processes controlling transport and accumulation of matter in a drainage area, and thereby reduce the uncertainties in estimating radionuclide flow and consequences to humans and the environment. (author)
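
The inlet/outlet budgeting described in this record reduces to a simple annual mass balance: the load retained in the lake is the inflowing load minus the outflowing load. A sketch with invented discharges and concentrations (the unit conversion relies on mg/L being identical to g/m³):

```python
# Simple annual element budget for a lake from inlet/outlet monitoring.
# load (g/s) = discharge (m^3/s) * concentration (mg/L), since mg/L == g/m^3.
SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_load_tonnes(discharge_m3s: float, conc_mg_per_l: float) -> float:
    return discharge_m3s * conc_mg_per_l * SECONDS_PER_YEAR / 1e6

# Hypothetical annual means for one element at the lake inlet and outlet.
load_in = annual_load_tonnes(0.50, 1.2)    # tonnes/year entering the lake
load_out = annual_load_tonnes(0.55, 0.9)   # tonnes/year leaving the lake

retained = load_in - load_out              # accumulation within the lake
print(f"in={load_in:.1f} t, out={load_out:.1f} t, retained={retained:.1f} t")
```

A positive retained load indicates net accumulation in the lake, the quantity that such budgets use to constrain where transported matter (and, by analogy, radionuclides) ends up.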

  15. Neutrino mass matrix

    International Nuclear Information System (INIS)

    Strobel, E.L.

    1985-01-01

    Given the many conflicting experimental results, examination is made of the neutrino mass matrix in order to determine possible masses and mixings. It is assumed that the Dirac mass matrix for the electron, muon, and tau neutrinos is similar in form to those of the quarks and charged leptons, and that the smallness of the observed neutrino masses results from the Gell-Mann-Ramond-Slansky mechanism. Analysis of masses and mixings for the neutrinos is performed using general structures for the Majorana mass matrix. It is shown that if certain tentative experimental results concerning the neutrino masses and mixing angles are confirmed, significant limitations may be placed on the Majorana mass matrix. The most satisfactory simple assumption concerning the Majorana mass matrix is that it is approximately proportional to the Dirac mass matrix. A very recent experimental neutrino mass result and its implications are discussed. Some general properties of matrices with structure similar to the Dirac mass matrices are discussed.

  16. Quantifying the environmental impact of particulate deposition from dry unpaved roadways

    Energy Technology Data Exchange (ETDEWEB)

    Becker, D.L.

    1979-01-01

    Airborne dust is the air pollutant most frequently observed to exceed National Ambient Air Quality Standards in rural areas. This pollutant (also referred to as suspended particulates) may originate from point or nonpoint sources (e.g., large areas of bare soil or pollen-producing vegetation). Most sources of atmospheric particulates, whether natural or anthropogenic, are difficult to quantify by means of a source strength (i.e., mass of particulates emitted per unit time). A numerical model was developed for calculating the source strength and quantifying the atmospheric transport and deposition of dust generated on unpaved roadways. This model satisfies the second-order differential equation for the diffusion process and also the equation of mass conservation. Input to the model includes meteorological variables, surface roughness characteristics, and the size distribution and suspended particulate concentration of dust as sampled downwind of an unpaved roadway. By using predetermined tolerance levels of airborne concentrations or tolerance levels of deposition, maximum allowable vehicular traffic volume can be established. The model also may be used to estimate reduction in photosynthesis resulting from fugitive dust from point or line sources. The contribution to sedimentation in aquatic bodies resulting from airborne particulates also may be assessed with this model.

  17. Mass Transport within Soils

    Energy Technology Data Exchange (ETDEWEB)

    McKone, Thomas E.

    2009-03-01

    Contaminants in soil can impact human health and the environment through a complex web of interactions. Soils exist where the atmosphere, hydrosphere, geosphere, and biosphere converge. Soil is the thin outer zone of the earth's crust that supports rooted plants and is the product of climate and living organisms acting on rock. A true soil is a mixture of air, water, mineral, and organic components. The relative proportions of these components determine the value of the soil for agricultural and for other human uses. These proportions also determine, to a large extent, how a substance added to soil is transported and/or transformed within the soil (Sposito, 2004). In mass-balance models, soil compartments play a major role, functioning both as reservoirs and as the principal media for transport among air, vegetation, surface water, deeper soil, and ground water (Mackay, 2001). Quantifying the mass transport of chemicals within soil and between soil and atmosphere is important for understanding the role soil plays in controlling fate, transport, and exposure to multimedia pollutants. Soils are characteristically heterogeneous. A trench dug into soil typically reveals several horizontal layers having different colors and textures. As illustrated in Figure 1, these multiple layers are often divided into three major horizons: (1) the A horizon, which encompasses the root zone and contains a high concentration of organic matter; (2) the B horizon, which is unsaturated, lies below the roots of most plants, and contains a much lower organic carbon content; and (3) the C horizon, which is the unsaturated zone of weathered parent rock consisting of bedrock, alluvial material, glacial material, and/or soil of an earlier geological period. Below these three horizons lies the saturated zone - a zone that encompasses the area below ground surface in which all interconnected openings within the geologic media are completely filled with water. Similarly to the unsaturated

  18. A systematic review of methods for quantifying serum testosterone in patients with prostate cancer who underwent castration.

    Science.gov (United States)

    Comas, I; Ferrer, R; Planas, J; Celma, A; Regis, L; Morote, J

    2018-03-01

    The clinical practice guidelines recommend measuring serum testosterone in patients with prostate cancer (PC) who undergo castration, and the serum testosterone concentration should remain below the castration level. The use of immunoassays (IA) has become widespread, although their metrological characteristics do not seem appropriate for quantifying low testosterone concentrations. The objective of this review is to analyse the methods for quantifying testosterone and to establish whether there is scientific evidence that justifies measuring it in patients with PC who undergo castration through liquid chromatography coupled with tandem mass spectrometry (LC-MSMS). We performed a search in PubMed with the following MeSH terms: measurement, testosterone, androgen suppression and prostate cancer. We selected 12 studies that compared the metrological characteristics of various methods for quantifying serum testosterone against MS detection methods. IAs are standard tools for measuring testosterone levels; however, there is evidence that IAs lack accuracy and precision for quantifying low concentrations. Most chemiluminescent IAs overestimate the concentration, especially below 100 ng/dL. The procedures that use LC-MSMS have an adequate lower quantification limit and proper accuracy and precision. We found no specific evidence in patients with PC who underwent castration. LC-MSMS is the appropriate method for quantifying low serum testosterone concentrations. We need to define the level of castration with this method and the optimal level related to better progression of the disease. Copyright © 2017 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  19. Quantifying the perceived risks associated with nuclear energy issues

    International Nuclear Information System (INIS)

    Sandquist, G.M.

    2004-01-01

    A mathematical model is presented for quantifying and assessing perceived risks in an empirical manner. The analytical model provides for the identification and assignment of any number of quantifiable risk perception factors that can be incorporated within standard risk methodology. The set of risk perception factors used to demonstrate the model are those that have been identified by social and behavioural scientists as principal factors influencing people in their perception of risks associated with major technical issues. These same risk factors are commonly associated with nuclear energy issues. A rational means is proposed for determining and quantifying these risk factors for a given application. The model should contribute to improved understanding of the basis and logic of public risk perception and provide practical and effective means for addressing perceived risks when they arise over important technical issues and projects. (author)
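    A weighted aggregation of quantifiable perception factors, of the general kind the abstract describes, might be sketched as follows (the factor names, weights, and scores are invented for illustration; the paper's actual model is not reproduced here):

```python
def perceived_risk(technical_risk: float, factors: dict) -> float:
    """Scale a technically estimated risk by perception factors.

    Each factor maps a name to (weight, score in [0, 1]); the multiplier
    grows with qualities such as dread, unfamiliarity, and involuntariness
    that behavioural research associates with elevated risk perception.
    """
    multiplier = 1.0 + sum(w * s for w, s in factors.values())
    return technical_risk * multiplier

# Hypothetical perception factors for a nuclear energy issue:
factors = {
    "dread":           (2.0, 0.9),
    "unfamiliarity":   (1.0, 0.7),
    "involuntariness": (1.5, 0.5),
}
risk = perceived_risk(1e-6, factors)  # annual technical risk, perception-adjusted
```

    The point of such a form is that the perception factors enter standard risk methodology as explicit, adjustable multipliers rather than as unquantified sentiment.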

  20. Quantifying the value of E and P technology

    International Nuclear Information System (INIS)

    Heinemann, R.F.; Donlon, W.P.; Hoefner, M.L.

    1996-01-01

    A quantitative value-to-cost analysis was performed for the upstream technology portfolio of Mobil Oil for the period 1993 to 1998, by quantifying the cost of developing and delivering various technologies, including the net present value from technologies applied to thirty major assets. The value captured was classified into four general categories: (1) reduced capital costs, (2) reduced operating costs, (3) increased hydrocarbon production, and (4) increased proven reserves. The methodology used in quantifying the value-to-cost of upstream technologies and the results of asset analysis were described, with examples of value of technology to specific assets. A method to incorporate strategic considerations and business alignment to set overall program priorities was also discussed. Identifying and quantifying specific cases of technology application on an asset by asset basis was considered to be the principal advantage of using this method. figs

  1. Galaxy Masses : A Review

    NARCIS (Netherlands)

    Courteau, Stephane; Cappellari, Michele; Jong, Roelof S. de; Dutton, Aaron A.; Koopmans, L.V.E.

    2013-01-01

    Galaxy masses play a fundamental role in our understanding of structure formation models. This review addresses the variety and reliability of mass estimators that pertain to stars, gas, and dark matter. The different sections on masses from stellar populations, dynamical masses of gas-rich and

  2. Fourier transform ion cyclotron resonance mass spectrometry

    Science.gov (United States)

    Marshall, Alan G.

    1998-06-01

    As for Fourier transform infrared (FT-IR) interferometry and nuclear magnetic resonance (NMR) spectroscopy, the introduction of pulsed Fourier transform techniques revolutionized ion cyclotron resonance mass spectrometry: increased speed (factor of 10,000), increased sensitivity (factor of 100), increased mass resolution (factor of 10,000, an improvement not shared by the introduction of FT techniques to IR or NMR spectroscopy), increased mass range (factor of 500), and automated operation. FT-ICR mass spectrometry is the most versatile technique for unscrambling and quantifying ion-molecule reaction kinetics and equilibria in the absence of solvent (i.e., the gas phase). In addition, FT-ICR MS has the following analytically important features: speed (~1 second per spectrum); ultrahigh mass resolution and ultrahigh mass accuracy for analysis of mixtures and polymers; attomole sensitivity; MS^n with one spectrometer, including two-dimensional FT/FT-ICR/MS; positive and/or negative ions; multiple ion sources (especially MALDI and electrospray); biomolecular molecular weight and sequencing; LC/MS; and single-molecule detection up to 10^8 Dalton. Here, some basic features and recent developments of FT-ICR mass spectrometry are reviewed, with applications ranging from crude oil to molecular biology.
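    ICR converts mass into an observable frequency through the cyclotron relation f = qB/(2πm), which is why the technique inherits the precision of frequency measurement. A minimal sketch of that conversion (the 7 T field and 500 Da ion are illustrative values, not figures from the article):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # unified atomic mass unit, kg

def cyclotron_freq(mz: float, b_field: float) -> float:
    """Unperturbed cyclotron frequency (Hz) of an ion with the given m/z
    (Da per elementary charge) in a magnetic field b_field (tesla)."""
    mass_kg = mz * AMU
    return E_CHARGE * b_field / (2 * math.pi * mass_kg)

def mz_from_freq(freq_hz: float, b_field: float) -> float:
    """Invert f = qB / (2*pi*m) to recover m/z from a measured frequency."""
    return E_CHARGE * b_field / (2 * math.pi * freq_hz * AMU)

f = cyclotron_freq(500.0, 7.0)   # singly charged 500 Da ion in a 7 T magnet
```

    In practice the detected image-current frequency is shifted slightly by trapping and space-charge fields, which calibration corrects for; the sketch shows only the ideal relation.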

  3. Ventilation in Sewers Quantified by Measurements of CO2

    DEFF Research Database (Denmark)

    Fuglsang, Emil Dietz; Vollertsen, Jes; Nielsen, Asbjørn Haaning

    2012-01-01

    Understanding and quantifying ventilation in sewer systems is a prerequisite to predict transport of odorous and corrosive gases within the system as well as their interaction with the urban atmosphere. This paper studies ventilation in sewer systems quantified by measurements of the naturally occurring compound CO2. Most often Danish wastewater is supersaturated with CO2 and hence a potential for stripping is present. A novel model was built based on the kinetics behind the stripping process. It was applied to simulate ventilation rates from field measurements of wastewater temperature, p...

  4. The Origin of Mass

    OpenAIRE

    森岡, 達史

    2013-01-01

    The quark-lepton mass problem and the ideas of mass protection are reviewed. The hierarchy problem and suggestions for its resolution, including Little Higgs models, are discussed. The Multiple Point Principle is introduced and used within the Standard Model to predict the top quark and Higgs particle masses. Mass matrix ansätze are considered; in particular we discuss the lightest family mass generation model, in which all the quark mixing angles are successfully expressed in terms of si...

  5. Heavy quark masses

    Science.gov (United States)

    Testa, Massimo

    1990-01-01

    In the large quark mass limit, an argument which identifies the mass of the heavy-light pseudoscalar or scalar bound state with the renormalized mass of the heavy quark is given. The following equation is discussed: m_Q = m_B, where m_Q and m_B are respectively the mass of the heavy quark and the mass of the pseudoscalar bound state.
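    As a hedged sketch of why such an identification holds (written here in the standard heavy-quark expansion language, not taken from the paper itself):

```latex
m_B \;=\; m_Q \;+\; \bar{\Lambda} \;+\; \mathcal{O}\!\left(\frac{1}{m_Q}\right),
\qquad\text{so that}\qquad
\lim_{m_Q \to \infty} \frac{m_B}{m_Q} \;=\; 1 ,
```

    where \bar{\Lambda} denotes the energy of the light degrees of freedom bound to the heavy quark; the relation m_Q = m_B is then exact up to corrections that vanish in the large-mass limit.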

  6. Neutrino masses and oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, A Yu

    1996-11-01

    New effects related to refraction of neutrinos in different media are reviewed and implications of the effects for neutrino mass and mixing are discussed. Patterns of neutrino masses and mixing implied by existing hints/bounds are described. Recent results on neutrino mass generation are presented. They include neutrino masses in SO(10) GUTs and models with anomalous U(1), generation of neutrino mass via neutrino-neutralino mixing, and models of sterile neutrinos. (author). 95 refs, 9 figs.

  7. Quantifying moisture transport in cementitious materials using neutron radiography

    Science.gov (United States)

    Lucero, Catherine L.

    A portion of the concrete pavements in the US have recently been observed to have premature joint deterioration. This damage is caused in part by the ingress of fluids, like water, salt water, or deicing salts. The ingress of these fluids can damage concrete when they freeze and expand or can react with the cementitious matrix causing damage. To determine the quality of concrete for assessing potential service life it is often necessary to measure the rate of fluid ingress, or sorptivity. Neutron imaging is a powerful method for quantifying fluid penetration since it can describe where water has penetrated, how quickly it has penetrated and the volume of water in the concrete or mortar. Neutrons are sensitive to light atoms such as hydrogen and thus clearly detect water at high spatial and temporal resolution. It can be used to detect small changes in moisture content and is ideal for monitoring wetting and drying in mortar exposed to various fluids. This study aimed at developing a method to accurately estimate moisture content in mortar. The common practice is to image the material dry as a reference before exposing to fluid and normalizing subsequent images to the reference. The volume of water can then be computed using the Beer-Lambert law. This method can be limiting because it requires exact image alignment between the reference image and all subsequent images. A model of neutron attenuation in a multi-phase cementitious composite was developed to be used in cases where a reference image is not available. The attenuation coefficients for water, un-hydrated cement, and sand were directly calculated from the neutron images. The attenuation coefficient for the hydration products was then back-calculated. The model can estimate the degree of saturation in a mortar with known mixture proportions without using a reference image for calculation. 
Absorption in mortars exposed to various fluids (i.e., deionized water and calcium chloride solutions) was also investigated
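    The reference-image normalization described above can be sketched as follows. This is a simplified illustration of the Beer-Lambert step only, assuming a uniform beam and a single attenuation coefficient for water; the coefficient value and image data are hypothetical, and the study's reference-free multi-phase model is not reproduced:

```python
import numpy as np

MU_WATER = 0.385  # assumed neutron attenuation coefficient of water, 1/mm (illustrative)

def water_thickness(i_wet: np.ndarray, i_dry: np.ndarray,
                    mu_w: float = MU_WATER) -> np.ndarray:
    """Per-pixel equivalent water thickness (mm) via the Beer-Lambert law.

    I_wet = I_dry * exp(-mu_w * t_w)  =>  t_w = -ln(I_wet / I_dry) / mu_w
    The dry image is the reference, so it must be exactly aligned with the
    wet image -- the limitation that motivates a reference-free model.
    """
    ratio = np.clip(i_wet / i_dry, 1e-12, None)  # guard against zero counts
    return -np.log(ratio) / mu_w

i_dry = np.full((4, 4), 1000.0)            # counts through the dry specimen
i_wet = i_dry * np.exp(-MU_WATER * 2.0)    # synthetic image: 2 mm of water everywhere
t = water_thickness(i_wet, i_dry)
```

    Summing t over pixels (times pixel area) gives the absorbed water volume, which is how sorptivity curves are built from an image time series.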

  8. Seasonal spreading of the Persian Gulf water mass in the Arabian Sea

    Digital Repository Service at National Institute of Oceanography (India)

    Prasad, T.G.; Ikeda, M.; PrasannaKumar, S.

    The characteristics of the subsurface salinity maximum associated with the Persian Gulf Water mass (PGW) are used to quantify the spreading and mixing of PGW in the thermocline of the Arabian Sea based on a bimonthly climatology of temperature...

  9. Alienation, Mass Society and Mass Culture.

    Science.gov (United States)

    Dam, Hari N.

    This monograph examines the nature of alienation in mass society and mass culture. Conceptually based on the "Gemeinschaft-Gesellschaft" paradigm of sociologist Ferdinand Tonnies, discussion traces the concept of alienation as it appears in the philosophies of Hegel, Marx, Kierkegaard, Sartre, and others. Dwight Macdonald's "A Theory of Mass…

  10. Herbage mineral nutrition indexed as tools for rapid mineral status

    African Journals Online (AJOL)

    Administrator

    mineral indices were calculated from chemical analysis with a view to generate relevant fertilisation recommenda- tions. Although the dry .... P, and K established in temperate climate (Blanfort ..... like rotational grazing rhythms or stocking rates.

  11. Estimating seasonal herbage production of a semi-arid grassland ...

    African Journals Online (AJOL)

    The relation between above-ground phytomass production and three independent variables, namely, seasonal rainfall, evapotranspiration (Et) and veld condition, were investigated using fourteen years' data (1977-1991) from the dry Themeda-Cymbopogon grassveld of the central Orange Free State. The data showed that ...

  12. Forage herbs improve mineral composition of grassland herbage

    DEFF Research Database (Denmark)

    Pirhofer-Walzl, Karin; Søegaard, Karen; Jensen, Henning Høgh

    2011-01-01

    there is limited information about mineral concentrations in forage herbs. To determine whether herbs have greater macro- and micromineral concentrations than forage legumes and grasses, we conducted a 2-year experiment on a loamy-sand site in Denmark sown with a multi-species mixture comprised of three functional...

  13. Herbage productivity of the Winneba plains of Ghana | Fleischer ...

    African Journals Online (AJOL)

    The biomass productivity of the Winneba plains of Ghana was measured between January 1990 and February 1992. Ten sampling sites were chosen for the study. An area of 5.0 m W 5.0 m was demarcated and within it an area of 1.0 m W 1.0 m was harvested at monthly intervals, clipped by means of sickle at 5 cm above ...

  14. GROWTH AND HERBAGE OF TELFAIRIA OCCIDENTALIS (HOOK F).

    African Journals Online (AJOL)

    DR. AMINU

    2013-06-01

    Jun 1, 2013 ... occidentalis. INTRODUCTION ... maintenance of motor and internal combustion engines. ... polluted soil caused stunted growth in plant and the ... productive of soil polluted with spent engine oil and .... in total N and exchangeable K and moderate in ... rise in the level of heavy metal concentrations is in.

  15. Herbage mineral nutrition indexed as tools for rapid mineral status ...

    African Journals Online (AJOL)

    on scientific data. Fertilisation is one of the potential options to improve pasture management as indicated by findings of this study. This is useful evidence-based information that could be incorporated in extension packages and resource materials for dissemination and subsequent adoption by livestock farming ...

  16. Simulation of water use and herbage growth in arid regions

    NARCIS (Netherlands)

    Keulen, van H.

    1975-01-01

    The arid and semi-arid regions of the world, totalling about 30% of the land surface of the earth, are predominantly used for extensive grazing, as low and erratic rainfall presents too high a risk for arable farming. The population that can be sustained by the animal products -meat, milk or

  17. Herbage availability as a stress factor on grazed Coastcross II ...

    African Journals Online (AJOL)

    South African Journal of Animal Science. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 13, No 1 (1983) >. Log in or Register to get access to full text downloads.

  18. Produção de forragem e produção animal em pastagem com duas disponibilidades de forragem associadas ou não à suplementação energética Effects of forage availability and energy supplementation on herbage accumulation rate and animal yield

    Directory of Open Access Journals (Sweden)

    Alcides Pilau

    2005-08-01

    Full Text Available In this experiment, the effect of two forage availabilities, 1,200 and 1,500 kg/ha of dry matter (DM), and of energy supplementation on forage production and animal production was evaluated on oat (Avena strigosa Schreb) plus Italian ryegrass (Lolium multiflorum Lam.) pasture. Ninety Charolais and Charolais x Nellore crossbred heifers, with initial live weight (LW) of 164 kg at the start of grazing, were assigned to the following combinations: LFA - low forage availability; HFA - high forage availability; LFAS - low forage availability plus energy supplementation; HFAS - high forage availability plus energy supplementation. The supplement was ground sorghum grain fed at 0.7% of LW. The variables studied were forage production (FP), stocking rate (SR), and live weight gain per area (LWG). FP was influenced neither by pasture management nor by supplementation; the estimated daily forage accumulation was 45.53 kg/ha of DM. The stocking rate under HFA, averaging 862 kg/ha of LW, varied little over the grazing cycle, whereas under HFAS, LFA, and LFAS it was highly variable and, from 26 August onward, was highest under LFAS. Grazing to 1,200 kg/ha of DM did not affect LWG, while supplementation increased LWG by 59.4% relative to pasture alone.

  19. Disponibilidade, composição bromatológica e consumo de matéria seca em pastagem consorciada de Brachiaria decumbens com Stylosanthes guianensis Herbage availability, chemical composition and dry matter intake in mixed pasture of Brachiaria decumbens with Stylosanthes guianensis

    Directory of Open Access Journals (Sweden)

    Luiz Januário Magalhães Aroeira

    2005-04-01

    Full Text Available The objective of this work was to evaluate herbage availability, chemical composition, dry matter intake, and the proportions of grass and legume in the diet of crossbred Holstein x Zebu cows on mixed pasture of Brachiaria decumbens cv. Basilisk, Stylosanthes guianensis var. vulgaris cv. Mineirão, and tree legumes. To estimate fecal output, 10 g of chromium oxide per cow per day was dosed for ten consecutive days. Extrusa samples were used to determine chemical composition and in vitro dry matter digestibility. Dry matter availability of B. decumbens varied with climatic conditions, while that of S. guianensis decreased linearly over the experimental period. Dry matter intake was highest in May 2001 (1.9% of live weight) and did not differ among the other months (1.5% of live weight). The low dry matter intakes reflected the high neutral detergent fibre content (70.2% to 79.4%) and low in vitro dry matter digestibility (42.1% to 48.0%) of the forage. Legume intake ranged from 8.7% to 24.1% of the total intake. Dry matter intake was directly related to the percentage of legume in the pasture, which highlights the potential of mixed pastures for dairy cows.

  20. A user-oriented and quantifiable approach to irrigation design.

    NARCIS (Netherlands)

    Baars, E.; Bastiaansen, A.P.M.; Menenti, M.

    1995-01-01

    A new user-oriented approach is presented to apply marketing research techniques to quantify perceptions, preferences and utility values of farmers. This approach was applied to design an improved water distribution method for an irrigation scheme in Mendoza, Argentina. The approach comprises two

  1. Quantifying the CO2 permit price sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Gruell, Georg; Kiesel, Ruediger [Duisburg-Essen Univ., Essen (Germany). Inst. of Energy Trading and Financial Services

    2012-06-15

    Equilibrium models have been widely used in the literature with the aim of showing theoretical properties of emissions trading schemes. This paper applies equilibrium models to empirically study permit prices and to quantify the permit price sensitivity. In particular, we demonstrate that emission trading schemes both with and without banking are inherently prone to price jumps. (orig.)

  2. Quantifying Creative Destruction Entrepreneurship and Productivity in New Zealand

    OpenAIRE

    John McMillan

    2005-01-01

    This paper (a) provides a framework for quantifying any economy’s flexibility, and (b) reviews the evidence on New Zealand firms’ birth, growth and death. The data indicate that, by and large, the labour market and the financial market are doing their job.

  3. Comparing methods to quantify experimental transmission of infectious agents

    NARCIS (Netherlands)

    Velthuis, A.G.J.; Jong, de M.C.M.; Bree, de J.

    2007-01-01

    Transmission of an infectious agent can be quantified from experimental data using the transient-state (TS) algorithm. The TS algorithm is based on the stochastic SIR model and provides a time-dependent probability distribution over the number of infected individuals during an epidemic, with no need

  4. Quantifying Solar Cell Cracks in Photovoltaic Modules by Electroluminescence Imaging

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Hacke, Peter; Sera, Dezso

    2015-01-01

    This article proposes a method for quantifying the percentage of partially and totally disconnected solar cell cracks by analyzing electroluminescence images of the photovoltaic module taken under high- and low-current forward bias. The method is based on the analysis of the module’s electrolumin...

  5. Quantifying levels of animal activity using camera trap data

    NARCIS (Netherlands)

    Rowcliffe, J.M.; Kays, R.; Kranstauber, B.; Carbone, C.; Jansen, P.A.

    2014-01-01

    1. Activity level (the proportion of time that animals spend active) is a behavioural and ecological metric that can provide an indicator of energetics, foraging effort and exposure to risk. However, activity level is poorly known for free-living animals because it is difficult to quantify activity

  6. Information on Quantifiers and Argument Structure in English Learner's Dictionaries.

    Science.gov (United States)

    Lee, Thomas Hun-tak

    1993-01-01

    Lexicographers have been arguing for the inclusion of abstract and complex grammatical information in dictionaries. This paper examines the extent to which information about quantifiers and the argument structure of verbs is encoded in English learner's dictionaries. The Oxford Advanced Learner's Dictionary (1989), the Longman Dictionary of…

  7. Quantifying trail erosion and stream sedimentation with sediment tracers

    Science.gov (United States)

    Mark S. Riedel

    2006-01-01

    Abstract--The impacts of forest disturbance and roads on stream sedimentation have been rigorously investigated and documented. While historical research on turbidity and suspended sediments has been thorough, studies of stream bed sedimentation have typically relied on semi-quantitative measures such as embeddedness or marginal pool depth. To directly quantify the...

  8. Coupling and quantifying resilience and sustainability in facilities management

    DEFF Research Database (Denmark)

    Cox, Rimante Andrasiunaite; Nielsen, Susanne Balslev; Rode, Carsten

    2015-01-01

    Purpose – The purpose of this paper is to consider how to couple and quantify resilience and sustainability, where sustainability refers to not only environmental impact, but also economic and social impacts. The way a particular function of a building is provisioned may have significant repercussions beyond just resilience. The goal is to develop a decision support tool for facilities managers. Design/methodology/approach – A risk framework is used to quantify both resilience and sustainability in monetary terms. The risk framework allows coupling resilience and sustainability, so that the provisioning of a particular building can be investigated with consideration of functional, environmental, economic and, possibly, social dimensions. Findings – The method of coupling and quantifying resilience and sustainability (CQRS) is illustrated with a simple example that highlights how very different...
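    A risk framework that monetizes both resilience and sustainability can be sketched as an expected-annual-cost comparison between provisioning options; all hazard probabilities and costs below are invented for illustration and the paper's CQRS method is not reproduced:

```python
def expected_annual_cost(hazards, sustainability_cost):
    """Couple resilience and sustainability in one monetary figure.

    hazards: list of (annual probability, consequence cost) pairs;
    their expected sum is the resilience term. sustainability_cost is
    the annualized environmental/economic/social cost of provisioning.
    """
    resilience_cost = sum(p * consequence for p, consequence in hazards)
    return resilience_cost + sustainability_cost

# Two alternative ways to provision the same building function:
option_a = expected_annual_cost([(0.02, 500_000), (0.10, 20_000)],
                                sustainability_cost=8_000)
option_b = expected_annual_cost([(0.005, 500_000), (0.10, 5_000)],
                                sustainability_cost=15_000)
best = min(("A", option_a), ("B", option_b), key=lambda kv: kv[1])
```

    Expressing both dimensions in the same monetary unit is what makes the trade-off (here, hardening against hazards versus a higher sustainability burden) directly comparable for a facilities manager.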

  9. Quantifying Time Dependent Moisture Storage and Transport Properties

    DEFF Research Database (Denmark)

    Peuhkuri, Ruut H

    2003-01-01

    This paper describes an experimental and numerical approach to quantify the time dependence of sorption mechanisms for some hygroscopic building - mostly insulation - materials. Some investigations of retarded sorption and non-Fickian phenomena, mostly on wood, have given inspiration to the present...

  10. A framework for quantifying net benefits of alternative prognostic models

    NARCIS (Netherlands)

    Rapsomaniki, E.; White, I.R.; Wood, A.M.; Thompson, S.G.; Feskens, E.J.M.; Kromhout, D.

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit)

  11. Using multiple linear regression techniques to quantify carbon ...

    African Journals Online (AJOL)

    Fallow ecosystems provide a significant carbon stock that can be quantified for inclusion in the accounts of global carbon budgets. Process and statistical models of productivity, though useful, are often technically rigid as the conditions for their application are not easy to satisfy. Multiple regression techniques have been ...

  12. Quantifying Stakeholder Values of VET Provision in the Netherlands

    Science.gov (United States)

    van der Sluis, Margriet E.; Reezigt, Gerry J.; Borghans, Lex

    2014-01-01

    It is well-known that the quality of vocational education and training (VET) depends on how well a given programme aligns with the values and interests of its stakeholders, but it is less well-known what these values and interests are and to what extent they are shared across different groups of stakeholders. We use vignettes to quantify the…

  13. Cross-linguistic patterns in the acquisition of quantifiers

    Science.gov (United States)

    Cummins, Chris; Gavarró, Anna; Kuvač Kraljević, Jelena; Hrzica, Gordana; Grohmann, Kleanthes K.; Skordi, Athina; Jensen de López, Kristine; Sundahl, Lone; van Hout, Angeliek; Hollebrandse, Bart; Overweg, Jessica; Faber, Myrthe; van Koert, Margreet; Smith, Nafsika; Vija, Maigi; Zupping, Sirli; Kunnari, Sari; Morisseau, Tiffany; Rusieshvili, Manana; Yatsushiro, Kazuko; Fengler, Anja; Varlokosta, Spyridoula; Konstantzou, Katerina; Farby, Shira; Guasti, Maria Teresa; Vernice, Mirta; Okabe, Reiko; Isobe, Miwa; Crosthwaite, Peter; Hong, Yoonjee; Balčiūnienė, Ingrida; Ahmad Nizar, Yanti Marina; Grech, Helen; Gatt, Daniela; Cheong, Win Nee; Asbjørnsen, Arve; Torkildsen, Janne von Koss; Haman, Ewa; Miękisz, Aneta; Gagarina, Natalia; Puzanova, Julia; Anđelković, Darinka; Savić, Maja; Jošić, Smiljana; Slančová, Daniela; Kapalková, Svetlana; Barberán, Tania; Özge, Duygu; Hassan, Saima; Chan, Cecilia Yuet Hung; Okubo, Tomoya; van der Lely, Heather; Sauerland, Uli; Noveck, Ira

    2016-01-01

    Learners of most languages are faced with the task of acquiring words to talk about number and quantity. Much is known about the order of acquisition of number words as well as the cognitive and perceptual systems and cultural practices that shape it. Substantially less is known about the acquisition of quantifiers. Here, we consider the extent to which systems and practices that support number word acquisition can be applied to quantifier acquisition and conclude that the two domains are largely distinct in this respect. Consequently, we hypothesize that the acquisition of quantifiers is constrained by a set of factors related to each quantifier’s specific meaning. We investigate competence with the expressions for “all,” “none,” “some,” “some…not,” and “most” in 31 languages, representing 11 language types, by testing 768 5-y-old children and 536 adults. We found a cross-linguistically similar order of acquisition of quantifiers, explicable in terms of four factors relating to their meaning and use. In addition, exploratory analyses reveal that language- and learner-specific factors, such as negative concord and gender, are significant predictors of variation. PMID:27482119

  14. FRAGSTATS: spatial pattern analysis program for quantifying landscape structure.

    Science.gov (United States)

    Kevin McGarigal; Barbara J. Marks

    1995-01-01

    This report describes a program, FRAGSTATS, developed to quantify landscape structure. FRAGSTATS offers a comprehensive choice of landscape metrics and was designed to be as versatile as possible. The program is almost completely automated and thus requires little technical training. Two separate versions of FRAGSTATS exist: one for vector images and one for raster...

  15. Quantifying Spin Hall Angles from Spin Pumping : Experiments and Theory

    NARCIS (Netherlands)

    Mosendz, O.; Pearson, J.E.; Fradin, F.Y.; Bauer, G.E.W.; Bader, S.D.; Hoffmann, A.

    2010-01-01

    Spin Hall effects intermix spin and charge currents even in nonmagnetic materials and, therefore, ultimately may allow the use of spin transport without the need for ferromagnets. We show how spin Hall effects can be quantified by integrating Ni80Fe20|normal metal (N) bilayers into a coplanar

  16. Quantifying Effectiveness of Streambank Stabilization Practices on Cedar River, Nebraska

    Directory of Open Access Journals (Sweden)

    Naisargi Dave

    2017-11-01

    Full Text Available Excessive sediment is a major pollutant to surface waters worldwide. In some watersheds, streambanks are a significant source of this sediment, leading to the expenditure of billions of dollars in stabilization projects. Although costly streambank stabilization projects have been implemented worldwide, long-term monitoring to quantify their success is lacking. There is a critical need to document the long-term success of streambank restoration projects. The objectives of this research were to (1) quantify streambank retreat before and after the stabilization of 18 streambanks on the Cedar River in North Central Nebraska, USA; (2) assess the impact of a large flood event; and (3) determine the most cost-efficient stabilization practice. The stabilized streambanks included jetties (10), rock-toe protection (1), slope reduction/gravel bank (1), a retaining wall (1), rock vanes (2), and tree revetments (3). Streambank retreat and accumulation were quantified using aerial images from 1993 to 2016. Though streambank retreat has been significant throughout the study period, a breached dam in 2010 caused major flooding and streambank erosion on the Cedar River. This large-scale flood enabled us to quantify the effect of one extreme event and evaluate the effectiveness of the stabilized streambanks. With a 70% success rate, jetties were the most cost-efficient practice and yielded the most deposition. If minimal risk is unacceptable, a more costly yet immobile practice such as a gravel bank or retaining wall is recommended.
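    Retreat rates of the kind derived from repeat aerial imagery can be illustrated with a minimal sketch (the digitized areas, bank length, and cost are hypothetical, not data from the study):

```python
def retreat_rate(eroded_area_m2: float, bank_length_m: float, years: float) -> float:
    """Average lateral retreat (m/yr): the eroded planform area digitized
    between two image dates, normalized by bank length and time interval."""
    return eroded_area_m2 / (bank_length_m * years)

def cost_efficiency(cost: float, retreat_before: float, retreat_after: float) -> float:
    """Cost per unit reduction in retreat rate (lower is better)."""
    reduction = retreat_before - retreat_after
    if reduction <= 0:
        return float("inf")  # the practice did not reduce retreat
    return cost / reduction

# Hypothetical 150 m bank over two 4-year image intervals:
r_before = retreat_rate(eroded_area_m2=900.0, bank_length_m=150.0, years=4)   # pre-stabilization
r_after = retreat_rate(eroded_area_m2=120.0, bank_length_m=150.0, years=4)    # post-stabilization
score = cost_efficiency(cost=25_000.0, retreat_before=r_before, retreat_after=r_after)
```

    Computing such a score per stabilized bank is one way the practices (jetties, rock toes, revetments, and so on) can be ranked on cost efficiency.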

  17. Quantifying carbon stores and decomposition in dead wood: A review

    Science.gov (United States)

    Matthew B. Russell; Shawn Fraver; Tuomas Aakala; Jeffrey H. Gove; Christopher W. Woodall; Anthony W. D’Amato; Mark J. Ducey

    2015-01-01

    The amount and dynamics of forest dead wood (both standing and downed) has been quantified by a variety of approaches throughout the forest science and ecology literature. Differences in the sampling and quantification of dead wood can lead to differences in our understanding of forests and their role in the sequestration and emissions of CO2, as...

  18. Quantifying soil respiration at landscape scales. Chapter 11

    Science.gov (United States)

    John B. Bradford; Michael G. Ryan

    2008-01-01

Soil CO2 efflux, or soil respiration, represents a substantial component of carbon cycling in terrestrial ecosystems. Consequently, quantifying soil respiration over large areas and long time periods is an increasingly important goal. However, soil respiration rates vary dramatically in space and time in response to both environmental conditions...

  19. Lecture Note on Discrete Mathematics: Predicates and Quantifiers

    DEFF Research Database (Denmark)

    Nordbjerg, Finn Ebertsen

    2016-01-01

    This lecture note supplements the treatment of predicates and quantifiers given in standard textbooks on Discrete Mathematics (e.g.: [1]) and introduces the notation used in this course. We will present central concepts that are important, when predicate logic is used for specification...

  20. Quantifying the FIR interaction enhancement in paired galaxies

    International Nuclear Information System (INIS)

    Xu Cong; Sulentic, J.W.

    1990-01-01

    We studied the ''Catalogue of Isolated Pairs of Galaxies in the Northern Hemisphere'' by Karachentsev (1972) and a well matched comparison sample taken from the ''Catalogue of Isolated Galaxies'' by Karachentseva (1973) in order to quantify the enhanced FIR emission properties of interacting galaxies. 8 refs, 6 figs

  1. A Sustainability Initiative to Quantify Carbon Sequestration by Campus Trees

    Science.gov (United States)

    Cox, Helen M.

    2012-01-01

    Over 3,900 trees on a university campus were inventoried by an instructor-led team of geography undergraduates in order to quantify the carbon sequestration associated with biomass growth. The setting of the project is described, together with its logistics, methodology, outcomes, and benefits. This hands-on project provided a team of students…

  2. Designing a systematic landscape monitoring approach for quantifying ecosystem services

    Science.gov (United States)

    A key problem encountered early on by governments striving to incorporate the ecosystem services concept into decision making is quantifying ecosystem services across large landscapes. Basically, they are faced with determining what to measure, how to measure it and how to aggre...

  3. Challenges in quantifying biosphere-atmosphere exchange of nitrogen species

    DEFF Research Database (Denmark)

    Sutton, M.A.; Nemitz, E.; Erisman, J.W.

    2007-01-01

    Recent research in nitrogen exchange with the atmosphere has separated research communities according to N form. The integrated perspective needed to quantify the net effect of N on greenhouse-gas balance is being addressed by the NitroEurope Integrated Project (NEU). Recent advances have depende...

  4. Quantifying and mapping spatial variability in simulated forest plots

    Science.gov (United States)

    Gavin R. Corral; Harold E. Burkhart

    2016-01-01

We used computer simulations to test the efficacy of multivariate statistical methods to detect, quantify, and map spatial variability of forest stands. Simulated stands were developed of regularly spaced plantations of loblolly pine (Pinus taeda L.). We assumed no effects of competition or mortality, but random variability was added to individual tree characteristics...

  5. Quantifying Ladder Fuels: A New Approach Using LiDAR

    Science.gov (United States)

    Heather Kramer; Brandon Collins; Maggi Kelly; Scott Stephens

    2014-01-01

    We investigated the relationship between LiDAR and ladder fuels in the northern Sierra Nevada, California USA. Ladder fuels are often targeted in hazardous fuel reduction treatments due to their role in propagating fire from the forest floor to tree crowns. Despite their importance, ladder fuels are difficult to quantify. One common approach is to calculate canopy base...

  6. Quantifying a Negative: How Homeland Security Adds Value

    Science.gov (United States)

    2015-12-01

access to future victims. The Law Enforcement agency could then identify and quantify the value of future crimes. For example, if a serial killer is captured with evidence of the next victim or an established pattern of victimization, network theory could be used to identify the next

  7. A NEW METHOD TO QUANTIFY X-RAY SUBSTRUCTURES IN CLUSTERS OF GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Andrade-Santos, Felipe; Lima Neto, Gastao B.; Lagana, Tatiana F. [Departamento de Astronomia, Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Geofisica e Ciencias Atmosfericas, Rua do Matao 1226, Cidade Universitaria, 05508-090 Sao Paulo, SP (Brazil)

    2012-02-20

We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is done in a residual image that is the result of the subtraction of a surface brightness model, obtained by fitting a two-dimensional analytical model (β-model or Sérsic profile) with elliptical symmetry, from the X-ray image. Our method is applied to 34 clusters observed by the Chandra Space Telescope that are in the redshift range z in [0.02, 0.2] and have a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities, such as the mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high and low substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, the analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure-level clusters) are different (they present an offset, i.e., given a fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous), which is an important result for cosmological tests using the mass-luminosity relation to obtain the cluster mass function, since they rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
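The residual-image idea behind this method can be illustrated with a toy sketch: assume a known circular β-model, S(r) = S0 (1 + (r/rc)^2)^(0.5 − 3β), subtract it from an image, and summarize the positive residuals. All values and the simple "substructure level" statistic below are hypothetical; the actual method fits an elliptical β-model or Sérsic profile to the observation first.

```python
import numpy as np

def beta_model(r, s0, rc, beta):
    # Circular beta-model surface brightness profile
    return s0 * (1.0 + (r / rc) ** 2) ** (0.5 - 3.0 * beta)

def substructure_level(image, s0, rc, beta):
    # Subtract the smooth model and report the fraction of total
    # intensity carried by positive residuals (a toy statistic).
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(xx - nx / 2, yy - ny / 2)
    residual = image - beta_model(r, s0, rc, beta)
    return residual[residual > 0].sum() / image.sum()

# Smooth synthetic cluster: substructure level should be zero
ny = nx = 64
yy, xx = np.mgrid[0:ny, 0:nx]
r = np.hypot(xx - nx / 2, yy - ny / 2)
smooth = beta_model(r, s0=1.0, rc=8.0, beta=0.7)

# Same cluster with an off-centre blob added (a "substructure")
blob = 0.5 * np.exp(-((xx - 45) ** 2 + (yy - 20) ** 2) / 20.0)
level_smooth = substructure_level(smooth, 1.0, 8.0, 0.7)
level_blob = substructure_level(smooth + blob, 1.0, 8.0, 0.7)
```

A real analysis would also calibrate this statistic against Monte Carlo realizations of smooth clusters with the observed noise level.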

  8. Using Satellite Imagery to Quantify Water Quality Impacts and Recovery from Hurricane Harvey

    Science.gov (United States)

    Sobel, R. S.; Kiaghadi, A.; Rifai, H. S.

    2017-12-01

Record rainfall during Hurricane Harvey in the Houston-Galveston region generated record flows containing suspended sediment that was likely contaminated. Conventional water quality monitoring requires resource-intensive field campaigns and produces sparse datasets. In this study, satellite data were used to quantify suspended sediment (TSS) concentrations and mass within the region's estuary system and to estimate sediment deposition and transport. A conservative two-band, red-green empirical regression was developed from the Sentinel 2 satellite to calculate TSS concentrations and masses. The regression was calibrated with an R2 = 0.73 (n=28) and validated with an R2 = 0.75 (n=12) using 2016 & 2017 imagery. TSS concentrations four days, 14 days, and 44 days post-storm were compared with a reference condition three days before storm arrival. Results indicated that TSS concentrations were an average of 100% higher four days post-storm and 150% higher after 14 days; however, the average concentration on day 44 was only seven percent higher than the reference condition, suggesting the estuary system is approaching recovery to pre-storm conditions. Sediment masses were determined from the regressed concentrations and water volumes estimated from a bottom elevation grid combined with water surface elevations observed coincidently with the satellite image. While water volumes were only 13% higher on both day four and day 14 post-storm, sediment masses were 195% and 227% higher than the reference condition, respectively. By day 44, estuary sediment mass returned to just 2.9% above the reference load. From a mechanistic standpoint, the elevated TSS concentrations on day four indicated an advection-based regime due to stormwater runoff draining through the estuarine system. Sometime between days 14 and 44, however, transport conditions switched from advection-dominated to deposition-driven, as indicated by the near-normal TSS concentrations on day 44.
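A minimal sketch of the kind of empirical band-ratio regression described above, using made-up calibration pairs (the authors' actual Sentinel-2 band combination, coefficients, and regression form are not reproduced here):

```python
import numpy as np

# Hypothetical calibration: in-situ TSS samples (mg/L) paired with a
# red/green reflectance ratio from imagery (all numbers are synthetic).
rng = np.random.default_rng(42)
tss_true = rng.uniform(5, 200, 28)                        # n = 28 calibration points
ratio = 0.8 + 0.004 * tss_true + rng.normal(0, 0.02, 28)  # synthetic band ratio

# Ordinary least squares fit: TSS = b0 + b1 * (red/green)
A = np.column_stack([np.ones_like(ratio), ratio])
coef, *_ = np.linalg.lstsq(A, tss_true, rcond=None)
tss_pred = A @ coef
r2 = 1 - np.sum((tss_true - tss_pred) ** 2) / np.sum((tss_true - tss_true.mean()) ** 2)

# In the study, concentration maps are then converted to sediment mass by
# multiplying by per-pixel water volume (bathymetry + observed water level).
```

With the regression in hand, each satellite scene yields a TSS map, and mass follows from concentration times estimated water volume.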

  9. Comparison of three techniques for estimating the forage intake of lactating dairy cows on pasture.

    Science.gov (United States)

    Macoon, B; Sollenberger, L E; Moore, J E; Staples, C R; Fike, J H; Portier, K M

    2003-09-01

Quantifying DMI is necessary for estimation of nutrient consumption by ruminants, but it is inherently difficult on grazed pastures and even more so when supplements are fed. Our objectives were to compare three methods of estimating forage DMI (inference from animal performance, evaluation from fecal output using a pulse-dose marker, and estimation from herbage disappearance methods) and to identify the most useful approach or combination of approaches for estimating pasture intake by lactating dairy cows. During three continuous 28-d periods in the winter season, Holstein cows (Bos taurus; n = 32) grazed a cool-season grass or a cool-season grass-clover mixture at two stocking rates (SR; 5 vs. 2.5 cows/ha) and were fed two rates of concentrate supplementation (CS; 1 kg of concentrate [as-fed] per 2.5 or 3.5 kg of milk produced). Animal response data used in computations for the animal performance method were obtained from the latter 14 d of each period. For the pulse-dose marker method, chromium-mordanted fiber was used. Pasture sampling to determine herbage disappearance was done weekly throughout the study. Forage DMI estimated by the animal performance method differed among periods (P < 0.05) and was related to forage mass. The pulse-dose marker method generally provided greater estimates of forage DMI (as much as 11.0 kg/d more than the animal performance method) and was not correlated with the other methods. Estimates of forage DMI by the herbage disappearance method were correlated with the animal performance method. The differences between estimates from these two methods, ranging from -4.7 to 5.4 kg/d, were much smaller than their differences from the pulse-dose marker estimates. The results of this study suggest that, when appropriate for the research objectives, the animal performance or herbage disappearance methods may be useful and less costly alternatives to the pulse-dose method.

  10. Quantifying black carbon light absorption enhancement with a novel statistical approach

    Science.gov (United States)

    Wu, Cheng; Wu, Dui; Zhen Yu, Jian

    2018-01-01

    Black carbon (BC) particles in the atmosphere can absorb more light when coated by non-absorbing or weakly absorbing materials during atmospheric aging, due to the lensing effect. In this study, the light absorption enhancement factor, Eabs, was quantified using a 1-year measurement of mass absorption efficiency (MAE) in the Pearl River Delta region (PRD). A new approach for calculating primary MAE (MAEp), the key for Eabs estimation, is demonstrated using the minimum R squared (MRS) method, exploring the inherent source independency between BC and its coating materials. A unique feature of Eabs estimation with the MRS approach is its insensitivity to systematic biases in elemental carbon (EC) and σabs measurements. The annual average Eabs550 is found to be 1.50 ± 0.48 (±1 SD) in the PRD region, exhibiting a clear seasonal pattern with higher values in summer and lower in winter. Elevated Eabs in the summertime is likely associated with aged air masses, predominantly of marine origin, along with long-range transport of biomass-burning-influenced air masses from Southeast Asia. Core-shell Mie simulations along with measured Eabs and absorption Ångström exponent (AAE) constraints suggest that in the PRD, the coating materials are unlikely to be dominated by brown carbon and the coating thickness is higher in the rainy season than in the dry season.
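The minimum R squared (MRS) idea can be illustrated with synthetic data: if absorption contributed by coatings is statistically independent of EC, then the candidate primary MAE that minimizes R² between the residual absorption (σabs − MAEp·EC) and EC recovers MAEp, and the enhancement factor follows as Eabs = MAE/MAEp. This is a sketch of the principle with invented numbers, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
ec = rng.lognormal(0.0, 0.5, 365)        # daily EC, ug/m3 (synthetic)
mae_p_true = 7.0                         # "true" primary MAE, m2/g (assumed)
extra = rng.lognormal(0.5, 0.4, 365)     # coating-driven absorption, Mm-1,
                                         # assumed independent of EC
sigma_abs = mae_p_true * ec + extra      # measured absorption coefficient

def r_squared(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

# MRS scan: the candidate MAEp minimizing R^2 between the residual
# absorption and EC is taken as the primary MAE.
candidates = np.linspace(1, 15, 281)
r2 = [r_squared(sigma_abs - m * ec, ec) for m in candidates]
mae_p = candidates[int(np.argmin(r2))]

e_abs = (sigma_abs / ec) / mae_p         # per-sample enhancement factor
```

Note that multiplying σabs or EC by a constant bias rescales MAE and the recovered MAEp by the same factor, which is why Eabs is insensitive to such systematic biases.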

  11. We are not the 99 percent: quantifying asphericity in the distribution of Local Group satellites

    Science.gov (United States)

    Forero-Romero, Jaime E.; Arias, Verónica

    2018-05-01

    We use simulations to build an analytic probability distribution for the asphericity in the satellite distribution around Local Group (LG) type galaxies in the Lambda Cold Dark Matter (LCDM) paradigm. We use this distribution to estimate the atypicality of the satellite distributions in the LG even when the underlying simulations do not have enough systems fully resembling the LG in terms of its typical masses, separation and kinematics. We demonstrate the method using three different simulations (Illustris-1, Illustris-1-Dark and ELVIS) and a number of satellites ranging from 11 to 15. Detailed results differ greatly among the simulations suggesting a strong influence of the typical DM halo mass, the number of satellites and the simulated baryonic effects. However, there are three common trends. First, at most 2% of the pairs are expected to have satellite distributions with the same asphericity as the LG; second, at most 80% of the pairs have a halo with a satellite distribution as aspherical as in M31; and third, at most 4% of the pairs have a halo with satellite distribution as planar as in the MW. These quantitative results place the LG at the level of a 3σ outlier in the LCDM paradigm. We suggest that understanding the reasons for this atypicality requires quantifying the asphericity probability distribution as a function of halo mass and large scale environment. The approach presented here can facilitate that kind of study and other comparisons between different numerical setups and choices to study satellites around LG pairs in simulations.
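Flattening of a satellite distribution is commonly summarized by the axis ratios of its shape (second-moment) tensor; the sketch below uses that standard construction as an illustration (the paper's actual asphericity statistic may differ, and all positions here are synthetic):

```python
import numpy as np

def axis_ratios(pos):
    # Principal axes from the second-moment tensor of satellite positions;
    # c/a near 0 indicates a planar (flattened) distribution.
    pos = pos - pos.mean(axis=0)
    evals = np.linalg.eigvalsh(pos.T @ pos / len(pos))  # ascending order
    a, b, c = np.sqrt(evals[::-1])                      # a >= b >= c
    return c / a, b / a

rng = np.random.default_rng(7)
iso = rng.normal(size=(11, 3))            # isotropic cloud of 11 satellites
flat = iso * np.array([1.0, 1.0, 0.1])    # same cloud squashed along z
ca_iso, _ = axis_ratios(iso)
ca_flat, _ = axis_ratios(flat)
```

Computing c/a for many simulated hosts yields the probability distribution of asphericity against which an observed system (e.g., the MW's satellite plane) can be ranked.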

  12. Quantifying black carbon light absorption enhancement with a novel statistical approach

    Directory of Open Access Journals (Sweden)

    C. Wu

    2018-01-01

Black carbon (BC) particles in the atmosphere can absorb more light when coated by non-absorbing or weakly absorbing materials during atmospheric aging, due to the lensing effect. In this study, the light absorption enhancement factor, Eabs, was quantified using a 1-year measurement of mass absorption efficiency (MAE) in the Pearl River Delta region (PRD). A new approach for calculating primary MAE (MAEp), the key for Eabs estimation, is demonstrated using the minimum R squared (MRS) method, exploring the inherent source independency between BC and its coating materials. A unique feature of Eabs estimation with the MRS approach is its insensitivity to systematic biases in elemental carbon (EC) and σabs measurements. The annual average Eabs550 is found to be 1.50 ± 0.48 (±1 SD) in the PRD region, exhibiting a clear seasonal pattern with higher values in summer and lower in winter. Elevated Eabs in the summertime is likely associated with aged air masses, predominantly of marine origin, along with long-range transport of biomass-burning-influenced air masses from Southeast Asia. Core–shell Mie simulations along with measured Eabs and absorption Ångström exponent (AAE) constraints suggest that in the PRD, the coating materials are unlikely to be dominated by brown carbon and the coating thickness is higher in the rainy season than in the dry season.

  13. The benefits and risks of quantified relationship technologies : response to open peer commentaries on "the quantified relationship"

    NARCIS (Netherlands)

    Danaher, J.; Nyholm, S.R.; Earp, B.D.

    2018-01-01

    Our critics argue that quantified relationships (QR) will threaten privacy, undermine autonomy, reinforce problematic business models, and promote epistemic injustice. We do not deny these risks. But to determine the appropriate policy response, it will be necessary to assess their likelihood,

  14. Some mass measurement problems

    International Nuclear Information System (INIS)

    Merritt, J.S.

    1976-01-01

    Concerning the problem of determining the thickness of a target, an uncomplicated approach is to measure its mass and area and take the quotient. This paper examines the mass measurement aspect of such an approach. (author)

  15. Biodiesel Mass Transit Demonstration

    Science.gov (United States)

    2010-04-01

    The Biodiesel Mass Transit Demonstration report is intended for mass transit decision makers and fleet managers considering biodiesel use. This is the final report for the demonstration project implemented by the National Biodiesel Board under a gran...

  16. Reconstruction of specific mass balance for glaciers in Western ...

    Indian Academy of Sciences (India)

    Seasonal sensitivity characteristics (SSCs) were developed for Naradu, Shaune Garang, Gor Garang and Gara glaciers, Western Himalaya to quantify the changes in mean specific mass balance using monthly temperature and precipitation perturbations. The temperature sensitivities were observed high during summer ...

  17. Mass Customization Measurements Metrics

    DEFF Research Database (Denmark)

    Nielsen, Kjeld; Brunø, Thomas Ditlev; Jørgensen, Kaj Asbjørn

    2014-01-01

A recent survey has indicated that 17% of companies have ceased mass customizing less than 1 year after initiating the effort. This paper presents measurements for a company's mass customization performance, utilizing metrics within the three fundamental capabilities: robust process design, choice navigation, and solution space development. A mass customizer, when assessing performance with these metrics, can identify within which areas improvement would increase competitiveness the most and enable a more efficient transition to mass customization.

  18. Radiative Majorana Neutrino Masses

    OpenAIRE

    Hou, Wei-Shu; Wong, Gwo-Guang

    1994-01-01

    We present new radiative mechanisms for generating Majorana neutrino masses, within an extension of the standard model that successfully generates radiative charged lepton masses, order by order, from heavy sequential leptons. Only the new sequential neutral lepton has a right-handed partner, and its Majorana mass provides the seed for Majorana neutrino mass generation. Saturating the cosmological bound of $50$ eV with $m_{\

  19. Asteroids mass determination

    International Nuclear Information System (INIS)

    Hoffmann, M.

    1989-01-01

Basic methods for asteroid mass determination and their errors are discussed. New results and some current developments in the astrometric method are reviewed. New methods and techniques, such as electronic imaging, radar ranging, and space probes, are becoming important for asteroid mass determinations. Mass and density estimates based on rotational properties and possible satellites are also discussed

  20. Fourier Transform Mass Spectrometry.

    Science.gov (United States)

    Gross, Michael L.; Rempel, Don L.

    1984-01-01

    Discusses the nature of Fourier transform mass spectrometry and its unique combination of high mass resolution, high upper mass limit, and multichannel advantage. Examines its operation, capabilities and limitations, applications (ion storage, ion manipulation, ion chemistry), and future applications and developments. (JN)

  1. Scalar quarkonium masses

    International Nuclear Information System (INIS)

    Lee, W.; Weingarten, D.

    1996-01-01

We evaluate the valence approximation to the mass of scalar quarkonium for a range of different parameters. Our results strongly suggest that the infinite-volume continuum limit of the mass of ss scalar quarkonium lies well below the mass of f_J(1710). The resonance f_0(1500) appears to be the best candidate for ss scalar quarkonium. (orig.)

  2. What masses for Cepheids

    International Nuclear Information System (INIS)

    Davis, C.G.

To understand the evolution of giant stars, it is important to pin down the masses of Cepheids. The 7- to 10-day bump Cepheids imply masses lower than evolutionary masses (60%). Recent theoretical work, though, indicates that for Cepheids with periods of 15 to 16 days, the best understanding of the light curves results from using evolutionary masses

  3. Quantifying performance on an outdoor agility drill using foot-mounted inertial measurement units.

    Directory of Open Access Journals (Sweden)

    Antonia M Zaferiou

Running agility is required for many sports and other physical tasks that demand rapid changes in body direction. Quantifying agility skill remains a challenge because measuring rapid changes of direction, and quantifying agility skill from those measurements, are difficult to do in ways that replicate real task/game play situations. The objectives of this study were to define and to measure agility performance for a (five-cone) agility drill used within a military obstacle course using data harvested from two foot-mounted inertial measurement units (IMUs). Thirty-two recreational athletes ran an agility drill while wearing two IMUs secured to the tops of their athletic shoes. The recorded accelerations and angular rates yield estimates of the trajectories, velocities, and accelerations of both feet, as well as an estimate of the horizontal velocity of the body mass center. Four agility performance metrics were proposed and studied: (1) agility drill time, (2) horizontal body speed, (3) foot trajectory turning radius, and (4) tangential body acceleration. Additionally, the average horizontal ground reaction during each footfall was estimated. We hypothesized that shorter agility drill performance time would be observed with small turning radii and large tangential acceleration ranges and body speeds. Kruskal-Wallis and mean rank post-hoc statistical analyses revealed that shorter agility drill performance times were observed with smaller turning radii and larger tangential acceleration ranges and body speeds, as hypothesized. Moreover, measurements revealed the strategies that distinguish high versus low performers. Relative to low performers, high performers used sharper turns, larger changes in body speed (larger tangential acceleration ranges), and shorter duration footfalls that generated larger horizontal ground reactions during the turn phases. Overall, this study advances the use of foot-mounted IMUs to quantify agility performance in
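The turning-radius and tangential-acceleration metrics can be sketched from a sampled planar trajectory using the standard curvature formula r = |v|³/|v × a|; this is a simplified illustration on a synthetic path, not the authors' IMU processing pipeline:

```python
import numpy as np

def agility_metrics(xy, dt):
    # Turning radius and tangential acceleration along a horizontal
    # trajectory xy (N x 2, metres), sampled every dt seconds.
    vel = np.gradient(xy, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    # Planar curvature via the cross product; radius is its reciprocal
    cross = vel[:, 0] * acc[:, 1] - vel[:, 1] * acc[:, 0]
    radius = speed ** 3 / np.abs(cross)
    tangential_acc = np.gradient(speed, dt)  # rate of change of speed
    return radius, tangential_acc

# Sanity check: a constant-speed circle of radius 5 m should give
# turning radius 5 m and zero tangential acceleration.
t = np.arange(0, 10, 0.01)
xy = np.column_stack([5 * np.cos(t), 5 * np.sin(t)])
radius, tan_acc = agility_metrics(xy, 0.01)
```

In practice the trajectories come from strapdown integration of the IMU accelerations and angular rates, which introduces drift that must be corrected (e.g., at detected footfalls) before metrics like these are computed.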

  4. Quantifying the effects of soil temperature, moisture and sterilization on elemental mercury formation in boreal soils.

    Science.gov (United States)

    Pannu, Ravinder; Siciliano, Steven D; O'Driscoll, Nelson J

    2014-10-01

Soils are a source of elemental mercury (Hg(0)) to the atmosphere; however, the effects of soil temperature and moisture on Hg(0) formation are not well defined. This research quantifies the effect of varying soil temperature (278-303 K), moisture (15-80% water filled pore space (WFPS)), and sterilization on the kinetics of Hg(0) formation in forested soils of Nova Scotia, Canada. Both the logarithm of the cumulative mass of Hg(0) formed in soils and the reduction rate constants (k values) increased with temperature and moisture, respectively. Sterilizing soils significantly (p < 0.05) reduced Hg(0) formation, and our results highlight two key processes: (i) a fast abiotic process that peaks at 45% WFPS and depletes a small pool of Hg(0); and (ii) a slower, rate-limiting biotic process that generates a large pool of reducible Hg(II). Copyright © 2014 Elsevier Ltd. All rights reserved.
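The reduction rate constants (k values) mentioned above suggest first-order kinetics for cumulative Hg(0) formation. A generic sketch of recovering k from a cumulative-formation curve is shown below with made-up numbers; the authors' actual kinetic model and fitting procedure may differ.

```python
import numpy as np

# Assumed first-order model for cumulative Hg(0) formation:
#   m(t) = m_max * (1 - exp(-k * t))
k_true, m_max = 0.15, 120.0          # 1/h and ng; hypothetical values
t = np.linspace(0, 30, 16)           # sampling times, hours
m = m_max * (1 - np.exp(-k_true * t))  # noise-free synthetic data

# Linearize: ln(m_max - m) = ln(m_max) - k * t, then regress on t.
slope, intercept = np.polyfit(t, np.log(m_max - m), 1)
k_est = -slope
```

With real (noisy) data, m_max would itself be uncertain, so a nonlinear least-squares fit of both m_max and k is usually preferred over this linearization.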

  5. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    Science.gov (United States)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also holds implications for specific behavioral tracking when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, especially the data reduction algorithms and logic to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low-visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation

  6. A new paradigm of quantifying ecosystem stress through chemical signatures

    Energy Technology Data Exchange (ETDEWEB)

    Kravitz, Ben [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, P.O. Box 999, MSIN K9-30 Richland Washington 99352 USA; Guenther, Alex B. [Department of Earth System Science, University of California Irvine, 3200 Croul Hall Street Irvine California 92697 USA; Gu, Lianhong [Environmental Sciences Division, Oak Ridge National Laboratory, Oak Ridge Tennessee 37831 USA; Karl, Thomas [Institute of Atmospheric and Crysopheric Sciences, University of Innsbruck, Innrain 52f A-6020 Innsbruck Austria; Kaser, Lisa [National Center for Atmospheric Research, P.O. Box 3000 Boulder Colorado 80307 USA; Pallardy, Stephen G. [Department of Forestry, University of Missouri, 203 Anheuser-Busch Natural Resources Building Columbia Missouri 65211 USA; Peñuelas, Josep [CREAF, Cerdanyola del Vallès 08193 Catalonia Spain; Global Ecology Unit CREAF-CSIC-UAB, CSIC, Cerdanyola del Vallès 08193 Catalonia Spain; Potosnak, Mark J. [Department of Environmental Science and Studies, DePaul University, McGowan South, Suite 203 Chicago Illinois 60604 USA; Seco, Roger [Department of Earth System Science, University of California Irvine, 3200 Croul Hall Street Irvine California 92697 USA

    2016-11-01

    Stress-induced emissions of biogenic volatile organic compounds (VOCs) from terrestrial ecosystems may be one of the dominant sources of VOC emissions world-wide. Understanding the ecosystem stress response could reveal how ecosystems will respond and adapt to climate change and, in turn, quantify changes in the atmospheric burden of VOC oxidants and secondary organic aerosols. Here we argue, based on preliminary evidence from several opportunistic measurement sources, that chemical signatures of stress can be identified and quantified at the ecosystem scale. We also outline future endeavors that we see as next steps toward uncovering quantitative signatures of stress, including new advances in both VOC data collection and analysis of "big data."

  7. A framework for quantifying net benefits of alternative prognostic models

    DEFF Research Database (Denmark)

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M

    2012-01-01

New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple...

  8. Quantifying the value of SHM for wind turbine blades

    DEFF Research Database (Denmark)

    Nielsen, Jannie Sønderkær; Tcherniak, Dmitri; Ulriksen, Martin Dalgaard

    2018-01-01

In this paper, the value of information (VoI) from structural health monitoring (SHM) is quantified in a case study for offshore wind turbines (OWTs). This is done by combining data from an operating turbine equipped with a blade SHM system with cost information from a service provider for OWTs. A decision framework based on Bayesian pre-posterior decision analysis is developed to quantify the value of SHM for an 8 MW OWT. Deterioration is modelled as a Markov chain developed based on data, and the costs are obtained from a service provider for OWTs. Discrete Bayesian networks are used, and the monitored response is compared to a statistical model from the healthy state using a metric that yields a damage index representing the structural integrity. As the damage was introduced artificially, it is possible to statistically estimate the confusion matrix corresponding to different threshold values, and here we opt...

  9. Quantifiers for randomness of chaotic pseudo-random number generators.

    Science.gov (United States)

    De Micco, L; Larrondo, H A; Plastino, A; Rosso, O A

    2009-08-28

    We deal with randomness quantifiers and concentrate on their ability to discern the hallmark of chaos in time series used in connection with pseudo-random number generators (PRNGs). Workers in the field are motivated to use chaotic maps for generating PRNGs because of the simplicity of their implementation. Although there exist very efficient general-purpose benchmarks for testing PRNGs, we feel that the analysis provided here sheds additional didactic light on the importance of the main statistical characteristics of a chaotic map, namely (i) its invariant measure and (ii) the mixing constant. This is of help in answering two questions that arise in applications: (i) which is the best PRNG among the available ones? and (ii) if a given PRNG turns out not to be good enough and a randomization procedure must still be applied to it, which is the best applicable randomization procedure? Our answer provides a comparative analysis of several quantifiers advanced in the extant literature.
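The importance of the invariant measure can be demonstrated with the fully chaotic logistic map x → 4x(1−x), whose invariant distribution is the arcsine law with CDF F(x) = (2/π) arcsin(√x). The sketch below checks an orbit's empirical distribution against this closed form; it is a didactic illustration, not one of the paper's quantifiers.

```python
import numpy as np

def logistic_orbit(x0, n, burn=1000):
    # Iterate the fully chaotic logistic map x -> 4x(1-x),
    # discarding a burn-in transient before recording the orbit.
    x = x0
    for _ in range(burn):
        x = 4.0 * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        out[i] = x
    return out

orbit = logistic_orbit(0.123456, 100_000)

# Invariant CDF of the logistic map: F(x) = (2/pi) * arcsin(sqrt(x))
xs = np.array([0.1, 0.5, 0.9])
theory = (2 / np.pi) * np.arcsin(np.sqrt(xs))
empirical = np.array([(orbit <= x).mean() for x in xs])
```

Because this invariant density is far from uniform, a logistic-map PRNG needs a randomization step (e.g., applying the inverse CDF or discarding bits) before its output resembles uniform random numbers, which is exactly the kind of question the paper's quantifiers address.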

  10. Resolving and quantifying overlapped chromatographic bands by transmutation

    Science.gov (United States)

    Malinowski

    2000-09-15

    A new chemometric technique called "transmutation" is developed for the purpose of sharpening overlapped chromatographic bands in order to quantify the components. The "transmutation function" is created from the chromatogram of the pure component of interest, obtained from the same instrument, operating under the same experimental conditions used to record the unresolved chromatogram of the sample mixture. The method is used to quantify mixtures containing toluene, ethylbenzene, m-xylene, naphthalene, and biphenyl from unresolved chromatograms previously reported. The results are compared to those obtained using window factor analysis, rank annihilation factor analysis, and matrix regression analysis. Unlike the latter methods, the transmutation method is not restricted to two-dimensional arrays of data, such as those obtained from HPLC/DAD, but is also applicable to chromatograms obtained from single detector experiments. Limitations of the method are discussed.

  11. Pitfalls in quantifying species turnover: the residency effect

    Directory of Open Access Journals (Sweden)

    Kevin Chase Burns

    2014-03-01

    Full Text Available The composition of ecological communities changes continuously through time and space. Understanding this turnover in species composition is a central goal in biogeography, but quantifying species turnover can be problematic. Here, I describe an underappreciated source of bias in quantifying species turnover, namely ‘the residency effect’, which occurs when the contiguous distributions of species across sampling domains are small relative to census intervals. I present the results of a simulation model that illustrates the problem theoretically and then I demonstrate the problem empirically using a long-term dataset of plant species turnover on islands. Results from both exercises indicate that empirical estimates of species turnover may be susceptible to significant observer bias, which may potentially cloud a better understanding of how the composition of ecological communities changes through time.
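    A minimal sketch of the turnover estimate such census comparisons rely on; a short-residency species that colonizes and goes extinct between the two censuses appears in neither species set and so contributes nothing, which is exactly the bias described above (function and data are illustrative):

```python
def turnover(census1, census2):
    """Relative turnover between two censuses:
    (extinctions + colonizations) / (richness1 + richness2)."""
    s1, s2 = set(census1), set(census2)
    extinctions = len(s1 - s2)    # present before, absent now
    colonizations = len(s2 - s1)  # absent before, present now
    return (extinctions + colonizations) / (len(s1) + len(s2))

# A species present only *between* censuses is invisible to this
# estimator, biasing turnover downward (the residency effect).
print(turnover(["a", "b", "c"], ["a", "b", "d"]))  # 2/6 ≈ 0.333
```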

  12. Numerical Model to Quantify the Influence of the Cellulosic Substrate on the Ignition Propensity Tests

    Directory of Open Access Journals (Sweden)

    Guindos Pablo

    2016-07-01

    Full Text Available A numerical model based on the finite element method has been constructed to simulate the ignition propensity (IP) tests. The objective of this mathematical model was to quantify the influence of different characteristics of the cellulosic substrate on the results of the IP-tests. The creation and validation of the model included the following steps: (i) formulation of the model based on experimental thermodynamic characteristics of the cellulosic substrate; (ii) calibration of the model according to cone calorimeter tests; (iii) validation of the model through mass loss and temperature profiling during IP-testing. Once the model was validated, the influence of each isolated parameter of the cellulosic substrate was quantified via a parametric study. The results revealed that the substrate heat capacity, the cigarette temperature and the pyrolysis activation energy are the most influential parameters for the thermodynamic response of the substrates, while other parameters such as the heat of the pyrolysis reaction, density and roughness of the substrate showed little influence. The results also indicated that the thermodynamic mechanisms involved in the pyrolysis and combustion of the cellulosic substrate are complex and show low repeatability, which might impair the reliability of the IP-tests.

  13. Quantifying sex, race, and age specific differences in bone microstructure requires measurement of anatomically equivalent regions.

    Science.gov (United States)

    Ghasem-Zadeh, Ali; Burghardt, Andrew; Wang, Xiao-Fang; Iuliano, Sandra; Bonaretti, Serena; Bui, Minh; Zebaze, Roger; Seeman, Ego

    2017-08-01

    Individuals differ in forearm length. As microstructure differs along the radius, we hypothesized that errors may occur when sexual and racial dimorphisms are quantified at a fixed distance from the radio-carpal joint. Microstructure was quantified ex vivo in 18 cadaveric radii using high resolution peripheral quantitative computed tomography and in vivo in 158 Asian and Caucasian women and men at a fixed region of interest (ROI), a corrected ROI positioned at 4.5-6% of forearm length, and using the fixed ROI adjusted for cross sectional area (CSA), forearm length or height. Secular effects of age were assessed by comparing 38 younger and 33 older women. Ex vivo, similar amounts of bone mass fashioned adjacent cross sections. Larger distal cross sections had thinner porous cortices of lower matrix mineral density (MMD), a larger medullary CSA and higher trabecular density. Smaller proximal cross sections had thicker, less porous cortices of higher MMD and a small medullary canal with little trabecular bone. Taller persons had more distally positioned fixed ROIs, which moved proximally when corrected. Shorter persons had more proximally positioned fixed ROIs, which moved distally when corrected, so dimorphisms lessened. In the corrected ROIs, in Caucasians, women had 0.6 SD higher porosity and 0.6 SD lower trabecular density than men. Quantifying microstructure requires measurement of anatomically equivalent regions. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Quantifying sleep architecture dynamics and individual differences using big data and Bayesian networks.

    Science.gov (United States)

    Yetton, Benjamin D; McDevitt, Elizabeth A; Cellini, Nicola; Shelton, Christian; Mednick, Sara C

    2018-01-01

    The pattern of sleep stages across a night (sleep architecture) is influenced by biological, behavioral, and clinical variables. However, traditional measures of sleep architecture, such as stage proportions, fail to capture sleep dynamics. Here we quantify the impact of individual differences on the dynamics of sleep architecture and determine which factors, or sets of factors, best predict the next sleep stage from current stage information. We investigated the influence of age, sex, body mass index (BMI), time of day, and sleep time on static (e.g., minutes in stage, sleep efficiency) and dynamic measures of sleep architecture (e.g., transition probabilities and stage duration distributions) using a large dataset of 3202 nights from a non-clinical population. Multi-level regressions show that sex affects the duration of all Non-Rapid Eye Movement (NREM) stages, and that age has a curvilinear relationship with Wake After Sleep Onset (WASO) and slow wave sleep (SWS) minutes. Bayesian network modeling reveals that sleep architecture depends on time of day, total sleep time, age and sex, but not BMI. Older adults, particularly males, have shorter bouts (more fragmentation) of Stage 2 and SWS, and they transition to these stages less frequently. Additionally, we showed that the next sleep stage and its duration can be optimally predicted from the prior two stages and age. Our results demonstrate the potential benefit of big data and Bayesian network approaches in quantifying the static and dynamic architecture of normal sleep.
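    The transition probabilities mentioned here can be estimated, in minimal form, as a first-order Markov matrix from an epoch-by-epoch stage sequence (the labels and sequence below are illustrative, not the study's data):

```python
from collections import Counter

def transition_probabilities(stages):
    """Maximum-likelihood first-order transition probabilities
    estimated from a sequence of sleep-stage labels."""
    pair_counts = Counter(zip(stages, stages[1:]))
    from_counts = Counter(stages[:-1])
    return {(a, b): n / from_counts[a] for (a, b), n in pair_counts.items()}

# Illustrative epoch sequence (not study data).
seq = ["W", "N1", "N2", "N2", "N3", "N2", "REM", "N2", "W"]
probs = transition_probabilities(seq)
print(probs[("N2", "N3")])  # 0.25
```

    A Bayesian network extends this by conditioning the same probabilities on covariates such as age, sex, and time of day.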

  15. Quantifying resilience for resilience engineering of socio technical systems

    OpenAIRE

    Häring, Ivo; Ebenhöch, Stefan; Stolz, Alexander

    2016-01-01

    Resilience engineering can be defined to comprise originally technical, engineering and natural science approaches to improve the resilience and sustainability of socio technical cyber-physical systems of various complexities with respect to disruptive events. It is argued how this emerging interdisciplinary technical and societal science approach may contribute to civil and societal security research. In this context, the article lists expected benefits of quantifying resilience. Along the r...

  16. Quantifying the Lateral Bracing Provided by Standing Seam Roof Systems

    OpenAIRE

    Sorensen, Taylor J.

    2016-01-01

    One of the major challenges of engineering is finding the proper balance between economical and safe. Currently engineers at Nucor Corporation have ignored the additional lateral bracing provided by standing seam roofing systems to joists because of the lack of methods available to quantify the amount of bracing provided. Based on the results of testing performed herein, this bracing is significant, potentially resulting in excessively conservative designs and unnecessary costs. This proje...

  17. A framework for quantifying net benefits of alternative prognostic models

    OpenAIRE

    Rapsomaniki, E.; White, I.R.; Wood, A.M.; Thompson, S.G.; Ford, I.

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measure...

  18. PREDICTION OF SURGICAL TREATMENT OF DIFFUSE PERITONITIS BY QUANTIFYING RISK FACTORS

    Directory of Open Access Journals (Sweden)

    І. К. Churpiy

    2012-11-01

    Full Text Available The possibility of quantitative assessment of risk factors for complications in the treatment of diffuse peritonitis was explored. Fifty-three groups of features that are important in predicting the course of diffuse peritonitis were highlighted. The proposed scheme for defining the risk of the clinical course of diffuse peritonitis can quantify the baseline severity of patients and, in most cases, correctly predict the outcome of treatment.

  19. Simulating non-prenex cuts in quantified propositional calculus

    Czech Academy of Sciences Publication Activity Database

    Jeřábek, Emil; Nguyen, P.

    2011-01-01

    Roč. 57, č. 5 (2011), s. 524-532 ISSN 0942-5616 R&D Projects: GA AV ČR IAA100190902; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : proof complexity * prenex cuts * quantified propositional calculus Subject RIV: BA - General Mathematics Impact factor: 0.496, year: 2011 http://onlinelibrary.wiley.com/doi/10.1002/malq.201020093/abstract

  20. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    Full Text Available We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  1. Parkinson's Law Quantified: Three Investigations on Bureaucratic Inefficiency

    OpenAIRE

    Klimek, Peter; Hanel, Rudolf; Thurner, Stefan

    2008-01-01

    We formulate three famous, descriptive essays of C. N. Parkinson on bureaucratic inefficiency in a quantifiable and dynamical socio-physical framework. In the first model we show how recent opinion formation models for small groups can be used to understand Parkinson's observation that decision making bodies such as cabinets or boards become highly inefficient once their size exceeds a critical 'Coefficient of Inefficiency', typically around 20. A second observation of Parkinson - w...

  2. The quantified self a sociology of self-tracking

    CERN Document Server

    Lupton, Deborah

    2016-01-01

    With the advent of digital devices and software, self-tracking practices have gained new adherents and have spread into a wide array of social domains. The Quantified Self movement has emerged to promote 'self knowledge through numbers'. In this ground-breaking book, Deborah Lupton critically analyses the social, cultural and political dimensions of contemporary self-tracking and identifies the concepts of selfhood, human embodiment and the value of data that underpin them.

  3. Quantifying the ice-albedo feedback through decoupling

    Science.gov (United States)

    Kravitz, B.; Rasch, P. J.

    2017-12-01

    The ice-albedo feedback involves numerous individual components, whereby warming induces sea ice melt, inducing reduced surface albedo, inducing increased surface shortwave absorption, causing further warming. Here we attempt to quantify the sea ice albedo feedback using an analogue of the "partial radiative perturbation" method, but where the governing mechanisms are directly decoupled in a climate model. As an example, we can isolate the insulating effects of sea ice on surface energy and moisture fluxes by allowing sea ice thickness to change but fixing Arctic surface albedo, or vice versa. Here we present results from such idealized simulations using the Community Earth System Model in which individual components are successively fixed, effectively decoupling the ice-albedo feedback loop. We isolate the different components of this feedback, including temperature change, sea ice extent/thickness, and air-sea exchange of heat and moisture. We explore the interactions between these different components, as well as the strengths of the total feedback in the decoupled feedback loop, to quantify contributions from individual pieces. We also quantify the non-additivity of the effects of the components as a means of investigating the dominant sources of nonlinearity in the ice-albedo feedback.

  4. A novel approach to quantify cybersecurity for electric power systems

    Science.gov (United States)

    Kaster, Paul R., Jr.

    Electric Power grid cybersecurity is a topic gaining increased attention in academia, industry, and government circles, yet a method of quantifying and evaluating a system's security is not yet commonly accepted. In order to be useful, a quantification scheme must be able to accurately reflect the degree to which a system is secure, simply determine the level of security in a system using real-world values, model a wide variety of attacker capabilities, be useful for planning and evaluation, allow a system owner to publish information without compromising the security of the system, and compare relative levels of security between systems. Published attempts at quantifying cybersecurity fail at one or more of these criteria. This document proposes a new method of quantifying cybersecurity that meets those objectives. This dissertation evaluates the current state of cybersecurity research, discusses the criteria mentioned previously, proposes a new quantification scheme, presents an innovative method of modeling cyber attacks, demonstrates that the proposed quantification methodology meets the evaluation criteria, and proposes a line of research for future efforts.

  5. Clinical relevance of quantified fundus autofluorescence in diabetic macular oedema.

    Science.gov (United States)

    Yoshitake, S; Murakami, T; Uji, A; Unoki, N; Dodo, Y; Horii, T; Yoshimura, N

    2015-05-01

    To quantify the signal intensity of fundus autofluorescence (FAF) and evaluate its association with visual function and optical coherence tomography (OCT) findings in diabetic macular oedema (DMO). We reviewed 103 eyes of 78 patients with DMO and 30 eyes of 22 patients without DMO. FAF images were acquired using Heidelberg Retina Angiograph 2, and the signal levels of FAF in the individual subfields of the Early Treatment Diabetic Retinopathy Study grid were measured. We evaluated the association between quantified FAF and the logMAR VA and OCT findings. One hundred and three eyes with DMO had lower FAF signal intensity levels in the parafoveal subfields compared with 30 eyes without DMO. The autofluorescence intensity in the parafoveal subfields was associated negatively with logMAR VA and the retinal thickness in the corresponding subfields. The autofluorescence levels in the parafoveal subfield, except the nasal subfield, were lower in eyes with autofluorescent cystoid spaces in the corresponding subfield than in those without autofluorescent cystoid spaces. The autofluorescence level in the central subfield was related to foveal cystoid spaces but not logMAR VA or retinal thickness in the corresponding area. Quantified FAF in the parafovea has diagnostic significance and is clinically relevant in DMO.

  6. Information criteria for quantifying loss of reversibility in parallelized KMC

    Energy Technology Data Exchange (ETDEWEB)

    Gourgoulias, Konstantinos, E-mail: gourgoul@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Rey-Bellet, Luc, E-mail: luc@math.umass.edu

    2017-01-01

    Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.
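    The authors' a posteriori estimators are specific to parallel KMC, but the underlying idea, that antisymmetric transition counts along a trajectory signal lost reversibility, can be sketched for a generic discrete-state record (a simplified trajectory estimator under assumed count data, not the paper's construction; one-sided transition pairs are skipped for simplicity):

```python
import math

def entropy_production_rate(counts, total_time):
    """Trajectory estimate of entropy production per unit time from observed
    transition counts counts[i][j]:
    sigma = (1/T) * sum_{i<j} (n_ij - n_ji) * log(n_ij / n_ji).
    Every term is non-negative; sigma == 0 for a time-reversible record."""
    sigma = 0.0
    states = sorted(counts)
    for i in states:
        for j in states:
            if i < j:
                nij = counts[i].get(j, 0)
                nji = counts[j].get(i, 0)
                # Pairs observed in only one direction are skipped here.
                if nij > 0 and nji > 0 and nij != nji:
                    sigma += (nij - nji) * math.log(nij / nji)
    return sigma / total_time

# Antisymmetric counts signal irreversibility.
print(entropy_production_rate({"a": {"b": 10}, "b": {"a": 5}}, 1.0))  # 5*ln(2) ≈ 3.466
```

    Comparing this rate across parallelization schemes is the comparative use the abstract describes: lower entropy production means less reversibility lost.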

  7. Leveraging 3D-HST Grism Redshifts to Quantify Photometric Redshift Performance

    Science.gov (United States)

    Bezanson, Rachel; Wake, David A.; Brammer, Gabriel B.; van Dokkum, Pieter G.; Franx, Marijn; Labbé, Ivo; Leja, Joel; Momcheva, Ivelina G.; Nelson, Erica J.; Quadri, Ryan F.; Skelton, Rosalind E.; Weiner, Benjamin J.; Whitaker, Katherine E.

    2016-05-01

    We present a study of photometric redshift accuracy in the 3D-HST photometric catalogs, using 3D-HST grism redshifts to quantify and dissect trends in redshift accuracy for galaxies brighter than JH_IR = 24 with an unprecedented and representative high-redshift galaxy sample. We find an average scatter of 0.0197 ± 0.0003(1 + z) in the Skelton et al. photometric redshifts. Photometric redshift accuracy decreases with magnitude and redshift, but does not vary monotonically with color or stellar mass. The 1σ scatter lies between 0.01 and 0.03(1 + z) for galaxies of all masses and colors below z ≈ 2, with the exception of high-redshift (z > 2), dusty star-forming galaxies, for which the scatter increases to ˜0.1(1 + z). We find that photometric redshifts depend significantly on galaxy size; the largest galaxies at fixed magnitude have photo-zs with up to ˜30% more scatter and ˜5 times the outlier rate. Although the overall photometric redshift accuracy for quiescent galaxies is better than that for star-forming galaxies, scatter depends more strongly on magnitude and redshift than on galaxy type. We verify these trends using the redshift distributions of close pairs and extend the analysis to fainter objects, where photometric redshift errors further increase to ˜0.046(1 + z) at H_F160W = 26. We demonstrate that photometric redshift accuracy is strongly filter dependent and quantify the contribution of multiple filter combinations. We evaluate the widths of redshift probability distribution functions and find that error estimates are underestimated by a factor of ˜1.1-1.6, but that uniformly broadening the distribution does not adequately account for fitting outliers. Finally, we suggest possible applications of these data in planning for current and future surveys and simulate photometric redshift performance in the Large Synoptic Survey Telescope, Dark Energy Survey (DES), and combined DES and Vista Hemisphere surveys.
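    The scatter statistic in such comparisons is conventionally a normalized median absolute deviation, robust to catastrophic outliers; a self-contained sketch (the 1.48 scaling makes the MAD consistent with a Gaussian σ; the redshift pairs below are made up):

```python
import statistics

def photoz_scatter(z_phot, z_ref, outlier_cut=0.1):
    """Robust photo-z scatter and outlier rate versus reference (grism) redshifts:
    sigma_NMAD = 1.48 * median(|dz - median(dz)|), dz = (z_phot - z_ref)/(1 + z_ref)."""
    dz = [(zp - zr) / (1.0 + zr) for zp, zr in zip(z_phot, z_ref)]
    med = statistics.median(dz)
    nmad = 1.48 * statistics.median([abs(d - med) for d in dz])
    outliers = sum(abs(d) > outlier_cut for d in dz) / len(dz)
    return nmad, outliers

# Hypothetical pairs, including one catastrophic outlier.
z_ref = [1.0, 1.0, 1.0, 1.0, 1.0]
z_phot = [0.96, 0.98, 1.00, 1.02, 2.00]
print(photoz_scatter(z_phot, z_ref))  # ≈ (0.0148, 0.2)
```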

  8. Validity and reliability of the session-RPE method for quantifying training load in karate athletes.

    Science.gov (United States)

    Tabben, M; Tourny, C; Haddad, M; Chaabane, H; Chamari, K; Coquart, J B

    2015-04-24

    To test the construct validity and reliability of the session rating of perceived exertion (sRPE) method by examining the relationship between RPE and physiological parameters (heart rate, HR, and blood lactate concentration, [La−]) and the correlations between sRPE and two HR-based methods for quantifying internal training load (Banister's method and Edwards's method) during a karate training camp. Eighteen elite karate athletes, ten men (age: 24.2 ± 2.3 y, body mass: 71.2 ± 9.0 kg, body fat: 8.2 ± 1.3%, height: 178 ± 7 cm) and eight women (age: 22.6 ± 1.2 y, body mass: 59.8 ± 8.4 kg, body fat: 20.2 ± 4.4%, height: 169 ± 4 cm), were included in the study. During the training camp, subjects participated in eight karate training sessions covering three training modes (4 tactical-technical, 2 technical-development, and 2 randori training), during which RPE, HR, and [La−] were recorded. Significant correlations were found between RPE and the physiological parameters (percentage of maximal HR: r = 0.75, 95% CI = 0.64-0.86; [La−]: r = 0.62, 95% CI = 0.49-0.75) and between sRPE and the HR-based measures of internal training load (r = 0.65-0.95), together with good reliability of the same intensity across training sessions (Cronbach's α = 0.81, 95% CI = 0.61-0.92). This study demonstrates that the sRPE method is valid for quantifying internal training load and intensity in karate.
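    Both load measures compared in this study have simple closed forms: Foster's session-RPE multiplies the CR-10 rating by session duration, and Edwards's method weights the minutes spent in five %HRmax zones by 1 to 5. A sketch with illustrative numbers:

```python
def session_rpe_load(rpe_cr10, duration_min):
    """Foster's session-RPE load: CR-10 rating x session duration (arbitrary units)."""
    return rpe_cr10 * duration_min

def edwards_load(minutes_in_zones):
    """Edwards's HR-zone load: minutes in the five %HRmax zones
    (50-60, 60-70, 70-80, 80-90, 90-100) weighted 1 to 5."""
    return sum(w * m for w, m in zip(range(1, 6), minutes_in_zones))

print(session_rpe_load(7, 60))            # 420
print(edwards_load([5, 10, 20, 15, 10]))  # 195
```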

  9. Quantifying the impact of mergers on the angular momentum of simulated galaxies

    Science.gov (United States)

    Lagos, Claudia del P.; Stevens, Adam R. H.; Bower, Richard G.; Davis, Timothy A.; Contreras, Sergio; Padilla, Nelson D.; Obreschkow, Danail; Croton, Darren; Trayford, James W.; Welker, Charlotte; Theuns, Tom

    2018-02-01

    We use EAGLE to quantify the effect galaxy mergers have on the stellar specific angular momentum of galaxies, jstars. We split mergers into dry (gas-poor)/wet (gas-rich), major/minor and different spin alignments and orbital parameters. Wet (dry) mergers have an average neutral gas-to-stellar mass ratio of 1.1 (0.02), while major (minor) mergers are those with stellar mass ratios ≥0.3 (0.1-0.3). We correlate the positions of galaxies in the jstars-stellar mass plane at z = 0 with their merger history, and find that galaxies of low spins suffered dry mergers, while galaxies of normal/high spins suffered predominantly wet mergers, if any. The radial jstars profiles of galaxies that went through dry mergers are deficient by ≈0.3 dex at r ≲ 10 r50 (with r50 being the half-stellar mass radius), compared to galaxies that went through wet mergers. Studying the merger remnants reveals that dry mergers reduce jstars by ≈30 per cent, while wet mergers increase it by ≈10 per cent, on average. The latter is connected to the build-up of the bulge by newly formed stars of high rotational speed. Moving from minor to major mergers accentuates these effects. When the spin vectors of the galaxies prior to the dry merger are misaligned, jstars decreases by a greater magnitude, while in wet mergers corotation and high orbital angular momentum efficiently spun up galaxies. We predict what would be the observational signatures in the jstars profiles driven by dry mergers: (i) shallow radial profiles and (ii) profiles that rise beyond ≈10 r50, both of which are significantly different from spiral galaxies.
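    The central quantity jstars is the mass-weighted specific angular momentum of the stellar particles; a minimal pure-Python sketch for a small particle set (units and data are illustrative, not EAGLE output):

```python
import math

def cross(a, b):
    """3D cross product a x b."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def j_stars(masses, positions, velocities):
    """Mass-weighted specific angular momentum |sum m_i r_i x v_i| / sum m_i."""
    L = [0.0, 0.0, 0.0]
    for m, r, v in zip(masses, positions, velocities):
        c = cross(r, v)
        for k in range(3):
            L[k] += m * c[k]
    return math.sqrt(sum(x * x for x in L)) / sum(masses)

# Two equal-mass particles on a circular orbit of unit radius and speed.
print(j_stars([1.0, 1.0], [(1, 0, 0), (-1, 0, 0)], [(0, 1, 0), (0, -1, 0)]))  # 1.0
```

    Radial jstars profiles are obtained by evaluating the same sum cumulatively within growing apertures, which is how the dry-merger deficit at small radii is measured.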

  10. On Defining Mass

    Science.gov (United States)

    Hecht, Eugene

    2011-01-01

    Though central to any pedagogical development of physics, the concept of mass is still not well understood. Properly defining mass has proven to be far more daunting than contemporary textbooks would have us believe. And yet today the origin of mass is one of the most aggressively pursued areas of research in all of physics. Much of the excitement surrounding the Large Hadron Collider at CERN is associated with discovering the mechanism responsible for the masses of the elementary particles. This paper will first briefly examine the leading definitions, pointing out their shortcomings. Then, utilizing relativity theory, it will propose—for consideration by the community of physicists—a conceptual definition of mass predicated on the more fundamental concept of energy, more fundamental in that everything that has mass has energy, yet not everything that has energy has mass.

  11. Fourier Transform Mass Spectrometry

    Science.gov (United States)

    Scigelova, Michaela; Hornshaw, Martin; Giannakopulos, Anastassios; Makarov, Alexander

    2011-01-01

    This article provides an introduction to Fourier transform-based mass spectrometry. The key performance characteristics of Fourier transform-based mass spectrometry, mass accuracy and resolution, are presented in the view of how they impact the interpretation of measurements in proteomic applications. The theory and principles of operation of two types of mass analyzer, Fourier transform ion cyclotron resonance and Orbitrap, are described. Major benefits as well as limitations of Fourier transform-based mass spectrometry technology are discussed in the context of practical sample analysis, and illustrated with examples included as figures in this text and in the accompanying slide set. Comparisons highlighting the performance differences between the two mass analyzers are made where deemed useful in assisting the user with choosing the most appropriate technology for an application. Recent developments of these high-performing mass spectrometers are mentioned to provide a future outlook. PMID:21742802

  12. The Point Mass Concept

    Directory of Open Access Journals (Sweden)

    Lehnert B.

    2011-04-01

    Full Text Available A point-mass concept has been elaborated from the equations of the gravitational field. One application of these deductions results in a black hole configuration of the Schwarzschild type, having no electric charge and no angular momentum. The critical mass of a gravitational collapse with respect to the nuclear binding energy is found to be in the range of 0.4 to 90 solar masses. A second application is connected with the speculation about an extended symmetric law of gravitation, based on the options of positive and negative mass for a particle at given positive energy. This would make masses of equal polarity attract each other, while masses of opposite polarity repel each other. Matter and antimatter are further proposed to be associated with the states of positive and negative mass. Under fully symmetric conditions this could provide a mechanism for the separation of antimatter from matter at an early stage of the universe.

  14. Milk production and composition, nitrogen utilization, and grazing behavior of late-lactation dairy cows as affected by time of allocation of a fresh strip of pasture.

    Science.gov (United States)

    Vibart, R E; Tavendale, M; Otter, D; Schwendel, B H; Lowe, K; Gregorini, P; Pacheco, D

    2017-07-01

    Eighty late-lactation dairy cows were used to examine the effects of allocating a new pasture strip of a sward based on ryegrass (Lolium perenne L.) in the morning (a.m.; ∼0730 h) or in the afternoon (p.m.; ∼1530 h) on milk production and composition, nitrogen (N) utilization, and grazing behavior. Cows grazed the same pasture strips for 24 h and were offered the same daily herbage allowance. Herbage composition differed among treatments; p.m. herbage had greater dry matter (DM; 22.7 vs. 19.9%), organic matter (OM; 89.5 vs. 88.9%), and water-soluble carbohydrate (10.9 vs. 7.6%) concentrations and lesser crude protein (20.5 vs. 22.2%) and neutral detergent fiber (48.8 vs. 50.4%) concentrations compared with a.m. herbage. Total fatty acids (FA), α-linolenic acid, and polyunsaturated FA (PUFA) were greater in a.m. herbage, whereas monounsaturated FA were greater in p.m. herbage. Estimates of herbage DM intake did not differ among treatments. Daily milk yields and milk fat and milk protein concentrations were similar among treatments, whereas milk fat (684 vs. 627 g/cow), milk protein (545 vs. 505 g/cow), and milk solids (milk fat + milk protein) yields (1,228 vs. 1,132 g/cow) tended to be greater for cows on p.m. herbage. Rumenic acid and total PUFA in milk were greater for cows on a.m. herbage, whereas oleic acid was greater for cows on p.m. herbage. Estimates of urinary N excretion (g/d) did not differ among treatments, but urinary N concentrations were greater for cows on a.m. herbage (5.85 vs. 5.36 g/L). Initial herbage mass (HM) available (kg of DM/ha) and instantaneous HM disappearance rates (kg of DM/ha and kg of DM/h) did not differ, but fractional disappearance rates (0.56 vs. 0.74 per hour for a.m. vs. p.m., respectively) differed. Under the current conditions, timing of pasture strip allocation altered the herbage nutrient supply to cows; allocating a fresh strip of pasture later in the day resulted in moderate increases in milk and milk solids yields
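    The fractional disappearance rates reported here (per hour) can be derived from pre- and post-grazing herbage mass under an assumed first-order depletion; the exponential form and the numbers below are illustrative assumptions, not the paper's procedure:

```python
import math

def fractional_disappearance_rate(hm_initial, hm_final, hours):
    """Fractional herbage disappearance rate k (per hour), assuming
    first-order depletion HM(t) = HM0 * exp(-k * t)."""
    return math.log(hm_initial / hm_final) / hours

# Hypothetical strip: 3000 kg DM/ha pre-grazing, 1500 kg DM/ha after 8 h.
print(round(fractional_disappearance_rate(3000.0, 1500.0, 8.0), 4))  # 0.0866
```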

  15. Is Mass Customization Sustainable?

    DEFF Research Database (Denmark)

    Petersen, Thomas Ditlev; Jørgensen, Kaj Asbjørn; Nielsen, Kjeld

    2011-01-01

    Mass customizers, like other companies, are currently experiencing increasing customer demand for environmentally sustainable products as well as increasingly strict legislation regarding environmental sustainability. This paper addresses the issue of whether the concepts mass customization and sustainability are fundamentally compatible by asking the question: can a mass customized product be sustainable? Several factors could indicate that mass customized products are less sustainable than standardized products; however, other factors suggest the opposite. This paper explores these factors during three life cycle phases for a product: Production, Use and End of Life. It is concluded that there is not an unambiguous causal relationship between mass customization and sustainability; however, several factors unique to mass customized products are essential to consider during product and process...

  16. Main sequence mass loss

    International Nuclear Information System (INIS)

    Brunish, W.M.; Guzik, J.A.; Willson, L.A.; Bowen, G.

    1987-01-01

    It has been hypothesized that variable stars may experience mass loss driven, at least in part, by oscillations. The class of stars we are discussing here are the δ Scuti variables. These are variable stars with masses between about 1.2 and 2.25 M⊙, lying on or very near the main sequence. According to this theory, high rotation rates enhance the rate of mass loss, so main sequence stars born in this mass range would have a range of mass loss rates, depending on their initial rotation velocity and the amplitude of the oscillations. The stars would evolve rapidly down the main sequence until (at about 1.25 M⊙) a surface convection zone began to form. The presence of this convective region would slow the rotation, perhaps allowing magnetic braking to occur, and thus sharply reduce the mass loss rate. 7 refs

  17. Quantifying the tibiofemoral joint space using x-ray tomosynthesis.

    Science.gov (United States)

    Kalinosky, Benjamin; Sabol, John M; Piacsek, Kelly; Heckel, Beth; Gilat Schmidt, Taly

    2011-12-01

    Digital x-ray tomosynthesis (DTS) has the potential to provide 3D information about the knee joint in a load-bearing posture, which may improve diagnosis and monitoring of knee osteoarthritis compared with projection radiography, the current standard of care. Manually quantifying and visualizing the joint space width (JSW) from 3D tomosynthesis datasets may be challenging. This work developed a semiautomated algorithm for quantifying the 3D tibiofemoral JSW from reconstructed DTS images. The algorithm was validated through anthropomorphic phantom experiments and applied to three clinical datasets. A user-selected volume of interest within the reconstructed DTS volume was enhanced with 1D multiscale gradient kernels. The edge-enhanced volumes were divided by polarity into tibial and femoral edge maps and combined across kernel scales. A 2D connected components algorithm was performed to determine candidate tibial and femoral edges. A 2D joint space width (JSW) map was constructed to represent the 3D tibiofemoral joint space. To quantify the algorithm accuracy, an adjustable knee phantom was constructed, and eleven posterior-anterior (PA) and lateral DTS scans were acquired with the medial minimum JSW of the phantom set to 0-5 mm in 0.5 mm increments (VolumeRad™, GE Healthcare, Chalfont St. Giles, United Kingdom). The accuracy of the algorithm was quantified by comparing the minimum JSW in a region of interest in the medial compartment of the JSW map to the measured phantom setting for each trial. In addition, the algorithm was applied to DTS scans of a static knee phantom and the JSW map compared to values estimated from a manually segmented computed tomography (CT) dataset. The algorithm was also applied to three clinical DTS datasets of osteoarthritic patients. The algorithm segmented the JSW and generated a JSW map for all phantom and clinical datasets. For the adjustable phantom, the estimated minimum JSW values were plotted against the measured values for all
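The final step described above, reducing segmented femoral and tibial edge heights to a 2D width map and a minimum JSW within a region of interest, can be sketched with plain lists standing in for the reconstructed edge maps (a toy illustration under our own naming, not the published algorithm):

```python
def jsw_map(femur_z, tibia_z):
    """Per-pixel joint space width (mm): femoral edge height minus tibial
    edge height, standing in for the segmented DTS edge maps."""
    return [[f - t for f, t in zip(frow, trow)]
            for frow, trow in zip(femur_z, tibia_z)]

def min_jsw(jsw, rows, cols):
    """Minimum JSW within a rectangular region of interest."""
    return min(jsw[r][c] for r in rows for c in cols)

# Toy 2x3 edge-height maps (mm)
femur = [[10.0, 10.2, 10.4],
         [10.1, 10.3, 10.5]]
tibia = [[6.5, 6.9, 7.2],
         [6.4, 6.8, 7.3]]
m = jsw_map(femur, tibia)
print(round(min_jsw(m, range(2), range(2)), 2))  # narrowest width in the 2x2 ROI
```

In the phantom study this minimum-JSW value would then be compared against the known phantom setting for each trial.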

  18. The DiskMass Survey. VII. The distribution of luminous and dark matter in spiral galaxies

    NARCIS (Netherlands)

    Martinsson, T.P.K.; Verheijen, M.; Westfall, K.; Bershady, M.; Andersen, D.; Swaters, R.

    2013-01-01

    We present dynamically-determined rotation-curve mass decompositions of 30 spiral galaxies, which were carried out to test the maximum-disk hypothesis and to quantify properties of their dark-matter halos. We used measured vertical velocity dispersions of the disk stars to calculate dynamical mass

  19. The DiskMass Survey. VII. The distribution of luminous and dark matter in spiral galaxies

    NARCIS (Netherlands)

    Martinsson, Thomas P. K.; Verheijen, Marc A. W.; Westfall, Kyle B.; Bershady, Matthew A.; Andersen, David R.; Swaters, Rob A.

    We present dynamically-determined rotation-curve mass decompositions of 30 spiral galaxies, which were carried out to test the maximum-disk hypothesis and to quantify properties of their dark-matter halos. We used measured vertical velocity dispersions of the disk stars to calculate dynamical mass

  20. MassTRIX: mass translator into pathways.

    Science.gov (United States)

    Suhre, Karsten; Schmitt-Kopplin, Philippe

    2008-07-01

    Recent technical advances in mass spectrometry (MS) have brought the field of metabolomics to a point where large numbers of metabolites from numerous prokaryotic and eukaryotic organisms can now be easily and precisely detected. The challenge today lies in the correct annotation of these metabolites on the basis of their accurate measured masses. Assignment of bulk chemical formula is generally possible, but without consideration of the biological and genomic context, concrete metabolite annotations remain difficult and uncertain. MassTRIX responds to this challenge by providing a hypothesis-driven approach to high precision MS data annotation. It presents the identified chemical compounds in their genomic context as differentially colored objects on KEGG pathway maps. Information on gene transcription or differences in the gene complement (e.g. samples from different bacterial strains) can be easily added. The user can thus interpret the metabolic state of the organism in the context of its potential and, in the case of submitted transcriptomics data, real enzymatic capacities. The MassTRIX web server is freely accessible at http://masstrix.org.
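The core annotation step, matching an accurately measured mass against candidate metabolites within a ppm tolerance, can be sketched as follows (the candidate masses are standard monoisotopic values; the function, names, and 3 ppm tolerance are our illustration, not MassTRIX's actual implementation):

```python
def annotate_mass(measured_mass, candidates, tol_ppm=3.0):
    """Match a measured neutral monoisotopic mass (Da) against candidate
    metabolite masses within a ppm tolerance.

    candidates: dict of name -> exact mass (Da). Returns (name, ppm error)
    pairs sorted by absolute mass error.
    """
    hits = []
    for name, exact in candidates.items():
        ppm = (measured_mass - exact) / exact * 1e6
        if abs(ppm) <= tol_ppm:
            hits.append((name, round(ppm, 2)))
    return sorted(hits, key=lambda h: abs(h[1]))

# Illustrative KEGG-style candidate list (monoisotopic masses)
kegg_like = {
    "glucose (C6H12O6)": 180.06339,
    "citrate (C6H8O7)": 192.02700,
    "glutamate (C5H9NO4)": 147.05316,
}
print(annotate_mass(180.06350, kegg_like))  # only glucose falls within 3 ppm
```

In practice the genomic context then narrows such hit lists further, which is the point of the pathway-map presentation.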

  1. Radiology reports: a quantifiable and objective textual approach

    International Nuclear Information System (INIS)

    Scott, J.A.; Palmer, E.L.

    2015-01-01

    Aim: To examine the feasibility of using automated lexical analysis in conjunction with machine learning to create a means of objectively characterising radiology reports for quality improvement. Materials and methods: Twelve lexical parameters were quantified from the collected reports of four radiologists. These included the number of different words used, number of sentences, reading grade, readability, usage of the passive voice, and lexical metrics of concreteness, ambivalence, complexity, passivity, embellishment, communication and cognition. Each radiologist was statistically compared to the mean of the group for each parameter to determine outlying report characteristics. The reproducibility of these parameters in a given radiologist's reporting style was tested by using only these 12 parameters as input to a neural network designed to establish the authorship of 60 unknown reports. Results: Significant differences in report characteristics were observed between radiologists, quantifying and characterising deviations of individuals from the group reporting style. The 12 metrics employed in a neural network correctly identified the author in each of 60 unknown reports tested, indicating a robust parametric signature. Conclusion: Automated and quantifiable methods can be used to analyse reporting style and provide impartial and objective feedback as well as to detect and characterise significant differences from the group. The parameters examined are sufficiently specific to identify the authors of reports and can potentially be useful in quality improvement and residency training. - Highlights: • Radiology reports can be objectively studied based upon their lexical characteristics. • This analysis can help establish norms for reporting, resident training and authorship attribution. • This analysis can complement higher level subjective analysis in quality improvement efforts.
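A few of the simpler lexical parameters listed above can be computed with elementary heuristics; the sketch below is illustrative only (the paper's twelve metrics and their exact definitions are not reproduced here, and the passive-voice cue is a crude regular-expression stand-in):

```python
import re

def report_metrics(text):
    """Compute a handful of simple lexical parameters for a report."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    # crude passive-voice cue: a form of "to be" followed by an -ed word
    passive = len(re.findall(r"\b(?:is|are|was|were|been|being|be)\s+\w+ed\b",
                             text.lower()))
    return {
        "sentences": len(sentences),
        "words": len(words),
        "distinct_words": len(set(words)),
        "passive_hits": passive,
    }

m = report_metrics("The lungs are clear. No effusion is identified. "
                   "Heart size is normal.")
print(m)
```

Feeding such per-report feature vectors to a classifier is one way to attempt the authorship-attribution step described in the abstract.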

  2. Using nitrate to quantify quick flow in a karst aquifer

    Science.gov (United States)

    Mahler, B.J.; Garner, B.D.

    2009-01-01

    In karst aquifers, contaminated recharge can degrade spring water quality, but quantifying the rapid recharge (quick flow) component of spring flow is challenging because of its temporal variability. Here, we investigate the use of nitrate in a two-endmember mixing model to quantify quick flow in Barton Springs, Austin, Texas. Historical nitrate data from recharging creeks and Barton Springs were evaluated to determine a representative nitrate concentration for the aquifer water endmember (1.5 mg/L) and the quick flow endmember (0.17 mg/L for nonstormflow conditions and 0.25 mg/L for stormflow conditions). Under nonstormflow conditions for 1990 to 2005, model results indicated that quick flow contributed from 0% to 55% of spring flow. The nitrate-based two-endmember model was applied to the response of Barton Springs to a storm and results compared to those produced using the same model with δ18O and specific conductance (SC) as tracers. Additionally, the mixing model was modified to allow endmember quick flow values to vary over time. Of the three tracers, nitrate appears to be the most advantageous because it is conservative and because the difference between the concentrations in the two endmembers is large relative to their variance. The δ18O-based model was very sensitive to variability within the quick flow endmember, and SC was not conservative over the timescale of the storm response. We conclude that a nitrate-based two-endmember mixing model might provide a useful approach for quantifying the temporally variable quick flow component of spring flow in some karst systems. © 2008 National Ground Water Association.
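The two-endmember mass balance behind this approach can be written out directly: if the spring concentration is a flow-weighted mix of the two endmembers, the quick-flow fraction follows by rearrangement. The sketch below (our naming) uses the nonstormflow endmember concentrations reported in the abstract:

```python
def quick_flow_fraction(c_spring, c_aquifer=1.5, c_quick=0.17):
    """Two-endmember mixing model for spring nitrate (mg/L).

    Solves C_spring = f*C_quick + (1 - f)*C_aquifer for the quick-flow
    fraction f, clamped to the physically meaningful range [0, 1].
    """
    if c_aquifer == c_quick:
        raise ValueError("endmember concentrations must differ")
    f = (c_aquifer - c_spring) / (c_aquifer - c_quick)
    return min(max(f, 0.0), 1.0)

print(quick_flow_fraction(1.5))              # 0.0 (pure aquifer water)
print(round(quick_flow_fraction(0.9), 3))    # ~0.451 (about 45% quick flow)
```

The model's sensitivity hinges on the denominator: the larger the gap between the endmember concentrations relative to their variance, the more stable the estimated fraction, which is why nitrate outperformed δ18O and SC here.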

  3. A simple method for quantifying jump loads in volleyball athletes.

    Science.gov (United States)

    Charlton, Paula C; Kenneally-Dabrowski, Claire; Sheppard, Jeremy; Spratford, Wayne

    2017-03-01

    Evaluate the validity of a commercially available wearable device, the Vert, for measuring vertical displacement and jump count in volleyball athletes, and propose a potential method of quantifying external load during training and match play within this population. Validation study. The ability of the Vert device to measure vertical displacement in male, junior elite volleyball athletes was assessed against reference-standard laboratory motion analysis. The ability of the Vert device to count jumps during training and match play was assessed via comparison with retrospective video analysis to determine precision and recall. A method of quantifying external load, known as the load index (LdIx) algorithm, was proposed using the product of the jump count and average kinetic energy. Correlations between two separate Vert devices and three-dimensional trajectory data were good to excellent for all jump types performed (r=0.83-0.97), with a mean bias of between 3.57 and 4.28 cm. When matched against jumps identified through video analysis, the Vert demonstrated excellent precision (0.995-1.000), evidenced by a low number of false positives. The number of false negatives identified with the Vert was higher, resulting in lower recall values (0.814-0.930). The Vert is a commercially available tool that has potential for measuring vertical displacement and jump count in elite junior volleyball athletes without the need for time-consuming analysis and bespoke software, allowing the collected data to better quantify load using the proposed algorithm (LdIx). Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
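The abstract defines the load index only as the product of jump count and average kinetic energy, so one plausible reading (our assumption, not the authors' exact formula) estimates per-jump take-off kinetic energy from the measured vertical displacement via v = √(2gh), giving KE = mgh:

```python
G = 9.81  # gravitational acceleration, m/s^2

def load_index(jump_heights_m, body_mass_kg):
    """Hypothetical load index (LdIx): jump count times average kinetic energy.

    Per-jump take-off kinetic energy is inferred from vertical displacement h
    as KE = 0.5*m*v^2 with v = sqrt(2*g*h), i.e. KE = m*g*h (joules).
    """
    if not jump_heights_m:
        return 0.0
    count = len(jump_heights_m)
    avg_ke = sum(body_mass_kg * G * h for h in jump_heights_m) / count
    return count * avg_ke

# Three jumps of 0.30, 0.40 and 0.35 m by an 80 kg athlete
print(round(load_index([0.30, 0.40, 0.35], 80.0), 1))
```

Because count × average equals the sum, this reading of LdIx accumulates total estimated jump energy over a session, which is consistent with its use as an external-load measure.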

  4. A simplified score to quantify comorbidity in COPD.

    Directory of Open Access Journals (Sweden)

    Nirupama Putcha

    Full Text Available Comorbidities are common in COPD, but quantifying their burden is difficult. Currently there is a COPD-specific comorbidity index to predict mortality and another to predict general quality of life. We sought to develop and validate a COPD-specific comorbidity score that reflects comorbidity burden on patient-centered outcomes. Using the COPDGene study (GOLD II-IV COPD), we developed comorbidity scores to describe patient-centered outcomes employing three techniques: (1) simple count, (2) weighted score, and (3) weighted score based upon a statistical selection procedure. We tested associations, area under the curve (AUC) and calibration statistics to validate scores internally with outcomes of respiratory disease-specific quality of life (St. George's Respiratory Questionnaire, SGRQ), six-minute walk distance (6MWD), modified Medical Research Council (mMRC) dyspnea score and exacerbation risk, ultimately choosing one score for external validation in SPIROMICS. Associations between comorbidities and all outcomes were comparable across the three scores. All scores added predictive ability to models including age, gender, race, current smoking status, pack-years smoked and FEV1 (p<0.001 for all comparisons). AUC was similar between all three scores across outcomes: SGRQ (range 0.7624-0.7676), mMRC (0.7590-0.7644), 6MWD (0.7531-0.7560) and exacerbation risk (0.6831-0.6919). Because of similar performance, the comorbidity count was used for external validation. In the SPIROMICS cohort, the comorbidity count performed well to predict SGRQ (AUC 0.7891), mMRC (AUC 0.7611), 6MWD (AUC 0.7086), and exacerbation risk (AUC 0.7341). Quantifying comorbidity provides a more thorough understanding of the risk for patient-centered outcomes in COPD. A comorbidity count performs well to quantify comorbidity in a diverse population with COPD.
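The simple-count technique, the score ultimately chosen for external validation, can be illustrated in a few lines; the condition list and field names below are ours, not the COPDGene-selected set:

```python
def comorbidity_count(patient, conditions):
    """Simple comorbidity count: one point per condition present.

    patient: dict mapping condition name -> bool (present/absent).
    """
    return sum(1 for c in conditions if patient.get(c, False))

# Illustrative condition list (NOT the study's selected comorbidities)
CONDITIONS = ["hypertension", "diabetes", "heart_failure",
              "gerd", "osteoporosis", "depression"]

p = {"hypertension": True, "gerd": True, "depression": False}
print(comorbidity_count(p, CONDITIONS))  # 2
```

The appeal of the count over the weighted scores, given their comparable AUCs, is exactly this simplicity: no fitted weights to transport between cohorts.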

  5. Quantifying the ventilatory control contribution to sleep apnoea using polysomnography.

    Science.gov (United States)

    Terrill, Philip I; Edwards, Bradley A; Nemati, Shamim; Butler, James P; Owens, Robert L; Eckert, Danny J; White, David P; Malhotra, Atul; Wellman, Andrew; Sands, Scott A

    2015-02-01

    Elevated loop gain, consequent to hypersensitive ventilatory control, is a primary nonanatomical cause of obstructive sleep apnoea (OSA) but it is not possible to quantify this in the clinic. Here we provide a novel method to estimate loop gain in OSA patients using routine clinical polysomnography alone. We use the concept that spontaneous ventilatory fluctuations due to apnoeas/hypopnoeas (disturbance) result in opposing changes in ventilatory drive (response) as determined by loop gain (response/disturbance). Fitting a simple ventilatory control model (including chemical and arousal contributions to ventilatory drive) to the ventilatory pattern of OSA reveals the underlying loop gain. Following mathematical-model validation, we critically tested our method in patients with OSA by comparison with a standard (continuous positive airway pressure (CPAP) drop method), and by assessing its ability to detect the known reduction in loop gain with oxygen and acetazolamide. Our method quantified loop gain from baseline polysomnography (correlation versus CPAP-estimated loop gain: n=28; r=0.63, p<0.001), detected the known reduction in loop gain with oxygen (n=11; mean±sem change in loop gain (ΔLG) -0.23±0.08, p=0.02) and acetazolamide (n=11; ΔLG -0.20±0.06, p=0.005), and predicted the OSA response to loop gain-lowering therapy. We validated a means to quantify the ventilatory control contribution to OSA pathogenesis using clinical polysomnography, enabling identification of likely responders to therapies targeting ventilatory control. Copyright ©ERS 2015.
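The core idea, that ventilatory drive responds in opposition to a ventilatory disturbance with gain LG = response/disturbance, can be illustrated with a least-squares estimate. This is a toy sketch under a linear, lag-free assumption; the paper's actual method fits a full ventilatory control model with chemical and arousal components:

```python
def estimate_loop_gain(disturbance, response):
    """Least-squares estimate of loop gain, assuming response = -LG * disturbance.

    disturbance: ventilation deviations from baseline (apnoeas/hypopnoeas).
    response: opposing changes in ventilatory drive.
    """
    n = len(disturbance)
    mean_d = sum(disturbance) / n
    mean_r = sum(response) / n
    cov = sum((d - mean_d) * (r - mean_r)
              for d, r in zip(disturbance, response))
    var = sum((d - mean_d) ** 2 for d in disturbance)
    return -cov / var

# Synthetic check: drive opposes ventilation with LG = 0.8
dist = [-1.0, -0.5, 0.0, 0.5, 1.0]
resp = [0.8, 0.4, 0.0, -0.4, -0.8]
print(estimate_loop_gain(dist, resp))  # 0.8
```

An LG near or above 1 means the compensatory response is as large as the disturbance that provoked it, which is why elevated loop gain destabilizes breathing.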

  6. Glycomics using mass spectrometry

    OpenAIRE

    Wuhrer, Manfred

    2013-01-01

    Mass spectrometry plays an increasingly important role in structural glycomics. This review provides an overview on currently used mass spectrometric approaches such as the characterization of glycans, the analysis of glycopeptides obtained by proteolytic cleavage of proteins and the analysis of glycosphingolipids. The given examples are demonstrating the application of mass spectrometry to study glycosylation changes associated with congenital disorders of glycosylation, lysosomal storage di...

  7. Mass spectrometry in oceanography

    International Nuclear Information System (INIS)

    Aggarwal, Suresh K.

    2000-01-01

    Mass spectrometry plays an important role in oceanography for various applications. Different types of inorganic as well as organic mass spectrometric techniques are being exploited world-wide to understand the different aspects of marine science, for palaeogeography, palaeoclimatology and palaeoecology, for isotopic composition and concentrations of different elements as well as for speciation studies. The present paper reviews some of the applications of atomic mass spectrometric techniques in the area of oceanography

  8. Large mass storage facility

    International Nuclear Information System (INIS)

    Peskin, A.M.

    1978-01-01

    The report of a committee to study the questions surrounding possible acquisition of a large mass-storage device is presented. The current computing environment at BNL and justification for an online large mass storage device are briefly discussed. Possible devices to meet the requirements of large mass storage are surveyed, including future devices. The future computing needs of BNL are prognosticated. 2 figures, 4 tables

  9. Probabilistic structural analysis to quantify uncertainties associated with turbopump blades

    Science.gov (United States)

    Nagpal, Vinod K.; Rubinstein, Robert; Chamis, Christos C.

    1987-01-01

    A probabilistic study of turbopump blades has been in progress at NASA Lewis Research Center for the last two years. The objectives of this study are to evaluate the effects of uncertainties in geometry and material properties on the structural response of the turbopump blades and to evaluate the tolerance limits on the design. A methodology based on a probabilistic approach has been developed to quantify the effects of the random uncertainties. The results of this study indicate that only the variations in geometry have significant effects.

  10. Quantifying uncertainties in the structural response of SSME blades

    Science.gov (United States)

    Nagpal, Vinod K.

    1987-01-01

    To quantify the uncertainties associated with the geometry and material properties of a Space Shuttle Main Engine (SSME) turbopump blade, a computer code known as STAEBL was used. A finite element model of the blade used 80 triangular shell elements with 55 nodes and five degrees of freedom per node. The whole study was simulated on the computer and no real experiments were conducted. The structural response has been evaluated in terms of three variables which are natural frequencies, root (maximum) stress, and blade tip displacements. The results of the study indicate that only the geometric uncertainties have significant effects on the response. Uncertainties in material properties have insignificant effects.

  11. Quantifying DNA melting transitions using single-molecule force spectroscopy

    International Nuclear Information System (INIS)

    Calderon, Christopher P; Chen, W-H; Harris, Nolan C; Kiang, C-H; Lin, K-J

    2009-01-01

    We stretched a DNA molecule using an atomic force microscope (AFM) and quantified the mechanical properties associated with B and S forms of double-stranded DNA (dsDNA), molten DNA, and single-stranded DNA. We also fit overdamped diffusion models to the AFM time series and used these models to extract additional kinetic information about the system. Our analysis provides additional evidence supporting the view that S-DNA is a stable intermediate encountered during dsDNA melting by mechanical force. In addition, we demonstrated that the estimated diffusion models can detect dynamical signatures of conformational degrees of freedom not directly observed in experiments.
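Fitting an overdamped diffusion model to a measured time series can be sketched with simple moment estimators; the version below assumes constant drift and diffusion coefficients (the paper fits richer, state-dependent models to the AFM records):

```python
import math
import random

def estimate_drift_diffusion(x, dt):
    """Moment estimates for an overdamped diffusion dX = mu*dt + sqrt(2*D)*dW
    from an evenly sampled series x (constant coefficients assumed)."""
    dx = [b - a for a, b in zip(x, x[1:])]
    n = len(dx)
    mu = sum(dx) / (n * dt)                        # drift from mean increment
    var = sum((d - mu * dt) ** 2 for d in dx) / n  # variance about the drift
    D = var / (2.0 * dt)                           # diffusion coefficient
    return mu, D

# Synthetic trajectory with known mu = 2.0, D = 0.5
random.seed(0)
dt, mu_true, D_true = 1e-3, 2.0, 0.5
x = [0.0]
for _ in range(20000):
    x.append(x[-1] + mu_true * dt
             + math.sqrt(2 * D_true * dt) * random.gauss(0, 1))
mu_hat, D_hat = estimate_drift_diffusion(x, dt)
print(round(D_hat, 2))  # close to 0.5
```

Estimated coefficients of this kind are what let the authors extract kinetic information, such as effective barriers between the B, S and molten states, beyond the static force-extension curve.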

  12. Quantifying DNA melting transitions using single-molecule force spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Calderon, Christopher P [Department of Computational and Applied Mathematics, Rice University, Houston, TX (United States); Chen, W-H; Harris, Nolan C; Kiang, C-H [Department of Physics and Astronomy, Rice University, Houston, TX (United States); Lin, K-J [Department of Chemistry, National Chung Hsing University, Taichung, Taiwan (China)], E-mail: chkiang@rice.edu

    2009-01-21

    We stretched a DNA molecule using an atomic force microscope (AFM) and quantified the mechanical properties associated with B and S forms of double-stranded DNA (dsDNA), molten DNA, and single-stranded DNA. We also fit overdamped diffusion models to the AFM time series and used these models to extract additional kinetic information about the system. Our analysis provides additional evidence supporting the view that S-DNA is a stable intermediate encountered during dsDNA melting by mechanical force. In addition, we demonstrated that the estimated diffusion models can detect dynamical signatures of conformational degrees of freedom not directly observed in experiments.

  13. Quantifying unsteadiness and dynamics of pulsatory volcanic activity

    Science.gov (United States)

    Dominguez, L.; Pioli, L.; Bonadonna, C.; Connor, C. B.; Andronico, D.; Harris, A. J. L.; Ripepe, M.

    2016-06-01

    Pulsatory eruptions are marked by a sequence of explosions which can be separated by time intervals ranging from a few seconds to several hours. The quantification of the periodicities associated with these eruptions is essential not only for the comprehension of the mechanisms controlling explosivity, but also for classification purposes. We focus on the dynamics of pulsatory activity and quantify unsteadiness based on the distribution of the repose time intervals between single explosive events in relation to magma properties and eruptive styles. A broad range of pulsatory eruption styles are considered, including Strombolian, violent Strombolian and Vulcanian explosions. We find a general relationship between the median of the observed repose times in eruptive sequences and the viscosity of magma given by η ≈ 100 · t_median. This relationship applies to the complete range of magma viscosities considered in our study (10² to 10⁹ Pa s) regardless of the eruption length, eruptive style and associated plume heights, suggesting that viscosity is the main magma property controlling eruption periodicity. Furthermore, the analysis of the explosive sequences in terms of failure time through statistical survival analysis provides further information: dynamics of pulsatory activity can be successfully described in terms of frequency and regularity of the explosions, quantified based on the log-logistic distribution. A linear relationship is identified between the log-logistic parameters, μ and s. This relationship is useful for quantifying differences among eruptive styles from very frequent and regular mafic events (Strombolian activity) to more sporadic and irregular Vulcanian explosions in silicic systems. The time scale controlled by the parameter μ, as a function of the median of the distribution, can be therefore correlated with the viscosity of magmas; while the complexity of the erupting system, including magma rise rate, degassing and fragmentation efficiency
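The empirical relation η ≈ 100 · t_median can be applied directly; a minimal sketch (function name ours), with repose times in seconds and viscosity in Pa s:

```python
import statistics

def magma_viscosity_pa_s(repose_times_s):
    """Order-of-magnitude magma viscosity from explosion repose times,
    using the empirical relation η ≈ 100 * t_median (Pa s)."""
    t_median = statistics.median(repose_times_s)
    return 100.0 * t_median

# Strombolian-style sequence: explosions tens of seconds apart
print(magma_viscosity_pa_s([12, 20, 35, 18, 25]))  # 2000.0 Pa s (median 20 s)
```

A median repose of 20 s thus maps to a mafic-range viscosity of ~2 × 10³ Pa s, while hour-long reposes typical of Vulcanian systems map to far more viscous, silicic magmas.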

  14. Quantifying environmental performance using an environmental footprint calculator

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.B.; Loney, A.C.; Chan, V. [Conestoga-Rovers & Associates, Waterloo, Ontario (Canada)

    2009-07-01

    This paper provides a case study using relevant key performance indicators (KPIs) to evaluate the environmental performance of a business. Using recognized calculation and reporting frameworks, Conestoga-Rovers & Associates (CRA) designed the Environmental Footprint Calculator to quantify the environmental performance of a Canadian construction materials company. CRA designed the Environmental Footprint calculator for our client to track and report their environmental performance in accordance with their targets, based on requirements of relevant guidance documents. The objective was to design a tool that effectively manages, calculates, and reports environmental performance to various stakeholders in a user-friendly format. (author)

  15. Quantifying capital goods for biological treatment of organic waste

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Petersen, Per H.; Nielsen, Peter D.

    2015-01-01

    on the different sizes for the three different types of waste (garden and park waste, food waste and sludge from wastewater treatment) in amounts of 10,000 or 50,000 tonnes per year. The AD plant was quantified for a capacity of 80,000 tonnes per year. Concrete and steel for the tanks were the main materials...... for the AD plant. For the composting plants, gravel and concrete slabs for the pavement were used in large amounts. To frame the quantification, environmental impact assessments (EIAs) showed that the steel used for tanks at the AD plant and the concrete slabs at the composting plants made the highest......

  16. Quantifying capital goods for collection and transport of waste

    DEFF Research Database (Denmark)

    Brogaard, Line Kai-Sørensen; Christensen, Thomas Højlund

    2012-01-01

    The capital goods for collection and transport of waste were quantified for different types of containers (plastic containers, cubes and steel containers) and an 18-tonnes compacting collection truck. The data were collected from producers and vendors of the bins and the truck. The service lifetime...... tonne of waste handled. The impact of producing the capital goods for waste collection and transport cannot be neglected as the capital goods dominate (>85%) the categories human-toxicity (non-cancer and cancer), ecotoxicity, resource depletion and aquatic eutrophication, but also play a role (>13...

  17. Quantifying camouflage: how to predict detectability from appearance.

    Science.gov (United States)

    Troscianko, Jolyon; Skelhorn, John; Stevens, Martin

    2017-01-06

    Quantifying the conspicuousness of objects against particular backgrounds is key to understanding the evolution and adaptive value of animal coloration, and in designing effective camouflage. Quantifying detectability can reveal how colour patterns affect survival, how animals' appearances influence habitat preferences, and how receiver visual systems work. Advances in calibrated digital imaging are enabling the capture of objective visual information, but it remains unclear which methods are best for measuring detectability. Numerous descriptions and models of appearance have been used to infer the detectability of animals, but these models are rarely empirically validated or directly compared to one another. We compared the performance of human 'predators' to a bank of contemporary methods for quantifying the appearance of camouflaged prey. Background matching was assessed using several established methods, including sophisticated feature-based pattern analysis, granularity approaches and a range of luminance and contrast difference measures. Disruptive coloration is a further camouflage strategy where high contrast patterns disrupt the prey's tell-tale outline, making it more difficult to detect. Disruptive camouflage has been studied intensely over the past decade, yet defining and measuring it have proven far more problematic. We assessed how well existing disruptive coloration measures predicted capture times. Additionally, we developed a new method for measuring edge disruption based on an understanding of sensory processing and the way in which false edges are thought to interfere with animal outlines. Our novel measure of disruptive coloration was the best predictor of capture times overall, highlighting the importance of false edges in concealment over and above pattern or luminance matching.
The efficacy of our new method for measuring disruptive camouflage together with its biological plausibility and computational efficiency represents a substantial

  18. Quantifying the energetics of cooperativity in a ternary protein complex

    DEFF Research Database (Denmark)

    Andersen, Peter S; Schuck, Peter; Sundberg, Eric J

    2002-01-01

    and mathematical modeling to describe the energetics of cooperativity in a trimolecular protein complex. As a model system for quantifying cooperativity, we studied the ternary complex formed by the simultaneous interaction of a superantigen with major histocompatibility complex and T cell receptor, for which...... a structural model is available. This system exhibits positive and negative cooperativity, as well as augmentation of the temperature dependence of binding kinetics upon the cooperative interaction of individual protein components in the complex. Our experimental and theoretical analysis may be applicable...... to other systems involving cooperativity....

  19. The quantified patient of the future: Opportunities and challenges.

    Science.gov (United States)

    Majmudar, Maulik D; Colucci, Lina Avancini; Landman, Adam B

    2015-09-01

    The healthcare system is undergoing rapid transformation as national policies increase patient access, reward positive health outcomes, and push for an end to the current era of episodic care. Advances in health sensors are rapidly moving diagnostic and monitoring capabilities into consumer products, enabling new care models. Although hospitals and health care providers have been slow to embrace novel health technologies, such innovations may help meet mounting pressure to provide timely, high quality, and low-cost care to large populations. This leading edge perspective focuses on the quantified-self movement and highlights the opportunities and challenges for patients, providers, and researchers. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Quantifying environmental performance using an environmental footprint calculator

    International Nuclear Information System (INIS)

    Smith, D.B.; Loney, A.C.; Chan, V.

    2009-01-01

    This paper provides a case study using relevant key performance indicators (KPIs) to evaluate the environmental performance of a business. Using recognized calculation and reporting frameworks, Conestoga-Rovers & Associates (CRA) designed the Environmental Footprint Calculator to quantify the environmental performance of a Canadian construction materials company. CRA designed the Environmental Footprint calculator for our client to track and report their environmental performance in accordance with their targets, based on requirements of relevant guidance documents. The objective was to design a tool that effectively manages, calculates, and reports environmental performance to various stakeholders in a user-friendly format. (author)

  1. The origin of mass

    International Nuclear Information System (INIS)

    Cashmore, R.; Sutton, C.

    1992-01-01

    The existence of mass in the Universe remains unexplained but recent high-energy experiments, described in this article, are close to testing the most plausible explanation for the masses of fundamental particles, which may, in turn, lead to a clearer understanding of mass on the macro-scale. The Standard Model includes the concept of the Higgs mechanism which endows particles with mass. Actual evidence for the existence of the postulated particle known as the Higgs boson would lead to confirmation of the theory and efforts to detect it at CERN are complex and determined. (UK)

  2. MassAI

    DEFF Research Database (Denmark)

    2011-01-01

    A software tool for general analysis and data-mining of mass-spectrometric datasets. The program features a strong emphasis on scan-by-scan identification and results-transparency. MassAI also accommodates residue level analysis of labelled runs, e.g. HDX.

  3. Masses of Cepheids

    International Nuclear Information System (INIS)

    Fox, A.N.

    1980-01-01

    About ten years ago it became apparent that the masses of Cepheids predicted from the theory of stellar evolution were larger than those predicted by pulsation theory. This mass anomaly for the classical Cepheids was displayed by Christy (1968) and Stobie (1969a,b,c) using nonlinear hydrodynamic calculations and by Cogan (1970) using linear theory. Rodgers (1970) has also discussed the several mass anomalies in some detail. These mass anomalies, and some others to be discussed, have not yet been completely resolved, but many of the discrepancies have been alleviated mostly by an increase in the Cepheid luminosities and a decrease in their surface temperatures

  4. The increase of breath ammonia induced by niacin ingestion quantified by selected ion flow tube mass spectrometry

    Czech Academy of Sciences Publication Activity Database

    Smith, D.; Wang, T. S.; Španěl, Patrik; Bloor, R.

    2006-01-01

    Vol. 27, No. 6 (2006), pp. 437-444. ISSN 0967-3334. Institutional research plan: CEZ:AV0Z40400503. Keywords: SIFT-MS * ammonia * breath analysis * niacin. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 1.438, year: 2006

  5. Educational Differences in Postmenopausal Breast Cancer - Quantifying Indirect Effects through Health Behaviors, Body Mass Index and Reproductive Patterns

    DEFF Research Database (Denmark)

    Hvidtfeldt, Ulla Arthur; Lange, Theis; Andersen, Ingelise

    2013-01-01

    Studying mechanisms underlying social inequality in postmenopausal breast cancer is important in order to develop prevention strategies. Standard methods for investigating indirect effects, by comparing crude models to adjusted, are often biased. We applied a new method enabling the decomposition......-years at risk. Of these, 26% (95% CI 14%-69%) could be attributed to alcohol consumption. Similar effects were observed for age at first birth (32%; 95% CI 10%-257%), parity (19%; 95%CI 10%-45%), and hormone therapy use (10%; 95% CI 6%-18%). Educational level modified the effect of physical activity on breast...

  6. Use of Mass-Flux Measurement and Vapor-Phase Tomography to Quantify Vadose-Zone Source Strength and Distribution

    Science.gov (United States)

    2016-01-01

    Variable definitions noted: K = permeability; (Pa - Pb)i = pressure differential between the sampling location and the extraction well; µ = viscosity. Values for permeability ... A case study of soil-gas sampling in silt- and clay-rich (low-permeability) soils. Ground Water Monitor. Remed. 29: 144-152. Ni, C.F. and T-CJ

  7. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
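The straight-line fit near short-circuit current can be sketched numerically. Below is a minimal ordinary-least-squares version (not the paper's objective Bayesian regression), with hypothetical I-V points; it extrapolates to V = 0 for Isc and propagates the intercept's standard uncertainty:

```python
import math

# Hypothetical I-V points near short-circuit (V in volts, I in amperes).
v = [0.00, 0.02, 0.04, 0.06, 0.08]
i = [5.000, 4.996, 4.991, 4.987, 4.982]

n = len(v)
vbar = sum(v) / n
ibar = sum(i) / n
sxx = sum((x - vbar) ** 2 for x in v)
slope = sum((x - vbar) * (y - ibar) for x, y in zip(v, i)) / sxx
intercept = ibar - slope * vbar          # Isc estimate: I at V = 0

# Residual standard deviation and the standard uncertainty of the intercept.
resid = [y - (intercept + slope * x) for x, y in zip(v, i)]
s = math.sqrt(sum(r * r for r in resid) / (n - 2))
u_isc = s * math.sqrt(1.0 / n + vbar ** 2 / sxx)

print(f"Isc = {intercept:.4f} A, u(Isc) = {u_isc:.5f} A")
```

Widening or narrowing the window of I-V points changes both the fit and u(Isc); that trade-off is what the evidence-based window selection in the paper addresses.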

  8. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  9. Quantifying the measurement uncertainty of results from environmental analytical methods.

    Science.gov (United States)

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
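A root-sum-of-squares combination of uncertainty components, as in the Eurachem/CITAC Guide, can be sketched as follows; the component names and values are hypothetical, with reproducibility dominating as the paper suggests:

```python
import math

# Hypothetical relative standard uncertainties for one environmental measurand,
# dominated (as argued above) by the reproducibility standard deviation.
components = {
    "reproducibility": 0.050,
    "calibration":     0.012,
    "recovery":        0.020,
}

# Combined standard uncertainty: root sum of squares of the components.
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
U_expanded = 2.0 * u_combined   # coverage factor k = 2 (~95 % coverage)

print(f"u_c = {u_combined:.4f}, U (k=2) = {U_expanded:.4f}")
```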

  10. Statistical Measures to Quantify Similarity between Molecular Dynamics Simulation Trajectories

    Directory of Open Access Journals (Sweden)

    Jenny Farmer

    2017-11-01

    Full Text Available Molecular dynamics simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and molecular dynamics (MD) trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα-atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include Jensen-Shannon, modifications of Kullback-Leibler divergence, and local p-values from 1-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate by using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between Cα-atom pairs. Results from several implementations for quantitative comparison of a pair of MD trajectories are presented, based on fluctuations for on-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins, as evident from extraordinarily low p-values.
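A minimal sketch of two of the measures named above, per-residue RMSF and the Jensen-Shannon divergence, on toy data (the two-residue trajectory and the distributions are invented, and no maximum-entropy density estimation is attempted):

```python
import math

def rmsf(traj):
    """Root mean square fluctuation per residue.
    traj: list of frames, each a list of (x, y, z) positions."""
    n_res = len(traj[0])
    out = []
    for r in range(n_res):
        pts = [frame[r] for frame in traj]
        mean = [sum(c) / len(pts) for c in zip(*pts)]
        msd = sum(sum((p[k] - mean[k]) ** 2 for k in range(3)) for p in pts) / len(pts)
        out.append(math.sqrt(msd))
    return out

def jensen_shannon(p, q):
    """Jensen-Shannon divergence with base-2 logs, so 0 <= JSD <= 1."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy two-residue trajectory: residue 0 is rigid, residue 1 fluctuates in x.
traj = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)],
    [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)],
]
flucts = rmsf(traj)
jsd = jensen_shannon([0.5, 0.5], [0.9, 0.1])
print(flucts, jsd)
```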

  11. Quantifying construction and demolition waste: An analytical review

    International Nuclear Information System (INIS)

    Wu, Zezhou; Yu, Ann T.W.; Shen, Liyin; Liu, Guiwen

    2014-01-01

    Highlights: • Prevailing C&D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attention should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested
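The waste generation rate method, one of the six categories identified, can be sketched as floor area multiplied by a per-area rate; the project data and rates below are hypothetical, not from the review:

```python
# Hypothetical generation-rate calculation: waste (t) = GFA (m^2) * rate (kg/m^2).
projects = [
    {"name": "residential tower", "gfa_m2": 12000, "rate_kg_per_m2": 40.0},
    {"name": "office refit",      "gfa_m2": 3500,  "rate_kg_per_m2": 25.0},
]

total_t = 0.0
for p in projects:
    waste_t = p["gfa_m2"] * p["rate_kg_per_m2"] / 1000.0  # kg -> tonnes
    p["waste_t"] = waste_t
    total_t += waste_t

print(f"estimated C&D waste: {total_t:.1f} t")
```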

  12. Methods to Quantify Nickel in Soils and Plant Tissues

    Directory of Open Access Journals (Sweden)

    Bruna Wurr Rodak

    2015-06-01

    Full Text Available In comparison with other micronutrients, the levels of nickel (Ni) available in soils and plant tissues are very low, making quantification very difficult. The objective of this paper is to present optimized methods for determining Ni availability in soils by extractants, and total content in plant tissues, for routine commercial laboratory analyses. Samples of natural and agricultural soils were processed and analyzed by Mehlich-1 extraction and by DTPA. To quantify Ni in the plant tissues, samples were digested with nitric acid in a closed system in a microwave oven. The measurement was performed by inductively coupled plasma/optical emission spectrometry (ICP-OES). There was a positive and significant correlation between the levels of available Ni in the soils subjected to Mehlich-1 and DTPA extraction, while for plant tissue samples the Ni levels recovered were high and similar to the reference materials. The availability of Ni in some of the natural soil and plant tissue samples was lower than the limits of quantification. Concentrations of this micronutrient were higher in the soil samples in which Ni had been applied. Nickel concentration differed among the plant parts analyzed, with the highest levels in the grains of soybean. Grain concentrations, in comparison with shoot and leaf concentrations, were better correlated with soil-available levels for both extractants. The methods described in this article were efficient in quantifying Ni and can be used for routine laboratory analysis of soils and plant tissues.

  13. Development of an algorithm for quantifying extremity biological tissue

    International Nuclear Information System (INIS)

    Pavan, Ana L.M.; Miranda, Jose R.A.; Pina, Diana R. de

    2013-01-01

    Computed radiography (CR) has become the most widely used device for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis, obtained via CR, are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used for optimization of these images are based on international protocols. Therefore, it is necessary to compose radiographic techniques for the CR system that provide a secure medical diagnosis, with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was developed. Using the Matlab® software, a computational algorithm was developed to quantify the average thickness of the soft tissue and bone present in the anatomical region under study, as well as the corresponding thickness in simulator materials (aluminium and Lucite). This was possible through the application of masks and a Gaussian-removal technique on histograms. As a result, an average soft-tissue thickness of 18.97 mm and bone thickness of 6.15 mm were obtained, with equivalents in simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agreed with the mean thickness of biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom.
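A minimal sketch of histogram-threshold tissue quantification in the spirit of the algorithm described, using an invented 1-D profile of CT numbers and assumed Hounsfield windows (not the paper's masks or Gaussian-removal step):

```python
# Hypothetical 1-D profile of CT numbers (HU) along a ray through a hand;
# the soft-tissue and bone thresholds are illustrative, not the paper's values.
profile_hu = [-1000, -1000, 40, 60, 55, 400, 700, 650, 45, 30, -1000]
voxel_mm = 0.5                 # assumed voxel size along the ray

SOFT = (-100, 200)             # assumed soft-tissue HU window
BONE = 200                     # assumed lower HU bound for bone

# Thickness = number of voxels in each class times voxel size.
soft_mm = sum(voxel_mm for hu in profile_hu if SOFT[0] <= hu < SOFT[1])
bone_mm = sum(voxel_mm for hu in profile_hu if hu >= BONE)
print(f"soft tissue: {soft_mm} mm, bone: {bone_mm} mm")
```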

  14. A framework for quantifying net benefits of alternative prognostic models.

    Science.gov (United States)

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-30

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.

  15. A framework for quantifying net benefits of alternative prognostic models‡

    Science.gov (United States)

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21905066

  16. Quantifying potential recharge in mantled sinkholes using ERT.

    Science.gov (United States)

    Schwartz, Benjamin F; Schreiber, Madeline E

    2009-01-01

    Potential recharge through thick soils in mantled sinkholes was quantified using differential electrical resistivity tomography (ERT). Conversion of time series two-dimensional (2D) ERT profiles into 2D volumetric water content profiles using a numerically optimized form of Archie's law allowed us to monitor temporal changes in water content in soil profiles up to 9 m in depth. Combining Penman-Monteith daily potential evapotranspiration (PET) and daily precipitation data with potential recharge calculations for three sinkhole transects indicates that potential recharge occurred only during brief intervals over the study period and ranged from 19% to 31% of cumulative precipitation. Spatial analysis of ERT-derived water content showed that infiltration occurred both on sinkhole flanks and in sinkhole bottoms. Results also demonstrate that mantled sinkholes can act as regions of both rapid and slow recharge. Rapid recharge is likely the result of flow through macropores (such as root casts and thin gravel layers), while slow recharge is the result of unsaturated flow through fine-grained sediments. In addition to developing a new method for quantifying potential recharge at the field scale in unsaturated conditions, we show that mantled sinkholes are an important component of storage in a karst system.
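The conversion from bulk resistivity to volumetric water content via Archie's law can be sketched as follows; the parameter values are illustrative, not the numerically optimized ones from the study:

```python
# Archie's law: rho_t = a * rho_w * phi**(-m) * S_w**(-n), inverted for
# saturation, then volumetric water content theta = phi * S_w.
a, m, n = 1.0, 2.0, 2.0      # Archie parameters (illustrative)
phi = 0.40                   # porosity
rho_w = 20.0                 # pore-water resistivity, ohm-m

def water_content(rho_t):
    s_w = (a * rho_w / (phi ** m * rho_t)) ** (1.0 / n)
    return phi * min(s_w, 1.0)   # cap saturation at 1

theta_wet = water_content(150.0)    # low bulk resistivity -> wetter soil
theta_dry = water_content(2000.0)   # high bulk resistivity -> drier soil
print(theta_wet, theta_dry)
```

Differencing such water-content profiles between two ERT surveys is what turns the time series into a potential-recharge estimate.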

  17. Quantifying Diffuse Contamination: Method and Application to Pb in Soil.

    Science.gov (United States)

    Fabian, Karl; Reimann, Clemens; de Caritat, Patrice

    2017-06-20

    A new method for detecting and quantifying diffuse contamination at the continental to regional scale is based on the analysis of cumulative distribution functions (CDFs). It uses cumulative probability (CP) plots for spatially representative data sets, preferably containing >1000 determinations. Simulations demonstrate how different types of contamination influence elemental CDFs of different sample media. It is found that diffuse contamination is characterized by a distinctive shift of the low-concentration end of the distribution of the studied element in its CP plot. Diffuse contamination can be detected and quantified via either (1) comparing the distribution of the contaminating element to that of an element with a geochemically comparable behavior but no contamination source (e.g., Pb vs Rb), or (2) comparing the top soil distribution of an element to the distribution of the same element in subsoil samples from the same area, taking soil forming processes into consideration. Both procedures are demonstrated for geochemical soil data sets from Europe, Australia, and the U.S.A. Several different data sets from Europe deliver comparable results at different scales. Diffuse Pb contamination in surface soil can thus be estimated and distinguished from discrete contamination sources, and the approach can be used to efficiently monitor diffuse contamination at the continental to regional scale.
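A minimal sketch of the low-concentration-end comparison behind procedure (2), using invented topsoil/subsoil Pb samples: diffuse (additive) contamination shows up as a larger shift at the low quantiles than at the high ones:

```python
def quantile(sorted_xs, q):
    # Simple linear-interpolation quantile on an already sorted sample.
    idx = q * (len(sorted_xs) - 1)
    lo = int(idx)
    frac = idx - lo
    hi = min(lo + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] * (1 - frac) + sorted_xs[hi] * frac

# Invented Pb concentrations (mg/kg): topsoil's low end sits well above
# subsoil's, while the high ends (near point sources) are similar.
subsoil = sorted([4, 5, 5, 6, 7, 8, 9, 10, 12, 15])
topsoil = sorted([9, 10, 10, 11, 12, 12, 13, 13, 14, 16])

shift_low = quantile(topsoil, 0.1) - quantile(subsoil, 0.1)
shift_high = quantile(topsoil, 0.9) - quantile(subsoil, 0.9)
print(shift_low, shift_high)
```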

  18. Quantifying facial paralysis using the Kinect v2.

    Science.gov (United States)

    Gaber, Amira; Taher, Mona F; Wahed, Manal Abdel

    2015-01-01

    Assessment of facial paralysis (FP) and quantitative grading of facial asymmetry are essential in order to quantify the extent of the condition as well as to follow its improvement or progression. As such, there is a need for an accurate quantitative grading system that is easy to use, inexpensive and has minimal inter-observer variability. A comprehensive automated system to quantify and grade FP is the main objective of this work. An initial prototype has been presented by the authors. The present research aims to enhance the accuracy and robustness of one of this system's modules: the resting symmetry module. This is achieved by including several modifications to the computation method of the symmetry index (SI) for the eyebrows, eyes and mouth. These modifications are the gamma correction technique, the area of the eyes, and the slope of the mouth. The system was tested on normal subjects and showed promising results. With the modified method, the mean SI of the eyebrows decreased slightly from 98.42% to 98.04%, while the mean SI for the eyes and mouth increased from 96.93% to 99.63% and from 95.6% to 98.11%, respectively. The system is easy to use, inexpensive, automated and fast, has no inter-observer variability and is thus well suited for clinical use.
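A symmetry index of the general form SI = 100 * (1 - |L - R| / max(L, R)) captures the idea of comparing left and right facial features. The record does not give the exact formula or the gamma-correction step, so this is an assumed form applied to invented left/right measures:

```python
# Assumed resting symmetry index on paired left/right feature measures
# (e.g. eye areas in pixels, mouth-corner slopes); 100 = perfect symmetry.
def symmetry_index(left, right):
    if left == right == 0:
        return 100.0
    return 100.0 * (1.0 - abs(left - right) / max(left, right))

si_eyes = symmetry_index(410.0, 402.0)   # hypothetical eye areas
si_mouth = symmetry_index(1.00, 0.97)    # hypothetical mouth measures
composite = (si_eyes + si_mouth) / 2
print(si_eyes, si_mouth, composite)
```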

  19. Digitally quantifying cerebral hemorrhage using Photoshop and Image J.

    Science.gov (United States)

    Tang, Xian Nan; Berman, Ari Ethan; Swanson, Raymond Alan; Yenari, Midori Anne

    2010-07-15

    A spectrophotometric hemoglobin assay is widely used to estimate the extent of brain hemorrhage by measuring the amount of hemoglobin in the brain. However, this method requires using the entire brain sample, leaving none for histology or other assays. Other widely used measures of gross brain hemorrhage are generally semi-quantitative and can miss subtle differences. Semi-quantitative brain hemorrhage scales may also be subject to bias. Here, we present a method to digitally quantify brain hemorrhage using Photoshop and Image J, and compared this method to the spectrophotometric hemoglobin assay. Male Sprague-Dawley rats received varying amounts of autologous blood injected into the cerebral hemispheres in order to generate different sized hematomas. 24h later, the brains were harvested, sectioned, photographed then prepared for the hemoglobin assay. From the brain section photographs, pixels containing hemorrhage were identified by Photoshop and the optical intensity was measured by Image J. Identification of hemorrhage size using optical intensities strongly correlated to the hemoglobin assay (R=0.94). We conclude that our method can accurately quantify the extent of hemorrhage. An advantage of this technique is that brain tissue can be used for additional studies. Published by Elsevier B.V.
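The pixel-identification step can be sketched without Photoshop or Image J; the color threshold below is an assumed stand-in for the manual color selection described, applied to a tiny synthetic RGB image:

```python
# Tiny synthetic brain-section image; each pixel is an (R, G, B) triple.
image = [
    [(200, 30, 30), (210, 40, 35), (90, 90, 90)],
    [(205, 25, 20), (100, 100, 100), (95, 92, 90)],
]

def is_hemorrhage(px):
    # Assumed threshold rule: strongly red pixels count as hemorrhage.
    r, g, b = px
    return r > 150 and r > 2 * g and r > 2 * b

hem_pixels = [px for row in image for px in row if is_hemorrhage(px)]
count = len(hem_pixels)
# Proxy for "optical intensity": darkness of the red channel, summed.
total_intensity = sum(255 - px[0] for px in hem_pixels)
print(count, total_intensity)
```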

  20. DIGITALLY QUANTIFYING CEREBRAL HEMORRHAGE USING PHOTOSHOP® AND IMAGE J

    Science.gov (United States)

    Tang, Xian Nan; Berman, Ari Ethan; Swanson, Raymond Alan; Yenari, Midori Anne

    2010-01-01

    A spectrophotometric hemoglobin assay is widely used to estimate the extent of brain hemorrhage by measuring the amount of hemoglobin in the brain. However, this method requires using the entire brain sample, leaving none for histology or other assays. Other widely used measures of gross brain hemorrhage are generally semi-quantitative and can miss subtle differences. Semi-quantitative brain hemorrhage scales may also be subject to bias. Here, we present a method to digitally quantify brain hemorrhage using Photoshop and Image J, and compared this method to the spectrophotometric hemoglobin assay. Male Sprague-Dawley rats received varying amounts of autologous blood injected into the cerebral hemispheres in order to generate different sized hematomas. 24 hours later, the brains were harvested, sectioned, photographed then prepared for the hemoglobin assay. From the brain section photographs, pixels containing hemorrhage were identified by Photoshop® and the optical intensity was measured by Image J. Identification of hemorrhage size using optical intensities strongly correlated to the hemoglobin assay (R=0.94). We conclude that our method can accurately quantify the extent of hemorrhage. An advantage of this technique is that brain tissue can be used for additional studies. PMID:20452374

  1. Quantifying construction and demolition waste: an analytical review.

    Science.gov (United States)

    Wu, Zezhou; Yu, Ann T W; Shen, Liyin; Liu, Guiwen

    2014-09-01

    Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Quantifying Urban Fragmentation under Economic Transition in Shanghai City, China

    Directory of Open Access Journals (Sweden)

    Heyuan You

    2015-12-01

    Full Text Available Urban fragmentation affects sustainability through multiple impacts on economic, social, and environmental costs. Characterizing the dynamics of urban fragmentation in relation to economic transition should provide implications for sustainability. However, rather few efforts have been made on this issue. Using the case of Shanghai (China), this paper quantifies urban fragmentation in relation to economic transition. In particular, urban fragmentation is quantified by a time series of remotely sensed images and a set of landscape metrics, and economic transition is described by a set of indicators covering three aspects (globalization, decentralization, and marketization). Results show that urban fragmentation follows an increasing linear trend. Multivariate regression identifies a positive linear correlation between urban fragmentation and economic transition. More specifically, the relative influence differs among the three components of economic transition: the relative influence of decentralization is stronger than that of globalization and marketization, and the joint influence of decentralization and globalization is the strongest for urban fragmentation. The demonstrated methodology can be applied to other places after suitable adjustment of the economic transition indicators and fragmentation metrics.
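One common family of fragmentation metrics counts discrete patches. A minimal sketch (the record does not list the exact landscape metrics used) on a toy binary urban/non-urban grid with 4-neighbour connectivity:

```python
# Toy binary land-cover grid: 1 = urban, 0 = non-urban.
grid = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 0, 0],
]

def count_patches(g):
    """Number of 4-connected urban patches (iterative flood fill)."""
    rows, cols = len(g), len(g[0])
    seen = [[False] * cols for _ in range(rows)]
    patches = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if g[r0][c0] == 1 and not seen[r0][c0]:
                patches += 1
                stack = [(r0, c0)]
                seen[r0][c0] = True
                while stack:
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and g[nr][nc] == 1 and not seen[nr][nc]:
                            seen[nr][nc] = True
                            stack.append((nr, nc))
    return patches

n_patches = count_patches(grid)
patch_density = n_patches / (len(grid) * len(grid[0]))  # patches per cell
print(n_patches, patch_density)
```

Rising patch density over a time series of classified images is one signature of increasing fragmentation.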

  3. Design Life Level: Quantifying risk in a changing climate

    Science.gov (United States)

    Rootzén, Holger; Katz, Richard W.

    2013-09-01

    In the past, the concepts of return levels and return periods have been standard and important tools for engineering design. However, these concepts are based on the assumption of a stationary climate and do not apply to a changing climate, whether local or global. In this paper, we propose a refined concept, Design Life Level, which quantifies risk in a nonstationary climate and can serve as the basis for communication. In current practice, typical hydrologic risk management focuses on a standard (e.g., in terms of a high quantile corresponding to the specified probability of failure for a single year). Nevertheless, the basic information needed for engineering design should consist of (i) the design life period (e.g., the next 50 years, say 2015-2064); and (ii) the probability (e.g., 5% chance) of a hazardous event (typically, in the form of the hydrologic variable exceeding a high level) occurring during the design life period. Capturing both of these design characteristics, the Design Life Level is defined as an upper quantile (e.g., 5%) of the distribution of the maximum value of the hydrologic variable (e.g., water level) over the design life period. We relate this concept and variants of it to existing literature and illustrate how they, and some useful complementary plots, may be computed and used. One practically important consideration concerns quantifying the statistical uncertainty in estimating a high quantile under nonstationarity.
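The Design Life Level itself, an upper quantile of the maximum of the hydrologic variable over the design life, can be sketched by simulation. Here annual maxima are drawn from a Gumbel distribution with a linearly trending location parameter as a stand-in for nonstationarity; all numbers are illustrative:

```python
import math
import random

random.seed(1)

def gumbel(mu, beta):
    """Draw from a Gumbel distribution via inverse-CDF sampling."""
    u = random.random()
    return mu - beta * math.log(-math.log(u))

def max_over_life(years=50, mu0=10.0, trend=0.02, beta=1.0):
    """Maximum annual water level over the design life, with a trend in mu."""
    return max(gumbel(mu0 + trend * t, beta) for t in range(years))

# Distribution of the design-life maximum; DLL = its upper 5% quantile,
# i.e. the level exceeded with 5% chance during the 50-year design life.
sims = sorted(max_over_life() for _ in range(5000))
dll_5pct = sims[int(0.95 * len(sims))]
print(f"Design Life Level (5% / 50 yr): {dll_5pct:.2f}")
```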

  4. Quantifying complexity in translational research: an integrated approach.

    Science.gov (United States)

    Munoz, David A; Nembhard, Harriet Black; Kraschnewski, Jennifer L

    2014-01-01

    The purpose of this paper is to quantify complexity in translational research. The impact of major operational steps and technical requirements is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. A three-phase integrated quality function deployment (QFD) and analytic hierarchy process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Generally, the evidence generated was valuable for understanding various components in translational research. In particular, the authors found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Current conceptual models in translational research provide little or no clue for assessing complexity. The proposed method aims to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research.
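The consistency ratio mentioned above is standard AHP machinery. A minimal sketch on a hypothetical 3x3 pairwise-comparison matrix (power iteration for the priority weights, then CR = CI/RI with Saaty's random index for n = 3):

```python
# Hypothetical pairwise-comparison matrix of three criteria (Saaty 1-9 scale).
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
n = len(A)

# Power iteration for the principal eigenvector (the priority weights).
w = [1.0 / n] * n
for _ in range(100):
    v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    s = sum(v)
    w = [x / s for x in v]

# Principal eigenvalue estimate, consistency index, consistency ratio.
lam_max = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
CI = (lam_max - n) / (n - 1)
RI = 0.58                      # Saaty's random index for n = 3
CR = CI / RI                   # CR < 0.1 is conventionally acceptable
print(w, round(CR, 3))
```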

  5. Quantifying the strength of quorum sensing crosstalk within microbial communities.

    Directory of Open Access Journals (Sweden)

    Kalinga Pavan T Silva

    2017-10-01

    Full Text Available In multispecies microbial communities, the exchange of signals such as acyl-homoserine lactones (AHL) enables communication within and between species of Gram-negative bacteria. This process, commonly known as quorum sensing, aids in the regulation of genes crucial for the survival of species within heterogeneous populations of microbes. Although signal exchange was studied extensively in well-mixed environments, less is known about the consequences of crosstalk in spatially distributed mixtures of species. Here, signaling dynamics were measured in a spatially distributed system containing multiple strains utilizing homologous signaling systems. Crosstalk between strains containing the lux, las and rhl AHL-receptor circuits was quantified. In a distributed population of microbes, the impact of community composition on spatio-temporal dynamics was characterized and compared to simulation results using a modified reaction-diffusion model. After introducing a single term to account for crosstalk between each pair of signals, the model was able to reproduce the activation patterns observed in experiments. We quantified the robustness of signal propagation in the presence of interacting signals, finding that signaling dynamics are largely robust to interference. The ability of several wild isolates to participate in AHL-mediated signaling was investigated, revealing distinct signatures of crosstalk for each species. Our results present a route to characterize crosstalk between species and predict systems-level signaling dynamics in multispecies communities.

  6. Using multiscale norms to quantify mixing and transport

    International Nuclear Information System (INIS)

    Thiffeault, Jean-Luc

    2012-01-01

    Mixing is relevant to many areas of science and engineering, including the pharmaceutical and food industries, oceanography, atmospheric sciences and civil engineering. In all these situations one goal is to quantify and often then to improve the degree of homogenization of a substance being stirred, referred to as a passive scalar or tracer. A classical measure of mixing is the variance of the concentration of the scalar, which is the L² norm of a mean-zero concentration field. Recently, other norms have been used to quantify mixing, in particular the mix-norm as well as negative Sobolev norms. These norms have the advantage that unlike variance they decay even in the absence of diffusion, and their decay corresponds to the flow being mixing in the sense of ergodic theory. General Sobolev norms weigh scalar gradients differently, and are known as multiscale norms for mixing. We review the applications of such norms to mixing and transport, and show how they can be used to optimize the stirring and mixing of a decaying passive scalar. We then review recent work on the less-studied case of a continuously replenished scalar field—the source–sink problem. In that case the flows that optimally reduce the norms are associated with transport rather than mixing: they push sources onto sinks, and vice versa. (invited article)
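
The multiscale norms described above are straightforward to compute for a periodic field via the FFT. The sketch below (with an assumed grid and test fields, not taken from the article) shows the key property: transferring a scalar to finer scales leaves the variance-based L² norm unchanged but shrinks the mix-norm (the H^(-1/2) Sobolev norm), mimicking stirring without diffusion:

```python
import numpy as np

def sobolev_norm(theta, s):
    """H^s norm of a mean-zero periodic 2-D scalar field via FFT.
    s = 0 recovers the variance-based L2 norm; s = -1/2 is the mix-norm,
    s = -1 a common negative Sobolev choice."""
    n = theta.shape[0]
    theta_hat = np.fft.fft2(theta) / theta.size
    kx = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    K2 = kx[:, None]**2 + kx[None, :]**2
    K2[0, 0] = 1.0                             # placeholder; mean mode dropped below
    wts = K2**s
    wts[0, 0] = 0.0                            # exclude the mean mode
    return np.sqrt(np.sum(wts * np.abs(theta_hat)**2))

# Single-mode fields: "stirring" moves the scalar from wavenumber 1 to 8.
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
coarse, fine = np.cos(X), np.cos(8 * X)
```

For a single mode cos(kx) the mix-norm scales as 1/sqrt(k), so the fine field has a mix-norm four times smaller than the coarse one while both have identical variance.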

  7. Constructing carbon offsets: The obstacles to quantifying emission reductions

    International Nuclear Information System (INIS)

    Millard-Ball, Adam; Ortolano, Leonard

    2010-01-01

    The existing literature generally ascribes the virtual absence of the transport sector from the Clean Development Mechanism (CDM) to the inherent complexity of quantifying emission reductions from mobile sources. We use archival analysis and interviews with CDM decision-makers and experts to identify two additional groups of explanations. First, we show the significance of aspects of the CDM's historical evolution, such as the order in which methodologies were considered and the assignment of expert desk reviewers. Second, we highlight inconsistencies in the treatment of uncertainty across sectors. In contrast to transport methodologies, other sectors are characterized by a narrow focus on sources of measurement uncertainty and a neglect of economic effects ('market leakages'). We do not argue that the rejection of transport methodologies was unjustified, but rather that many of the same problems are inherent in other sectors. Thus, the case of transport sheds light on fundamental problems in quantifying emission reductions under the CDM. We argue that a key theoretical attraction of the CDM, equalization of marginal abatement costs across all sectors, has been difficult to achieve in practice.

  8. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun

    2014-04-21

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.
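
As a hedged illustration of the general idea, rather than the authors' exact kernel plus estimator, the sketch below fits a Nadaraya-Watson kernel regression of power on two environmental covariates (synthetic wind speed and temperature), which is the kind of multivariate conditioning the abstract describes:

```python
import numpy as np

def nw_predict(X, y, x0, h):
    """Nadaraya-Watson estimate of E[y | x0] with a Gaussian product kernel.
    X: (n, d) environmental covariates; h: (d,) per-covariate bandwidths."""
    u = (X - x0) / h
    w = np.exp(-0.5 * np.sum(u**2, axis=1))
    return np.sum(w * y) / np.sum(w)

# Synthetic data (illustrative): power follows a logistic curve in wind
# speed and declines slightly with temperature, plus measurement noise.
rng = np.random.default_rng(0)
n = 500
wind = rng.uniform(3, 15, n)
temp = rng.uniform(-5, 25, n)
power = 1500 / (1 + np.exp(-(wind - 9))) - 5 * temp + rng.normal(0, 20, n)

X = np.column_stack([wind, temp])
h = np.array([1.0, 5.0])                     # bandwidths chosen by hand here
p_low = nw_predict(X, power, np.array([6.0, 10.0]), h)    # below rated wind
p_high = nw_predict(X, power, np.array([12.0, 10.0]), h)  # near rated wind
```

Comparing pre- and post-upgrade predictions at the *same* environmental point is what isolates the upgrade effect from weather effects; in practice bandwidths would be selected by cross-validation rather than fixed by hand.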

  9. Disordered crystals from first principles I: Quantifying the configuration space

    Science.gov (United States)

    Kühne, Thomas D.; Prodan, Emil

    2018-04-01

    This work represents the first chapter of a project on the foundations of first-principle calculations of the electron transport in crystals at finite temperatures. We are interested in the range of temperatures, where most electronic components operate, that is, room temperature and above. The aim is a predictive first-principle formalism that combines ab-initio molecular dynamics and a finite-temperature Kubo-formula for homogeneous thermodynamic phases. The input for this formula is the ergodic dynamical system (Ω , G , dP) defining the thermodynamic crystalline phase, where Ω is the configuration space for the atomic degrees of freedom, G is the space group acting on Ω and dP is the ergodic Gibbs measure relative to the G-action. The present work develops an algorithmic method for quantifying (Ω , G , dP) from first principles. Using the silicon crystal as a working example, we find the Gibbs measure to be extremely well characterized by a multivariate normal distribution, which can be quantified using a small number of parameters. The latter are computed at various temperatures and communicated in the form of a table. Using this table, one can generate large and accurate thermally-disordered atomic configurations to serve, for example, as input for subsequent simulations of the electronic degrees of freedom.
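
The practical payoff of the tabulated parameters is that thermally disordered configurations can be drawn directly from the fitted multivariate normal. The toy sketch below uses an assumed covariance for a short 1-D chain of atoms, not the silicon values tabulated in the paper:

```python
import numpy as np

# Illustrative sketch: the Gibbs measure of the atomic degrees of freedom
# is modelled as a multivariate normal over displacements. The covariance
# below is an assumed nearest-neighbour-correlated model.
N = 8                          # atoms in the toy chain
sigma2 = 0.01                  # on-site displacement variance (arb. units), assumed
rho = 0.3                      # nearest-neighbour correlation, assumed
C = sigma2 * (np.eye(N) + rho * (np.eye(N, k=1) + np.eye(N, k=-1)))

rng = np.random.default_rng(42)
# Each row is one thermally disordered configuration of the chain.
disp = rng.multivariate_normal(np.zeros(N), C, size=10000)

# The sample covariance recovers the model covariance, confirming that the
# small parameter set fully characterizes the ensemble.
C_hat = np.cov(disp.T)
```

This is exactly the workflow the abstract describes: a handful of fitted parameters per temperature suffices to generate arbitrarily many accurate disordered configurations for downstream electronic-structure simulations.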

  10. Quantifying Fire Cycle from Dendroecological Records Using Survival Analyses

    Directory of Open Access Journals (Sweden)

    Dominic Cyr

    2016-06-01

    Full Text Available Quantifying fire regimes in the boreal forest ecosystem is crucial for understanding the past and present dynamics, as well as for predicting its future dynamics. Survival analyses have often been used to estimate the fire cycle in eastern Canada because they make it possible to take into account the censored information that is made prevalent by the typically long fire return intervals and the limited scope of the dendroecological methods that are used to quantify them. Here, we assess how the true length of the fire cycle, the short-term temporal variations in fire activity, and the sampling effort affect the accuracy and precision of estimates obtained from two types of parametric survival models, the Weibull and the exponential models, and one non-parametric model obtained with the Cox regression. Then, we apply those results in a case area located in eastern Canada. Our simulation experiment confirms some documented concerns regarding the detrimental effects of temporal variations in fire activity on parametric estimation of the fire cycle. Cox regressions appear to provide the most accurate and robust estimator, being by far the least affected by temporal variations in fire activity. The Cox-based estimate of the fire cycle for the last 300 years in the case study area is 229 years (CI95: 162–407), compared with the likely overestimated 319 years obtained with the commonly used exponential model.
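
For the exponential (constant-hazard) model mentioned above, the censored maximum-likelihood estimate of the fire cycle has a closed form: total time at risk divided by the number of observed fires. A minimal sketch with invented interval data:

```python
import numpy as np

# Invented dendroecological data, for illustration only:
# stands that burned contribute complete fire-return intervals, while
# stands with no recorded fire contribute right-censored survival times.
intervals = np.array([120, 85, 240, 60, 310, 150], dtype=float)  # years, fire observed
censored = np.array([300, 280, 350, 290], dtype=float)           # years, no fire yet

# Exponential MLE with censoring: fire cycle = total exposure / number of fires.
total_exposure = intervals.sum() + censored.sum()
n_events = len(intervals)
fire_cycle = total_exposure / n_events
print(f"Estimated fire cycle: {fire_cycle:.0f} years")
```

Ignoring the censored stands would bias the cycle downward; the abstract's point is the converse risk, that when fire activity varies over time this constant-hazard estimator can substantially overestimate the cycle relative to the Cox approach.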

  11. Constraining the Mass Loss Geometry of Beta Lyrae

    Directory of Open Access Journals (Sweden)

    Jamie R. Lomax

    2012-03-01

    Full Text Available Massive binary stars lose mass by two mechanisms: jet-driven mass loss during periods of active mass transfer and by wind-driven mass loss. Beta Lyrae is an eclipsing, semi-detached binary whose state of active mass transfer provides a unique opportunity to study how the evolution of binary systems is affected by jet-driven mass loss. Roche lobe overflow from the primary star feeds the thick accretion disk which almost completely obscures the mass-gaining star. A hot spot predicted to be on the edge of the accretion disk may be the source of beta Lyrae’s bipolar outflows. I present results from spectropolarimetric data taken with the University of Wisconsin’s Half-Wave Spectropolarimeter and the Flower and Cook Observatory’s photoelastic modulating polarimeter instrument which have implications for our current understanding of the system’s disk geometry. Using broadband polarimetric analysis, I derive new information about the structure of the disk and the presence and location of a hot spot. These results place constraints on the geometrical distribution of material in beta Lyrae and can help quantify the amount of mass lost from massive interacting binary systems during phases of mass transfer and jet-driven mass loss.

  12. THE METALLICITIES OF LOW STELLAR MASS GALAXIES AND THE SCATTER IN THE MASS-METALLICITY RELATION

    International Nuclear Information System (INIS)

    Zahid, H. J.; Bresolin, F.; Kewley, L. J.; Coil, A. L.; Davé, R.

    2012-01-01

    In this investigation, we quantify the metallicities of low-mass galaxies by constructing the most comprehensive census to date. We use galaxies from the Sloan Digital Sky Survey (SDSS) and DEEP2 survey and estimate metallicities from their optical emission lines. We also use two smaller samples from the literature that have metallicities determined by the direct method using the temperature sensitive [O III]λ4363 line. We examine the scatter in the local mass-metallicity (MZ) relation determined from ∼20,000 star-forming galaxies in the SDSS and show that it is larger at lower stellar masses, consistent with the theoretical scatter in the MZ relation determined from hydrodynamical simulations. We determine a lower limit for the scatter in metallicities of galaxies down to stellar masses of ∼10⁷ M☉, which is only slightly smaller than the expected scatter inferred from the SDSS MZ relation and significantly larger than what has been previously established in the literature. The average metallicity of star-forming galaxies increases with stellar mass. By examining the scatter in the SDSS MZ relation, we show that this is mostly due to the lowest metallicity galaxies. The population of low-mass, metal-rich galaxies have properties that are consistent with previously identified galaxies that may be transitional objects between gas-rich dwarf irregulars and gas-poor dwarf spheroidals and ellipticals.

  13. Quantifying beetle-macrofungal associations in a temperate biodiversity hot spot.

    Science.gov (United States)

    Epps, Mary Jane; Arnold, A Elizabeth

    2018-01-29

    Beetles (Coleoptera) are often among the most abundant and diverse insects that feed on sporocarps of macrofungi, but little is known regarding their relative specialism or generalism in most communities. We surveyed >9000 sporocarps in montane hardwood forest in the Appalachian Mountains (USA) to characterize associations of mycophagous beetles and macrofungi. We used traditional metrics and network analyses to quantify relationships between sporocarp traits (mass, age, persistence, and toughness) and assemblages of adult beetles, drawing from >50 000 beetles collected over two survey years. Strict-sense specificity was rare in these associations: most beetle species were found on multiple fungal genera, and most fungi hosted multiple beetle species. Sporocarp age and fresh mass were positively associated with beetle diversity in fungi with ephemeral sporocarps (here including 12 genera of Agaricales and Russulales), but sporocarp persistence was not. In Polyporales, beetle diversity was greater in softer sporocarps than in tough or woody sporocarps. The increase of beetle diversity in aging sporocarps could not be attributed to increases in sporocarp mass or sampling point in the growing season, suggesting that age-related changes in chemistry or structure may support increasingly diverse beetle communities. Interaction networks differed as a function of sporocarp age, revealing that community-wide measures of generalism (i.e., network connectance) and evenness (i.e., variance in normalized degree) change as sporocarps mature and senesce. Beetles observed on Agaricales and Russulales with more persistent sporocarps had narrower interaction breadth (i.e., were more host-specific) than those on less persistent sporocarps, and beetles on Polyporales with tougher sporocarps had narrower interaction breadth than those on soft sporocarps. In addition to providing a large-scale evaluation of sporocarp use by adult beetles in this temperate biodiversity hot spot, this

  14. Effects of confinement on rock mass modulus: A synthetic rock mass modelling (SRM) study

    Directory of Open Access Journals (Sweden)

    I. Vazaios

    2018-06-01

    Full Text Available The main objective of this paper is to examine the influence of the applied confining stress on the rock mass modulus of moderately jointed rocks (well interlocked undisturbed rock mass with blocks formed by three or fewer intersecting joints). A synthetic rock mass modelling (SRM) approach is employed to determine the mechanical properties of the rock mass. In this approach, the intact body of rock is represented by discrete element method (DEM) Voronoi grains with the ability to simulate the initiation and propagation of microcracks within the intact part of the model. The geometry of the pre-existing joints is generated by employing discrete fracture network (DFN) modelling based on field joint data collected from the Brockville Tunnel using LiDAR scanning. The geometrical characteristics of the simulated joints at a representative sample size are first validated against the field data, and then used to measure the rock quality designation (RQD), joint spacing, areal fracture intensity (P21), and block volumes. These geometrical quantities are used to quantitatively determine a representative range of the geological strength index (GSI). The results show that estimating the GSI using the RQD tends to make a closer estimate of the degree of blockiness, leading to GSI values corresponding to those obtained from direct visual observations of the rock mass conditions in the field. The use of joint spacing and block volume to quantify the GSI value range for the studied rock mass suggests a lower range compared to that evaluated in situ. Based on numerical modelling results and laboratory data of rock testing reported in the literature, a semi-empirical equation is proposed that relates the rock mass modulus to confinement as a function of the areal fracture intensity and joint stiffness. Keywords: Synthetic rock mass modelling (SRM), Discrete fracture network (DFN), Rock mass modulus, Geological strength index (GSI), Confinement

  15. Cyclotrons as mass spectrometers

    International Nuclear Information System (INIS)

    Clark, D.J.

    1984-04-01

    The principles and design choices for cyclotrons as mass spectrometers are described. They are illustrated by examples of cyclotrons developed by various groups for this purpose. The use of present high energy cyclotrons for mass spectrometry is also described. 28 references, 12 figures

  16. ABSOLUTE NEUTRINO MASSES

    DEFF Research Database (Denmark)

    Schechter, J.; Shahid, M. N.

    2012-01-01

    We discuss the possibility of using experiments timing the propagation of neutrino beams over large distances to help determine the absolute masses of the three neutrinos.

  17. Systematics of quark mass

    International Nuclear Information System (INIS)

    Frampton, P.H.; Jarlskog, C.

    1985-01-01

    It is shown that the quark mass matrices in the Standard Electroweak Model satisfy the empirical relation M′ = M + O(Λ²), where M (M′) refers to the mass matrix of the charge 2/3 (−1/3) quarks normalized to the largest eigenvalue, m_t (m_b), and Λ = V_us = 0.22

  18. Measurements of neutrino mass

    International Nuclear Information System (INIS)

    Robertson, R.G.H.

    1985-01-01

    Direct experimental information of neutrino mass as derived from the study of nuclear and elementary-particle weak decays is reviewed. Topics include tritium beta decay; the ³He–T mass difference; electron capture decay of ¹⁶³Ho and ¹⁵⁸Tb; and limits on massive neutrinos from cosmology. 38 references

  19. The Origin of Mass

    Energy Technology Data Exchange (ETDEWEB)

    Giese, Albrecht

    2010-07-01

    The world of physics presently looks to the LHC (CERN), where many expect the Higgs boson to be found. The Higgs is supposed to (partly) explain the cause of mass. There are indications that neither the Higgs nor Supersymmetric Particles will be found. In order to understand mass, the Higgs is not needed. Inertial mass is caused by a fundamental process. Binding fields propagate at the finite speed of light. An inevitable consequence is that every expanded object has an inertial behaviour, even if the constituents of the object are mass-less. To explain the mass of elementary particles, we have to accept that these particles are expanded. This is on the one hand in conflict with the concept of present physics; on the other hand it is in no conflict with any experiment. And it conforms to the analysis of Schroedinger with respect to the Dirac function of the electron. The corresponding particle model explains particle properties, like the magnetic moment (and therefore also the Bohr Magneton) and the constancy of the spin, correctly without any use of QM. Also the dynamic properties of mass, i.e. the relativistic increase of mass at motion and the mass-energy-relation, follow in a straight way from this concept.

  20. Mass spectrometers in medicine

    International Nuclear Information System (INIS)

    Bushman, J.A.

    1975-01-01

    This paper describes how the mass spectrometer enables true lung function, namely the exchange of gases between the environment and the organism, to be measured. This has greatly improved the understanding of respiratory disease and the latest generation of respiratory mass spectrometers will do much to increase the application of the technique. (author)

  1. Mass of AC Andromedae

    International Nuclear Information System (INIS)

    King, D.S.; Cox, A.N.; Hodson, S.W.

    1975-01-01

    Calculations indicate that AC Andromedae is population I rather than population II. A mass and radius for this star are calculated using a new set of opacities for the Kippenhahn Ia mixture. It is concluded that the mass is too high for an ordinary RR Lyrae star. (BJG)

  2. Mass preserving image registration

    DEFF Research Database (Denmark)

    Gorbunova, Vladlena; Sporring, Jon; Lo, Pechin Chien Pau

    2010-01-01

    The paper presents results of the mass preserving image registration method in the Evaluation of Methods for Pulmonary Image Registration 2010 (EMPIRE10) Challenge. The mass preserving image registration algorithm was applied to the 20 image pairs. Registration was evaluated using four different...

  3. Elbow mass flow meter

    Science.gov (United States)

    McFarland, A.R.; Rodgers, J.C.; Ortiz, C.A.; Nelson, D.C.

    1994-08-16

    The present invention includes a combination of an elbow pressure drop generator and a shunt-type mass flow sensor for providing an output which gives the mass flow rate of a gas that is nearly independent of the density of the gas. For air, the output is also approximately independent of humidity. 3 figs.

  4. Miniature mass analyzer

    CERN Document Server

    Cuna, C; Lupsa, N; Cuna, S; Tuzson, B

    2003-01-01

    The paper presents the concept of different mass analyzers that were specifically designed as small dimension instruments able to detect with great sensitivity and accuracy the main environmental pollutants. Mass spectrometers are well-suited instruments for the chemical and isotopic analyses needed in environmental surveillance. Usually, this is done by sampling the soil, air or water followed by laboratory analysis. To avoid drawbacks caused by sample alteration during the sampling process and transport, the 'in situ' analysis is preferred. Theoretically, any type of mass analyzer can be miniaturized, but some are more appropriate than others. Quadrupole mass filter and trap, magnetic sector, time-of-flight and ion cyclotron mass analyzers can be successfully shrunk, for each of them some performances being sacrificed but we must know which parameters are necessary to be kept unchanged. To satisfy the miniaturization criteria of the analyzer, it is necessary to use asymmetrical geometries, with ion beam obl...

  5. Proton mass decomposition

    Science.gov (United States)

    Yang, Yi-Bo; Chen, Ying; Draper, Terrence; Liang, Jian; Liu, Keh-Fei

    2018-03-01

    We report the results on the proton mass decomposition and also on the related quark and glue momentum fractions. The results are based on overlap valence fermions on four ensembles of Nf = 2 + 1 DWF configurations with three lattice spacings and volumes, and several pion masses including the physical pion mass. With a 1-loop perturbative calculation and proper normalization of the glue operator, we find that the u, d, and s quark masses contribute 9(2)% to the proton mass. The quark energy and glue field energy contribute 31(5)% and 37(5)% respectively in the MS scheme at µ = 2 GeV. The trace anomaly gives the remaining 23(1)% contribution. The u, d, s and glue momentum fractions in the MS scheme are consistent with the global analysis at µ = 2 GeV.

  6. Linear mass reflectron

    International Nuclear Information System (INIS)

    Mamyrin, B.A.; Shmikk, D.V.

    1979-01-01

    A description and operating principle of a linear mass reflectron with a V-shaped ion trajectory, a new non-magnetic time-of-flight mass spectrometer with high resolution, are presented. The ion-optical system of the device consists of an ion source with ionization by electron impact, accelerating gaps, reflector gaps, a drift space and an ion detector. Ions move in the linear mass reflectron along trajectories parallel to the axis of the analyzer chamber. The results of investigations into the experimental device are given. With an ion drift length of 0.6 m the device resolution is 1200 with respect to the peak width at half-height. Small-sized mass spectrometric transducers with high resolution and sensitivity may be designed on the basis of the linear mass reflectron principle
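
The separating principle of any time-of-flight analyzer, including the reflectron, is that ions of equal energy but different mass drift at different speeds, t = L·sqrt(m / (2qU)). A minimal sketch using the 0.6 m drift length quoted above (the accelerating voltage and ion species are assumed for illustration):

```python
import math

E = 1.602176634e-19       # elementary charge (C)
AMU = 1.66053906660e-27   # atomic mass unit (kg)

def flight_time(m_amu, L=0.6, U=3000.0, q=1):
    """Field-free flight time (s) over drift length L (m) for an ion of
    m_amu atomic mass units and charge q, accelerated through U volts."""
    m = m_amu * AMU
    return L * math.sqrt(m / (2 * q * E * U))

t28 = flight_time(28.0)   # e.g. N2+ (assumed example ion)
t44 = flight_time(44.0)   # e.g. CO2+ (assumed example ion)
```

Since t scales as sqrt(m), the heavier ion arrives later by a factor sqrt(44/28) ≈ 1.25; the reflector stage in a reflectron additionally compensates the spread in initial ion energies, which is what pushes the resolution to the quoted 1200.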

  7. Top Quark Mass

    CERN Document Server

    Mulders, Martijn

    2016-01-01

    Ever since the discovery of the top quark at the Tevatron collider in 1995 the measurement of its mass has been a high priority. As one of the fundamental parameters of the Standard Theory of particle physics, the precise value of the top quark mass together with other inputs provides a test for the self-consistency of the theory, and has consequences for the stability of the Higgs field that permeates the Universe. In this review I will briefly summarize the experimental techniques used at the Tevatron and the LHC experiments throughout the years to measure the top quark mass with ever improving accuracy, and highlight the recent progress in combining all measurements in a single world average combination. As experimental measurements became more precise, the question of their theoretical interpretation has become important. The difficulty of relating the measured quantity to the fundamental top mass parameter has inspired alternative measurement methods that extract the top mass in complementary ways. I wil...

  8. Superlative Quantifiers as Modifiers of Meta-Speech Acts

    Directory of Open Access Journals (Sweden)

    Ariel Cohen

    2010-12-01

    Full Text Available The superlative quantifiers, at least and at most, are commonly assumed to have the same truth-conditions as the comparative quantifiers more than and fewer than. However, as Geurts & Nouwen (2007) have demonstrated, this is wrong, and several theories have been proposed to account for them. In this paper we propose that superlative quantifiers are illocutionary operators; specifically, they modify meta-speech acts. Meta-speech acts are operators that do not express a speech act, but a willingness to make or refrain from making a certain speech act. The classic example is speech act denegation, e.g. I don't promise to come, where the speaker is explicitly refraining from performing the speech act of promising. What denegations do is to delimit the future development of conversation, that is, they delimit future admissible speech acts. Hence we call them meta-speech acts. They are not moves in a game, but rather commitments to behave in certain ways in the future. We formalize the notion of meta-speech acts as commitment development spaces, which are rooted graphs: The root of the graph describes the commitment development up to the current point in conversation; the continuations from the root describe the admissible future directions. We define and formalize the meta-speech act GRANT, which indicates that the speaker, while not necessarily subscribing to a proposition, refrains from asserting its negation. We propose that superlative quantifiers are quantifiers over GRANTs. Thus, Mary petted at least three rabbits means that the minimal number n such that the speaker GRANTs that Mary petted n rabbits is n = 3. In other words, the speaker denies that Mary petted two, one, or no rabbits, but GRANTs that she petted more. We formalize this interpretation of superlative quantifiers in terms of commitment development spaces, and show how the truth conditions that are derived from it are partly entailed and partly conversationally

  9. Aseismic creep along the North Anatolian Fault quantified by coupling microstructural strain and chemical analyses

    Science.gov (United States)

    Kaduri, Maor; Gratier, Jean-Pierre; Renard, François; Çakir, Ziyadin; Lasserre, Cécile

    2017-04-01

    stronger (less deformed) than polymineralic ones; (3) strain measurements make it possible to evaluate the cumulative geological displacement accommodated by aseismic creep and the relative ratio between seismic and aseismic displacement for each section of an active fault. These relations make it possible to quantify more accurately the aseismic creep processes and their evolution with time along the North Anatolian Fault, which are controlled by a superposition of two kinds of mechanisms: (1) stress-driven mass transfer (pressure solution and metamorphism) that controls local and regional mass transfer and the associated rheology evolution, and (2) grain boundary sliding along weak mineral interfaces (initially weak minerals or, more often, minerals transformed by deformation-related reactions).

  10. Origins of mass

    Science.gov (United States)

    Wilczek, Frank

    2012-10-01

    Newtonian mechanics posited mass as a primary quality of matter, incapable of further elucidation. We now see Newtonian mass as an emergent property. That mass-concept is tremendously useful in the approximate description of baryon-dominated matter at low energy — that is, the standard "matter" of everyday life, and of most of science and engineering — but it originates in a highly contingent and non-trivial way from more basic concepts. Most of the mass of standard matter, by far, arises dynamically, from back-reaction of the color gluon fields of quantum chromodynamics (QCD). Additional quantitatively small, though physically crucial, contributions come from the intrinsic masses of elementary quanta (electrons and quarks). The equations for massless particles support extra symmetries — specifically scale, chiral, and gauge symmetries. The consistency of the standard model relies on a high degree of underlying gauge and chiral symmetry, so the observed non-zero masses of many elementary particles (W and Z bosons, quarks, and leptons) requires spontaneous symmetry breaking. Superconductivity is a prototype for spontaneous symmetry breaking and for mass-generation, since photons acquire mass inside superconductors. A conceptually similar but more intricate form of all-pervasive (i.e. cosmic) superconductivity, in the context of the electroweak standard model, gives us a successful, economical account of W and Z boson masses. It also allows a phenomenologically successful, though profligate, accommodation of quark and lepton masses. The new cosmic superconductivity, when implemented in a straightforward, minimal way, suggests the existence of a remarkable new particle, the so-called Higgs particle. The mass of the Higgs particle itself is not explained in the theory, but appears as a free parameter. Earlier results suggested, and recent observations at the Large Hadron Collider (LHC) may indicate, the actual existence of the Higgs particle, with mass m_H

  11. The social gradient in birthweight at term: quantification of the mediating role of maternal smoking and body mass index

    DEFF Research Database (Denmark)

    Mortensen, Laust H; Diderichsen, Finn; Smith, George Davey

    2009-01-01

    Maternal education is associated with the birthweight of offspring. We sought to quantify the role of maternal body mass index (BMI) and smoking as intermediary variables between maternal education and birthweight at term.

  12. Fit by Bits: An Explorative Study of Sports Physiotherapists' Perception of Quantified Self Technologies.

    Science.gov (United States)

    Allouch, Somaya Ben; van Velsen, Lex

    2018-01-01

    Our aim was to determine sport physiotherapists' attitudes towards Quantified Self technology usage and adoption and to analyze factors that may influence this attitude. A survey was used to study a sample in the Netherlands. We assessed attitudes towards Quantified Self technology usage by clients of therapists and by therapists themselves, as well as the intention to adopt Quantified Self technology. Results show that the uptake of Quantified Self technology by sports physiotherapists is rather low, but that the intention to adopt Quantified Self technology by sports physiotherapists is quite high. These results can provide a foundation for an infrastructure that allows sports physiotherapists to fulfill their wishes with regard to Quantified Self technology.

  13. Dynamical Mass Generation.

    Science.gov (United States)

    Mendel Horwitz, Roberto Ruben

    1982-03-01

    In the framework of the Glashow-Weinberg-Salam model without elementary scalar particles, we show that masses for fermions and intermediate vector bosons can be generated dynamically. The mechanism is the formation of fermion-antifermion pseudoscalar bound states of zero total four momentum, which form a condensate in the physical vacuum. The force responsible for the binding is the short distance part of the net Coulomb force due to photon and Z exchange. Fermions and bosons acquire masses through their interaction with this condensate. The neutrinos remain massless because their right-handed components have no interactions. Also the charge -1/3 quarks remain massless because the repulsive force from the Z exchange dominates over the Coulomb force. To correct this, we propose two possible modifications to the theory. One is to cut off the Z exchange at very small distances, so that all fermions except the neutrinos acquire masses, which are then purely electromagnetic in origin. The other is to introduce an additional gauge boson that couples to all quarks with a pure vector coupling. To make this vector boson unobservable at usual energies, at least two new fermions must couple to it. The vector boson squared masses receive additive contributions from all the fermion squared masses. The photon remains massless and the masses of the Z and W('(+OR -)) bosons are shown to be related through the Weinberg angle in the conventional way. Assuming only three families of fermions, we obtain estimates for the top quark mass.

  14. Probabilistic Mass Growth Uncertainties

    Science.gov (United States)

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as a variable input parameter in Cost Estimating Relationships (CERs) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBEs) of the masses of space instruments as well as spacecraft, for both Earth-orbiting and deep-space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters of publishing similar results, using a variety of cost-driving metrics, on an annual basis. This paper provides quantitative results showing that mass growth uncertainties decrease as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.
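
    A minimal sketch of how a single-point TBE might be adjusted with a maturity-dependent growth allowance, in the spirit of the datasheets described above. The growth factors and stage names below are invented placeholders, not NASA CADRe-derived values.

    ```python
    # Assumed mass growth allowance (fraction of TBE), decreasing with maturity,
    # reflecting the finding that uncertainty shrinks as estimates mature.
    # All values are hypothetical.
    GROWTH_ALLOWANCE = {
        "concept": 0.30,
        "preliminary_design": 0.20,
        "detailed_design": 0.10,
        "as_built": 0.02,
    }

    def predicted_mass(tbe_kg, stage):
        """Adjust a baseline mass estimate for expected growth at a given stage."""
        return tbe_kg * (1.0 + GROWTH_ALLOWANCE[stage])

    print(predicted_mass(100.0, "concept"))   # 130.0 kg at concept stage
    print(predicted_mass(100.0, "as_built"))  # 102.0 kg when nearly as-built
    ```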

  15. Top quark mass measurement

    International Nuclear Information System (INIS)

    Maki, Tuula; Helsinki Inst. of Phys.; Helsinki U. of Tech.

    2008-01-01

    The top quark is the heaviest elementary particle. Its mass is one of the fundamental parameters of the standard model of particle physics, and an important input to precision electroweak tests. This thesis describes three measurements of the top-quark mass in the dilepton decay channel. Dilepton events have two neutrinos in the final state; neutrinos are weakly interacting particles that cannot be detected with a multipurpose experiment. Therefore, the signature of dilepton events includes a large amount of missing energy and momentum carried off by the neutrinos. The top-quark mass is reconstructed for each event by assuming an additional constraint from a top-mass-independent distribution. Template distributions are constructed from simulated samples of signal and background events, and parameterized to form continuous probability density functions. The final top-quark mass is derived using a likelihood fit that compares the reconstructed top mass distribution from data to the parameterized templates. One of the analyses uses a novel technique to add top mass information from the observed number of events by including a cross-section constraint in the likelihood function. All measurements use data samples collected by the CDF II detector.
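
    The template-likelihood idea can be illustrated with a toy sketch: reconstructed-mass distributions are modeled as probability densities parameterized by the true top mass, and the best mass maximizes the likelihood of the observed values. The Gaussian template shape, the 15 GeV resolution, and the pseudo-data below are all invented, and the real CDF analysis is far more involved.

    ```python
    import math
    import random

    def template_pdf(x, m_top):
        """Assumed Gaussian template: mean tracks m_top, fixed 15 GeV width."""
        s = 15.0
        return math.exp(-0.5 * ((x - m_top) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    def log_likelihood(data, m_top):
        return sum(math.log(template_pdf(x, m_top)) for x in data)

    # Pseudo-experiment: 500 reconstructed masses drawn around 172.5 GeV.
    random.seed(1)
    data = [random.gauss(172.5, 15.0) for _ in range(500)]

    # Likelihood scan over candidate masses from 160 to 190 GeV in 0.5 GeV steps.
    masses = [160 + 0.5 * i for i in range(61)]
    best = max(masses, key=lambda m: log_likelihood(data, m))
    print(best)  # close to the 172.5 GeV used to generate the pseudo-data
    ```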

  16. Organ mass measurements

    International Nuclear Information System (INIS)

    Kawamura, H.

    1998-01-01

    The term anatomical measurements, in the context of this Co-ordinated Research Programme, refers to measurements of the masses of internal organs, although the human body is composed of internal organs and tissues such as skeleton, muscle, skin and adipose tissue. The mass of an organ containing a radionuclide (source organ), and the mass of a target organ which absorbs the energy of the radiation, are essential parameters in the ICRP dosimetric model derived from the MIRD method. Twelve specific organs of interest were proposed at the Co-ordinated Research Programme Project Formulation Meeting (PFM) in 1988. A slightly different set of thirteen organs with potential significance for radiation protection was selected for study at the Research Co-ordination Meeting held at the Bhabha Atomic Research Centre in 1991. The dimensions of the organs could also be useful information, but were considered unimportant for internal dose assessment. Due to the strong concern expressed at the PFM about a unified method for collecting organ mass data, a guideline was established stressing the need for organ data from subjects that were healthy and normal, at least until shortly before death, or from sudden death cases, following the Japanese experience. In this report, masses of nine to thirteen organs are presented from seven participating countries. Three participants have also reported the organ masses as fractions of the total body mass.

  17. Quantifying the relationship between financial news and the stock market.

    Science.gov (United States)

    Alanyali, Merve; Moat, Helen Susannah; Preis, Tobias

    2013-12-20

    The complex behavior of financial markets emerges from decisions made by many traders. Here, we exploit a large corpus of daily print issues of the Financial Times from 2nd January 2007 until 31st December 2012 to quantify the relationship between decisions taken in financial markets and developments in financial news. We find a positive correlation between the daily number of mentions of a company in the Financial Times and the daily transaction volume of a company's stock, both on the day before the news is released and on the same day as the news is released. Our results provide quantitative support for the suggestion that movements in financial markets and movements in financial news are intrinsically interlinked.
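
    The kind of correlation analysis described above can be sketched in a few lines. The mention counts and transaction volumes below are invented toy data, not Financial Times data.

    ```python
    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    daily_mentions = [3, 7, 2, 9, 5, 8]              # mentions of a company per issue
    daily_volume = [1.1, 2.0, 0.9, 2.6, 1.5, 2.2]    # same-day trading volume (arbitrary units)

    r = pearson(daily_mentions, daily_volume)
    print(round(r, 3))  # a positive value, consistent with the paper's finding
    ```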

  18. Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity

    Science.gov (United States)

    Zhang, Jihui; Xu, Junqin

    A supply chain is a special kind of complex network. Its complexity and uncertainty make it very difficult to control and manage. Supply chains are faced with a rising complexity of products, structures, and processes. Because of the strong link between a supply chain’s complexity and its efficiency, supply chain complexity management has become a major challenge of today’s business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a ‘Supply Chain Network Analysis’ (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and subsequently investigated by network analysis. The results of this study show that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.
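
    One simple way to quantify the complexity and organization level of a flow network is an entropy measure over the distribution of inter-sector flows, sketched below. The flow values are invented toy data, and the paper's exact fuzzy-entropy formulation may differ from this plain Shannon-entropy version.

    ```python
    import math

    def flow_entropy(flows):
        """Shannon entropy (bits) of the distribution of inter-sector flows."""
        total = sum(flows)
        probs = [f / total for f in flows if f > 0]
        return -sum(p * math.log2(p) for p in probs)

    # Flows of goods between sectors of a hypothetical supply network.
    flows = [120, 80, 40, 40, 20]

    h = flow_entropy(flows)
    h_max = math.log2(len(flows))  # entropy of a perfectly uniform flow distribution
    print(round(h / h_max, 3))     # normalized diversity: 1 = maximally even flows
    ```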

  19. Quantifying human response capabilities towards tsunami threats at community level

    Science.gov (United States)

    Post, J.; Mück, M.; Zosseder, K.; Wegscheider, S.; Taubenböck, H.; Strunz, G.; Muhari, A.; Anwar, H. Z.; Birkmann, J.; Gebert, N.

    2009-04-01

    Decision makers at the community level need detailed information on tsunami risks in their area. Knowledge of the potential hazard impact, of exposed elements such as people, critical facilities and lifelines, and of people's coping capacity and recovery potential is crucial to plan precautionary measures for adaptation and to mitigate potential impacts of tsunamis on society and the environment. A crucial point within a people-centred tsunami risk assessment is to quantify the human response capabilities towards tsunami threats. Based on this quantification and its spatial representation in maps, tsunami-affected and safe areas, difficult-to-evacuate areas, evacuation target points and evacuation routes can be assigned and used as an important contribution to e.g. community-level evacuation planning. A major component in the quantification of human response capabilities towards tsunami impacts is the factor time. The human response capabilities depend on the estimated time of arrival (ETA) of a tsunami, the time until technical or natural warning signs (ToNW) can be received, the reaction time (RT) of the population (human understanding of a tsunami warning and the decision to take appropriate action), and the evacuation time (ET, the time people need to reach a safe area); the actual available response time is RsT = ETA - ToNW - RT. If RsT is larger than ET, people in the respective areas are able to reach a safe area and rescue themselves. Critical areas possess RsT values equal to or even smaller than ET, and hence people within these areas will be directly affected by a tsunami. Quantifying the factor time is challenging, and an attempt at this is presented here. The ETA can be derived by analyzing pre-computed tsunami scenarios for a respective area. For ToNW we assume that the early warning center is able to fulfil the Indonesian presidential decree to issue a warning within 5 minutes.
Quantifying RT is difficult, as human intrinsic factors such as educational level, beliefs, and tsunami knowledge and experience come into play.
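
    The timing logic above can be sketched in a few lines; all numbers below are invented for illustration, and the variable names follow the abstract's notation.

    ```python
    def available_response_time(eta_min, tonw_min, rt_min):
        """RsT = ETA - ToNW - RT (all times in minutes)."""
        return eta_min - tonw_min - rt_min

    def can_evacuate(eta_min, tonw_min, rt_min, et_min):
        """People can reach safety only if RsT exceeds the evacuation time ET."""
        return available_response_time(eta_min, tonw_min, rt_min) > et_min

    # Example: tsunami arrives 30 min after the quake, warning issued within
    # 5 min (as per the presidential decree), 10 min reaction time, and the
    # evacuation route takes 12 min: RsT = 15 min > ET = 12 min.
    print(can_evacuate(30, 5, 10, 12))  # prints True
    ```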

  20. Quantifying cardiovascular disease risk factors in patients with psoriasis

    DEFF Research Database (Denmark)

    Miller, I M; Skaaby, T; Ellervik, C

    2013-01-01

    BACKGROUND: In a previous meta-analysis on categorical data we found an association between psoriasis and cardiovascular disease and associated risk factors. OBJECTIVES: To quantify the level of cardiovascular disease risk factors in order to provide additional data for the clinical management...... of the increased risk. METHODS: This was a meta-analysis of observational studies with continuous outcome using random-effects statistics. A systematic search of studies published before 25 October 2012 was conducted using the databases Medline, EMBASE, International Pharmaceutical Abstracts, PASCAL and BIOSIS......·65 mmol L(-1) )] and a higher HbA1c [1·09 mmol mol(-1) , 95% CI 0·87-1·31, P controls are significant, and therefore relevant to the clinical management of patients with psoriasis....