Sample records for half-life measurements

  1. Half-Life Measurements in ^155Gd

    Energy Technology Data Exchange (ETDEWEB)

    Malmskog, S.G.


    In the literature there is a definite discrepancy in the half-life of the 86.5 keV level in ^155Gd, depending on whether ^155Eu or ^155Tb sources were used. Using an electron-electron coincidence spectrometer with good energy resolution and a ^155Eu source, a half-life of 6.48 ± 0.26 nsec was obtained for the 86.5 keV level. This is in agreement with the values previously measured with ^155Tb sources. The half-life of the 105.4 keV level was measured to be 1.12 ± 0.05 nsec.

  2. Precise Half Life Measurement of ^26Si (United States)

    Iacob, V. E.; Golovko, V.; Goodwin, J.; Hardy, J. C.; Nica, N.; Park, H. I.; Trache, L.; Tribble, R. E.


    As part of our program to test the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix via 0^+ -> 0^+ superallowed β transitions, we recently measured the half-life of ^26Si. The radioactive ^26Si beam was obtained with a ^27Al primary beam at 30A MeV, which bombarded a cryogenic hydrogen target held at a pressure of 2.0 atm. From the reaction products, a high-purity ^26Si beam at 25A MeV was selected with the MARS spectrograph. The beam was then extracted in air, passed through a 0.3-mm-thick BC-404 plastic scintillator and a set of Al degraders, which had been adjusted so that the radioactive nuclei stopped in the center of the 76-μm-thick aluminized-mylar tape of our fast tape-transport system. We collected ^26Si nuclei for 1.3 s; then the beam was switched off and the activity was moved in less than 0.2 s to the center of a 4π proportional counter, located in a well-shielded region. The observed decays were then multi-scaled over a 44 s time span. To ensure an unbiased result, we split the experiment into many runs, each differing from the others in its discriminator threshold, detector bias or dominant dead-time setting. The analysis of these separate runs showed no systematic bias with these parameters. Our preliminary result agrees with the currently accepted (average) value, and the full analysis is expected to yield an uncertainty of 0.05% or better.

  3. Measurement of the half-life of ^45Ca

    Energy Technology Data Exchange (ETDEWEB)

    Los Arcos, J.M.; Rodriguez, L.; Roteta, M.; Garcia-Torano, E. (Metrologia de Radiaciones, Inst. de Investigacion Basica, CIEMAT, Madrid (Spain))


    The currently accepted value of 163.8 ± 1.8 d for the ^45Ca half-life is based on the two most accurate measurements, 162.6 ± 0.1 d and 165.1 ± 0.7 d, which are nevertheless discrepant. New measurements have been carried out using a liquid scintillation spectrometer to follow the decay of ^45Ca for three half-lives. A set of eight liquid sources prepared with ^45Ca-labelled calcium chloride, HDEHP and 2-ethylhexanoate were measured in Hisafe II, Ultima-Gold and Instagel. Solid sources of ^45Ca were sandwiched between two sheets of thin metallized Mylar, and measured with a proportional counter and a Si detector to test their reliability and consistency. The new half-life value of 162.67 ± 0.25 d is in good agreement with the lower of the two previously reported values. (orig.)

  4. Precise Half-Life Measurement of ^46V (United States)

    Park, H. I.; Hardy, J. C.; Iacob, V. E.; Chen, L.; Goodwin, J.; Nica, N.; Simmons, E.; Trache, L.; Tribble, R. E.


    The superallowed β decay of ^46V is one of the key transitions contributing to precision tests of the conserved vector current hypothesis and the unitarity of the Cabibbo-Kobayashi-Maskawa matrix. Recent Penning-trap QEC measurements of the superallowed β decay of ^46V showed an earlier reaction-based result to be wrong and raised the Ft value by nearly three standard deviations above the average of all other well-known superallowed transitions. This anomaly raised the possibility of systematic effects in all reaction-based Q-value measurements and led to a theoretical reexamination of the isospin-symmetry-breaking corrections for superallowed decays. The improved corrections removed the anomalous ^46V result and restored agreement among the corrected Ft values. Throughout these changes, the previously accepted half-life of ^46V was assumed to be correct. We have now tested this assumption by making a new precise measurement of the ^46V half-life. The preliminary result, 422.67(10) ms, agrees with but is more precise than previous values.

  5. Precision Half-life Measurement of 25Al (United States)

    Long, Jacob; Ahn, Tan; Allen, Jacob; Bardayan, Daniel; Becchetti, Fredrich; Blankstein, Drew; Brodeur, Maxime; Burdette, Daniel; Frentz, Bryce; Hall, Matthew; Kelly, James; Kolata, James; O'Malley, Patrick; Schultz, Bradley; Strauss, Sabrina; Valverde, Adrian; TwinSol Collaboration


    In recent years, precision measurements have led to considerable advances in several areas of physics, including fundamental symmetries. Precise determination of ft values for superallowed mixed transitions between mirror nuclides could provide an avenue to test the theoretical corrections used to extract the Vud matrix element from superallowed pure Fermi transitions. Calculation of the ft value requires the half-life, branching ratio, and Q value. The 25Al decay is of particular interest because its half-life is derived from a series of conflicting measurements, and the half-life uncertainty is the largest contribution to the uncertainty on the ft value. The half-life was determined by β counting of 25Al implanted in a Ta foil that was removed from the beam for counting. The 25Al beam was produced by a transfer reaction and separated by the TwinSol facility of the Nuclear Science Laboratory at the University of Notre Dame. The 25Al results will be presented along with preliminary results of more recent half-life measurements. This work was supported by the National Science Foundation.

  6. Measurement of 47K Half-Life at GRIFFIN (United States)

    Beadle, Zachary; Smith, Jenna


    The doubly magic nucleus 48Ca is both a neutron-rich benchmark for new ab initio nuclear structure calculations and a potential neutrinoless double beta decay parent. The adjacent decay of 47K to 47Ca is a simpler decay, but requires a more robust nuclear structure calculation. TRIUMF's GRIFFIN (Gamma Ray Infrastructure For Fundamental Investigations of Nuclei) array is a set of 16 HPGe clovers at the ISAC-I accelerator. This setup allows for the analysis of short-lived isotopes by delivering them to GRIFFIN shortly after their production in ISAC-I and measuring their decay radiation with GRIFFIN and associated auxiliary detectors. This poster presents the use of GRIFFIN, with the additional SCEPTAR (SCintillating Electron-Positron Tagging ARray) auxiliary detector, to improve the precision of the half-life of 47K as part of a more detailed decay spectroscopy investigation. This study was supported in part by Galakatos Funds and the Science Research Fellowship.

  7. High Precision Half-Life Measurement of ^38Ca (United States)

    Park, H. I.; Hardy, J. C.; Iacob, V. E.; Chen, L.; Goodwin, J.; Horvat, V.; Nica, N.; Trache, L.; Tribble, R. E.


    The measured ft values for superallowed 0^+ -> 0^+ nuclear β decay can be used to test the Conserved Vector Current (CVC) hypothesis and the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. One of the essential elements of this test is the calculated radiative and isospin-symmetry-breaking corrections that must be applied to the experimental data [1]. Some of these corrections depend on nuclear structure, and their uncertainties can, in principle, be reduced by improving the precision of the experimental ft values. The case of ^38Ca is particularly interesting since its structure-dependent correction is calculated to be one of the largest in the sd shell. The QEC value of the ^38Ca decay is already well measured [2], and we have now measured its half-life to better than 0.1% precision. Preliminary results will be presented. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008). [2] R. Ringle et al., Phys. Rev. C 75, 055503 (2007).

  8. Half-life and Branching ratio measurements of T = 1/2 Mirror Nuclei (United States)

    Shidling, P. D.; Melconian, D.; Behling, S.; Fenker, B.; Hardy, J. C.; Horvat, V.; Iacob, V. E.; McCleskey, E.; McCleskey, M.; Mehlman, M.; Park, H. I.; Roeder, B.


    Determining the comparative half-life of β-decay transitions between T = 1/2 isospin doublets in mirror nuclei requires measurements of both the decay rate and an angular correlation. The motivation for improving the precision of the ft value is to determine the standard-model prediction for the correlation parameters with better precision. We have measured the half-lives of 21Na and 37K, and the ground-state branching ratio of 37K, to improve the precision of their ft values. For 37K, the ft value was limited by the 0.6% uncertainty in the half-life and the 0.14% uncertainty in the ground-state branching ratio. The precision of the present half-life measurement is nearly an order of magnitude better than that of the previously accepted world average. In the case of 21Na, the ft value was limited by the 0.13% uncertainty in the half-life. A further motivation for the half-life measurement is that the half-life had previously been measured only three times, all nearly 40 years ago. The measurements were carried out at the Cyclotron Institute, Texas A&M University. An overview of both experiments and their results will be presented.

  9. High-precision half-life measurement for the superallowed Fermi β+ emitter 22Mg (United States)

    Dunlop, M. R.; Svensson, C. E.; Ball, G. C.; Leslie, J. R.; Andreoiu, C.; Bernier, N.; Bidaman, H.; Bildstein, V.; Bowry, M.; Burbadge, C.; Caballero-Folch, R.; Varela, A. Diaz; Dunlop, R.; Garnsworthy, A. B.; Garrett, P. E.; Hackman, G.; Jigmeddorj, B.; Leach, K. G.; MacLean, A. D.; Olaizola, B.; Measures, J.; Natzke, C.; Saito, Y.; Smith, J. K.; Turko, J.; Zidar, T.


    A high-precision half-life measurement for the superallowed Fermi β+ emitter 22Mg was performed at the TRIUMF-ISAC facility using a 4π proportional gas counter. The result of T1/2 = 3.87400 ± 0.00079 s is a factor of 3 more precise than the previously adopted world average and resolves a discrepancy between the two previously published 22Mg half-life measurements.

  10. Re-measurement of the half-life of ^79Se

    CERN Document Server

    Jiang Song Sheng; Diao Li Jun; Li Chun Shen; Gou Jing Ru; Wu Shao Yon


    A new attempt has been made at the re-measurement of the half-life of ^79Se. We made two major improvements over our earlier ^79Se half-life determination (Nucl. Instr. and Meth. B 123 (1997) 403). Firstly, the half-life of ^79Se was measured relative to the precisely known half-life of ^75Se, rather than by an absolute measurement of ^79Se/Se. Secondly, the Projectile X-ray Detection technique was used to separate ^79Se from its isobar, ^79Br, rather than measuring ^81Br to deduce the ^79Br interference; this technique was also used to separate ^75Se from its isobar, ^75As. A detailed description of the sample preparations, experimental setup and measurements is given. The re-measured half-life of ^79Se is (2.95 ± 0.38)×10^5 a, about a factor of 3 lower than the previous value, 1.1×10^6 a. The problems in the previous measurement are discussed.

  11. The half-life of 198Au : High-precision measurement shows no temperature dependence (United States)

    Goodwin, J. R.; Golovko, V. V.; Iacob, V. E.; Hardy, J. C.


    We have measured the half-life of the β decay of 198Au in a metallic environment, both at low temperature (19 K) and at room temperature. We find the half-lives at the two temperatures to be the same within 0.04%, a finding that contradicts a recent report of a 3.6 ± 1.0% difference in the 198Au half-life when measured at essentially the same two temperatures. Our results for the half-life, 2.6949 ± 0.0009 d at room temperature and 2.6953 ± 0.0008 d at 19 K, also agree well with previous precision room-temperature measurements.

  12. High-Precision Half-Life Measurement for the Superallowed β+ Emitter 22Mg (United States)

    Dunlop, Michelle


    High precision measurements of the Ft values for superallowed Fermi beta transitions between 0+ isobaric analogue states allow for stringent tests of the electroweak interaction. These transitions provide an experimental probe of the Conserved-Vector-Current hypothesis, the most precise determination of the up-down element of the Cabibbo-Kobayashi-Maskawa matrix, and set stringent limits on the existence of scalar currents in the weak interaction. To calculate the Ft values several theoretical corrections must be applied to the experimental data, some of which have large model dependent variations. Precise experimental determinations of the ft values can be used to help constrain the different models. The uncertainty in the 22Mg superallowed Ft value is dominated by the uncertainty in the experimental ft value. The adopted half-life of 22Mg is determined from two measurements which disagree with one another, resulting in the inflation of the weighted-average half-life uncertainty by a factor of 2. The 22Mg half-life was measured with a precision of 0.02% via direct β counting at TRIUMF's ISAC facility, leading to an improvement in the world-average half-life by more than a factor of 3.
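The uncertainty-inflation procedure described above (scaling the weighted-average uncertainty by √(χ²/ν) when the input measurements disagree) can be sketched as follows. This is an illustrative Python sketch of the standard scale-factor recipe; the numbers used in the example are hypothetical, not the actual 22Mg data:

```python
import math

def weighted_average(values, errors):
    """Error-weighted average with scale-factor inflation.

    When the inputs are mutually inconsistent (chi^2 per degree of
    freedom > 1), the uncertainty on the average is inflated by
    sqrt(chi^2 / dof), as done for discrepant half-life data."""
    w = [1.0 / e**2 for e in errors]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    err = math.sqrt(1.0 / sum(w))
    dof = len(values) - 1
    chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
    scale = math.sqrt(chi2 / dof) if dof > 0 and chi2 > dof else 1.0
    return mean, err * scale, scale
```

For two hypothetical discrepant inputs such as 10.0 ± 0.1 and 10.5 ± 0.1, the raw uncertainty 0.07 is inflated by √12.5 ≈ 3.5; for consistent inputs the scale factor stays at 1.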

  13. Precise half-life measurement of the superallowed β+ emitter K38m (United States)

    Ball, G. C.; Boisvert, G.; Bricault, P.; Churchman, R.; Dombsky, M.; Lindner, T.; MacDonald, J. A.; Vandervoort, E.; Bishop, S.; D'Auria, J. M.; Hardy, J. C.; Iacob, V. E.; Leslie, J. R.; Mak, H.-B.


    The half-life of K38m has been measured to be 924.46(14) ms, a result that is a factor of two more precise than any of the five previous measurements of this quantity. The previous results are not consistent with one another, but our result agrees well with the two most recent ones. The derived ft value for K38m is now one of the three most precisely known superallowed ft values.

  14. Precision half-life measurement of the β+ decay of K37 (United States)

    Shidling, P. D.; Melconian, D.; Behling, S.; Fenker, B.; Hardy, J. C.; Iacob, V. E.; McCleskey, E.; McCleskey, M.; Mehlman, M.; Park, H. I.; Roeder, B. T.


    The half-life of K37 has been measured to be 1.23651(94) s, a value nearly an order of magnitude more precise than the best previously reported. The β+ decay of K37 occurs mainly via a superallowed branch to the ground state of its T=1/2 mirror, Ar37. This transition has been used recently, together with similar transitions from four other nuclei, as a method for determining Vud, but the precision of its ft value was limited by the relatively large half-life uncertainty. Our result corrects that situation. Another motivation for improving the ft value was to determine the standard-model prediction for the β-decay correlation parameters, which will be compared to those currently being measured by the TRINAT Collaboration at TRIUMF. The new ft value, 4605(8) s, is now limited in precision by the 97.99(14)% ground-state branching ratio.

  15. Precise half-life measurement of the superallowed β+ emitter C10 (United States)

    Iacob, V. E.; Hardy, J. C.; Golovko, V.; Goodwin, J.; Nica, N.; Park, H. I.; Trache, L.; Tribble, R. E.


    The half-life of C10 has been measured to be 19.310(4) s, a result with 0.02% precision, which is a factor of three improvement over the best previous result. Since C10 is the lightest superallowed 0+→0+ β+ emitter, its ft value has the greatest weight in setting an upper limit on the possible presence of scalar currents.

  16. Precise half-life measurement of the superallowed β+ emitter 46V (United States)

    Park, H. I.; Hardy, J. C.; Iacob, V. E.; Chen, L.; Goodwin, J.; Nica, N.; Simmons, E.; Trache, L.; Tribble, R. E.


    The half-life of 46V has been measured to be 422.66(6) ms, which is a factor of two more precise than the best previous measurement. Our result is also consistent with the previous measurements, with no repeat of the disagreement recently encountered with QEC values measured for the same transition. The Ft value for the 46V superallowed transition, incorporating all world data, is determined to be 3074.1(26) s, a result consistent with the average Ft value of 3072.08(79) s established from the 13 best-known superallowed transitions.

  17. Precise half-life measurement of the superallowed β+ emitter 38Ca (United States)

    Park, H. I.; Hardy, J. C.; Iacob, V. E.; Banu, A.; Chen, L.; Golovko, V. V.; Goodwin, J.; Horvat, V.; Nica, N.; Simmons, E.; Trache, L.; Tribble, R. E.


    The half-life of 38Ca, a TZ=-1 superallowed 0+→ 0+β+ emitter, has been measured to be 443.77(36) ms. In our experiment, pure sources of 38Ca were produced and the decay positrons were detected in a high-efficiency 4π proportional gas counter. Since the β+ decay of 38Ca feeds 38Km, which is itself a superallowed β+ emitter, the data were analyzed as a linked parent-daughter decay. Our result, with a precision of 0.08%, is a factor of 5 improvement on the best previous result.
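A linked parent-daughter analysis of this kind fits the total observed positron rate to the sum of the parent activity and the Bateman solution for a daughter fed entirely by the parent. A minimal sketch (Python; the function name is illustrative, and 100% parent-to-daughter feeding is assumed for simplicity):

```python
import math

def linked_activity(t, n0, lam_p, lam_d):
    """Total decay rate at time t from an initially pure parent sample
    (N_p(0) = n0, N_d(0) = 0) whose daughter is fed entirely by the
    parent decay, as in 38Ca -> 38Km.

    Parent activity:   A_p(t) = lam_p * n0 * exp(-lam_p * t)
    Daughter activity: A_d(t) = lam_d * N_d(t), with the Bateman solution
    N_d(t) = n0 * lam_p/(lam_d - lam_p) * (exp(-lam_p*t) - exp(-lam_d*t)).
    """
    a_p = lam_p * n0 * math.exp(-lam_p * t)
    n_d = n0 * lam_p / (lam_d - lam_p) * (
        math.exp(-lam_p * t) - math.exp(-lam_d * t))
    return a_p + lam_d * n_d
```

With the half-lives quoted in these records (443.77 ms for 38Ca, 924.46 ms for 38Km), the formula remains valid even though the daughter outlives the parent, since the sign of (lam_d - lam_p) cancels against the exponential difference.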

  18. High-Precision Half-life Measurements for the Superallowed β+ Emitter 14O

    Directory of Open Access Journals (Sweden)

    Laffoley A. T.


    The half-life of 14O, a superallowed Fermi β+ emitter, has been determined via simultaneous γ and β counting experiments at TRIUMF's Isotope Separator and Accelerator facility. Following the implantation of 14O samples at the center of the 8π spectrometer, a γ counting measurement was performed by detecting the 2313 keV γ rays emitted from the first excited state of the daughter 14N using 20 high-purity germanium (HPGe) detectors. A simultaneous β counting experiment was performed using a fast plastic scintillator positioned directly behind the implantation site. The results, T½(γ) = 70.632 ± 0.094 s and T½(β) = 70.610 ± 0.030 s, are consistent with one another and, together with eight previous measurements, establish a new average for the 14O half-life of T½ = 70.619 ± 0.011 s with a reduced χ² of 0.99.

  19. Precise half-life measurement of the ^26Si ground state

    Energy Technology Data Exchange (ETDEWEB)

    Matea, I.; Blank, B.; Giovinazzo, J.; Huikari, J.; Pedroza, J.L. [Universite Bordeaux 1-UMR 5797 CNRS/IN2P3, Centre d' Etudes Nucleaires de Bordeaux Gradignan (France); Souin, J.[UMR 5797 CNRS/IN2P3, Centre d' Etudes Nucleaires de Bordeaux Gradignan (France); Instituto Estructura de la Materia, CSIC, Madrid (Spain); Aeystoe, J.; Elomaa, V.V.; Eronen, T.; Hager, U.; Hakala, J.; Jokinen, A.; Kankainen, A.; Moore, I.D.; Rahaman, S.; Rissanen, J.; Ronkainen, J.; Saastamoinen, A.; Sonoda, T.; Weber, C. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, Jyvaeskylae (Finland); Delahaye, P. [CERN, Geneva 23 (Switzerland)


    The β-decay half-life of ^26Si was measured with a relative precision of 1.4×10^-3. The measurement yields a value of 2.2283(27) s, which is in good agreement with previous measurements but has a precision that is better by a factor of 4. In the same experiment, we also measured the non-analogue branching ratios and could determine the superallowed one with a precision of 3%. The experiment was done at the Accelerator Laboratory of the University of Jyvaeskylae, where we used the IGISOL technique with the JYFLTRAP facility to separate pure samples of ^26Si. (orig.)

  20. Precise half-life measurements for ^38Ca and ^39Ca

    CERN Document Server

    Blank, B; Demonchy, C-E; Borge, M J G; Matea, I; Munoz, F; Huikari, J; Dominguez-Reyes, R; Plaisir, C; Sturm, S; Canchel, G; Delahaye, P; Audirac, L; Fraile, L M; Serani, L; Lunney, D; Pedroza, J-L; Bey, A; Souin, J; Hui, Tran Trong; Delalee, F; Tengblad, O; Wenander, F


    The half-lives of 38Ca and 39Ca have been measured at ISOLDE of CERN. The REXTRAP facility was used to prepare ultra-clean samples of radioactive nuclei for precision decay spectroscopy. 38Ca is one of the Tz = -1, 0+ → 0+ β-emitting nuclides used to determine the vector coupling constant of the weak interaction and the Vud quark-mixing matrix element. The result obtained, T1/2 = 443.8(19) ms, is four times more precise than the average of previous measurements. For 39Ca, a half-life of T1/2 = 860.7(10) ms is obtained, a result in agreement with the average value from the literature.


  2. Precise half-life measurement of the superallowed β+ emitter Si26 (United States)

    Iacob, V. E.; Hardy, J. C.; Banu, A.; Chen, L.; Golovko, V. V.; Goodwin, J.; Horvat, V.; Nica, N.; Park, H. I.; Trache, L.; Tribble, R. E.


    We measured the half-life of the superallowed 0+→0+ β+ emitter Si26 to be 2245.3(7) ms. We used pure sources of Si26 and employed a high-efficiency gas counter, which was sensitive to positrons from both this nuclide and its daughter Al26m. The data were analyzed as a linked parent-daughter decay. To contribute meaningfully to any test of the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, the ft value of a superallowed transition must be determined to a precision of 0.1% or better. With a precision of 0.03%, the present result is more than sufficient to be compatible with that requirement. Only the branching ratio now remains to be measured precisely before a ±0.1% ft value can be obtained for the superallowed transition from Si26.

  3. Development of a time-variable nuclear pulser for half life measurements

    Energy Technology Data Exchange (ETDEWEB)

    Zahn, Guilherme S.; Domienikan, Claudio; Carvalhaes, Roberto P. M.; Genezini, Frederico A. [Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP. P.O. Box 11049, Sao Paulo, 05422-970 (Brazil)


    In this work, a time-variable pulser system with an exponentially decaying pulse frequency is presented, developed using the low-cost, open-source Arduino microcontroller platform. In this system, the microcontroller produces a TTL signal at the selected rate, and a pulse-shaper board converts it into a conventional pulser signal that can be fed to an amplifier; both the decay constant and the initial pulse rate can be adjusted using a user-friendly control software, and the pulse amplitude can be adjusted with a potentiometer on the pulse-shaper board. The pulser was tested using several combinations of initial pulse rate and decay constant, and the results show that the system is stable, reliable, and suitable for use in half-life measurements.
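The pulse schedule such a pulser must realize can be prototyped off-line. In this sketch (plain Python, not the actual Arduino firmware; the function name is hypothetical), the k-th pulse is placed at the time where the integrated instantaneous rate r(t) = r0·exp(-λt) reaches k:

```python
import math

def pulse_times(r0, decay_const, t_max):
    """Deterministic pulse schedule whose instantaneous rate decays as
    r(t) = r0 * exp(-decay_const * t), mimicking a decaying source.

    The integrated rate is N(t) = (r0/decay_const) * (1 - exp(-decay_const*t)),
    so the k-th pulse is placed where N(t) = k, i.e. at
    t_k = -ln(1 - k*decay_const/r0) / decay_const."""
    times = []
    k = 1
    while True:
        arg = 1.0 - k * decay_const / r0
        if arg <= 0.0:          # integrated rate never reaches k
            break
        t = -math.log(arg) / decay_const
        if t > t_max:
            break
        times.append(t)
        k += 1
    return times
```

The gaps between successive pulses grow as the effective rate decays, which is exactly the behavior a half-life measurement's dead-time and pile-up checks need to see.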

  4. Precision Measurement of the 6He Half-Life and the Weak Axial Current in Nuclei

    CERN Document Server

    Knecht, A; Zumwalt, D W; Delbridge, B G; Garcia, A; Mueller, P; Swanson, H E; Towner, I S; Utsuno, S; Williams, W; Wrede, C


    Studies of 6He beta decay along with tritium can play an important role in testing ab initio nuclear wave-function calculations and may allow for fixing low-energy constants in effective field theories. Here, we present an improved determination of the 6He half-life to a relative precision of 3×10^-4. Our value of 806.89 ± 0.11(stat) +0.23/-0.19 (syst) ms resolves a major discrepancy between previous measurements. Calculating the statistical rate function, we determined the ft value to be 803.04 +0.26/-0.23 s. The extracted Gamow-Teller matrix element agrees within a few percent with ab initio calculations.

  5. Precision measurement of the half-life of $^{109}$In in large and small lattice environments

    CERN Multimedia

    We propose to undertake high-precision measurements of the half-life of $^{109}$In in large and small lattice environments to study the effect of compression on the electron-capture nuclear decay rate. Such studies are of general interest, with implications in many areas ranging from astrophysics to geophysics. At present, very little data are available on the change of the electron-capture decay rate under compression, and the data that do exist indicate an increase much greater than the predictions of the best available density-functional calculations, as obtained from the TB-LMTO or WIEN2K codes. The proposed experiment should generate more data, thus clarifying the experimental situation.

  6. Half-life measurement of the medical radioisotope 177Lu produced from the 176Yb(n,γ) reaction

    Directory of Open Access Journals (Sweden)

    Ferreira K.M.


    177Lu is a medium-energy beta emitter commonly used in nuclear medicine for radiotherapeutic applications. In this work, the half-life of 177Lu has been measured using a re-entrant ionisation chamber over a period of 82 days (approximately 12 half-lives). Unlike the majority of previous studies, the material used in this work was produced via the 176Yb(n,γ)177Yb reaction followed by β decay to 177Lu, producing insignificant quantities of 177mLu. This has resulted in the most precise half-life measurement of 177Lu to date. A half-life of 6.6430(11) days has been determined. This value is in statistical agreement with the currently recommended half-life of 6.6463(15) days (z-score = 1.8).
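The quoted z-score can be reproduced directly from the two half-life values and their standard uncertainties (a minimal Python check using the numbers from this record):

```python
import math

def z_score(x1, u1, x2, u2):
    """Number of combined standard uncertainties separating two
    independent measurements: |x1 - x2| / sqrt(u1^2 + u2^2)."""
    return abs(x1 - x2) / math.hypot(u1, u2)

# Values from the record: 6.6430(11) d vs the recommended 6.6463(15) d
z = z_score(6.6430, 0.0011, 6.6463, 0.0015)  # ≈ 1.8
```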

  7. Towards a measurement of the half-life of ^60Fe for stellar and early Solar System models

    Energy Technology Data Exchange (ETDEWEB)

    Ostdiek, K.; Anderson, T. [University of Notre Dame, Notre Dame, IN 46556 (United States); Bauder, W. [University of Notre Dame, Notre Dame, IN 46556 (United States); Argonne National Laboratory, 9700 South Cass Avenue, Lemont, IL 60439 (United States); Bowers, M.; Collon, P. [University of Notre Dame, Notre Dame, IN 46556 (United States); Dressler, R. [Paul Scherrer Institute – Laboratory for Radiochemistry and Environmental Chemistry, 5232 Villigen (Switzerland); Greene, J. [Argonne National Laboratory, 9700 South Cass Avenue, Lemont, IL 60439 (United States); Kutschera, W. [Vienna Environmental Research Accelerator Laboratory, Waehringer Strasse 17, 1090 Vienna (Austria); Lu, W. [University of Notre Dame, Notre Dame, IN 46556 (United States); Paul, M. [Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Robertson, D. [University of Notre Dame, Notre Dame, IN 46556 (United States); Schumann, D. [Paul Scherrer Institute – Laboratory for Radiochemistry and Environmental Chemistry, 5232 Villigen (Switzerland); Skulski, M. [University of Notre Dame, Notre Dame, IN 46556 (United States); Wallner, A. [The Australian National University, Canberra, ACT 0200 (Australia)


    Radioisotopes, produced in stars and ejected into the interstellar medium, are important for constraining stellar and early Solar System (ESS) models. In particular, the half-life of the radioisotope ^60Fe has an impact on calculations of the timing of ESS events, the distance to nearby supernovae, and the brightness of individual, non-steady-state ^60Fe gamma-ray sources in the Galaxy. A half-life measurement has been undertaken at the University of Notre Dame, and measurements of the ^60Fe/^56Fe concentration of our samples using Accelerator Mass Spectrometry have begun. This result will be coupled with an activity measurement of the isomeric decay in ^60Co, which is the decay product of ^60Fe. Preliminary half-life estimates of (2.53 ± 0.24) × 10^6 years seem to confirm the recent measurement by Rugel et al. (2009).

  8. High-precision half-life and branching-ratio measurements for superallowed Fermi β+ emitters at TRIUMF – ISAC

    Directory of Open Access Journals (Sweden)

    Laffoley A. T.


    A program of high-precision half-life and branching-ratio measurements for superallowed Fermi β emitters is being carried out at TRIUMF's Isotope Separator and Accelerator (ISAC) radioactive ion beam facility. Recent half-life measurements for the superallowed decays of 14O, 18Ne, and 26Alm, as well as branching-ratio measurements for 26Alm and 74Rb, are reported. These results provide demanding tests of the Standard Model and of the theoretical isospin-symmetry-breaking (ISB) corrections in superallowed Fermi β decays.

  9. Half-life of the electron-capture decay of Ru97: Precision measurement shows no temperature dependence (United States)

    Goodwin, J. R.; Golovko, V. V.; Iacob, V. E.; Hardy, J. C.


    We have measured the half-life of the electron-capture (ec) decay of Ru97 in a metallic environment, both at low temperature (19 K) and at room temperature. We find the half-lives at the two temperatures to be the same within 0.1%. This demonstrates that a recent claim that the ec decay half-life of Be7 changes by 0.9% ± 0.2% under similar circumstances certainly cannot be generalized to other ec decays. Our results for the half-life of Ru97, 2.8370(14) d at room temperature and 2.8382(14) d at 19 K, are consistent with, but much more precise than, previous room-temperature measurements. In addition, we have also measured the half-lives of the β- emitters Ru103 and Rh105 at both temperatures, and found them also to be unchanged.

  10. Precise measurement of the 222Rn half-life: A probe to monitor the stability of radioactivity

    Directory of Open Access Journals (Sweden)

    E. Bellotti


    Full Text Available We give the results of a study on the 222Rn decay we performed in the Gran Sasso Laboratory (LNGS by detecting the gamma rays from the radon progeny. The motivation was to monitor the stability of radioactivity measuring several times per year the half-life of a short lifetime (days source instead of measuring over a long period the activity of a long lifetime (tens or hundreds of years source. In particular, we give a possible reason of the large periodical fluctuations in the count rate of the gamma rays due to radon inside a closed canister which has been described in literature and which has been attributed to a possible influence of a component in the solar irradiation affecting the nuclear decay rates. We then provide the result of four half-life measurements we performed underground at LNGS in the period from May 2014 to January 2015 with radon diffused into olive oil. Briefly, we did not measure any change of the 222Rn half-life with a 8⋅10−5 precision. Finally, we provide the most precise value for the 222Rn half-life: 3.82146(16stat(4syst days.

  11. First measurement of the β-decay half-life of 206Au (United States)

    Morales, A. I.; Benzoni, G.; Al-Dahan, N.; Vergani, S.; Podolyák, Zs.; Regan, P. H.; Swan, T. P. D.; Valiente-Dobón, J. J.; Bracco, A.; Boutachkov, P.; Crespi, F. C. L.; Gerl, J.; Górska, M.; Pietri, S.; Walker, P. M.; Wollersheim, H.-J.


    The β decay of the N = 127 isotone 206Au has been investigated at the Gesellschaft für Schwerionenforschung (GSI) laboratory within the Rare Isotope Investigations at GSI (RISING) Collaboration. From the experimental data, both its half-life and the level structure of the N = 126 daughter nucleus 206Hg have been extracted. On the basis of the new results, the systematics of Au β-decay half-lives beyond the N = 126 shell closure is discussed. In addition, the interplay between allowed Gamow-Teller and first-forbidden transitions in the N > 126, Z < 82 mass region is reviewed.

  12. Crosslinking of micropatterned collagen-based nerve guides to modulate the expected half-life. (United States)

    Salvatore, L; Madaghiele, M; Parisi, C; Gatti, F; Sannino, A


    The microstructural, mechanical, compositional, and degradative properties of a nerve conduit are known to strongly affect the regenerative process of the injured peripheral nerve. Starting from the fabrication of micropatterned collagen-based nerve guides, according to a spin-casting process reported in the literature, this study further investigates the possibility of modulating the degradation rate of the scaffolds over a wide time frame, in an attempt to match the different rates of nerve regeneration that might be encountered in vivo. To this aim, three different crosslinking methods, namely dehydrothermal (DHT), carbodiimide-based (EDAC), and glutaraldehyde-based (GTA) crosslinking, were selected. The elastically effective degree of crosslinking, attained by each method and evaluated according to the classical rubber elasticity theory, was found to significantly tune the in vitro half-life (t1/2) of the matrices, with an exponential dependence of the latter on the crosslink density. The high crosslinking efficacy of EDAC and GTA treatments, respectively threefold and fourfold compared to that attained by DHT, led to a sharp increase of the corresponding in vitro half-lives (ca. 10, 172, and 690 h for DHT-, EDAC-, and GTA-treated matrices, respectively). As shown by cell viability assays, the cytocompatibility of both DHT and EDAC treatments, as opposed to the toxicity of GTA, suggests that such methods are suitable for crosslinking collagen-based scaffolds conceived for clinical use. In particular, nerve guides with expectedly high residence times in vivo might be produced by finely controlling the biocompatible reaction(s) adopted for crosslinking. © 2014 Wiley Periodicals, Inc.

  13. Activity measurement of 60Fe through the decay of 60mCo and confirmation of its half-life

    CERN Document Server

    Ostdiek, Karen; Bauder, William; Bowers, Matthew; Clark, Adam; Collon, Philippe; Dressler, Rugard; Greene, John; Kutschera, Walter; Lu, Wenting; Nelson, Austin; Paul, Michael; Robertson, Daniel; Schumann, Dorothea; Skulski, Michael


    The half-life of the neutron-rich nuclide $^{60}\\text{Fe}$ has been in dispute in recent years. A measurement published in 2009 gave a value of $(2.62 \\pm 0.04)\\times10^{6}$ years, almost twice the previously accepted value from 1984 of $(1.49 \\pm 0.27)\\times10^{6}$ years. This longer half-life was confirmed in 2015 by a new measurement, resulting in a value of $(2.50 \\pm 0.12)\\times10^{6}$ years. All three half-life measurements used the grow-in of the $\\gamma$-ray lines in $^{60}\\text{Ni}$ from the decay of the ground state $^{60\\text{g}}\\text{Co}$ (t$_{1/2}$=5.27 years) to determine the activity of a sample with a known number of $^{60}\\text{Fe}$ atoms. In contrast, the work presented here measured the $^{60}\\text{Fe}$ activity directly with the 58.6 keV $\\gamma$-ray line from the short-lived isomeric state $^{60\\text{m}}\\text{Co}$ (t$_{1/2}$=10.5 minutes), thus being independent of any possible contamination from long-lived $^{60\\text{g}}\\text{Co}$. A fraction of the material from the 2015 exper...

  14. Precise measurement of the 222Rn half-life: a probe to monitor the stability of radioactivity

    CERN Document Server

    Bellotti, E; Di Carlo, G; Laubenstein, M; Menegazzo, R


    We give the results of a study on the 222Rn decay performed in the Gran Sasso Laboratory (LNGS) by detecting the gamma rays from the radon progeny. The motivation was to monitor the stability of radioactivity by measuring, several times per year, the half-life of a short-lifetime (days) source instead of measuring over a long period the activity of a long-lifetime (tens or hundreds of years) source. In particular, we give the reason for the large periodic fluctuations in the count rate of the gamma rays due to radon inside a closed canister, which have been described in the literature and attributed to a possible influence of a component in the solar irradiation affecting the nuclear decay rates. We then provide the result of four half-life measurements performed underground at LNGS in the period from May 2014 to January 2015 with radon diffused into olive oil. Briefly, we did not measure any change of the 222Rn half-life at a precision of 8×10^-5. Finally, we provide the most precise value for the ...

  15. Measurement of the half-life of Au198 in a nonmetal: High-precision measurement shows no host-material dependence (United States)

    Goodwin, J. R.; Nica, N.; Iacob, V. E.; Dibidad, A.; Hardy, J. C.


    We have measured the half-life of the β- decay of Au198 to be 2.6948(9) d, with the nuclide sited in an insulating environment. Comparing this result with the half-life we measured previously with a metallic environment, we find the half-lives in both environments to be the same within 0.04%, thus contradicting a prediction that screening from a “plasma” of quasifree electrons in a metal increases the half-life by as much as 7%.

  16. Screening and ranking of POPs for global half-life: QSAR approaches for prioritization based on molecular structure. (United States)

    Gramatica, Paola; Papa, Ester


    Persistence in the environment is an important criterion in prioritizing hazardous chemicals and in identifying new persistent organic pollutants (POPs). Degradation half-life in various compartments is among the more commonly used criteria for studying environmental persistence, but the limited availability of experimental data or reliable estimates is a serious problem. Available half-life data for degradation in air, water, sediment, and soil, for a set of 250 organic POP-type chemicals, were combined in a multivariate approach by principal component analysis to obtain a ranking of the studied organic pollutants according to their relative overall half-life. A global half-life index (GHLI) applicable for POP screening purposes is proposed. The reliability of this index was verified by comparison with multimedia model results. This global index was then modeled as a cumulative end-point using a QSAR approach based on a few theoretical molecular descriptors, and a simple and robust regression model, externally validated for its predictive ability, was derived. The application of this model could allow a fast preliminary identification and prioritization of not-yet-known POPs, just from the knowledge of their molecular structure. This model can also be applied a priori in the chemical design of safer, alternative non-POP compounds.

  17. Surfactant phosphatidylcholine half-life and pool size measurements in premature baboons developing bronchopulmonary dysplasia

    NARCIS (Netherlands)

    D.J. Janssen; V.P. Carnielli (Virgilio); P.E. Cogo (Paola); S.R. Seidner; I.H.I. Luijendijk; J.L.D. Wattimena (Josias); A.H. Jobe (Alan); L.J.I. Zimmermann (Luc)


    Because minimal information is available about surfactant metabolism in bronchopulmonary dysplasia, we measured half-lives and pool sizes of surfactant phosphatidylcholine in very preterm baboons recovering from respiratory distress syndrome and developing bronchopulmonary dysplasia.

  18. Half-life measurements for neutral and highly-charged {alpha}-emitters

    Energy Technology Data Exchange (ETDEWEB)

    Farinon, Fabio [GSI, Darmstadt (Germany); Justus-Liebig Universitaet, Giessen (Germany); Collaboration: E073-Collaboration


    The influence of the bound electron cloud on the {alpha}-decay constant {lambda} has been discussed theoretically since the late 1950s. Tiny changes in Q-values and {alpha}-decay half-lives of fully stripped ions are expected and can provide information on the electron screening energy, from which reliable reaction rates in stellar environments can be deduced. Recently, measurements of {alpha}-decay half-lives have become feasible also for highly charged radioactive nuclides. Using a {sup 238}U beam at relativistic energies at the present FRS-ESR facility at GSI, it is possible to produce, efficiently separate and store highly charged {alpha}-emitters. {sup 213}Fr{sup 86+} ions have been investigated using the Schottky Mass Spectrometry technique. In order to establish a solid reference data set, lifetime measurements of the corresponding neutral atoms have been performed directly at the FRS by implanting the separated ions into an active silicon stopper. These results are reported.

  19. Half-life measurement of short-lived 94mRu44+ using isochronous mass spectrometry (United States)

    Zeng, Q.; Wang, M.; Zhou, X. H.; Zhang, Y. H.; Tu, X. L.; Chen, X. C.; Xu, X.; Litvinov, Yu. A.; Xu, H. S.; Blaum, K.; Chen, R. J.; Fu, C. Y.; Ge, Z.; Huang, W. J.; Li, H. F.; Liu, J. H.; Mei, B.; Shuai, P.; Si, M.; Sun, B. H.; Sun, M. Z.; Wang, Q.; Xiao, G. Q.; Xing, Y. M.; Yamaguchi, T.; Yan, X. L.; Yang, J. C.; Yuan, Y. J.; Zang, Y. D.; Zhang, P.; Zhang, W.; Zhou, X.


    Decay of the 8+ isomer in fully stripped 94Ru44+ ions is observed during their circulation in the experimental Cooler Storage Ring (CSRe) at the Heavy Ion Research Facility in Lanzhou (HIRFL). The 94Ru44+ ions were produced via projectile fragmentation and stored in the CSRe tuned into the isochronous ion-optical mode. The timing signals of the ions, passing through a time-of-flight detector, were consecutively registered and used to determine the variation of the revolution time as a function of revolution number. A sudden change of the revolution time at a specific revolution was identified as a fingerprint of the 94mRu44+ isomer decay. The isomeric half-life was deduced to be 102(17) μs, which agrees well with the theoretical expectation obtained by blocking the internal-conversion decay of the isomer. Our work proves the feasibility of studying decays of short-lived isomers in high atomic charge states using isochronous mass spectrometry. In addition, 94mRu44+ represents the shortest-lived nuclear state whose mass has ever been measured directly.

  20. Measurement of the two-neutrino double-beta decay half-life of ^{130}Te with the CUORE-0 experiment (United States)

    Alduino, C.; Alfonso, K.; Artusa, D. R.; Avignone, F. T.; Azzolini, O.; Banks, T. I.; Bari, G.; Beeman, J. W.; Bellini, F.; Bersani, A.; Biassoni, M.; Brofferio, C.; Bucci, C.; Camacho, A.; Caminata, A.; Canonica, L.; Cao, X. G.; Capelli, S.; Cappelli, L.; Carbone, L.; Cardani, L.; Carniti, P.; Casali, N.; Cassina, L.; Chiesa, D.; Chott, N.; Clemenza, M.; Copello, S.; Cosmelli, C.; Cremonesi, O.; Creswick, R. J.; Cushman, J. S.; D'Addabbo, A.; Dafinei, I.; Davis, C. J.; Dell'Oro, S.; Deninno, M. M.; Di Domizio, S.; Di Vacri, M. L.; Drobizhev, A.; Fang, D. Q.; Faverzani, M.; Feintzeig, J.; Fernandes, G.; Ferri, E.; Ferroni, F.; Fiorini, E.; Franceschi, M. A.; Freedman, S. J.; Fujikawa, B. K.; Giachero, A.; Gironi, L.; Giuliani, A.; Gladstone, L.; Gorla, P.; Gotti, C.; Gutierrez, T. D.; Haller, E. E.; Han, K.; Hansen, E.; Heeger, K. M.; Hennings-Yeomans, R.; Hickerson, K. P.; Huang, H. Z.; Kadel, R.; Keppel, G.; Kolomensky, Yu. G.; Leder, A.; Ligi, C.; Lim, K. E.; Liu, X.; Ma, Y. G.; Maino, M.; Marini, L.; Martinez, M.; Maruyama, R. H.; Mei, Y.; Moggi, N.; Morganti, S.; Mosteiro, P. J.; Napolitano, T.; Nones, C.; Norman, E. B.; Nucciotti, A.; O'Donnell, T.; Orio, F.; Ouellet, J. L.; Pagliarone, C. E.; Pallavicini, M.; Palmieri, V.; Pattavina, L.; Pavan, M.; Pessina, G.; Pettinacci, V.; Piperno, G.; Pira, C.; Pirro, S.; Pozzi, S.; Previtali, E.; Rosenfeld, C.; Rusconi, C.; Sangiorgio, S.; Santone, D.; Scielzo, N. D.; Singh, V.; Sisti, M.; Smith, A. R.; Taffarello, L.; Tenconi, M.; Terranova, F.; Tomei, C.; Trentalange, S.; Vignati, M.; Wagaarachchi, S. L.; Wang, B. S.; Wang, H. W.; Wilson, J.; Winslow, L. A.; Wise, T.; Woodcraft, A.; Zanotti, L.; Zhang, G. Q.; Zhu, B. X.; Zimmermann, S.; Zucchelli, S.


    We report on the measurement of the two-neutrino double-beta decay half-life of ^{130}Te with the CUORE-0 detector. From an exposure of 33.4 kg year of TeO_2, the half-life is determined to be T_{1/2}^{2ν } = [8.2 ± 0.2 (stat.) ± 0.6 (syst.)] × 10^{20} year. This result is obtained after a detailed reconstruction of the sources responsible for the CUORE-0 counting rate, with a specific study of those contributing to the ^{130}Te neutrinoless double-beta decay region of interest.

  1. Measurement of the two-neutrino double-beta decay half-life of {sup 130}Te with the CUORE-0 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Alduino, C.; Avignone, F.T.; Chott, N.; Creswick, R.J.; Rosenfeld, C.; Wilson, J. [University of South Carolina, Department of Physics and Astronomy, Columbia, SC (United States); Alfonso, K.; Hickerson, K.P.; Huang, H.Z.; Liu, X.; Trentalange, S.; Zhu, B.X. [University of California, Department of Physics and Astronomy, Los Angeles, CA (United States); Artusa, D.R. [University of South Carolina, Department of Physics and Astronomy, Columbia, SC (United States); INFN-Laboratori Nazionali del Gran Sasso, Assergi, L' Aquila (Italy); Azzolini, O.; Camacho, A.; Keppel, G.; Palmieri, V.; Pira, C. [INFN-Laboratori Nazionali di Legnaro, Legnaro, Padova (Italy); Banks, T.I.; Drobizhev, A.; Freedman, S.J.; Hennings-Yeomans, R.; O' Donnell, T.; Wagaarachchi, S.L. [University of California, Department of Physics, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Bari, G.; Deninno, M.M. [INFN-Sezione di Bologna, Bologna (Italy); Beeman, J.W. [Lawrence Berkeley National Laboratory, Materials Science Division, Berkeley, CA (United States); Bellini, F.; Cardani, L.; Casali, N.; Cosmelli, C.; Ferroni, F. [Sapienza Universita di Roma, Dipartimento di Fisica, Rome (Italy); INFN-Sezione di Roma, Rome (Italy); Bersani, A.; Caminata, A. [INFN-Sezione di Genova, Genova (Italy); Biassoni, M.; Carbone, L.; Cremonesi, O.; Ferri, E.; Giachero, A.; Pessina, G.; Previtali, E.; Rusconi, C. [INFN-Sezione di Milano Bicocca, Milan (Italy); Brofferio, C.; Capelli, S.; Carniti, P.; Cassina, L.; Chiesa, D.; Clemenza, M.; Faverzani, M.; Fiorini, E.; Gironi, L.; Gotti, C.; Maino, M.; Nucciotti, A.; Pavan, M.; Pozzi, S.; Sisti, M.; Terranova, F.; Zanotti, L. [Universita di Milano-Bicocca, Dipartimento di Fisica, Milan (Italy); INFN-Sezione di Milano Bicocca, Milan (Italy); Bucci, C.; Cappelli, L.; D' Addabbo, A.; Di Vacri, M.L.; Gorla, P.; Pattavina, L.; Pirro, S. 
[INFN-Laboratori Nazionali del Gran Sasso, Assergi, L' Aquila (Italy); Canonica, L. [INFN-Laboratori Nazionali del Gran Sasso, Assergi, L' Aquila (Italy); Massachusetts Institute of Technology, Cambridge, MA (United States); Cao, X.G.; Fang, D.Q.; Ma, Y.G.; Wang, H.W.; Zhang, G.Q. [Chinese Academy of Sciences, Shanghai Institute of Applied Physics, Shanghai (China); Copello, S.; Di Domizio, S.; Fernandes, G.; Marini, L.; Pallavicini, M. [INFN-Sezione di Genova, Genova (Italy); Universita di Genova, Dipartimento di Fisica, Genova (Italy); Cushman, J.S.; Davis, C.J.; Heeger, K.M.; Lim, K.E.; Maruyama, R.H. [Yale University, Department of Physics, New Haven, CT (United States); Dafinei, I.; Morganti, S.; Mosteiro, P.J.; Orio, F.; Pettinacci, V.; Tomei, C.; Vignati, M. [INFN-Sezione di Roma, Rome (Italy); Dell' Oro, S. [INFN-Laboratori Nazionali del Gran Sasso, Assergi, L' Aquila (Italy); INFN-Gran Sasso Science Institute, L' Aquila (Italy); Feintzeig, J.; Fujikawa, B.K.; Mei, Y.; Smith, A.R. [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Franceschi, M.A.; Ligi, C.; Napolitano, T.; Piperno, G. [INFN-Laboratori Nazionali di Frascati, Frascati, Rome (Italy); Giuliani, A.; Tenconi, M. [Universite Paris-Saclay, CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Orsay (France); Gladstone, L.; Leder, A.; Winslow, L.A. [Massachusetts Institute of Technology, Cambridge, MA (United States); Gutierrez, T.D. [California Polytechnic State University, Physics Department, San Luis Obispo, CA (United States); Haller, E.E. [Lawrence Berkeley National Laboratory, Materials Science Division, Berkeley, CA (United States); University of California, Department of Materials Science and Engineering, Berkeley, CA (United States); Han, K. [Yale University, Department of Physics, New Haven, CT (United States); Shanghai Jiao Tong University, Department of Physics and Astronomy, Shanghai (China); Hansen, E. 
[University of California, Department of Physics and Astronomy, Los Angeles, CA (United States); Massachusetts Institute of Technology, Cambridge, MA (United States); Kadel, R. [Lawrence Berkeley National Laboratory, Physics Division, Berkeley, CA (United States); Kolomensky, Yu.G. [University of California, Department of Physics, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Physics Division, Berkeley, CA (United States); Martinez, M. [Sapienza Universita di Roma, Dipartimento di Fisica, Rome (Italy); INFN-Sezione di Roma, Rome (Italy); Universidad de Zaragoza, Laboratorio de Fisica Nuclear y Astroparticulas, Zaragoza (Spain); Moggi, N. [INFN-Sezione di Bologna, Bologna (Italy); Alma Mater Studiorum-Universita di Bologna, Dipartimento di Scienze per la Qualita della Vita, Bologna (Italy); Nones, C. [Service de Physique des Particules, CEA/Saclay, Gif-sur-Yvette (France); Norman, E.B.; Wang, B.S. [Lawrence Livermore National Laboratory, Livermore, CA (United States); University of California, Department of Nuclear Engineering, Berkeley, CA (United States); Ouellet, J.L. [University of California, Department of Physics, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Massachusetts Institute of Technology, Cambridge, MA (United States); Pagliarone, C.E. [INFN-Laboratori Nazionali del Gran Sasso, Assergi, L' Aquila (Italy); Universita degli Studi di Cassino e del Lazio Meridionale, Dipartimento di Ingegneria Civile e Meccanica, Cassino (Italy); Sangiorgio, S.; Scielzo, N.D. [Lawrence Livermore National Laboratory, Livermore, CA (United States); Santone, D. [INFN-Laboratori Nazionali del Gran Sasso, Assergi, L' Aquila (Italy); Universita dell' Aquila, Dipartimento di Scienze Fisiche e Chimiche, L' Aquila (Italy); Singh, V. 
[University of California, Department of Physics, Berkeley, CA (US); Taffarello, L. [INFN-Sezione di Padova, Padova (IT); Wise, T. [Yale University, Department of Physics, New Haven, CT (US); University of Wisconsin, Department of Physics, Madison, WI (US); Woodcraft, A. [University of Edinburgh, SUPA, Institute for Astronomy, Edinburgh (GB); Zimmermann, S. [Lawrence Berkeley National Laboratory, Engineering Division, Berkeley, CA (US); Zucchelli, S. [INFN-Sezione di Bologna, Bologna (IT); Alma Mater Studiorum-Universita di Bologna, Dipartimento di Fisica e Astronomia, Bologna (IT)


    We report on the measurement of the two-neutrino double-beta decay half-life of {sup 130}Te with the CUORE-0 detector. From an exposure of 33.4 kg year of TeO{sub 2}, the half-life is determined to be T{sub 1/2}{sup 2ν} = [8.2 ± 0.2 (stat.) ± 0.6 (syst.)] x 10{sup 20} year. This result is obtained after a detailed reconstruction of the sources responsible for the CUORE-0 counting rate, with a specific study of those contributing to the {sup 130}Te neutrinoless double-beta decay region of interest. (orig.)

  2. Digital beta counting and pulse-shape analysis for high-precision nuclear beta decay half-life measurements: Tested on 26mAl (United States)

    Chen, L.; Hardy, J. C.; Bencomo, M.; Horvat, V.; Nica, N.; Park, H. I.


    A digital β-counting method has been developed for high-precision nuclear β-decay half-life experiments that use a gas proportional counter. An 8-bit, 1-GS/s sampling-rate digitizer was used to record the waveforms from the detector, and a software filter was designed, tested and applied successfully to discriminate genuine β-decay events from spurious signals by pulse-shape analysis. The method of using a high-speed digitizer for precision β counting is described in detail. We have extensively tested the digitizer and the off-line filter by analyzing saved waveforms from the decay of 26mAl acquired at rates up to 10,000 per second. The half-life we obtain for 26mAl is 6345.30 ± 0.90 ms, which agrees well with previously published measurements and is as precise as the best of them. This work demonstrates the feasibility of applying a high-speed digitizer and off-line digital signal processing techniques to high-precision nuclear β-decay half-life measurements.
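
    The software filter described above discriminates genuine β events from spurious signals by pulse shape. A toy illustration of the idea: the metric below (integrated area divided by peak amplitude, an "effective width" in samples), its threshold, and the waveforms are all hypothetical, not the experiment's actual filter.

```python
import numpy as np

def is_genuine_pulse(waveform, width_threshold=5.0):
    """Toy pulse-shape filter: spurious spikes are narrow, genuine
    proportional-counter pulses are broad.  The area/peak metric and
    the threshold are illustrative choices."""
    peak = waveform.max()
    if peak <= 0:
        return False
    effective_width = waveform.sum() / peak  # width in samples
    return effective_width > width_threshold

# A broad (genuine-like) pulse and a one-sample spike
t = np.arange(100)
broad = np.exp(-((t - 50) ** 2) / 200.0)
spike = np.zeros(100); spike[50] = 1.0
print(is_genuine_pulse(broad), is_genuine_pulse(spike))  # → True False
```

    A production filter would operate on digitized waveforms with baseline subtraction and a shape template, but the principle is the same: classify each pulse by a scalar shape statistic before counting it.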

  3. Measurement of the Double-Beta Decay Half-Life and Search for the Neutrinoless Double-Beta Decay of $^{48}{\\rm Ca}$ with the NEMO-3 Detector

    CERN Document Server

    Augier, C; Bakalyarov, A M; Baker, J D; Barabash, A S; Basharina-Freshville, A; Blondel, S; Blot, S; Bongrand, M; Brudanin, V; Busto, J; Caffrey, A J; Calvez, S; Cascella, M; Cerna, C; Cesar, J P; Chapon, A; Chauveau, E; Chopra, A; Duchesneau, D; Durand, D; Egorov, V; Eurin, G; Evans, J J; Fajt, L; Filosofov, D; Flack, R; Garrido, X; Gómez, H; Guillon, B; Guzowski, P; Hodák, R; Huber, A; Hubert, P; Hugon, C; Jullian, S; Klimenko, A; Kochetov, O; Konovalov, S I; Kovalenko, V; Lalanne, D; Lang, K; Lebedev, V I; Lemière, Y; Noblet, T Le; Liptak, Z; Liu, X R; Loaiza, P; Lutter, G; Mamedov, F; Marquet, C; Mauger, F; Morgan, B; Mott, J; Nemchenok, I; Nomachi, M; Nova, F; Nowacki, F; Ohsumi, H; Pahlka, R B; Perrot, F; Piquemal, F; Povinec, P; Přidal, P; Ramachers, Y A; Remoto, A; Reyss, J L; Richards, B; Riddle, C L; Rukhadze, E; Rukhadze, N I; Saakyan, R; Salazar, R; Sarazin, X; Shitov, Yu; Simard, L; Šimkovic, F; Smetana, A; Smolek, K; Smolnikov, A; Söldner-Rembold, S; Soulé, B; Štekl, I; Suhonen, J; Sutton, C S; Szklarz, G; Thomas, J; Timkin, V; Torre, S; Tretyak, Vl I; Tretyak, V I; Umatov, V I; Vanushin, I; Vilela, C; Vorobel, V; Waters, D; Zhukov, S V; Žukauskas, A


    The NEMO-3 experiment at the Modane Underground Laboratory has investigated the double-$\\beta$ decay of $^{48}{\\rm Ca}$. Using $5.25$\\,yr of data recorded with a $6.99\\,{\\rm g}$ sample of $^{48}{\\rm Ca}$, approximately $150$ double-$\\beta$ decay candidate events have been selected with a signal-to-background ratio greater than $3$. The half-life for the two-neutrino double-$\\beta$ decay of $^{48}{\\rm Ca}$ has been measured to be \\mbox{$T^{2\

  4. Characterization of the liquid argon veto of the GERDA experiment and its application for the measurement of the {sup 76}Ge half-life

    Energy Technology Data Exchange (ETDEWEB)

    Wegmann, Anne Christin


    The search for neutrinoless double-beta decay (0νββ) is one of the most active fields in modern particle physics, as the observation of this process would prove lepton number violation and imply new physics beyond the Standard Model of particle physics. The GERDA experiment searches for this decay by operating bare germanium detectors, enriched in the ββ isotope {sup 76}Ge, in liquid argon. For the first time, a ββ experiment combines the excellent properties of germanium semiconductor detectors with an active background-suppression technique based on the simultaneous detection of liquid-argon scintillation light by photomultiplier tubes and by silicon photomultipliers coupled to scintillating fibers (LAr veto). The LAr veto was successfully operated during the first six months of Phase II of the experiment and yielded - in combination with a germanium-detector pulse-shape discrimination technique - a background index of (0.7{sup +1.1}{sub -0.5}).10{sup -3} ((cts)/(kg.keV.yr)). With an ultimate exposure of 100 kg.yr, this will allow for a 0νββ-decay half-life sensitivity of the GERDA Phase II experiment of 10{sup 26} yr. Double-beta decay with the emission of two neutrinos (2νββ) is a second-order process that is allowed by the Standard Model. The excellent background reduction of the LAr veto results in an unprecedented signal-to-background ratio of 30:1 in the energy region dominated by the 2νββ decay of {sup 76}Ge. The remaining background after the LAr veto is estimated using the suppression factor from calibration-source measurements, resulting in the values T{sub 1/2}{sup 2ν}=(1.98±0.02(stat)±0.05(syst)).10{sup 21} yr and T{sub 1/2}{sup 2ν}=(1.92±0.02(stat)±0.11(syst)).10{sup 21} yr, based on two different detector designs with different detector-parameter uncertainties, both with improved systematic uncertainties in comparison to earlier measurements.

  5. Half-life determination for 27Mg (United States)

    Zahn, G. S.; Genezini, F. A.


    In this work, the half-life of the short-lived magnesium radionuclide 27Mg was measured by following the activity of samples after they were irradiated in the IEA-R1 reactor. An exponential decay function was then fitted to the results, using the counts from a 60Co source as a live-time chronometer; the individual half-life values obtained for each irradiation were combined using both the usual unweighted and σ^-2-weighted averages, as well as the robust averages obtained with the Normalized Residuals and the Rajeval techniques. The final half-life values obtained are not compatible with the ENSDF compilation values, but have a similar uncertainty; analysis of the experimental literature values, all from the 1950s and 1960s, shows that further measurements should be undertaken in order to achieve a more robust consensus value for this half-life.
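
    The σ^-2-weighted average mentioned above combines independent measurements with inverse-variance weights, so that more precise values count for more. A minimal sketch; the numbers are hypothetical, not the 27Mg data:

```python
import numpy as np

def weighted_average(values, sigmas):
    """Inverse-variance (sigma^-2) weighted mean and its uncertainty,
    a standard way to combine independent measurements."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# Hypothetical repeated half-life measurements in minutes
vals = [9.458, 9.462, 9.451]
sigs = [0.004, 0.008, 0.006]
unweighted = np.mean(vals)
wmean, werr = weighted_average(vals, sigs)
```

    The weighted mean is pulled toward the most precise value, and its uncertainty is smaller than that of any single input; robust techniques such as Normalized Residuals additionally inflate weights of outlying points before averaging.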

  6. Precision measurement of the half-life and the $\\beta$-decay Q value of the superallowed 0$^{+}\\rightarrow$ 0$^{+}\\beta$-decay of $^{38}$Ca

    CERN Multimedia


    We propose to study the $\\beta$-decay of $^{38}$Ca. In a first instance, we intend to perform a high-precision study of the half-life of this nucleus as well as a measurement of its $\\beta$-decay Q-value with ISOLTRAP. At a later stage, we propose to study its decay branches to determine the superallowed branching ratio with high precision. These measurements are essential to improve our understanding of the theoretical corrections (in particular the $\\delta$c correction factor) needed to calculate the universal Ft value from the ft value determined for individual nuclei. For this nucleus, the correction factor is predicted to increase significantly as compared to the nine well-studied nuclei between $^{10}$C and $^{54}$Co, while the model calculations used to determine the corrections, in particular the shell-model calculations, are well under control in this mass region. Therefore, the T$_{Z}$= -1 nuclei between A=18 and A=38 are ideal test cases for the correction factors which limit today the precision on t...

  7. Precision measurement of the half-life and branching ratio of the T=1/2 mirror $\\beta$-decay of $^{37}$K

    CERN Multimedia

    We propose to study the T=1/2 mirror $\\beta$-decay of $^{37}$K. Nuclear mirror $\\beta$-decay is a competitive means of testing the electroweak model through a high-precision measurement of the V$_{ud}$ element of the CKM quark-mixing matrix. One key ingredient for obtaining V$_{ud}$ is the strength of the transition, expressed by the corrected Ft value, which has to be determined with a relative precision below 10$^{−3}$. This quantity is related to the half-life T$_{1/2}$ of the decaying nucleus, the branching ratio BR for this decay and the mass difference between the mother and daughter nucleus (Q value). Another important ingredient is the mixing ratio $\\rho$ between the Fermi and the Gamow-Teller components of the transition. In most cases, $\\rho$ is the major contributor to the uncertainty on Ft. Available data on the T$_{1/2}$ and BR of $^{37}$K suffer from a lack of precision that can readily be remedied by a dedicated experiment.

  8. Half-life of 51Mn (United States)

    Graves, Stephen A.; Ellison, Paul A.; Valdovinos, Hector F.; Barnhart, Todd E.; Nickles, Robert J.; Engle, Jonathan W.


    The half-life of 51Mn was measured by serial gamma spectrometry of the 511-keV annihilation photon following decay by β+ emission. Data were collected every 100 seconds for 100,000-230,000 seconds within each measurement (n = 4). The 511-keV incidence rate was calculated from the 511-keV spectral peak area and count duration, corrected for detector dead time and radioactive decay. Least-squares regression analysis was used to determine the half-life of 51Mn while accounting for the presence of background contaminants, notably 55Co. The result was 45.59 ± 0.07 min, which is the highest-precision measurement to date and disagrees with the current Nuclear Data Sheets value by more than 6σ.
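
    The regression described above must separate the 51Mn decay from the longer-lived 55Co contaminant. One way to sketch such a two-component fit is a grid search over the 51Mn half-life with the two amplitudes solved by linear least squares at each trial; the implementation and the synthetic data below are illustrative, not the authors' analysis:

```python
import numpy as np

CO55_HALF_LIFE_MIN = 17.53 * 60.0  # 55Co half-life in minutes (literature value)

def fit_mn_half_life(t, y, grid):
    """Grid-search fit of a two-component decay: for each trial 51Mn
    half-life, solve the two amplitudes by linear least squares and
    keep the trial with the smallest squared residual."""
    best_resid, best = np.inf, None
    for t_half in grid:
        X = np.column_stack([2.0 ** (-t / t_half),
                             2.0 ** (-t / CO55_HALF_LIFE_MIN)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = np.sum((y - X @ coef) ** 2)
        if resid < best_resid:
            best_resid, best = resid, t_half
    return best

# Noiseless synthetic rate data built from the measured T(51Mn) = 45.59 min
t = np.linspace(0.0, 400.0, 200)
y = 1000.0 * 2.0 ** (-t / 45.59) + 50.0 * 2.0 ** (-t / CO55_HALF_LIFE_MIN)
best_t = fit_mn_half_life(t, y, np.arange(44.0, 47.0, 0.01))
```

    In practice one would use a nonlinear fitter with Poisson weights, but the grid search makes the structure of the problem explicit: only the half-life enters nonlinearly.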

  9. First measurements on how pressure affects the half-life of 22Na: Comparison to theory and analog to 40K (United States)

    Lee, K. K.; Nelson, R. O.; Rundberg, R.; Steinle-Neumann, G.


    Radioactive decay plays a central role in the planetary sciences, as appropriate decay schemes are used to date geological and astronomical processes, and radioactivity provides an important source of heat in planetary bodies, both in their early history during accretion and differentiation and over geological times. The most important isotopes that currently heat the Earth are 40K, 232Th, 235U and 238U. Because radioactive decay is a nuclear process, it is considered to be insensitive to external factors such as pressure or chemical environment. This has been shown to be true for α, β+ and β- processes; however, electron-capture decay depends on the electron charge density at the nucleus of a compound, which is sensitive to the external environment. Using high-resolution Ge gamma-ray detectors to make relative measurements with 137Cs and the positron decay of 22Na, we measure how pressure affects the half-life of 22Na through its electron-capture branch. Our systematics look favorable for observing this small effect. We will compare our preliminary measurements with complementary ab initio all-electron computations using the linearized augmented plane wave (LAPW) method. Using 22Na as an analog for 40K, our results suggest that the pressure effect for 40K, combined with the opposing effect of high temperatures, will have little discernible effect on heat production in the deep Earth, as our predicted changes are smaller than the uncertainties in the total decay constant for 40K. This work was supported in part by the Carnegie/DOE Alliance Center (CDAC), through the Stewardship Science Academic Alliances Program of the U.S. Department of Energy. The LANSCE facility is operated, and portions of this work were performed, by Los Alamos National Security, LLC, funded by the U.S. Department of Energy under Contract No. DE-AC52-06NA25396.

  10. Half-life of 14O

    Energy Technology Data Exchange (ETDEWEB)

    Burke, Jason T.; Vetter, Paul A.; Freedman, Stuart J.; Fujikawa,Brian K.; Winter, Wesley T.


    We have measured the half-life of 14O, a superallowed (0+ → 0+) β-decay isotope. The 14O was produced by the 12C(3He,n)14O reaction using a carbon aerogel target. A low-energy ion beam of 14O was mass separated and implanted in a thin beryllium foil. The beta particles were counted with plastic scintillator detectors. We find τ1/2 = 70.696 ± 0.037 s. This result is 2.0σ higher than an average value from six earlier experiments, but agrees more closely with the most recent previous measurement.

  11. Half-Life of $^{14}$O

    CERN Document Server

    Burke, J T; Fujikawa, B K; Vetter, P A; Winter, W T


    We have measured the half-life of $^{14}$O, a superallowed (0+ $\\to$ 0+) $\\beta$ decay isotope. The $^{14}$O was produced by the $^{12}$C($^{3}$He,n)$^{14}$O reaction using a carbon aerogel target. A low-energy ion beam of $^{14}$O was mass separated and implanted in a thin beryllium foil. The beta particles were counted with plastic scintillator detectors. We find $\\tau_{1/2} = 70.696 \\pm 0.037$ s. This result is 2.0$\\sigma$ higher than an average value from six earlier experiments, but agrees more closely with the most recent previous measurement.

  12. Theory of nuclear half-life determination by statistical sampling (United States)

    Silverman, M. P.


    A remarkable method for measuring half-lives of radioactive nuclei was proposed several years ago that entailed statistical sampling of the source activity. A histogram of half-life estimates, calculated from pairs of activity measurements separated in time, took the unexpected form of a nearly perfect Cauchy distribution, the midpoint of which corresponded very closely to the true value of the half-life. No theoretical justification of the method was given. In this article I derive the exact probability density function (pdf) of the two-point half-life estimates, show how (and under what conditions) a Cauchy distribution emerges from the exact pdf (which, mathematically, bears no resemblance to a Cauchy function), and discuss the utility of the statistical sampling method. The analysis shows that the two-point estimate, under the conditions leading to an empirical Cauchy lineshape, is an unbiased estimator of the true half-life.
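
    The two-point estimator described above can be sketched with a small simulation (a minimal illustration assuming Poisson-distributed activity measurements; the half-life, activity and timing values are hypothetical, not from the article):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    T_HALF = 100.0              # true half-life (arbitrary time units)
    LAM = np.log(2) / T_HALF    # decay constant
    A0 = 1.0e4                  # mean activity at the first measurement
    DT = 5.0                    # time separation between the two measurements

    # Simulate many pairs of activity measurements separated by DT.
    n_pairs = 100_000
    a1 = rng.poisson(A0, n_pairs)
    a2 = rng.poisson(A0 * np.exp(-LAM * DT), n_pairs)

    # Two-point half-life estimate: T = DT * ln 2 / ln(A1/A2).
    mask = (a1 > 0) & (a2 > 0) & (a1 != a2)
    t_est = DT * np.log(2) / np.log(a1[mask] / a2[mask])

    # The histogram of t_est is heavy-tailed (Cauchy-like); its midpoint,
    # taken here as the sample median, recovers the true half-life.
    print(np.median(t_est))
    ```

    Note that the sample mean of such heavy-tailed estimates is useless; it is the midpoint of the distribution that tracks the true half-life, consistent with the Cauchy lineshape discussed in the abstract.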

  13. Half-life of 31Si

    CERN Document Server

    D'Agostino, Giancarlo; Mana, Giovanni; Oddone, Massimo


    Half-life values are widely used in nuclear chemistry to model the exponential decay of the quantified radionuclides. The analysis of existing data reveals a general lack of information on the performed experiments and an almost complete absence of uncertainty budgets. This is the situation for 31Si, the radionuclide produced via neutron capture reaction recently used to quantify trace amounts of 30Si in a sample of the silicon material highly enriched in 28Si and used for the determination of the Avogadro constant. In order to improve the quality of the now recommended 157.36(26) min value, we carried out repeated observations of the 31Si decay rate via gamma-ray spectrometry measurements. This paper reports the result we obtained, including details of the experiment and the evaluation of the uncertainty.

  14. Measurement of the double-β decay half-life and search for the neutrinoless double-β decay of 48Ca with the NEMO-3 detector (United States)

    Waters, David; Vilela, Cristóvão; NEMO-3 Collaboration


    Neutrinoless double-β decay is a powerful probe of lepton number violating processes that may arise from Majorana terms in neutrino masses, or from supersymmetric, left-right symmetric, and other extensions of the Standard Model. Of the candidate isotopes for the observation of this process, 48Ca has the highest Qββ -value, resulting in decays with energies significantly above most naturally occurring backgrounds. The nucleus also lends itself to precise matrix element calculations within the nuclear shell model. We present the world’s best measurement of the two-neutrino double-β decay of 48Ca, obtained by the NEMO-3 collaboration using 5.25 yr of data recorded with a 6.99 g sample of isotope, yielding ≈ 150 events with a signal to background ratio larger than 3. Neutrinoless modes of double-β decay are also investigated, with no evidence of new physics. Furthermore, these results indicate that two-neutrino double-β decay would be the main source of background for similar future searches using 48Ca with significantly larger exposures.

  15. Improving the Precision of the Half Life of 34Ar (United States)

    Iacob, V. E.; Hardy, J. C.; Bencomo, M.; Chen, L.; Horvat, V.; Nica, N.; Park, H. I.


    Currently, precise ft-values measured for superallowed 0+ -->0+ β transitions provide the most accurate value for Vud, the up-down quark-mixing element of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. This enables the most demanding test of CKM unitarity, one of the pillars of the Standard Model. Further improvements in precision are possible if the ft values for pairs of mirror 0+ -->0+ transitions can be measured with 0.1% precision or better. The decays of 34Ar and 34Cl are members of such a mirror pair, but so far the former is not known with sufficient precision. Since our 2006 publication of the half-life of 34Ar, we have significantly improved our acquisition and analysis techniques, adding refinements that have led to increased accuracy. The 34Cl half-life is about twice that of 34Ar. This obscures the 34Ar contribution to the decay in measurements such as ours, which detected the decay positrons and was thus unable to differentiate between the parent and daughter decays. We report here two experiments aiming to improve the precision of the 34Ar half-life: the first detected positrons, as in our 2006 measurement, but with improved controls; the second measured γ rays in coincidence with positrons, thus achieving a clear separation of the 34Ar decay from that of 34Cl.
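
    The parent-daughter entanglement described above follows from the Bateman equations: a positron counter sees the sum of the 34Ar activity and the in-growing 34Cl activity. A minimal sketch (half-lives set to roughly the known values, equal detection efficiencies assumed for illustration):

    ```python
    import numpy as np

    # Nominal half-lives in seconds (roughly the known values; illustrative).
    T_AR, T_CL = 0.84, 1.53
    l1, l2 = np.log(2) / T_AR, np.log(2) / T_CL

    def total_activity(t, n0=1.0):
        """Summed positron activity of a 34Ar sample and its in-growing
        34Cl daughter (Bateman solution), assuming equal detection
        efficiencies for parent and daughter positrons."""
        parent = n0 * l1 * np.exp(-l1 * t)
        daughter = n0 * l1 * l2 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))
        return parent + daughter

    t = np.linspace(0.0, 10.0, 6)
    print(total_activity(t))  # total decay curve: not a single exponential
    ```

    Because the total curve is a sum of two exponentials with comparable half-lives, a single-exponential fit to positron data alone is biased, which is why the βγ-coincidence measurement in the abstract cleanly isolates the 34Ar component.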

  16. Investigating the relationship between the half-life decay of the height and the coefficient of restitution of bouncing balls using a microcomputer-based laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Amrani, D, E-mail: djilali.amrani@etsmtl.c [Physics Laboratory, Service des Enseignements Generaux, Ecole de Technologie Superieure, University of Quebec, 1111 rue Notre-Dame Ouest, Montreal, Quebec H3C 1K3 (Canada)


    This pedagogical activity is aimed at students using a computer-learning environment with advanced tools for data analysis. It investigates the relationship between the coefficient of restitution and the way the heights of different bouncing balls decrease in a number of bounces with time. The time between successive ball bounces, or time-of-flight, is used to determine the initial height and the coefficient of restitution due to the ball's impact on a hard horizontal surface. The measurement techniques and the results obtained are pedagogically useful for undergraduate students during the manipulation and analysis of laboratory experiments dealing with the physics of bouncing balls.
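
    The kinematics behind the activity are compact: a flight time t between bounces implies a rebound height h = g t²/8, and the coefficient of restitution is the ratio of successive flight times, e = t(n+1)/t(n). A short sketch (the flight times are hypothetical, not measured data):

    ```python
    G = 9.81  # m/s^2, gravitational acceleration

    def height_from_flight_time(t):
        """Rebound height reached during a free flight of duration t (s)."""
        return G * t ** 2 / 8.0

    def restitution(times):
        """Mean coefficient of restitution from successive flight times."""
        ratios = [t2 / t1 for t1, t2 in zip(times, times[1:])]
        return sum(ratios) / len(ratios)

    flight_times = [0.90, 0.81, 0.73, 0.66]  # s, hypothetical bounce data
    print(height_from_flight_time(flight_times[0]))  # ≈ 0.99 m
    print(restitution(flight_times))                 # ≈ 0.90
    ```

    The geometric decrease of the heights (h scales with t², so by e² per bounce) is what gives the bounce-height envelope its half-life-like exponential decay.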

  17. Experimental implementation and proof of principle for a radionuclidic purity test solely based on half-life measurement

    DEFF Research Database (Denmark)

    Jørgensen, Thomas; Jensen, Mikael


    In this paper we present the results of an experimental implementation of the method (Jorgensen et al., 2012) for testing the radionuclidic purity (RNP) of F-18 compounds. The overall limitations of the experimental methods and their possible impacts on RNP detectability have been identified. We...

  18. Half-life determination for 108Ag and 110Ag (United States)

    Zahn, Guilherme S.; Genezini, Frederico A.


    In this work, the half-lives of the short-lived silver radionuclides 108Ag and 110Ag were measured by following the activity of samples after they were irradiated in the IEA-R1 reactor. The results were then fitted using a non-paralyzable dead-time correction to the regular exponential decay, and the individual half-life values obtained were then analyzed using both the Normalized Residuals and the Rajeval techniques, in order to reach the most exact and precise final values. To check the validity of the dead-time correction, a second correction method was also employed by means of counting a long-lived 60Co radioactive source together with the samples as a livetime chronometer. The final half-life values obtained using both dead-time correction methods were in good agreement, showing that the correction was properly assessed. The results obtained are partially compatible with the literature values, but with a lower uncertainty, and allow a discussion on the last ENSDF compilations' values.
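
    A non-paralyzable dead-time correction folded into an exponential decay fit can be sketched as follows (the dead time, count rates and half-life are hypothetical stand-ins, with the half-life set near the ~2.4 min value of 108Ag; scipy's curve_fit does the fitting):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    TAU = 5.0e-6  # s, non-paralyzable dead time (hypothetical)

    def observed_rate(t, a0, lam):
        """True rate a0*exp(-lam*t) as distorted by a non-paralyzable
        dead time: m = n / (1 + n*tau)."""
        n = a0 * np.exp(-lam * t)
        return n / (1.0 + n * TAU)

    # Synthetic decay data (counts/s) over 10 minutes, 1% noise.
    rng = np.random.default_rng(1)
    t = np.arange(0.0, 600.0, 10.0)
    true_a0, true_lam = 5.0e4, np.log(2) / 142.0
    y = observed_rate(t, true_a0, true_lam) * rng.normal(1.0, 0.01, t.size)

    popt, _ = curve_fit(observed_rate, t, y, p0=(4.0e4, 0.005))
    print(np.log(2) / popt[1])  # recovered half-life, ~142 s
    ```

    Fitting the distorted model directly, rather than "correcting" the data first, is one way to propagate the dead-time effect consistently into the half-life uncertainty.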

  19. Half-life determination of the ground state decay of ¹¹¹Ag. (United States)

    Collins, S M; Harms, A V; Regan, P H


    The radioactive decay half-life of the β⁻ emitter ¹¹¹Ag has been measured using decay transitions identified with a high-purity germanium γ-ray spectrometer. The time series of measurements of the net peak areas of the 96.8 keV, 245.4 keV and 342.1 keV γ-ray emissions following the β⁻ decay of ¹¹¹Ag were made over approximately 23 days, i.e. ~3 half-life periods. The measured half-life of the ground-state decay of ¹¹¹Ag was determined as 7.423 (13) days, which is consistent with the Evaluated Nuclear Structure Data File (ENSDF) recommended half-life of 7.45 (1) days at k=2. Utilising all available experimental half-life values, a revised recommended half-life of 7.452 (12) days has been determined.
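
    Deriving a revised recommended value from all available determinations is typically an inverse-variance weighted mean, with the uncertainty inflated when the scatter exceeds the quoted errors. A sketch (the input values below are illustrative, not the actual evaluation dataset):

    ```python
    import numpy as np

    def weighted_mean(values, sigmas):
        """Inverse-variance weighted mean with Birge-ratio inflation of
        the uncertainty when the data scatter more than their errors allow."""
        v = np.asarray(values, dtype=float)
        w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        mean = np.sum(w * v) / np.sum(w)
        err = np.sqrt(1.0 / np.sum(w))
        # Reduced chi-square of the data about the weighted mean.
        chi2_red = np.sum(w * (v - mean) ** 2) / (len(v) - 1)
        if chi2_red > 1.0:
            err *= np.sqrt(chi2_red)  # inflate for excess scatter
        return mean, err

    # Hypothetical half-life determinations (days) and uncertainties.
    vals = [7.45, 7.423, 7.47]
    sigs = [0.010, 0.013, 0.020]
    print(weighted_mean(vals, sigs))
    ```

    Evaluation procedures such as the Normalized Residuals and Rajeval techniques mentioned elsewhere in these records are refinements of this basic weighted-averaging step.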

  20. Determination of environmental dependence of the β^- decay half-life of ^198Au (United States)

    Dibidad, A.; Goodwin, J.; Hardy, J.


    A series of articles by the C. Rolfs group [1] claimed changes in the half-lives of isotopes undergoing α, β^-, β^+, and electron-capture decays as the temperature was reduced from room temperature to 12 K. These isotopes were contained in metallic, conductive environments, such as Au, Cu, and Pd, but it was also suggested that the half-life would differ in an insulator. One publication [1] reported the half-life of ^198Au in a gold metal environment to change by 3.6 ±1.0% between room temperature and 12 K. Until then, radioactive half-lives were considered independent of environmental factors. We repeated the measurements of the ^198Au half-life in a gold metal environment under conditions similar to those of ref. [1] and demonstrated [2] that the half-life is the same at both temperatures to within 0.04%, two orders of magnitude below the original claims. In the experiment reported here, we measured the half-life of ^198Au in an insulating environment -- gold (III) oxide -- at room temperature. Preliminary results indicate there is no difference between the half-life measured in an insulator and that measured in a conductor. [1] T. Spillane et al, Eur. Phys. J. A 31, 203 (2007) [2] J.R. Goodwin et al, Eur. Phys. J. A 34, 271 (2007)

  1. Determination of the half-life of 213Fr with high precision (United States)

    Fisichella, M.; Musumarra, A.; Farinon, F.; Nociforo, C.; Del Zoppo, A.; Figuera, P.; La Cognata, M.; Pellegriti, M. G.; Scuderi, V.; Torresi, D.; Strano, E.


    High-precision measurement of the half-lives and Qα values of neutral and highly charged α emitters is currently a major subject of investigation. In this framework, we recently pushed half-life measurements of neutral emitters to a precision of a few per mil. This result was achieved by using different techniques and apparatuses at Istituto Nazionale di Fisica Nucleare Laboratori Nazionali del Sud (INFN-LNS) and GSI Darmstadt. Here we report on the 213Fr half-life determination [T1/2(213Fr) = 34.14±0.06 s] at INFN-LNS, detailing the measurement protocol used. Direct comparison with the accepted literature value shows a discrepancy of more than three sigma. We propose this new value as a reference, discussing previous experiments.

  2. Designing of peptides with desired half-life in intestine-like environment

    KAUST Repository

    Sharma, Arun


    Background: In the past, a number of peptides have been reported to possess highly diverse properties ranging from cell penetrating, tumor homing, anticancer, anti-hypertensive, antiviral to antimicrobials. Owing to their excellent specificity, low toxicity, rich chemical diversity and availability from natural sources, the FDA has approved a number of peptide-based drugs, and several are in various stages of drug development. Though peptides have proven to be good drug candidates, their usage is still hindered mainly because of their high susceptibility to protease degradation. We have developed an in silico method to predict the half-life of peptides in an intestine-like environment and to design better peptides having optimized physicochemical properties and half-life. Results: In this study, we have used 10mer (HL10) and 16mer (HL16) peptide datasets to develop prediction models for peptide half-life in an intestine-like environment. First, SVM-based models were developed on the HL10 dataset, which achieved maximum correlations R/R2 of 0.57/0.32, 0.68/0.46, and 0.69/0.47 using amino acid, dipeptide and tripeptide composition, respectively. Secondly, models developed on the HL16 dataset showed maximum R/R2 of 0.91/0.82, 0.90/0.39, and 0.90/0.31 using amino acid, dipeptide and tripeptide composition, respectively. Furthermore, models developed on selected features achieved correlations (R) of 0.70 and 0.98 on the HL10 and HL16 datasets, respectively. Preliminary analysis suggests the role of charged residues and amino acid size in peptide half-life/stability. Based on the above models, we have developed a web server named HLP (Half Life Prediction) for predicting and designing peptides with desired half-life. The web server provides three facilities: i) half-life prediction, ii) physicochemical properties calculation and iii) designing mutant peptides. Conclusion: In summary, this study describes a web server 'HLP' that has been developed for assisting scientific
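
    The modelling approach described (support-vector regression on amino-acid composition features) can be sketched as follows. This is a minimal illustration assuming scikit-learn is available; the peptides and their "half-lives" are synthetic stand-ins for the HL10/HL16 datasets, with a toy dependence on charged-residue content:

    ```python
    import numpy as np
    from sklearn.svm import SVR

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def aa_composition(peptide):
        """Fractional amino-acid composition: a 20-dimensional feature vector."""
        return np.array([peptide.count(a) / len(peptide) for a in AMINO_ACIDS])

    # Synthetic training data: random 10-mers whose toy 'half-life' depends
    # on the fraction of charged residues (D, E, K, R) plus noise.
    rng = np.random.default_rng(0)
    peptides = ["".join(rng.choice(list(AMINO_ACIDS), 10)) for _ in range(200)]
    X = np.array([aa_composition(p) for p in peptides])
    charge = X[:, [AMINO_ACIDS.index(a) for a in "DEKR"]].sum(axis=1)
    y = 2.0 + 5.0 * charge + rng.normal(0.0, 0.1, len(peptides))

    model = SVR(kernel="rbf", C=10.0).fit(X, y)
    print(model.predict(X[:3]))  # predicted half-lives for the first peptides
    ```

    Dipeptide and tripeptide composition, as used in the abstract, simply extend the feature vector to 400 and 8000 counts-based dimensions in the same framework.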

  3. Confirmation of the Precise Half Life of ^26Si (United States)

    Iacob, V.; Golovko, V.; Goodwin, J.; Hardy, J. C.; Nica, N.; Park, H. I.; Trache, L.; Tribble, R. E.


    Precise ft-values (with uncertainties below 0.1%) for superallowed 0^+->0^+ β transitions provide a demanding test of the Standard Model via the unitarity of the Cabibbo-Kobayashi-Maskawa matrix. Our preliminary report of such a measurement for the half-life of ^26Si [1] was consistent with the previously accepted value but turned out to be higher than a subsequent result published in 2008 [2]. This prompted us to repeat the measurement described in [1] with increased statistics and with a strong focus on all experimental details that could have generated a biased result. We collected more than 200 million ^26Si nuclei in 60 separate runs, which differed from one another in their discriminator threshold, detector bias or dominant dead-time setting. We repeatedly verified and confirmed the stability of the source purity and detector response function. The analysis of these separate runs shows no systematic bias with these parameters and confirms our initial result [1]. The discrepancy between our result and that of reference [2] can be accounted for by the latter's neglect [3] of the difference in beta-detection efficiencies between the parent and daughter decays. Our preliminary result is 2245(3) ms, with the final analysis expected to yield an uncertainty of 0.05% or better. [1] V. Iacob et al., Bulletin APS 53, (12) DNP-Meeting 2008 [2] I. Matea et al., EPJ A37, 151 (2008) [3] B. Blank, private communication

  4. A formula for half-life of proton radioactivity (United States)

    Zhang, Zhi-Xing; Dong, Jian-Min


    We present a formula for the proton radioactivity half-lives of spherical proton emitters with the inclusion of the spectroscopic factor. The coefficients in the formula are calibrated with the available experimental data. As an input to calculate the half-life, the spectroscopic factor that characterizes the important information on nuclear structure should be obtained with a nuclear many-body approach. This formula is found to work quite well and to be in better agreement with experimental measurements than other theoretical models. Therefore, it can be used as a powerful tool in the investigation of proton emission, in particular for experimentalists. Supported by National Natural Science Foundation of China (11435014, 11405223, 11675265, 11575112), the 973 Program of China (2013CB834401, 2013CB834405), National Key Program for S&T Research and Development (2016YFA0400501), the Knowledge Innovation Project (KJCX2-EW-N01) of Chinese Academy of Sciences, the Funds for Creative Research Groups of China (11321064) and the Youth Innovation Promotion Association of Chinese Academy of Sciences

  5. Dependence of the half-life of 221Fr on the implantation environment

    DEFF Research Database (Denmark)

    Olaizola, B.; Fraile, L.M.; Riisager, Karsten


    The possible dependence of the half-life of 221Fr on the solid-state environment has been investigated by the simultaneous measurement of implanted 221Fr ions in an insulator (Si) and a metallic substrate (Au) at the ISOLDE facility at CERN. Our results indicate that, if existing, the difference...

  6. The elimination half-life of benzodiazepines and fall risk: two prospective observational studies

    NARCIS (Netherlands)

    de Vries, O.J.; Peeters, G.M.E.E.; Elders, P.J.M.; Sonnenberg, C.M.; Muller, M.; Deeg, D.J.H.; Lips, P.T.A.M.


    Background: the STOPP criteria advise against the use of long-acting benzodiazepines (LBs). Objective: to study whether LBs are associated with a higher fall risk than short-acting benzodiazepines (SBs) (elimination half-life ≤10 h). Methods: we used base-line data and prospective fall follow-up

  7. PASylation of Murine Leptin Leads to Extended Plasma Half-Life and Enhanced in Vivo Efficacy. (United States)

    Morath, Volker; Bolze, Florian; Schlapschy, Martin; Schneider, Sarah; Sedlmayer, Ferdinand; Seyfarth, Katrin; Klingenspor, Martin; Skerra, Arne


    Leptin plays a central role in the control of energy homeostasis and appetite and, thus, has attracted attention for therapeutic approaches in spite of its limited pharmacological activity owing to its very short circulation time in the body. To improve drug delivery and prolong plasma half-life, we have fused murine leptin with Pro/Ala/Ser (PAS) polypeptides of up to 600 residues, which adopt random coil conformation with expanded hydrodynamic volume in solution and, consequently, retard kidney filtration in a similar manner as polyethylene glycol (PEG). Relative to unmodified leptin, size exclusion chromatography and dynamic light scattering revealed an approximately 21-fold increase in apparent size and a much larger molecular diameter of around 18 nm for PAS(600)-leptin. High receptor-binding activity for all PASylated leptin versions was confirmed in BIAcore measurements and cell-based dual-luciferase assays. Pharmacokinetic studies in mice revealed a much extended plasma half-life after ip injection, from 26 min for the unmodified leptin to 19.6 h for the PAS(600) fusion. In vivo activity was investigated after single ip injection of equimolar doses of each leptin version. Strongly increased and prolonged hypothalamic STAT3 phosphorylation was detected for PAS(600)-leptin. Also, a reduction in daily food intake by up to 60% as well as loss in body weight of >10% lasting for >5 days was observed, whereas unmodified leptin was merely effective for 1 day. Notably, application of a PASylated superactive mouse leptin antagonist (SMLA) led to the opposite effects. Thus, PASylated leptin not only provides a promising reagent to study its physiological role in vivo but also may offer a superior drug candidate for clinical therapy.
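
    Plasma half-lives such as the 26 min versus 19.6 h comparison above follow from first-order elimination kinetics, C(t) = C0·exp(-kt) with t1/2 = ln 2 / k. A brief sketch (the sampling times and concentrations are hypothetical):

    ```python
    import math

    def half_life_from_two_points(t1, c1, t2, c2):
        """Elimination half-life from two plasma concentration samples,
        assuming first-order (single-exponential) elimination."""
        k = math.log(c1 / c2) / (t2 - t1)
        return math.log(2) / k

    # Hypothetical plasma samples: (time in h, concentration in ng/mL).
    print(half_life_from_two_points(1.0, 100.0, 3.0, 25.0))  # → 1.0 h
    ```

    Real pharmacokinetic analyses fit many time points (often with a distribution phase as well), but the terminal half-life reported in such studies is this same log-linear slope.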

  8. Superior serum half life of albumin tagged TNF ligands

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Nicole [Division of Molecular Internal Medicine, Department of Internal Medicine II, University Hospital Wuerzburg, Roentgenring 11, 97070 Wuerzburg (Germany); Schneider, Britta; Pfizenmaier, Klaus [Institute of Cell Biology and Immunology, University of Stuttgart, Allmandring 31, 70569 Stuttgart (Germany); Wajant, Harald, E-mail: [Division of Molecular Internal Medicine, Department of Internal Medicine II, University Hospital Wuerzburg, Roentgenring 11, 97070 Wuerzburg (Germany)


    Due to their immune stimulating and apoptosis inducing properties, ligands of the TNF family attract increasing interest as therapeutic proteins. A general limitation of in vivo applications of recombinant soluble TNF ligands is their notoriously rapid clearance from circulation. To improve the serum half-life of the TNF family members TNF, TWEAK and TRAIL, we genetically fused soluble variants of these molecules to human serum albumin (HSA). The serum albumin-TNF ligand fusion proteins were found to be of similar bioactivity as the corresponding HSA-less counterparts. Upon intravenous injection (i.v.), the serum half-life of HSA-TNF ligand fusion proteins, as determined by ELISA, was around 15 h, compared to approximately 1 h for all of the recombinant control TNF ligands without the HSA domain. Moreover, serum samples collected 6 or 24 h after i.v. injection still contained high TNF ligand bioactivity, demonstrating that there is only limited degradation/inactivation of circulating HSA-TNF ligand fusion proteins in vivo. In a xenotransplantation model, significantly less of the HSA-TRAIL fusion protein compared to the respective control TRAIL protein was required to achieve inhibition of tumor growth, indicating that the increased half-life of HSA-TNF ligand fusion proteins translates into better therapeutic action in vivo. In conclusion, our data suggest that genetic fusion to serum albumin is a powerful and generally applicable means to improve bioavailability and in vivo activity of TNF ligands.

  9. High-precision half-life determination for 21Na using a 4 π gas-proportional counter (United States)

    Finlay, P.; Laffoley, A. T.; Ball, G. C.; Bender, P. C.; Dunlop, M. R.; Dunlop, R.; Hackman, G.; Leslie, J. R.; MacLean, A. D.; Miller, D.; Moukaddam, M.; Olaizola, B.; Severijns, N.; Smith, J. K.; Southall, D.; Svensson, C. E.


    A high-precision half-life measurement for the superallowed β+ transition between the isospin T = 1/2 mirror nuclei 21Na and 21Ne has been performed at the TRIUMF-ISAC radioactive ion beam facility, yielding T1/2 = 22.4506(33) s, a result that is a factor of 4 more precise than the previous world-average half-life for 21Na and represents the single most precisely determined half-life for a transition between mirror nuclei to date. The contribution to the uncertainty in the 21Na mirror Ft value due to the half-life is now reduced to the level of the nuclear-structure-dependent theoretical corrections, leaving the branching ratio as the dominant experimental uncertainty.

  10. Determination of the half-life of {sup 105m}Rh

    Energy Technology Data Exchange (ETDEWEB)

    Kronenberg, A.; Siemon, K.; Weber, R.; Esterlund, R.A.; Patzelt, P


    Following a fast chemical separation of Ru isotopes from a fission-product mixture, {sup 105m}Rh was periodically extracted from its precursor (4.44-h {sup 105}Ru) for measurements of its half-life. The new value for the T{sub 1/2} of {sup 105m}Rh of (43.0{+-}0.3) s resolves the long-standing conflict in the literature between the two earlier measured values of 45 and 30 s.

  11. A New Method to Determine the Half-Life for Penicillin Using Microcalorimeter (United States)

    Li, Z. X.; Zhao, W. W.


    The dissolution process of penicillin in normal saline and isotonic glucose solution was studied using a microcalorimeter. Both the integral and differential heats of solution were measured. The quantitative relationships between the amount of heat released and the quantity of dissolved penicillin were established. Meanwhile, the kinetics and the half-life of the dissolution processes, as well as the enthalpy of solution, the entropy of dissolution, and the free energy of dissolution, were determined. The results showed that a change of the solvent from normal saline to isotonic glucose solution had little effect on the half-life of penicillin in the dissolution process, and there was no significant difference between the stabilities of penicillin in isotonic glucose solution and normal saline. Moreover, the dissolution process of penicillin in isotonic glucose solution followed first-order kinetics. These results could provide a theoretical basis for the clinical applications of penicillin.
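
    For a first-order process followed calorimetrically, the cumulative heat release takes the form Q(t) = Q∞·(1 - exp(-kt)), and the half-life is ln 2 / k. A sketch of extracting the half-life from such data (the calorimeter readings below are hypothetical; scipy's curve_fit does the fit):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cumulative_heat(t, q_inf, k):
        """Cumulative heat release of a first-order dissolution process."""
        return q_inf * (1.0 - np.exp(-k * t))

    # Hypothetical calorimeter data: time (min) and heat released (J).
    t = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
    q = np.array([0.0, 3.9, 6.4, 9.2, 10.8, 11.2])

    (q_inf, k), _ = curve_fit(cumulative_heat, t, q, p0=(10.0, 0.05))
    print(np.log(2) / k)  # half-life of the dissolution, in minutes
    ```

    Comparing the fitted k between solvents (saline versus glucose solution) is exactly the stability comparison the abstract describes.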

  12. A half-life the divided life of Bruno Pontecorvo, physicist or spy

    CERN Document Server

    Close, Frank


    Bruno Pontecorvo dedicated his career to hunting for the Higgs boson of his day: the neutrino, a nearly massless particle considered essential to the process of nuclear fission. His work on the Manhattan project under Enrico Fermi confirmed his reputation as a brilliant physicist and helped usher in the nuclear age. He should have won a Nobel Prize, but late in the summer of 1950 he vanished. At the height of the Cold War, Pontecorvo had disappeared behind the Iron Curtain. In Half-Life, physicist and historian Frank Close offers a heretofore untold history of Pontecorvo’s life, based on unprecedented access to his friends, family, and colleagues. With all the elements of a Cold War thriller—classified atomic research, an infamous double agent, a kidnapping by Soviet operatives—Half-Life is a history of particle physics at perhaps its most powerful: when it created the bomb.

  13. Development of controller of acquisition and sample positioner for activation for use in measurements of short half-life radioisotopes; Desenvolvimento de dispositivo movimentador automatizado de amostras com vista a aplicacao em medidas de radioisotopos que possuem curto tempo de meia-vida

    Energy Technology Data Exchange (ETDEWEB)

    Secco, Marcello


    High-resolution gamma spectroscopy measurements have several applications. Those involving short half-life radioisotopes may suffer from low precision when the radioactive source is far from the detector end cap and, in very high activity situations, from loss of accuracy due to dead-time and pile-up effects. A way to overcome these problems is to change the source-detector distance as the activity decreases, thereby maximizing the counting statistics. In the present study, the Controller of Acquisition and Sample Positioner for Activation (CASPA) was developed. It is a low-cost, lightweight device, made with low-atomic-number materials and designed to assist gamma spectroscopy measurements; it is able to control the distance between the source and the detector, even allowing this distance to change during the measurement process. Because it is automated, it saves operator time, giving the operator complete freedom to program routine measurements on the device, and it minimizes the radiation dose to the operator. An interface that allows the user to control the CASPA system and program the acquisition system was created. Tests aiming to optimize the operation of the CASPA system were carried out, the safety of CASPA operation was verified, and no failures occurred during the tests. Repeatability tests were performed by acquiring a {sup 60}Co standard source, and it was found that the automated positioning system reproduced the results of the static system at a 95% confidence level. (author)

  14. Determination of the half-life of $^{105m}$Rh

    CERN Document Server

    Kronenberg, A K; Weber, R; Esterlund, R A; Patzelt, P


    Following a fast chemical separation of Ru isotopes from a fission-product mixture, $^{105m}$Rh was periodically extracted from its precursor (4.44-h $^{105}$Ru) for measurements of its half-life. The new value for the T$_{1/2}$ of $^{105m}$Rh of (43.0 ± 0.3) s resolves the long-standing conflict in the literature between the two earlier measured values of 45 and 30 s.

  15. Changing Paradigm of Hemophilia Management: Extended Half-Life Factor Concentrates and Gene Therapy. (United States)

    Kumar, Riten; Dunn, Amy; Carcao, Manuel


    Management of hemophilia has evolved significantly in the last century: from recognition of the causative mechanism in the 1950s to commercially available clotting factor concentrates in the 1960s. Availability of lyophilized concentrates in the 1970s set the stage for home-based therapy, followed by introduction of virally attenuated plasma-derived, and then recombinant factor concentrates in the 1980s and 1990s, respectively. The subsequent years saw a paradigm shift in treatment goals from on-demand therapy to prophylactic factor replacement starting at an early age, to prevent hemarthrosis becoming the standard of care for patients with severe hemophilia. In the developed world, the increasing use of home-based prophylactic regimens has significantly improved the quality of life, and life expectancy of patients with severe hemophilia. Seminal developments in the past 5 years, including the commercial availability of extended half-life factor concentrates and the publication of successful results of gene therapy for patients with hemophilia B, promise to further revolutionize hemophilia care over the next few decades. In this review, we summarize the evolution of management for hemophilia, with a focus on extended half-life factor concentrates and gene therapy.

  16. Half-life determination for {sup 108}Ag and {sup 110}Ag

    Energy Technology Data Exchange (ETDEWEB)

    Zahn, Guilherme S.; Genezini, Frederico A. [Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP, P.O. Box 11049, São Paulo, 05422-970 (Brazil)


    In this work, the half-lives of the short-lived silver radionuclides {sup 108}Ag and {sup 110}Ag were measured by following the activity of samples after they were irradiated in the IEA-R1 reactor. The results were then fitted using a non-paralyzable dead-time correction to the regular exponential decay, and the individual half-life values obtained were then analyzed using both the Normalized Residuals and the Rajeval techniques, in order to reach the most exact and precise final values. To check the validity of the dead-time correction, a second correction method was also employed by means of counting a long-lived {sup 60}Co radioactive source together with the samples as a livetime chronometer. The final half-life values obtained using both dead-time correction methods were in good agreement, showing that the correction was properly assessed. The results obtained are partially compatible with the literature values, but with a lower uncertainty, and allow a discussion on the last ENSDF compilations' values.

  17. Half-life and branching ratios for the β decay of {sup 38}Ca

    Energy Technology Data Exchange (ETDEWEB)

    Blank, B.; Ascher, P.; Audirac, L.; Bacquias, A.; Canchel, G.; Daudin, L.; Gerbaux, M.; Giovinazzo, J.; Grevy, S.; Kurtukian Nieto, T.; Munoz, F.; Roche, M.; Serani, L.; Smirnova, N.; Souin, J. [Universite de Bordeaux, Centre d' Etudes Nucleaires de Bordeaux Gradignan (France); Thomas, J.C.; Caceres, L.; Oliveira Santos, F. de [CEA/DSM-CNRS/IN2P3, Grand Accelerateur National d' Ions Lourds, Caen (France); Didierjean, F. [CNRS/IN2P3/Universite de Strasbourg, Institut Pluridisciplinaire Hubert Curien, Strasbourg (France); Matea, I. [CNRS/IN2P3/Universite Paris-Sud, Institut de Physique Nucleaire, Orsay (France)


    In an experiment at the LISE3 facility of GANIL, we have studied with high precision the 0{sup +} → 0{sup +} β decay of {sup 38}Ca. The LISE3 facility allowed us to produce nearly pure samples of the nuclide of interest. We measured the half-life of this nucleus to be 443.63(35) ms, whereas the superallowed branching ratio was determined to be 77.14(35)%. Both results are in good agreement with previous high-precision measurements and thus improve the overall precision of the experimental inputs used to determine the corrected Ft value for this nucleus. We also compare the experimental Gamow-Teller strength distribution with theoretical shell-model predictions. Finally, future opportunities at LISE3 are discussed. (orig.)

  18. Biological half-life of iodine in adults with intact thyroid function and in athyreotic persons

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, G.H.; Hauck, B.M.; Chamberlain, M.J


    A joint project between the Human Monitoring Laboratory (HML) and the Ottawa Hospital has measured the retention of {sup 131}I in patients who have received the radioiodine diagnostically. Thirty-nine subjects with intact thyroid glands and nine athyreotic subjects were measured in the HML's whole-body/thyroid counter to determine the retention of {sup 131}I following its medical administration. The average biological half-life of {sup 131}I in 26 euthyroid subjects was found to be 66.1{+-}6.3 days, which may be statistically significantly lower than the ICRP recommended value of 80 days. Nine hyperthyroid patients had a mean biological half-life of 38.2{+-}8.6 days and in three hypothyroid patients the corresponding value was 29.3{+-}8.8 days. Thyroid {sup 131}I uptake was measured in a conventional clinical fashion at the Ottawa Hospital Civic campus 24 h after oral administration of the radioiodine using a collimated thick sodium iodide detector placed over the neck anteriorly. Measured values were 0.144{+-}0.009, 0.314{+-}0.035 and 0.045{+-}0.010 of the administered dose in euthyroid, hyperthyroid and hypothyroid patients respectively. The euthyroid range at the hospital is 0.06-0.22. Uptake was significantly lower for the euthyroid group than the ICRP value of 0.3. The radioiodine retention in athyreotic subjects followed a two-compartment model with biological half-lives of 1.0{+-}0.2 days and 18.4{+-}1.1 days. (author)
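The two-compartment retention reported for athyreotic subjects can be sketched as a sum of exponentials; the compartment weights below are invented for illustration (the abstract reports the half-lives but not the weights):

```python
import numpy as np

def retention(t_days, frac_fast=0.8, t_half_fast=1.0, t_half_slow=18.4):
    """Two-compartment retention: fraction of administered 131I remaining at
    time t (days). frac_fast is a hypothetical compartment weight; the two
    half-lives are the values reported in the abstract."""
    lam1 = np.log(2) / t_half_fast
    lam2 = np.log(2) / t_half_slow
    return frac_fast * np.exp(-lam1 * t_days) + (1 - frac_fast) * np.exp(-lam2 * t_days)

print(round(retention(0.0), 3))   # 1.0 by construction
print(round(retention(7.0), 3))   # slow compartment dominates after a week
```

After a few days the fast compartment is exhausted and the curve is governed entirely by the 18.4-day component.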

  19. Effect of parent and daughter deformation on half-life time in exotic ...

    Indian Academy of Sciences (India)

    Taking Coulomb and proximity potential as interacting barrier for post-scission region we calculated half-life time for different modes of exotic decay treating parent and fragments as spheres and these values are compared with experimental data. We studied the effect of deformation of parent and daughter on half-life time ...

  20. A biomimetic approach for enhancing the in vivo half-life of peptides. (United States)

    Penchala, Sravan C; Miller, Mark R; Pal, Arindom; Dong, Jin; Madadi, Nikhil R; Xie, Jinghang; Joo, Hyun; Tsai, Jerry; Batoon, Patrick; Samoshin, Vyacheslav; Franz, Andreas; Cox, Trever; Miles, Jesse; Chan, William K; Park, Miki S; Alhamadsheh, Mamoun M


    The tremendous therapeutic potential of peptides has not yet been realized, mainly owing to their short in vivo half-life. Although conjugation to macromolecules has been a mainstay approach for enhancing protein half-life, the steric hindrance of macromolecules often harms the binding of peptides to target receptors, compromising the in vivo efficacy. Here we report a new strategy for enhancing the in vivo half-life of peptides without compromising their potency. Our approach involves endowing peptides with a small molecule that binds reversibly to the serum protein transthyretin. Although there are a few molecules that bind albumin reversibly, we are unaware of designed small molecules that reversibly bind other serum proteins and are used for half-life extension in vivo. We show here that our strategy was effective in enhancing the half-life of an agonist for GnRH receptor while maintaining its binding affinity, which was translated into superior in vivo efficacy.

  1. Initial fate of prions upon peripheral infection: half-life, distribution, clearance, and tissue uptake (United States)

    Urayama, Akihiko; Morales, Rodrigo; Niehoff, Michael L.; Banks, William A.; Soto, Claudio


    Prion diseases are infectious neurodegenerative disorders associated with the misfolded prion protein (PrPSc), which appears to be the sole component of the infectious agent (termed prion). To produce disease, prions have to be absorbed into the body and reach sufficient quantities in the brain. Very little is known about the biological mechanisms controlling the initial fate of prions. Here, we studied the systemic pharmacokinetics and biodistribution of PrPSc in vivo. After an intravenous injection of highly purified radiolabeled or native unlabeled PrPSc, the protein was eliminated rapidly from the serum (half-life of 3.24 h), mostly through tissue uptake. The quantity of intact PrPSc reaching the brain was ∼0.2% of the injected dose per gram of brain tissue (ID/g). The highest levels were found in liver (∼20% ID/g), spleen (∼13% ID/g), and kidney (∼7.4% ID/g). Cell surface PrPC does not appear to play a role in PrPSc pharmacokinetics, since the infectious protein distributed similarly in wild-type and PrP-null mice. To measure tissue uptake kinetics and biodistribution accurately, vascular space in tissues was measured with radioactively labeled albumin coinjected with radioactively labeled PrPSc. Our results provide a fundamental pharmacokinetic characterization of PrPSc in vivo, which may be relevant to estimate tissue risks and mechanisms of prion neuroinvasion and to identify novel therapeutic strategies.—Urayama, A., Morales, R., Niehoff, M. L., Banks, W. A., Soto, C. Initial fate of prions upon peripheral infection: half-life, distribution, clearance, and tissue uptake PMID:21555356

  2. Unexpectedly long half-life of metformin elimination in cases of metformin accumulation. (United States)

    Kajbaf, F; Bennis, Y; Hurtel-Lemaire, A-S; Andréjak, M; Lalau, J-D


    In a study of the oral administration of a single dose of metformin to healthy participants, the estimated half-life (t½) for the elimination of the drug from erythrocytes was found to be 23.4 h (compared with 2.7 h for metformin in plasma). However, these pharmacokinetic indices have not been well defined in metformin accumulation. We systematically reviewed all the data on plasma and erythrocyte metformin assays available in our centre. We then selected patients with a plasma metformin concentration ≥ 5 mg/l and in whom the metformin concentration had been remeasured once or more at least 5 days after admission. Twelve patients met the aforementioned criteria. All but one of these patients displayed generally severe lactic acidosis on admission (mean ± sd pH and lactate: 6.88 ± 0.35 and 14.8 ± 6.56 mmol/l, respectively) and 11 were treated with dialysis. The mean ± sd time interval between the first and last blood sample collections for metformin measurement was 8.3 ± 3.2 days (range 5-14 days). Five days after the first sample had been collected, metformin was still detectable in plasma and in erythrocytes in all patients. Metformin remained detectable for up to 13 days (both in plasma and in erythrocytes). The estimated mean terminal t½ for metformin in plasma and erythrocytes was 51.9 and 43.4 h, respectively. The prolonged elimination of accumulated metformin (even after dialysis therapy) challenges the traditional view that the drug clears rapidly because of a short half-life in plasma. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.
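Given any two timed concentrations on a log-linear decline, the terminal half-life follows from t½ = ln 2 · Δt / ln(C1/C2); the concentrations below are invented for illustration, not patient data:

```python
import math

def terminal_half_life(c1, c2, delta_t_h):
    """Terminal elimination half-life (h) from two concentrations measured
    delta_t_h hours apart, assuming first-order (log-linear) decline."""
    return math.log(2) * delta_t_h / math.log(c1 / c2)

# Hypothetical example: plasma metformin falling from 20 to 1.25 mg/l
# over 8.3 days (the mean sampling interval reported in the abstract)
t_half = terminal_half_life(20.0, 1.25, 8.3 * 24)
print(round(t_half, 1))  # 49.8
```

A ~50 h terminal half-life, as in this toy example, is of the same order as the 51.9 h reported for plasma in the accumulation setting.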

  3. Estimation of the Biological Half-Life of Methylmercury Using a Population Toxicokinetic Model

    Directory of Open Access Journals (Sweden)

    Seongil Jo


    Methylmercury is well known for causing adverse health effects in the brain and nervous system. Estimating the elimination constant derived from the biological half-life of methylmercury in the blood or hair is an important part of calculating guidelines for methylmercury intake. Thus, this study was conducted to estimate the biological half-life of methylmercury in Korean adults. We used a one-compartment model with a direct relationship between methylmercury concentrations in the blood and daily dietary intake of methylmercury. We quantified the between-person variability of the methylmercury half-life in the population, and informative priors were used to estimate the parameters in the model. The population half-life of methylmercury was estimated to be 80.2 ± 8.6 days. The population mean of the methylmercury half-life was 81.6 ± 8.4 days for men and 78.9 ± 8.6 days for women. The standard deviation of the half-life was estimated at 25.0 ± 8.6 days. Using the direct relationship between methylmercury concentrations in blood and methylmercury intake, the biological half-life in this study was estimated to be longer than indicated by the earlier studies that have been used to set guideline values.
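The one-compartment relationship described above can be sketched as follows; the absorption fraction, blood volume, and blood-fraction defaults are illustrative assumptions, not the study's fitted parameters:

```python
import math

def elimination_constant(half_life_days):
    """First-order elimination constant k = ln(2) / t_half (per day)."""
    return math.log(2) / half_life_days

def steady_state_blood_conc(daily_intake_ug, half_life_days,
                            absorbed_fraction=0.95, blood_volume_l=5.0,
                            fraction_to_blood=0.05):
    """One-compartment steady state: C_ss = intake * f_abs * f_blood / (k * V).
    All default parameter values are hypothetical placeholders."""
    k = elimination_constant(half_life_days)
    return daily_intake_ug * absorbed_fraction * fraction_to_blood / (k * blood_volume_l)

# With the population half-life of 80.2 days reported in the abstract:
c_ss = steady_state_blood_conc(1.0, 80.2)   # µg/l per 1 µg/day of intake
```

A longer half-life lowers k, so the same daily intake yields a proportionally higher steady-state blood concentration, which is why the half-life estimate feeds directly into intake guidelines.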

  4. Thirty years after Chernobyl: Long-term determination of (137)Cs effective half-life in the lichen Stereocaulon vesuvianum. (United States)

    Savino, F; Pugliese, M; Quarto, M; Adamo, P; Loffredo, F; De Cicco, F; Roca, V


    It has been widely shown that nuclear fallout includes substances which accumulate in organisms such as crustaceans, fish, mushrooms and lichens, making it possible to evaluate the activity concentration of contaminants accumulated over long periods. In this context, radiocaesium deposited in soil following the Chernobyl accident on 26 April 1986 is known to have remained persistently available for plant uptake in many areas of Europe. Studies on the lichen Stereocaulon vesuvianum show the plant's high capacity to retain radionuclides from the substrate and the air. After the Chernobyl accident, starting from September 1986, four monitoring campaigns to evaluate the activity concentration of four isotopes of the two elements caesium and ruthenium ((134)Cs, (137)Cs, (103)Ru and (106)Ru) were carried out until 1999 at the Radioactivity Laboratory (LaRa) of the University of Naples Federico II. This study allowed the effective half-lives of (134)Cs and (137)Cs to be estimated. Twenty-eight years after the accident, in December 2014, a further sampling was carried out; only (137)Cs was revealed beyond the detection limits, with measured activity concentrations ranging from 20 to 40 Bq/kg, while the other radionuclides were no longer observed owing to their shorter half-lives. The last sampling allowed a more precise determination of the effective half-life of (137)Cs (6.2 ± 0.1 year), thanks to the larger dataset spanning a longer time period. Copyright © 2017 Elsevier Ltd. All rights reserved.
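An effective half-life combines physical decay and environmental loss in parallel, 1/T_eff = 1/T_phys + 1/T_env; a minimal sketch of that decomposition, using the 6.2 y value above and the known 30.17 y physical half-life of (137)Cs:

```python
def environmental_half_life(t_eff, t_phys):
    """Environmental-loss half-life implied by an observed effective half-life,
    from 1/T_eff = 1/T_phys + 1/T_env (all in the same time unit)."""
    return 1.0 / (1.0 / t_eff - 1.0 / t_phys)

# 137Cs: physical half-life 30.17 y; effective half-life 6.2 y in the lichen
t_env = environmental_half_life(6.2, 30.17)
print(round(t_env, 1))  # ≈ 7.8
```

Because the physical half-life is much longer than the effective one, the observed decline is dominated by environmental removal rather than radioactive decay.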

  5. Real-time RT-PCR analysis of mRNA decay: half-life of Beta-actin mRNA in human leukemia CCRF-CEM and Nalm-6 cell lines

    Directory of Open Access Journals (Sweden)

    Barredo Julio C


    Background: We describe an alternative method to determine mRNA half-life (t1/2) based on the Real-Time RT-PCR procedure. This approach was evaluated by using the β-actin gene as a reference molecule for measuring mRNA stability. Results: Human leukemia Nalm-6 and CCRF-CEM cells were treated with various concentrations of Actinomycin D to block transcription and aliquots were removed periodically. Total RNA was isolated and quantified using the RiboGreen® fluorescent dye with the VersaFluor Fluorometer System. One μg of total RNA was reverse transcribed and used as template for the amplification of a region of the β-actin gene (231 bp). To generate the standard curve, serial ten-fold dilutions of the pBactin-231 vector containing the amplified cDNA fragment were employed. β-actin mRNAs were quantified by Real-Time RT-PCR using the SYBR® Green I fluorogenic dye and data were analyzed using the iCycler iQ system software. Using this method, the β-actin mRNA exhibited a half-life of 6.6 h and 13.5 h in Nalm-6 and CCRF-CEM cells, respectively. The t1/2 value obtained for Nalm-6 is comparable to those estimated from Northern blot studies using normal human leukocytes (5.5 h). Conclusions: We have developed a rapid, sensitive, and reliable method based on Real-Time RT-PCR for measuring mRNA half-life. Our results confirm that β-actin mRNA half-life can be affected by the cellular growth rate.
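The half-life estimate reduces to a log-linear fit of transcript abundance against time after transcriptional arrest; a sketch on synthetic data with a known 6.6 h half-life (the Nalm-6 value), not the study's measurements:

```python
import numpy as np

def mrna_half_life(hours, abundance):
    """Fit ln(abundance) vs time after Actinomycin D treatment;
    t_half = ln(2) / decay rate from the least-squares slope."""
    slope = np.polyfit(hours, np.log(abundance), 1)[0]
    return np.log(2) / -slope

# Synthetic sampling times (h) and abundances with a true half-life of 6.6 h
t = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 24.0])
levels = 100.0 * np.exp(-np.log(2) / 6.6 * t)
est = mrna_half_life(t, levels)
print(round(est, 1))  # 6.6
```

In practice the abundances come from the Real-Time RT-PCR standard curve; only the final log-linear regression is shown here.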

  6. Variation in absorption and half-life of hydrocortisone influence plasma cortisol concentrations. (United States)

    Hindmarsh, Peter C; Charmandari, Evangelia


    Hydrocortisone therapy should be individualized in congenital adrenal hyperplasia (CAH) patients to avoid over- and under-replacement. We have assessed how differences in absorption and half-life of cortisol influence glucocorticoid exposure. Forty-eight patients (21 M) aged between 6·1 and 20·3 years with CAH due to CYP21A2 deficiency were studied. Each patient underwent a 24-h plasma cortisol profile, with the morning dose used to calculate absorption parameters, along with an intravenous (IV) hydrocortisone (15 mg/m(2) body surface area) bolus assessment of half-life. Parameters derived were maximum plasma concentration (Cmax ), time of maximum plasma concentration (tmax ) and time to attaining a plasma cortisol concentration less than 100 nmol/l. Mean half-life was 76·5 ± 5·2 (range 40-225·3) min, Cmax 780·7 ± 61·6 nmol/l and tmax 66·7 (range 20-118) min. Time taken to reach a plasma cortisol concentration less than 100 nmol/l was 289 (range 140-540) min. Those with a fast half-life and slow tmax took longest to reach a plasma cortisol concentration less than 100 nmol/l (380 ± 34·6 min), compared to those with a slow half-life and fast tmax (298 ± 34·8 min) and those with a fast half-life and fast tmax (249·5 ± 14·4 min) (one-way ANOVA F = 4·52; P = 0·009). Both the rate of absorption and the half-life of cortisol in the circulation play important roles in determining overall exposure to oral glucocorticoid. Dose regimens need to incorporate estimates of these parameters into determining the optimum dosing schedule for individuals. © 2014 John Wiley & Sons Ltd.
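Assuming a mono-exponential fall from Cmax at tmax (a simplification of the measured profiles), the time to drop below 100 nmol/l follows directly from the reported mean values:

```python
import math

def time_below_threshold(cmax, tmax_min, half_life_min, threshold=100.0):
    """Minutes from dosing until plasma cortisol falls below `threshold`,
    assuming a single-exponential decline from Cmax starting at tmax."""
    return tmax_min + half_life_min * math.log2(cmax / threshold)

# Mean values from the abstract: Cmax 780.7 nmol/l, tmax 66.7 min, t1/2 76.5 min
t_100 = time_below_threshold(780.7, 66.7, 76.5)
print(round(t_100))  # close to the reported mean of 289 min
```

The agreement with the measured 289 min mean suggests the simple decline model captures the dominant behaviour, even though individual profiles vary widely.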

  7. Using gamma distribution to determine half-life of rotenone, applied in freshwater

    Energy Technology Data Exchange (ETDEWEB)

    Rohan, Maheswaran, E-mail: [Department of Biostatistics and Epidemiology, Auckland University of Technology, Auckland (New Zealand); Fairweather, Alastair; Grainger, Natasha [Science and Capability, Department of Conservation, Hamilton (New Zealand)


    Following the use of rotenone to eradicate invasive pest fish, a dynamic first-order kinetic model is usually used to determine the half-life and rate at which rotenone dissipates from the treated waterbody. In this study, we investigate the use of a stochastic gamma model for determining the half-life and rate at which rotenone dissipates from waterbodies. The first-order kinetic and gamma models produced similar values for the half-life (4.45 days and 5.33 days respectively) and days to complete dissipation (51.2 days and 52.48 days respectively). However, the gamma model fitted the data better and was more flexible than the first-order kinetic model, allowing us to use covariates and to predict a possible range for the half-life of rotenone. These benefits are particularly important when examining the influence that different environmental factors have on rotenone dissipation and when trying to predict the rate at which rotenone will dissipate during future operations. We therefore recommend that in future the gamma distribution model be used when calculating the half-life of rotenone, in preference to the dynamic first-order kinetics model. - Highlights: • We investigated the use of the gamma model to calculate the half-life of rotenone. • Physical and environmental variables can be incorporated into the model. • A method for calculating the range around a mean half-life is presented. • The model is more flexible than the traditionally used first-order kinetic model.
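One way to read the gamma model is as a distribution of dissipation times whose median plays the role of the half-life; a sketch (assuming SciPy is available) with illustrative shape and scale parameters, not the paper's fitted values:

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma (CDF core)

def gamma_model_half_life(shape, scale):
    """Time at which the gamma-model remaining fraction, 1 - CDF(t), drops to
    one half, found by bisection on the gamma CDF."""
    lo, hi = 0.0, 100.0 * scale
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gammainc(shape, mid / scale) < 0.5:   # CDF still below one half
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With shape = 1 the gamma model reduces to first-order kinetics, so the
# half-life equals ln(2)/rate; e.g. an assumed rate of 0.13/day gives ~5.33 days.
t_half_gamma = gamma_model_half_life(1.0, 1.0 / 0.13)
```

Shape values other than 1 bend the survival curve away from a pure exponential, which is what gives the gamma model its extra flexibility over first-order kinetics.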

  8. Movement Complexity and Neuromechanical Factors Affect the Entropic Half-Life of Myoelectric Signals (United States)

    Hodson-Tole, Emma F.; Wakeling, James M.


    Appropriate neuromuscular functioning is essential for survival, and features underpinning motor control are present in myoelectric signals recorded from skeletal muscles. One approach to quantify control processes related to function is to assess signal variability using measures such as Sample Entropy. Here we developed a theoretical framework to simulate the effect of variability in burst duration, activation duty cycle, and intensity on the Entropic Half-Life (EnHL) in myoelectric signals. EnHLs were predicted to depend on signal amplitude and activation duty cycle. Comparison with myoelectric data from rats walking and running at a range of speeds and inclines confirmed the predicted range of EnHLs; however, the direction of EnHL change in response to altered locomotor demand was not correctly predicted. The discrepancy reflected different associations between the ratio of the standard deviation and mean signal intensity (Ist:It¯) and duty factor in the simulated and physiological data, likely reflecting additional information in the signals from the physiological data (e.g., quiescent-phase content; variation in action potential shapes). EnHL could have significant value as a novel marker of neuromuscular responses to alterations in perceived locomotor task complexity and intensity. PMID:28974932
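Sample Entropy, the variability measure named above, can be sketched compactly; this is a simplified variant (the template counts differ by one between lengths m and m+1) with illustrative parameters, not the paper's analysis pipeline:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample Entropy SampEn(m, r): -ln(A/B), where B counts pairs of matching
    templates of length m and A of length m+1 (Chebyshev distance < r)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def match_pairs(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # pairwise Chebyshev distances between all templates
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        n = len(templates)
        return ((d < r).sum() - n) / 2      # drop self-matches, count pairs once
    B, A = match_pairs(m), match_pairs(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
noise = rng.standard_normal(300)            # unpredictable signal
regular = np.sin(np.linspace(0, 30, 300))   # highly predictable signal
```

A regular signal yields a much lower SampEn than white noise, which is the property the EnHL framework exploits across reshaping scales.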

  9. α-decay half-life calculations of superheavy nuclei using artificial neural networks (United States)

    Bayram, Tuncay; Akkoyun, Serkan; Okan Kara, S.


    Investigations of superheavy elements (SHE) have received much attention in the last two decades, owing to the successful syntheses of SHE. In particular, the α-decay of SHEs has great importance because most synthesized SHEs undergo α-decay, and experimentalists evaluate theoretical predictions of the α-decay half-life during experimental design. Because of this, the correct prediction of the α-decay half-life is important for investigating superheavy as well as heavy nuclei. In this work, artificial neural networks (ANN) have been employed on experimental α-decay half-lives of superheavy nuclei. Statistical modeling of the α-decay half-lives of superheavy nuclei was found to be successful.

  10. CYP2A6 Genotype but not Age Determines Cotinine Half-life in Infants and Children


    Dempsey, Delia A.; Sambol, Nancy C.; Jacob, Peyton; Hoffmann, E.; Tyndale, Rachel F.; Fuentes-Afflick, Elena; Benowitz, Neal L.


    The formation of cotinine, the main proximate metabolite and a biomarker of nicotine exposure, is mediated primarily by CYP2A6. Our aim was to determine if higher cotinine levels in young children exposed to secondhand smoke (SHS) are a result of age-related differences in pharmacokinetics. Forty-nine participants, 2 to 84 months old, received oral deuterium-labeled cotinine, with daily urine samples for up to 10 days for cotinine half-life measurement. DNA from saliva was used for CYP2A6 gen...

  11. Intrinsically Disordered Segments Affect Protein Half-Life in the Cell and during Evolution

    Directory of Open Access Journals (Sweden)

    Robin van der Lee


    Precise control of protein turnover is essential for cellular homeostasis. The ubiquitin-proteasome system is well established as a major regulator of protein degradation, but an understanding of how inherent structural features influence the lifetimes of proteins is lacking. We report that yeast, mouse, and human proteins with terminal or internal intrinsically disordered segments have significantly shorter half-lives than proteins without these features. The lengths of the disordered segments that affect protein half-life are compatible with the structure of the proteasome. Divergence in terminal and internal disordered segments in yeast proteins originating from gene duplication leads to significantly altered half-life. Many paralogs that are affected by such changes participate in signaling, where altered protein half-life will directly impact cellular processes and function. Thus, natural variation in the length and position of disordered segments may affect protein half-life and could serve as an underappreciated source of genetic variation with important phenotypic consequences.

  12. Intrinsically disordered segments and the evolution of protein half-life (United States)

    Babu, M.


    Precise turnover of proteins is essential for cellular homeostasis and is primarily mediated by the proteasome. Thus, a fundamental question is: What features make a protein an efficient substrate for degradation? Here I will present results that proteins with a long terminal disordered segment or internal disordered segments have a significantly shorter half-life in yeast. This relationship appears to be evolutionarily conserved in mouse and human. Furthermore, upon gene duplication, divergence in the length of terminal disorder or variation in the number of internal disordered segments results in significant alteration of the half-life of yeast paralogs. Many proteins that exhibit such changes participate in signaling, where altered protein half-life will likely influence their activity. We suggest that variation in the length and number of disordered segments could serve as a remarkably simple means to evolve protein half-life and may serve as an underappreciated source of genetic variation with important phenotypic consequences. MMB acknowledges the Medical Research Council for funding his research program.

  13. Effect of parent and daughter deformation on half-life time in exotic ...

    Indian Academy of Sciences (India)

    scission region we calculated half-life time for different modes of exotic decay treating parent and fragments as spheres and these values are compared with experimental data. We studied the effect of deformation of parent and daughter on ...

  14. Safety and Efficacy of BAY 94-9027, a Prolonged-Half-Life Factor VIII

    DEFF Research Database (Denmark)

    Reding, M T; Ng, H J; Poulsen, Lone Hvitfeldt


    BACKGROUND: BAY 94-9027 is a B-domain-deleted prolonged-half-life recombinant factor VIII (FVIII) conjugated in a site-specific manner with polyethylene glycol. OBJECTIVE: Assess efficacy and safety of BAY 94-9027 for prophylaxis and treatment of bleeds in patients with severe hemophilia A PATIEN...

  15. Advances in therapeutic Fc engineering - modulation of IgG associated effector functions and serum half-life

    Directory of Open Access Journals (Sweden)

    Abhishek Saxena


    Today, monoclonal immunoglobulin gamma (IgG) antibodies have become a major option in cancer therapy, especially for patients with advanced or metastatic cancers. The efficacy of monoclonal antibodies (mAbs) is achieved through both the antigen-binding fragment (Fab) and the crystallizable fragment (Fc). The Fab can specifically recognize a tumor-associated antigen (TAA) and thus modulate TAA-linked downstream signaling pathways that may lead to inhibition of tumor growth and induction of tumor apoptosis and differentiation. The Fc region can further improve mAbs' efficacy by mediating effector functions such as antibody-dependent cellular cytotoxicity (ADCC), complement-dependent cytotoxicity (CDC) and antibody-dependent cell-mediated phagocytosis (ADCP). Moreover, Fc is the region that interacts with the neonatal Fc receptor (FcRn) in a pH-dependent manner, which can slow down IgG's degradation and extend its serum half-life. Loss of the antibody Fc region dramatically shortens its serum half-life and weakens its anti-cancer effects. Given the essential roles that the Fc region plays in modulating the efficacy of mAbs in cancer treatment, Fc engineering has been extensively studied in the past years. This review focuses on the recent advances in therapeutic Fc engineering that modulate its related effector functions and serum half-life. We also discuss the progress made in aglycosylated mAb development, which may substantially reduce the cost of manufacture while maintaining efficacy similar to that of conventional glycosylated mAbs. Finally, we highlight several Fc-engineered mAbs under clinical trials.

  16. The elimination half-life of benzodiazepines and fall risk: two prospective observational studies. (United States)

    de Vries, Oscar J; Peeters, Geeske; Elders, Petra; Sonnenberg, Caroline; Muller, Majon; Deeg, Dorly J H; Lips, Paul


    The STOPP criteria advise against the use of long-acting benzodiazepines (LBs). We studied whether LBs are associated with a higher fall risk than short-acting benzodiazepines (SBs) (elimination half-life ≤ 10 h). We used baseline data and prospective fall follow-up from the Longitudinal Aging Study Amsterdam, a longitudinal cohort study including 1,509 community-dwelling older persons (Study 1), and from a separate fall prevention study with 564 older persons after a fall (Study 2). Time to the first fall after inclusion and number of falls in the first year after inclusion were the primary endpoints. In both Study 1 and Study 2, the use of SBs was associated with time to the first fall, hazard ratio (HR) 1.62 (95% CI: 1.03-2.56) and HR 1.64 (95% CI: 1.19-2.26), respectively. LBs were not significantly associated with time to first fall, HR 1.40 (0.85-2.31) and HR 1.08 (0.72-1.62). In both studies, the use of SBs was also associated with number of falls, odds ratio (OR) 1.28 (95% CI: 1.01-1.61) and OR 1.37 (95% CI: 1.10-1.70). LBs were not significantly associated with number of falls, OR 1.23 (0.96-1.57) and 1.10 (0.82-1.48). The use of SBs is not associated with a lower fall risk compared with LBs. The use of both SBs and LBs by older persons should be strongly discouraged.

  17. Determining thyroid {sup 131}I effective half-life for the treatment planning of Graves' disease

    Energy Technology Data Exchange (ETDEWEB)

    Willegaignon, Jose; Sapienza, Marcelo T.; Barberio Coura Filho, George; Buchpiguel, Carlos A. [Cancer Institute of Sao Paulo State (ICESP), Clinical Hospital, School of Medicine, University of Sao Paulo, Sao Paulo 01246-000 (Brazil); Nuclear Medicine Service, Department of Radiology, School of Medicine, University of Sao Paulo, Sao Paulo 01246-000 (Brazil); Traino, Antonio C. [Unit of Medical Physics, Azienda Ospedaliero-Universitaria Pisana, Pisa 56126 (Italy)


    Purpose: Thyroid {sup 131}I effective half-life (T{sub eff}) is an essential parameter in patient therapy when an accurate radiation dose is desirable for producing an intended therapeutic outcome. Multiple {sup 131}I uptake measurements, and resources from patients themselves and from nuclear medicine facilities, are requisites for determining T{sub eff}, these being limiting factors when implementing the treatment planning of Graves' disease (GD) in radionuclide therapy. With the aim of optimizing this process, this study presents a practical and accurate method of determining T{sub eff} for dosimetric purposes. Methods: A total of 50 patients with GD were included in this prospective study. Thyroidal {sup 131}I uptake was measured at 2-h, 6-h, 24-h, 48-h, 96-h, and 220-h postradioiodine administration. T{sub eff} was calculated by considering sets of two measured points (24-48-h, 24-96-h, and 24-220-h), sets of three (24-48-96-h, 24-48-220-h, and 24-96-220-h), and the set of four (24-48-96-220-h). Results: When considering all the measured points, the representative T{sub eff} for all the patients was 6.95 ({+-}0.81) days, whereas when using the sets of points (24-220-h), (24-96-220-h), and (24-48-220-h), it was 6.85 ({+-}0.81), 6.90 ({+-}0.81), and 6.95 ({+-}0.81) days, respectively. According to the mean deviations of 2.2 ({+-}2.4)%, 2.1 ({+-}2.0)%, and 0.04 ({+-}0.09)% found in T{sub eff} calculated with the (24-220-h), (24-48-220-h), and (24-96-220-h) sets, respectively, relative to the value based on all the measured points, no meaningful statistical difference was noted among the three methods (p > 0.500, t test). Conclusions: T{sub eff} obtained from only two thyroid {sup 131}I uptake measurements, at 24-h and 220-h, proves to be sufficient, accurate enough, and easily applicable; it also offers major additional cost-benefits for patients and facilitates the application of the method for dosimetric purposes in the treatment planning of
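The two-point determination described in the conclusions can be sketched directly; the uptake fractions below are invented for illustration, not study data:

```python
import math

def effective_half_life(u1, u2, t1_h=24.0, t2_h=220.0):
    """Thyroid effective half-life (days) from two 131I uptake measurements,
    assuming mono-exponential clearance between the two time points."""
    lam_per_h = math.log(u1 / u2) / (t2_h - t1_h)
    return math.log(2) / lam_per_h / 24.0

# Hypothetical uptakes (fractions of administered activity) at 24 h and 220 h
t_eff = effective_half_life(0.60, 0.27)
print(round(t_eff, 2))  # ≈ 7.09 days for these invented uptakes
```

Only a ratio of the two uptakes enters the formula, which is why a two-point protocol can match the accuracy of the full six-point schedule when the clearance is close to mono-exponential.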

  18. Half Life of the Doubly-magic r-Process Nucleus 78Ni

    CERN Document Server

    Hosmer, P T; Aprahamian, A; Arndt, O; Clément, R; Estrade, A; Kratz, K L; Liddick, S N; Mantica, P F; Müller, W F; Montes, F; Morton, A C; Ouellette, M S; Pellegrini, E; Pfeiffer, B; Reeder, P; Santi, P; Steiner, M; Stolz, A; Tomlin, B E; Walters, W B; Wöhr, A


    Nuclei with magic numbers serve as important benchmarks in nuclear theory. In addition, neutron-rich nuclei play an important role in the astrophysical rapid neutron-capture process (r-process). 78Ni is the only doubly-magic nucleus that is also an important waiting point in the r-process, and serves as a major bottleneck in the synthesis of heavier elements. The half-life of 78Ni has been experimentally deduced for the first time at the Coupled Cyclotron Facility of the National Superconducting Cyclotron Laboratory at Michigan State University, and was found to be 110 (+100 -60) ms. In the same experiment, a first half-life was deduced for 77Ni of 128 (+27 -33) ms, and more precise half-lives were deduced for 75Ni and 76Ni of 344 (+20 -24) ms and 238 (+15 -18) ms respectively.

  19. Increased Functional Half-life of Fibroblast Growth Factor-1 by Recovering a Vestigial Disulfide Bond

    Directory of Open Access Journals (Sweden)

    Jihun Lee


    The fibroblast growth factor (FGF) family of proteins contains an absolutely conserved Cys residue at position 83 that is present as a buried free cysteine. We have previously shown that mutation of the structurally adjacent residue, Ala66, to cysteine results in the formation of a stabilizing disulfide bond in FGF-1. This result suggests that the conserved free cysteine residue at position 83 in the FGF family of proteins represents a vestigial half-cystine. Here, we characterize the functional half-life and mitogenic activity of the oxidized form of the Ala66Cys mutation to identify the effect of the recovered vestigial disulfide bond between Cys83 and Cys66 upon the cellular function of FGF-1. The results show that the mitogenic activity of this mutant is significantly increased and that its functional half-life is greatly extended. These favorable effects are conferred by the formation of a disulfide bond that simultaneously increases the thermodynamic stability of the protein and removes a reactive buried thiol at position 83. Recovering this vestigial disulfide by introducing a cysteine at position 66 is a potentially useful protein engineering strategy to improve the functional half-life of other FGF family members.

  20. An Advance in Prescription Opioid Vaccines: Overdose Mortality Reduction and Extraordinary Alteration of Drug Half-Life. (United States)

    Kimishima, Atsushi; Wenthur, Cody J; Zhou, Bin; Janda, Kim D


    Prescription opioids (POs) such as oxycodone and hydrocodone are highly effective medications for pain management, yet they also present a substantial risk for abuse and addiction. The consumption of POs has been escalating worldwide, resulting in tens of thousands of deaths due to overdose each year. Pharmacokinetic strategies based upon vaccination present an attractive avenue to suppress PO abuse. Herein, the preparation of two active PO vaccines is described that were found to elicit high-affinity antiopioid antibodies through a structurally congruent drug-hapten design. Administration of these vaccines resulted in a significant blockade of opioid analgesic activity, along with an unprecedented increase in drug serum half-life and protection against lethal overdose.

  1. An albumin-oligonucleotide assembly for potential combinatorial drug delivery and half-life extension applications

    DEFF Research Database (Denmark)

    Kuhlmann, Matthias; Hamming, Jonas Bohn Refslund; Voldum, Anders


    The long blood circulatory property of human serum albumin, due to engagement with the cellular recycling neonatal Fc receptor (FcRn), is an attractive drug half-life extension enabling technology. This work describes a novel site-specific albumin double-stranded (ds) DNA assembly approach, in which 3'- or 5'-end maleimide-derivatized oligodeoxynucleotides are conjugated to the albumin cysteine at position 34 (cys34) and annealed with complementary strands to allow single-site-specific protein modification with functionalized ds oligodeoxynucleotides. Electrophoretic gel shift assays

  2. Estimating the biological half-life for radionuclides in homoeothermic vertebrates: a simplified allometric approach

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, N.A. [Lancaster Environment Centre, NERC Centre for Ecology and Hydrology, Lancaster (United Kingdom); Vives i Batlle, J. [Belgian Nuclear Research Centre, Mol (Belgium)


    The application of allometric, or mass-dependent, relationships within radioecology has increased with the evolution of models to predict the exposure of organisms other than man. Allometry presents a method of addressing the lack of empirical data on radionuclide transfer and metabolism for the many radionuclide-species combinations which may need to be considered. However, sufficient data across a range of species with different masses are required to establish allometric relationships and this is not always available. Here, an alternative allometric approach to predict the biological half-life of radionuclides in homoeothermic vertebrates which does not require such data is derived. Biological half-life values are predicted for four radionuclides and compared to available data for a range of species. All predictions were within a factor of five of the observed values when the model was parameterised appropriate to the feeding strategy of each species. This is an encouraging level of agreement given that the allometric models are intended to provide broad approximations rather than exact values. However, reasons why some radionuclides deviate from what would be anticipated from Kleiber's law need to be determined to allow a more complete exploitation of the potential of allometric extrapolation within radioecological models. (orig.)
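The mass-dependent prediction described above can be sketched as a simple power law; the coefficient and exponent below are hypothetical placeholders rather than the authors' fitted values:

```python
def biological_half_life(mass_kg, a=20.0, b=0.25):
    """Illustrative allometric estimate of biological half-life (days).

    Assumes the power-law form T_b = a * M**b. The values a=20.0 and
    b=0.25 are hypothetical; real radioecological models fit
    radionuclide-specific coefficients. A quarter-power exponent
    echoes Kleiber-type metabolic scaling.
    """
    return a * mass_kg ** b

# With b = 0.25, a 16-fold increase in body mass doubles the
# predicted biological half-life, since 16**0.25 == 2.
```

Under a quarter-power exponent, retention scales only weakly with body mass, which is why a single relationship can span species from rodents to ungulates.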

  3. Model system that predicts effective half-life for radiolabeled antibody therapy. [Rats]

    Energy Technology Data Exchange (ETDEWEB)

    Klein, J.L.; Ling, M.N.; Leichner, P.K.; Kopher, K.A.; Rostock, R.A.; Order, S.E.


    Radiolabeled antibodies to tumor-associated proteins localize in both experimental and clinical cancers. In therapeutic applications of radiolabeled antibody, the tumor effective half-life, together with the concentration of isotope deposited and the energies of the isotope used, determines the tumor dose. Antibodies directed against the same antigenic specificity but derived from different species have varied tumor and whole-body effective half-lives and, as a result, achieve different tumor doses. In vitro testing does not evaluate the in vivo differences in effective half-life that affect tumor dose. The authors have developed an animal model to evaluate the effective half-life and biodistribution of radiolabeled immunoglobulin (IgG) from diverse species. In both the experimental model and the clinical trials, radiolabeled immunospecific and normal IgG derived from monkey, rabbit, and porcine sources had the longest effective half-lives; goat and sheep had intermediate effective half-lives; and chicken and turkey had the shortest effective half-lives. These species have been immunized for clinical use. The value of this model system is that it appears to be an effective in vivo preclinical screen for the tumor effective half-life of antibodies and IgG from diverse species, thus guiding potential clinical use.

  4. Standard Operating Procedure for Using the NAFTA Guidance to Calculate Representative Half-life Values and Characterizing Pesticide Degradation (United States)

    Presents the results of the degradation kinetics project and describes a general approach for calculating and selecting representative half-life values from soil and aquatic transformation studies for risk assessment and exposure modeling purposes.

  5. Going Beyond the Binary: The Body, Sexuality and Identity in Shelley Jackson’s Half Life: A Novel


    Liu, Linjing


    The thesis focuses on Shelley Jackson’s Half Life: A Novel, offering a literary interpretation of relevant issues within the context of gender and feminist theory. The argument rests upon four basic units: the theoretical framework at the outset, the question of the body next, thirdly an investigation of sexuality, and finally a consideration of identity. In Jackson’s Half Life: A Novel, non-dualist thinking underlies a deliberate play of dualism. To go beyond ...

  6. Half-life of each dioxin and PCB congener in the human body

    Energy Technology Data Exchange (ETDEWEB)

    Ogura, Isamura [National Institute of Advanced Industrial Science and Technology, Tsukuba (Japan)


    It is well known that dioxin and PCB congeners accumulate in the human body. For assessing their toxicological risk, it is important to know the half-life of each congener in the human body. This study summarizes the overall half-lives of congeners in humans as reported in the literature, and compares them with the half-lives due to fecal and sebum excretions, as estimated by data on the concentrations of congeners in feces and sebum in the literature. In addition, the overall half-lives of congeners for the general Japanese population were estimated from the data on dietary intakes and concentrations in the human body reported by the municipalities.

  7. ORIGEN-S Decay Data Library and Half-Life Uncertainties

    CERN Document Server

    Hermann, O W


    The results of an extensive update of the decay data of the ORIGEN-S library are presented in this report. The updated decay data were provided for both the ORIGEN-S and ORIGEN2 libraries in the same project. A complete edit of the decay data plus the available half-life uncertainties are included in Appendix A. A detailed description of the types of data contained in the library, the format of the library, and the data sources are also presented. Approximately 24% of the library nuclides are stable, 66% were updated from ENDF/B-VI, about 8% were updated from ENSDF, and the remaining 2% were not updated. Appendix B presents a listing of percentage changes in decay heat from the old to the updated library for all nuclides containing a difference exceeding 1% in any parameter.

  8. Half-life measurements of isomeric states populated in projectile fragmentation (United States)

    Bowry, M.; Podolyák, Zs.; Kurcewicz, J.; Pietri, S.; Bunce, M.; Regan, P. H.; Farinon, F.; Geissel, H.; Nociforo, C.; Prochazka, A.; Weick, H.; Allegro, P.; Benlliure, J.; Benzoni, G.; Boutachkov, P.; Gerl, J.; Gorska, M.; Gottardo, A.; Gregor, N.; Janik, R.; Knöbel, R.; Kojouharov, I.; Kubo, T.; Litvinov, Y. A.; Merchan, E.; Mukha, I.; Naqvi, F.; Pfeiffer, B.; Pfützner, M.; Plaß, W.; Pomorski, M.; Riese, B.; Ricciardi, M. V.; Schmidt, K.-H.; Schaffner, H.; Kurz, N.; Denis Bacelar, A. M.; Bruce, A. M.; Farrelly, G. F.; Alkhomashi, N.; Al-Dahan, N.; Scheidenberger, C.; Sitar, B.; Spiller, P.; Stadlmann, J.; Strmen, P.; Sun, B.; Takeda, H.; Tanihata, I.; Terashima, S.; Valiente Dobon, J. J.; Winfield, J. S.; Wollersheim, H.-J.; Woods, P. J.


    The half-lives of excited isomeric states observed in 195Au, 201Tl and 215Rn are reported for the first time. Delayed γ-rays were correlated with nuclei produced in the projectile fragmentation of relativistic 238U ions, unambiguously identified in terms of their atomic number (Z) and mass-to-charge ratio (A/Q) after traversing an in-flight separator. The observation of a long-lived isomeric state in 195Au with t1/2 = 16(+8/−4) μs is presented. Two shorter-lived isomeric states were detected in 201Tl and 215Rn with t1/2 = 95(+39/−21) ns and 57(+21/−12) ns, respectively. In total, 24 isomeric states were identified in different nuclei from Pt to Rn (A ≈ 200) during the current study, the majority of which were previously reported. The wealth of spectroscopic data provides the opportunity to determine isomeric ratios over a wide range of Z, A and angular momentum (I ħ) of the reaction products. In particular, high-spin states with I ≳ 18 ħ provide a robust test of theoretical models of fragmentation.

  9. Kinetic modeling and half life study on bioremediation of crude oil dispersed by Corexit 9500

    Energy Technology Data Exchange (ETDEWEB)

    Zahed, Mohammad Ali [School of Civil Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Penang (Malaysia); Aziz, Hamidi Abdul, E-mail: [School of Civil Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Penang (Malaysia); Isa, Mohamed Hasnain [Civil Engineering Department, Universiti Teknologi PETRONAS, 31750 Tronoh, Perak (Malaysia); Mohajeri, Leila; Mohajeri, Soraya [School of Civil Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Penang (Malaysia); Kutty, Shamsul Rahman Mohamed [Civil Engineering Department, Universiti Teknologi PETRONAS, 31750 Tronoh, Perak (Malaysia)


    Hydrocarbon pollution in marine ecosystems occurs mainly through accidental oil spills, deliberate discharge of ballast waters from oil tankers and bilge waste discharges, causing site pollution and serious adverse effects on aquatic environments as well as human health. A large number of petroleum hydrocarbons are biodegradable, so bioremediation has become an important method for the restoration of oil-polluted areas. In this research, a series of natural attenuation, crude oil (CO) and dispersed crude oil (DCO) bioremediation experiments on artificially contaminated seawater was carried out. The bacterial consortium was identified as Acinetobacter, Alcaligenes, Bacillus, Pseudomonas and Vibrio. First-order kinetics described the biodegradation of crude oil. Under abiotic conditions, oil removal was 19.9%, while a maximum of 31.8% total petroleum hydrocarbon (TPH) removal was obtained in the natural attenuation experiment. All DCO bioreactors demonstrated higher and faster removal than CO bioreactors. Half-lives were 28, 32, 38 and 58 days for DCO and 31, 40, 50 and 75 days for CO at oil concentrations of 100, 500, 1000 and 2000 mg/L, respectively. The effectiveness of the Corexit 9500 dispersant was monitored over the 45-day study; the results indicated that it improved the crude oil biodegradation rate.
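The first-order kinetics behind these half-life figures can be sketched as follows; this is a minimal illustration of the standard relation t1/2 = ln 2 / k, not the authors' fitting code:

```python
import math

def rate_constant(half_life_days):
    """First-order rate constant k (per day) from a half-life."""
    return math.log(2) / half_life_days

def remaining_fraction(t_days, half_life_days):
    """Fraction of hydrocarbon left after t days of first-order decay."""
    return math.exp(-rate_constant(half_life_days) * t_days)

# The reported 28-day half-life (dispersed oil, 100 mg/L) implies
# k = ln(2)/28, roughly 0.025 per day, i.e. 50% TPH left at day 28.
```

The same two lines recover any of the tabulated k values from the corresponding half-life, which is how first-order fits are usually summarized.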

  10. [Standard ballpoint pens disappear from the coats with a half-life of nine days]. (United States)

    Subhi, Yousif; Brynskov, Troels


    Ballpoint pens are important tools for the daily work of physicians. However, a practical challenge seems to be that they disappear just when one needs them most. We call this the ballpoint pen paradox. We studied the fate of 60 ballpoint pens in an outpatient clinical setting. Five physicians, five nurses and five medical secretaries were each equipped with four barcode-tagged ballpoint pens. During follow-ups, we systematically searched the participants' coats and the department for the barcode-tagged ballpoint pens. We illustrated the migration of the ballpoint pens using a diagram and tested the fit of linear and exponential trend lines. The ballpoint pens displayed a tremendous migration within the department. Disappearance from the coat was exponential with a half-life of 9 days - only 23% were left after 18 days. Disappearance from the department was linear with a 3% loss per day - only 42% were left after 18 days. Ballpoint pens have high migration and turnover rates. The fate of a large number of ballpoint pens is unknown. They are treated carelessly, which ultimately results in wasted time and resources. The ballpoint pen paradox may be diminished by phasing out ballpoint pens of low quality.
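The two decay regimes reported above can be checked with a few lines of arithmetic, an illustrative sketch using only the figures quoted in the abstract:

```python
def exponential_remaining(t_days, half_life_days):
    """Fraction of pens still in the coat after t days (exponential loss)."""
    return 0.5 ** (t_days / half_life_days)

def linear_remaining(t_days, loss_per_day):
    """Fraction of pens still in the department (constant daily loss)."""
    return max(0.0, 1.0 - loss_per_day * t_days)

# A 9-day half-life predicts 25% of pens left in the coat at day 18,
# close to the observed 23%; a 3%/day linear loss predicts 46% left
# in the department, close to the observed 42%.
```

Day 18 is exactly two coat half-lives, which is why the exponential model lands so close to the observed fraction.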

  11. Increased half-life and enhanced potency of Fc-modified human PCSK9 monoclonal antibodies in primates.

    Directory of Open Access Journals (Sweden)

    Yijun Shen

    Blocking proprotein convertase subtilisin kexin type 9 (PCSK9) binding to the low-density lipoprotein receptor (LDLR) can profoundly lower plasma LDL levels. Two anti-PCSK9 monoclonal antibodies (mAbs), alirocumab and evolocumab, were approved by the FDA in 2015. The recommended dose is 75 mg to 150 mg every two weeks for alirocumab and 140 mg every two weeks or 420 mg once a month for evolocumab. This study attempted to improve the pharmacokinetic properties of F0016A, an IgG1 anti-PCSK9 mAb, to generate biologically superior molecules. We engineered several variants with two or three amino acid substitutions in the Fc fragment based on prior knowledge. The Fc-modified mAbs exhibited increased binding to FcRn, resulting in prolonged serum half-life and enhanced efficacy in vivo. These results demonstrate that Fc-modified anti-PCSK9 antibodies may enable less frequent or lower dosing through improved recycling into the blood.

  12. Estimation of biological half-life of tritium in coastal region of India. (United States)

    Singh, Vishwanath P; Pai, R K; Veerender, D D; Vishnu, M S; Vijayan, P; Managanvi, S S; Badiger, N M; Bhat, H R


    The present study estimates the biological half-life (BHL) of tritium by analysing routine bioassay samples of radiation workers. During 2007-2009, 72,100 urine bioassay samples from the workers were analysed by the liquid scintillation counting technique for internal dose monitoring for tritium. Two hundred and two subjects with a minimum tritium uptake of 3 μCi L(-1) in their body fluid were selected for study. The BHL of tritium in the subjects ranges from 1 to 16 d with an average of 8.19 d. Human data indicate that the biological retention time ranges from 4 to 18 d with an average of 10 d. The seasonal variations of the BHL of tritium are 3.09 ± 1.48, 6.87 ± 0.58 and 5.73 ± 0.76 d (mean ± SD) for the summer, winter and rainy seasons, respectively, for free-water tritium in the coastal region of Karnataka, India, showing that the BHL in summer is about half that of the winter season. Three subjects showed BHLs of 101.73-121.09 d, which reveals that organically bound tritium is present even at low tritium uptake. The BHL of tritium was observed to be independent of worker age and was shorter during April to May. The distribution of cumulative probability vs. BHL of tritium follows a lognormal distribution with a geometric mean of 9.11 d and a geometric standard deviation of 1.77. The subject data fit a two-compartment model, and the average BHL of tritium is similar to earlier studies.
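The lognormal summary statistics above translate into worker-level probabilities as sketched below; this is illustrative only, and `fraction_above` is a hypothetical helper, not from the study:

```python
import math
from statistics import NormalDist

# Lognormal BHL model from the reported geometric mean (9.11 d) and
# geometric standard deviation (1.77): if BHL is lognormal, then
# ln(BHL) is normal with mu = ln(GM) and sigma = ln(GSD).
gm, gsd = 9.11, 1.77
log_bhl = NormalDist(mu=math.log(gm), sigma=math.log(gsd))

def fraction_above(bhl_days):
    """Fraction of workers whose BHL exceeds bhl_days under this model."""
    return 1.0 - log_bhl.cdf(math.log(bhl_days))

# The median BHL of a lognormal distribution equals its geometric
# mean, so half the workers fall below 9.11 d under this model.
```

A by-product of the lognormal assumption: the long BHLs seen in a few subjects sit far out in the upper tail, consistent with a separate organically-bound-tritium compartment rather than the free-water distribution.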

  13. Brief report : enzyme inducers reduce elimination half-life after a single dose of nevirapine in healthy women

    NARCIS (Netherlands)

    L'homme, R.F.A.; Dijkema, T.; Ven, A.J.A.M. van der; Burger, D.M.


    OBJECTIVE: Single-dose nevirapine (SD-NVP) to prevent mother-to-child transmission (MTCT) of HIV is associated with development of NVP resistance, probably because of its long half-life in combination with a low genetic barrier to resistance. The objective of this study was to find enzyme inducers

  14. Extending the Serum Half-Life of G-CSF via Fusion with the Domain III of Human Serum Albumin

    Directory of Open Access Journals (Sweden)

    Shuqiang Zhao


    Protein fusion technology is one of the most commonly used methods to extend the half-life of therapeutic proteins. In this study, in order to prolong the half-life of granulocyte colony-stimulating factor (G-CSF), the domain III of human serum albumin (3DHSA) was genetically fused to the N-terminal of G-CSF. The 3DHSA-G-CSF fusion gene was cloned into pPICZαA along with the open reading frame of the α-factor signal under the control of the AOX1 promoter. The recombinant expression vector was transformed into Pichia pastoris GS115, and the recombinant strains were screened by SDS-PAGE. As expected, 3DHSA-G-CSF showed high binding affinity with HSA antibody and G-CSF antibody, and the natural N-terminal of 3DHSA was detected by N-terminal sequencing. The bioactivity and pharmacokinetics of 3DHSA-G-CSF were determined using neutropenia model mice and a human G-CSF ELISA kit, respectively. The results demonstrated that 3DHSA-G-CSF is able to increase the peripheral white blood cell (WBC) counts of neutropenia model mice, and that the half-life of 3DHSA-G-CSF is longer than that of native G-CSF. In conclusion, 3DHSA can be used to extend the half-life of G-CSF.

  15. Effective Half-Life of Caesium-137 in Various Environmental Media at the Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    Paller, M. H.; Jannik, G. T.; Baker, R. A.


    During the operational history of the Savannah River Site (SRS), many different radionuclides have been released from site facilities into the SRS environment. However, only a relatively small number of pathways, most importantly 137Cs in fish and deer, have contributed significantly to doses and risks to the public. The “effective” half-lives (Te) of 137Cs (which include both physical decay and environmental dispersion) in Savannah River floodplain soil and vegetation and in fish and white-tailed deer from the SRS were estimated using long-term monitoring data. For 1974–2011, the Tes of 137Cs in Savannah River floodplain soil and vegetation were 17.0 years (95% CI = 14.2–19.9) and 13.4 years (95% CI = 10.8–16.0), respectively. These Tes were greater than in a previous study that used data collected only through 2005, likely as a result of changes in the flood regime of the Savannah River. Field analyses of 137Cs concentrations in deer collected during yearly controlled hunts at the SRS indicated an overall Te of 15.9 years (95% CI = 12.3–19.6) for 1965–2011; however, the Te for 1990–2011 was significantly shorter (11.8 years, 95% CI = 4.8–18.8) due to an increase in the rate of 137Cs removal. The shortest Tes were for fish in SRS streams and the Savannah River (3.5–9.0 years), where dilution and dispersal resulted in rapid 137Cs removal. Long-term data show that Tes are significantly shorter than the physical half-life of 137Cs in the SRS environment but that they can change over time. Therefore, it is desirable to have a long period of record for calculating Tes, and risky to extrapolate Tes beyond this period unless the processes governing 137Cs removal are clearly understood.
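An effective half-life that combines physical decay with environmental removal can be sketched as two parallel first-order processes; 30.17 years is the well-known physical half-life of 137Cs, while the function names below are illustrative:

```python
def effective_half_life(t_phys, t_env):
    """Effective half-life from parallel first-order removal:
    1/Te = 1/T_phys + 1/T_env (rates add, so inverse half-lives add)."""
    return 1.0 / (1.0 / t_phys + 1.0 / t_env)

def environmental_half_life(t_eff, t_phys):
    """Back out the environmental-removal half-life from an observed Te."""
    return 1.0 / (1.0 / t_eff - 1.0 / t_phys)

# Given the 30.17-year physical half-life of 137Cs, the reported
# Te of 17.0 years for floodplain soil implies an environmental
# removal half-life of roughly 39 years.
```

Because the rates add, Te is always shorter than either component half-life, which is why every Te in the study falls below 30.17 years.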

  16. Budget Impact Analysis of Prolonged Half-Life Recombinant FVIII Therapy for Hemophilia in the United States. (United States)

    McMullen, Suzanne; Buckley, Brieana; Hall, Eric; Kendter, Jon; Johnston, Karissa


    Hemophilia A is a factor VIII deficiency, associated with spontaneous, recurrent bleeding episodes. This may lead to comorbidities such as arthropathy and joint replacement, which contribute to morbidity and increased health care expenditure. Recombinant factor VIII Fc fusion protein (rFVIIIFc), a prolonged half-life factor therapy, requires fewer infusions, resulting in reduced treatment burden. Use a budget impact analysis to assess the potential economic impact of introducing rFVIIIFc to a formulary from the perspective of a private payer in the United States. The budget impact model was developed to estimate the potential economic impact of adding rFVIIIFc to a private payer formulary across a 2-year time period. The eligible patient population consisted of inhibitor-free adults with severe hemophilia A, receiving recombinant-based episodic or prophylaxis treatment regimens. Patients were assumed to switch from conventional recombinant factor treatment to rFVIIIFc. Only medication costs were included in the model. The introduction of rFVIIIFc is estimated to have a budget impact of 1.4% ($0.12 per member per month) across 2 years for a private payer population of 1,000,000 (estimated 19.7 individuals receiving treatment for hemophilia A). The introduction of rFVIIIFc is estimated to prevent 124 bleeds across 2 years at a cost of $1891 per bleed avoided. Hemophilia A is a rare disease with a low prevalence; therefore, the overall cost to society of introducing rFVIIIFc is small. Considerations for comprehensively assessing the budget impact of introducing rFVIIIFc should include episodic and prophylaxis regimens, bleed avoidance, and annual factor consumption required under alternative scenarios. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  17. GLP2-2G-XTEN: a pharmaceutical protein with improved serum half-life and efficacy in a rat Crohn's disease model.

    Directory of Open Access Journals (Sweden)

    Susan E Alters

    OBJECTIVES: Glucagon-like peptide 2 (GLP2) is an intestinal growth factor that has been shown to stimulate intestinal growth and reduce disease severity in preclinical models of short bowel syndrome and inflammatory bowel disease. Teduglutide, a recombinant human GLP2 variant (GLP2-2G), has increased half-life and stability compared to the native GLP2 peptide, but still requires twice-daily dosing in preclinical models and daily dosing in the clinic. The goal of this study was to produce and characterize the preclinical pharmacokinetic and therapeutic properties of GLP2-2G-XTEN, a novel, long-acting form of GLP2-2G. METHODOLOGY AND RESULTS: A GLP2-2G-XTEN fusion protein with an extended exposure profile was produced by genetic fusion of the GLP2-2G peptide to XTEN, a long, unstructured, non-repetitive, hydrophilic sequence of amino acids. The serum half-life of GLP2-2G-XTEN in mice, rats and monkeys was 34, 38 and 120 hours, respectively. Intestinotrophic effects were demonstrated in normal rats, where GLP2-2G-XTEN administration resulted in a significant increase in both small intestine weight and length. Efficacy of the GLP2-2G-XTEN protein was compared to that of the GLP2-2G peptide in a rat Crohn's disease model, indomethacin-induced inflammation. Prophylactic administration of GLP2-2G-XTEN significantly increased the length, reduced the number of trans-ulcerations and adhesions, and reduced the TNFα content of the small intestine. GLP2-2G-XTEN demonstrated greater in vivo potency than the GLP2-2G peptide, and improvement in histopathology supported the GLP2-2G-XTEN treatment effects. CONCLUSIONS AND SIGNIFICANCE: GLP2-2G-XTEN is intestinotrophic and demonstrates efficacy in a rat Crohn's disease model at a lower molar dose and with less frequent dosing than the GLP2-2G peptide. Allometric scaling based on pharmacokinetics from mouse, rat and monkey projects a human half-life of 240 hours. These improvements in preclinical pharmacokinetics ...

  18. Estrogenic and antiestrogenic regulation of the half-life of covalently labeled estrogen receptor in MCF-7 breast cancer cells. (United States)

    Borrás, M; Laios, I; el Khissiin, A; Seo, H S; Lempereur, F; Legros, N; Leclercq, G


    The effect of estrogens and antiestrogens (AEs) on estrogen receptor (ER) half-life was analyzed in MCF-7 cells by assessing its progressive disappearance after covalent labeling in situ with [3H]tamoxifen aziridine ([3H]TAZ). Cells were incubated for 1 h with 20 nM [3H]TAZ either in the absence or presence of a 500-fold excess of unlabeled estradiol (E2) (non-specific binding). The entire ER population was labeled by this method, as established by subsequent incubation of the cells with [125I]E2. [3H]TAZ-labeled cells were maintained in culture for an additional 5 h in the absence (control) or presence of increasing amounts (0.1 nM - 1 microM) of either a given estrogen (E2, estrone, diethylstilbestrol, bisphenol), a pure AE (RU 58 668, ICI 164 384) or an AE with residual estrogenic activity (RU 39 411, 4-hydroxytamoxifen, keoxifene). The progressive disappearance of nuclear and cytosolic [3H]TAZ-ER complexes during the 5 h incubation was assessed by immunoprecipitation with an anti-ER monoclonal antibody (H 222) followed by scintillation counting or SDS-PAGE and fluorography. Fading of labeled receptors was extremely slow (approximately 10% loss after 6 h) in the absence of any hormone/antihormone, indicating a long half-life of the [3H]TAZ-ER complex. Addition of estrogens as well as pure AEs led to a dramatic reduction of the half-life, while AEs with residual estrogenic activity were far less efficient in this regard, providing an explanation for the ability of the latter compounds to up-regulate the receptor, since they do not affect ER mRNA synthesis and stability. Receptor disappearance induced by estrogens was closely related to their binding affinity for ER. Newly synthesized ER emerging during treatment with hormones or antihormones seems to be implicated in the phenomenon, since [3H]TAZ was covalently bound and could, therefore, not be displaced by these compounds. Induction of synthesis of a short half-life peptide(s) with degradative activity was demonstrated by ...

  19. The functional half-life of an mRNA depends on the ribosome spacing in an early coding region

    DEFF Research Database (Denmark)

    Pedersen, Margit; Nissen, Søren; Mitarai, Namiko


    Bacterial mRNAs are translated by closely spaced ribosomes and degraded from the 5'-end, with half-lives of around 2 min at 37 °C in most cases. Ribosome-free or "naked" mRNA is known to be readily degraded, but the initial event that inactivates the mRNA functionally has not been fully described....... Here, we characterize a determinant of the functional stability of an mRNA, which is located in the early coding region. Using literature values for the mRNA half-lives of variant lacZ mRNAs in Escherichia coli, we modeled how the ribosome spacing is affected by the translation rate of the individual...... codons. When comparing the ribosome spacing at various segments of the mRNA to its functional half-life, we found a clear correlation between the functional mRNA half-life and the ribosome spacing in the mRNA region approximately between codon 20 and codon 45. From this finding, we predicted that inserts...

  20. The half-life and exposure of cefuroxime varied in newborn infants after a Caesarean section

    DEFF Research Database (Denmark)

    Zachariassen, G.; Hyldig, N.; Joergensen, J.S.


    Healthy mothers received a single dose of cefuroxime 15–60 minutes before skin incision. One blood sample was drawn from the umbilical cord, and two blood samples were drawn from the infant after delivery. Total plasma cefuroxime (μg/mL) was measured using high-pressure liquid chromatography. Results...

  1. The mysteriously variable half-life of dissolved organic matter in aquatic ecosystems: artefact or insight? (United States)

    Evans, Chris; Fovet, Ophelie; Jones, Tim; Jones, Davey; Moldan, Filip; Futter, Martyn


    Dissolved organic matter (DOM) fluxes from land to water represent an important loss term in the terrestrial carbon balance, a major pathway in the global carbon cycle, a significant influence on aquatic light, nutrient and energy regimes, and an important concern for drinking water production. Although freshwaters are now recognised as zones of active carbon cycling, rather than passive conduits for carbon transport, evidence regarding the magnitude of, and controls on, DOM cycling in aquatic systems is incomplete and in some cases seemingly contradictory, with DOM 'half-lives' ranging from a few days to many years. Bringing together experimental, isotopic, catchment mass balance and modelling data, we suggest that apparently conflicting results can be reconciled through understanding of differences in: i) the terrestrial sources of DOM within heterogeneous landscapes, and consequent differences in its reactivity and stoichiometry; ii) experimental methodologies (i.e. which reactions are actually being measured), and iii) the extent of prior transformation of DOM upstream of the point of study. We argue that rapid photo-degradation, particularly of peat-derived DOM, is a key process in headwaters, whilst apparently slow DOM turnover in downstream, agriculturally-influenced lakes and rivers can partly be explained by the offsetting effect of in situ DOM production. This production appears to be strongly constrained by nutrient supply, thus linking DOM turnover and composition to the supply of inorganic nutrient inputs from diffuse agricultural pollution, and also providing a possible mechanistic link between aquatic DOM production and terrestrial DOM breakdown via the mineralisation and re-assimilation of organic nutrients. A more complete conceptual understanding of these interlinked processes will provide an improved understanding of the sources and fate of aquatic DOM, its role in the global carbon cycle, and the impact of anthropogenic activities, for example ...

  2. PASylation: a biological alternative to PEGylation for extending the plasma half-life of pharmaceutically active proteins (United States)

    Schlapschy, Martin; Binder, Uli; Börger, Claudia; Theobald, Ina; Wachinger, Klaus; Kisling, Sigrid; Haller, Dirk; Skerra, Arne


    A major limitation of biopharmaceutical proteins is their fast clearance from circulation via kidney filtration, which strongly hampers efficacy both in animal studies and in human therapy. We have developed conformationally disordered polypeptide chains with expanded hydrodynamic volume comprising the small residues Pro, Ala and Ser (PAS). PAS sequences are hydrophilic, uncharged biological polymers with biophysical properties very similar to poly-ethylene glycol (PEG), whose chemical conjugation to drugs is an established method for plasma half-life extension. In contrast, PAS polypeptides offer fusion to a therapeutic protein on the genetic level, permitting Escherichia coli production of fully active proteins and obviating in vitro coupling or modification steps. Furthermore, they are biodegradable, thus avoiding organ accumulation, while showing stability in serum and lacking toxicity or immunogenicity in mice. We demonstrate that PASylation bestows typical biologics, such as interferon, growth hormone or Fab fragments, with considerably prolonged circulation and boosts bioactivity in vivo. PMID:23754528

  3. An "Fc-Silenced" IgG1 Format With Extended Half-Life Designed for Improved Stability. (United States)

    Borrok, M Jack; Mody, Neil; Lu, Xiaojun; Kuhn, Megan L; Wu, Herren; Dall'Acqua, William F; Tsui, Ping


    Multiple mutation combinations in the IgG Fc have been characterized to tailor immune effector function or IgG serum persistence to fit desired biological outcomes for monoclonal antibody therapeutics. An unintended consequence of introducing mutations in the Fc (particularly the CH2 domain) can be a reduction in biophysical stability, which can correlate with increased aggregation propensity, poor manufacturability, and lower solubility. Herein, we characterize the changes in IgG conformational and colloidal stability when two sets of CH2 mutations, "TM" (L234F/L235E/P331S) and "YTE" (M252Y/S254T/T256E), are combined to generate an antibody format lacking immune receptor binding and exhibiting extended half-life. In addition to significantly lowered thermostability, we observe greater conformational flexibility for TM-YTE in CH2, increased self-association, and poorer solubility and aggregation profiles. To improve these properties, we dissected the contributions of individual mutations within TM-YTE to thermostability and substituted destabilizing mutations with new mutations that raise thermostability. One novel combination, FQQ-YTE (L234F/L235Q/K322Q/M252Y/S254T/T256E), had significantly improved conformational and colloidal stability, and was found to retain the same biological activities as TM-YTE (extended half-life and lack of antibody-dependent cell-mediated cytotoxicity and complement-dependent cytotoxicity activity). Our engineering approach offers a way to improve the developability of antibodies containing Fc mutations while retaining tailored biological activity. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.


    Directory of Open Access Journals (Sweden)

    Samuel Enahoro Agarry


    In this study, the comparative potential of commercial activated carbon (CAC) and plantain peel-derived biochar (PPBC) of different particle sizes and dosages to stimulate petroleum hydrocarbon biodegradation in soil was investigated. Microcosms containing soil were spiked with weathered Bonny light crude oil (WBLCO) (10% w/w) and amended with different particle sizes (0.02, 0.07 and 0.48 mm) and dosages (20, 30 and 40 g) of CAC and PPBC, respectively. The bioremediation experiments were carried out for a period of 28 days under laboratory conditions. The results showed a positive relationship between the rate of petroleum hydrocarbon reduction and the presence of CAC and PPBC in the crude oil contaminated soil microcosms. The WBLCO biodegradation data fitted well to the first-order kinetic model. The model revealed that WBLCO-contaminated soil microcosms amended with CAC and PPBC had higher biodegradation rate constants (k) as well as shorter half-lives (t1/2) than unamended soil (the natural attenuation remediation system). The rate constants increased, while half-lives decreased, with decreasing particle size and increasing dosage of the amendment agents. ANOVA revealed that WBLCO biodegradation in soil was significantly (p = 0.05) influenced by the addition of the CAC and biochar amendment agents, respectively. However, Tukey’s post hoc test (at p = 0.05) showed no significant difference between the bioremediation efficiencies of CAC and PPBC. Thus, amendment of soils with biochar has the potential to be an inexpensive, efficient, environmentally friendly and relatively novel strategy to mitigate organic compound-contaminated soil.

  5. High precision measurement of the Ne-19 beta-decay half-life using real-time digital acquisition

    Czech Academy of Sciences Publication Activity Database

    Fontbonne, C.; Ujic, P.; de Oliveira Santos, F.; Flechard, X.; Rotaru, F.; Achouri, N. L.; Girard Alcindor, V.; Bastin, B.; Boulay, F.; Briand, J. B.; Sanchez-Benitez, A. M.; Bouzomita, H.; Borcea, C.; Borcea, R.; Blank, B.; Cerniol, B.; Celikovic, I.; Delahaye, P.; Delaunay, F.; Etasse, D.; Fremont, G.; de France, G.; Fontbonne, J. M.; Grinyer, G. F.; Harang, J.; Hommet, J.; Jevremovic, A.; Lewitowicz, M.; Martel, I.; Mrázek, Jaromír; Parlog, M.; Poincheval, J.; Ramos, D.; Spitaels, C.; Stanoiu, M.; Thomas, J. C.; Toprek, D.


    Vol. 96, No. 6 (2017), article No. 065501. ISSN 2469-9985. Institutional support: RVO:61389005. Keywords: after-pulses * baseline fluctuations * half-lives nuclei. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics. Impact factor: 3.820, year: 2016

  6. Comparison of serum immunoglobulin G half-life in dairy calves fed colostrum, colostrum replacer or administered with intravenous bovine plasma. (United States)

    Murphy, Jacob M; Hagey, Jill V; Chigerwe, Munashe


    In calves, passive immunity can be acquired through ingestion of immunoglobulins in colostrum or colostrum replacers. Plasma can be used to supplement immunoglobulins in healthy or sick calves. The serum half-life of colostrum-derived immunoglobulin G (IgG) is estimated to be 20 days. The half-life of IgG is important in determining the response to antigens and the timing of vaccination in calves. To date, studies evaluating the half-life of colostrum replacer- or plasma-derived IgG are lacking. The objectives of this study were to compare the serum half-life of IgG derived from colostrum, colostrum replacer and plasma in dairy calves reared up to 35 days of age. Thirty Jersey calves were randomly assigned to receive colostrum or colostrum replacer by oroesophageal tubing, or plasma by intravenous administration. Serum samples were collected at 2, 5, 7, 10, 14, 21, 28 and 35 days. Serum IgG concentrations were determined by radial immunodiffusion. The results indicated that the half-life of IgG in colostrum-fed (28.5 days) or plasma-transfused calves (27.3 days) was longer than in colostrum replacer-fed calves (19.1 days). Further studies are required to evaluate pathogen-specific immunoglobulins in order to recommend vaccination timing in calves fed colostrum replacers. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. High Mdm4 levels suppress p53 activity and enhance its half-life in acute myeloid leukaemia (United States)

    Tan, Ban Xiong; Khoo, Kian Hoe; Lim, Tit Meng; Lane, David Philip


    Although p53 is found mutated in almost 50% of all cancers, p53 mutations in leukaemia are relatively rare. Acute myeloid leukaemia (AML) cells employ other strategies to inactivate their wild-type p53 (WTp53), such as overexpression of the p53 negative regulators Mdm2 and Mdm4. As such, AMLs are excellent candidates for therapeutics involving the reactivation of their WTp53 to restrict and destroy cancer cells, and the Mdm2 antagonist nutlin-3 is one such promising agent. Using AML cell lines with WTp53, we identified stable and high levels of p53 in the OCI/AML-2 cell line. We demonstrate that this nutlin-3-sensitive cell line overexpressed Mdm4 to sequester, stabilise and inhibit p53 in the cytoplasm. We also show that elevated Mdm4 competed with the Mdm2-p53 interaction and therefore extended p53 half-life while preventing p53 transcriptional activity. Our results provide biochemical evidence on the dynamics of the p53-Mdm2-Mdm4 interactions in affecting p53 levels and activity, and unlike previously reported findings derived from genetically manipulated systems, AML cells with naturally high levels of Mdm4 remain sensitive to nutlin treatment. Key points: endogenously high levels of Mdm4 inhibit and sequester p53 in AML; high levels of Mdm4 do not block the function of Mdm2 inhibitors in AML. PMID:24659749

  8. Decreasing half-life of dieldrin in egg yolk following a single oral administration of aldrin to laying hens. (United States)

    Furusawa, N


    Laying hens were treated orally with a single dose of aldrin (AD) 1 mg/kg body weight. Concentrations (microgram/g) of AD or its epoxide (= dieldrin, DD) in the yolk of eggs laid for 21 days after AD treatment were determined by normal-phase high-performance liquid chromatography. The limits of determination were 0.02 microgram/g for AD and 0.03 microgram/g for DD, respectively. After AD treatment, although the low levels of AD (mean 0.02-0.03 microgram/g) were observed only during a three-day period (from 4th to 6th days), DD (mean 0.15 microgram/g) was found already on the 2nd day, indicating that the epoxidation of AD to DD in the hen's body is rapid. The highest level of DD (mean 0.40 microgram/g) was detected on the 6th day, and then DD levels decreased slowly and were detected up to the 21st day. In this decreasing phase, the half-life of DD in the yolk was estimated to be 25.6 days with a 95% confidence interval from 22.7 to 29.4 days.

  9. An algorithm for filtering detector instabilities in search of novel non-exponential decay and in conventional half-life determinations. (United States)

    Hitt, G W; Goddard, B; Solodov, A A; Bridi, D; Isakovic, A F; El-Khazali, R


    Recent reports of Solar modulation of beta-decay have reignited interest in whether or not radioactive half-lives are constants. A numerical approach for filtering instrumental effects on residuals is developed, using correlations with atmospheric conditions recorded while counting (204)Tl emissions with a Geiger-Müller counter. Half-life oscillations and detection efficiency oscillations can be separated provided their periods are substantially different. A partial uncertainty budget for the (204)Tl half-life shows significant decreases to medium-frequency instabilities correlated with pressure and temperature, which suggests that further development may aid general improvements in half-life determinations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Refractive index based measurements

    DEFF Research Database (Denmark)


    A refractive index based measurement of a property of a fluid is measured in an apparatus comprising a variable wavelength coherent light source (16), a sample chamber (12), a wavelength controller (24), a light sensor (20), a data recorder (26) and a computation apparatus (28), by - directing...

  11. Refractive index based measurements

    DEFF Research Database (Denmark)


    In a method for performing a refractive index based measurement of a property of a fluid such as chemical composition or temperature by observing an apparent angular shift in an interference fringe pattern produced by back or forward scattering interferometry, ambiguities in the measurement caused...

  12. A Novel Long-Acting Human Growth Hormone Fusion Protein (VRS-317): Enhanced In Vivo Potency and Half-Life (United States)

    Cleland, Jeffrey L; Geething, Nathan C; Moore, Jerome A; Rogers, Brian C; Spink, Benjamin J; Wang, Chai-Wei; Alters, Susan E; Stemmer, Willem P C; Schellenberger, Volker


    A novel recombinant human growth hormone (rhGH) fusion protein (VRS-317) was designed to minimize receptor-mediated clearance through a reduction in receptor binding without mutations to rhGH by genetically fusing with XTEN amino acid sequences to the N-terminus and the C-terminus of the native hGH sequence. Although in vitro potency of VRS-317 was reduced approximately 12-fold compared with rhGH, in vivo potency was increased because of the greatly prolonged exposure to the target tissues and organs. VRS-317 was threefold more potent than daily rhGH in hypophysectomized rats and fivefold more potent than daily rhGH in juvenile monkeys. In juvenile monkeys, a monthly dose of 1.4 mg/kg VRS-317 (equivalent to 0.26 mg/kg rhGH) caused a sustained pharmacodynamic response for 1 month equivalent to 0.05 mg/kg/day rhGH (1.4 mg/kg rhGH total over 28 days). In monkeys, VRS-317, having a terminal elimination half-life of approximately 110 h, was rapidly and near-completely absorbed, and was well tolerated with no observed adverse effects after every alternate week subcutaneous dosing for 14 weeks. VRS-317 also did not cause lipoatrophy in pig and monkey studies. VRS-317 is currently being studied in GH-deficient patients to confirm the observations in these animal studies. © 2012 Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci 101:2744–2754, 2012 PMID:22678811

  13. Correlations between changes in conformational dynamics and physical stability in a mutant IgG1 mAb engineered for extended serum half-life. (United States)

    Majumdar, Ranajoy; Esfandiary, Reza; Bishop, Steven M; Samra, Hardeep S; Middaugh, C Russell; Volkin, David B; Weis, David D


    This study compares the local conformational dynamics and physical stability of an IgG1 mAb (mAb-A) with its corresponding YTE (M255Y/S257T/T259E) mutant (mAb-E), which was engineered for extended half-life in vivo. Structural dynamics was measured using hydrogen/deuterium (H/D) exchange mass spectrometry while protein stability was measured with differential scanning calorimetry (DSC) and size exclusion chromatography (SEC). The YTE mutation induced differences in H/D exchange kinetics at both pH 6.0 and 7.4. Segments covering the YTE mutation sites and the FcRn binding epitopes showed either subtle or no observable differences in local flexibility. Surprisingly, several adjacent segments in the CH2 and distant segments in the VH, CH1, and VL domains had significantly increased flexibility in the YTE mutant. Most notable among the observed differences is increased flexibility of the 244-254 segment of the CH2 domain, where increased flexibility has been shown previously to correlate with decreased conformational stability and increased aggregation propensity in other IgG1 mAbs (e.g., presence of destabilizing additives as well as upon de-glycosylation or methionine oxidation). DSC analysis showed decreases in both thermal onset (Tonset) and unfolding (Tm1) temperatures of 7°C and 6.7°C, respectively, for the CH2 domain of the YTE mutant. In addition, mAb-E aggregated faster than mAb-A under accelerated stability conditions as measured by SEC analysis. Hence, the relatively lower physical stability of the YTE mutant correlates with increased local flexibility of the 244-254 segment, providing a site-directed mutant example that this segment of the CH2 domain is an aggregation hot spot in IgG1 mAbs.

  14. Glucagon-like Peptide 1 Conjugated to Recombinant Human Serum Albumin Variants with Modified Neonatal Fc Receptor Binding Properties. Impact on Molecular Structure and Half-Life

    DEFF Research Database (Denmark)

    Bukrinski, Jens T.; Sønderby, Pernille; Antunes, Filipa


    Glucagon-like peptide 1 (GLP-1) is a small incretin hormone stimulated by food intake, resulting in an amplification of the insulin response. Though interesting as a drug candidate for the treatment of type 2 diabetes mellitus, its short plasma half-life of less than 3 minutes limits its clinical...

  15. Fast renal trapping of porcine Luteinizing Hormone (pLH) shown by 123I-scintigraphic imaging in rats explains its short circulatory half-life

    Directory of Open Access Journals (Sweden)

    Locatelli Alain


    Background: Sugar moieties of gonadotropins play no primary role in receptor binding, but they strongly affect circulatory half-life and consequently in vivo biopotency. In order to relate hepatic trapping of these glycoproteic hormones more precisely to their circulatory half-life, we undertook a comparative study of the distribution and elimination of porcine LH (pLH) and equine CG (eCG), which exhibit a short and a long half-life, respectively. This was done first by following the half-life of pLH in piglets with the hepatic portal circulation shunted or not; it was expected that such a shunt would lengthen the short half-life of pLH. Subsequently, scintigraphic imaging of both 123I-pLH and 123I-eCG was performed in intact rats to compare their routes and rates of distribution and elimination. Methods: Native pLH or eCG was injected into normal piglets, and pLH was tested in liver-shunted anaesthetized piglets. Blood samples were recovered sequentially over one hour and hormone concentrations were determined by a specific ELISA method. Scintigraphic imaging of 123I-pLH and 123I-eCG was performed in rats using an OPTI-CGR gamma camera. Results: In liver-shunted piglets, the half-life of pLH was found to be as short as in intact piglets (5 min). In the rat, the half-life of pLH was also found to be very short (3-6 min), and 123I-pLH was found to accumulate in high quantity, in less than 10 min post injection, at the level of the kidneys but not in the liver. 123I-eCG did not accumulate in any organ in the rats during the first hour, plasma concentrations of this gonadotropin still being elevated (80%) at this time. Conclusion: In both the porcine and the rat species, the liver is not responsible for the rapid elimination of pLH from the circulation compared to eCG. Our scintigraphic experiments suggest that the very short circulatory half-life of LH is due to rapid renal trapping.

  16. Dexamethasone promotes granulocyte mobilization by prolonging the half-life of granulocyte-colony-stimulating factor in healthy donors for granulocyte transfusions. (United States)

    Hiemstra, Ida H; van Hamme, John L; Janssen, Machiel H; van den Berg, Timo K; Kuijpers, Taco W


    Granulocyte transfusion (GTX) is a potential approach to correcting neutropenia and relieving the increased risk of infection in patients who are refractory to antibiotics. To mobilize enough granulocytes for transfusion, healthy donors are premedicated with granulocyte-colony-stimulating factor (G-CSF) and dexamethasone. Granulocytes have a short circulatory half-life. Consequently, patients need to receive GTX every other day to keep circulating granulocyte counts at an acceptable level. We investigated whether plasma from premedicated donors was capable of prolonging neutrophil survival and, if so, which factor could be held responsible. The effects of plasma from G-CSF/dexamethasone-treated donors on neutrophil survival were assessed by annexin-V, CD16, and CXCR4 staining and nuclear morphology. We isolated an albumin-bound protein using α-chymotrypsin and albumin depletion and further characterized it using protein analysis. The effects of dexamethasone and G-CSF were assessed using mifepristone and a G-CSF-neutralizing antibody. G-CSF plasma concentrations were determined by Western blot and Luminex analyses. G-CSF/dexamethasone plasma contained a survival-promoting factor for at least 2 days. This factor was recognized as an albumin-associated protein and was identified as G-CSF itself, which was surprising considering its reported half-life of only 4.5 hours. Compared with coadministration of dexamethasone, administration of G-CSF alone to the same GTX donors led to a faster decline in circulating G-CSF levels, whereas dexamethasone itself did not induce any G-CSF, demonstrating a role for dexamethasone in increasing G-CSF half-life. Dexamethasone increases granulocyte yield upon coadministration with G-CSF by extending G-CSF half-life. This observation might also be exploited in the coadministration of dexamethasone with other recombinant proteins to modulate their half-life. © 2016 AABB.

  17. Evaluating the influence of half-life, milk:plasma partition coefficient, and volume of distribution on lactational exposure to chemicals in children. (United States)

    Verner, Marc-André; Plouffe, Laurence; Kieskamp, Kyra K; Rodríguez-Leal, Inés; Marchitti, Satori A


    Women are exposed to multiple environmental chemicals, many of which are known to transfer to breast milk during lactation. However, little is known about the influence of the different chemical-specific pharmacokinetic parameters on children's lactational dose. Our objective was to develop a generic pharmacokinetic model and subsequently quantify the influence of three chemical-specific parameters (biological half-life, milk:plasma partition coefficient, and volume of distribution) on lactational exposure to chemicals and resulting plasma levels in children. We developed a two-compartment pharmacokinetic model to simulate lifetime maternal exposure, placental transfer, and lactational exposure to the child. We performed 10,000 Monte Carlo simulations where half-life, milk:plasma partition coefficient, and volume of distribution were varied. Children's dose and plasma levels were compared to their mother's by calculating child:mother dose ratios and plasma level ratios. We then evaluated the association between the three chemical-specific pharmacokinetic parameters and child:mother dose and level ratios through linear regression and decision trees. Our analyses revealed that half-life was the most influential parameter on children's lactational dose and plasma concentrations, followed by milk:plasma partition coefficient and volume of distribution. In bivariate regression analyses, half-life explained 72% of child:mother dose ratios and 53% of child:mother level ratios. Decision trees aiming to identify chemicals with high potential for lactational exposure (ratio>1) had an accuracy of 89% for child:mother dose ratios and 84% for child:mother level ratios. Our study showed the relative importance of half-life, milk:plasma partition coefficient, and volume of distribution on children's lactational exposure. Developed equations and decision trees will enable the rapid identification of chemicals with a high potential for lactational exposure. Copyright © 2017
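As a rough illustration of why half-life dominates, even a one-compartment steady-state sketch (far simpler than the authors' two-compartment model; all parameter ranges below are illustrative assumptions, not the study's inputs) shows the child:mother dose ratio varying mainly with half-life:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative log-uniform parameter ranges (hypothetical, not the study's):
half_life = 10 ** rng.uniform(0, 4, n)      # biological half-life (days)
mp_ratio  = 10 ** rng.uniform(-1, 1, n)     # milk:plasma partition coefficient
vd        = 10 ** rng.uniform(-1, 1, n)     # volume of distribution (L/kg)

# One-compartment steady state: maternal plasma level Css = dose_rate/(k*Vd);
# the infant dose rate is Css * (M/P) * milk intake, so the child:mother dose
# ratio (per kg body weight) is proportional to (M/P) * milk_intake / (k * Vd).
k = np.log(2) / half_life                   # elimination rate constant (1/day)
milk_per_kg = 0.15                          # assumed infant milk intake, L/kg/day
dose_ratio = mp_ratio * milk_per_kg / (k * vd)

# Share of variance in log(dose ratio) explained by each parameter alone
y = np.log10(dose_ratio)
for name, x in [("half-life", half_life), ("M/P", mp_ratio), ("Vd", vd)]:
    r = np.corrcoef(np.log10(x), y)[0, 1]
    print(f"{name:9s} R^2 = {r**2:.2f}")
```

With these ranges, half-life alone explains about two thirds of the variance in the log dose ratio, qualitatively echoing the study's finding that half-life was the most influential of the three parameters.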

  18. Refractive index based measurements

    DEFF Research Database (Denmark)


    In a method for performing a refractive index based measurement of a property of a fluid such as chemical composition or temperature, a chirp in the local spatial frequency of interference fringes of an interference pattern is reduced by mathematical manipulation of the recorded light intensity...

  19. Effect of Truncating AUC at 12, 24 and 48 hr When Evaluating the Bioequivalence of Drugs with a Long Half-Life. (United States)

    Moreno, Isabel; Ochoa, Dolores; Román, Manuel; Cabaleiro, Teresa; Abad-Santos, Francisco


    Bioequivalence studies of drugs with a long half-life require long periods of time for pharmacokinetic sampling. The latest update of the European guideline allows the area under the curve (AUC) truncated at 72 hr to be used as an alternative to AUC0-t as the primary parameter. The objective of this study was to evaluate the effect of truncating the AUC at 48, 24 and 12 hr on the acceptance of the bioequivalence criterion as compared with truncation at 72 hr in bioequivalence trials. The effect of truncated AUC on the within-individual coefficient of variation (CVw) and on the ratio of the formulations was also analysed. Twenty-eight drugs were selected from bioequivalence trials. Pharmacokinetic data were analysed using WinNonLin 2.0 based on the trapezoidal method. Analysis of variance (ANOVA) was performed to obtain the ratios and 90% confidence intervals for AUC at different time-points. The degree of agreement of AUC0-72 in relation to AUC0-48 and AUC0-24, according to the Landis and Koch classification, was 'almost perfect'. Statistically significant differences were observed when the CVw of AUC truncated at 72, 48 and 24 hr was compared with the CVw of AUC0-12. There were no statistically significant differences in the AUC ratio at any time-point. Compared to AUC0-72, Pearson's correlation coefficient for mean AUC, AUC ratio and AUC CVw was worse for AUC0-12 than AUC0-24 or AUC0-48. These preliminary results could suggest that AUC truncation at 24 or 48 hr is adequate to determine whether two formulations are bioequivalent. © 2015 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
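The effect of truncation can be illustrated with the same trapezoidal method the authors used. The concentration profile below is a hypothetical one-compartment oral-absorption curve with a 30 h terminal half-life (illustrative only, not trial data); the ratios show how much of AUC0-72 is already captured at each earlier cut-off:

```python
import numpy as np

# Hypothetical oral profile: C(t) = A * (exp(-ke*t) - exp(-ka*t))
ka = 1.0                        # absorption rate constant (1/h)
ke = np.log(2) / 30.0           # elimination rate for a 30 h half-life
A = 10.0                        # scale factor (arbitrary concentration units)
t = np.array([0, 0.5, 1, 2, 4, 8, 12, 24, 48, 72], dtype=float)
c = A * (np.exp(-ke * t) - np.exp(-ka * t))

def auc(t, c, t_max):
    """Trapezoidal AUC from 0 to t_max over the sampled points."""
    m = t <= t_max
    tt, cc = t[m], c[m]
    return float(np.sum((cc[1:] + cc[:-1]) / 2.0 * np.diff(tt)))

auc72 = auc(t, c, 72)
for cut in (12, 24, 48):
    print(f"AUC0-{cut} / AUC0-72 = {auc(t, c, cut) / auc72:.2f}")
```

For a drug this long-lived, AUC0-12 captures only a fraction of AUC0-72, while AUC0-48 captures most of it, which is consistent with the observation that agreement with AUC0-72 degrades as the truncation point moves earlier.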

  20. A novel exendin-4 human serum albumin fusion protein, E2HSA, with an extended half-life and good glucoregulatory effect in healthy rhesus monkeys

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Ling; Wang, Lin; Meng, Zhiyun; Gan, Hui; Gu, Ruolan; Wu, Zhuona; Gao, Lei; Zhu, Xiaoxia; Sun, Wenzhong; Li, Jian; Zheng, Ying; Dou, Guifang, E-mail:


    Highlights: • E2HSA has an extended half-life and good plasma stability. • E2HSA could improve glucose-dependent insulin secretion. • E2HSA has excellent glucoregulatory effects in vivo. • E2HSA could potentially be used as a new long-acting GLP-1 receptor agonist for type 2 diabetes management. - Abstract: Glucagon-like peptide-1 (GLP-1) has attracted considerable research interest in the treatment of type 2 diabetes due to its multiple glucoregulatory functions. However, its short half-life, rapid inactivation by dipeptidyl peptidase-IV (DPP-IV) and excretion limit the therapeutic potential of the native incretin hormone, and efforts are therefore being made to develop long-acting incretin mimetics by modifying its structure. Here we report a novel recombinant exendin-4 human serum albumin fusion protein, E2HSA, in which the HSA moiety extends circulatory half-life in vivo while still retaining exendin-4 biological activity and therapeutic properties. In vitro comparisons of E2HSA and exendin-4 showed similar insulinotropic activity on rat pancreatic islets and GLP-1R-dependent biological activity on RIN-m5F cells, although E2HSA was less potent than exendin-4. E2HSA had a terminal elimination half-life of approximately 54 h in healthy rhesus monkeys. Furthermore, E2HSA could reduce postprandial glucose excursions, control fasting glucose levels, and dose-dependently suppress food intake. Improvements in glucose-dependent insulin secretion and control of serum glucose excursions were observed during a hyperglycemic clamp test (18 h) and an oral glucose tolerance test (42 h), respectively. The improved physiological characteristics of E2HSA thus make it a potent new anti-diabetic drug candidate for type 2 diabetes therapy.

  1. Quantitative pharmacological analyses of the interaction between flumazenil and midazolam in monkeys discriminating midazolam: Determination of the functional half-life of flumazenil. (United States)

    Zanettini, Claudio; France, Charles P; Gerak, Lisa R


    The duration of action of a drug is commonly estimated using plasma concentration, which is not always practical to obtain and is not always an accurate estimate of functional half-life. For example, flumazenil is used clinically to reverse the effects of benzodiazepines like midazolam; however, its elimination can be altered by other drugs, including some benzodiazepines, thereby altering its half-life. This study used Schild analyses to characterize antagonism of midazolam by flumazenil and determine the functional half-life of flumazenil. Four monkeys discriminated 0.178 mg/kg midazolam while responding under a fixed-ratio 10 schedule of stimulus-shock termination; flumazenil was given at various times before determination of a midazolam dose-effect curve. There was a time-related decrease in the magnitude of shift of the midazolam dose-effect curve as the interval between flumazenil and midazolam increased. The potency of flumazenil, estimated by apparent pA2 values (95% CI), was 7.30 (7.12, 7.49), 7.17 (7.03, 7.31), 6.91 (6.72, 7.10) and 6.80 (6.67, 6.92) at 15, 30, 60 and 120 min after flumazenil administration, respectively. The functional half-life of flumazenil, derived from these potency estimates, was 57 ± 13 min. Thus, increasing the interval between flumazenil and midazolam causes orderly decreases in flumazenil potency; however, across a broad range of conditions, the qualitative nature of the interaction does not change, as indicated by slopes of Schild plots at all time points that are not different from unity. Differences in potency of flumazenil are therefore due to elimination of flumazenil and not to pharmacodynamic changes over time. © 2013 Published by Elsevier B.V.
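The reported pA2 values decline roughly linearly with time, which is what first-order antagonist elimination predicts (apparent pA2 tracks the log of the remaining antagonist concentration). A back-of-envelope reconstruction from the four quoted time points, assuming that linear relationship holds (a simplification of the authors' full analysis):

```python
import numpy as np

# Apparent pA2 of flumazenil at increasing pretreatment intervals (from text)
t_min = np.array([15, 30, 60, 120], dtype=float)
pA2   = np.array([7.30, 7.17, 6.91, 6.80])

# Under first-order elimination, log10(concentration) falls linearly in time,
# so apparent pA2 falls with slope -k / ln(10); fit a line to recover k.
slope, intercept = np.polyfit(t_min, pA2, 1)
k = -slope * np.log(10)             # elimination rate constant (1/min)
t_half = np.log(2) / k              # functional half-life (min)

print(f"functional half-life ~ {t_half:.0f} min")  # study reports 57 ± 13 min
```

This crude fit lands in the low-to-mid 60s of minutes, within the uncertainty of the study's reported 57 ± 13 min estimate.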

  2. FVIII-binding IgG modulates FVIII half-life in patients with severe and moderate hemophilia A without inhibitors. (United States)

    Hofbauer, Christoph J; Kepa, Sylvia; Schemper, Michael; Quehenberger, Peter; Reitter-Pfoertner, Sylvia; Mannhalter, Christine; Reipert, Birgit M; Pabinger, Ingrid


    The substantial variability in pharmacokinetic parameters in hemophilia A patients poses a challenge for optimal treatment with factor VIII (FVIII) products. We investigated the effect of FVIII-specific immunoglobulin G (IgG) on FVIII half-life in a cohort of 42 adult patients with severe and moderate hemophilia A without inhibitors. Fifteen (35.7%) of 42 patients tested positive for FVIII-binding IgG with titers ≥1:20 in the initial antibody screen; 9 of these 15 patients had FVIII-specific antibodies with titers ≥1:40, mostly low-to-moderate-affinity IgG1 and IgG3, and 1 had high-affinity IgG4 and later developed low-titer FVIII inhibitors. His brother, with low-to-moderate-affinity IgG1 and IgG3, also later developed low-titer FVIII inhibitors. The presence of FVIII-specific IgG subclass titers ≥1:40 was significantly associated with shorter FVIII half-life (median, 7.8 hours [interquartile range, 6.6-9.2 hours] vs 10.4 hours [interquartile range, 8.9-13.8 hours]); the regression coefficient adjusted for log age and log von Willebrand factor (VWF) antigen was -0.32 (P = .004), accounting for 16.9% of the observed variability of FVIII half-life in our cohort. Our data indicate a significant contribution of non-neutralizing FVIII-specific IgG to FVIII half-life reduction in hemophilia A patients. Thus, screening for FVIII-specific IgG could be beneficial in tailoring FVIII prophylactic regimens. © 2016 by The American Society of Hematology.

  3. PEGylation of a High-Affinity Anti-(+)Methamphetamine Single Chain Antibody Fragment Extends Functional Half-Life by Reducing Clearance. (United States)

    Reichard, Emily E; Nanaware-Kharade, Nisha; Gonzalez, Guillermo A; Thakkar, Shraddha; Owens, S Michael; Peterson, Eric C


    Methamphetamine (METH) abuse is a worldwide drug problem, yet no FDA-approved pharmacological treatments are available for METH abuse. We therefore produced an anti-METH single-chain antibody fragment (scFv7F9Cys) as a pharmacological treatment for METH abuse. scFvs have a short half-life due to their small size, limiting their clinical use. Thus, we examined the pharmacokinetic effects of conjugating poly(ethylene) glycol (PEG) to scFv7F9Cys to extend its functional half-life. The affinity of scFv7F9Cys and its PEG conjugates for METH was determined in vitro via equilibrium dialysis saturation binding. Pharmacokinetic parameters of scFv7F9Cys and scFv7F9Cys-PEG20K (30 mg/kg i.v. each) and their ability to bind METH in vivo were determined in male Sprague-Dawley rats receiving a subcutaneous infusion of METH (3.2 mg/kg/day). Of three PEGylated conjugates, scFv7F9Cys-PEG20K was determined to be the most viable therapeutic candidate. PEGylation of scFv7F9Cys did not alter METH-binding functionality in vitro, and produced a 27-fold increase in the in vivo half-life of the antibody fragment. Furthermore, total METH serum concentrations increased following scFv7F9Cys or scFv7F9Cys-PEG20K administration, with scFv7F9Cys-PEG20K producing significantly longer-lasting changes in METH distribution than scFv7F9Cys. PEGylation of scFv7F9Cys thus significantly increased its functional half-life, suggesting it may be a long-lasting pharmacological treatment option for METH abuse.

  4. A Two-pronged Binding Mechanism of IgG to the Neonatal Fc Receptor Controls Complex Stability and IgG Serum Half-life. (United States)

    Jensen, Pernille Foged; Schoch, Angela; Larraillet, Vincent; Hilger, Maximiliane; Schlothauer, Tilman; Emrich, Thomas; Rand, Kasper Dyrberg


    The success of recombinant monoclonal immunoglobulins (IgG) is rooted in their ability to target distinct antigens with high affinity, combined with an extraordinarily long serum half-life, typically around 3 weeks. The pharmacokinetics of IgGs is intimately linked to the recycling mechanism of the neonatal Fc receptor (FcRn). For a long serum half-life of therapeutic IgGs, the highly pH-dependent interaction with FcRn needs to be balanced to allow efficient FcRn binding at slightly acidic pH and release at physiological pH. Some IgGs, such as the antibody briakinumab, have an unusually short half-life of ∼8 days. Here we dissect the molecular origins of excessive FcRn binding in therapeutic IgGs using a combination of hydrogen/deuterium exchange mass spectrometry and FcRn affinity chromatography. We provide experimental evidence for a two-pronged IgG-FcRn binding mechanism involving direct FcRn interactions with both the Fc region and the Fab regions of briakinumab, and correlate the occurrence of excessive FcRn binding with an unusually strong Fab-FcRn interaction. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  5. Disease causing mutants of TDP-43 nucleic acid binding domains are resistant to aggregation and have increased stability and half-life. (United States)

    Austin, James A; Wright, Gareth S A; Watanabe, Seiji; Grossmann, J Günter; Antonyuk, Svetlana V; Yamanaka, Koji; Hasnain, S Samar


    Over the last two decades many secrets of the age-related human neural proteinopathies have been revealed. A common feature of these diseases is abnormal, and possibly pathogenic, aggregation of specific proteins in the affected tissue, often resulting from inherently low or decreased structural stability. An archetypal example is superoxide dismutase-1, the first genetic factor to be linked with amyotrophic lateral sclerosis (ALS). Mutant or posttranslationally modified TAR DNA-binding protein 43 (TDP-43) is also strongly associated with ALS and an increasingly large number of other neurodegenerative diseases, including frontotemporal lobar degeneration (FTLD). Cytoplasmic mislocalization and elevated half-life are characteristic of mutant TDP-43. Furthermore, patient age at the onset of disease symptoms shows a good inverse correlation with mutant TDP-43 half-life. Here we show that ALS- and FTLD-associated TDP-43 mutations in the central nucleic acid binding domains lead to elevated half-life, and that this is commensurate with increased thermal stability and inhibition of aggregation. It is achieved without impact on secondary, tertiary, or quaternary structure. We propose that tighter structural cohesion contributes to reduced protein turnover, increasingly abnormal proteostasis and, ultimately, earlier onset of disease symptoms. These results contrast with our perception of neurodegenerative diseases as misfolded proteinopathies and delineate a novel path from the molecular characteristics of mutant TDP-43 to aberrant cellular effects and patient phenotype.

  6. Albumin-binding domain from Streptococcus zooepidemicus protein Zag as a novel strategy to improve the half-life of therapeutic proteins. (United States)

    Cantante, Cátia; Lourenço, Sara; Morais, Maurício; Leandro, João; Gano, Lurdes; Silva, Nuno; Leandro, Paula; Serrano, Mónica; Henriques, Adriano O; Andre, Ana; Cunha-Santos, Catarina; Fontes, Carlos; Correia, João D G; Aires-da-Silva, Frederico; Goncalves, Joao


    Recombinant antibody fragments belong to a promising class of biopharmaceuticals with high potential for future therapeutic applications. However, due to their small size they are rapidly cleared from circulation. Binding to serum proteins can be an effective approach to improving the pharmacokinetic properties of short half-life molecules. Herein, we have investigated the Zag albumin-binding domain (ABD) derived from Streptococcus zooepidemicus as a novel strategy to improve the pharmacokinetic properties of therapeutic molecules. To validate our approach, the Zag ABD was fused with an anti-TNFα single-domain antibody (sdAb). Our results demonstrated that the sdAb-Zag fusion protein was highly expressed and specifically recognizes human, rat and mouse serum albumins with affinities in the nanomolar range. Moreover, the data also demonstrated that the sdAb activity against the therapeutic target (TNFα) was not affected when fused with the Zag ABD. Importantly, the Zag ABD increased the sdAb half-life ∼39-fold (47 min for the sdAb versus 31 h for sdAb-Zag). These findings demonstrate that Zag ABD fusion is a promising approach to increasing the half-life of small recombinant antibody molecules without affecting their therapeutic efficacy. Moreover, the present study strongly suggests that the Zag ABD fusion strategy can potentially be used as a universal method to improve the pharmacokinetic properties of many other therapeutic proteins and peptides in order to improve their dosing schedules and clinical effects. Copyright © 2017. Published by Elsevier B.V.

  7. Pulse shape analysis for the GERDA experiment to set a new limit on the half-life of 0νββ decay of {sup 76}Ge

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Victoria Elisabeth


    The GERDA experiment searches for neutrinoless double beta (0νββ) decay of {sup 76}Ge using high purity germanium (HPGe) detectors operated in liquid argon (LAr). The aim is to explore half-lives of the order of 10{sup 26} yr. GERDA therefore relies on improved active background reduction techniques such as pulse shape discrimination (PSD), in which the time structure of the germanium signals is analyzed to discriminate signal-like from background-like events. Two types of HPGe detectors are operated: semi-coaxial detectors previously used in the Heidelberg-Moscow and IGEX experiments, and new Broad Energy Germanium (BEGe) detectors, which feature an improved energy resolution and enhanced PSD. In Phase I of the experiment, five enriched BEGe detectors were used for the first time in the search for 0νββ decay. A PSD based on a single parameter, the ratio A/E of the maximum current amplitude to the energy, is applied. 83% of the background events in a 232 keV region around Q{sub ββ} are rejected, with a high signal efficiency of (92.1 ± 1.9)%. The achieved background index (BI) is (5.4{sup +4.1}{sub -3.4}) × 10{sup -3} counts/(keV kg yr). This is an improvement by a factor of 10 compared to previous germanium-based 0νββ experiments. Phase II of the experiment includes a major upgrade: for further background rejection, the LAr cryostat is instrumented to detect argon scintillation light, and an additional 25 BEGe detectors were installed. After PSD and the LAr veto, a BI of (0.7{sup +1.3}{sub -0.5}) × 10{sup -3} counts/(keV kg yr) is achieved, the best BI in 0νββ experiments so far. A frequentist statistical analysis is performed on the combined data collected in GERDA Phase I and the first Phase II release. A new limit on the half-life of 0νββ decay of {sup 76}Ge is set at T{sup 0ν}{sub 1/2} > 5.3 × 10{sup 25} yr at 90% C.L., with a median sensitivity of T{sup 0ν}{sub 1/2} > 4.0 × 10{sup 25} yr at 90% C.L.
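The single-parameter A/E cut described above can be illustrated with a toy sketch (this is not the GERDA analysis code; the pulse shapes below are made up): the current pulse is the time derivative of the charge pulse, and a fast, localized charge collection yields a larger maximum current per unit energy than a slow, multi-site-like rise.

```python
import numpy as np

def a_over_e(charge_pulse, energy):
    """A/E classifier: A is the maximum of the current pulse
    (time derivative of the charge pulse), E the event energy."""
    current = np.gradient(charge_pulse)
    return current.max() / energy

# Toy pulses, both normalized to the same energy: a steep (single-site-like)
# rise gives a larger maximum current than a shallow (multi-site-like) one.
t = np.linspace(0.0, 1.0, 1000)
steep = 1.0 / (1.0 + np.exp(-(t - 0.5) / 0.01))
shallow = 1.0 / (1.0 + np.exp(-(t - 0.5) / 0.05))
assert a_over_e(steep, 1.0) > a_over_e(shallow, 1.0)
```

In the real analysis the cut value is tuned on calibration data so that a fixed fraction of single-site events survives.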

  8. Histone deacetylase inhibitors interfere with angiogenesis by decreasing endothelial VEGFR-2 protein half-life in part via a VE-cadherin-dependent mechanism. (United States)

    Hrgovic, Igor; Doll, Monika; Pinter, Andreas; Kaufmann, Roland; Kippenberger, Stefan; Meissner, Markus


    Recent evidence suggests that histone deacetylase inhibitors (HDACi) may mediate part of their antitumor effects by interfering with tumor angiogenesis. As signalling via the vascular endothelial growth factor receptor-2 (VEGFR-2) pathway is critical for angiogenic responses during tumor progression, we explored whether established antitumor effects of HDACi are partly mediated through diminished endothelial VEGFR-2 expression. We therefore examined the potential impact of three different HDACi, trichostatin A (TSA), sodium butyrate (But) and valproic acid (VPA), on VEGFR-2 protein expression. TSA, VPA and But significantly inhibit VEGFR-2 protein expression in endothelial cells. Pertinent to these data, VEGFR-2 protein half-life is shown to be decreased in response to HDACi. Recently, it could be demonstrated that expression of VE-cadherin influences VEGFR-2 protein half-life. In our experiments, VEGFR-2 downregulation was accompanied by HDACi-induced VE-cadherin suppression. Interestingly, siRNA-mediated knockdown of VE-cadherin led to a pronounced loss of VEGFR-2 expression on the protein as well as on the mRNA level, implicating that VE-cadherin not only influences VEGFR-2 protein half-life but also the transcriptional level. To further distinguish which of the eight different histone deacetylases are responsible for the regulation of VEGFR-2 expression, specific HDAC genes were silenced by transfecting respective siRNAs. These studies revealed that HDACs 1, 4, 5 and 6 are preferentially involved in VEGFR-2 expression. Therefore, these results provide an explanation for the anti-angiogenic action of HDAC inhibitors via a VE-cadherin, HDAC 1 and HDACs 4-6-mediated suppression of VEGFR-2 expression and might be of importance in the development of new anti-angiogenic drugs. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Human IgG3 with extended half-life does not improve Fc-gamma receptor-mediated cancer antibody therapies in mice.

    Directory of Open Access Journals (Sweden)

    Rens Braster

    Full Text Available Current anti-cancer therapeutic antibodies used in the clinic are predominantly humanized or fully human immunoglobulin G1 (IgG1). These antibodies bind with high affinity to the target antigen and are efficient in activating the immune system via IgG Fc receptors and/or complement. In addition to IgG1, three more isotypes are present in humans, of which IgG3 has been found to be superior to human IgG1 in inducing antibody-dependent cellular cytotoxicity (ADCC), phagocytosis or activation of complement in some models. Nonetheless, no therapeutic human IgG3 mAbs have been developed, due to the short in vivo half-life of most known IgG3 allotypes. In this manuscript, we compared the efficacy of V-gene-matched IgG1 and IgG3 anti-tumour mAb (TA99) in mice, using natural variants of human IgG3 with short or long half-lives, differing only at position 435 with an arginine or histidine, respectively. In vitro, human IgG1 and IgG3 did not show any differences in opsonisation ability of B16F10-gp75 mouse melanoma cells. IgG1, however, was superior in inducing phagocytosis of tumour cells by mouse macrophages. Similarly, in a mouse peritoneal metastasis model we did not detect an improved effect of IgG3 in preventing tumour outgrowth. Moreover, replacing the arginine at position 435 with a histidine in IgG3 to enhance half-life did not result in better suppression of tumour outgrowth compared with wild-type IgG3 when injected prior to tumour cell injection. In conclusion, human IgG3 does not have improved therapeutic efficacy compared with human IgG1 in a mouse tumour model.

  10. Clinical utility and patient perspectives on the use of extended half-life rFIXFc in the management of hemophilia B

    Directory of Open Access Journals (Sweden)

    Miguelino MG


    Full Text Available Maricel G Miguelino, Jerry S Powell Division of Hematology and Oncology, University of California Davis Medical Center, Sacramento, CA, USA Abstract: Hemophilia B is an X-linked genetic disease caused by mutation of the gene for coagulation protein factor IX (FIX), with an incidence of approximately one in every 30,000 male births in all populations and ethnic groups. When severe, the disease leads to spontaneous, life-threatening bleeding episodes. When untreated, most patients die from bleeding complications before 25 years of age. Current therapy requires frequent intravenous infusions of therapeutic recombinant or plasma-derived protein concentrates containing FIX. Most patients administer the infusions at home every few days, and must limit their physical activities to avoid abnormal bleeding when FIX activity levels are below normal. After completing the pivotal Phase III clinical trial, a new therapeutic FIX preparation, engineered for an extended half-life in circulation, received regulatory approval in March 2014 in Canada and the US. This new FIX represents a major therapeutic advance for patients with hemophilia B. The half-life is prolonged due to fusion of the native FIX molecule with the normal constant region of immunoglobulin G. This fusion molecule then follows the normal immunoglobulin recirculation pathways through endothelial cells, resulting in prolonged time in circulation. In the clinical trials, over 150 patients successfully used eftrenonacog alfa regularly for more than 1 year to prevent spontaneous bleeding, to successfully treat any bleeding episodes, and to provide effective coagulation for major surgery. All infusions were well tolerated and effective, with no inhibitors detected and no safety concerns. This promising therapy should allow patients to use fewer infusions to maintain appropriate FIX activity levels in all clinical settings. Keywords: factor IX, hemophilia B, prophylaxis, genetic

  11. Comparison between the clot-protecting activity of a mutant plasminogen activator inhibitor-1 with a very long half-life and 6-aminocaproic acid. (United States)

    Kindell, Daniel Glenn; Keck, Rick Wayne; Jankun, Jerzy


    Plasminogen activator inhibitor (PAI)-1 is a serpin glycoprotein that can stabilize blood clots by inhibiting fibrinolysis. However, wild-type PAI-1 has the disadvantage of a short half-life of ∼2 h. A very long half-life (VLHL) PAI-1 mutant was developed previously with an active-form half-life of >700 h, making it a possible candidate for use in hemorrhagic therapy. Current treatments for mitigating hemorrhage, other than inducers of blood clotting, are limited to lysine analog antifibrinolytics, including 6-aminocaproic acid and tranexamic acid. VLHL PAI-1 has previously been demonstrated to limit bleeding; however, the efficacy of this protein compared with lysine analog antifibrinolytics has not been investigated. The aim of the current study was to compare the clot-stabilizing properties of the novel antifibrinolytic VLHL PAI-1 with those of 6-aminocaproic acid in reference plasma. Using thromboelastographic analysis, VLHL PAI-1 exhibited an IC50 (half-maximal inhibitory concentration) of 8.8×10(-8) mol/l, while 6-aminocaproic acid showed an IC50 of 1.6×10(-4) mol/l. However, at doses of >9.0×10(-7) mol/l, VLHL PAI-1 exhibited a delay in the onset of clot formation, which may be attributed to thrombin inhibition by excess PAI-1. In the inhibition of tissue plasminogen activator, VLHL PAI-1 demonstrated improved efficacy over 6-aminocaproic acid in mitigating hemorrhage. In addition, patients with a PAI-1 deficiency, which causes blood clots to lyse rapidly resulting in profuse bleeding, may benefit from the application of VLHL PAI-1 as an antihemorrhagic therapy.
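A quick back-of-the-envelope comparison of the two reported IC50 values shows the molar potency gap between the two antifibrinolytics:

```python
# Reported IC50 values (mol/l) from the thromboelastographic assay:
ic50_vlhl_pai1 = 8.8e-8
ic50_eaca = 1.6e-4   # 6-aminocaproic acid

potency_ratio = ic50_eaca / ic50_vlhl_pai1
print(f"VLHL PAI-1 is ~{potency_ratio:.0f}x more potent on a molar basis")  # ~1818x
```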

  12. Development of PF-06671008, a Highly Potent Anti-P-cadherin/Anti-CD3 Bispecific DART Molecule with Extended Half-Life for the Treatment of Cancer

    Directory of Open Access Journals (Sweden)

    Adam R. Root


    Full Text Available Bispecific antibodies offer a promising approach for the treatment of cancer but can be challenging to engineer and manufacture. Here we report the development of PF-06671008, an extended-half-life dual-affinity re-targeting (DART®) bispecific molecule against P-cadherin and CD3 that demonstrates antibody-like properties. Using phage display, we identified anti-P-cadherin single chain Fv (scFv) that were subsequently affinity-optimized to picomolar affinity using stringent phage selection strategies, resulting in low picomolar potency in cytotoxic T lymphocyte (CTL) killing assays in the DART format. The crystal structure of this disulfide-constrained diabody shows that it forms a novel compact structure with the two antigen binding sites separated from each other by approximately 30 Å and facing approximately 90° apart. We show here that introduction of the human Fc domain in PF-06671008 has produced a molecule with an extended half-life (∼4.4 days in human FcRn knock-in mice), high stability (Tm1 > 68 °C), high expression (>1 g/L), and robust purification properties (highly pure heterodimer), all with minimal impact on potency. Finally, we demonstrate in vivo anti-tumor efficacy in a human colorectal/human peripheral blood mononuclear cell (PBMC) co-mix xenograft mouse model. These results suggest PF-06671008 is a promising new bispecific for the treatment of patients with solid tumors expressing P-cadherin.

  13. Samarium-neodymium chronology and rubidium-strontium systematics of an Allende calcium-aluminum-rich inclusion with implications for 146Sm half-life (United States)

    Marks, N. E.; Borg, L. E.; Hutcheon, I. D.; Jacobsen, B.; Clayton, R. N.


    Calcium-aluminum-rich inclusions (CAIs) are primitive objects that formed within the protoplanetary disk surrounding the young Sun. Recent Pb-Pb chronologic studies have demonstrated that CAIs are the oldest solar system solids, crystallizing 4567 Ma ago (Amelin et al., 2002; Connelly et al., 2012). The isotope systematics of CAIs therefore provide critical insight into the earliest history of the Solar System. Although Sm-Nd and Rb-Sr geochronometers are highly effective tools for investigating cosmochemical evolution in the early Solar System, previous studies of CAIs have revealed evidence for isotopically disturbed systems. Here we report new age data for Allende CAI Al3S4 derived from both the long-lived (147Sm-143Nd) and short-lived (146Sm-142Nd) isotopic systems. The 147Sm-143Nd chronometer yields an age of 4560 ± 34 Ma that is concordant with 207Pb-206Pb ages for CAIs and indicates that the Sm-Nd system was not significantly disturbed by secondary alteration or nucleosynthetic processes. The slope of the 146Sm-142Nd isochron defines the Solar System initial 146Sm/144Sm of 0.00828 ± 0.00044. This value is significantly different from the value of 0.0094 determined by Kinoshita et al. (2012). Ages recalculated from all published 146Sm-142Nd isochron data using the traditional 103 Ma half-life and the initial 146Sm/144Sm value determined here closely match Pb-Pb and 147Sm-143Nd ages determined on the same samples. In contrast, ages recalculated using the 68 Ma half-life determined by Kinoshita et al. (2012) and either of the initial 146Sm/144Sm values are often anomalously old. This is particularly true for the youngest samples with 146Sm-142Nd isochron ages that are most sensitive to the choice of 146Sm half-life used in the age calculation. In contrast to the Sm-Nd isotope system, the Rb-Sr system is affected by alteration but yields an apparent isochron with a slope corresponding to a much younger age of 4247 ± 110 Ma. Although the Rb-Sr system in CAIs
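The sensitivity of recalculated ages to the adopted {sup 146}Sm half-life can be sketched with the standard extinct-radionuclide model-age formula. The sample ratio below is purely hypothetical, chosen only to illustrate the effect discussed in the abstract: the shorter 68 Ma half-life compresses the inferred time interval after CAI formation, and hence yields older absolute ages for the same measured ratio.

```python
import math

def sm146_model_age(r_init, r_sample, half_life_ma):
    """Time interval (Ma) from the extinct 146Sm-142Nd system:
    delta_t = (t_half / ln 2) * ln(r_init / r_sample),
    where r is the 146Sm/144Sm ratio at solar system formation
    (r_init) and at isotopic closure of the sample (r_sample)."""
    return half_life_ma / math.log(2) * math.log(r_init / r_sample)

r_sample = 0.004  # hypothetical sample ratio, for illustration only
dt_103 = sm146_model_age(0.00828, r_sample, 103.0)  # this study's initial, 103 Ma half-life
dt_68 = sm146_model_age(0.0094, r_sample, 68.0)     # Kinoshita et al. (2012) values
# The 68 Ma half-life gives a shorter interval after CAI formation,
# i.e. an anomalously old absolute age for the same measured ratio.
assert dt_68 < dt_103
```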


    Energy Technology Data Exchange (ETDEWEB)

    Simonucci, S. [Department of Physics, University of Camerino, Camerino (Italy); Taioli, S.; Busso, M. [Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, Perugia (Italy); Palmerini, S., E-mail:, E-mail: [Departamento de Fisica Teorica y del Cosmos, Universidad de Granada (Spain)


    The enrichment of Li in the universe is still unexplained, presenting various puzzles to astrophysics. One open issue is that of obtaining reliable estimates for the rate of e {sup -} captures on {sup 7}Be for T and {rho} conditions that are different from the solar ones. This is of crucial importance for modeling the Galactic nucleosynthesis of Li. In this framework, we present here a new theoretical method for calculating the e {sup -} capture rate in typical conditions for evolved stars. Furthermore, we show how our approach compares with state-of-the-art techniques for solar conditions, where various estimates are available. Our computations include (1) 'traditional' calculations of the electronic density at the nucleus, to which the e {sup -} capture rate for {sup 7}Be is proportional, for different theoretical approaches including the Thomas-Fermi, Poisson-Boltzmann, and Debye-Hueckel (DH) models of screening; and (2) a new computation, based on a formalism that goes beyond the previous ones, adopting a mean-field 'adiabatic' approximation to the scattering process. The results obtained with the new approach as well as with traditional ones and their differences are discussed in some detail, starting from solar conditions, where our approach and the DH model essentially converge to the same solution. We then analyze the applicability of both our method and the DH model to a rather broad range of T and {rho} values, embracing those typical of red giant stars, where both bound and continuum states contribute to the capture. We find that over a wide region of the parameter space explored, the DH approximation does not really stand, so that the more general method we suggest should be preferred. As a first application, we briefly reanalyze the {sup 7}Li abundances in red giant branch and asymptotic giant branch stars of the Galactic disk in light of a revision in the Be decay only; however, we emphasize that the changes we find in the electron

  15. Analysis of radon and thoron progeny measurements based on air filtration. (United States)

    Stajic, J M; Nikezic, D


    The measurement of radon and thoron progeny concentrations in air by air filtration was analysed in order to assess the reliability of the method. Changes of radon and thoron progeny activities on the filter during and after air sampling were investigated. Simulation experiments were performed involving realistic measuring parameters. The sensitivity of the results (radon and thoron concentrations in air) to variations of the alpha counting in three and five intervals was studied. The concentration of (218)Po proved to be the most sensitive to these changes, as expected because of its short half-life. The well-known method for measuring progeny concentrations based on air filtration is rather unreliable, and obtaining unrealistic or incorrect results appears to be quite possible. A simple method for quick estimation of the radon potential alpha energy concentration (PAEC), based on measurements of alpha activity in a saturation regime, is proposed. Thoron PAEC can be determined from the saturation activity on the filter, through beta or alpha measurements. © The Author 2014. Published by Oxford University Press. All rights reserved.
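The buildup and decay of a progeny nuclide on the filter can be sketched with the usual one-nuclide activation formula. This is a simplified illustration, not the paper's simulation: a real analysis follows the whole decay chain via the Bateman equations, and the unit deposition rate below is arbitrary.

```python
import math

def filter_activity(t, t_sample, rate, half_life):
    """Activity (decays/s) of a single progeny nuclide deposited on a
    filter at a constant rate (atoms/s) during sampling, then decaying
    freely afterwards. One-nuclide sketch only."""
    lam = math.log(2) / half_life
    if t <= t_sample:
        return rate * (1.0 - math.exp(-lam * t))
    a_end = rate * (1.0 - math.exp(-lam * t_sample))
    return a_end * math.exp(-lam * (t - t_sample))

half_po218 = 183.0  # s; the ~3 min half-life of 218Po
a_end = filter_activity(600.0, 600.0, 1.0, half_po218)    # end of 10 min sampling
a_late = filter_activity(1200.0, 600.0, 1.0, half_po218)  # 10 min after sampling
# 218Po is near saturation after a few half-lives, then decays away quickly,
# which is why the inferred concentration is so sensitive to count timing.
assert a_end > 0.85 and a_late < 0.15
```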

  16. The influence of severe hypoalbuminemia on the half-life of vancomycin in elderly patients with methicillin-resistant Staphylococcus aureus hospital-acquired pneumonia

    Directory of Open Access Journals (Sweden)

    Mizuno T


    Full Text Available Tomohiro Mizuno,1,* Fumihiro Mizokami,2,* Kazuhiro Fukami,2 Kazuhiro Ito,2 Masataka Shibasaki,3 Tadashi Nagamatsu,1 Katsunori Furuta,4 1Department of Analytical Pharmacology, Meijo University Graduate School of Pharmacy, Nagoya, Japan; 2Department of Pharmacy, National Center for Geriatrics and Gerontology, 3Department of Respiratory Medicine, National Center for Geriatrics and Gerontology, 4Department of Clinical Research and Development, National Center for Geriatrics and Gerontology, Obu, Japan *These authors contributed equally to this work Background: Vancomycin (VCM) treatment outcomes depend on the characteristics of the patient, and it is well known that hypoalbuminemia is a risk factor for poor treatment outcomes, as reported in a previous study. However, the reason that severe hypoalbuminemia influences the treatment outcome of VCM remains unknown. Objective: To elucidate the association between severe hypoalbuminemia and VCM treatment outcomes, we examined pharmacokinetic/pharmacodynamic (PK/PD) parameters in elderly patients with severe hypoalbuminemia. Methods: We conducted a retrospective observational study of 94 patients with methicillin-resistant Staphylococcus aureus (MRSA) hospital-acquired pneumonia who had been treated with VCM between January 2006 and December 2012. The 94 patients were divided into severe hypoalbuminemia and non-severe hypoalbuminemia groups. The PK/PD parameters and treatment outcomes of VCM were compared between the two groups. Results: The half-life of VCM in the severe hypoalbuminemia group was significantly longer than in the non-severe hypoalbuminemia group (33.2 ± 5.4 vs 24.9 ± 1.6; P = 0.049). Area under the concentration curve (AUC)/minimum inhibitory concentration (MIC) values of 250–450 and >450 µg × h/mL were significantly associated with 28-day mortality in the severe hypoalbuminemia group (P < 0.001), whereas AUC/MIC values of <250 µg × h/mL were not associated. We also detected a

  17. Investigation of temperature effect on half-life periods of long-lived isomers {sup 180m}Hf and {sup 87m}Sr

    CERN Document Server

    Alpatov, V G; Davydov, A V; Isaev, Y N; Kartashov, G R; Korotkov, M M; Samojlov, V M


    The experiments on measuring the half-life periods of the {sup 180m}Hf and {sup 87m}Sr long-lived isomers at room temperature and at 77 K, using massive HfO{sub 2}, Sr(NO{sub 3}){sub 2} and SrCO{sub 3} samples, are described. The isomeric states of the corresponding nuclei were formed by irradiating the samples with neutrons from a Pu-Be source. According to the theory of Vysotski and other authors, surrounding the gamma-active nuclei with a large number of the same nuclei in the ground state should lead to growth of T{sub 1/2} due to distortion of the zero-point electromagnetic vacuum oscillations near the nuclear energy level value. Decreasing the sample temperature narrows the gamma lines, especially for the low-energy Moessbauer transitions, which increases the resonance effect on the zero-oscillation spectrum. An increase in T{sub 1/2} of (2.99 ± 0.87)% was observed on cooling the {sup 180m}Hf isomer sample, in the ...

  18. Express with caution: Epitope tags and cDNA variants effects on hERG channel trafficking, half-life and function. (United States)

    Osterbur Badhey, Marika L; Bertalovitz, Alexander C; McDonald, Thomas V


    Genetic mutations in KCNH2, which encodes hERG, the alpha subunit of the potassium channel responsible for the IKr current, cause long QT syndrome (LQTS), an inherited cardiac arrhythmia disorder. Electrophysiology techniques are used to correlate genotype with molecular phenotype to determine which mutations identified in patients diagnosed with LQTS are disease causing, and which are benign. These investigations are usually done using heterologous expression in cell lines, and often, epitope fusion tags are used to enable isolation and identification of the protein of interest. Here, we demonstrate through electrophysiology techniques and immunohistochemistry that both N-terminal and C-terminal myc fusion tags may perturb hERG protein channel expression and kinetics of the IKr current. We also characterize the impact of two previously reported inadvertent cDNA variants on hERG channel expression and half-life. Our results underscore the importance of carefully characterizing the impact of epitope fusion tags and of confirming complete sequence accuracy prior to genotype-phenotype studies for ion channel proteins such as hERG. © 2017 Wiley Periodicals, Inc.

  19. Interleukin 17 treatment prolongs CXCL1 mRNA half-life via TRAF5 and the splicing regulatory factor SF2/ASF (United States)

    Sun, Dongxu; Novotny, Michael; Bulek, Katarzyna; Liu, Caini; Li, Xiaoxia; Hamilton, Thomas


    Interleukin 17 (IL-17) promotes expression of chemokines and cytokines via induction of gene transcription and post-transcriptional stabilization of mRNA. We show that IL-17 enhanced the stability of CXCL1 and other mRNAs through a pathway that involves Act1, TRAF2 or TRAF5 and the splicing factor SF2/ASF. TRAF2/TRAF5 were necessary for IL-17 to signal CXCL1 mRNA stabilization. Furthermore, IL-17 promoted formation of complexes between TRAF5/TRAF2, Act1 and SF2/ASF. Overexpression of SF2/ASF shortened while depletion of SF2/ASF prolonged CXCL1 mRNA half-life. SF2/ASF bound chemokine mRNA in unstimulated cells while the SF2/ASF-mRNA interaction was markedly diminished following stimulation with IL-17. These findings define an IL-17-induced signaling pathway that links to the stabilization of selected mRNAs through Act1, TRAF2/5 and the RNA binding protein SF2/ASF. PMID:21822258

  20. Cassette dosing for pharmacokinetic screening in drug discovery: comparison of clearance, volume of distribution, half-life, mean residence time, and oral bioavailability obtained by cassette and discrete dosing in rats. (United States)

    Nagilla, Rakesh; Nord, Melanie; McAtee, Jeff J; Jolivette, Larry J


    The purpose of this investigation was to compare selected pharmacokinetic (PK) parameters obtained by cassette and discrete dosing of compounds in rats. The concordance of PK properties obtained by the two dosing strategies was evaluated for 116 compounds representing various therapeutic programs and diverse chemical structures. The correspondence between cassette- and discrete-dosing-derived PK properties was examined semiquantitatively and qualitatively. For semiquantitative comparison, compounds with cassette-to-discrete PK parameter ratios between 0.5 and 2 (inclusive) were considered to be in agreement. For qualitative comparison, compounds were divided into three categories (low, moderate, and high) based on the value of the PK parameter; compounds that fell into the same category following cassette and discrete dosing were considered to be in agreement. Of the 116 compounds evaluated, 89%, 91%, 80%, and 91% of the compounds were semiquantitatively equivalent for the intravenous PK parameters of clearance (CL), volume of distribution (Vdss), terminal elimination plasma half-life (HL), and mean residence time (MRT), respectively, whereas 79%, 80%, 79%, and 72% were qualitatively similar for CL, Vdss, MRT, and terminal elimination plasma HL, respectively. Following oral administration, bioavailability concordance was 72% when assessed qualitatively and 78% when determined semiquantitatively. Results from these analyses indicate that a cassette dosing strategy is a viable approach to screen compounds for PK properties within a drug discovery setting. Copyright © 2011 Wiley-Liss, Inc.
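The two agreement criteria described above reduce to a few lines of code. The semiquantitative criterion is exactly as stated in the abstract; the low/moderate/high category boundaries for the qualitative criterion are not given there, so the thresholds below are placeholders.

```python
def semiquant_agree(cassette, discrete):
    """Semiquantitative criterion from the study: the cassette-to-discrete
    PK parameter ratio must fall within [0.5, 2] (inclusive)."""
    ratio = cassette / discrete
    return 0.5 <= ratio <= 2.0

def qualitative_agree(cassette, discrete, cuts):
    """Qualitative criterion: both values fall in the same low/moderate/high
    bin. `cuts` = (low_upper, moderate_upper) are hypothetical thresholds;
    the paper's actual category boundaries are not stated in the abstract."""
    def bin_of(x):
        low_upper, mod_upper = cuts
        return 0 if x <= low_upper else (1 if x <= mod_upper else 2)
    return bin_of(cassette) == bin_of(discrete)

assert semiquant_agree(10.0, 6.0)        # ratio ~1.67 -> agreement
assert not semiquant_agree(10.0, 4.0)    # ratio 2.5 -> disagreement
```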

  1. Fidelity based measurement induced nonlocality (United States)

    Muthuganesan, R.; Sankaranarayanan, R.


    In this paper, we propose measurement induced nonlocality (MIN) using a metric based on fidelity to capture global nonlocal effect of a quantum state due to locally invariant projective measurements. This quantity is a remedy for local ancilla problem in the original definition of MIN. We present an analytical expression of the proposed version of MIN for pure bipartite state and 2 × n dimensional mixed state. We also provide an upper bound of the MIN for general mixed state. Finally, we compare this quantity with MINs based on Hilbert-Schmidt norm and skew information for higher dimensional Werner and isotropic states.

  2. Software-based acoustical measurements

    CERN Document Server

    Miyara, Federico


    This textbook provides a detailed introduction to the use of software in combination with simple and economical hardware (a sound level meter with calibrated AC output and a digital recording system) to obtain sophisticated measurements usually requiring expensive equipment. It emphasizes the use of free, open source, and multiplatform software. Many commercial acoustical measurement systems use software algorithms as an integral component; however the methods are not disclosed. This book enables the reader to develop useful algorithms and provides insight into the use of digital audio editing tools to document features in the signal. Topics covered include acoustical measurement principles, in-depth critical study of uncertainty applied to acoustical measurements, digital signal processing from the basics, and metrologically-oriented spectral and statistical analysis of signals. The student will gain a deep understanding of the use of software for measurement purposes; the ability to implement software-based...

  3. Study of the half-life of {sup 123}I and the determination of possible radionuclidic impurities; Estudo da meia-vida do {sup 123}I e determinacao de possiveis impurezas radionuclidicas

    Energy Technology Data Exchange (ETDEWEB)

    Delgado, Jose Ubiratan; Araujo, Miriam Taina Ferreira de; Silva, Carlos Jose da; Araujo, Camila Cristina Cunha; Candido, Marcos Antonio; Pereira, Wagner do Prado [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil)


    During the production of radiopharmaceuticals in a nuclear reactor or cyclotron, biological, chemical and radionuclidic impurities can be generated. The aim of the present work was to study the half-life of a Na{sup 123}I sample produced at IEN (Institute of Nuclear Engineering), using the technique of gamma-ray spectrometry with a germanium detector, in order to identify such impurities. The results indicate half-life values consistent with recent publications, with a deviation of 0.08% and an uncertainty of 0.11%, as well as the identification of the radionuclidic impurities {sup 95m}Tc, {sup 96}Tc and {sup 121}Te. (author)
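The half-life itself is conventionally obtained from a fit of the decay curve. A minimal sketch of the standard log-linear method follows (synthetic, noiseless data generated with the accepted {sup 123}I half-life of ≈13.22 h; this is not the IRD analysis code or data):

```python
import math

def half_life_from_counts(times, counts):
    """Least-squares fit of ln(counts) vs time for a single-nuclide
    decay curve; returns the half-life in the same time unit."""
    n = len(times)
    ys = [math.log(c) for c in counts]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) \
            / sum((t - tbar) ** 2 for t in times)
    return math.log(2) / -slope

# Synthetic count-rate data generated with the 123I half-life (~13.22 h):
lam = math.log(2) / 13.22
data_t = [0, 4, 8, 12, 16, 20]
data_c = [1e6 * math.exp(-lam * t) for t in data_t]
print(round(half_life_from_counts(data_t, data_c), 2))  # 13.22
```

With real spectrometry data the counts carry Poisson uncertainties, so a weighted fit (or a fit to peak areas with dead-time corrections) is used instead.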

  4. Strain measurement based battery testing (United States)

    Xu, Jeff Qiang; Steiber, Joe; Wall, Craig M.; Smith, Robert; Ng, Cheuk


    A method and system for strain-based estimation of the state of health of a battery, from an initial state to an aged state, is provided. A strain gauge is applied to the battery. A first strain measurement is performed on the battery, using the strain gauge, at a selected charge capacity of the battery and at the initial state of the battery. A second strain measurement is performed on the battery, using the strain gauge, at the selected charge capacity of the battery and at the aged state of the battery. The capacity degradation of the battery is estimated as the difference between the first and second strain measurements divided by the first strain measurement.
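The estimate described above reduces to a one-line relative difference. The sign convention and the microstrain readings below are assumptions for illustration; the description only says "difference … divided by the first strain measurement".

```python
def capacity_degradation(strain_initial, strain_aged):
    """Relative strain change between the initial and aged states, both
    measured at the same selected charge capacity. Sign convention
    assumed here: strain growth with age gives a positive figure."""
    return (strain_aged - strain_initial) / strain_initial

# Hypothetical microstrain readings at the selected charge capacity:
print(f"{capacity_degradation(500.0, 540.0):.1%}")  # prints 8.0%
```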

  5. Modeling shows that the NS5A inhibitor daclatasvir has two modes of action and yields a shorter estimate of the hepatitis C virus half-life. (United States)

    Guedj, Jeremie; Dahari, Harel; Rong, Libin; Sansone, Natasha D; Nettles, Richard E; Cotler, Scott J; Layden, Thomas J; Uprichard, Susan L; Perelson, Alan S


    The nonstructural 5A (NS5A) protein is a target for drug development against hepatitis C virus (HCV). Interestingly, the NS5A inhibitor daclatasvir (BMS-790052) caused a decrease in serum HCV RNA levels by about two orders of magnitude within 6 h of administration. However, NS5A has no known enzymatic functions, making it difficult to understand daclatasvir's mode of action (MOA) and to estimate its antiviral effectiveness. Modeling viral kinetics during therapy has provided important insights into the MOA and effectiveness of a variety of anti-HCV agents. Here, we show that understanding the effects of daclatasvir in vivo requires a multiscale model that incorporates drug effects on the HCV intracellular lifecycle, and we validated this approach with in vitro HCV infection experiments. The model predicts that daclatasvir efficiently blocks two distinct stages of the viral lifecycle, namely viral RNA synthesis and virion assembly/secretion with mean effectiveness of 99% and 99.8%, respectively, and yields a more precise estimate of the serum HCV half-life, 45 min, i.e., around four times shorter than previous estimates. Intracellular HCV RNA in HCV-infected cells treated with daclatasvir and the HCV polymerase inhibitor NM107 showed a similar pattern of decline. However, daclatasvir treatment led to an immediate and rapid decline of extracellular HCV titers compared to a delayed (6-9 h) and slower decline with NM107, confirming an effect of daclatasvir on both viral replication and assembly/secretion. The multiscale modeling approach, validated with in vitro kinetic experiments, brings a unique conceptual framework for understanding the mechanism of action of a variety of agents in development for the treatment of HCV.

  6. Studies on the mechanism of the epileptiform activity induced by U18666A. II. concentration, half-life and distribution of radiolabeled U18666A in the brain

    Energy Technology Data Exchange (ETDEWEB)

    Cenedella, R.J.; Sarkar, C.P.; Towns, L.


    The concentration, half-life, and distribution in brain of U18666A, a drug that can drastically alter cerebral lipids and induce a chronic epileptiform state, were determined following both acute and chronic drug administration. U18666A specifically labeled with tritium was prepared by custom synthesis. Brain levels of 1 x 10(-6) M and higher were reached soon after giving an acute 10-mg/kg dose (i.p. or s.c.) of U18666A containing 7-/sup 3/H-U18666A of known specific activity. A steady-state concentration of 1 to 2 x 10(-6) M was reached with chronic injection of 10 mg/kg every 4th day, a treatment schedule that results in altered brain lipids and induction of epilepsy if begun soon after birth. The disappearance of U18666A from both brain and serum was described by two similar biexponential processes, a brief rapid clearance (t1/2 ≈ 10 h) and a sustained and much slower one (t1/2 ≈ 65 h). Brain levels of the drug were about 10 times higher than serum levels at all times examined. Few differences were seen in the regional distribution of radiolabeled drug in brain, as determined both by direct analysis and by autoradiographic examination; but the drug did concentrate in lipid-rich subcellular fractions. For example, the synaptosome and myelin fractions each contained about 25-35% of both the total /sup 3/H-labeled drug and the total lipid in whole brain. The lipid composition of these fractions was drastically altered in treated animals. In conclusion, the chronic epileptiform state induced by U18666A does not appear to involve localization of the drug in a specific brain region or particular cell type. Rather, the condition could involve localization of the drug in lipid-rich membranes and marked changes in the composition of these membranes.
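The biexponential clearance described above can be sketched as follows. The two half-lives are taken from the abstract; the equal amplitudes are illustrative, not fitted values from the study.

```python
import math

def biexponential(t, a_fast, t_half_fast, a_slow, t_half_slow):
    """Two-phase clearance: C(t) = A*exp(-ln2*t/t_fast) + B*exp(-ln2*t/t_slow).
    Half-lives of 10 h and 65 h are from the abstract; amplitudes are
    illustrative placeholders."""
    lf = math.log(2) / t_half_fast
    ls = math.log(2) / t_half_slow
    return a_fast * math.exp(-lf * t) + a_slow * math.exp(-ls * t)

# Once the fast phase has died away, the level halves every 65 h:
c130 = biexponential(130.0, 1.0, 10.0, 1.0, 65.0)
c195 = biexponential(195.0, 1.0, 10.0, 1.0, 65.0)
# c195 / c130 is ~0.5: the two times are one slow half-life apart.
```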

  7. Measurement-Based Linear Optics. (United States)

    Alexander, Rafael N; Gabay, Natasha C; Rohde, Peter P; Menicucci, Nicolas C


A major challenge in optical quantum processing is implementing large, stable interferometers. We offer a novel approach: virtual, measurement-based interferometers that are programmed on the fly solely by the choice of homodyne measurement angles. The effects of finite squeezing are captured as uniform amplitude damping. We compare our proposal to existing (physical) interferometers and consider its performance for BosonSampling, which could demonstrate postclassical computational power in the near future. We prove its efficiency in time and squeezing (energy) in this setting.

  8. “What’s Black and White and Read All Over?” Shelley Jackson’s Half Life (2006), or a Strange Game of I and Seek

    Directory of Open Access Journals (Sweden)

    Stéphane Vanderhaeghe


    Full Text Available Shelley Jackson’s first novel, Half Life, leaves Nora in command of the text, making her tell her story and conquer a self she was deprived of at her birth. But how can she do so when “her” story is always-already “theirs,” split in half, and when saying I is therefore impossible? For Nora is not alone, Nora is inseparable—and literally so—from Blanche, her “twofer” sister. Through an interplay of binary oppositions, Jackson fashions the (twin) metaphor of writing and reading, inciting her reader to intervene in the text and dig deeper into it to (un)cover meaning under the successive layers of whiteness that reveal the blankness of a page that is slowly being erased as one reads / runs over it.

  9. Pressure Measurement Based on Thermocouples (United States)

    Thomsen, K.


    Measuring gas pressures reliably in a harsh radiation environment proved tricky during operation of the liquid spallation target of MEGAPIE at the Paul Scherrer Institute (PSI): severe calibration drift and the loss of a sensor were experienced. At the same time, the only instrumentation that worked flawlessly in the system was the thermocouples. Motivated by this experience, a novel pressure sensor for application in high radiation fields has been developed, based on temperature measurement. The new sensor takes advantage of the fact that the thermal conductivity across a mechanical joint depends strongly on the contact pressure. In the novel sensor, heating is applied at one point and temperatures are measured at specific locations of the pressure gage; in particular, the temperatures on the two sides of a mechanical contact are monitored. From the observed temperature distribution the gas pressure can be derived. By choosing specific mechanical details in the layout, it is possible to tailor the useful measurement range. In addition to yielding pressure values, the new sensor concept provides a measure of the accuracy of the result through continuous self-monitoring of the device. The health status, and based thereupon the plausibility of the indicated pressure value, can be deduced by comparing sensed temperatures to expected values for any given heating power. Malfunctioning of the pressure gage is reliably detected from the diverse readings of a single device; this can be seen as providing internal redundancy and, at the same time, immunity to common-mode failure. After some analytical and finite-element studies to verify the concept in principle, a first prototype of such a novel pressure sensor has been built at PSI. Initial measurement campaigns demonstrated the correct operation of the device as anticipated. Further potential for optimization remains, like designing a gage for high temperature
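    A minimal sketch of the sensing principle described above, assuming a hypothetical linear calibration between contact pressure and joint conductance (the function names and calibration constants are illustrative, not from the PSI design):

```python
# Sensing principle: the thermal contact conductance h across the joint
# rises with contact pressure, so from the heater power Q and the
# temperature drop dT measured across the joint, pressure is inferred
# by inverting an assumed calibration h(p).
def contact_conductance(q_watts, area_m2, delta_t_kelvin):
    """h = Q / (A * dT): conductance of the mechanical joint [W/(m^2 K)]."""
    return q_watts / (area_m2 * delta_t_kelvin)

def pressure_from_conductance(h, h0=100.0, k=2.0):
    """Invert a hypothetical linear calibration h = h0 + k * p.
    h0 and k are illustrative calibration constants."""
    return (h - h0) / k

h = contact_conductance(q_watts=5.0, area_m2=1e-4, delta_t_kelvin=250.0)
p = pressure_from_conductance(h)  # in the units fixed by the calibration
```

    The self-monitoring idea follows the same pattern: for a known heating power, the expected temperatures can be computed from the calibration and compared against the sensed ones to flag malfunction.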

  10. Fallout radionuclide-based techniques for assessing the impact of soil conservation measures on erosion control and soil quality: an overview of the main lessons learnt under an FAO/IAEA Coordinated Research Project. (United States)

    Dercon, G; Mabit, L; Hancock, G; Nguyen, M L; Dornhofer, P; Bacchi, O O S; Benmansour, M; Bernard, C; Froehlich, W; Golosov, V N; Haciyakupoglu, S; Hai, P S; Klik, A; Li, Y; Lobb, D A; Onda, Y; Popa, N; Rafiq, M; Ritchie, J C; Schuller, P; Shakhashiro, A; Wallbrink, P; Walling, D E; Zapata, F; Zhang, X


    This paper summarizes key findings and identifies the main lessons learnt from a 5-year (2002-2008) coordinated research project (CRP) on "Assessing the effectiveness of soil conservation measures for sustainable watershed management and crop production using fallout radionuclides" (D1.50.08), organized and funded by the International Atomic Energy Agency through the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture. The project brought together nineteen participants, from Australia, Austria, Brazil, Canada, Chile, China, Japan, Morocco, Pakistan, Poland, Romania, the Russian Federation, Turkey, the United Kingdom, the United States of America and Vietnam, involved in the use of nuclear techniques and, more particularly, fallout radionuclides (FRN) to assess the relative impacts of different soil conservation measures on soil erosion and land productivity. The overall objective of the CRP was to develop improved land use and management strategies for sustainable watershed management through effective soil erosion control practices, using ¹³⁷Cs (half-life of 30.2 years), ²¹⁰Pb(ex) (half-life of 22.3 years) and ⁷Be (half-life of 53.4 days) to measure soil erosion over several spatial and temporal scales. The environmental conditions under which the different research teams applied the fallout-radionuclide-based tools varied considerably, covering a variety of climates, soils, topographies and land uses. Nevertheless, the achievements of the CRP, as reflected in this overview paper, demonstrate that fallout radionuclide-based techniques are powerful tools to assess soil erosion/deposition at several spatial and temporal scales in a wide range of environments, and offer potential to monitor soil quality. The success of the CRP has stimulated an interest in many IAEA Member States in the use of these methodologies to identify factors and practices that can enhance sustainable agriculture and minimize land degradation.
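    The different temporal scales covered by the three radionuclides follow directly from the decay law, N/N0 = 2^(−t/t½); a quick check using the half-lives quoted in the abstract:

```python
# Remaining fraction N/N0 = 2**(-t / t_half) for two of the fallout
# radionuclides named in the abstract (half-lives as quoted there).
def remaining_fraction(t, t_half):
    """Fraction of the initial activity left after time t (same units as t_half)."""
    return 2 ** (-t / t_half)

# 7Be (t1/2 = 53.4 d) traces event-scale erosion: under 1% remains after a year.
f_be7_1yr = remaining_fraction(365, 53.4)

# 137Cs (t1/2 = 30.2 y) integrates over decades: roughly 80% remains after 10 years.
f_cs137_10yr = remaining_fraction(10, 30.2)
```

    This is why ⁷Be resolves individual rainfall events while ¹³⁷Cs and ²¹⁰Pb(ex) record medium- and long-term erosion rates.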

  11. The relationships between half-life (t1/2) and mean residence time (MRT) in the two-compartment open body model. (United States)

    Sobol, Eyal; Bialer, Meir


    In the one-compartment model following i.v. administration, the mean residence time (MRT) of a drug is always greater than its half-life (t1/2). However, following i.v. administration, drug plasma concentration (C) versus time (t) is often best described by a two-compartment model, i.e., a biexponential equation: C = A·e^(-alpha·t) + B·e^(-beta·t), where A and B are coefficients in concentration units and alpha and beta (alpha > beta) are exponential coefficients. The relationships between t1/2 and MRT in the two-compartment model have not been explored, and it is not clear whether in this model, too, MRT is always greater than t1/2. In the current paper new equations are developed that describe the relationships between the terminal t1/2 (or t1/2,beta) and MRT in the two-compartment model following administration of an i.v. bolus, i.v. infusion (zero-order input) and oral administration (first-order input). A critical value (CV), equal to the quotient of (1 - ln2) and (1 - beta/alpha), i.e., CV = (1 - ln2)/(1 - beta/alpha) = 0.307/(1 - beta/alpha), is derived and compared with the fraction (f1) of drug elimination or AUC (AUC, area under the C vs t curve) associated with the first exponential term of the two-compartment equation (f1 = (A/alpha)/AUC). Following an i.v. bolus, CV ranges between a minimal value of 0.307 (1 - ln2) and infinity. As long as f1 < CV, MRT > t1/2 and vice versa; when f1 = CV, then MRT = t1/2. Following i.v. infusion and oral administration the denominator of the CV equation does not change, but its numerator increases to (0.307 + beta·T/2) (T, infusion duration) and (0.307 + beta/ka) (ka, absorption rate constant), respectively. Examples for various drugs are provided. For every drug that shows two-compartment disposition kinetics after an i.v. bolus the following conclusions can be drawn: (a) when f1 < CV, then MRT > t1/2; (b) when beta/alpha > ln2, then CV > 1 > f1 and thus MRT > t1/2; (c) when ln2 > beta/alpha > (ln4 - 1), then 1 > CV > 0.5 and thus, in order for t1/2 > MRT, f1 has to be greater than its critical value CV.
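    The CV criterion can be verified numerically for an i.v. bolus using the standard non-compartmental identities MRT = AUMC/AUC; the parameter values below are illustrative, not taken from the paper:

```python
import math

# Two-compartment i.v. bolus: C(t) = A*exp(-alpha*t) + B*exp(-beta*t).
# A, B, alpha, beta are illustrative values, not from the paper.
A, B, alpha, beta = 10.0, 1.0, 1.0, 0.1

auc = A / alpha + B / beta            # area under C vs t
aumc = A / alpha**2 + B / beta**2     # area under t*C vs t
mrt = aumc / auc                      # mean residence time
t_half = math.log(2) / beta           # terminal half-life
f1 = (A / alpha) / auc                # AUC fraction of the first exponential
cv = (1 - math.log(2)) / (1 - beta / alpha)

# The paper's rule: f1 < CV  <=>  MRT > t1/2 (equality when f1 = CV).
# Here f1 = 0.5 > CV (about 0.341), so MRT < t1/2 for this parameter set.
```

    Repeating the check with different A, B, alpha, beta always gives the same agreement between the sign of (CV − f1) and the sign of (MRT − t1/2), which is exactly the content of the derived criterion.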

  12. Half-Life Systematics across the N =126 Shell Closure: Role of First-Forbidden Transitions in the β Decay of Heavy Neutron-Rich Nuclei (United States)

    Morales, A. I.; Benlliure, J.; Kurtukián-Nieto, T.; Schmidt, K.-H.; Verma, S.; Regan, P. H.; Podolyák, Z.; Górska, M.; Pietri, S.; Kumar, R.; Casarejos, E.; Al-Dahan, N.; Algora, A.; Alkhomashi, N.; Álvarez-Pol, H.; Benzoni, G.; Blazhev, A.; Boutachkov, P.; Bruce, A. M.; Cáceres, L. S.; Cullen, I. J.; Denis Bacelar, A. M.; Doornenbal, P.; Estévez-Aguado, M. E.; Farrelly, G.; Fujita, Y.; Garnsworthy, A. B.; Gelletly, W.; Gerl, J.; Grebosz, J.; Hoischen, R.; Kojouharov, I.; Kurz, N.; Lalkovski, S.; Liu, Z.; Mihai, C.; Molina, F.; Mücher, D.; Rubio, B.; Shaffner, H.; Steer, S. J.; Tamii, A.; Tashenov, S.; Valiente-Dobón, J. J.; Walker, P. M.; Wollersheim, H. J.; Woods, P. J.


    This Letter reports on a systematic study of β-decay half-lives of neutron-rich nuclei around doubly magic ²⁰⁸Pb. The lifetimes of the N = 126 shell isotone ²⁰⁴Pt and of the neighboring ²⁰⁰⁻²⁰²Ir, ²⁰³Pt and ²⁰⁴Au are presented together with 19 other half-lives measured during the "stopped beam" campaign of the Rare Isotope Investigations at GSI collaboration. The results constrain the main nuclear theories used in calculations of r-process nucleosynthesis. Predictions based on a statistical macroscopic description of the first-forbidden β strength reveal significant deviations for most of the nuclei with N < 126. In contrast, theories including a fully microscopic treatment of allowed and first-forbidden transitions reproduce the trend in the measured half-lives more satisfactorily for the nuclei in this region, through which the r-process pathway passes during β decay back to stability.

  13. European wet deposition maps based on measurements

    NARCIS (Netherlands)

    Leeuwen EP van; Erisman JW; Draaijers GPJ; Potma CJM; Pul WAJ van; LLO


    To date, wet deposition maps on a European scale have been based on long-range transport model results. For most components wet deposition maps based on measurements are only available on national scales. Wet deposition maps of acidifying components and base cations based on measurements are needed

  14. Measuring the actual I-131 thyroid uptake curve with a collar detector system: a feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Brinks, Peter; Van Gils, Koen; Dickerscheid, Dennis B.M.; Habraken, Jan B.A. [Department of Medical Physics, St. Antonius Hospital, Nieuwegein (Netherlands); Kranenborg, Ellen; Lavalaye, Jules [Department of Nuclear Medicine, St. Antonius Hospital, Nieuwegein (Netherlands)


    Radionuclide therapy using I-131 is commonly used for the treatment of benign thyroid diseases. The therapeutic dose to be administered is calculated based on the type of disease, the volume of the thyroid, and the measured uptake percentage. This methodology assumes a similar biological half-life of iodine for all patients, whereas in reality a large variation in biological half-life is observed. More knowledge about the actual biological half-life of iodine for individual patients will improve the quantification of the delivered radiation dose during radioiodine therapy and could aid the evaluation of the success of the therapy. In this feasibility study we used a novel measurement device, the Collar Therapy Indicator (CoTI), to measure the uptake curve of patients undergoing I-131 radioiodine therapy. The CoTI is a lightweight wearable device that contains two independent gamma radiation detectors placed in a collar. By comparing results of thyroid uptake measurements with results obtained with a gamma camera, the precision of the system is demonstrated. Additionally, for three patients the uptake curve was measured during 48 h of admission in the hospital. The presented results demonstrate the feasibility of the new measurement device to measure the uptake curve during radioiodine therapy. (orig.)
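    The biological half-life discussed above combines with the physical half-life of I-131 (about 8.02 days) into the effective half-life via the standard relation 1/T_eff = 1/T_phys + 1/T_bio; a small sketch with an illustrative biological value:

```python
# Effective half-life in the thyroid: 1/T_eff = 1/T_phys + 1/T_bio.
# T_phys for I-131 is about 8.02 days; the biological half-life below
# is illustrative of the inter-patient variation the abstract describes,
# not a measured number.
def effective_half_life(t_phys_days, t_bio_days):
    return 1.0 / (1.0 / t_phys_days + 1.0 / t_bio_days)

t_eff = effective_half_life(8.02, 24.0)  # roughly 6 days for this example
```

    Because T_eff is dominated by the shorter of the two terms, a patient whose biological clearance is much faster than assumed receives a noticeably smaller absorbed dose, which is why measuring the individual uptake curve matters.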

  15. Identification of the neutron-rich nuclides ¹⁴⁷,¹⁴⁸Ba and half-life determination of the heavy isotopes of Rb, Sr, Y, Cs, Ba and La

    CERN Document Server

    Amiel, S; Nir-El, Y; Shmid, M


    The neutron-rich nuclides ¹⁴⁷,¹⁴⁸Ba were produced in the thermal-neutron-induced fission of ²³⁵U. A new surface-ionization integrated target ion source, operating at temperatures in the region of 1800 °C, permits the measurement of half-lives of isotopes down to about 0.1 s owing to the very fast release of atoms from the target. Isotopes of Rb, Sr, Cs, and Ba were separated by positive surface ionization and their half-lives measured using beta activity detected by a silicon surface-barrier detector with a depletion depth of 300 μm. The isotopes ¹⁴⁷Ba and ¹⁴⁸Ba were identified for the first time and their half-lives were found to be 0.72 ± 0.07 s and 0.47 ± 0.20 s, respectively.

  16. I-Xe systematics of the impact plume produced chondrules from the CB carbonaceous chondrites: Implications for the half-life value of 129I and absolute age normalization of 129I-129Xe chronometer. (United States)

    Pravdivtseva, O; Meshik, A; Hohenberg, C M; Krot, A N


    … 0.6 Ma value for the ¹²⁹I half-life. The slopes of the I-Xe vs. Pb-Pb correlation lines plotted for different sets of samples with Shallowater normalization are always ≤1. Assuming the uranium half-life values are correct, this restricts the half-life of ¹²⁹I to ≤15.7 Ma.

  17. Influence of fruit juices on the bioavailability and half-life of drugs

    Directory of Open Access Journals (Sweden)

    Carolina Mariante de Abreu


    Full Text Available Introduction: Administering medications together with fruit juices may lead to variations in pharmacokinetics and pharmacodynamics, compromising the bioavailability and half-life of the compounds involved. Objective: To verify and classify reports of interactions resulting from the co-administration of medications and fruit juices. Materials and Methods: A literature review on the topic was carried out in 2008/2009. Data on interactions were obtained from searches of databases and related books, including PUBMED, SCIELO, BDENF, BBO and the Natural Medicines Comprehensive Database; the search descriptors were: interactions, fruit juices, CYP 450, P-glycoproteins and OATP (organic anion transporter polypeptide). Results: A significant number of interactions was evident, identified from experimental studies, case-control studies and case reports. Worsening of a drug effect or therapeutic failure resulting from the association of drugs with fruit juices was reported by several authors. The pharmacokinetic mechanisms are usually related to interference with the intestinal and hepatic enzymes of cytochrome P-450 (CYP 450), as well as with transporter proteins of the mucosa (P-glycoproteins and OATP). Conclusion: The study showed that interactions with orange and grapefruit juices are the most commonly cited occurrences, often responsible for variations in the bioavailability and half-life of drugs, which ultimately determines variations in the therapeutic response.

  18. Methodical Challenges and a Possible Resolution in the Assessment of Receptor Reserve for Adenosine, an Agonist with Short Half-Life

    Directory of Open Access Journals (Sweden)

    Judit Zsuga


    Full Text Available The term receptor reserve, first introduced and used in the traditional receptor theory, is an integrative measure of the response-inducing ability of the interaction between an agonist and a receptor system (consisting of a receptor and its downstream signaling). The underlying phenomenon, i.e., that stimulation of a submaximal fraction of receptors can apparently elicit the maximal effect (in certain cases), provides an opportunity to assess the receptor reserve. However, determining receptor reserve is challenging for agonists with short half-lives, such as adenosine. Although adenosine metabolism can be inhibited several ways (in order to prevent the rapid elimination of adenosine administered to construct concentration-effect (E/c) curves for the determination), the consequent accumulation of endogenous adenosine biases the results. To address this problem, we previously proposed a method by means of which this bias can be mathematically corrected (utilizing a traditional receptor theory-independent approach). In the present investigation, we have offered in silico validation of this method by simulating E/c curves with the use of the operational model of agonism and then evaluating them using our method. We have found that our method is suitable to reliably assess the receptor reserve for adenosine in our recently published experimental setting, suggesting that it may be capable of a qualitative determination of receptor reserve for rapidly eliminating agonists in general. In addition, we have disclosed a possible interference between FSCPX (8-cyclopentyl-N3-[3-(4-(fluorosulfonyl)benzoyloxy)propyl]-N1-propylxanthine), an irreversible A1 adenosine receptor antagonist, and NBTI (S-(2-hydroxy-5-nitrobenzyl)-6-thioinosine), a nucleoside transport inhibitor, i.e., FSCPX may blunt the effect of NBTI.

  19. Methodical Challenges and a Possible Resolution in the Assessment of Receptor Reserve for Adenosine, an Agonist with Short Half-Life. (United States)

    Zsuga, Judit; Erdei, Tamas; Szabó, Katalin; Lampe, Nora; Papp, Csaba; Pinter, Akos; Szentmiklosi, Andras Jozsef; Juhasz, Bela; Szilvássy, Zoltán; Gesztelyi, Rudolf


    The term receptor reserve, first introduced and used in the traditional receptor theory, is an integrative measure of the response-inducing ability of the interaction between an agonist and a receptor system (consisting of a receptor and its downstream signaling). The underlying phenomenon, i.e., that stimulation of a submaximal fraction of receptors can apparently elicit the maximal effect (in certain cases), provides an opportunity to assess the receptor reserve. However, determining receptor reserve is challenging for agonists with short half-lives, such as adenosine. Although adenosine metabolism can be inhibited several ways (in order to prevent the rapid elimination of adenosine administered to construct concentration-effect (E/c) curves for the determination), the consequent accumulation of endogenous adenosine biases the results. To address this problem, we previously proposed a method by means of which this bias can be mathematically corrected (utilizing a traditional receptor theory-independent approach). In the present investigation, we have offered in silico validation of this method by simulating E/c curves with the use of the operational model of agonism and then evaluating them using our method. We have found that our method is suitable to reliably assess the receptor reserve for adenosine in our recently published experimental setting, suggesting that it may be capable of a qualitative determination of receptor reserve for rapidly eliminating agonists in general. In addition, we have disclosed a possible interference between FSCPX (8-cyclopentyl- N³ -[3-(4-(fluorosulfonyl)benzoyloxy)propyl]- N ¹-propylxanthine), an irreversible A₁ adenosine receptor antagonist, and NBTI (S-(2-hydroxy-5-nitrobenzyl)-6-thioinosine), a nucleoside transport inhibitor, i.e., FSCPX may blunt the effect of NBTI.

  20. Toward Measuring Network Aesthetics Based on Symmetry

    Directory of Open Access Journals (Sweden)

    Zengqiang Chen


    Full Text Available In this exploratory paper, we discuss quantitative graph-theoretical measures of network aesthetics. Related work in this area has typically focused on geometrical features (e.g., line crossings or edge bendiness) of drawings or visual representations of graphs which purportedly affect an observer’s perception. Here we take a very different approach, abandoning reliance on geometrical properties, and apply information-theoretic measures to abstract graphs and networks directly (rather than to their visual representations) as a means of capturing classical appreciation of structural symmetry. Examples are used solely to motivate the approach to measurement, and to elucidate our symmetry-based mathematical theory of network aesthetics.

  1. Acoustic Hygrometer Based on Reverberation Time Measurement (United States)

    Motegi, Takahiro; Mizutani, Koichi; Wakatsuki, Naoto


    In this paper, a hygrometer operated by acoustic means is proposed. It is important to measure spatial average humidity for environmental management in a room. In a large space, it is difficult to determine spatial average humidity because conventional sensors measure only local humidity at the measurement point. The proposed acoustic hygrometer utilizes the relationship between the sound attenuation coefficient and humidity. To measure the sound attenuation coefficient, reverberation time in a room is utilized. An acoustic hygrometer based on reverberation time achieves a noncontact measurement of spatial average humidity. As a practical examination, relative humidity (RH) was measured on the basis of reverberation time in a chamber, and compared with reference values. The humidity measurement accuracy of the hygrometer was evaluated by statistical means because the measured reverberation time showed variability. From the results, the possibility of humidity measurement with an accuracy of about 5% RH at 50% RH or more using this hygrometer was verified. Here, the unit of RH is % RH.
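    The inversion step described above, from measured reverberation time to attenuation coefficient, can be sketched with Sabine's formula including the air-absorption term; the room parameters below are assumptions, and the final mapping from attenuation to humidity (via a calibration curve) is omitted:

```python
# Sabine's formula with air absorption: T60 = 0.161 * V / (A + 4*m*V),
# where V is the room volume [m^3], A the boundary absorption [m^2 Sabine]
# and m the air attenuation constant [1/m].  Solving for m gives the
# quantity the hygrometer maps to relative humidity.
def attenuation_from_t60(t60_s, volume_m3, surface_absorption_m2):
    return (0.161 * volume_m3 / t60_s - surface_absorption_m2) / (4.0 * volume_m3)

# Round-trip check with assumed room parameters:
V, A, m_true = 500.0, 60.0, 0.01
t60 = 0.161 * V / (A + 4 * m_true * V)   # forward Sabine formula
m_est = attenuation_from_t60(t60, V, A)  # recovers m_true
```

    In practice A must be known (or calibrated out) and, as the abstract notes, the variability of measured reverberation times calls for statistical averaging before m is converted to humidity.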

  2. Accuracy of magnetic resonance based susceptibility measurements (United States)

    Erdevig, Hannah E.; Russek, Stephen E.; Carnicka, Slavka; Stupic, Karl F.; Keenan, Kathryn E.


    Magnetic Resonance Imaging (MRI) is increasingly used to map the magnetic susceptibility of tissue to identify cerebral microbleeds associated with traumatic brain injury and pathological iron deposits associated with neurodegenerative diseases such as Parkinson's and Alzheimer's disease. Accurate measurements of susceptibility are important for determining oxygen and iron content in blood vessels and brain tissue for use in noninvasive clinical diagnosis and treatment assessments. Induced magnetic fields with amplitudes on the order of 100 nT can be detected using MRI phase images. The induced field distributions can then be inverted to obtain quantitative susceptibility maps. The focus of this research was to determine the accuracy of MRI-based susceptibility measurements using simple phantom geometries and to compare the susceptibility measurements with magnetometry measurements, for which SI-traceable standards are available. The susceptibilities of paramagnetic salt solutions in cylindrical containers were measured as a function of orientation relative to the static MRI field. The observed induced fields as a function of cylinder orientation were in good agreement with simple models. The MRI susceptibility measurements were compared with SQUID magnetometry using NIST-traceable standards. MRI can accurately measure relative magnetic susceptibilities, while SQUID magnetometry measures absolute magnetic susceptibility. Given the accuracy of moment measurements of tissue-mimicking samples, and the need to look at small differences in tissue properties, the use of existing NIST standard reference materials to calibrate MRI reference structures is problematic, and better reference materials are required.
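    For the cylindrical phantoms described above, the field shift inside a long cylinder is commonly modeled with a (3cos²θ − 1)/6 geometry factor (relative to the surrounding medium, SI convention); the field strength and susceptibility difference below are illustrative, not the paper's values:

```python
import math

# Field shift inside a long cylinder at angle theta to B0, as commonly
# modeled in the quantitative susceptibility mapping literature:
#   dB = chi_diff * B0 * (3*cos(theta)**2 - 1) / 6
# chi_diff is the susceptibility difference with the surrounding medium.
def cylinder_field_shift(chi_diff, b0_tesla, theta_rad):
    return chi_diff * b0_tesla * (3 * math.cos(theta_rad) ** 2 - 1) / 6.0

b0 = 3.0    # T, illustrative scanner field
chi = 1e-6  # ~1 ppm susceptibility difference, illustrative

db_parallel = cylinder_field_shift(chi, b0, 0.0)              # chi*B0/3
db_magic = cylinder_field_shift(chi, b0, math.radians(54.7))  # ~0 near the magic angle
```

    The strong orientation dependence, vanishing near 54.7°, is what makes the simple cylinder geometry a convenient check of phase-image-based susceptibility estimates.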

  3. Green maritime transportation: Market based measures

    DEFF Research Database (Denmark)

    Psaraftis, Harilaos N.


    The purpose of this chapter is to introduce the concept of Market Based Measures (MBMs) to reduce Green House Gas (GHG) emissions from ships, and review several distinct MBM proposals that have been under consideration by the International Maritime Organization (IMO). The chapter discusses...

  4. Some effects on SPM based surface measurement

    Energy Technology Data Exchange (ETDEWEB)

    Huang Wenhao; Chen Yuhang [University of Science and Technology of China, Department of Precision Engineering and Instrumentation, Hefei, 230026 (China)


    The scanning probe microscope (SPM) has become a powerful tool for nanotechnology, especially in surface nanometrology. However, SPM measurements of surfaces are prone to false images and modifications, owing to the complex interaction between the SPM tip and the surface. The origin lies not only in the tip material or shape, but also in the structure of the sample, so much attention is being paid to extracting true information from SPM images. In this paper, we present simulation methods and reconstruction examples for microstructures and surface roughness based on SPM measurement. For example, in AFM measurement we consider the effects of tip shape and dimension, as well as the surface topography distribution in both height and space. Some simulation results are compared with other measurement methods to verify their reliability.

  5. Treatment with IL-17 prolongs the half-life of chemokine CXCL1 mRNA via the adaptor TRAF5 and the splicing-regulatory factor SF2 (ASF). (United States)

    Sun, Dongxu; Novotny, Michael; Bulek, Katarzyna; Liu, Caini; Li, Xiaoxia; Hamilton, Thomas


    Interleukin 17 (IL-17) promotes the expression of chemokines and cytokines via the induction of gene transcription and post-transcriptional stabilization of mRNA. We show here that IL-17 enhanced the stability of chemokine CXCL1 mRNA and other mRNAs through a pathway that involved the adaptor Act1, the adaptors TRAF2 or TRAF5 and the splicing factor SF2 (also known as alternative splicing factor (ASF)). TRAF2 and TRAF5 were necessary for IL-17 to signal the stabilization of CXCL1 mRNA. Furthermore, IL-17 promoted the formation of complexes of TRAF5-TRAF2, Act1 and SF2 (ASF). Overexpression of SF2 (ASF) shortened the half-life of CXCL1 mRNA, whereas depletion of SF2 (ASF) prolonged it. SF2 (ASF) bound chemokine mRNA in unstimulated cells, whereas the SF2 (ASF)-mRNA interaction was much lower after stimulation with IL-17. Our findings define an IL-17-induced signaling pathway that links to the stabilization of selected mRNA species through Act1, TRAF2-TRAF5 and the RNA-binding protein SF2 (ASF).

  6. Reliability-Based Planning of Chloride Measurements

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Engelund, S.


    In reinforced concrete structures corrosion is initiated when the chloride concentration around the reinforcement exceeds a threshold value. If corrosion starts then expensive repairs can be necessary. The estimation of the probability that corrosion has been initiated in a given structure is based on measurements of the chloride content obtained from the given structure. In the present paper optimal planning of measurements of the chloride content in reinforced concrete structures is considered. It is shown how optimal experimental plans can be obtained using FORM-analysis. Bayesian statistics are used...

  7. Measuring globalization-based acculturation in Ladakh

    DEFF Research Database (Denmark)

    Ozer, Simon; Schwartz, Seth


    ...to include groups who are exposed to global cultural streams without international migration. The globalization-based acculturation process in the North Indian region of Ladakh appears to be a tricultural encounter, suggesting an addendum to the bidimensional acculturation model for this group (and perhaps for others as well). This study explores the development, usability, and validity of a tridimensional acculturation measure aiming to capture the multicultural orientations initiated by the process of globalization in Ladakh. The tridimensional acculturation scale was found to fit the data significantly better compared to the bidimensional scale. Implications for the study of globalization-based acculturation are discussed...

  8. A SVD Based Image Complexity Measure

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads


    Images are composed of geometric structures and texture, and different image processing tools - such as denoising, segmentation and registration - are suitable for different types of image contents. Characterization of the image content in terms of geometric structure and texture is an important problem that one is often faced with. We propose a patch-based complexity measure, based on how well the patch can be approximated using singular value decomposition. As such the image complexity is determined by the complexity of the patches. The concept is demonstrated on sequences from the newly collected DIKU Multi-Scale image database.
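    One way to realize such a patch complexity measure (a sketch under our own assumptions, not necessarily the authors' exact definition) is to count how many singular values are needed to capture most of a patch's energy:

```python
import numpy as np

def svd_complexity(patch, energy=0.95):
    """Fraction of singular values needed to capture `energy` of the
    patch's total energy: near 0 for flat/low-rank patches, near 1 for
    texture-like, full-rank patches."""
    s = np.linalg.svd(patch, compute_uv=False)  # singular values, descending
    total = np.sum(s ** 2)
    if total == 0:
        return 0.0
    cum = np.cumsum(s ** 2) / total
    k = int(np.searchsorted(cum, energy)) + 1   # smallest k reaching the threshold
    return k / len(s)

rng = np.random.default_rng(0)
flat = np.ones((16, 16))                # rank-1 patch: geometric, simple
noisy = rng.standard_normal((16, 16))   # near full rank: texture-like

c_flat = svd_complexity(flat)
c_noisy = svd_complexity(noisy)
```

    A rank-1 patch scores 1/16 while a Gaussian-noise patch needs most of its singular values, so thresholding this score separates structure-dominated from texture-dominated patches.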

  9. Statistical inference based on divergence measures

    CERN Document Server

    Pardo, Leandro


    The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, despite the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach. Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data, with emphasis on problems in contingency tables and loglinear models, using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...
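    As a minimal illustration of the phi-divergence machinery the book covers: choosing phi(x) = x·log(x) − x + 1 recovers the Kullback-Leibler divergence, and 2n times the divergence between observed and hypothesized multinomial proportions gives the classical likelihood-ratio (G) statistic; the proportions and sample size below are illustrative:

```python
import math

# phi(x) = x*log(x) - x + 1 is a convex function with phi(1) = 0, and the
# corresponding phi-divergence sum(q_i * phi(p_i / q_i)) reduces to the
# Kullback-Leibler divergence KL(p || q) when p and q both sum to 1.
def phi_kl(x):
    return x * math.log(x) - x + 1 if x > 0 else 1.0

def phi_divergence(p, q, phi=phi_kl):
    return sum(qi * phi(pi / qi) for pi, qi in zip(p, q) if qi > 0)

p_hat = [0.30, 0.50, 0.20]  # observed proportions (illustrative)
p0 = [0.25, 0.50, 0.25]     # hypothesized multinomial model
n = 100                     # illustrative sample size

g_stat = 2 * n * phi_divergence(p_hat, p0)  # likelihood-ratio (G) statistic
```

    Under the null hypothesis this statistic is asymptotically chi-squared with (k − 1) degrees of freedom, which is exactly the testing framework the phi-divergence family generalizes.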

  10. Bioimpedance measurement based evaluation of wound healing. (United States)

    Kekonen, Atte; Bergelin, Mikael; Eriksson, Jan-Erik; Vaalasti, Annikki; Ylänen, Heimo; Viik, Jari


    Our group has developed a bipolar bioimpedance measurement-based method for determining the state of wound healing. The objective of this study was to assess the capability of the method. To assess its performance, we arranged a follow-up study of four acute wounds. The wounds were measured using the method and photographed throughout the healing process. Initially the bioimpedance of the wounds was significantly lower than the impedance of the undamaged skin, which was used as a baseline. Gradually, as healing progressed, the wound impedance increased and finally reached the impedance of the undamaged skin. The clinical appearance of the wounds examined in this study corresponded well with the parameters derived from the bioimpedance data. Hard-to-heal wounds are a significant and growing socioeconomic burden, especially in developed countries, due to aging populations and the increasing prevalence of various lifestyle-related diseases. The assessment and monitoring of chronic wounds are mainly based on visual inspection by medical professionals. The dressings covering the wound must be removed before assessment; this may disturb the wound healing process and significantly increases the work effort of the medical staff. There is a need for an objective and quantitative method for determining the status of a wound without removing the wound dressings. This study provided evidence of the capability of the bioimpedance-based method for assessing wound status. In the future, measurements with the method should be extended to hard-to-heal wounds.

  11. Animal-based measures for welfare assessment

    Directory of Open Access Journals (Sweden)

    Agostino Sevi


    Full Text Available Animal welfare assessment cannot disregard measures taken on animals. Indeed, housing parameters related to structures, design and micro-environment, even if reliable and easier to take, can only identify conditions which could be detrimental to animal welfare; they cannot predict poor welfare in animals per se. Welfare assessment through animal-based measures is rather complex, given that animals' responses to stressful conditions largely depend on the nature, length and intensity of the challenges and on the physiological status, age, genetic susceptibility and previous experience of the animals. To be exhaustive and reliable, welfare assessment requires a multi-disciplinary approach and the monitoring of productive, ethological, endocrine, immunological and pathological parameters. So many measures are needed because stresses can act on only some of the mentioned parameters, or on all of them but at different times and to different degrees. From this point of view, the main aim of research is to find feasible and highly responsive indicators of poor animal welfare. In recent decades, studies have focused on the following parameters for animal welfare assessment: indexes of biological efficiency, responses to behavioral tests, cortisol secretion, neutrophil to lymphocyte ratio, lymphocyte proliferation, production of antigen-specific IgG and cytokine release, somatic cell count, and acute phase proteins. Recently, many studies have addressed reducing the handling and constraint of animals when taking measures used in welfare assessment, since such procedures can induce stress in animals and undermine the reliability of the measures taken. The range of animal-based measures for welfare assessment is much wider under experimental conditions than at the on-farm level. In on-farm welfare monitoring the main aim is to find feasible measures of proved validity and reliability

  12. Property-Based Software Engineering Measurement (United States)

    Briand, Lionel C.; Morasca, Sandro; Basili, Victor R.


    Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts, regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysts, and better evaluation methods for commercial static analyzers for practitioners. In this paper, we propose a mathematical framework which is generic, because it is not specific to any particular software artifact, and rigorous, because it is based on precise mathematical concepts. We use this framework to propose definitions of several important measurement concepts (size, length, complexity, cohesion, coupling). It does not intend to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalisms and properties we introduce are convenient and intuitive. This framework contributes constructively to a firmer theoretical ground of software measurement.

  13. Environmental Metabolic Footprinting (EMF) vs. half-life: a new and integrative proxy for the discrimination between control and pesticides exposed sediments in order to further characterise pesticides' environmental impact. (United States)

    Salvia, Marie-Virginie; Ben Jrad, Amani; Raviglione, Delphine; Zhou, Yuxiang; Bertrand, Cédric


    Pesticides are regularly used for a variety of applications and are disseminated throughout the environment. These substances may have significant negative impacts. To date, the half-life, t1/2, has often been used to study the fate of pesticides in environmental matrices (water, soil, sediment). However, this value gives limited information. First, it does not evaluate the formation of by-products, so additional experiments must be performed to evaluate biodegradation and biotransformation products. T1/2 also fails to consider the chemical's impact on biodiversity. Resilience time, a new and integrative proxy, was recently proposed as an alternative to t1/2, with the potential to evaluate all the post-application effects of the chemical on the environment. The 'Environmental Metabolic Footprinting' (EMF) approach, which gives an idea of the resilience time, was used to evaluate the impact of botanicals on soil. The goal is to optimise the EMF to study the impact of a microbial insecticide, Bacillus thuringiensis israelensis (Bti), on sediment. The difficulty of this work lies in the commercial Bti formulation, which is highly complex; this complexity yields chromatograms that are extremely difficult to interpret, and t1/2 cannot be used. No methodologies currently exist to monitor the impact of these compounds on the environment. We will test the EMF to determine whether it is sensitive enough to tolerate such complex mixtures. A pure chemical insecticide, α-cypermethrin, will also be studied. The article shows that the EMF is able to distinguish meta-metabolome differences between control and exposed (with Bti) sediments.

  14. Korean Clinic Based Outcome Measure Studies

    Directory of Open Access Journals (Sweden)

    Jongbae Park


    Full Text Available Background: Evidence-based medicine has become a main tool for medical practice. However, conducting a study ranked highly in the evidence hierarchy pyramid is not easy or feasible at all times and places. There remains room for descriptive clinical outcome measure studies, while acknowledging the limits of their interpretation. Aims: To present three Korean clinic-based outcome measure studies with a view to encouraging Korean clinicians to conduct similar studies. Methods: Three studies are presented briefly here: (1) quality of life of liver cancer patients after 8 Constitutional acupuncture; (2) developing a Korean version of the Measure Yourself Medical Outcome Profile (MYMOP); and (3) a pilot survey on the 5 Shu points. In the first study, we included 4 primary or secondary liver cancer patients, collected their diagnostic X-ray films and clinical data from their hospital, and asked them to fill in the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire before the commencement of the treatment. The acupuncture treatment follows a set format but has not yet been disclosed. The translation and development of a Korean version of an outcome measure that is friendly to Korean clinicians has been sought, and MYMOP is one of the most appropriate. Permission was granted, the measure was translated into Korean, and then back-translated into English based solely on the Korean translation by a researcher who is bilingual in both languages. The back-translation was compared by the original developer of MYMOP and confirmed usable. In order to test the existence of acupoints and meridians through popular forms of Korean acupuncture regimes, we aim to collect opinions from 101 Korean clinicians who have used those forms. The questions asked include the most effective symptoms, the 5 Shu points, the points least likely to be used due to either adverse events or lack of effectiveness, theoretical reasons for the above proposals, and proposed outcome measures

  15. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev


    Full Text Available Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.

  16. Using satellite-based measurements to explore ... (United States)

    New particle formation (NPF) can potentially alter regional climate by increasing aerosol particle (hereafter particle) number concentrations and ultimately cloud condensation nuclei. The large scales on which NPF is manifest indicate potential to use satellite-based (inherently spatially averaged) measurements of atmospheric conditions to diagnose the occurrence of NPF and NPF characteristics. We demonstrate the potential for using satellite measurements of insolation (UV), trace gas concentrations (sulfur dioxide (SO2), nitrogen dioxide (NO2), ammonia (NH3), formaldehyde (HCHO), ozone (O3)), aerosol optical properties (aerosol optical depth (AOD), Ångström exponent (AE)), and proxies of biogenic volatile organic compound emissions (leaf area index (LAI), temperature (T)) as predictors for NPF characteristics: formation rates, growth rates, survival probabilities, and ultrafine particle (UFP) concentrations at five locations across North America. NPF at all sites is most frequent in spring, exhibits a one-day autocorrelation, and is associated with low condensational sink (AOD×AE) and HCHO concentrations, and high UV. However, there are important site-to-site variations in NPF frequency and characteristics, and in which of the predictor variables (particularly gas concentrations) significantly contribute to the explanatory power of regression models built to predict those characteristics. This finding may provide a partial explanation for the reported spatia

  17. The Result of Multiple I-131 Treatments on the Effective Half-Life of Retained Radioactivity in Patients Ablated for Differentiated Thyroid Cancer: Possible Evidence for Thyroid Remnant Function Impairment. (United States)

    Okkalides, Demetrios


    The ablation of differentiated thyroid cancer by ingested I-131 depends on the activity absorbed by the remnant. This in turn depends on the function of the thyroid cells and on the rate at which radioactivity is excreted from the blood. The reduction of radioiodine is described by the effective half-life (EHL), the time taken for the retained radioactivity to fall by half. If the tumor recurs, more treatments are prescribed, often with escalating activities. Patients may receive several treatments during the evolution of the disease, and the total radioactivity administered (TRA) is the sum of all such activities. The patients' archived information permitted the calculation of EHL and TRA. The patient cohort processed here comprised 274 females and 101 males treated between 1997 and 2015. The TRA to the patients ranged between 1.1 and 129.5 GBq (average = 7.93 ± 9.9 GBq) and the EHL varied between 5.06 and 43.87 hours (average = 14.13 ± 5.7 hours). The data were processed as follows: (a) the EHL corresponding to the last treatment of each patient was plotted against TRA, separately for patients who were treated once and for those treated several times, for comparison; and (b) using a small subgroup of 16 patients who were treated at least 5 times, the EHL and TRA corresponding to each treatment of each patient were plotted. A function of the form y = p − k·ln(x) was fitted to the data in all graphs and k was calculated. For patients treated once, EHL was independent of TRA. A decrease was seen in (a) multitreated patients, with the gradient (k) ranging between -0.541 and -13.880, and (b) 13 out of 16 patients, with the gradient (k) ranging between -5.55 and -31.17, both indicating an impairment of the remnant function, perhaps identified as "stunning." Since this is not avoidable, the uptake may be boosted by splitting the prescribed activity into low-radioactivity fractions, which will also reduce patient hospitalization.
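
    The fitted decline y = p − k·ln(x) is an ordinary linear fit after log-transforming the abscissa. A minimal sketch of that fit (the data below are synthetic, not the paper's patient values):

```python
import numpy as np

def fit_log_decline(tra_gbq, ehl_hours):
    """Least-squares fit of EHL = p - k*ln(TRA).

    The model is linear in ln(TRA), so a first-degree polyfit on the
    log-transformed abscissa suffices; returns (p, k)."""
    slope, intercept = np.polyfit(np.log(tra_gbq), ehl_hours, 1)
    return intercept, -slope  # y = p - k*ln(x)  =>  k = -slope

# Synthetic data following the functional form exactly (p = 20, k = 3)
tra = np.array([2.0, 5.0, 10.0, 20.0, 50.0])   # cumulative activity, GBq
ehl = 20.0 - 3.0 * np.log(tra)                 # effective half-life, hours
p, k = fit_log_decline(tra, ehl)
```

    A positive fitted k then corresponds to the negative gradients reported above, i.e. EHL shrinking as cumulative activity grows.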

  18. Thermistor-based thermal conductivity measurement system

    National Research Council Canada - National Science Library

    Atkins, R.T; Wright, E.A


    This report describes a patented method for using commercially available thermistors to make in-situ thermal conductivity measurements with commonly available electronic equipment such as digital voltmeters...

  19. Refractive index measurement based on confocal method (United States)

    An, Zhe; Xu, XiPing; Yang, JinHua; Qiao, Yang; Liu, Yang


    The development of transparent materials is closely tied to optoelectronic technology, and such materials play an increasingly important role in various fields. They are widely used not only in optical lenses, optical elements, fiber gratings and optoelectronics, but also in building materials, pharmaceutical vessels, aircraft windshields and everyday eyewear. To address the problem of refractive index measurement in transparent optical materials, we propose using the polychromatic confocal method to measure the refractive index of transparent materials. In this article, we describe the principle of the polychromatic confocal method for measuring the refractive index of glass and sketch the optical system and its optimization. We then establish the measurement model of the refractive index and set up the experimental system, with which the refractive index of the glass was calibrated. Because of errors in the experimental process, we corrected the refractive index measurement formula using the experimental data. Taking quartz glass as an example, the measurement accuracy of the refractive index is ±1.8×10-5. This method is practical and accurate, and is especially suitable for non-contact measurement settings. Its environmental requirements are modest: it can fully adapt to the ambient temperature of an ordinary glass production line. The colour of the measured object does not matter; both white and variously coloured glasses can be measured.
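
    For a plane-parallel plate, a common paraxial relation links the apparent axial shift Δ of the confocal focus to the index: n ≈ d / (d − Δ). The snippet below illustrates only that simplified relation; the paper's actual system uses a calibrated polychromatic confocal model, and the numbers here are invented:

```python
def refractive_index_paraxial(thickness_mm, focal_shift_mm):
    """Paraxial estimate of the refractive index of a homogeneous plate
    from the apparent axial shift of the confocal focus:
        n ~= d / (d - delta)
    Assumptions: small numerical aperture, plane-parallel plate."""
    return thickness_mm / (thickness_mm - focal_shift_mm)

# Invented example: a 2 mm plate with a 0.635 mm apparent focus shift
n = refractive_index_paraxial(thickness_mm=2.000, focal_shift_mm=0.635)
# n comes out near 1.465, in the range of fused quartz
```

    At larger numerical apertures the paraxial formula breaks down, which is one reason a full measurement model and experimental compensation, as in the paper, are needed.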

  20. Energy Based Acoustic Measurement Sensors Project (United States)

    National Aeronautics and Space Administration — This research focuses on fully developing energy density sensors that will yield a significant benefit both for measurements of interest to NASA, as well as for...

  1. Parkinson's disease detection based on dysphonia measurements (United States)

    Lahmiri, Salim


    Assessing dysphonic symptoms is a noninvasive and effective approach to detecting Parkinson's disease (PD) in patients. The main purpose of this study is to investigate the effect of different dysphonia measurements on PD detection by a support vector machine (SVM). Seven categories of dysphonia measurements are considered. Experimental results from the ten-fold cross-validation technique demonstrate that vocal fundamental frequency statistics yield the highest accuracy of 88 % ± 0.04. When all dysphonia measurements are employed, the SVM classifier achieves 94 % ± 0.03 accuracy. A refinement of the original pattern space, by removing dysphonia measurements with similar variation across healthy and PD subjects, allows 97.03 % ± 0.03 accuracy to be achieved. The latter performance exceeds what has been reported in the literature on the same dataset with the ten-fold cross-validation technique. Finally, it was found that measures of the ratio of noise to tonal components in the voice are the most suitable dysphonic symptoms for detecting PD subjects, as they achieve 99.64 % ± 0.01 specificity. This finding is highly promising for understanding PD symptoms.
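
    The evaluation pipeline (feature scaling, an SVM classifier, accuracy scored by ten-fold cross-validation) can be sketched with scikit-learn. The data here are synthetic stand-ins for the seven dysphonia feature categories, so the resulting accuracy is illustrative only:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for 7 dysphonia features: the PD class mean is
# shifted by one standard deviation so the classifier has signal to learn
X_healthy = rng.normal(0.0, 1.0, size=(100, 7))
X_pd = rng.normal(1.0, 1.0, size=(100, 7))
X = np.vstack([X_healthy, X_pd])
y = np.array([0] * 100 + [1] * 100)

# Scale features, then classify with an RBF-kernel SVM,
# scored by ten-fold cross-validation as in the study
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)
mean_acc, std_acc = scores.mean(), scores.std()
```

    The feature-removal refinement described above would correspond to dropping columns of X whose distributions are similar in both classes before fitting.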

  2. Measurement properties of performance-based measures to assess physical function in hip and knee osteoarthritis

    DEFF Research Database (Denmark)

    Dobson, F; Hinman, R S; Hall, M


    OBJECTIVES: To systematically review the measurement properties of performance-based measures to assess physical function in people with hip and/or knee osteoarthritis (OA). METHODS: Electronic searches were performed in MEDLINE, CINAHL, Embase, and PsycINFO up to the end of June 2012. Two reviewers independently rated measurement properties using the consensus-based standards for the selection of health status measurement instruments (COSMIN). "Best evidence synthesis" was made using COSMIN outcomes and the quality of findings. RESULTS: Twenty-four out of 1792 publications were eligible for inclusion. Twenty-one performance-based measures were evaluated, including 15 single-activity measures and six multi-activity measures. Measurement properties evaluated included internal consistency (three measures), reliability (16 measures), measurement error (14 measures), validity (nine measures...

  3. Asset-Based Measurement of Poverty (United States)

    Brandolini, Andrea; Magri, Silvia; Smeeding, Timothy M.


    Poverty is generally defined as income or expenditure insufficiency, but the economic condition of a household also depends on its real and financial asset holdings. This paper investigates measures of poverty that rely on indicators of household net worth. We review and assess two main approaches followed in the literature: income-net worth…

  4. Skill composition : exploring a wage-based skill measure


    Nilsen, Øivind Anti; Raknerud, Arvid; Rybalka, Marina; Skjerpen, Terje


    This study explores a wage-based skill measure using information from a wage equation. Evidence from matched employer-employee data shows that skill is attributable to variables other than length of education, for instance experience and type of education. When our wage-based skill measure is applied in TFP growth analysis, measured TFP growth decreases, indicating that more of the change in value-added is picked up by our skill measure than by a purely education-based skill measure.

  5. Statistical Measures for Usage-Based Linguistics (United States)

    Gries, Stefan Th.; Ellis, Nick C.


    The advent of usage-/exemplar-based approaches has resulted in a major change in the theoretical landscape of linguistics, but also in the range of methodologies that are brought to bear on the study of language acquisition/learning, structure, and use. In particular, methods from corpus linguistics are now frequently used to study distributional…

  6. A Transdermal Measurement Platform Based on Microfluidics

    Directory of Open Access Journals (Sweden)

    Wen-Ying Huang


    Full Text Available The Franz diffusion cell is one of the most widely used devices to evaluate transdermal drug delivery. However, this static and nonflowing system has some limitations, such as a relatively large solution volume and skin area and the development of gas bubbles during sampling. To overcome these disadvantages, this study provides a proof of concept for miniaturizing models of transdermal delivery by using a microfluidic chip combined with a diffusion cell. The proposed diffusion microchip system requires only 80 μL of sample solution and provides flow circulation. Two model compounds, Coomassie Brilliant Blue G-250 and potassium ferricyanide, were successfully tested in transdermal delivery experiments. The diffusion rate is high for a high sample concentration or a large membrane pore size. The developed diffusion microchip system is feasible and can be applied to transdermal measurement in the future.

  7. Accuracy and optimal timing of activity measurements in estimating the absorbed dose of radioiodine in the treatment of Graves' disease (United States)

    Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.


    Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
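
    Under the mono-exponential kinetics such dosimetry assumes, the effective half-life and the time-integrated activity coefficient follow directly from two tracer uptake measurements. A minimal sketch (illustrative numbers, not the Pisa patient data):

```python
import math

def effective_half_life(a1, t1, a2, t2):
    """Effective half-life from two activity measurements, assuming
    mono-exponential clearance A(t) = A0 * exp(-lam_eff * t).
    Times and the returned half-life share the same unit."""
    lam_eff = math.log(a1 / a2) / (t2 - t1)
    return math.log(2) / lam_eff

def time_integrated_activity_coeff(uptake_fraction, t_eff):
    """Time-integrated activity coefficient (units of time) for
    mono-exponential kinetics: U / lam_eff = U * T_eff / ln 2."""
    return uptake_fraction * t_eff / math.log(2)

# Illustrative tracer uptakes (fraction of administered activity)
# measured at 24 h and 96 h: the activity halves over 72 h
t_eff = effective_half_life(a1=0.40, t1=24.0, a2=0.20, t2=96.0)
tiac = time_integrated_activity_coeff(uptake_fraction=0.40, t_eff=t_eff)
```

    The paper's point is precisely that where these one- or two-point estimates are taken matters: sampling times far from the optimum inflate the error in the estimated time-integrated activity coefficient.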

  8. Validation of PSF-based 3D reconstruction for myocardial blood flow measurements with Rb-82 PET

    DEFF Research Database (Denmark)

    Tolbod, Lars Poulsen; Christensen, Nana Louise; Møller, Lone W.

    Aim: The use of PSF-based 3D reconstruction algorithms (PSF) is desirable in most clinical PET exams due to their superior image quality. Rb-82 cardiac PET is inherently noisy due to the short half-life and prompt gammas and would presumably benefit from PSF. However, the quantitative behavior of PSF is not well validated, and problems with both edge effects and unphysical contrast recovery have been reported.1 In this study, we compare myocardial blood flow (MBF) and coronary flow reserve (CFR) obtained using GE's implementation of PSF, SharpIR, with the conventional method for reconstruction of dynamic images, filtered backprojection (FBP). Furthermore, since myocardial segmentation might be affected by image quality, two different approaches to segmentation implemented in standard software (Carimas (Turku PET Centre) and QPET (Cedars-Sinai)) are utilized. Method: 14 dynamic rest-stress Rb-82 patient...

  9. Calibration Base Lines for Electronic Distance Measuring Instruments (EDMI) (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A calibration base line (CBL) is a precisely measured, straight-line course of approximately 1,400 m used to calibrate Electronic Distance Measuring Instruments...

  10. Topic-based Social Influence Measurement for Social Networks

    Directory of Open Access Journals (Sweden)

    Asso Hamzehei


    Full Text Available Social science studies have acknowledged that the social influence of individuals is not identical. Social network structure and shared text can reveal immense information about users, their interests, and topic-based influence. Although some studies have considered measuring user influence, less attention has been paid to measuring and estimating topic-based user influence. In this paper, we propose an approach that incorporates network structure, user-generated content for topic-based influence measurement, and users' interactions in the network. We perform experimental analysis on Twitter data and show that our proposed approach can effectively measure topic-based user influence.

  11. The Reliability of Randomly Generated Math Curriculum-Based Measurements (United States)

    Strait, Gerald G.; Smith, Bradley H.; Pender, Carolyn; Malone, Patrick S.; Roberts, Jarod; Hall, John D.


    "Curriculum-Based Measurement" (CBM) is a direct method of academic assessment used to screen and evaluate students' skills and monitor their responses to academic instruction and intervention. A freely available website offers a math worksheet generator that creates randomly generated "math curriculum-based measures"…

  12. Serological markers to measure recent changes in malaria at population level in Cambodia. (United States)

    Kerkhof, Karen; Sluydts, Vincent; Willen, Laura; Kim, Saorin; Canier, Lydie; Heng, Somony; Tsuboi, Takafumi; Sochantha, Tho; Sovannaroth, Siv; Ménard, Didier; Coosemans, Marc; Durnez, Lies


    Serological markers for exposure to different Plasmodium species have recently been used in multiplex immunoassays based on the Luminex technology. However, interpretation of the assay results requires consideration of the half-life of specific antibodies against these markers. Therefore, the aim of the present study was to document the half-life of malaria-specific serological markers, as well as to assess the sensitivity of these markers in picking up recent changes in malaria exposure. A recently developed multiplex immunoassay was used to measure the intensity of antibody (Ab) responses against 19 different Plasmodium-specific antigens, covering different human malaria parasites, and two vector saliva antigens. For this purpose, 8439 blood samples from five cross-sectional surveys in Ratanakiri, Cambodia, were analysed. These comprise a random selection from two selected surveys and an additional set of blood samples from individuals that were randomly re-sampled three, four or five times. A generalized estimating equation model and linear regression models were fitted to log-transformed antibody intensity data. Results showed that most (17/21) Ab responses are higher in PCR-positive than PCR-negative individuals. Furthermore, these antibody responses follow the same upward trend within each age group. Estimation of the half-lives showed differences between serological markers that reflect short-term (seasonal) and long-term (year-round) transmission trends. Ab levels declined significantly together with a decrease of PCR prevalence in a group of malaria-endemic villages. For Plasmodium falciparum, antibodies against LSA3.RE, GLURP and Pf.GLURP.R2 are most likely to be a reflection of recent (range from 6 to 8 months) exposure in the Mekong Subregion. PvEBP is the only Plasmodium vivax Ag responding reasonably well, in spite of an estimated Ab half-life of more than 1 year. The use of Ab intensity data rather than dichotomizing the continuous Ab-titre data (positive vs negative
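
    The half-life estimation described (linear regression on log-transformed antibody intensities) reduces to ordinary least squares: with ln(titre) = a − λt, the half-life is ln 2 / λ. A self-contained sketch with synthetic titres (the data are invented, not from the study):

```python
import math

def antibody_half_life(times, titres):
    """Half-life from ordinary least squares on ln(titre) = a - lam * t,
    assuming first-order (exponential) antibody decay."""
    n = len(times)
    logs = [math.log(v) for v in titres]
    mt = sum(times) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -math.log(2) / slope  # slope is negative for decaying titres

# Synthetic titres decaying with a 7-month half-life
months = [0, 2, 4, 6, 8]
titres = [1000.0 * 0.5 ** (m / 7.0) for m in months]
hl = antibody_half_life(months, titres)  # recovers 7 months
```

    A marker with a short fitted half-life tracks seasonal exposure, while one with a half-life over a year (like PvEBP above) reflects year-round transmission.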



  13. Eigenvector-based centrality measures for temporal networks

    Taylor, Dane; Myers, Sean A.; Clauset, Aaron; Porter, Mason A.; Mucha, Peter J.


    Numerous centrality measures have been developed to quantify the importance of nodes in time-independent networks, and many of them can be expressed as the leading eigenvector of some matrix. With the increasing availability of network data that change in time, it is important to extend such eigenvector-based centrality measures to time-dependent networks. In this paper, we introduce a principled generalization of network centrality measures that is valid for any eigenvector-based centralit...

  14. Slice-Based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z (United States)

    Bollin, Andreas


    This paper demonstrates that existing slice-based measures can reasonably be mapped to the field of state-based specification languages. Making use of Z specifications, this contribution renews the idea of slice profiles and derives coupling and cohesion measures from them. The measures are then assessed by taking a critical look at their sensitivity with respect to modifications of the specification source. The presented study shows that slice-based coupling and cohesion measures have the potential to be used as quality indicators for specifications, as they reflect changes in the structure of a specification just as their program-related counterparts do.

  15. Observational attachment theory-based parenting measures predict children's attachment narratives independently from social learning theory-based measures. (United States)

    Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen


    Conceptually and methodologically distinct models exist for assessing the quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently rated attachment narrative representations and peer nominations. Results indicated that attachment theory-based and social learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.

  16. Cyst-based measurements for assessing lymphangioleiomyomatosis in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lo, P., E-mail:; Brown, M. S.; Kim, H.; Kim, H.; Goldin, J. G. [Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California 90024 (United States); Argula, R.; Strange, C. [Division of Pulmonary and Critical Care Medicine, Medical University of South Carolina, Charleston, South Carolina 29425 (United States)


    Purpose: To investigate the efficacy of a new family of measurements made on individual pulmonary cysts extracted from computed tomography (CT) for assessing the severity of lymphangioleiomyomatosis (LAM). Methods: CT images were analyzed using thresholding to identify a cystic region of interest from chest CT of LAM patients. Individual cysts were then extracted from the cystic region by the watershed algorithm, which separates individual cysts based on subtle edges within the cystic regions. A family of measurements was then computed, quantifying the amount, distribution, and boundary appearance of the cysts. Sequential floating feature selection was used to select a small subset of features for quantification of the severity of LAM. Adjusted R{sup 2} from multiple linear regression and R{sup 2} from linear regression against measurements from spirometry were used to compare the performance of our proposed measurements with currently used density-based CT measurements in the literature, namely, the relative area measure and the D measure. Results: Volumetric CT data, performed at total lung capacity and residual volume, from a total of 49 subjects enrolled in the MILES trial were used in our study. Our proposed measures had adjusted R{sup 2} ranging from 0.42 to 0.59 when regressing against the spirometry measures, with p < 0.05. For previously used density-based CT measurements in the literature, the best R{sup 2} was 0.46 (for only one instance), with the majority being lower than 0.3 or having p > 0.05. Conclusions: The proposed family of CT-based cyst measurements have better correlation with spirometric measures than previously used density-based CT measurements. They show potential as a sensitive tool for quantitatively assessing the severity of LAM.
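
    For context, a density-based "relative area" baseline of the kind the authors compare against is simply the percentage of lung voxels below an attenuation threshold. A toy sketch (the threshold and voxel values are illustrative; real use requires a segmented lung mask from volumetric CT):

```python
import numpy as np

def relative_area(ct_hu, lung_mask, threshold_hu=-950.0):
    """Density-based severity score: percentage of lung voxels whose
    attenuation (in Hounsfield units) falls below threshold_hu."""
    lung = ct_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung < threshold_hu) / lung.size

# Toy 1-D "scan": four cyst-like voxels (-1000 HU) among normal lung tissue
ct = np.array([-1000.0, -1000.0, -1000.0, -1000.0,
               -800.0, -820.0, -780.0, -850.0])
mask = np.ones(ct.shape, dtype=bool)
ra = relative_area(ct, mask)  # half the voxels fall below -950 HU
```

    The paper's cyst-level measures go beyond this scalar by separating individual cysts (via watershed) and describing their size distribution and boundaries, which is why they correlate better with spirometry.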

  17. Improved CDGPS FDIR Using Comm-based Relative Measurements Project (United States)

    National Aeronautics and Space Administration — The proposed innovation is to use the comm-based measurements available from many advanced satellite cluster wireless networks to improve the flexibility and...

  18. Curriculum-Based Measurement for Beginning Writers (K-2) (United States)

    Dombek, Jennifer L.; Al Otaiba, Stephanie


    Assessment and instruction of reading tend to dominate current discussions of early literacy. Shifting the focus to writing, this article addresses the assessment of writing for students in kindergarten through second grade. Using curriculum-based measurement of written expression for beginning writers, teachers can measure growth of smaller…

  19. Similarity measures for convex polyhedra based on Minkowski addition

    NARCIS (Netherlands)

    Tuzikov, Alexander V.; Roerdink, Jos B.T.M.; Heijmans, Henk J.A.M.

    In this paper we introduce and investigate similarity measures for convex polyhedra based on Minkowski addition and inequalities for the mixed volume and volume related to the Brunn-Minkowski theory. All measures considered are invariant under translations; furthermore, some of them are also

  20. Measuring Disorientation Based on the Needleman-Wunsch Algorithm (United States)

    Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel


    This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
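    The Needleman-Wunsch alignment underlying the proposed disorientation measure is standard dynamic programming over two sequences; a minimal sketch scoring a learner's navigation path against an ideal path (the page names and scoring parameters are our illustrative assumptions, not the study's):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score of two sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[m][n]

# Pages visited by a learner vs. an ideal route through the material
ideal = ["home", "unit1", "quiz1", "unit2"]
actual = ["home", "unit1", "unit2"]
print(nw_score(ideal, actual))  # -> 2
```

A disorientation score could then be defined, for example, as the alignment score normalized by the ideal path length; that normalization is one possible choice, not necessarily the study's.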

  1. Survey-Based Measurement of Public Management and Policy Networks (United States)

    Henry, Adam Douglas; Lubell, Mark; McCoy, Michael


    Networks have become a central concept in the policy and public management literature; however, theoretical development is hindered by a lack of attention to the empirical properties of network measurement methods. This paper compares three survey-based methods for measuring organizational networks: the roster, the free-recall name generator, and…

  2. [Welding arc temperature field measurements based on Boltzmann spectrometry]. (United States)

    Si, Hong; Hua, Xue-Ming; Zhang, Wang; Li, Fang; Xiao, Xiao


    Arc plasma is a non-uniform plasma with complicated energy and mass transport processes in its interior, so plasma temperature measurement is of great significance. Compared with the absolute spectral line intensity method and the standard temperature method, the Boltzmann plot method is more accurate and convenient. Based on Boltzmann theory, the present paper calculates the temperature distribution of the plasma and analyzes the principle of line selection by real-time scanning of the space of the TIG arc.
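    The Boltzmann plot method extracts temperature from the slope of ln(Iλ/gA) versus upper-level energy E, since for a plasma in local thermodynamic equilibrium these points fall on a line of slope -1/(kT). A minimal sketch on synthetic lines (the wavelengths, gA values, and energies are made-up illustrative numbers, not real spectroscopic data):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def boltzmann_plot_temperature(lines):
    """lines: (intensity I, wavelength lam, g*A, upper-level energy E in eV).
    The least-squares slope of ln(I*lam/(g*A)) vs E equals -1/(k*T)."""
    xs = [E for I, lam, gA, E in lines]
    ys = [math.log(I * lam / gA) for I, lam, gA, E in lines]
    n = len(lines)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / (K_B * slope)

# Synthetic lines generated at T = 10000 K; the plot should recover it
T_TRUE = 10000.0
lines = [((gA / lam) * math.exp(-E / (K_B * T_TRUE)), lam, gA, E)
         for lam, gA, E in [(696.5, 1.0e7, 13.3), (706.7, 2.0e7, 13.5),
                            (738.4, 4.0e7, 13.9)]]
print(round(boltzmann_plot_temperature(lines)))  # -> 10000
```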

  3. Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip

    Directory of Open Access Journals (Sweden)

    Jane Louie Fresco Zamora


    Full Text Available Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies in maintaining ground weather measurement systems from where these reports are obtained. Thus, to mitigate data scarcity, it is required to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that will calibrate smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values.
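    The pairwise gossip idea is that two paired devices repeatedly average their calibration offsets, while the fixed weather stations act as anchors that never leave the reference value; a minimal sketch (the anchor rule and pairing schedule are our simplified reading, not the paper's exact heuristic):

```python
def gossip_step(offsets, i, j, anchors):
    """Average the pressure offsets of nodes i and j; anchors keep theirs."""
    avg = (offsets[i] + offsets[j]) / 2.0
    if i not in anchors:
        offsets[i] = avg
    if j not in anchors:
        offsets[j] = avg

# Node 0 is a fixed weather station (ground-truth offset 0.0 hPa);
# nodes 1 and 2 are smartphones with biased barometer readings.
offsets = [0.0, 4.0, 2.0]
for i, j in [(0, 1), (1, 2), (0, 2)]:   # an illustrative pairing schedule
    gossip_step(offsets, i, j, anchors={0})
print(offsets)  # -> [0.0, 2.0, 1.0]
```

Because the anchors never move, repeated pairings pull the smartphone offsets monotonically toward the station's reference value.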

  4. Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip. (United States)

    Zamora, Jane Louie Fresco; Kashihara, Shigeru; Yamaguchi, Suguru


    Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies in maintaining ground weather measurement systems from where these reports are obtained. Thus, to mitigate data scarcity, it is required to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that will calibrate smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values.

  5. Calibration of Smartphone-Based Weather Measurements Using Pairwise Gossip (United States)

    Yamaguchi, Suguru


    Accurate and reliable daily global weather reports are necessary for weather forecasting and climate analysis. However, the availability of these reports continues to decline due to the lack of economic support and policies in maintaining ground weather measurement systems from where these reports are obtained. Thus, to mitigate data scarcity, it is required to utilize weather information from existing sensors and built-in smartphone sensors. However, as smartphone usage often varies according to human activity, it is difficult to obtain accurate measurement data. In this paper, we present a heuristic-based pairwise gossip algorithm that will calibrate smartphone-based pressure sensors with respect to fixed weather stations as our referential ground truth. Based on actual measurements, we have verified that smartphone-based readings are unstable when observed during movement. Using our calibration algorithm on actual smartphone-based pressure readings, the updated values were significantly closer to the ground truth values. PMID:26421312

  6. Half-life determination of T{sub z} = -1 and T{sub z} = -(1)/(2) proton-rich nuclei and the β decay of {sup 58}Zn

    Energy Technology Data Exchange (ETDEWEB)

    Kucuk, L.; Oktem, Y.; Cakirli, R.B.; Ganioglu, E.; Susoy, G. [Istanbul University, Department of Physics, Istanbul (Turkey); Orrigo, S.E.A.; Montaner-Piza, A.; Rubio, B. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Fujita, Y. [Osaka University, Department of Physics, Toyonaka, Osaka (Japan); Osaka University, Research Center for Nuclear Physics, Ibaraki, Osaka (Japan); Gelletly, W. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); University of Surrey, Department of Physics, Guildford, Surrey (United Kingdom); Blank, B.; Ascher, P.; Giovinazzo, J.; Grevy, S. [Centre d' Etudes Nucleaires de Bordeaux Gradignan, CNRS/IN2P3 - Universite de Bordeaux, Gradignan (France); Adachi, T.; Fujita, H.; Tamii, A. [Osaka University, Research Center for Nuclear Physics, Ibaraki, Osaka (Japan); Algora, A. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Inst. of Nuclear Research of the Hung. Acad. of Sciences, Debrecen (Hungary); France, G. de; Oliveira Santos, F. de; Thomas, J.C. [Grand Accelerateur National d' Ions Lourds (GANIL), CEA/DRF-CNRS/IN2P3, Caen (France); Marques, F.M. [Laboratoire de Physique Corpusculaire de Caen, ENSICAEN, UNICAEN, IN2P3/CNRS, Caen (France); Molina, F. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Comision Chilena de Energia Nuclear, Casilla 188-D, Santiago (Chile); Perrot, L. [IPN Orsay, Orsay (France); Raabe, R. [Grand Accelerateur National d' Ions Lourds (GANIL), CEA/DRF-CNRS/IN2P3, Caen (France); KU Leuven, Instituut voor Kern- en Stralingsfysica, Leuven (Belgium); Srivastava, P.C. [Grand Accelerateur National d' Ions Lourds (GANIL), CEA/DRF-CNRS/IN2P3, Caen (France); Indian Institute of Technology, Department of Physics, Roorkee (India)


    We have measured the β-decay half-lives of 16 neutron-deficient nuclei with T{sub z} = -1/2 and -1, ranging from chromium to germanium. They were produced in an experiment carried out at GANIL and optimized for the production of {sup 58}Zn, for which in addition we present the decay scheme and absolute Fermi and Gamow-Teller transition strengths. Since all of these nuclei lie on the rp-process pathway, the T{sub 1/2} values are important ingredients for the rp-process reaction flow calculations and for models of X-ray bursters. (orig.)
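    A half-life such as these T{sub 1/2} values is commonly extracted from the multiscaled decay curve; a minimal log-linear sketch on noise-free synthetic counts (a real analysis would fit with maximum likelihood and include dead-time and background terms):

```python
import math

def half_life_from_counts(times, counts):
    """Decay constant from the least-squares slope of ln(counts) vs time;
    the half-life is ln(2)/lambda."""
    logs = [math.log(c) for c in counts]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    slope = (sum((t - mt) * (y - ml) for t, y in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -math.log(2) / slope

# Noise-free synthetic decay curve with a true half-life of 0.2 s
t_half = 0.2
times = [0.05 * k for k in range(20)]
counts = [1.0e5 * 2.0 ** (-t / t_half) for t in times]
print(round(half_life_from_counts(times, counts), 6))  # -> 0.2
```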

  7. PC Based Linear Variable Differential Displacement Measurement Uses Optical Technique

    Directory of Open Access Journals (Sweden)

    Tapan Kumar MAITI


    Full Text Available PC-based linear variable differential displacement (LVDD) measurement with an optical approach has been presented. The technique is a good blend of hardware and software and is essentially an alternative to the linear variable differential transformer (LVDT). Visual Basic (VB) programming is used for this PC-based measurement. Here the voltage output and the displacement of the reflector can be studied and stored continuously. Theoretical predictions are supported by experimental results. This technique can be used for the measurement of non-electrical parameters such as force, torque, and liquid level.

  8. Dental pulp vitality measurement based on multiwavelength photoplethysmography (United States)

    Sarkela, Ville; Kopola, Harri K.; Oikarinen, Kyosti; Herrala, Esko


    Observation of the intradental blood supply is important in cases of dental trauma, but difficult. As the methods used by dentists to measure pulp vitality are not very reliable, a dental pulp vitalometer based on fiberoptic reflectance measurement and measurement of the absorption of blood has been designed and built. In addition to the fiber-optic probe and reflectance sensor electronics, the vitalometer includes a data acquisition card, a PC and data processing programs. The thick dentin and enamel layers and the small amount of blood in a tooth are major problems for optical measurement of its vitality, and scattered light from the enamel and the dentin surrounding the pulp also causes a problem in measurements based on reflectance. These problems are assessed here by means of theoretical models and calculations. The advantage of reflectance measurement is that only one probe is used, which is easy to place against the tooth. Thus measurements are simple to make. Three wavelengths (560 nm, 650 nm, 850 nm) are used to measure photoplethysmographic signals, and these should also allow the oxygen saturation of the blood in a tooth to be measured in the future. Series of measurements have been performed on vital and non-vital teeth by recording photoplethysmographic signals, using the vitalometer and a commercial laser-Doppler instrument. The laser-Doppler and vitalometer results are compared and verified here.

  9. Natural language in measuring user emotions : A qualitative approach to quantitative survey-based emotion measurement

    NARCIS (Netherlands)

    Tonetto, L.M.; Desmet, P.M.A.


    This paper presents an approach to developing surveys that measure user experiences with the use of natural everyday language. The common approach to develop questionnaires that measure experience is to translate theoretical factors into verbal survey items. This theory-based approach can impair the

  10. Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yin Peili


    Full Text Available Validity and correctness test verification of the measuring software has been a thorny issue hindering the development of the Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. The triangular patch model with accurately controlled precision was taken as the virtual workpiece and a universal collision detection model was established. The whole process simulation of workpiece measurement is implemented by VGMI replacing GMI and the measuring software is tested in the proposed virtual environment. Taking the involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on the involute master in a GMI. The experimental results indicate consistency between the tooth profile deviations and the calibration results, thus verifying the accuracy of the gear measuring system, including its measurement procedures. It is shown that the VGMI presented can be applied in the validation of measuring software, providing a new, ideal platform for testing complex workpiece-measuring software without calibrated artifacts.

  11. Measuring Software Test Verification for Complex Workpieces based on Virtual Gear Measuring Instrument (United States)

    Yin, Peili; Wang, Jianhua; Lu, Chunxia


    Validity and correctness test verification of the measuring software has been a thorny issue hindering the development of the Gear Measuring Instrument (GMI). The main reason is that the software itself is difficult to separate from the rest of the measurement system for independent evaluation. This paper presents a Virtual Gear Measuring Instrument (VGMI) to independently validate the measuring software. The triangular patch model with accurately controlled precision was taken as the virtual workpiece and a universal collision detection model was established. The whole process simulation of workpiece measurement is implemented by VGMI replacing GMI and the measuring software is tested in the proposed virtual environment. Taking the involute profile measurement procedure as an example, the validity of the software is evaluated based on the simulation results; meanwhile, experiments using the same measuring software are carried out on the involute master in a GMI. The experimental results indicate consistency between the tooth profile deviations and the calibration results, thus verifying the accuracy of the gear measuring system, including its measurement procedures. It is shown that the VGMI presented can be applied in the validation of measuring software, providing a new, ideal platform for testing complex workpiece-measuring software without calibrated artifacts.

  12. Accurate measurement method for tube's endpoints based on machine vision (United States)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng


    Tubes are used widely in aerospace vehicles, and their accurate assembly directly affects the assembly reliability and the quality of the products. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm feasibility, 11 tubes were processed to remove the reflected light and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure a tube's endpoints without any surface treatment or tools and enables on-line measurement.

  13. Camera-based measurement of respiratory rates is reliable. (United States)

    Becker, Christoph; Achermann, Stefan; Rocque, Mukul; Kirenko, Ihor; Schlack, Andreas; Dreher-Hummel, Thomas; Zumbrunn, Thomas; Bingisser, Roland; Nickel, Christian H


    Respiratory rate (RR) is one of the most important vital signs used to detect whether a patient is in critical condition. It is part of many risk scores and its measurement is essential for triage of patients in emergency departments. It is often not recorded, as its measurement is cumbersome and time-consuming. We intended to evaluate the accuracy of camera-based measurements as an alternative to the current practice of manual counting. We monitored the RR of healthy male volunteers with a camera-based prototype application and simultaneously by manual counting and by capnography, which was considered the gold standard. The four assessors were mutually blinded. We simulated normoventilation, hypoventilation and hyperventilation as well as deep, normal and superficial breathing depths to assess potential clinical settings. The volunteers were assessed while being undressed, wearing a T-shirt or a winter coat. In total, 20 volunteers were included. The results of camera-based measurements of RRs and capnography were in close agreement throughout all clothing styles and respiratory patterns (Pearson's correlation coefficient r=0.90-1.00, except for one scenario in which the volunteer breathed slowly while dressed in a winter coat, r=0.84). In the winter-coat scenarios, the camera-based prototype application was superior to human counters. In our pilot study, we found that camera-based measurements delivered accurate and reliable results. Future studies need to show that camera-based measurements are a secure alternative for measuring RRs in clinical settings as well.
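    The agreement quoted above is Pearson's correlation coefficient between the camera-based and capnography rates; for reference, a minimal implementation (the breathing rates below are made-up illustrative numbers, not study data):

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

camera = [12, 16, 20, 28, 9]   # breaths/min, illustrative
capno = [12, 15, 21, 27, 10]
print(round(pearson_r(camera, capno), 3))  # -> 0.992
```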

  14. Patch near field acoustic holography based on particle velocity measurements

    DEFF Research Database (Denmark)

    Zhang, Yong-Bin; Jacobsen, Finn; Bi, Chuan-Xing


    Patch near field acoustic holography (PNAH) based on sound pressure measurements makes it possible to reconstruct the source field near a source by measuring the sound pressure at positions on a surface that is comparable in size to the source region of concern. Particle velocity is an alternative...... input quantity for NAH, and the advantage of using the normal component of the particle velocity rather than the sound pressure as the input of conventional spatial Fourier transform based NAH and as the input of the statistically optimized variant of NAH has recently been demonstrated. This paper......, PNAH based on particle velocity measurements can give better results than the pressure-based PNAH with a reduced number of iterations. A simulation study, as well as an experiment carried out with a pressure-velocity sound intensity probe, demonstrates these findings....

  15. Residence time measurement of an isothermal combustor flow field

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Liangta; Spencer, Adrian [Loughborough University, Department of Aero and Auto Engineering, Loughborough (United Kingdom)


    Residence times of combustors have commonly been used to help understand NO{sub x} emissions and flame blowout. Both the time-mean velocity and turbulence fields are important to the residence time, but determining the residence time via analysis of a measured velocity field is difficult due to the inherent unsteadiness and the three-dimensional nature of a high-Re swirling flow. A more direct approach to measure residence time is reported here that examines the dynamic response of fuel concentration to a sudden cutoff in the fuel injection. Residence time measurements were mainly made using a time-resolved planar laser-induced fluorescence (PLIF) technique, but a second camera for particle image velocimetry (PIV) was added to check that the step change does not alter the velocity field and the spectral content of the coherent structures. Characteristic timescales evaluated from the measurements are referred to as convection and half-life times: The former describes the time delay from a fuel injector exit reference point to a downstream point of interest, and the latter describes the rate of decay once the effect of the reduced scalar concentration at the injection source has been transported to the point of interest. Residence time is often defined as the time taken for a conserved scalar to reduce to half its initial value after injection is stopped: this is equivalent to the sum of the convection time and the half-life values. The technique was applied to a high-swirl fuel injector typical of that found in combustor applications. Two test cases have been studied: with central jet (with-jet) and without central jet (no-jet). It was found that the relatively unstable central recirculation zone of the no-jet case resulted in increased transport of fuel into the central region that is dominated by a precessing vortex core, where long half-life times are also found. Based on this, it was inferred that the no-jet case may be more prone to NO{sub x} production. The

  16. Assessing Therapist Competence: Development of a Performance-Based Measure and Its Comparison With a Web-Based Measure. (United States)

    Cooper, Zafra; Doll, Helen; Bailey-Straebler, Suzanne; Bohn, Kristin; de Vries, Dian; Murphy, Rebecca; O'Connor, Marianne E; Fairburn, Christopher G


    Recent research interest in how best to train therapists to deliver psychological treatments has highlighted the need for rigorous, but scalable, means of measuring therapist competence. There are at least two components involved in assessing therapist competence: the assessment of their knowledge of the treatment concerned, including how and when to use its strategies and procedures, and an evaluation of their ability to apply such knowledge skillfully in practice. While the assessment of therapists' knowledge has the potential to be completed efficiently on the Web, the assessment of skill has generally involved a labor-intensive process carried out by clinicians, and as such, may not be suitable for assessing training outcome in certain circumstances. The aims of this study were to develop and evaluate a role-play-based measure of skill suitable for assessing training outcome and to compare its performance with a highly scalable Web-based measure of applied knowledge. Using enhanced cognitive behavioral therapy (CBT-E) for eating disorders as an exemplar, clinical scenarios for role-play assessment were developed and piloted together with a rating scheme for assessing trainee therapists' performance. These scenarios were evaluated by examining the performance of 93 therapists from different professional backgrounds and at different levels of training in implementing CBT-E. These therapists also completed a previously developed Web-based measure of applied knowledge, and the ability of the Web-based measure to efficiently predict competence on the role-play measure was investigated. The role-play measure assessed performance at implementing a range of CBT-E procedures. The majority of the therapists rated their performance as moderately or closely resembling their usual clinical performance. Trained raters were able to achieve good-to-excellent reliability for averaged competence, with intraclass correlation coefficients ranging from .653 to .909.
The measure was

  17. Measurement of the super-allowed branching ratio of $^{22}$Mg

    CERN Multimedia

    We propose to measure the super-allowed branching ratio and the half-life of $^{22}$Mg, one of the least-well-measured $0^{+} \\rightarrow 0^{+}$ transitions of the 14 nuclei used to determine V$_{ud}$ and to test the unitarity of the CKM matrix. We propose measurements which should allow us to significantly improve the precision of the super-allowed branching ratio, employing a precisely efficiency-calibrated germanium detector, and of the half-life. As no method exists to greatly improve (e.g. by an order of magnitude) on previous results, the branching ratio and the half-life have to be measured several times with independent methods and in independent experiments.

  18. Modelling the biological half-life and seasonality of ¹⁴C in Fucus vesiculosus from the east coast of Ireland: implications for the estimation of future trends. (United States)

    Keogh, S M; Cournane, S; León Vintró, L; McGee, E J; Mitchell, P I


    Radiocarbon levels were recorded in Fucus vesiculosus samples collected on a monthly basis over a three-year period at a site on the east coast of Ireland. The resulting data were analysed using a numerical model which estimates the transit times from the Sellafield plant to the sampling location and the mean availability time of ¹⁴C in seaweed. With the inclusion of a model parameter allowing for seasonal variability in uptake by the Fucus, good correlation was observed between the predicted and measured concentrations. Future temporal trends of ¹⁴C concentrations in Fucus along the eastern Irish coastline were modelled under three possible prospective discharge scenarios, predicting that ¹⁴C concentrations in Fucus would fall to ambient background levels within 2.5 years of discharges being set to zero. Such projections may prove helpful in assessing the consequences of discharge management and in policy making in the context of the OSPAR convention. Copyright © 2011 Elsevier Ltd. All rights reserved.
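    The projected decline to background after a discharge cutoff is first-order decay governed by the biological half-life; a minimal sketch (the half-life and concentration values are illustrative placeholders, not the paper's fitted parameters):

```python
import math

def excess_activity(c0, background, t, t_half):
    """Concentration at time t after discharges stop, decaying toward background."""
    return background + (c0 - background) * math.exp(-math.log(2) * t / t_half)

def time_to_fraction(fraction, t_half):
    """Time for the excess above background to fall to the given fraction."""
    return -t_half * math.log(fraction) / math.log(2)

# With an illustrative effective half-life of 0.4 years, the excess falls
# to 1% of its initial value in about 2.66 years.
print(round(time_to_fraction(0.01, 0.4), 2))  # -> 2.66
```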

  19. A New Laser Based Approach for Measuring Atmospheric Greenhouse Gases

    Directory of Open Access Journals (Sweden)

    Jeremy Dobler


    Full Text Available In 2012, we developed a proof-of-concept system for a new open-path laser absorption spectrometer for measuring atmospheric CO2. The measurement approach utilizes high-reliability, all-fiber-based, continuous-wave laser technology, along with a unique all-digital lock-in amplifier method that, together, enables simultaneous transmission and reception of multiple fixed wavelengths of light. This new technique, which utilizes very little transmitted energy relative to conventional lidar systems, provides high signal-to-noise ratio (SNR) measurements, even in the presence of a large background signal. This proof-of-concept system, tested in both a laboratory environment and a limited number of field experiments over path lengths of 680 m and 1,600 m, demonstrated SNR values >1,000 for received signals of ~18 picowatts averaged over 60 s. An SNR of 1,000 is equivalent to a measurement precision of ±0.001, or ~0.4 ppmv. The measurement method is expected to provide a new capability for automated monitoring of greenhouse gases at fixed sites, such as carbon sequestration facilities and volcanoes, for short- and long-term assessment of urban plumes, and for other similar applications. In addition, this concept enables active measurements of column amounts from geosynchronous orbit for a network of ground-based receivers/stations that would complement other current and planned space-based measurement capabilities.
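    The quoted precision is simple arithmetic on the SNR: a fractional uncertainty of 1/SNR applied to the ambient CO2 mixing ratio; a one-line sanity check (the ~400 ppmv ambient value is our assumption for the conversion):

```python
def precision_ppmv(snr: float, ambient_ppmv: float = 400.0) -> float:
    """Measurement precision in ppmv for a given signal-to-noise ratio."""
    return ambient_ppmv / snr

print(precision_ppmv(1000))  # -> 0.4 (1/1000 of ~400 ppmv)
```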

  20. [On-site measurement of landfill gas yield and verification of IPCC model]. (United States)

    Luo, Yu-Xiang; Wang, Wei; Gao, Xing-Bao


    In order to obtain an accurate yield of landfill gas in the Yulongkeng Landfill, Shenzhen, an improved pumping test was conducted. The methane production rates of the influence regions were determined to be 14.67 x 10(-5), 9.46 x 10(-5), 9.55 x 10(-5), and 4.28 x 10(-5) m3/(t x h), respectively. According to these methane production rates, the whole methane yield of the Yulongkeng Landfill in 2005 was 322 m3/h, which indicated that the landfill had entered its stationary phase and that recovery of the landfill gas was not worthwhile. The IPCC model was verified against the measured data. The degradation half-life of the waste was the key parameter governing the prediction accuracy of the IPCC model. In China, the degradable fraction of municipal solid waste is mainly kitchen waste with a short degradation period, so the degradation half-life is shorter than the default value proposed in the IPCC model. To improve the prediction accuracy of landfill gas yield, the model parameters should be chosen on the basis of a full survey of waste characteristics in China, which will improve the applicability of the IPCC model.
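    The IPCC model in question is a first-order decay (FOD) model: waste deposited in each year releases methane at a rate set by k = ln 2 / half-life, so the degradation half-life directly controls the predicted gas yield curve. A minimal single-stream sketch (the methane potential and half-life values are illustrative, not the paper's calibrated parameters):

```python
import math

def fod_methane_rate(deposits, t, l0, t_half):
    """Methane generation rate at year t under first-order decay.
    deposits: {year: tonnes of waste}; l0: methane potential per tonne."""
    k = math.log(2) / t_half  # decay rate constant, 1/yr
    return sum(k * l0 * w * math.exp(-k * (t - year))
               for year, w in deposits.items() if year <= t)

# Waste landfilled in 2000 and 2001, evaluated in 2005 with a 4-year half-life
deposits = {2000: 1000.0, 2001: 1000.0}
rate = fod_methane_rate(deposits, 2005, l0=100.0, t_half=4.0)
print(round(rate, 1))
```

A shorter half-life, as argued above for kitchen-waste-dominated refuse, shifts this curve toward an earlier, sharper peak and a faster decline.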

  1. Modern gas-based temperature and pressure measurements

    CERN Document Server

    Pavese, Franco


    This 2nd edition volume of Modern Gas-Based Temperature and Pressure Measurements follows the first publication in 1992. It collects a much larger set of information, reference data, and bibliography in temperature and pressure metrology of gaseous substances, including the physical-chemical issues related to gaseous substances. The book provides solutions to practical applications where gases are used in different thermodynamic conditions. Modern Gas-Based Temperature and Pressure Measurements, 2nd edition is the only comprehensive survey of methods for pressure measurement in gaseous media used in the medium-to-low pressure range closely connected with thermometry. It assembles current information on thermometry and manometry that involve the use of gaseous substances which are likely to be valid methods for the future. As such, it is an important resource for the researcher. This edition is updated through the very latest scientific and technical developments of gas-based temperature and pressure measurem...

  2. Nonlinearity measurements of PIN photodiode based ROSA for FTTX applications (United States)

    Wang, Xinzhong; He, Chun; Li, Yao; Zhou, Andy; Tsay, Wei-Shin


    We have designed and fabricated a PIN photodiode based ROSA for FTTX applications. The critical nonlinearity parameters of inter-modulation distortion (IMD) were measured using two RF-modulated light sources near 1550 nm wavelength channels. A cost-effective measuring system with a narrow pass-band filter was set up, and procedures were developed for determining the low-level IMD signals. The test results were used in real time to guide the packaging process to achieve the best receiver performance.


    DEFF Research Database (Denmark)

    Zhu, Wei; Novati, S. Calchi; Gould, A.


    We report on the mass and distance measurements of two single-lens events from the 2015 Spitzer microlensing campaign. With both finite-source effect and microlens parallax measurements, we find that the lens of OGLE-2015-BLG-1268 is very likely a brown dwarf (BD). Assuming that the source star l...... is dramatically increased once simultaneous ground- and space-based observations are conducted....

  4. Measurement-based reliability prediction methodology. M.S. Thesis (United States)

    Linn, Linda Shen


    In the past, analytical and measurement-based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. That issue is addressed here. A combined statistical/analytical approach is proposed that uses measurements from one environment to model the system failure behavior in a new environment. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.

  5. Verifiable fault tolerance in measurement-based quantum computation (United States)

    Fujii, Keisuke; Hayashi, Masahito


    Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of the quantum systems is not so trivial, since predicting the output is exponentially hard. As another problem, quantum systems are very sensitive to noise and thus need error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses on fault tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested by using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is achieved by a constant number of repetitions of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify experimental quantum error correction.

  6. Hybrid architecture for encoded measurement-based quantum computation. (United States)

    Zwerger, M; Briegel, H J; Dür, W


We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states; within the considered error model, we find a threshold on the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication.

  7. Predictive Software Measures based on Z Specifications - A Case Study

    Directory of Open Access Journals (Sweden)

    Andreas Bollin


Estimating the effort and quality of a system is a critical step at the beginning of every software project. Reliable ways of calculating these measures are necessary, and it is even better when the calculation can be done as early as possible in the development life-cycle. With this in mind, metrics for formal specifications are examined for correlations with complexity- and quality-based code measures. A case study, based on a Z specification and its implementation in Ada, analyzes the practicability of these metrics as predictors.

  8. Quantum yields of decomposition and homo-dimerization of solid L-alanine induced by 7.2 eV Vacuum ultraviolet light irradiation: an estimate of the half-life of L-alanine on the surface of space objects. (United States)

    Izumi, Yudai; Nakagawa, Kazumichi


One of the leading hypotheses regarding the origin of prebiotic molecules on the primitive Earth is that they formed from inorganic molecules in extraterrestrial environments and were delivered by meteorites, space dust and comets. To evaluate the availability of extraterrestrial amino acids, it is necessary to examine their decomposition and oligomerization rates as induced by extraterrestrial energy sources, such as vacuum ultraviolet (VUV) and X-ray photons and high-energy particles. This paper reports the quantum yields of decomposition ((8.2 ± 0.7) × 10^-2 photon^-1) and homo-dimerization ((1.2 ± 0.3) × 10^-3 photon^-1) of solid L-alanine (Ala), and of decomposition of the dimer (0.24 ± 0.06 photon^-1), induced by VUV light with an energy of 7.2 eV. Using these quantum yields, the half-life of L-Ala on the surface of a space object in the present Earth orbit was estimated to be about 52 days, even when only photons with an energy of 7.2 eV emitted from the present Sun were considered. The actual half-life of solid L-Ala on the surface of a space object orbiting the present-day Earth would certainly be much shorter than our estimate, because of the added effect of photons and particles of other energies. We therefore propose that L-Ala needs to be shielded from solar VUV in protected environments, such as the interior of a meteorite, within a time scale of days after synthesis to ensure its arrival on the primitive Earth.
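As a numerical sanity check on the scale of such an estimate, first-order photolysis gives t½ = ln 2 / k with k = (quantum yield) × (photon-absorption rate). The sketch below is illustrative only: it takes the quantum yield and half-life quoted in the abstract and backs out the implied 7.2 eV photon-absorption rate per molecule, a quantity the abstract does not state.

```python
import math

PHI_DECOMP = 8.2e-2       # decomposition quantum yield per absorbed photon (from the abstract)
HALF_LIFE_S = 52 * 86400  # ~52-day half-life in Earth orbit (from the abstract), in seconds

# First-order photolysis: N(t) = N0 * exp(-k t), so t_half = ln(2) / k,
# with k = PHI_DECOMP * r_abs, where r_abs is the 7.2 eV photon-absorption
# rate per molecule (backed out here, not given in the abstract).
k = math.log(2) / HALF_LIFE_S   # effective decay constant, 1/s
r_abs = k / PHI_DECOMP          # implied absorption rate, photons/s per molecule

print(f"k = {k:.3e} 1/s, implied absorption rate = {r_abs:.3e} photons/s")
```

Any additional destruction channel (other photon energies, particles) adds to k and shortens the half-life accordingly, which is why the authors call 52 days an upper bound.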

  9. Micro-vision-based displacement measurement with high accuracy (United States)

    Lu, Qinghua; Zhang, Xianmin; Fan, Yanbin


Micro-motion stages are widely used in micro/nano manufacturing technology. In this paper, an integrated approach for measuring the micro-displacement of a micro-motion stage is proposed, which incorporates a motion estimation algorithm into computer micro-vision. First, the basic principle of computer micro-vision measurement is analyzed. Then, a robust multiscale motion estimation algorithm for micro-motion measurement is proposed. Finally, the micro-displacement of a micro-motion stage based on piezoelectric ceramic actuators and compliant mechanisms is measured using the integrated approach. The maximal bias of the proposed approach was 13 nm. Experimental results show that the new integrated method can measure micro-displacement with nanometer accuracy.

  10. A complex network-based importance measure for mechatronics systems (United States)

    Wang, Yanhui; Bi, Lifeng; Lin, Shuai; Li, Man; Shi, Hao


In view of the negative impact of functional dependency, this paper proposes an alternative importance measure, called Improved PageRank (IPR), for measuring the importance of components in mechatronic systems. IPR is a meaningful extension of centrality measures in complex networks that considers the usage reliability of components and the functional dependency between components to make importance measures more useful. Our work makes two important contributions. First, it integrates the literature on mechatronic architecture and complex network theory to define a component network. Second, based on the notion of a component network, IPR is applied to identify important components. The IPR component importance measures, and an algorithm that performs stochastic ordering of components to account for the time-varying nature of component usage reliability and functional dependency, are illustrated with a component network of a bogie system consisting of 27 components.
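To make the centrality idea concrete, here is a minimal sketch of plain power-iteration PageRank on a tiny dependency network. The component names and edges are hypothetical, and this is the uniform-weight baseline that IPR extends (the paper adds reliability-weighted edges, which are not reproduced here):

```python
def pagerank(adj, d=0.85, iters=100):
    """Plain power-iteration PageRank on an adjacency dict {node: [successors]}.
    Uniform edge weights; IPR in the paper would weight edges by usage
    reliability and functional dependency."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if not out:                       # dangling node: spread its rank evenly
                for u in nodes:
                    new[u] += d * rank[v] / n
            else:
                for u in out:
                    new[u] += d * rank[v] / len(out)
        rank = new
    return rank

# Toy 4-component dependency network (hypothetical, for illustration only).
net = {"motor": ["gearbox"], "gearbox": ["axle"], "axle": [], "sensor": ["motor", "axle"]}
r = pagerank(net)
```

Components that many others depend on (here "axle") accumulate rank, which is the intuition behind using a PageRank-style score as an importance measure.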

  11. Image based method for aberration measurement of lithographic tools (United States)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa


Information about the lens aberration of lithographic tools is important, as it directly affects the intensity distribution in the image plane. Zernike polynomials are commonly used for the mathematical description of lens aberrations. Owing to their lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not yield a simple, intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools that only requires measuring two intensity distributions. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
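Once the relation between image data and Zernike coefficients is linear, recovery reduces to a least-squares solve. The sketch below assumes a hypothetical precomputed sensitivity matrix A (the paper derives its version from the partially coherent imaging equations, which are not reproduced here) and recovers the coefficients non-iteratively:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model: delta_I = A @ z, where z holds the unknown
# Zernike coefficients and A is a precomputed sensitivity matrix. Both are
# stand-ins for the matrices the paper derives from the imaging model.
n_pixels, n_zernike = 200, 9
A = rng.normal(size=(n_pixels, n_zernike))
z_true = rng.normal(scale=0.05, size=n_zernike)                # small aberrations
delta_I = A @ z_true + rng.normal(scale=1e-6, size=n_pixels)   # "measured" image difference

# Non-iterative recovery by linear least squares:
z_est, *_ = np.linalg.lstsq(A, delta_I, rcond=None)
```

Because no iteration is involved, the cost is a single factorization of A, which is the efficiency argument the abstract makes.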

  12. Drone based measurement system for radiofrequency exposure assessment. (United States)

    Joseph, Wout; Aerts, Sam; Vandenbossche, Matthias; Thielens, Arno; Martens, Luc


For the first time, a method to assess radiofrequency (RF) electromagnetic field (EMF) exposure of the general public in real environments with a true free-space antenna system is presented. Using lightweight electronics and multiple antennas placed on a drone, it is possible to perform exposure measurements. This technique will enable researchers to measure three-dimensional RF-EMF exposure patterns accurately in the future, and at locations currently difficult to access. A measurement procedure and appropriate measurement settings have been developed. As an application, outdoor measurements are performed as a function of height, up to 60 m, for Global System for Mobile Communications (GSM) 900 MHz base station exposure. Bioelectromagnetics. © 2016 Wiley Periodicals, Inc.

  13. A Secure LFSR Based Random Measurement Matrix for Compressive Sensing (United States)

    George, Sudhish N.; Pattathil, Deepthi P.


In this paper, a novel approach is presented for generating the secure measurement matrix for compressive sensing (CS) based on a linear feedback shift register (LFSR). The basic idea is to select the successive states of the LFSR as the random entries of the measurement matrix and normalize these values to obtain independent and identically distributed (i.i.d.) random variables with zero mean and variance 1/N, where N is the number of input samples. The initial seed of the LFSR acts as the key that provides security to the user. Since the measurement matrix is generated from the LFSR system, the memory overhead of storing the measurement matrix is avoided. Moreover, the proposed system provides security while maintaining the robustness to noise of the CS system. The proposed system is validated with different block-based CS techniques for images. To enhance security, different blocks of an image are measured with different measurement matrices, so that the proposed encryption system can withstand a known-plaintext attack. A modulo division circuit is used to reseed the LFSR system to generate multiple random measurement matrices, whereby after each fundamental period of the LFSR, the feedback polynomial of the modulo circuit is modified in terms of a chaotic value. The proposed secure, robust CS paradigm for images is subjected to several forms of attack and proves resistant to them. Experimental analysis shows that the proposed system provides better performance than its counterparts.
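A minimal sketch of the construction, under stated assumptions: a 16-bit Fibonacci LFSR with a maximal-length tap set generates the entries, which are then shifted and scaled toward zero mean and variance 1/N. The tap polynomial, seed, and matrix dimensions here are illustrative choices, not the paper's parameters:

```python
import statistics

def lfsr_states(seed, n, taps=(16, 14, 13, 11), width=16):
    """n successive states of a Fibonacci LFSR. The taps correspond to the
    maximal-length polynomial x^16 + x^14 + x^13 + x^11 + 1; the seed plays
    the role of the secret key."""
    mask = (1 << width) - 1
    state = seed & mask
    out = []
    for _ in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1
        state = ((state << 1) | bit) & mask
        out.append(state)
    return out

def measurement_matrix(seed, m, n):
    """m x n measurement matrix from LFSR states, normalized to zero mean
    and variance 1/n as the construction requires."""
    max_val = (1 << 16) - 1
    u = [s / max_val for s in lfsr_states(seed, m * n)]
    mu = statistics.mean(u)
    var = statistics.pvariance(u)
    scale = (1.0 / (n * var)) ** 0.5
    vals = [(x - mu) * scale for x in u]
    return [vals[i * n:(i + 1) * n] for i in range(m)]

Phi = measurement_matrix(seed=0xACE1, m=8, n=32)
```

Because the matrix is regenerated on demand from the seed, neither sender nor receiver has to store Φ, which is the memory argument made in the abstract.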

  14. A reliability measure of protein-protein interactions and a reliability measure-based search engine. (United States)

    Park, Byungkyu; Han, Kyungsook


Many methods for estimating the reliability of protein-protein interactions are based on the topology of protein-protein interaction networks. This paper describes a new reliability measure for protein-protein interactions which does not rely on network topology, but instead expresses biological information on functional roles, sub-cellular localisations and protein classes as a scoring schema. The new measure is useful for filtering out many spurious interactions, as well as for estimating the reliability of protein interaction data. In particular, the reliability measure can be used to search databases for protein-protein interactions with a desired reliability. The reliability-based search engine is available online; we believe it is the first search engine for interacting proteins made available to the public. The search engine and the reliability measure of protein interactions should provide useful information for determining which proteins to focus on.

  15. Measurement model equivalence in web- and paper-based surveys

    African Journals Online (AJOL)

Measurement model equivalence in web- and paper-based surveys. N. Martins. ABSTRACT: The aim of this research is to investigate whether web-based ... panels for survey research and found that the use of internet panels would continue to grow and that it would be ...... (q26–q29) and Teamwork (q93–q97). Items q17 ...

  16. Comparison of pharmacy-based measures of medication adherence

    Directory of Open Access Journals (Sweden)

    Vollmer William M


Background: Pharmacy databases are commonly used to assess medication usage, and a number of measures have been developed to quantify patients' adherence to medication. An extensive literature now supports these measures, although few studies have systematically compared the properties of different adherence measures. Methods: As part of an 18-month randomized clinical trial assessing the impact of automated telephone reminders on adherence to inhaled corticosteroids (ICS) among 6903 adult members of a managed care organization, we computed eight pharmacy-based measures of ICS adherence using outpatient pharmacy dispensing records obtained from the health plan's electronic medical record. We used simple descriptive statistics to compare the relative performance characteristics of these measures. Results: Comparative analysis found a relative upward bias in adherence estimates for those measures that require at least one dispensing event to be calculated. Measurement strategies that require a second dispensing event show even greater upward bias. These biases are greatest with shorter observation times. Furthermore, requiring a dispensing event meant that these measures could not be defined for large numbers of individuals (17-32% of participants in this study). Measurement strategies that do not require a dispensing event appear least vulnerable to these biases and can be calculated for everyone; however, they require additional assumptions and data (e.g., pre-intervention dispensing data) to support their validity. Conclusions: Many adherence measures require one, or sometimes two, dispensings in order to be defined. Since such measures assume that all dispensed medication is used as directed, they have a built-in upward bias that is especially pronounced when they are calculated over relatively short timeframes. Trial registration: The study was funded by grant R01HL83433 from the National Heart, Lung and
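The abstract does not name the eight measures, but one widely used dispensing-based adherence measure is the proportion of days covered (PDC); the sketch below is a generic PDC calculation with made-up fill dates, shown only to illustrate how such measures are computed from dispensing records:

```python
from datetime import date, timedelta

def proportion_days_covered(dispensings, start, end):
    """Proportion of days covered (PDC): the fraction of days in the
    observation window on which the patient had medication on hand.
    `dispensings` is a list of (fill_date, days_supply) tuples."""
    covered = set()
    for fill, supply in dispensings:
        for d in range(supply):
            day = fill + timedelta(days=d)
            if start <= day <= end:
                covered.add(day)
    window = (end - start).days + 1
    return len(covered) / window

# Hypothetical record: two 30-day fills over a 91-day window.
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 15), 30)]
pdc = proportion_days_covered(fills, date(2024, 1, 1), date(2024, 3, 31))  # 60/91
```

Note how the measure is undefined without at least one fill, which is exactly the source of the selection bias the study discusses.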

  17. Molecular imprinting sensor based on quantum weak measurement. (United States)

    Li, Dongmei; He, Qinghua; He, Yonghong; Xin, Meiguo; Zhang, Yilong; Shen, Zhiyuan


A new type of sensing protocol, based on the high-precision metrology of quantum weak measurement, is proposed for the first time for molecularly imprinted polymer (MIP) sensors. The feasibility, sensitivity and selectivity of the weak-measurement-based MIP (WMMIP) sensor were experimentally demonstrated with bovine serum albumin (BSA). The weak measurement system exhibits high sensitivity to the optical phase shift corresponding to the refractive index change induced by the specific capture of target protein molecules at the recognition sites. The recognition process can be characterized by the central-wavelength shift of the output spectra through weak-value amplification. In our experiment, we prepared BSA@MIP with a modified reversed-phase microemulsion method and coated it on the internal surface of the measuring channels assembled into the Mach-Zehnder (MZ) interferometer of the optical weak measurement system. The design of this home-built optical system makes it possible to detect the analyte in real time. The dynamic process of the specific adsorption, with a concentration response to BSA from 5 × 10^-4 to 5 × 10^-1 μg/L, was achieved with a limit of detection (LOD) of 8.01 × 10^-12 g/L. The WMMIP offers advantages in accuracy, fast response and low cost. Furthermore, real-time monitoring can substantially improve the performance of MIP in molecular analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Integrated method for the measurement of trace nitrogenous atmospheric bases

    Directory of Open Access Journals (Sweden)

    D. Key


Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few are applicable to the complete range of potential analytes, and fewer still are convenient to implement using instrumentation that is standard in most laboratories. In this work, an integrated approach to measuring trace atmospheric gaseous nitrogenous bases has been developed and validated. The method uses a simple acid-scrubbing step to capture and concentrate the bases as their phosphite salts, which are then derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, and convenient to implement, and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  19. An Adaptive Algorithm for Pairwise Comparison-based Preference Measurement

    DEFF Research Database (Denmark)

    Meissner, Martin; Decker, Reinhold; Scholz, Sören W.


    The Pairwise Comparison‐based Preference Measurement (PCPM) approach has been proposed for products featuring a large number of attributes. In the PCPM framework, a static two‐cyclic design is used to reduce the number of pairwise comparisons. However, adaptive questioning routines that maximize ...

  20. Robust Similarity Measure for Spectral Clustering Based on Shared Neighbors

    National Research Council Canada - National Science Library

    Ye, Xiucai; Sakurai, Tetsuya


    ...) graph, which cannot reveal the real clusters when the data are not well separated. In this paper, to improve the spectral clustering, we consider a robust similarity measure based on the shared nearest neighbors in a directed kNN graph...
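The shared-nearest-neighbour idea can be sketched compactly: two points are considered similar when their directed k-NN lists overlap, which stays informative even when raw distances between clusters are comparable. This toy version uses 1-D points and uniform overlap counting; the paper's exact weighting may differ:

```python
def snn_similarity(points, k=2):
    """Shared-nearest-neighbour similarity from a directed k-NN graph:
    sim(i, j) = |kNN(i) ∩ kNN(j)| / k. A sketch of the idea only."""
    n = len(points)
    dist = [[abs(points[i] - points[j]) for j in range(n)] for i in range(n)]
    # directed k-NN lists (self excluded)
    nbrs = [set(sorted(range(n), key=lambda j: dist[i][j])[1:k + 1]) for i in range(n)]
    return [[len(nbrs[i] & nbrs[j]) / k for j in range(n)] for i in range(n)]

# Two well-separated 1-D groups: {0, 1, 2} and {10, 11, 12}.
sim = snn_similarity([0.0, 1.0, 2.0, 10.0, 11.0, 12.0])
```

Points in the same group share neighbours (nonzero similarity), while points in different groups share none, which is the property spectral clustering exploits.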

  1. Hydrogel-based sensor for CO2 measurements

    NARCIS (Netherlands)

    Herber, S.; Olthuis, Wouter; Bergveld, Piet; van den Berg, Albert


A hydrogel-based sensor for CO2 measurements is presented. The sensor consists of a pressure sensor and a porous silicon cover, with a pH-sensitive hydrogel confined between the two parts. Furthermore, the porous cover contains a bicarbonate solution and a gas-permeable membrane. CO2 reacts with the

  2. Upconversion-based lidar measurements of atmospheric CO2

    DEFF Research Database (Denmark)

    Høgstedt, Lasse; Fix, Andreas; Wirth, Martin


For the first time, an upconversion-based detection scheme is demonstrated for lidar measurements of atmospheric CO2 concentrations, with a hard target at a range of 3 km and atmospheric backscatter from a range of ~450 m. The pulsed signals at 1572 nm are upconverted to 635 nm...

  3. Curriculum-Based Measurement of Reading: Recent Advances (United States)

    Madelaine, Alison; Wheldall, Kevin


    A significant amount of literature has been published on curriculum-based measurement (CBM) in the last decade. Much of the conceptual work on CBM was done in the 1980s and early 1990s. This review concentrates on the large body of research published within the last 10 years on CBM of reading, including further research demonstrating its technical…

  4. Metrology of human-based and other qualitative measurements (United States)

    Pendrill, Leslie; Petersson, Niclas


The metrology of human-based and other qualitative measurements is in its infancy: concepts such as traceability and uncertainty are as yet poorly developed. This paper reviews how a measurement system analysis approach, particularly one invoking as performance metric the ability of a probe (such as a human being) acting as a measurement instrument to make a successful decision, can enable a more general metrological treatment of qualitative observations. Measures based on human observations are typically qualitative, not only in sectors such as health care, services and safety, where the human factor is obvious, but also in customer perception of traditional products of all kinds. A principal challenge is that the usual statistical tools employed for expressing measurement accuracy and uncertainty will probably not work reliably if the relations between distances on different portions of the scales are not fully known, as is typical of ordinal or other qualitative measurements. A key enabling insight is to connect the treatment of decision risks associated with measurement uncertainty to generalized linear modelling (GLM). Handling qualitative observations in this way unites information theory with the perceptive identification and choice paradigms of psychophysics. The Rasch invariant-measure psychometric GLM approach in particular enables a proper treatment of ordinal data, a clear separation of probe and item attribute estimates, simple expressions for instrument sensitivity, etc. Examples include two aspects of the care of breast cancer patients, from diagnosis to rehabilitation. The Rasch approach leads in turn to opportunities for establishing metrological references for the quality assurance of qualitative measurements. In psychometrics, one could imagine a certified reference for a knowledge challenge, for example, a particular concept in understanding physics, or for the product quality of a certain health care service. Multivariate methods, such as Principal Component
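The Rasch model at the core of this approach is a one-parameter logistic GLM: the probability of a successful response depends only on the difference between a person's ability and an item's difficulty, both on the same logit scale. A minimal sketch (the numeric values are illustrative):

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability that a respondent of ability theta succeeds
    on an item of difficulty b, P = 1 / (1 + exp(-(theta - b))).
    Both parameters live on a common logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

p_easy = rasch_prob(theta=1.0, b=0.0)   # ability one logit above difficulty
p_match = rasch_prob(theta=0.5, b=0.5)  # ability equals difficulty -> 0.5
```

The additive structure in the logit is what gives the "clear separation of probe and item attribute estimates" mentioned above: person and item parameters can be estimated independently of each other's distribution.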

  5. Software-defined Radio Based Measurement Platform for Wireless Networks. (United States)

    Chao, I-Chun; Lee, Kang B; Candell, Richard; Proctor, Frederick; Shen, Chien-Chung; Lin, Shinn-Yan


    End-to-end latency is critical to many distributed applications and services that are based on computer networks. There has been a dramatic push to adopt wireless networking technologies and protocols (such as WiFi, ZigBee, WirelessHART, Bluetooth, ISA100.11a, etc.) into time-critical applications. Examples of such applications include industrial automation, telecommunications, power utility, and financial services. While performance measurement of wired networks has been extensively studied, measuring and quantifying the performance of wireless networks face new challenges and demand different approaches and techniques. In this paper, we describe the design of a measurement platform based on the technologies of software-defined radio (SDR) and IEEE 1588 Precision Time Protocol (PTP) for evaluating the performance of wireless networks.

  6. Method of pectus excavatum measurement based on structured light technique (United States)

    Glinkowski, Wojciech; Sitnik, Robert; Witkowski, Marcin; Kocoń, Hanna; Bolewicki, Pawel; Górecki, Andrzej


We present an automatic method for assessing pectus excavatum severity based on an optical 3-D markerless shape measurement. A four-directional measurement system based on a structured-light projection method is built to capture the shape of the patients' body surface. The system setup is described and typical measurement parameters are given. The automated data analysis path is explained. Its main steps are: normalization of trunk model orientation, cutting the model into slices, analysis of each slice shape, selection of the proper slice for the assessment of the patient's pectus excavatum, and calculation of its shape parameter. We develop a new shape parameter (I3ds) that shows high correlation with the computed tomography (CT) Haller index widely used for the assessment of pectus excavatum. Clinical results and the evaluation of the developed indexes are presented.

  7. Observer-based Coal Mill Control using Oxygen Measurements

    DEFF Research Database (Denmark)

    Andersen, Palle; Bendtsen, Jan Dimon; S., Tom


This paper proposes a novel approach to coal flow estimation in pulverized coal mills, which utilizes measurements of oxygen content in the flue gas. Pulverized coal mills are typically not equipped with sensors that detect the amount of coal injected into the furnace. This makes control of the coal flow difficult, causing stability problems and limiting the plant's load-following capabilities. To alleviate this problem without having to rely on expensive flow measurement equipment, a novel observer-based approach is investigated. A Kalman filter based on measurements of the combustion air flow led into the furnace and the oxygen concentration in the flue gas is designed to estimate the actual coal flow injected into the furnace. With this estimate, it becomes possible to close an inner loop around the coal mill itself, giving better disturbance rejection. The approach is validated against...
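The observer idea can be illustrated with a toy scalar Kalman filter: treat the unknown coal flow as a random-walk state and fuse noisy oxygen-balance measurements of it. The model, noise variances, and "true" flow below are illustrative stand-ins, not the plant model from the paper:

```python
import random

random.seed(1)

# Toy scalar Kalman filter estimating coal flow (kg/s) from a noisy
# oxygen-derived measurement z = flow + v. Constants are illustrative.
A, H = 1.0, 1.0        # random-walk dynamics, direct observation
Q, R = 0.01, 0.25      # process / measurement noise variances

x_est, P = 0.0, 1.0    # initial estimate and covariance
true_flow = 5.0        # hypothetical actual coal flow

for _ in range(200):
    z = true_flow + random.gauss(0, R ** 0.5)   # oxygen-derived measurement
    # predict
    x_pred = A * x_est
    P_pred = A * P * A + Q
    # update
    K = P_pred * H / (H * P_pred * H + R)       # Kalman gain
    x_est = x_pred + K * (z - H * x_pred)
    P = (1 - K * H) * P_pred

print(round(x_est, 2))  # settles near the true flow
```

In the paper's setting the measurement comes from an oxygen balance over the furnace rather than a direct flow sensor, but the estimator structure (predict, then correct with the gain K) is the same.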

  8. Depth Measurement Based on Infrared Coded Structured Light

    Directory of Open Access Journals (Sweden)

    Tong Jia


Depth measurement is a challenging problem in computer vision research. In this study, we first design a new grid pattern and develop a sequence coding and decoding algorithm to process the pattern. Second, we propose a linear fitting algorithm to derive the linear relationship between object depth and pixel shift. Third, we obtain depth information about an object based on this linear relationship. Moreover, 3D reconstruction is implemented based on the Delaunay triangulation algorithm. Finally, we utilize the regularity of the error curves to correct systematic errors and improve measurement accuracy. The experimental results show that the accuracy of depth measurement is related to the step length of the moving object.
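The linear-fitting step amounts to ordinary least squares on calibration pairs of pixel shift and known depth, after which depth follows from any new shift measurement. The calibration values below are made up for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares line y = a*x + b, as used to fit the linear
    depth-vs-pixel-shift relation. Data values here are hypothetical."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical calibration: pixel shift (px) vs. object depth (mm).
shift = [0.0, 1.0, 2.0, 3.0, 4.0]
depth = [10.0, 12.1, 13.9, 16.0, 18.1]
a, b = fit_line(shift, depth)

def depth_from_shift(s):
    """Depth predicted from a new pixel-shift measurement."""
    return a * s + b
```

The residuals of such a fit are exactly the "error curves" whose regularity the authors exploit to correct systematic errors.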

  9. Computer Vision Based Measurement of Wildfire Smoke Dynamics

    Directory of Open Access Journals (Sweden)



This article presents a novel method for measuring wildfire smoke dynamics based on computer vision and augmented reality techniques. Smoke dynamics is an important feature in video smoke detection that can distinguish smoke from visually similar phenomena. However, most existing smoke detection systems are not capable of measuring the real-world size of detected smoke regions. Using computer vision and GIS-based augmented reality, we measure the real dimensions of smoke plumes and observe their change in size over time. The measurements are performed on offline video data with known camera parameters and location. The observed data are analyzed in order to create a classifier that can eliminate certain categories of false alarms induced by phenomena with different dynamics than smoke. We carried out an offline evaluation in which we measured the improvement in the detection process achieved using the proposed smoke dynamics characteristics. The results show a significant increase in algorithm performance, especially in terms of reducing the false alarm rate. It follows that the proposed method for measuring smoke dynamics can be used to improve existing smoke detection algorithms, or be taken into account when designing new ones.

  10. Damage detection technique by measuring laser-based mechanical impedance (United States)

    Lee, Hyeonseok; Sohn, Hoon


This study proposes a method for measuring mechanical impedance using noncontact laser ultrasound. The measurement of mechanical impedance has been of great interest in nondestructive testing (NDT) and structural health monitoring (SHM), since mechanical impedance is sensitive even to small structural defects. Conventional impedance measurements, however, have been based on electromechanical impedance (EMI) using contact-type piezoelectric transducers, whose performance is degraded by the effects of (a) Curie temperature limitations, (b) electromagnetic interference, (c) bonding layers, etc. This study aims to overcome the limitations of conventional EMI measurement by utilizing laser-based mechanical impedance (LMI) measurement. The LMI response, which is equivalent to a steady-state ultrasound response, is generated by directing a pulsed laser beam at the target structure, and is acquired by measuring the out-of-plane velocity using a laser vibrometer. The formation of the LMI response is studied through thermo-mechanical finite element analysis. The feasibility of applying the LMI technique to damage detection is experimentally verified using a pipe specimen in a high-temperature environment.

  11. Accuracy of MRI-based Magnetic Susceptibility Measurements (United States)

    Russek, Stephen; Erdevig, Hannah; Keenan, Kathryn; Stupic, Karl

Magnetic Resonance Imaging (MRI) is increasingly used to map tissue susceptibility to identify microbleeds associated with brain injury and the pathologic iron deposits associated with neurologic diseases such as Parkinson's and Alzheimer's disease. Field distortions with a resolution of a few parts per billion can be measured using MRI phase maps. The field distortion map can be inverted to obtain a quantitative susceptibility map. To determine the accuracy of MRI-based susceptibility measurements, a set of phantoms with paramagnetic salts and nano-iron gels was fabricated. The shapes and orientations of the features were varied. The measured susceptibility of a 1.0 mM GdCl3 solution in water as a function of temperature agreed well with the theoretical predictions, assuming Gd^3+ is spin 7/2. The MRI susceptibility measurements were compared with SQUID magnetometry. The paramagnetic susceptibility sits on top of the much larger diamagnetic susceptibility of water (-9.04 × 10^-6), which leads to errors in the SQUID measurements. To extract the paramagnetic contribution using standard magnetometry, measurements must be made down to low temperature (2 K). MRI-based susceptometry is shown to be as accurate as, or more accurate than, standard magnetometry and susceptometry techniques.
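The theoretical prediction being checked follows from the Curie law with the spin-7/2 effective moment of Gd³⁺. The sketch below computes the effective moment and the Curie-law molar susceptibility at 300 K from standard physical constants; the specific temperature is an illustrative choice:

```python
import math

# Effective paramagnetic moment of Gd3+ (spin-only, J = S = 7/2, g = 2),
# the assumption the MRI measurements were checked against.
g, J = 2.0, 3.5
p_eff = g * math.sqrt(J * (J + 1))   # in Bohr magnetons, ~7.94

# Curie law (SI): chi_mol = mu0 * N_A * (p_eff * mu_B)^2 / (3 * k_B * T).
mu0 = 4 * math.pi * 1e-7        # vacuum permeability, T m / A
muB = 9.2740100783e-24          # Bohr magneton, J / T
kB = 1.380649e-23               # Boltzmann constant, J / K
NA = 6.02214076e23              # Avogadro constant, 1 / mol

C = mu0 * NA * (p_eff * muB) ** 2 / (3 * kB)   # Curie constant, m^3 K / mol
chi_300K = C / 300.0                            # molar susceptibility at 300 K, m^3 / mol
```

Multiplying chi_300K by the 1 mM concentration (1 mol/m³) gives a volume susceptibility of a few times 10⁻⁷ (SI), small against the water background of -9.04 × 10⁻⁶, which is why the paramagnetic signal is hard to extract by bulk magnetometry.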

  12. Microcontroller Power Consumption Measurement Based on PSoC

    Directory of Open Access Journals (Sweden)

    S. P. Janković


Microcontrollers are often used as central processing elements in embedded systems. Because of the different sleep and performance modes that microcontrollers support, their power consumption may have a high dynamic range, over 100 dB. In this paper, a data acquisition (DAQ) system for measuring and analyzing the power consumption of microcontrollers is presented. The DAQ system consists of a current measurement circuit using the potentiostat technique, a DAQ device based on the PSoC 5LP system-on-chip, and a Python PC program for the analysis, storage and visualization of the measured data. Both the Successive Approximation Register (SAR) and Delta-Sigma (DS) ADCs contained in the PSoC 5LP are used for measuring the voltage drop across the shunt resistor. The SAR ADC samples data at a 10 times higher rate than the DS ADC, so the input range of the DS ADC can be adjusted based on data measured by the SAR ADC, extending the current measuring range by 28%. The implemented DAQ device is connected to a computer through a USB port and tested with the developed Python PC program.

  13. Half Life - The divided life of Bruno Maximovitch Pontecorvo

    CERN Multimedia

    CERN. Geneva


When Bruno Pontecorvo fled to the USSR at the height of the Cold War in 1950, half way through his life, the British Government, MI5 and the FBI tried to portray him as scientifically insignificant, and to imply that his disappearance posed no threat to the West. In reality, Pontecorvo was already one of the leading experts in nuclear physics, and recently declassified papers reveal that a prime agenda of the FBI and MI5 was to cover up their errors. During his time in the USSR he made major contributions to physics, and justified the sobriquet "Mr Neutrino". This talk will reveal the background to his sudden flight, and also evaluate his work in theoretical physics in the aftermath of his arrival in Dubna. Previously secret documents now show that he proposed the concept of associated production before Gell-Mann and Pais, and that he had an idea to discover the neutrino at a reactor. He may be considered the father of neutrino astronomy for his successful prediction that neutrinos from a supernova could be detected, b...

  14. What Is the Half-Life of Basketball Teams? (United States)

    Hrepic, Zdeslav


What do basketball teams have in common with radioactive nuclei? It turns out there is more here than first meets the eye. The National Collegiate Athletic Association (NCAA) basketball tournament feeds fans' craving when NBA competitions are not in swing, and the college tournament time has been referred to as "March Madness" or…

  15. CAIP system for vision-based on-machine measurement (United States)

    Xia, Rui-xue; Lu, Rong-sheng; Shi, Yan-qiong; Li, Qi; Dong, Jing-tao; Liu, Ning


Computer-Aided Inspection Planning (CAIP) is an important module of modern dimensional measuring instruments, and utilizing CAIP for the inspection of machined parts is an important indicator of the level of automation and intelligence. Aimed at the characteristics of visual inspection, a CAIP system for vision-based On-Machine Measurement (OMM) is developed on a CAD development platform whose kernel is Open CASCADE. The working principle of the vision-based OMM system is introduced, and the key technologies of CAIP are covered, including inspection information extraction, sampling strategy, inspection path planning, inspection code generation, inspection procedure verification, data post-processing, and comparison. The entire system was verified on a CNC milling machine, and relevant examples show that the system can efficiently accomplish automatic inspection planning tasks for common parts.

  16. Global Observer-Based Attitude Controller Using Direct Inertial Measurements

    Directory of Open Access Journals (Sweden)

    Saâdi Bouhired


    Full Text Available In this work, we address the problem of global attitude control using direct inertial measurements. When direct inertial measurements are used to observe the rigid-body attitude, it is shown that, due to a geometrical obstruction, global asymptotic stability is impossible to achieve. In fact, for a particular initial condition the tracking-error quaternion converges to a pure imaginary quaternion formed by an eigenvector of a characteristic matrix related to the inertial constant and known vectors. Our proposition consists of adding a dynamic signal to force the rigid body to escape from such a situation. The proposed observer-based controller is synthesized from a single Lyapunov function, and a stability analysis shows that it globally and asymptotically stabilizes the rigid-body attitude at the desired attitude. The effectiveness of the proposed observer-based controller is confirmed by simulation results.

  17. The Pragmatic Utility of Watson-Based Caring Measures. (United States)

    Atiyeh, Huda; Ahmad, Muayyad; Alslman, Eman Tariq; Hani, Manar Ali Bani


    This study aims to clarify the caring concept with emphasis on the pragmatic utility of caring measures. The Morse et al. (Morse, Hupcey, Mitcham, & Lenz, 1996; Morse, Mitcham, Hupcey, & Tasón, 1996) criteria for concept maturity were used as a framework. A literature review of the Watson-based caring concept was undertaken to evaluate its logical, epistemological, linguistic, and pragmatic parameters. The concept of caring as conceptualized by Watson and operationalized through different measures appeared mature. Despite their differences, the shorter caring scales were effective in capturing caring behavior. The caring concept was further clarified, making it more feasible to use caring measures with confidence for the assessment, evaluation, and modification of caring behavior.

  18. High Precision Infrared Temperature Measurement System Based on Distance Compensation

    Directory of Open Access Journals (Sweden)

    Chen Jing


    Full Text Available To meet the need for real-time remote monitoring of human body surface temperature in optical rehabilitation therapy, a non-contact high-precision real-time temperature measurement method based on distance compensation was proposed, and the system design was carried out. The microcontroller controls the infrared temperature measurement module and the laser ranging module to collect temperature and distance data. The compensation formula of temperature with distance was fitted according to the least-squares method. Testing was performed on different individuals to verify the accuracy of the system. The results indicate that the designed non-contact infrared temperature measurement system has a residual error of less than 0.2 °C and a response time of less than 0.1 s in the range of 0 to 60 cm. This provides a reference for developing long-distance temperature measurement equipment in optical rehabilitation therapy.
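    The abstract does not give the fitted compensation formula, so as a minimal sketch, a linear error-vs-distance model fitted by least squares can illustrate the idea; the linear form and all calibration numbers below are assumptions for demonstration only.

    ```python
    # Illustrative sketch: fit a linear distance-compensation term for an IR
    # thermometer by ordinary least squares. The linear model and the synthetic
    # calibration data are assumptions, not the paper's actual formula.

    def fit_linear(xs, ys):
        """Closed-form least-squares line y = a + b*x."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        return a, b

    def compensate(t_measured, distance_cm, a, b):
        """Add the fitted distance-dependent correction to a raw reading."""
        return t_measured + a + b * distance_cm

    # Synthetic calibration: true temperature 36.6 C, readings drift with distance.
    distances = [0, 10, 20, 30, 40, 50, 60]            # cm
    readings  = [36.6 - 0.02 * d for d in distances]   # assumed drift model
    errors    = [36.6 - r for r in readings]
    a, b = fit_linear(distances, errors)
    ```

    After calibration, `compensate` corrects a raw reading taken at any distance inside the calibrated 0-60 cm range.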

  19. Specific Emitter Identification Based on the Natural Measure

    Directory of Open Access Journals (Sweden)

    Yongqiang Jia


    Full Text Available Specific emitter identification (SEI) techniques are often used in civilian and military spectrum-management operations, and they are also applied to support the security and authentication of wireless communication. In this letter, a new SEI method based on the natural measure of the one-dimensional component of a chaotic system is proposed. We find that the natural measures of the one-dimensional components of higher-dimensional systems exist and that they are quite diverse for different systems. Based on this principle, the natural measure is used as an RF fingerprint. The natural measure can solve the problems caused by a small amount of data and a low sample rate. The Kullback–Leibler divergence is used to quantify the difference between the natural measures obtained from diverse emitters and to classify them. Data obtained from a real application are used to test the validity of the proposed method. Experimental results show that the proposed method is not only easy to operate but also quite effective, even when the amount of data is small and the sample rate is low.
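    The classification step described above can be sketched as follows: estimate each emitter's natural measure as a normalized histogram and pick the reference with the smallest Kullback–Leibler divergence. The bin count, smoothing constant, and data are illustrative choices, not the letter's settings.

    ```python
    # Sketch: histogram-based estimate of a "natural measure" fingerprint,
    # compared via Kullback-Leibler divergence. Parameters are illustrative.
    from math import log

    def histogram(samples, bins=16, lo=0.0, hi=1.0, eps=1e-9):
        """Normalized histogram with tiny smoothing to avoid empty bins."""
        counts = [eps] * bins
        for s in samples:
            i = min(int((s - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1.0
        total = sum(counts)
        return [c / total for c in counts]

    def kl_divergence(p, q):
        """D(p || q) in nats for two discrete distributions of equal length."""
        return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def classify(sample_hist, reference_hists):
        """Pick the reference emitter whose natural measure is closest in KL."""
        return min(reference_hists,
                   key=lambda name: kl_divergence(sample_hist, reference_hists[name]))
    ```

    A captured signal's histogram is matched against each enrolled emitter's reference histogram; the smoothing constant keeps the divergence finite when a bin is empty in one distribution.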



    R. Kohila*, Dr. K. Arunesh


    Nowadays, measuring document similarity plays an important role in text-related research. There are many applications of document similarity measures, such as plagiarism detection, document clustering, automatic essay scoring, information retrieval, and machine translation. String-based similarity, knowledge-based similarity, and corpus-based similarity are the three major approaches proposed by most researchers to solve problems in document similarity. In thi...
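    As one minimal example of the term-based end of this spectrum, cosine similarity over bag-of-words vectors can be computed in a few lines; the whitespace tokenizer is a simplifying assumption, and real systems use stemming, weighting, or embeddings.

    ```python
    # Minimal term-vector similarity: cosine over bag-of-words counts.
    # Whitespace tokenization is a simplification for illustration.
    from collections import Counter
    from math import sqrt

    def cosine_similarity(doc_a, doc_b):
        a = Counter(doc_a.lower().split())
        b = Counter(doc_b.lower().split())
        dot = sum(a[w] * b[w] for w in a)          # Counter returns 0 if absent
        norm = sqrt(sum(v * v for v in a.values())) * \
               sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0
    ```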

  1. An ABS control logic based on wheel force measurement (United States)

    Capra, D.; Galvagno, E.; Ondrak, V.; van Leeuwen, B.; Vigliani, A.


    The paper presents an anti-lock braking system (ABS) control logic based on the measurement of the longitudinal forces at the hub bearings. The availability of force information makes it possible to design a logic that does not rely on estimating the tyre-road friction coefficient, since it continuously tries to exploit the maximum longitudinal tyre force. The logic is designed by means of computer simulation and then tested on a dedicated hardware-in-the-loop test bench: the experimental results confirm that measured wheel forces can lead to a significant improvement in ABS performance in terms of stopping distance, even on roads with a variable friction coefficient.
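    The core idea of "continuously exploiting the maximum longitudinal tyre force" can be sketched as a simple hill-climbing loop on brake pressure; the controller structure, step size, and tyre-force curve below are all assumptions for illustration, and the paper's actual logic is considerably more elaborate.

    ```python
    # Toy extremum-seeking sketch: with tyre force measured directly, keep
    # stepping brake pressure in the direction that increases force and
    # reverse when the measured force drops past the peak. All parameters
    # and the tyre curve are hypothetical.

    def simulate(tyre_force, p0=20.0, dp=5.0, steps=60):
        """Seek the pressure maximizing tyre_force(p) by sign-switching steps."""
        p = p0
        f_prev = tyre_force(p)
        for _ in range(steps):
            p += dp
            f_now = tyre_force(p)
            if f_now < f_prev:   # overshot the force peak: reverse direction
                dp = -dp
            f_prev = f_now
        return p
    ```

    On any single-peaked force curve the loop climbs to the peak and then oscillates around it, which is the qualitative behaviour an ABS cycle exhibits around the friction optimum.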

  2. Laser based measurement for the monitoring of shaft misalignment.


    Simm, Anthony; Wang, Qing; Huang, Songling; Zhao, Wei


    This paper presents a method for real-time online monitoring of shaft misalignment, which is a common problem in rotating machinery, such as the drive train of wind turbines. A non-contact laser based measurement method is used to monitor positional changes of a rotating shaft in real time while in operation. The results are then used to detect the presence of shaft misalignment. An experimental test rig is designed to measure shaft misalignment and the results from the work show that the tec...

  3. A Laser Based Instrument for MWPC Wire Tension Measurement

    CERN Document Server

    Baldini, W; Evangelisti, F; Germani, S; Landi, L; Savrié, M; Graziani, G; Lenti, M; Lenzi, M; Passaleva, G; Carboni, G; De Capua, S; Kachtchouk, A


    A fast and simple method for measuring the mechanical tension of the wires of Multi-Wire Proportional Chambers (MWPCs) is described. The system is based on commercial components and does not require any electrical connection to the wires or any electric or magnetic field. It has been developed for the quality control of the MWPCs of the Muon Detector of the LHCb experiment under construction at CERN. The system allows a measurement of the wire tension with a precision better than 0.5% within 3-4 seconds per wire.

  4. Toward a Theory-Based Measurement of Culture


    Detmar Straub; Karen Loch; Roberto Evaristo; Elena Karahanna; Mark Srite


    In reviewing the history of the conceptualization and measurement of "culture," one quickly realizes that there is wide-ranging and contradictory scholarly opinion about which values, norms, and beliefs should be measured to represent the concept of "culture." We explore an alternate theory-based view of culture via social identity theory (SIT), which suggests that each individual is influenced by a plethora of cultures and sub-cultures, some ethnic, some national, and some organizationa...

  5. Defining and Computing a Value Based Cyber-Security Measure

    Energy Technology Data Exchange (ETDEWEB)

    Aissa, Anis Ben [University of Tunis, Belvedere, Tunisia]; Abercrombie, Robert K [ORNL]; Sheldon, Frederick T [ORNL]; Mili, Ali [New Jersey Institute of Technology]


    In earlier work, we presented a value-based measure of cybersecurity that quantifies the security of a system in concrete terms, specifically in terms of how much each system stakeholder stands to lose (in dollars per hour of operation) as a result of security threats and system vulnerabilities; the metric varies according to the stakes that each stakeholder has in meeting each security requirement. In this paper, we discuss the specification and design of a system that collects, updates, and maintains all the information that pertains to estimating our cybersecurity measure, and offers stakeholders quantitative means to make security-related decisions.
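    The stakes-weighted structure of such a metric can be sketched as a matrix product: each stakeholder's expected loss is their stake in each security requirement weighted by that requirement's failure probability. The stakeholder names, requirements, and all numbers below are hypothetical.

    ```python
    # Sketch of a value-based security metric: each stakeholder's mean
    # failure cost ($/hour) is their stake in each requirement weighted by
    # the probability that the requirement fails. Names and numbers are
    # hypothetical, for illustration only.

    def mean_failure_cost(stakes, failure_prob):
        """stakes[s][r] = loss ($/h) to stakeholder s if requirement r fails."""
        return {s: sum(req_stakes[r] * failure_prob[r] for r in req_stakes)
                for s, req_stakes in stakes.items()}

    stakes = {
        "operator": {"confidentiality": 120.0, "availability": 300.0},
        "customer": {"confidentiality": 500.0, "availability": 40.0},
    }
    failure_prob = {"confidentiality": 0.01, "availability": 0.05}
    ```

    Because the stakes differ per stakeholder, the same failure probabilities yield different dollar-per-hour exposures for each party, which is exactly what makes the measure stakeholder-specific.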

  6. Measurement Uncertainty Analysis of the Strain Gauge Based Stabilographic Platform

    Directory of Open Access Journals (Sweden)

    Walendziuk Wojciech


    Full Text Available The present article describes the construction of a stabilographic platform which records a standing patient's deflection from their point of balance. The constructed device is composed of a toughened glass slab supported by four force sensors. The force transducers are connected to a measurement system based on a 24-bit ADC, which acquires the slight body movements of a patient. The data are then transferred to the computer in real time, where the analysis is conducted. The article explains the principle of operation as well as the measurement uncertainty algorithm for the COP (Centre of Pressure) coordinates (x, y).
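    The centre-of-pressure computation behind such a platform is a standard force-weighted average of the sensor positions; the 40 cm square sensor layout below is an assumption for illustration.

    ```python
    # Centre of pressure for a plate resting on four force sensors at known
    # positions: the force-weighted average of the sensor coordinates.
    # The 40 cm x 40 cm corner layout is assumed for illustration.

    def center_of_pressure(forces, positions):
        """forces[i] is the vertical load on sensor i at positions[i] = (x, y)."""
        total = sum(forces)
        x = sum(f * p[0] for f, p in zip(forces, positions)) / total
        y = sum(f * p[1] for f, p in zip(forces, positions)) / total
        return x, y

    # Assumed layout: sensors in the corners of a 40 cm x 40 cm plate (cm).
    corners = [(0, 0), (40, 0), (0, 40), (40, 40)]
    ```

    Sampling this at the ADC rate yields the (x, y) sway trajectory whose uncertainty the article analyses.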

  7. Quantum Measurements Based on Photon Number Resolved Detection (United States)

    Silberhorn, Christine


    The characterization of any quantum system requires measurements, which allow an observer to gain information about a performed experiment. The theory of quantum measurements connects the properties of a quantum state, typically defined by its density matrix ρ, and the description of the measurement devices, represented by a positive-operator-valued measure (POVM), with the probabilities of obtaining specific detection outcomes. How we interpret our results depends, on the one hand, on the technical limitations of available detectors and, on the other hand, on our knowledge about the measurement apparatus. Until recently, no practical photon-number-resolving detectors were available; hence most research dealing with multi-photon states has been based on homodyne tomography schemes. A time-multiplexing detector (TMD) that is capable of resolving photon statistics can be built from a fiber network followed by avalanche photo-detection. TMDs enable the direct measurement of count statistics, but their moderate efficiency hampers identifying the photon number of each signal state on a single-shot basis. The POVMs describing this detector correspond to loss-degraded photon number measurements, and a precise calibration of the losses can be utilized to recover the original photon number statistics in ensemble measurements by a loss-inversion method. However, knowledge of the photon statistics is not sufficient to completely characterize a state, because photon counting annihilates any information about the coherences between photon numbers. Nevertheless, TMD measurements can render a complete characterization of a density matrix ρ if the statistics of the displaced states are analyzed. We investigate the capabilities of detector tomography and loss-tolerant detection of photon statistics for the complete characterization of photonic states.
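    The loss-inversion step has a compact textbook form: with detection efficiency η, the observed statistics relate to the true photon statistics through a binomial loss matrix, which is triangular and so invertible by back-substitution. The cutoff and efficiency below are illustrative; a real TMD calibration works with the full POVM of the fiber network.

    ```python
    # Loss-inversion sketch: q[m] = sum_{n>=m} C(n,m) eta^m (1-eta)^(n-m) p[n]
    # maps true photon statistics p to observed counts q; the triangular
    # structure lets p be recovered by back-substitution. Cutoff and
    # efficiency are illustrative values.
    from math import comb

    def apply_loss(p, eta):
        """Observed count distribution for true statistics p and efficiency eta."""
        n_max = len(p) - 1
        return [sum(comb(n, m) * eta**m * (1 - eta)**(n - m) * p[n]
                    for n in range(m, n_max + 1))
                for m in range(n_max + 1)]

    def invert_loss(q, eta):
        """Recover true statistics from observed counts (no noise model)."""
        n_max = len(q) - 1
        p = [0.0] * (n_max + 1)
        for n in range(n_max, -1, -1):
            tail = sum(comb(k, n) * eta**n * (1 - eta)**(k - n) * p[k]
                       for k in range(n + 1, n_max + 1))
            p[n] = (q[n] - tail) / eta**n
        return p
    ```

    In practice the inversion amplifies statistical noise for low η, which is why the abstract stresses a precise calibration of the losses and ensemble (rather than single-shot) measurements.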

  8. Developing a community-based flood resilience measurement standard (United States)

    Keating, Adriana; Szoenyi, Michael; Chaplowe, Scott; McQuistan, Colin; Campbell, Karen


    Given the increased attention to resilience-strengthening in international humanitarian and development work, there has been concurrent interest in its measurement and in the overall accountability of "resilience strengthening" initiatives. The literature is reaching beyond the polemic of defining resilience to its measurement. Similarly, donors increasingly expect organizations to go beyond claiming resilience programming to measuring and showing it. However, key questions must be asked, in particular "Resilience of whom and to what?". There is no one-size-fits-all solution: the approach to measuring resilience depends on the audience and the purpose of the measurement exercise. Deriving a resilience measurement system needs to be based on the question it seeks to answer and needs to be specific. This session highlights key lessons from the Zurich Flood Resilience Alliance approach to developing a flood resilience measurement standard to measure and assess the impact of community-based flood resilience interventions, and to inform decision-making to enhance the effectiveness of these interventions. We draw on experience in methodology development to date, together with lessons from application in two case study sites in Latin America. Attention is given to the use of a consistent measurement methodology for community resilience to floods over time and place; challenges of measuring a complex and dynamic phenomenon such as community resilience; methodological implications of measuring community resilience versus impact on and contribution to this goal; and using measurement and tools such as cost-benefit analysis to prioritize and inform strategic decision making for resilience interventions. The measurement tool follows the five categories of the Sustainable Livelihoods Framework and the 4Rs of complex adaptive systems (robustness, rapidity, redundancy and resourcefulness), together 5C-4R. A recent white paper by the Zurich Flood Resilience Alliance traces the

  9. Observer-Based Fuel Control Using Oxygen Measurement

    DEFF Research Database (Denmark)

    Andersen, Palle; Bendtsen, Jan Dimon; Mortensen, Jan Henrik

    This report describes an attempt to improve the existing control of the coal mills used at the Danish power plant Nordjyllandsværket Unit 3. The coal mills are not equipped with coal flow sensors; thus an observer-based approach is investigated. A nonlinear differential equation model of the boiler is constructed and validated against data obtained at the plant. A Kalman filter based on measurements of the combustion air flow led into the furnace and the oxygen concentration in the flue gas is designed to estimate the actual coal flow. With this estimate, it becomes possible to close an inner loop around the coal...

  10. Automated pavement horizontal curve measurement methods based on inertial measurement unit and 3D profiling data

    Directory of Open Access Journals (Sweden)

    Wenting Luo


    Full Text Available A pavement horizontal curve is designed to serve as a transition between straight segments, and its presence may cause a series of driving-related safety issues for motorists. Since traditional methods for curve geometry investigation are recognized to be time-consuming, labor-intensive, and inaccurate, this study attempts to develop a method that can automatically conduct horizontal curve identification and measurement at the network level. The digital highway data vehicle (DHDV) was utilized for data collection, in which the three Euler angles, driving speed, and acceleration of the survey vehicle were measured with an inertial measurement unit (IMU). The 3D profiling data used for cross-slope calibration were obtained with PaveVision3D Ultra technology at 1 mm resolution. In this study, curve identification was based on the variation of the heading angle, and the curve radius was calculated with a kinematic method, a geometry method, and a lateral acceleration method. In order to verify the accuracy of the three methods, an analysis of variance (ANOVA) test was applied using the control variable of curve radius measured by field test. Based on the measured curve radius, a curve safety analysis model was used to predict the crash rates and safe driving speeds at horizontal curves. Finally, a case study on a 4.35 km road segment demonstrated that the proposed method could efficiently conduct network-level analysis.
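    The two simplest of the radius estimates reduce to standard one-line formulas: kinematically, a vehicle at speed v under lateral acceleration a_lat follows a curve of radius R = v²/a_lat, and geometrically the radius follows from arc length over heading change. These are the textbook relations the abstract names, not the study's full implementation.

    ```python
    # Standard curve-radius relations used by kinematic and geometry methods.

    def radius_kinematic(speed_mps, lateral_accel):
        """R = v^2 / a_lat (circular motion), both in SI units."""
        return speed_mps ** 2 / lateral_accel

    def radius_geometry(arc_length_m, heading_change_rad):
        """R = arc length / heading change, for heading from the IMU yaw angle."""
        return arc_length_m / heading_change_rad
    ```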

  11. On-Line Voltage Stability Assessment based on PMU Measurements

    DEFF Research Database (Denmark)

    Garcia-Valle, Rodrigo; P. Da Silva, Luiz C.; Nielsen, Arne Hejde


    This paper presents a method for on-line monitoring of the risk of voltage collapse based on synchronised phasor measurements. As there is no room for intensive computation and analysis in real time, the method is based on the combination of off-line computation and on-line monitoring, which are correlated through statistical analysis. During the off-line analysis, a memory of high-risk situations following a pre-defined voltage stability criterion is obtained. Thereafter, basic statistical analyses are applied, resulting in the definition of voltage regions. During on-line operation, the voltage magnitudes of critical buses obtained by phasor measurements are monitored in relation to the risk regions. Comprehensive studies demonstrate that the proposed method could assist operators in avoiding voltage collapse events by taking preventive or emergency actions.

  12. Monitoring psychotherapy with performance-based measures of personality functioning. (United States)

    Weiner, Irving B


    In this commentary, I review a meta-analysis and three original research reports concerning Rorschach (Exner, 2003; Rorschach, 1921/1942) and Thematic Apperception Test (TAT; Murray, 1943) assessment in psychological treatment planning and outcome evaluation. The information in these four articles bears witness to the potential utility of performance-based personality assessment measures for this purpose. The strengths and limitations of the articles suggest several guidelines for future research designed to examine this Rorschach and TAT application, including an emphasis on effectiveness studies, longitudinal data, integrated independent variables, observable dependent variables, sophisticated data analysis combining nomothetic and idiographic presentation, and the incremental contribution of performance-based measures to psychotherapy-related personality assessment.

  13. Web service reputation evaluation based on QoS measurement. (United States)

    Zhang, Haiteng; Shao, Zhiqing; Zheng, Hong; Zhai, Jie


    In early service transactions, quality of service (QoS) information was published by the service provider and was not always true and credible, so better verification of the trustworthiness of the QoS information provided by a Web service is needed. In this paper, factual QoS running data are collected by our WS-QoS measurement tool; based on these objective data, an algorithm compares the offered and measured quality data of the service and gives their similarity, and a reputation evaluation method then computes the reputation level of the Web service from that similarity. An initial implementation and experiments with three example Web services show that this approach is feasible and that the resulting values can serve as references for subsequent consumers selecting a service.
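    The comparison step can be sketched as a per-attribute similarity between offered and measured QoS values, averaged into a reputation score in [0, 1]. The attribute names and the relative-difference similarity formula are assumptions for illustration, not the paper's exact algorithm.

    ```python
    # Illustrative offered-vs-measured QoS comparison. The similarity
    # formula (relative difference) and attribute names are assumptions.

    def attribute_similarity(offered, measured):
        """1.0 when values agree, falling toward 0.0 as they diverge."""
        denom = max(abs(offered), abs(measured))
        return 1.0 if denom == 0 else 1.0 - abs(offered - measured) / denom

    def reputation(offered_qos, measured_qos):
        """Unweighted average of per-attribute similarities, in [0, 1]."""
        sims = [attribute_similarity(offered_qos[k], measured_qos[k])
                for k in offered_qos]
        return sum(sims) / len(sims)
    ```

    A provider that advertises better QoS than it delivers is penalized on exactly the attributes it overstates.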

  14. Modifying measures based on differential item functioning (DIF) impact analyses. (United States)

    Teresi, Jeanne A; Ramirez, Mildred; Jones, Richard N; Choi, Seung; Crane, Paul K


    Measure modification can impact comparability of scores across groups and settings. Changes in items can affect the percent admitting to a symptom. Using item response theory (IRT) methods, well-calibrated items can be used interchangeably, and the exact same item does not have to be administered to each respondent, theoretically permitting wider latitude in terms of modification. Recommendations regarding modifications vary, depending on the use of the measure. In the context of research, adjustments can be made at the analytic level by freeing and fixing parameters based on findings of differential item functioning (DIF). The consequences of DIF for clinical decision making depend on whether or not the patient's performance level approaches the scale decision cutpoint. High-stakes testing may require item removal or separate calibrations to ensure accurate assessment. Guidelines for modification based on DIF analyses and illustrations of the impact of adjustments are presented.

  15. Fault Location Based on Synchronized Measurements: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    A. H. Al-Mohammed


    Full Text Available This paper presents a comprehensive survey of transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms for three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when installed on a transmission line, create certain problems for line fault locators, and therefore fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aimed at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, nonstandard high-frequency fault location techniques based on the wavelet transform are discussed. Finally, the paper highlights areas for future research.

  16. Automatic measurement of crops canopy height based on monocular vision (United States)

    Yu, Zhenghong; Cao, Zhiguo; Bai, Xiaodong


    Computer vision technology has been increasingly used for automatically observing crop growth state, but crop canopy height, one of the key parameters in agro-meteorological observation, is to date still measured manually. In order to measure the height automatically from the forward-and-downward-looking images in the existing monocular vision observation system, a novel method is proposed: the canopy height is measured indirectly by a solving algorithm for the actual height of vertical objects (SAAH), with the help of an intelligent sensor device. The experimental results verify the feasibility and validity of the method and show that it can meet actual observation demands.

  17. Neurally based measurement and evaluation of environmental noise

    CERN Document Server

    Soeta, Yoshiharu


    This book deals with methods of measurement and evaluation of environmental noise based on an auditory neural and brain-oriented model. The model consists of the autocorrelation function (ACF) and interaural cross-correlation function (IACF) mechanisms for signals arriving at the two ear entrances. Even when the sound pressure level of a noise is only about 35 dBA, people may feel annoyed by aspects of its sound quality. These aspects can be formulated by the factors extracted from the ACF and IACF. Several examples of measuring environmental noise are demonstrated, from outdoor noise such as that of aircraft, traffic, and trains, to indoor noise such as that caused by floor impact, toilets, and air-conditioning. Based on the noise measurement and evaluation, applications to sound design are discussed. This book provides an excellent resource for students, researchers, and practitioners in a wide range of fields, such as the automotive, railway, and electronics industries, and soundscape, architec...
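    The ACF mechanism at the heart of the model reduces, in its simplest discrete form, to a normalized autocorrelation of the sound-pressure signal; features such as the delay and amplitude of its first peak are among the factors such models extract. This sketch uses an unbiased short-time normalization as one common convention, not necessarily the book's exact definition.

    ```python
    # Normalized discrete autocorrelation (unbiased short-time convention).
    from math import pi, sin

    def autocorrelation(x, lag):
        """r(lag) = mean(x[t] * x[t+lag]) / mean(x[t]^2); r(0) = 1."""
        n = len(x)
        num = sum(x[t] * x[t + lag] for t in range(n - lag)) / (n - lag)
        den = sum(v * v for v in x) / n
        return num / den

    # Demo: a pure tone with a 20-sample period.
    tone = [sin(2 * pi * k / 20) for k in range(200)]
    ```

    For a periodic signal the ACF peaks at multiples of the period (and dips at half-periods), which is the kind of structure the pitch- and annoyance-related factors are read from.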

  18. [A novel spectral classifier based on coherence measure]. (United States)

    Li, Xiang-ru; Wu, Fu-chao; Hu, Zhan-yi; Luo, A-li


    Classification and discovery of new types of celestial bodies from voluminous celestial spectra are two important issues in astronomy, and to our knowledge these two issues have been treated separately in the literature. In the present paper, a novel coherence measure is introduced which can effectively measure the coherence of a new spectrum of unknown type with the training samples located within its neighbourhood; a novel classifier is then designed based on this coherence measure. The proposed classifier is capable of carrying out spectral classification and knowledge discovery simultaneously. In particular, it can effectively deal with the situation where different types of training spectra exist within the neighbourhood of a new spectrum, in which the traditional k-nearest-neighbour method usually fails to reach a correct classification. Satisfactory performance in classification and knowledge discovery has been obtained with the proposed classifier on active galactic nuclei (AGN) and active galaxy (AG) data.

  19. Beam based measurement of beam position monitor electrode gains

    Directory of Open Access Journals (Sweden)

    D. L. Rubin


    Full Text Available Low emittance tuning at the Cornell Electron Storage Ring (CESR) test accelerator depends on precision measurement of vertical dispersion and transverse coupling. The CESR beam position monitors (BPMs) consist of four button electrodes, instrumented with electronics that allow acquisition of turn-by-turn data. The response to the beam will vary among the four electrodes due to differences in electronic gain and/or misalignment. This variation in the response of the BPM electrodes will couple real horizontal offset to apparent vertical position, and introduce spurious measurements of coupling and vertical dispersion. To alleviate this systematic effect, a beam-based technique to measure the relative response of the four electrodes has been developed. With typical CESR parameters, simulations show that turn-by-turn BPM data can be used to determine electrode gains to within ∼0.1%.

  20. Complexity measurement based on information theory and kolmogorov complexity. (United States)

    Lui, Leong Ting; Terrazas, Germán; Zenil, Hector; Alexander, Cameron; Krasnogor, Natalio


    In the past decades many definitions of complexity have been proposed. Most of these definitions are based either on Shannon's information theory or on Kolmogorov complexity; the two are often compared, but very few studies integrate the two ideas. In this article we introduce a new measure of complexity that builds on both of these theories. As a demonstration of the concept, the technique is applied to elementary cellular automata and to simulations of the self-organization of porphyrin molecules.
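    The two ingredients can be illustrated on an elementary cellular automaton: Shannon entropy of the cell distribution, and the usual compression-based stand-in for Kolmogorov complexity. This toy pairing is for illustration only; it is not the article's actual integrated measure.

    ```python
    # Toy illustration on an elementary cellular automaton (ECA):
    # Shannon entropy of the cell values plus a Kolmogorov-complexity
    # estimate via zlib compressed size (a standard stand-in).
    import zlib
    from collections import Counter
    from math import log2

    def eca_step(cells, rule):
        """One synchronous update of an ECA with periodic boundaries."""
        n = len(cells)
        return [(rule >> (4 * cells[(i - 1) % n]
                          + 2 * cells[i]
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    def shannon_entropy(cells):
        """Entropy (bits/cell) of the empirical cell-value distribution."""
        n = len(cells)
        return -sum(c / n * log2(c / n) for c in Counter(cells).values())

    def compressed_size(cells):
        """Compressed length in bytes: a crude Kolmogorov-complexity proxy."""
        return len(zlib.compress(bytes(cells)))

    # Evolve chaotic rule 30 from a single seed cell.
    row = [0] * 256
    row[128] = 1
    for _ in range(128):
        row = eca_step(row, 30)
    ```

    A uniform row scores zero on both quantities, while a chaotic rule-30 row has nonzero entropy and resists compression, which is the distinction such hybrid measures aim to quantify.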

  1. A Semantics-Based Measure of Emoji Similarity


    Wijeratne, Sanjaya; Balasuriya, Lakshika; Sheth, Amit; Doran, Derek


    Emoji have grown to become one of the most important forms of communication on the web. With its widespread use, measuring the similarity of emoji has become an important problem for contemporary text processing since it lies at the heart of sentiment analysis, search, and interface design tasks. This paper presents a comprehensive analysis of the semantic similarity of emoji through embedding models that are learned over machine-readable emoji meanings in the EmojiNet knowledge base. Using e...

  2. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens


    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the responses of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.

  3. Deformation Measurements of Gabion Walls Using Image Based Modeling

    Directory of Open Access Journals (Sweden)

    Marek Fraštia


    Full Text Available Image-based modeling finds use in applications where it is necessary to reconstruct the 3D surface of the observed object with a high level of detail. Previous experiments show relatively high variability of the results depending on the camera type used, the processing software, or the evaluation process. The authors tested the SFM (Structure from Motion) method to determine the stability of gabion walls. The results of the photogrammetric measurements were compared to precise geodetic point measurements.

  4. Noncontact temperature measurement. II. Least squares based techniques (United States)

    Khan, Mansoor A.; Allemand, Charly; Eagar, Thomas W.


    A technique for the noncontact measurement of temperatures is described. The technique is based on the measurement of the emitted intensity at multiple wavelengths and the simultaneous calculation of emissivity and temperature through the use of least squares curve fitting techniques. The technique is shown to make no assumptions regarding the emissivity of the target except that it be possible to model it with an analytic function. A theory is developed to predict the errors in the estimation of temperatures based on both linear and nonlinear least-squares techniques. It is shown that the maximum error in the predicted temperature is related to the noise in the measured intensities in a quantifiable manner. It is shown using computer simulations that the theory and algorithms developed here can predict both the temperatures and the uncertainty associated with each temperature prediction with a very high accuracy. An instrument was developed to test this theory. This instrument, referred to as the MITTMA, was used to measure absolute temperatures of various sources from 900 °C to 2300 °C with an average accuracy of approximately 0.5%.
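    A minimal version of the multi-wavelength idea can be shown under the Wien approximation, where I(λ) = ε·C1·λ⁻⁵·exp(−C2/(λT)), so ln(I·λ⁵) is linear in 1/λ and a least-squares line yields both temperature and emissivity. The MITTMA fits the full problem (including nonlinear models and noise analysis); this linearized, noise-free sketch is for illustration only.

    ```python
    # Linearized multi-wavelength pyrometry under the Wien approximation:
    # ln(I * lam^5 / C1) = ln(eps) - (C2 / T) * (1 / lam), so a straight-line
    # fit against 1/lam gives T from the slope and eps from the intercept.
    from math import exp, log

    C2 = 1.4388e-2          # second radiation constant, m*K

    def wien_intensity(lam, T, eps, c1=1.0):
        """Wien-approximation spectral intensity (c1 normalized for the demo)."""
        return eps * c1 * lam**-5 * exp(-C2 / (lam * T))

    def fit_temperature(lams, intensities, c1=1.0):
        """Least-squares line through (1/lam, ln(I lam^5 / c1)); returns (T, eps)."""
        xs = [1.0 / lam for lam in lams]
        ys = [log(i * lam**5 / c1) for lam, i in zip(lams, intensities)]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        intercept = my - slope * mx
        return -C2 / slope, exp(intercept)

    # Demo: synthetic measurements at 8 wavelengths, T = 1500 K, eps = 0.4.
    lams = [6.0e-7 + 5.0e-8 * k for k in range(8)]   # 600-950 nm
    data = [wien_intensity(lam, 1500.0, 0.4) for lam in lams]
    T_fit, eps_fit = fit_temperature(lams, data)
    ```

    On noisy data the same fit propagates intensity noise into the slope, which is the error mechanism the article quantifies for both its linear and nonlinear estimators.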

  5. Image-based force and moment measurement in hypersonic facilities

    Energy Technology Data Exchange (ETDEWEB)

    Laurence, Stuart J.; Hornung, H.G. [California Institute of Technology, Graduate Aeronautical Laboratories, Pasadena, CA (United States)


    This article addresses the problem of force and moment measurement in short-duration hypersonic facilities. An image-based technique is described in which the motion of a free-flying model is tracked over a sequence of high-speed digital images. Force components are derived from the calculated trajectory by assuming constant acceleration during the test time. A linear version of the technique, appropriate for simple model geometries, is first outlined and the technique's precision is estimated. A nonlinear version, suitable for more generalised body shapes, is then described in the context of a series of experiments to determine the aerodynamic characteristics of the NASA Orion vehicle in the T5 hypervelocity shock tunnel. The accuracy of these measurements was adversely affected by both the choice of light source and test-gas luminosity, but these experiments nevertheless show image-based measurements to be, at the very least, a promising supplement to standard accelerometer-based techniques. (orig.)
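    The force-recovery step described above (constant acceleration over the test time) amounts to a least-squares quadratic fit of the tracked positions, with F = m·a from the quadratic coefficient. The model tracking and calibration are the hard parts in practice; this sketch covers only the fitting step, and all numbers are made up.

    ```python
    # Constant-acceleration trajectory fit: x(t) = x0 + v0*t + 0.5*a*t^2,
    # solved by least squares via 3x3 normal equations. Demo data assumed.

    def solve3(A, b):
        """Gaussian elimination with partial pivoting for a 3x3 system."""
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for i in range(3):
            piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
            M[i], M[piv] = M[piv], M[i]
            for r in range(i + 1, 3):
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
        x = [0.0] * 3
        for i in range(2, -1, -1):
            x[i] = (M[i][3] - sum(M[i][j] * x[j]
                                  for j in range(i + 1, 3))) / M[i][i]
        return x

    def fit_acceleration(ts, xs):
        """Least-squares fit of x(t); returns (x0, v0, a)."""
        basis = [[1.0, t, 0.5 * t * t] for t in ts]
        A = [[sum(r[i] * r[j] for r in basis) for j in range(3)]
             for i in range(3)]
        b = [sum(r[i] * x for r, x in zip(basis, xs)) for i in range(3)]
        return solve3(A, b)

    # Demo: 10 frames at 1 ms spacing, decelerating at 30 m/s^2.
    ts = [k * 1e-3 for k in range(10)]
    xs = [0.02 + 1.5 * t - 0.5 * 30.0 * t * t for t in ts]
    x0, v0, a = fit_acceleration(ts, xs)
    ```

    With the model mass known, the force component along this axis is simply mass times the fitted `a`; repeating the fit per axis gives the full force vector, and moments follow analogously from the angular trajectory.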

  6. Measurements of Electromagnetic Fields Emitted from Cellular Base Stations in

    Directory of Open Access Journals (Sweden)

    K. J. Ali


    Full Text Available With the increasing usage of mobile communication devices and Internet services, private telecommunications companies have been entering Iraq since 2003. These companies began to build cellular towers to provide telecommunication services, but they ignored the safety conditions imposed for health and the environment, siting towers in a haphazard way that may pose a health risk to living beings and pollute the environment. The aim of this work is to determine the safe and unsafe ranges, to discuss the damage caused by radiation emitted from Asia cell base stations in Shirqat city, and to discuss the best ways to minimize the exposure level so as to avoid negative health effects. Practical measurements of the power density around base stations were accomplished using a radiation survey meter (Radio Frequency EMF Strength Meter 480846) in two ways. The first set of measurements was taken at a height of 2 meters above ground at distances of 0-300 meters from the station; the second was taken at a distance of 150 meters at heights of 2-15 meters above ground level. The maximum measured power density is about 3 mW/m2. The results indicate that the power density levels are far below the USSR RF radiation exposure safety standard levels, which means these cellular base stations do not cause negative health effects for living beings if exposure remains within the acceptable international standard levels.

  7. Computer vision based nacre thickness measurement of Tahitian pearls (United States)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban


    The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million Euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, the local government has established an obligatory quality control for every pearl intended for export. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely its large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a purpose-built heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measure accounting for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to evaluate the nacre thickness of Tahitian pearls automatically with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
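    Once the pearl and nucleus boundaries are segmented, the final profile step reduces to a radial subtraction. The following toy sketch is not the authors' implementation; boundary extraction and centring, the hard parts, are assumed already done:

```python
import numpy as np

def nacre_profile(pearl_radii: np.ndarray, nucleus_radii: np.ndarray) -> np.ndarray:
    """Radial nacre thickness per angle, given both boundaries sampled at
    the same angles about a common centre (e.g. the nucleus centre)."""
    return pearl_radii - nucleus_radii

# Synthetic boundaries: a slightly oval pearl around a circular 7 mm nucleus.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
pearl = 9.0 + 0.5 * np.cos(theta)       # mm
nucleus = np.full_like(theta, 7.0)      # mm
thickness = nacre_profile(pearl, nucleus)
print(round(float(thickness.min()), 2), round(float(thickness.max()), 2))  # 1.5 2.5
```

    For real pearls the boundaries are not star-shaped about a single centre everywhere, which is one reason the cited method needs a certainty measure.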

  8. Smartphone based hemispherical photography for canopy structure measurement (United States)

    Wan, Xuefen; Cui, Jian; Jiang, Xueqin; Zhang, Jingwen; Yang, Yi; Zheng, Tao


    The canopy is the most direct and active interface layer of the interaction between plant and environment, and has an important influence on energy exchange, biodiversity, ecosystem matter cycling and climate change. Measurement of plant canopy structure is an important foundation for analyzing the pattern, process and operating mechanism of forest ecosystems. Through the study of plant canopy structure, forest environmental and climatic characteristics such as solar radiation, ambient wind speed, air temperature and humidity, soil evaporation and soil temperature can be evaluated. Because of its accuracy and effectiveness, canopy structure measurement based on hemispherical photography has been widely studied. However, traditional canopy hemispherical photogrammetry relies on an SLR camera with a fisheye lens, which is expensive and ill-suited to low-cost applications. In recent years, smartphone technology has developed rapidly. The smartphone not only has excellent image acquisition ability but also considerable computational processing ability; in addition, the gyroscope and positioning functions of the smartphone help in measuring canopy structure. In this paper, we present a smartphone based hemispherical photography system consisting of a smartphone, a low-cost fisheye lens and a PMMA adapter. We designed an Android based app to acquire canopy hemisphere images through the low-cost fisheye lens and to provide horizontal collimation information. In addition, after acquisition the app tags each hemisphere image with the location obtained by GPS and auxiliary positioning methods. The system was tested in an urban forest after it was completed. The test results show that the smartphone based hemispherical photography system can effectively collect high-resolution canopy structure images of plants.

  9. A Feature-Based Structural Measure: An Image Similarity Measure for Face Recognition

    Directory of Open Access Journals (Sweden)

    Noor Abdalrazak Shnain


    Full Text Available Facial recognition is one of the most challenging and interesting problems within the field of computer vision and pattern recognition. During the last few years, it has gained special attention due to its importance in relation to current issues such as security, surveillance systems and forensics analysis. Despite this high level of attention to facial recognition, success is still limited by certain conditions; there is no method which gives reliable results in all situations. In this paper, we propose an efficient similarity index that resolves the shortcomings of the existing measures of feature and structural similarity. This measure, called the Feature-Based Structural Measure (FSM), combines the best features of the well-known SSIM (structural similarity index measure) and FSIM (feature similarity index measure) approaches, striking a balance between performance for similar and dissimilar images of human faces. In addition to the statistical structural properties provided by SSIM, edge detection is incorporated in FSM as a distinctive structural feature. Its performance is tested for a wide range of PSNR (peak signal-to-noise ratio) values, using the ORL (Olivetti Research Laboratory, now AT&T Laboratory Cambridge) and FEI (Faculty of Industrial Engineering, São Bernardo do Campo, São Paulo, Brazil) databases. The proposed measure is tested under conditions of Gaussian noise; simulation results show that the proposed FSM outperforms the well-known SSIM and FSIM approaches in its efficiency of similarity detection and recognition of human faces.

  10. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation (United States)

    Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.


    Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data was collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and the in-situ measurements were correlated with the UAV data acquisition. The correlation aimed at the investigation of optimal flight conditions and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in dense 3D point clouds. An algorithm is developed to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually, which allows for evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
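    The height-extraction step can be reduced to a simple operation on the point cloud once stems are localised. A minimal sketch with hypothetical percentile choices; the authors' cross-section-based algorithm is more involved:

```python
import numpy as np

def tree_height(points: np.ndarray, ground_pct: float = 1.0,
                top_pct: float = 99.5) -> float:
    """Estimate tree height from an (N, 3) point cloud in metres.
    Percentiles of z are used instead of min/max to resist outliers."""
    z = points[:, 2]
    return float(np.percentile(z, top_pct) - np.percentile(z, ground_pct))

# Synthetic column of points spanning 0-4 m in z.
pts = np.column_stack([np.zeros(1000), np.zeros(1000),
                       np.linspace(0.0, 4.0, 1000)])
print(round(tree_height(pts), 2))   # ~3.94 because of the percentile trimming
```

    In practice the ground elevation would come from a terrain model rather than the low percentile of the crown cloud itself.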


  11. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation

    Directory of Open Access Journals (Sweden)

    M. Karpina


    Full Text Available Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data was collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and the in-situ measurements were correlated with the UAV data acquisition. The correlation aimed at the investigation of optimal flight conditions and parameter settings for image acquisition. The collected images are processed in a state-of-the-art tool, resulting in dense 3D point clouds. An algorithm is developed to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually, which allows for evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.

  12. Score-based tests of measurement invariance: Use in practice

    Directory of Open Access Journals (Sweden)

    Ting eWang


    Full Text Available In this paper, we consider a family of recently-proposed measurement invariance tests that are based on the scores of a fitted model. This family can be used to test for measurement invariance w.r.t. a continuous auxiliary variable, without pre-specification of subgroups. Moreover, the family can be used when one wishes to test for measurement invariance w.r.t. an ordinal auxiliary variable, yielding test statistics that are sensitive to violations that are monotonically related to the ordinal variable (and less sensitive to non-monotonic violations). The paper is specifically aimed at potential users of the tests who may wish to know (i) how the tests can be employed for their data, and (ii) whether the tests can accurately identify specific model parameters that violate measurement invariance (possibly in the presence of model misspecification). After providing an overview of the tests, we illustrate their general use via the R packages lavaan and strucchange. We then describe two novel simulations that provide evidence of the tests' practical abilities. As a whole, the paper provides researchers with the tools and knowledge needed to apply these tests to general measurement invariance scenarios.

  13. The radiation stability of the RNA base uracil in H2O-ice and CO2-ice: in-situ laboratory measurements with applications to comets, Europa, and Mars (United States)

    Gerakines, Perry A.; Frail, Sarah; Hudson, Reggie L.


    Planetary bodies of astrobiological interest, such as Mars, are often exposed to harsh incident radiation, which limits how long molecules can survive on them. Some or all of these bodies may well contain biologically important organic molecules, some may even have supported life at some point in their history, and some may support life today. Future searches for organic molecules will likely include sampling the martian subsurface or a cometary surface sample-return mission, where organics may be frozen in ices dominated by either H2O or CO2, which provide some protection from ionizing radiation. Recently, our research group published studies of the radiation stability of amino acids, with a focus on glycine, both in undiluted form and in mixtures with H2O and CO2. Here, we present a similar study that focuses on the radiation-chemical kinetics of the RNA base uracil. We compare results for uracil decay when diluted in H2O and in CO2 ices, and we compare these new results with those for glycine. For each sample, we measured uracil's destruction rate constant and half-life dose due to irradiation by 0.9-MeV protons. All measurements were made in situ at the temperature of irradiation using IR spectroscopy. Trends with dilution (up to ~300:1) and temperature (up to ~150 K) are considered, and the results are discussed in the context of icy planetary surfaces. Acknowledgment: Our work is supported in part by the NASA Emerging Worlds Program and by the NASA Astrobiology Institute through the Goddard Center for Astrobiology.
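    The destruction rate constant and half-life dose follow from first-order radiolysis kinetics, N(D) = N0 exp(-k D), so the half-life dose is ln 2 / k. A small sketch of the fitting step, using synthetic data in place of the measured IR band areas:

```python
import math
import numpy as np

def destruction_kinetics(doses, abundances):
    """Fit ln(N/N0) = -k*D to abundance-vs-dose data (e.g. IR band areas);
    returns the rate constant k and the half-life dose ln(2)/k."""
    d = np.asarray(doses, float)
    y = np.log(np.asarray(abundances, float) / abundances[0])
    k = -np.polyfit(d, y, 1)[0]          # negative slope of the linear fit
    return k, math.log(2) / k

# Synthetic first-order decay with k = 0.05 per unit dose.
d = np.linspace(0.0, 60.0, 7)
n = np.exp(-0.05 * d)
k, half_dose = destruction_kinetics(d, n)
print(round(k, 4), round(half_dose, 2))   # 0.05 13.86
```

    Comparing k (or the half-life dose) across H2O and CO2 matrices is what quantifies the protective effect of each ice.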

  14. Ranking the dermatology programs based on measurements of academic achievement. (United States)

    Wu, Jashin J; Ramirez, Claudia C; Alonso, Carol A; Berman, Brian; Tyring, Stephen K


    The only dermatology rankings in the past were based on National Institutes of Health (NIH) funding and journal citations. Our objective was to determine the highest-ranking academic dermatology programs based on 5 outcome measures and on an overall ranking scale. To the best of our knowledge, this is the first report to rank the dermatology programs on 4 of these outcome measures of academic achievement and with an overall ranking. We collected extensive 2001 to 2004 data, ranging from total publications to grant funding, on 107 U.S. dermatology programs and their full-time faculty; data from part-time and volunteer faculty were not used. The 5 outcome measures were: publications in 2001 to 2004; NIH funding in 2004; Dermatology Foundation grants in 2001 to 2004; faculty lectures delivered at national conferences in 2004; and the number of full-time faculty members on the editorial boards of the top 3 U.S. dermatology journals and the top 4 subspecialty journals. We used the 5 outcome measures to tabulate the highest-ranking programs in each category. Using a weighted ranking system, we also tabulated the overall top 30 dermatology programs based on these 5 outcome measures. We were not able to determine the total NIH funding, in dollars, of the dermatology divisions. The impact factors of the journals in which the publications appeared were not factored into our calculations, and since faculty members may collaborate on the same publication, some publications may have been double-counted. In descending order, the 5 highest-ranked academic programs are the University of Pennsylvania; University of California, San Francisco; Yale-New Haven Medical Center; New York University; and University of Michigan. This ranking system may allow residents and faculty to improve the academic achievements of their respective programs.

  15. Developing safety performance functions incorporating reliability-based risk measures. (United States)

    Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek


    Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
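    The probability of non-compliance is defined on the limit state g = available (supply) sight distance minus stopping (demand) sight distance, with P(nc) = P(g < 0). The study evaluates it with FORM; a crude Monte Carlo sketch with hypothetical normal inputs conveys the same quantity:

```python
import numpy as np

def prob_noncompliance(supply_mean, supply_sd, demand_mean, demand_sd,
                       n=200_000, seed=1):
    """Monte Carlo estimate of P(nc) = P(demand > supply) for the
    sight-distance limit state g = supply - demand, with normal inputs."""
    rng = np.random.default_rng(seed)
    g = (rng.normal(supply_mean, supply_sd, n)
         - rng.normal(demand_mean, demand_sd, n))
    return float((g < 0).mean())

# Hypothetical curve: available SD ~ N(120 m, 15), stopping SD ~ N(95 m, 12).
print(prob_noncompliance(120, 15, 95, 12))   # roughly 0.10
```

    FORM instead linearises g at the design point, which is far cheaper and also yields sensitivity factors; the Monte Carlo version is shown only because it is compact.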

  16. Measurement of utero-placental blood flow with /sup 113m/In in diabetic pregnancy

    Energy Technology Data Exchange (ETDEWEB)

    Semmler, K.; Kirsch, G.; Zoellner, P.; Fuhrmann, K.; Jutzi, E. (Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic); Ernst-Moritz-Arndt-Universitaet, Greifswald (German Democratic Republic). Radiologische Klinik)


    In 122 diabetic pregnancies, the placental blood flow was estimated by determining the half-life of the activity inflow (2 MBq /sup 113m/In-transferrin) into the placenta. A highly sensitive detector (modified pinhole collimator) and computer-supported evaluation were used. 259 flow measurements were compared with the risk of complications in the course of diabetic pregnancy. The half-life values in the diabetic group, calculated by a gamma-camera computer system by means of an iterative regression analysis, differed significantly from those of a control group (12 pregnancies without risk). Severe diabetic angiopathic complications (classes D, F, and R according to White) are accompanied by higher half-life values (reduced placental blood flow) and perinatal complications. Disturbed placental hemodynamics is found even in pregnant women with gestational diabetes or disturbances of carbohydrate metabolism.

  17. IMU-based joint angle measurement for gait analysis. (United States)

    Seel, Thomas; Raisch, Jörg; Schauer, Thomas


    This contribution is concerned with joint angle calculation based on inertial measurement data in the context of human motion analysis. Unlike most robotic devices, the human body lacks even surfaces and right angles. Therefore, we focus on methods that avoid assuming certain orientations in which the sensors are mounted with respect to the body segments. After a review of available methods that may cope with this challenge, we present a set of new methods for: (1) joint axis and position identification; and (2) flexion/extension joint angle measurement. In particular, we propose methods that use only gyroscopes and accelerometers and, therefore, do not rely on a homogeneous magnetic field. We provide results from gait trials of a transfemoral amputee in which we compare the inertial measurement unit (IMU)-based methods to an optical 3D motion capture system. Unlike most authors, we place the optical markers on anatomical landmarks instead of attaching them to the IMUs. Root mean square errors of the knee flexion/extension angles are found to be less than 1° on the prosthesis and about 3° on the human leg. For the plantar/dorsiflexion of the ankle, both deviations are about 1°.
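    At its core, the gyroscope-based flexion/extension angle is the integral of the difference of the two segments' angular rates projected onto the identified joint axis. A drift-free toy sketch; the published method additionally fuses accelerometer data to remove integration drift, which is omitted here:

```python
import numpy as np

def flexion_angle(gyr_thigh, gyr_shank, axis_thigh, axis_shank, dt):
    """Sketch of a gyroscope-only flexion/extension angle: project each
    segment's (N, 3) angular rate onto its pre-identified joint-axis
    coordinates and integrate the rate difference over time."""
    rate = gyr_shank @ axis_shank - gyr_thigh @ axis_thigh   # rad/s
    return np.degrees(np.cumsum(rate) * dt)                  # deg

# Synthetic check: shank rotating at 1 rad/s about the joint axis for 1 s.
axis = np.array([0.0, 0.0, 1.0])
gyr_shank = np.tile(axis, (100, 1))      # 1 rad/s about z in every sample
gyr_thigh = np.zeros((100, 3))
angle = flexion_angle(gyr_thigh, gyr_shank, axis, axis, dt=0.01)
print(round(angle[-1], 1))               # ~57.3 deg after 1 s
```

    The joint-axis identification step (1) in the cited work is exactly what supplies `axis_thigh` and `axis_shank` in each sensor's own frame.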

  18. IMU-Based Joint Angle Measurement for Gait Analysis

    Directory of Open Access Journals (Sweden)

    Thomas Seel


    Full Text Available This contribution is concerned with joint angle calculation based on inertial measurement data in the context of human motion analysis. Unlike most robotic devices, the human body lacks even surfaces and right angles. Therefore, we focus on methods that avoid assuming certain orientations in which the sensors are mounted with respect to the body segments. After a review of available methods that may cope with this challenge, we present a set of new methods for: (1) joint axis and position identification; and (2) flexion/extension joint angle measurement. In particular, we propose methods that use only gyroscopes and accelerometers and, therefore, do not rely on a homogeneous magnetic field. We provide results from gait trials of a transfemoral amputee in which we compare the inertial measurement unit (IMU)-based methods to an optical 3D motion capture system. Unlike most authors, we place the optical markers on anatomical landmarks instead of attaching them to the IMUs. Root mean square errors of the knee flexion/extension angles are found to be less than 1° on the prosthesis and about 3° on the human leg. For the plantar/dorsiflexion of the ankle, both deviations are about 1°.

  19. Improvement of the energy resolution of the scintillating detectors for the low background measurement (United States)

    Hodák, R.; Bukový, M.; Burešová, H.; Cerna, C.; Fajt, L.; Jouve, J.; Kouba, P.; Marquet, Ch.; Piquemal, F.; Přidal, P.; Smolek, K.; Špavorová, M.; Štekl, I.


    The main goal of this project was the improvement of the energy resolution of scintillating detectors. In order to obtain the required energy resolution at the level of ˜ 8 %, which corresponds to a half-life sensitivity of about 1.2 × 10^26 years for the SuperNEMO experiment [1], an optimal ratio of concentrations of the activator (pTP) and the wavelength shifter (POPOP) in the purified polystyrene (PS) base had to be found. Furthermore, the good optical properties, and in particular the energy resolution, of such improved detectors are comparable with those of the more expensive plastic scintillators based on polyvinyltoluene (PVT). In this contribution, the results of measurements with organic plastic scintillators of various compositions are presented.

  20. Cultural distance in international business and management : from mean-based to variance-based measures

    NARCIS (Netherlands)

    Beugelsdijk, Sjoerd; Maseland, Robbert; Onrust, Marjolijn; van Hoorn, Andre; Slangen, Arjen


    Extant practice in international management is to measure cultural distance as a nation-to-nation comparison of country means on cultural values, thereby ignoring the cultural variation that exists within countries. We argue that these traditional mean-based measures of cultural distance should take

  1. Eating meat with cesium and in vitro measurement; Fleisch mit Caesium essen und in vitro messen

    Energy Technology Data Exchange (ETDEWEB)

    Philipsborn, Henning von [Regensburg Univ. (Germany). Fakultaet Physik


    In several regions, wild pig meat still contains several thousand Bq of Cs-137 per kg, especially during spring. ICRP Publication 56 specifies an effective half-life of Cs-137 in man of 50 to 150 days. In vivo measurements using whole-body counters are used for monitoring. In vitro measurements of excretion samples over several months are described in this contribution.
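    The 50 to 150 day figure is an effective half-life, combining physical decay and biological excretion: the two removal processes act in parallel, so their rate constants add, 1/T_eff = 1/T_phys + 1/T_bio. A one-line check, with the biological half-life value chosen purely for illustration:

```python
def effective_half_life(physical_days: float, biological_days: float) -> float:
    """Effective half-life: decay and excretion act in parallel, so the
    rate constants add: 1/T_eff = 1/T_phys + 1/T_bio."""
    return 1.0 / (1.0 / physical_days + 1.0 / biological_days)

# Cs-137: physical half-life ~30.1 years (~11000 days). With an assumed
# biological half-life of 110 days, excretion dominates the effective value.
print(round(effective_half_life(11000, 110), 1))   # ~108.9 days
```

    This is why the effective half-life in man is close to the biological one: the physical half-life of Cs-137 is two orders of magnitude longer.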

  2. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein


    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.

  3. Impact of image quality on OCT angiography based quantitative measurements. (United States)

    Al-Sheikh, Mayss; Ghasemi Falavarjani, Khalil; Akil, Handan; Sadda, SriniVas R


    To study the impact of image quality on quantitative measurements and the frequency of segmentation error with optical coherence tomography angiography (OCTA), seventeen eyes of 10 healthy individuals were included in this study. OCTA was performed using a swept-source device (Triton, Topcon). Each subject underwent three scanning sessions 1-2 min apart; the first two scans were obtained under standard conditions, and for the third session the image quality index was reduced by application of a topical ointment. En face OCTA images of the retinal vasculature were generated using the default segmentation for the superficial and deep retinal layers (SRL, DRL). The intraclass correlation coefficient (ICC) was used as a measure of repeatability. The frequency of segmentation error, motion artifact, banding artifact and projection artifact was also compared among the three sessions. The frequency of segmentation error and motion artifact was statistically similar between high and low image quality sessions (P = 0.707 and P = 1, respectively); however, the frequency of projection and banding artifacts was higher with lower image quality. The vessel density in the SRL was highly repeatable across the high image quality sessions (ICC = 0.8), but repeatability was low when comparing the high and low image quality measurements (ICC = 0.3). In the DRL, the repeatability of the vessel density measurements was fair across the high quality sessions (ICC = 0.6 and ICC = 0.5, with and without automatic artifact removal, respectively) and poor when comparing high and low image quality sessions (ICC = 0.3 and ICC = 0.06, with and without automatic artifact removal, respectively). The frequency of artifacts is higher and the repeatability of the measurements is lower with lower image quality; the image quality index should always be considered in OCTA-based quantitative measurements.
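    The repeatability statistic used above can be sketched as follows. This is a one-way random-effects ICC(1,1), a simple stand-in, since the record does not state which ICC form the study used:

```python
import numpy as np

def icc_oneway(x: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an n_subjects x k_sessions
    matrix of repeated measurements: between-subject variance relative
    to total variance, from the usual ANOVA mean squares."""
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)      # between subjects
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Identical repeated sessions give perfect repeatability.
perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(icc_oneway(perfect))   # 1.0
```

    Values near 1 mean session-to-session noise is small relative to between-eye differences, which is how ICC = 0.8 versus ICC = 0.3 is read above.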

  4. Satellite-based Precipitation Measurements For Science and Society (United States)

    Skofronick Jackson, G.; Huffman, G. J.


    Water is essential to Earth. Thus, knowing when, where, and how precipitation falls is of paramount importance for science and society. Some areas of the world have dense ground-based rain observations, but the vast oceans, less populated regions, and parts of developing countries lack adequate surface precipitation data. Satellites provide an optimal platform to measure precipitation globally. In the 1970s satellites started measuring precipitation and, over time, satellite precipitation sensors improved considerably. A major breakthrough was the 1998 launch of the joint NASA-Japan Aerospace Exploration Agency (JAXA) Tropical Rainfall Measuring Mission (TRMM). The TRMM spacecraft had both a multi-frequency passive microwave imaging radiometer for measuring wide-swath rainfall surface intensity and horizontal structures, and a single-frequency radar channel capable of generating 3D views of rain in clouds. In 2014, NASA and JAXA launched the Global Precipitation Measurement Core Observatory (GPM-CO) spacecraft carrying the most advanced precipitation sensors currently in space, including a dual-frequency precipitation radar and a well-calibrated, multi-frequency passive microwave radiometer. The GPM-CO was designed to measure precipitation rates from 0.2-110 mm/hr, to provide 3D particle size distributions, and to detect moderate to intense snow events, considerably improving over TRMM's capabilities. The GPM-CO serves as a reference for unifying data from a constellation of partner satellites to provide next-generation merged estimates globally and with high temporal (30 min) and spatial (0.1° × 0.1°) resolutions. GPM data have been used for observing hurricanes from the tropics to mid-latitudes; developing susceptibility maps for floods, landslides, and droughts; providing inputs into weather and climate models; and offering new insights into agricultural productivity and world health. The current status of GPM, its ongoing science, and the future plans will be

  5. Sex estimation based on tooth measurements using panoramic radiographs. (United States)

    Capitaneanu, Cezar; Willems, Guy; Jacobs, Reinhilde; Fieuws, Steffen; Thevissen, Patrick


    Sex determination is an important step in establishing the biological profile of unidentified human remains. The aims of the study were, firstly, to assess the degree of sexual dimorphism in permanent teeth based on digital tooth measurements performed on panoramic radiographs, and secondly, to identify sex-related tooth position-specific measurements, or combinations of such measurements, and to assess their applicability for potential sex determination. Two hundred digital panoramic radiographs (100 males, 100 females; age range 22-34 years) were retrospectively collected from the dental clinic files of the Dentomaxillofacial Radiology Center of the University Hospitals Leuven, Belgium, and imported into image enhancement software. Tooth length- and width-related variables were measured on all teeth in the upper and lower left quadrants, and ratios of variables were calculated. Univariate and multivariate analyses were performed to quantify the sex-discriminative value of the tooth position-specific variables and their combinations. The mandibular and maxillary canines showed the greatest sexual dimorphism, and tooth length variables had the highest discriminative potential. Compared to single variables, combining variables or ratios of variables did not substantially improve the discrimination between males and females. Considering that the discriminative ability values (area under the curve (AUC)) were not higher than 0.80, use of the currently studied dental variables for accurate sex estimation in forensic practice is not advocated.

  6. Measure of Landscape Heterogeneity by Agent-Based Methodology (United States)

    Wirth, E.; Szabó, Gy.; Czinkóczky, A.


    With the rapid increase of the world's population, efficient food production is one of the key factors of human survival. Since biodiversity and heterogeneity are the basis of sustainable agriculture, the authors set out to measure the heterogeneity of a chosen landscape. The EU farming and subsidizing policies (EEA, 2014) support landscape heterogeneity and diversity; nevertheless, exact measures and calculations beyond simple statistical parameters (standard deviation, mean) do not really exist. In the present paper the authors' goal is to find an objective, dynamic method that measures landscape heterogeneity. This is achieved with so-called agent-based modelling, in which randomly dispatched dynamic scouts record the observed land cover parameters and sum up the features of a new type of land. During the simulation the agents collect a Monte Carlo integral as a diversity landscape potential, which can be considered the unit of the `greening' measure. As a final product of the ABM method, a landscape potential map is obtained that can serve as a tool for objective decision making to support agricultural diversity.
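    The scout-and-integrate idea can be caricatured in a few lines: random agents sample the land-cover grid, and the Shannon entropy of what they see serves as the diversity potential. This is a simplified stand-in; the paper's exact scoring of newly observed land types is not reproduced here:

```python
import numpy as np
from collections import Counter

def landscape_diversity(cover: np.ndarray, n_agents: int = 2000,
                        seed: int = 0) -> float:
    """Agent-style Monte Carlo heterogeneity estimate: random scouts
    sample the land-cover grid; the Shannon index of the sampled
    classes is returned as the diversity potential."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, cover.shape[0], n_agents)
    cols = rng.integers(0, cover.shape[1], n_agents)
    counts = Counter(cover[rows, cols].tolist())
    p = np.array(list(counts.values()), float) / n_agents
    return float(-(p * np.log(p)).sum())

# Uniform two-class landscape: entropy close to ln 2 ~ 0.693.
grid = np.zeros((100, 100), int)
grid[:, 50:] = 1
print(round(landscape_diversity(grid), 2))
```

    Moving agents along trajectories instead of sampling independently, as the cited ABM does, additionally captures spatial configuration rather than composition alone.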

  7. A vision-based method for planar position measurement (United States)

    Chen, Zong-Hao; Huang, Peisen S.


    In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (XY-θZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) method to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are done to verify the feasibility of this method and to study the factors that influence the accuracy and precision of phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show the peak-to-peak resolutions to be 3.5 nm in the X axis, 8 nm in the Y axis and 4 μrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 μrad, respectively.
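    The core of a phase-based position estimate can be sketched as follows: displacing a periodic pattern shifts the phase of its fundamental DFT bin, so shift = -Δφ · period / 2π. This is a minimal one-axis illustration, not the paper's full XY-θZ estimator or its refined phase-error correction:

```python
import numpy as np

def pattern_shift(img_ref, img_moved, period_px):
    """Estimate the in-plane shift (in pixels) of a periodic pattern from
    the phase change of its fundamental DFT component along x."""
    def phase(img):
        f = np.fft.fft2(img)
        k = img.shape[1] // period_px          # fundamental frequency bin
        return np.angle(f[0, k])
    dphi = phase(img_moved) - phase(img_ref)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return -dphi * period_px / (2 * np.pi)

# Synthetic grating with an 8-pixel period, shifted by 1.25 px along x.
x = np.arange(64)
ref = np.tile(np.cos(2 * np.pi * x / 8), (16, 1))
moved = np.tile(np.cos(2 * np.pi * (x - 1.25) / 8), (16, 1))
print(round(pattern_shift(ref, moved, 8), 3))    # ~1.25
```

    Because the phase is continuous, sub-pixel (and with a fine pattern, nanometre-scale) resolution is possible even from a modest camera, which is what the reported resolutions exploit.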

  8. WSN-Based Space Charge Density Measurement System.

    Directory of Open Access Journals (Sweden)

    Dawei Deng

    Full Text Available It is generally acknowledged that a high voltage direct current (HVDC transmission line covers a large area, which makes cable-based space charge density monitoring inconvenient. Compared with traditional communication networks, a wireless sensor network (WSN offers small volume, high flexibility and strong self-organization, and thus shows great potential for solving the problem. Additionally, WSN is more suitable for constructing a distributed space charge density monitoring system, as it provides longer range and higher mobility. A distributed wireless system was designed for collecting and monitoring the space charge density under HVDC transmission lines; it has been widely applied in both the Chinese State Grid HVDC test base and power transmission projects. Experimental results demonstrated the system's adaptability to the complex electromagnetic environment under the transmission lines and its ability to meet the accurate, flexible, and stable measurement demands for space charge density.

  9. WSN-Based Space Charge Density Measurement System. (United States)

    Deng, Dawei; Yuan, Haiwen; Lv, Jianxun; Ju, Yong


    It is generally acknowledged that a high voltage direct current (HVDC) transmission line covers a large area, which makes cable-based space charge density monitoring inconvenient. Compared with traditional communication networks, a wireless sensor network (WSN) offers small volume, high flexibility and strong self-organization, and thus shows great potential for solving the problem. Additionally, WSN is more suitable for constructing a distributed space charge density monitoring system, as it provides longer range and higher mobility. A distributed wireless system was designed for collecting and monitoring the space charge density under HVDC transmission lines; it has been widely applied in both the Chinese State Grid HVDC test base and power transmission projects. Experimental results demonstrated the system's adaptability to the complex electromagnetic environment under the transmission lines and its ability to meet the accurate, flexible, and stable measurement demands for space charge density.

  10. A fingerprint based metric for measuring similarities of crystalline structures. (United States)

    Zhu, Li; Amsler, Maximilian; Fuhrer, Tobias; Schaefer, Bastian; Faraji, Somayeh; Rostami, Samare; Ghasemi, S Alireza; Sadeghi, Ali; Grauzinyte, Migle; Wolverton, Chris; Goedecker, Stefan


    Measuring similarities/dissimilarities between atomic structures is important for the exploration of potential energy landscapes. However, the cell vectors together with the coordinates of the atoms, which are generally used to describe periodic systems, are not directly suitable as fingerprints to distinguish structures. Based on a characterization of the local environment of all atoms in a cell, we introduce crystal fingerprints that can be calculated easily and define configurational distances between crystalline structures that satisfy the mathematical properties of a metric. This distance between two configurations is a measure of their similarity/dissimilarity and, in particular, it allows structures to be distinguished. The new method can be a useful tool within various energy landscape exploration schemes, such as minima hopping, random search, swarm intelligence algorithms, and high-throughput screenings.
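A toy version of the idea, assuming finite clusters rather than periodic cells: build a fingerprint from the sorted eigenvalues of a pairwise Gaussian overlap matrix (a simplification of the paper's fingerprint, invented here for illustration) and take a Euclidean norm between fingerprints as the configurational distance:

```python
import numpy as np

def fingerprint(positions):
    """Toy structural fingerprint: sorted eigenvalues of a pairwise
    Gaussian overlap matrix. Invariant to translation, rotation and
    atom ordering; a simplification of the paper's fingerprint."""
    pos = np.asarray(positions, dtype=float)
    d2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
    return np.sort(np.linalg.eigvalsh(np.exp(-d2 / 2.0)))

def distance(a, b):
    """Configurational distance between two equal-size structures:
    Euclidean norm between their fingerprint vectors."""
    return float(np.linalg.norm(fingerprint(a) - fingerprint(b)))

square = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
permuted = [square[i] for i in (2, 0, 3, 1)]          # same atoms, reordered
rotated = [[-y, x, z] for x, y, z in square]          # rigid 90-degree turn
stretched = [[1.2 * x, y, z] for x, y, z in square]   # genuinely different
print(distance(square, permuted) < 1e-9)   # True
print(distance(square, rotated) < 1e-9)    # True
print(distance(square, stretched) > 0.01)  # True
```

Because it inherits the norm's properties, this distance satisfies symmetry and the triangle inequality, which is the metric property the abstract emphasizes.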

  11. Relating quantum coherence and correlations with entropy-based measures. (United States)

    Wang, Xiao-Li; Yue, Qiu-Ling; Yu, Chao-Hua; Gao, Fei; Qin, Su-Juan


    Quantum coherence and quantum correlations are important quantum resources for quantum computation and quantum information. In this paper, using entropy-based measures, we investigate the relationships between quantum correlated coherence, which is the coherence between subsystems, and two main kinds of quantum correlations as defined by quantum discord as well as quantum entanglement. In particular, we show that quantum discord and quantum entanglement can be well characterized by quantum correlated coherence. Moreover, we prove that the entanglement measure formulated by quantum correlated coherence is lower and upper bounded by the relative entropy of entanglement and the entanglement of formation, respectively, and equal to the relative entropy of entanglement for all the maximally correlated states.
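One of the entropy-based measures discussed above, the relative entropy of coherence C(ρ) = S(diag(ρ)) − S(ρ), can be computed directly; the two test states below are standard textbook examples, not taken from the paper:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits; eigenvalues below tolerance are treated as zero."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def rel_entropy_of_coherence(rho):
    """Relative entropy of coherence: C(rho) = S(diag(rho)) - S(rho),
    with the diagonal taken in the fixed reference basis."""
    return von_neumann_entropy(np.diag(np.diag(rho))) - von_neumann_entropy(rho)

plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|: maximally coherent qubit
mixed = np.eye(2) / 2                        # maximally mixed: incoherent
print(round(rel_entropy_of_coherence(plus), 6))    # 1.0
print(round(rel_entropy_of_coherence(mixed), 6))   # 0.0
```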

  12. A fingerprint based metric for measuring similarities of crystalline structures

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Li; Fuhrer, Tobias; Schaefer, Bastian; Grauzinyte, Migle; Goedecker, Stefan, E-mail: [Department of Physics, Universität Basel, Klingelbergstr. 82, 4056 Basel (Switzerland); Amsler, Maximilian [Department of Physics, Universität Basel, Klingelbergstr. 82, 4056 Basel (Switzerland); Department of Materials Science and Engineering, Northwestern University, Evanston, Illinois 60208 (United States); Faraji, Somayeh; Rostami, Samare; Ghasemi, S. Alireza [Institute for Advanced Studies in Basic Sciences, P.O. Box 45195-1159, Zanjan (Iran, Islamic Republic of); Sadeghi, Ali [Physics Department, Shahid Beheshti University, G. C., Evin, 19839 Tehran (Iran, Islamic Republic of); Wolverton, Chris [Department of Materials Science and Engineering, Northwestern University, Evanston, Illinois 60208 (United States)


    Measuring similarities/dissimilarities between atomic structures is important for the exploration of potential energy landscapes. However, the cell vectors together with the coordinates of the atoms, which are generally used to describe periodic systems, are quantities not directly suitable as fingerprints to distinguish structures. Based on a characterization of the local environment of all atoms in a cell, we introduce crystal fingerprints that can be calculated easily and define configurational distances between crystalline structures that satisfy the mathematical properties of a metric. This distance between two configurations is a measure of their similarity/dissimilarity and it allows in particular to distinguish structures. The new method can be a useful tool within various energy landscape exploration schemes, such as minima hopping, random search, swarm intelligence algorithms, and high-throughput screenings.

  13. Defining and Computing a Value Based Cyber Security Measure

    Energy Technology Data Exchange (ETDEWEB)

    Aissa, Anis Ben [University of Tunis, Belvedere, Tunisia; Abercrombie, Robert K [ORNL; Sheldon, Frederick T [ORNL; Mili, Ali [New Jersey Institute of Technology]


    In earlier works (Ben-Aissa et al., 2010; Abercrombie et al., 2008; Sheldon et al., 2009), we presented a value-based measure of cybersecurity that quantifies the security of a system in concrete terms, specifically in terms of how much each system stakeholder stands to lose (in dollars per hour of operation) as a result of security threats and system vulnerabilities; the metric varies according to the stakes that each stakeholder has in meeting each security requirement. In this paper, we discuss the specification and design of a system that collects, updates, and maintains all the information that pertains to estimating our cybersecurity measure, and offers stakeholders quantitative means to make security-related decisions.
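The stakeholder-loss computation can be sketched as a chain of matrix products in the spirit of the value-based measure described above; every number below is invented example data, not from the cited works:

```python
import numpy as np

# Illustrative mean-failure-cost computation: stakes, conditional-probability
# matrices, and threat rates are all hypothetical example data.
stakes = np.array([[900.0, 200.0],      # $/h each stakeholder loses if a
                   [100.0, 800.0]])     # security requirement is violated
dependency = np.array([[0.8, 0.1],      # P(requirement violated | component fails)
                       [0.2, 0.9]])
impact = np.array([[0.5, 0.05],         # P(component fails | threat materializes)
                   [0.1, 0.6]])
threat = np.array([0.01, 0.02])         # P(threat materializes) per hour

# Mean failure cost per stakeholder, in dollars per hour of operation.
mfc = stakes @ dependency @ impact @ threat
print(mfc.round(2))
```

Each stakeholder's entry is a dollars-per-hour expectation, which is exactly the unit the abstract attributes to the measure.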

  14. A fingerprint based metric for measuring similarities of crystalline structures (United States)

    Zhu, Li; Amsler, Maximilian; Fuhrer, Tobias; Schaefer, Bastian; Faraji, Somayeh; Rostami, Samare; Ghasemi, S. Alireza; Sadeghi, Ali; Grauzinyte, Migle; Wolverton, Chris; Goedecker, Stefan


    Measuring similarities/dissimilarities between atomic structures is important for the exploration of potential energy landscapes. However, the cell vectors together with the coordinates of the atoms, which are generally used to describe periodic systems, are quantities not directly suitable as fingerprints to distinguish structures. Based on a characterization of the local environment of all atoms in a cell, we introduce crystal fingerprints that can be calculated easily and define configurational distances between crystalline structures that satisfy the mathematical properties of a metric. This distance between two configurations is a measure of their similarity/dissimilarity and it allows in particular to distinguish structures. The new method can be a useful tool within various energy landscape exploration schemes, such as minima hopping, random search, swarm intelligence algorithms, and high-throughput screenings.

  15. High-speed FPGA-based phase measuring profilometry architecture. (United States)

    Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng; Wang, Congjun


    This paper proposes a high-speed FPGA architecture for the phase measuring profilometry (PMP) algorithm. The whole PMP algorithm is designed and implemented based on the principle of full-pipeline and parallelism. The results show that the accuracy of the FPGA system is comparable with those of current top-performing software implementations. The FPGA system achieves 3D sharp reconstruction using 12 phase-shifting images and completes in 21 ms with 1024 × 768 pixel resolution. To the best of our knowledge, this is the first fully pipelined architecture for PMP systems, and this makes the PMP system very suitable for high-speed embedded 3D shape measurement applications.
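The arithmetic core of N-step phase measuring profilometry, which the FPGA pipeline parallelizes, is the arctangent combination of the phase-shifted images; a NumPy sketch with synthetic 12-step fringes (the image values are illustrative):

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase-shifting PMP: images[n] = A + B*cos(phi + 2*pi*n/N).
    Returns the wrapped phase map phi in (-pi, pi]."""
    n_steps = len(images)
    deltas = 2 * np.pi * np.arange(n_steps) / n_steps
    num = sum(im * np.sin(d) for im, d in zip(images, deltas))
    den = sum(im * np.cos(d) for im, d in zip(images, deltas))
    return np.arctan2(-num, den)

# Synthetic 12-step fringe images over a small test phase map.
phi_true = np.linspace(-2.0, 2.0, 5) * np.ones((3, 1))
images = [5.0 + 2.0 * np.cos(phi_true + 2 * np.pi * n / 12) for n in range(12)]
phi = wrapped_phase(images)
print(np.allclose(phi, phi_true))   # True
```

Because every pixel is independent, the per-pixel multiply-accumulate and arctangent map naturally onto a fully pipelined hardware datapath.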

  16. Estimating spacecraft attitude based on in-orbit sensor measurements

    DEFF Research Database (Denmark)

    Jakobsen, Britt; Lyn-Knudsen, Kevin; Mølgaard, Mathias


    ...of 2014/15. To better evaluate the performance of the payload, it is desirable to couple the payload data with the satellite's orientation. With AAUSAT3 already in orbit it is possible to collect data directly from space in order to evaluate the performance of the attitude estimation. An extended Kalman filter (EKF) is used for quaternion-based attitude estimation. A Simulink simulation environment developed for AAUSAT3, containing a "truth model" of the satellite and the orbit environment, is used to test the performance of the EKF. The performance is tested using different sensor noise parameters obtained both... solely on Earth or whether an in-orbit tuning/update of the algorithm is needed. Generally, sensor noise variances are larger in the in-orbit measurements than in the measurements obtained on ground. From Monte Carlo simulations with varying settings of the satellite inertia and initial time...

  17. Scintillator-based fast ion loss measurements in the EAST. (United States)

    Chang, J F; Isobe, M; Ogawa, K; Huang, J; Wu, C R; Xu, Z; Jin, Z; Lin, S Y; Hu, L Q


    A new scintillator-based fast ion loss detector (FILD) has been installed on the Experimental Advanced Superconducting Tokamak (EAST) to investigate fast ion loss behavior in high-performance plasma with neutral beam injection (NBI) and ion cyclotron resonance heating (ICRH). A two-dimensional 40 mm × 40 mm scintillator-coated (ZnS:Ag) stainless-steel plate is mounted at the front of the detector to capture the escaping fast ions. Photons from the scintillator plate are imaged with a Phantom V2010 CCD camera. Lost fast ions can be measured with pitch angles from 60° to 120° and gyroradii from 10 mm to 180 mm. This paper describes the details of the FILD diagnostic on EAST and presents preliminary measurements during NBI and ICRH.

  18. De novo likelihood-based measures for comparing genome assemblies. (United States)

    Ghodsi, Mohammadreza; Hill, Christopher M; Astrovskaya, Irina; Lin, Henry; Sommer, Dan D; Koren, Sergey; Pop, Mihai


    The current revolution in genomics has been made possible by software tools called genome assemblers, which stitch together DNA fragments "read" by sequencing machines into complete or nearly complete genome sequences. Despite decades of research in this field and the development of dozens of genome assemblers, assessing and comparing the quality of assembled genome sequences still relies on the availability of independently determined standards, such as manually curated genome sequences, or independently produced mapping data. These "gold standards" can be expensive to produce and may only cover a small fraction of the genome, which limits their applicability to newly generated genome sequences. Here we introduce a de novo  probabilistic measure of assembly quality which allows for an objective comparison of multiple assemblies generated from the same set of reads. We define the quality of a sequence produced by an assembler as the conditional probability of observing the sequenced reads from the assembled sequence. A key property of our metric is that the true genome sequence maximizes the score, unlike other commonly used metrics. We demonstrate that our de novo  score can be computed quickly and accurately in a practical setting even for large datasets, by estimating the score from a relatively small sample of the reads. To demonstrate the benefits of our score, we measure the quality of the assemblies generated in the GAGE and Assemblathon 1 assembly "bake-offs" with our metric. Even without knowledge of the true reference sequence, our de novo  metric closely matches the reference-based evaluation metrics used in the studies and outperforms other de novo  metrics traditionally used to measure assembly quality (such as N50). Finally, we highlight the application of our score to optimize assembly parameters used in genome assemblers, which enables better assemblies to be produced, even without prior knowledge of the genome being assembled. Likelihood-based
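The conditional probability of the reads given an assembly can be illustrated with a deliberately tiny model: each read is scored by summing a per-base error model over all ungapped alignments. This is a sketch of the scoring principle, not the paper's implementation (the error rate, sequences, and normalization are invented):

```python
import math

def read_log_likelihood(read, assembly, err=0.01):
    """log P(read | assembly) under a simple per-base error model, summing
    over all ungapped alignments (reverse strand omitted for brevity)."""
    k = len(read)
    starts = max(len(assembly) - k + 1, 1)
    p = 0.0
    for i in range(len(assembly) - k + 1):
        mism = sum(a != b for a, b in zip(read, assembly[i:i + k]))
        p += ((1 - err) ** (k - mism)) * ((err / 3) ** mism)
    return math.log(p / starts)

def assembly_score(reads, assembly):
    """Average per-read log-likelihood; in expectation the true sequence
    maximizes this score, as argued above."""
    return sum(read_log_likelihood(r, assembly) for r in reads) / len(reads)

truth = "ACGTACGGTTCAGCA"
reads = [truth[i:i + 6] for i in range(0, 9, 2)]       # error-free reads
good = assembly_score(reads, truth)
bad = assembly_score(reads, "ACGTTTTTTTCAGCA")         # mis-assembled variant
print(good > bad)   # True
```

The sampling trick mentioned in the abstract corresponds to evaluating this average over a random subset of reads rather than all of them.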

  19. An Innovative Transponder-Based Interferometric Radar for Vibration Measurements (United States)

    Coppi, F.; Cerutti, A.; Farina, P.; De Pasquale, G.; Novembrini, G.


    Ground-based radar interferometry has recently emerged as an innovative remote sensing technology, able to accurately measure the static or dynamic displacement of several points of a structure. In the last couple of years this technique has been applied to different types of structures, such as bridges, towers and chimneys. This paper presents a prototype system developed by IDS, originally aimed at measuring the structural vibrations of helicopter rotor blades, based on an interferometric technique and consisting of a radar sensor combined with a series of transponders installed on the target structure. The main advantages of this solution with respect to conventional interferometric radars are the increased spatial resolution of the system, provided by the possibility to discriminate different transponders installed within the same resolution cell of the radar sensor, and the reduction of ambient noise (e.g. multi-path) in the radar measurement. The first feature allows the use of microwave technology even on target areas with limited dimensions, such as industrial facilities, while the second may extend the use of radar interferometric systems to complex scenarios where multi-reflections are expected due to the presence of natural targets with high reflectivity to the radar signal. In the paper, the system and its major characteristics are first described; subsequently, its application to the measurement of the ambient vibration response of a lab set-up is summarized. Then the data acquired on a rotating mock-up are reported and analyzed to identify the natural frequencies and mode shapes of the investigated structure.
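The displacement sensitivity of radar interferometry rests on a one-line relation: a line-of-sight motion d changes the two-way signal phase by 4πd/λ. A sketch with an assumed Ku-band wavelength (the 17.4 mm value is illustrative, not the IDS system's):

```python
import math

def displacement_mm(delta_phase_rad, wavelength_mm=17.4):
    """Interferometric displacement: a line-of-sight motion d changes the
    two-way phase by 4*pi*d/wavelength, so d = wavelength*dphi/(4*pi)."""
    return wavelength_mm * delta_phase_rad / (4 * math.pi)

# A quarter-cycle phase change corresponds to one eighth of a wavelength.
print(round(displacement_mm(math.pi / 2), 4))   # 2.175
```

Since phase can be resolved to a small fraction of a cycle, sub-millimetre vibration amplitudes become measurable at tens of metres.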

  20. Height estimations based on eye measurements throughout a gait cycle. (United States)

    Yang, Sylvia X M; Larsen, Peter K; Alkjær, Tine; Juul-Kristensen, Birgit; Simonsen, Erik B; Lynnerup, Niels


    Anthropometric measurements (e.g. the height to the head, nose tip, eyes or shoulders) of a perpetrator based on video material may be used in criminal cases. However, several height measurements may be difficult to assess as the perpetrators may be disguised by clothes or headwear. The eye height (EH) measurement, on the other hand, is less prone to concealment. The purpose of the present study was to investigate: (1) how the eye height varies during the gait cycle, and (2) how the eye height changes with head position. The eyes were plotted manually in APAS for 16 test subjects during a complete gait cycle. The influence of head tilt on the EH was investigated in 20 healthy men. Markers were attached to the face and the subjects were instructed to stand relaxed, tilt their head to the right, to the left, forward and backward. The marker data for the right eye were used to calculate the EH. The respective deviation and SD from the relaxed standing EH and the EH in the Frankfurt plane, left tilted, right tilted, forward tilted and backward tilted, in addition to the corresponding head tilt angles were calculated. There was no correlation between the height of the subject and the maximum vertical displacement of the EH throughout the gait cycle nor between height of the subjects and the variation of the EH throughout the gait cycle. The average maximum vertical displacement for the test subject group was 4.76 cm (± 1.56 cm). The average EH was lower when the subjects were standing in the relaxed position than in the Frankfurt plane. The average EH was higher in the relaxed position than when the subjects tilted their heads, except when they tilted their heads backwards. The subjects had a slightly larger range of motion to the right than to the left, which was not significant. The results of this study provide a range for eye height estimates and may be readily implemented in forensic case work. 
It can be used as a reference in height estimates in cases with height

  1. AC dipole based optics measurement and correction at RHIC

    CERN Document Server

    Shen, X; Bai, M; White, S; Robert-Domolaize, G; Luo, Y; Marusic, A; Tomas, R


    Independent component analysis (ICA) was applied to the AC dipole based optics measurements at RHIC to extract beta functions as well as phase advances at each BPM. Excessive beta-beat was observed in both rings of RHIC at polarized-proton store energy. A unique global optics correction scheme was then developed and tested successfully during the RHIC polarized proton run in 2013. The feasibility of using a horizontal closed-orbit bump at a sextupole for arc beta-beat correction was also demonstrated.

  2. Preparation and measurement of TFBG based vibration sensor (United States)

    Helan, Radek; Urban, Frantisek; Mikel, Bretislav; Urban, Frantisek


    We present a vibration fiber sensor based on a tilted fiber Bragg grating (TFBG) and a fiber taper. The sensor uses the TFBG as a cladding-mode reflector and the fiber taper as a bend-sensitive recoupling member. The lower cladding mode (ghost), reflected from the TFBG, is recoupled back into the fiber core via the tapered fiber section. We focused on optimizing the TFBG tilt angle and the taper parameters to reach maximum reflection of the ghost. Comparative measurements were made using an optical spectrum analyzer and a superluminescent diode as a broadband light source. We present the dependence between the intensity of the recoupled ghost mode and the sensor deflection.

  3. Establishing maintenance intervals based on measurement reliability of engineering endpoints. (United States)

    James, P J


    Methods developed by the metrological community and principles used by the research community were integrated to provide a basis for a periodic maintenance interval analysis system. Engineering endpoints are used as measurement attributes on which to base two primary quality indicators: accuracy and reliability. Also key to establishing appropriate maintenance intervals is the ability to recognize two primary failure modes: random failure and time-related failure. The primary objective of the maintenance program is to avert predictable and preventable device failure, and understanding time-related failures enables service personnel to set intervals accordingly.

  4. Geometric, Kinematic and Radiometric Aspects of Image-Based Measurements (United States)

    Liu, Tianshu


    This paper discusses theoretical foundations of quantitative image-based measurements for extracting and reconstructing geometric, kinematic and dynamic properties of observed objects. New results are obtained by using a combination of methods in perspective geometry, differential geometry, radiometry, kinematics and dynamics. Specific topics include the perspective projection transformation, the perspective developable conical surface, perspective projection under surface constraint, perspective invariants, the point correspondence problem, motion fields of curves and surfaces, and motion equations of image intensity. The methods given in this paper are useful for determining the morphology and motion fields of deformable bodies such as elastic bodies, viscoelastic media and fluids.
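The perspective projection transformation at the root of these methods is compact enough to state directly; a minimal sketch (the focal length and points are arbitrary):

```python
def project(point, focal=1.0):
    """Pinhole perspective projection: (x, y, z) -> (f*x/z, f*y/z).
    All points on a ray through the optical center share one image point,
    which is why image-based reconstruction needs extra constraints."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

print(project((2.0, 4.0, 2.0)))   # (1.0, 2.0)
print(project((4.0, 8.0, 4.0)))   # (1.0, 2.0): depth is lost
```

The loss of depth along each ray is precisely what motivates the surface constraints and perspective invariants discussed in the paper.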

  5. GNSS Spoofing Detection Based on Signal Power Measurements: Statistical Analysis

    Directory of Open Access Journals (Sweden)

    V. Dehghanian


    Full Text Available A threat to GNSS receivers is posed by a spoofing transmitter that emulates authentic signals but with randomized code phase and Doppler values over a small range. Such spoofing signals can result in large navigational solution errors that are passed on to the unsuspecting user with potentially dire consequences. An effective spoofing detection technique, based on signal power measurements, is developed in this paper; it can be readily applied to present consumer-grade GNSS receivers with minimal firmware changes. An extensive statistical analysis is carried out by formulating a multi-hypothesis detection problem. Expressions are developed to devise the set of thresholds required for signal detection and identification. The detection processing methods are further extended to exploit incidental antenna motion arising from user interaction with a GNSS handheld receiver to further enhance the detection performance of the proposed algorithm. The statistical analysis supports the effectiveness of the proposed spoofing detection technique under various multipath conditions.
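The multi-hypothesis decision can be caricatured as a threshold test on received power: authentic GNSS signals arrive near a known nominal level, while a spoofer typically overpowers them. The thresholds below are invented for illustration, not the calibrated values derived in the paper:

```python
def classify_power(p_dbm, nominal=-130.0, margin=3.0, spoof_margin=8.0):
    """Toy multi-hypothesis test on received signal power (dBm): H0 authentic,
    H1 suspicious, H2 likely spoofed. Thresholds are illustrative only."""
    excess = p_dbm - nominal
    if excess < margin:
        return "authentic"
    if excess < spoof_margin:
        return "suspicious"
    return "spoofed"

print(classify_power(-129.0))   # authentic
print(classify_power(-125.5))   # suspicious
print(classify_power(-118.0))   # spoofed
```

The paper's statistical analysis is what turns such ad hoc margins into thresholds with controlled false-alarm and detection probabilities.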

  6. Measuring children's social skills using microcomputer-based videodisc assessment. (United States)

    Irvin, L K; Walker, H M; Noell, J; Singer, G H; Irvine, A B; Marquez, K; Britz, B


    This article describes the development of a microcomputer-based videodisc assessment prototype for measuring children's social skills. The theoretical and empirical foundations for the content are described, and the contributions of interactive microcomputer-based video technology to assessment of children with handicaps are detailed. An application of Goldfried and D'Zurilla's "behavior-analytic" approach to development of the content of assessments is presented, and the related video and computer technology development is detailed. The article describes the conceptual foundations of the psychometrics of the assessment prototype as well as the psychometric methodology that was employed throughout the development process. Finally, a discussion of the potential applications and implications of the social skills assessment prototype is included.

  7. Air temperature measurements based on the speed of sound to compensate long distance interferometric measurements

    Directory of Open Access Journals (Sweden)

    Astrua Milena


    Full Text Available A method to measure the real-time temperature distribution along an interferometer path based on the propagation of acoustic waves is presented. It exploits the high sensitivity of the speed of sound in air to the air temperature. In particular, it takes advantage of a special set-up where the generation of the acoustic waves is synchronous with the amplitude modulation of a laser source. A photodetector converts the laser light to an electronic signal considered as reference, while the incoming acoustic waves are focused on a microphone and generate a second signal. In this condition, the phase difference between the two signals substantially depends on the temperature of the air volume interposed between the sources and the receivers. The comparison with traditional temperature sensors highlighted the limits of the latter in the case of fast temperature variations, and the advantage of a measurement integrated along the optical path instead of a point-sampling measurement. The capability of the acoustic method to compensate interferometric distance measurements for air temperature variations has been demonstrated for distances up to 27 m.
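The underlying inversion is simple: measure the acoustic time of flight over a known path, convert it to a speed of sound, and invert the standard temperature dependence. A sketch using the common ideal-gas approximation (the 27 m path matches the abstract; the formula constants are textbook values, not the paper's calibration):

```python
import math

def air_temperature_from_tof(distance_m, time_s):
    """Air temperature (degrees C) inferred from the speed of sound over a
    known path, using the approximation c = 331.3*sqrt(1 + T/273.15) m/s."""
    c = distance_m / time_s
    return 273.15 * ((c / 331.3) ** 2 - 1.0)

# At 20 degC sound travels at about 343.2 m/s, so a 27 m path takes ~78.7 ms.
c_20 = 331.3 * math.sqrt(1 + 20.0 / 273.15)
t = 27.0 / c_20
print(round(air_temperature_from_tof(27.0, t), 6))   # 20.0
```

Because the time of flight integrates the speed of sound over the whole path, the recovered temperature is automatically a path average, which is the advantage over point sensors noted above.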

  8. Measuring participant rurality in Web-based interventions

    Directory of Open Access Journals (Sweden)

    McKay H Garth


    Full Text Available Abstract Background Web-based health behavior change programs can reach large groups of disparate participants and thus they provide promise of becoming important public health tools. Data on participant rurality can complement other demographic measures to deepen our understanding of the success of these programs. Specifically, analysis of participant rurality can inform recruitment and social marketing efforts, and facilitate the targeting and tailoring of program content. Rurality analysis can also help evaluate the effectiveness of interventions across population groupings. Methods We describe how the RUCAs (Rural-Urban Commuting Area Codes) methodology can be used to examine results from two Randomized Controlled Trials of Web-based tobacco cessation programs: the ChewFree project for smokeless tobacco cessation and the Smokers' Health Improvement Program (SHIP) project for smoking cessation. Results Using the RUCAs methodology helped to highlight the extent to which both Web-based interventions reached a substantial percentage of rural participants. The ChewFree program was found to have more rural participation, which is consistent with the greater prevalence of smokeless tobacco use in rural settings as well as ChewFree's multifaceted recruitment program that specifically targeted rural settings. Conclusion Researchers of Web-based health behavior change programs targeted to the US should routinely include RUCAs as a part of analyzing participant demographics. Researchers in other countries should examine rurality indices germane to their country.

  9. Service quality measurement. A new approach based on Conjoint Analysis

    Directory of Open Access Journals (Sweden)

    Valerio Gatta


    Full Text Available This article is concerned with the measurement of service quality. The main objective is to suggest an alternative criterion for service quality definition and measurement. After a brief description of the most traditional techniques, and with the intent to overcome some critical factors pertaining to them, I focus my attention on choice-based conjoint analysis, a particular stated-preferences method that estimates the structure of consumers' preferences given their choices between alternative service options. Discrete choice models and the traditional compensatory utility-maximization framework are extended by the inclusion of attribute cutoffs in the decision problem formulation. The major theoretical aspects of the described approach are examined and discussed, showing that it is able to identify the relative importance of the relevant attributes, to calculate elasticities and monetary evaluations, and to determine a service quality index. Simulations then enable the identification of potential service quality levels, so that marketing managers have valuable information to plan their best business strategies. We present findings from an empirical study in the public transport sector designed to gain insights into the use of choice-based conjoint analysis.
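The discrete choice models underlying choice-based conjoint typically take multinomial logit form: each service profile gets a systematic utility, and choice probabilities follow from a softmax over those utilities. A sketch with hypothetical utilities (not the study's estimates):

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit at the core of choice-based conjoint analysis:
    P(option i) is proportional to exp(V_i), the systematic utility."""
    exps = [math.exp(v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities of three transport-service profiles.
probs = choice_probabilities([1.2, 0.4, -0.3])
print(round(sum(probs), 6))              # 1.0
print(probs[0] > probs[1] > probs[2])    # True
```

Attribute-level coefficients estimated from such a model are what yield the relative importances, elasticities, and monetary valuations mentioned above.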

  10. Quantum Jarzynski equality of measurement-based work extraction. (United States)

    Morikuni, Yohei; Tajima, Hiroyasu; Hatano, Naomichi


    Many studies of quantum-size heat engines assume that the dynamics of an internal system is unitary and that the extracted work is equal to the energy loss of the internal system. Both assumptions, however, should be under scrutiny. In the present paper, we analyze quantum-scale heat engines, employing the measurement-based formulation of the work extraction recently introduced by Hayashi and Tajima [M. Hayashi and H. Tajima, arXiv:1504.06150]. We first demonstrate the inappropriateness of the unitary time evolution of the internal system (namely, the first assumption above) using a simple two-level system; we show that the variance of the energy transferred to an external system diverges when the dynamics of the internal system is approximated to a unitary time evolution. Second, we derive the quantum Jarzynski equality based on the formulation of Hayashi and Tajima as a relation for the work measured by an external macroscopic apparatus. The right-hand side of the equality reduces to unity for "natural" cyclic processes but fluctuates wildly for noncyclic ones, exceeding unity often. This fluctuation should be detectable in experiments and provide evidence for the present formulation.
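The classical (non-measurement-based) Jarzynski equality that the paper generalizes can be checked numerically on the simplest possible protocol: a sudden quench of a two-level system's energy gap. This is a standard textbook exercise, not the paper's derivation:

```python
import math

def jarzynski_two_level(beta, e0, e1):
    """Sudden quench of a two-level gap from e0 to e1, starting in thermal
    equilibrium: returns (<exp(-beta*W)>, exp(-beta*dF)), which the
    Jarzynski equality says are equal."""
    z0 = 1.0 + math.exp(-beta * e0)
    z1 = 1.0 + math.exp(-beta * e1)
    p_exc = math.exp(-beta * e0) / z0          # initial excited-state weight
    avg = (1.0 - p_exc) + p_exc * math.exp(-beta * (e1 - e0))
    delta_f = -(math.log(z1) - math.log(z0)) / beta
    return avg, math.exp(-beta * delta_f)

lhs, rhs = jarzynski_two_level(beta=2.0, e0=1.0, e1=3.0)
print(abs(lhs - rhs) < 1e-12)   # True
```

In the measurement-based formulation discussed above, the work W is instead read off an external apparatus, and it is the right-hand side of the equality that acquires the fluctuations the authors analyze.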

  11. A Method to Measure the Bracelet Based on Feature Energy (United States)

    Liu, Hongmin; Li, Lu; Wang, Zhiheng; Huo, Zhanqiang


    To measure the bracelet automatically, a novel method based on feature energy is proposed. Firstly, the morphological method is utilized to preprocess the image, and the contour consisting of concentric circles is extracted. Then, a feature energy function, which is relevant to the distances from one pixel to the edge points, is defined taking into account the geometric properties of the concentric circles. The input image is subsequently transformed to the feature energy distribution map (FEDM) by computing the feature energy of each pixel. The center of the concentric circles is thus located by detecting the maximum on the FEDM; meanwhile, the radii of the concentric circles are determined according to the feature energy function of the center pixel. Finally, with the use of a calibration template, the internal diameter and thickness of the bracelet are measured. The experimental results show that the proposed method can measure the true sizes of the bracelet accurately, with simplicity, directness and robustness compared to existing methods.

  12. Analogy between gambling and measurement-based work extraction (United States)

    Vinkler, Dror A.; Permuter, Haim H.; Merhav, Neri


    In information theory, one area of interest is gambling, where mutual information characterizes the maximal gain in wealth growth rate due to knowledge of side information; the betting strategy that achieves this maximum is named the Kelly strategy. In the field of physics, it was recently shown that mutual information can characterize the maximal amount of work that can be extracted from a single heat bath using measurement-based control protocols, i.e. using ‘information engines’. However, to the best of our knowledge, no relation between gambling and information engines has been presented before. In this paper, we briefly review the two concepts and then demonstrate an analogy between gambling, where bits are converted into wealth, and information engines, where bits representing measurements are converted into energy. From this analogy follows an extension of gambling to the continuous-valued case, which is shown to be useful for investments in currency exchange rates or in the stock market using options. Moreover, the analogy enables us to use well-known methods and results from one field to solve problems in the other. We present three such cases: maximum work extraction when the probability distributions governing the system and measurements are unknown, work extraction when some energy is lost in each cycle, e.g. due to friction, and an analysis of systems with memory. In all three cases, the analogy enables us to use known results in order to obtain new ones.
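The Kelly side of the analogy is easy to make concrete: for a race with odds o_i, betting fractions equal to the believed outcome probabilities maximize the doubling rate W = Σ p_i log2(o_i p_i). The two-outcome numbers below are an illustrative example, not from the paper:

```python
import math

def doubling_rate(probs, odds):
    """Kelly doubling rate for a race with odds o_i: betting fractions equal
    to the believed probabilities give W = sum_i p_i * log2(o_i * p_i)."""
    return sum(p * math.log2(o * p) for p, o in zip(probs, odds) if p > 0)

odds = [2.0, 2.0]            # fair 2-for-1 odds on two outcomes
uninformed = [0.5, 0.5]      # no side information: no growth
informed = [0.9, 0.1]        # side information sharpens the belief

w0 = doubling_rate(uninformed, odds)
w1 = doubling_rate(informed, odds)
print(round(w0, 6))          # 0.0
print(round(w1, 3))          # 0.531 bits per bet gained from the tip
```

The gain w1 − w0 is exactly the relative entropy between the informed and uninformed beliefs, the information-theoretic quantity that the analogy maps onto extractable work.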

  13. Heart rate measurement based on face video sequence (United States)

    Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian


    This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects, and detected remote PPG signals through video sequences. Remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, and CSPT is used for the first time in the study of remote PPG signals in this paper. Both of the methods can acquire heart rate, but compared with BSST, CSPT has clearer physical meaning, and the computational complexity of CSPT is lower than that of BSST. Our work shows that heart rates detected by CSPT method have good consistency with the heart rates measured by a finger clip oximeter. With good accuracy and low computational complexity, the CSPT method has a good prospect for the application in the field of home medical devices and mobile health devices.
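
    The abstract does not detail the CSPT computation, but the core of any remote-PPG pipeline is recovering the pulse frequency from a per-frame camera trace. A minimal, hedged stand-in (dominant FFT peak of a mean green-channel signal restricted to the physiological band; all parameter values are illustrative):

    ```python
    import numpy as np

    def heart_rate_bpm(signal, fps, band=(0.75, 4.0)):
        """Estimate heart rate as the dominant spectral peak of a remote-PPG
        trace (e.g. the mean green-channel value per frame), restricted to
        the physiological band."""
        x = signal - np.mean(signal)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        power = np.abs(np.fft.rfft(x)) ** 2
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return 60.0 * freqs[in_band][np.argmax(power[in_band])]

    # synthetic 30 fps trace: 1.2 Hz pulse (72 bpm) plus camera noise
    rng = np.random.default_rng(0)
    t = np.arange(0, 20, 1 / 30)                 # 20 s of video
    trace = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
    bpm = heart_rate_bpm(trace, fps=30)
    ```

    The paper's BSST and CSPT variants differ in how they separate the pulse component from motion and illumination artifacts before this frequency-estimation step.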

  14. Coordinate measuring system based on microchip lasers for reverse prototyping (United States)

    Iakovlev, Alexey; Grishkanich, Alexsandr S.; Redka, Dmitriy; Tsvetkov, Konstantin


    Owing to the current great interest in Large-Scale Metrology applications in many different fields of manufacturing industry, technologies and techniques for dimensional measurement have recently shown substantial improvement. Ease of use, logistic and economic issues, as well as metrological performance, are assuming an increasingly important role among system requirements. The project will conduct experimental studies aimed at identifying the impact of applying chip and microlasers as radiators on the linear-angular characteristics of existing measurement systems. The system consists of a distributed network-based layout, whose modularity allows it to fit differently sized and shaped working volumes by adequately increasing the number of sensing units. Unlike existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data elaboration capabilities, in order to share the overall computational load.

  15. Electroencephalogram measurement using polymer-based dry microneedle electrode (United States)

    Arai, Miyako; Nishinaka, Yuya; Miki, Norihisa


    In this paper, we report successful electroencephalogram (EEG) measurement using polymer-based dry microneedle electrodes. The electrodes consist of needle-shaped SU-8 substrates, a silver film, and a nanoporous parylene protective film. Unlike conventional wet electrodes, microneedle electrodes require neither skin preparation nor a conductive gel. SU-8 is harder than the poly(dimethylsiloxane) (PDMS; Dow Corning Toray Sylgard 184) used in our previous work, which makes it superior as a structural material and facilitates penetration of the needles through the stratum corneum. The SU-8 microneedles could be inserted into the skin without breaking and maintained a skin-electrode contact impedance low enough for EEG measurement. The electrodes successfully measured EEG from the frontal pole, and the quality of the acquired signals was verified to be as high as that of signals obtained using commercially available wet electrodes, without any skin preparation or conductive gel. The electrodes are readily applicable to long-term recording of brain activity, with little of the stress that skin preparation imposes on users.

  16. An optomechatronic curvature measurement array based on fiber Bragg gratings (United States)

    Chang, Hsing-Cheng; Chang, I.-Nan; Chen, Ya-Hui; Lin, Shyan-Lung; Hung, San-Shan; Lin, Jung-Chih; Liu, Wen-Fung


    This study investigated an optomechatronic array, an integrated signal processing module, and a human-machine interface based on fiber Bragg grating sensing elements embedded in an elastic support matrix, using a self-located electromagnetic mechanism for curvature sensing and solid contour reconstruction. Using bilinear interpolation and averaging, the smooth surface contours of convex and concave lenses are reconstructed accurately in real time. The elastic supporting optical sensing array is self-balanced to reduce operational errors. Compared with our previous single-head sensor, the sensitivity of the proposed array is improved by more than 15%. In the curvature range from -20.15 to +27.09 m-1, the sensitivities are 3.53 pm·m for the convex measurement and 2.15 pm·m for the concave measurement, with an error rate below 8.89%. The curvature resolutions are 0.283 and 0.465 m-1 for convex and concave lenses, respectively. This array could be applied to the curvature measurement of solar collectors to monitor energy conversion efficiency, or to monitoring the wafer-level thin-film fabrication process.

  17. Accurate fluid force measurement based on control surface integration (United States)

    Lentink, David


    Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques address this: 3D particle image velocimetry (PIV) and a new technique, the aerodynamic force platform (AFP). Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make this error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor provided that the body has a sufficiently homogeneous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force, 1 - ρf/(ρb + ρf), depends only on the fluid density, ρf, and the body density, ρb. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force. The analysis shows that accounting for the unsteady body force is straightforward to non




  19. Eigenvector-based centrality measures for temporal networks (United States)


    Taylor, Dane; Myers, Sean A; Clauset, Aaron; Porter, Mason A; Mucha, Peter J


    Numerous centrality measures have been developed to quantify the importances of nodes in time-independent networks, and many of them can be expressed as the leading eigenvector of some matrix. With the increasing availability of network data that changes in time, it is important to extend such eigenvector-based centrality measures to time-dependent networks. In this paper, we introduce a principled generalization of network centrality measures that is valid for any eigenvector-based centrality. We consider a temporal network with N nodes as a sequence of T layers that describe the network during different time windows, and we couple centrality matrices for the layers into a supra-centrality matrix of size NT × NT whose dominant eigenvector gives the centrality of each node i at each time t. We refer to this eigenvector and its components as a joint centrality, as it reflects the importances of both the node i and the time layer t. We also introduce the concepts of marginal and conditional centralities, which facilitate the study of centrality trajectories over time. We find that the strength of coupling between layers is important for determining multiscale properties of centrality, such as localization phenomena and the time scale of centrality changes. In the strong-coupling regime, we derive expressions for time-averaged centralities, which are given by the zeroth-order terms of a singular perturbation expansion. We also study first-order terms to obtain first-order-mover scores, which concisely describe the magnitude of nodes' centrality changes over time. As examples, we apply our method to three empirical temporal networks: the United States Ph.D. exchange in mathematics, costarring relationships among top-billed actors during the Golden Age of Hollywood, and citations of decisions from the United States Supreme Court.
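
    The supra-centrality construction can be sketched in a few lines of numpy. This is a simplified instance of the paper's framework: the layer centrality matrix is taken to be the adjacency matrix itself (i.e. eigenvector centrality), and consecutive layers are coupled symmetrically with strength ω via identity blocks; the paper's construction is more general.

    ```python
    import numpy as np

    def supra_centrality(layers, omega):
        """Couple per-layer centrality matrices into an NT x NT
        supra-centrality matrix with inter-layer coupling omega between
        consecutive time layers; return the joint centralities W(t, i)
        plus the marginal node and layer centralities."""
        T, N = len(layers), layers[0].shape[0]
        S = np.zeros((N * T, N * T))
        for t, C in enumerate(layers):                 # block diagonal
            S[t*N:(t+1)*N, t*N:(t+1)*N] = C
        I = np.eye(N)
        for t in range(T - 1):                         # nearest-layer coupling
            S[t*N:(t+1)*N, (t+1)*N:(t+2)*N] = omega * I
            S[(t+1)*N:(t+2)*N, t*N:(t+1)*N] = omega * I
        vals, vecs = np.linalg.eigh(S)                 # S is symmetric here
        v = np.abs(vecs[:, np.argmax(vals)])           # dominant eigenvector
        joint = (v / v.sum()).reshape(T, N)
        return joint, joint.sum(axis=0), joint.sum(axis=1)

    # toy temporal network: 3 nodes, 2 time layers
    A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # node 0 central early
    A2 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], float)  # nodes 1, 2 later
    joint, node_c, layer_c = supra_centrality([A1, A2], omega=1.0)
    ```

    The marginal node centrality sums the joint centrality over layers, and the conditional centrality of the paper would normalize each layer's row of `joint` by the corresponding layer marginal.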

  20. Creating synergy between ground and space-based precipitation measurements (United States)

    Gourley, J. J.; Hong, Y.; Petersen, W. A.; Howard, K.; Flamig, Z.; Wen, Y.


    As the successor of the Tropical Rainfall Measuring Mission (TRMM) satellite launched in 1997, the multi-national Global Precipitation Measurement (GPM) Mission, to be launched in 2013, will provide next-generation global precipitation estimates from space within a unified framework. On the ground, several countries worldwide are expanding their weather radar networks with gap-filling radars and upgrading them to include polarimetric capabilities. While significant improvements in precipitation estimation capabilities have been realized from space- and ground-based platforms separately, little effort has been focused on aligning these communities for synergistic, joint development of algorithms. In this study, we demonstrate the integration of real-time rainfall products from TRMM into the National Severe Storms Laboratory's (NSSL) National Mosaic and QPE (NMQ/Q2) system. The NMQ system enables a CONUS-wide comparison of TRMM products to NEXRAD-based Q2 rainfall products. Moreover, NMQ's ground validation software ingests and quality-controls data from all automatic-reporting rain gauge networks throughout the US and provides robust graphical and statistical validation tools, accessible to anyone with internet access. This system will readily incorporate future products from GPM as well as those from the dual-polarization upgrade to the NEXRAD network. While initial efforts focus on the intercomparison of rainfall products, we envision that this system will ultimately promote the development of precipitation algorithms that capitalize on the strengths of the spatiotemporal and error characteristics of space and ground remote-sensing data. An example algorithm is presented in which the vertical structure of precipitating systems over complex terrain is more completely resolved using combined information from NMQ and the TRMM precipitation radar (PR), leading to more accurate surface rainfall estimates.

  1. Gait assessment system based on novel gait variability measures. (United States)

    Wang, Xingchen; Ristic-Durrant, Danijela; Spranger, Matthias; Graser, Axel


    In this paper, a novel gait assessment system is proposed, based on measures of gait variability reflected in the variability of the shapes of gait-cycle trajectories. The system uses an SVM (support vector machine) classifier and gait-variability features calculated from the hip and knee joint angle trajectories recorded with wearable IMUs during walking trials. The classifier was trained to distinguish healthy gait patterns from pathological ones. The features were extracted by calculating the distances between the joint trajectories of individual gait cycles using 4 different distance functions. As a result, the system provides a Gait Variability Index (GVI), a numeric value that indicates the degree to which a pathological gait pattern approaches a healthy gait pattern. The system and the GVI were tested in three experiments involving subjects suffering from gait disorders caused by different neurological diseases. The results demonstrated that the proposed gait assessment system is suitable for supporting clinicians in the evaluation of gait performance during gait rehabilitation procedures.

  2. Measuring information-based energy and temperature of literary texts (United States)

    Chang, Mei-Chu; Yang, Albert C.-C.; Eugene Stanley, H.; Peng, C.-K.


    We apply a statistical method, information-based energy, to quantify informative symbolic sequences. To apply this method to literary texts, it is assumed that different words with different occurrence frequencies are at different energy levels, and that the energy-occurrence frequency distribution obeys a Boltzmann distribution. The temperature within the Boltzmann distribution can serve as an indicator of the author's writing capacity, i.e., the repertory of thoughts. The relative temperature of a text is obtained by comparing the energy-occurrence frequency distributions of words collected from one text versus from all texts of the same author. Combining the relative temperature with the Shannon entropy as the text complexity, the information-based energy of the text is defined and can be viewed as a quantitative evaluation of an author's writing performance. We demonstrate the method by analyzing two authors, Shakespeare in English and Jin Yong in Chinese, and find that their best-known works are associated with higher information-based energies. This method can be used to measure the creativity of a writer's work in linguistics, and can also quantify symbolic sequences in other systems.
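
    One plausible reading of the Boltzmann setup can be sketched: assign each word the energy E = -log f_corpus from the author's full corpus, so a text obeying f_text ∝ exp(-E/T) has slope 1/T when log f_text is regressed on log f_corpus. This fitting scheme is an assumption for illustration, not the paper's exact definition, and how temperature and entropy combine into the final information-based energy is left to the paper:

    ```python
    import numpy as np
    from collections import Counter

    def relative_temperature(text_words, corpus_words):
        """Fit the text's word frequencies to a Boltzmann form
        f_text ∝ exp(-E/T) with E = -log f_corpus: on log-log axes
        the regression slope of log f_text vs log f_corpus is 1/T."""
        f_c, f_t = Counter(corpus_words), Counter(text_words)
        common = [w for w in f_t if w in f_c]
        x = np.log([f_c[w] / len(corpus_words) for w in common])
        y = np.log([f_t[w] / len(text_words) for w in common])
        slope = np.polyfit(x, y, 1)[0]
        return 1.0 / slope

    def shannon_entropy(words):
        """Shannon entropy (bits) of the word-frequency distribution."""
        p = np.array(list(Counter(words).values()), float)
        p /= p.sum()
        return float(-np.sum(p * np.log2(p)))

    # sanity check: a text with exactly the corpus frequencies has T = 1
    corpus = ["the"] * 40 + ["of"] * 20 + ["energy"] * 10 + ["entropy"] * 5
    T = relative_temperature(corpus, corpus)
    H = shannon_entropy(corpus)
    ```

    A "hotter" text would overuse rare (high-energy) words relative to the author's corpus, flattening the regression line and giving T > 1.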

  3. Ozone Measurements Monitoring Using Data-Based Approach

    KAUST Repository

    Harrou, Fouzi


    The complexity of ozone (O3) formation mechanisms in the troposphere makes fast and accurate modeling of ozone very challenging. In the absence of a process model, principal component analysis (PCA) has been extensively used as a data-based monitoring technique for highly correlated process variables; however, conventional PCA-based detection indices often fail to detect small or moderate anomalies. In this work, we propose an innovative method for detecting small anomalies in highly correlated multivariate data. The developed method combines the multivariate exponentially weighted moving average (MEWMA) monitoring scheme with PCA modeling in order to enhance anomaly detection performance. This choice is mainly motivated by the greater ability of the MEWMA monitoring scheme to detect small changes in the process mean. The proposed PCA-based MEWMA monitoring scheme is successfully applied to ozone measurement data collected from the Upper Normandy region, France, via the network of air quality monitoring stations. The detection results of the proposed method are compared to those reported by the Air Normand air monitoring association.
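
    The PCA-MEWMA combination can be sketched in numpy: fit PCA on fault-free data, monitor the residual-subspace projections with a MEWMA recursion Z_t = λ r_t + (1-λ) Z_{t-1}, and track T² = Z' S_z⁻¹ Z with S_z the asymptotic MEWMA covariance λ/(2-λ)·S_r. All dimensions and parameter values below are illustrative, not those of the ozone study:

    ```python
    import numpy as np

    def pca_mewma(train, test, n_keep=1, lam=0.2):
        """MEWMA T2 statistic computed on the PCA residual subspace of a
        model fitted to fault-free training data."""
        mu = train.mean(axis=0)
        Xc = train - mu
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        P_res = Vt[n_keep:]                            # residual loadings
        R_train = Xc @ P_res.T
        S_r = np.cov(R_train, rowvar=False).reshape(len(P_res), len(P_res))
        S_z_inv = np.linalg.inv(lam / (2 - lam) * S_r)
        Z = np.zeros(len(P_res))
        t2 = []
        for x in test:
            r = (x - mu) @ P_res.T                     # residual projection
            Z = lam * r + (1 - lam) * Z                # MEWMA recursion
            t2.append(Z @ S_z_inv @ Z)
        return np.array(t2)

    # two strongly correlated variables; a small shift on x2 breaks the model
    rng = np.random.default_rng(1)
    x1 = rng.standard_normal(500)
    train = np.c_[x1, x1 + 0.1 * rng.standard_normal(500)]
    x1n = rng.standard_normal(200)
    test = np.c_[x1n, x1n + 0.1 * rng.standard_normal(200)]
    test[100:, 1] += 0.3                               # small anomaly from sample 100
    t2 = pca_mewma(train, test)
    ```

    The exponential averaging is what lets the scheme accumulate evidence of a persistent small shift that a memoryless PCA index (T² or SPE on single samples) would miss.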

  4. Robust QKD-based private database queries based on alternative sequences of single-qubit measurements (United States)

    Yang, YuGuang; Liu, ZhiChao; Chen, XiuBo; Zhou, YiHua; Shi, WeiMin


    In existing QKD-based quantum private query (QPQ) protocols, quantum channel noise may cause the user to obtain a wrong answer and thus misunderstand the database holder. In addition, an outside attacker may conceal his attack by exploiting the channel noise. We propose a new, robust QPQ protocol based on four-qubit decoherence-free (DF) states. In contrast to existing QPQ protocols designed against channel noise, only an alternative fixed sequence of single-qubit measurements is needed by the user (Alice) to measure the received DF states. This property makes the proposed protocol easy to implement with current technologies. Moreover, to retain the advantage of flexible database queries, we reconstruct Alice's measurement operators so that Alice needs only conditioned sequences of single-qubit measurements.

  5. Noncontact temperature measurement. I - Interpolation based techniques. II - Least squares based techniques (United States)

    Khan, Mansoor A.; Allemand, Charly; Eagar, Thomas W.


    Two types of techniques for noncontact temperature measurement are described. The first, the interpolation-based techniques, builds on ratio pyrometry (two-, three-, and four-color). It is shown that the ratio pyrometry methods are very sensitive to measurement noise and that the sensitivity increases quickly with the number of terms in the ratio. In these methods (under certain conditions) a reference temperature can be used to make accurate predictions about the temperature elsewhere in the system. The second type of technique is based on measurements of the emitted intensity at multiple wavelengths and the simultaneous calculation of emissivity and temperature through least-squares curve fitting. Using computer simulations, it is shown that the theory and the algorithms developed for this method can accurately predict both the temperature and the uncertainty associated with each temperature prediction.
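
    The least-squares idea can be illustrated for the simplest case of a gray body (constant emissivity), under the Wien approximation of Planck's law: ln(I·λ⁵/C1) = ln ε − (C2/T)(1/λ), so a straight-line fit in 1/λ recovers both T and ε. This gray-body, Wien-regime setup is a simplifying assumption; the paper's method handles more general emissivity models:

    ```python
    import numpy as np

    C1 = 3.7418e-16   # W m^2  (first radiation constant)
    C2 = 1.4388e-2    # m K    (second radiation constant)

    def fit_temperature(wavelengths, intensities):
        """Least-squares fit of gray-body temperature and emissivity from
        spectral intensities at several wavelengths (Wien approximation):
        slope = -C2/T, intercept = ln(emissivity)."""
        x = 1.0 / wavelengths
        y = np.log(intensities * wavelengths**5 / C1)
        slope, intercept = np.polyfit(x, y, 1)
        return -C2 / slope, np.exp(intercept)

    # synthetic gray body at 1500 K with emissivity 0.6, four wavelengths
    lams = np.array([0.8e-6, 1.0e-6, 1.2e-6, 1.6e-6])   # m
    T_true, eps_true = 1500.0, 0.6
    I = eps_true * C1 * lams**-5 * np.exp(-C2 / (lams * T_true))
    T_fit, eps_fit = fit_temperature(lams, I)
    ```

    With noisy intensities the same regression also yields the parameter covariance, which is the source of the per-prediction uncertainty the abstract mentions.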

  6. Virtual Instrumentation Based Equipment for Bio-medical Measurements

    Directory of Open Access Journals (Sweden)

    Beriliu ILIE


    Full Text Available This paper presents our equipment for bio-chemical measurement, designed for monitoring the functional parameters of persons undergoing effort tests. This equipment, based on virtual instrumentation, has several functions: the acquisition of biological signals from the participant running in the effort tests for the anaerobic threshold; the hardware and software configuration of the equipment; the setting of maximal parameters depending on the type of test; the processing of the acquired data in real time for various alarms and end points, and displaying these data on the screen; and the transmission of the acquired signals, in real time or from storage, to a computer for further analysis. This provides a useful and elegant tool both for hardware and software development and for daily usage in research or stress tests regularly carried out in the Cardiology or Sports Medicine Departments of Hospitals.

  7. Prospective measurement of a problem-based learning course sequence. (United States)

    Dolder, Christian R; Olin, Jacqueline L; Alston, Gregory L


    To measure the effect, over time, of a 2-year problem-based learning (PBL) sequence on the skills, knowledge, and abilities it was designed to develop and enhance. At the start of each PBL semester, students were given a "work sample" case whose main medical issue had not previously been covered in the curriculum. A standardized form containing 6 sections (hypotheses, learning issues to investigate, how hypotheses were ruled in/out, primary-problem identification, plan, and goals of the plan) was completed for each case. To rate student performance, investigators used a standardized form with a 5-point Likert scale. Sixty-seven students who completed all 4 assessments were included in the data analyses. Scores improved significantly in each semester compared with baseline. Minimal significant differences were observed among semesters 2, 3, and 4. The 2-year PBL sequence improved students' performance compared with baseline, but the performance ceiling observed in our study requires further investigation.

  8. Measuring intracellular redox conditions using GFP-based sensors

    DEFF Research Database (Denmark)

    Björnberg, Olof; Ostergaard, Henrik; Winther, Jakob R


    Recent years have seen the development of methods for analyzing the redox conditions in specific compartments in living cells. These methods are based on genetically encoded sensors comprising variants of Green Fluorescent Protein in which vicinal cysteine residues have been introduced at solvent-exposed positions. Several mutant forms have been identified in which formation of a disulfide bond between these cysteine residues results in changes of their fluorescence properties. The redox sensors have been characterized biochemically and found to behave differently, both spectroscopically and in terms of redox properties. As genetically encoded sensors they can be expressed in living cells and used for analysis of intracellular redox conditions; however, which parameters are measured depends on how the sensors interact with various cellular redox components. Results of both biochemical and cell...

  9. Accurate position estimation methods based on electrical impedance tomography measurements (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.


    than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  10. Estimation of Fire Emissions from Satellite-Based Measurements (United States)

    Ichoku, Charles; Kaufman, Yoram J.


    Biomass burning is a worldwide phenomenon affecting many vegetated parts of the globe regularly. Fires emit large quantities of aerosol and trace gases into the atmosphere, thus influencing atmospheric chemistry and climate. Traditional methods of fire emissions estimation achieved only limited success, because they were based on peripheral information such as rainfall patterns, vegetation types and changes, agricultural practices, and surface ozone concentrations. During the last several years, rapid developments in satellite remote sensing have allowed more direct estimation of smoke emissions using remotely sensed fire data. However, current methods use fire pixel counts or burned areas, and therefore depend on the accuracy of independent estimates of the biomass fuel loadings, combustion efficiency, and emission factors. With the enhanced radiometric range of its 4-micron fire channel, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor, which flies aboard both of the Earth Observing System (EOS) Terra and Aqua satellites, is able to measure the rate of release of fire radiative energy (FRE) in MJ/s (something that older sensors could not do). MODIS also measures aerosol distribution. Taking advantage of these new resources, we have developed a procedure combining MODIS fire and aerosol products to derive FRE-based smoke emission coefficients (Ce, in kg/MJ) for different regions of the globe. These coefficients simply multiply the FRE from MODIS to give the emitted smoke aerosol mass. Results from this novel methodology are very encouraging. For instance, it was found that the smoke total particulate mass emission coefficient for the Brazilian Cerrado ecosystem (approximately 0.022 kg/MJ) is about twice the value for North America, Western Europe, or Australia, but about 50% lower than the value for southern Africa.
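
    The arithmetic of the coefficient is simple: integrate the fire radiative power (MJ/s) over time to get the fire radiative energy, then scale by the regional Ce. A worked toy example (the fire below is hypothetical; only the Cerrado coefficient of 0.022 kg/MJ comes from the abstract):

    ```python
    import numpy as np

    def smoke_mass_kg(times_s, fre_mj_per_s, ce_kg_per_mj):
        """Trapezoidal integration of fire radiative power (MJ/s) over
        time gives FRE in MJ; multiplying by Ce (kg/MJ) gives the emitted
        smoke aerosol mass in kg."""
        f = np.asarray(fre_mj_per_s, float)
        fre_total_mj = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(times_s))
        return ce_kg_per_mj * fre_total_mj

    # hypothetical fire: a constant 50 MJ/s observed for one hour, Cerrado Ce
    t = np.array([0.0, 3600.0])
    mass = smoke_mass_kg(t, [50.0, 50.0], ce_kg_per_mj=0.022)
    # 50 MJ/s * 3600 s = 180,000 MJ; 180,000 MJ * 0.022 kg/MJ = 3960 kg
    ```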

  11. Optical imaging of Tc-99m-based tracers: in vitro and in vivo results. (United States)

    Spinelli, Antonello E; Lo Meo, Sergio; Calandrino, Riccardo; Sbarbati, Andrea; Boschi, Federico


    It has recently been shown that optical imaging (OI) methods can be used to image the in vivo biodistribution of several radiopharmaceuticals labeled with beta or alpha emitters. In this work, particular attention has been focused on investigating the weaker optical signal induced by an almost pure gamma emitter like Tc-99m. Visible light emission measurements of a water solution containing Tc-99m were performed using a small-animal OI system. A sequence of images was acquired for 24 h in order to study the decay of the luminescence signal. The difference between the luminescence decay half-life and the well-known Tc-99m half-life was equal to 1%. In vivo imaging was performed by injecting one control nude mouse with Tc-99m-MDP. Optical images obtained with equipment designed for bioluminescence imaging showed that a visible light emission was distinguishable and correctly localized in the bladder region, where a higher concentration of Tc-99m-MDP was expected. The bladder-to-background ratio was always greater than 1. We conclude that the experimental data presented in this paper show that it is possible to detect in vivo luminescence optical photons induced by Tc-99m. This is important especially considering the large number of Tc-99m-based radiopharmaceuticals currently available.

  12. Uav Positioning and Collision Avoidance Based on RSS Measurements (United States)

    Masiero, A.; Fissore, F.; Guarnieri, A.; Pirotti, F.; Vettore, A.


    In recent years, Unmanned Aerial Vehicles (UAVs) have been attracting more and more attention in both the research and industrial communities: indeed, the possibility of using them in a wide range of remote sensing applications makes them a very flexible and attractive solution in both civil and commercial cases (e.g. precision agriculture, security and control, monitoring of sites, exploration of areas difficult to reach). Most existing UAV positioning systems rely on the GPS signal. Although this can be a satisfactory solution in open environments where the GPS signal is available, there are several operating conditions of interest where it is unavailable or unreliable (e.g. close to high buildings or mountains, or in indoor environments). Consequently, a different approach has to be adopted in these cases. This paper considers the use of WiFi measurements to obtain position estimates of the device of interest. More specifically, to limit the cost of the devices involved in the positioning operations, an approach based on radio signal strength (RSS) measurements is considered. Thanks to the use of a Kalman filter, the proposed approach exploits the temporal dynamics of the device of interest to improve the positioning results initially provided by maximum likelihood estimation. The UAVs are assumed to carry communication devices that allow them to communicate with each other in order to improve their cooperation abilities. In particular, the collision avoidance problem is examined in this work.
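
    The static (per-epoch) estimation stage of an RSS positioning system can be sketched with the standard log-distance path-loss model: RSS = P0 − 10·n·log10(d). Inverting this gives ranges to the WiFi anchors, and a linearized least-squares fix follows by subtracting one range equation from the others. The model parameters P0 and n below are illustrative, and the Kalman smoothing stage of the paper is omitted:

    ```python
    import numpy as np

    def rss_to_distance(rss_dbm, p0_dbm=-40.0, n=2.0):
        """Invert the log-distance path-loss model
        RSS = P0 - 10 n log10(d)  ->  d = 10^((P0 - RSS)/(10 n))."""
        return 10.0 ** ((p0_dbm - rss_dbm) / (10.0 * n))

    def locate(anchors, distances):
        """Linearized least-squares fix from ranges to known anchors;
        subtracting the first range equation removes the quadratic term."""
        a0, d0 = anchors[0], distances[0]
        A = 2.0 * (anchors[1:] - a0)
        b = (d0**2 - distances[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    p_true = np.array([3.0, 7.0])
    d_true = np.linalg.norm(anchors - p_true, axis=1)
    rss = -40.0 - 20.0 * np.log10(d_true)        # noiseless measurements
    p_hat = locate(anchors, rss_to_distance(rss))
    ```

    In the paper this per-epoch estimate would be fed as the measurement into a Kalman filter that exploits the UAV's motion model.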

  13. Polarized light based birefringence measurements for monitoring myocardial regeneration (United States)

    Wood, Michael F. G.; Ghosh, Nirmalya; Li, Shu-Hong; Weisel, Richard D.; Wilson, Brian C.; Li, Ren-Ke; Vitkin, Alex


    Myocardial infarction leads to remodeling of the myocardium, resulting in a deterioration of cardiac function. This remodeling involves changes in the extracellular matrix, particularly an increase in collagen. Recently developed stem cell based regenerative treatments have been shown to reduce myocardial remodeling and collagen formation after infarction, leading to an improvement in overall cardiac function. However, this emerging field is in dire need of biomarkers to monitor the progress and success of these treatments. Collagen is a fibrous protein and exhibits birefringence due to the different refractive indices parallel and perpendicular to the direction of its fibers. As a result, changes in the collagen content and organization of the myocardium should lead to changes in birefringence. Birefringence measurements were made through ex vivo myocardial tissues from rats with induced myocardial infarctions, including a number that had undergone regenerative treatment with mesenchymal stem cells. The results show a decrease in birefringence from normal to infarcted myocardium, indicating a decrease in tissue organization associated with scar formation. However, an increase in birefringence was seen in those myocardial tissues that had undergone regenerative treatment, indicating reorganization of tissue structure. These results demonstrate the promise of this technique and motivate further work towards performing measurements in vivo.

  14. Assessing heart rate variability through wavelet-based statistical measures. (United States)

    Wachowiak, Mark P; Hay, Dean C; Johnson, Michel J


    Because of its utility in the investigation and diagnosis of clinical abnormalities, heart rate variability (HRV) has been quantified with both time and frequency analysis tools. Recently, time-frequency methods, especially wavelet transforms, have been applied to HRV. In the current study, a complementary computational approach is proposed wherein continuous wavelet transforms are applied directly to ECG signals to quantify time-varying frequency changes in the lower bands. Such variations are compared for resting and lower body negative pressure (LBNP) conditions using statistical and information-theoretic measures, and compared with standard HRV metrics. The latter confirm the expected lower variability in the LBNP condition due to sympathetic nerve activity (e.g. RMSSD: p=0.023; SDSD: p=0.023; LF/HF: p=0.018). Conversely, using the standard Morlet wavelet and a new transform based on windowed complex sinusoids, wavelet analysis of the ECG within the observed range of heart rate (0.5-1.25Hz) exhibits significantly higher variability, as measured by frequency band roughness (Morlet CWT: p=0.041), entropy (Morlet CWT: p=0.001), and approximate entropy (Morlet CWT: p=0.004). Consequently, this paper proposes that, when used with well-established HRV approaches, time-frequency analysis of ECG can provide additional insights into the complex phenomenon of heart rate variability. Copyright © 2016. Published by Elsevier Ltd.
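
    The core operation, a continuous wavelet transform of the ECG with Morlet wavelets over the heart-rate band, can be sketched with numpy alone. This is a generic Morlet CWT for illustration, not the paper's windowed-sinusoid variant or its roughness/entropy statistics; the test signal is a synthetic 1 Hz "fundamental":

    ```python
    import numpy as np

    def morlet_cwt(x, fs, freqs, w0=6.0):
        """Continuous wavelet transform of x using complex Morlet wavelets
        centred on the requested analysis frequencies."""
        out = np.empty((len(freqs), len(x)), complex)
        t = (np.arange(len(x)) - len(x) / 2) / fs      # centred time axis
        for k, f in enumerate(freqs):
            s = w0 / (2 * np.pi * f)                   # scale for frequency f
            wavelet = (np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
                       / np.sqrt(s))
            out[k] = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
        return out

    # a 1 Hz component analysed over the observed heart-rate band 0.5-1.25 Hz
    fs = 50.0
    t = np.arange(0, 20, 1 / fs)
    sig = np.sin(2 * np.pi * 1.0 * t)
    freqs = np.linspace(0.5, 1.25, 16)
    power = np.abs(morlet_cwt(sig, fs, freqs)) ** 2
    f_peak = freqs[np.argmax(power.mean(axis=1))]      # dominant band frequency
    ```

    For HRV-style analysis, the time-varying location of this spectral ridge, rather than its mean, is what carries the beat-to-beat frequency variation.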

  15. Optimal source localization problem based on TOA measurements

    Directory of Open Access Journals (Sweden)

    Rosić Maja


    Full Text Available Determining an optimal emitting source location based on time of arrival (TOA) measurements is one of the important problems in Wireless Sensor Networks (WSNs). The nonlinear least-squares (NLS) estimation technique is employed to obtain the location of an emitting source. This optimization problem is formulated as the minimization of the sum of squared residuals between estimated and measured data as the objective function. This paper presents a hybridization of a Genetic Algorithm (GA), for the determination of the global optimum solution, with the local-search Newton-Raphson (NR) method. The corresponding Cramer-Rao lower bound (CRLB) on the localization errors is derived, which gives a lower bound on the variance of any unbiased estimator. Simulation results under different signal-to-noise-ratio (SNR) conditions show that the proposed hybrid Genetic Algorithm-Newton-Raphson (GA-NR) method improves the accuracy and efficiency of the optimal solution compared to the regular GA. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR32028]
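    The local-refinement half of the hybrid scheme can be sketched with a Gauss-Newton iteration on the TOA residuals, once a coarse initial guess is available (the role the GA plays in the record above). This is a minimal numpy sketch with hypothetical sensor positions and an assumed propagation speed, not the authors' implementation:

```python
import numpy as np

def gauss_newton_locate(sensors, toa, x0, c=343.0, iters=20):
    """Refine a source-position estimate by Gauss-Newton NLS on TOA residuals.

    sensors : (n, 2) array of known sensor positions
    toa     : (n,) measured times of arrival
    x0      : initial guess for the source position (e.g. from a GA)
    c       : assumed propagation speed (hypothetical value here)
    """
    x = np.asarray(x0, dtype=float).copy()
    toa = np.asarray(toa, dtype=float)
    for _ in range(iters):
        diff = x - sensors                    # (n, 2)
        d = np.linalg.norm(diff, axis=1)      # predicted ranges
        r = c * toa - d                       # residuals to minimize (sum of squares)
        J = -diff / d[:, None]                # Jacobian dr_i/dx = -(x - s_i)/d_i
        step = np.linalg.solve(J.T @ J, J.T @ r)
        x = x - step                          # Gauss-Newton update
    return x
```

With noise-free measurements and four sensors at the corners of a square, the iteration converges to the true source position in a few steps; in the paper's setting the GA would supply `x0` and guard against local minima.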

  16. A Dynamic Attitude Measurement System Based on LINS

    Directory of Open Access Journals (Sweden)

    Hanzhou Li


    Full Text Available A dynamic attitude measurement system (DAMS) is developed based on a laser inertial navigation system (LINS). Three factors in the dynamic attitude measurement error using LINS are analyzed: dynamic error, time synchronization and phase lag. An optimal coning-error compensation algorithm is used to reduce coning errors, and two-axis wobbling verification experiments are presented in the paper. The tests indicate that the attitude accuracy is improved 2-fold by the algorithm. In order to decrease coning errors further, the attitude updating frequency is increased from 200 Hz to 2000 Hz. At the same time, a novel finite impulse response (FIR) filter with three notches is designed to filter the dither frequency of the ring laser gyro (RLG). The comparison tests suggest that the new filter is five times more effective than the old one. The paper indicates that the phase-frequency characteristics of the FIR filter and the first-order holder of the navigation computer constitute the main sources of phase lag in LINS. A formula to calculate the LINS attitude phase lag is introduced in the paper. The expressions of dynamic attitude errors induced by phase lag are derived. The paper proposes a novel synchronization mechanism that is able to simultaneously solve the problems of dynamic test synchronization and phase compensation. A single-axis turntable and a laser interferometer are applied to verify the synchronization mechanism. The experimental results show that the theoretically calculated values of phase lag and attitude error induced by phase lag both match the testing data well. The block diagram of DAMS and physical photos are presented in the paper. The final experiments demonstrate that the real-time attitude measurement accuracy of DAMS can reach up to 20″ (1σ) and the synchronization error is less than 0.2 ms under the condition of three-axis wobbling for 10 min.
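    The three-notch FIR idea can be illustrated generically by cascading second-order linear-phase sections, each placing a zero pair on the unit circle at one notch frequency. The notch frequencies below are assumed for illustration, not the dither frequencies used in the paper:

```python
import numpy as np

def fir_notch(notch_hz, fs):
    """Cascade one second-order FIR section per notch frequency.

    Each section 1 - 2*cos(w0)*z^-1 + z^-2 places a conjugate zero pair
    exactly on the unit circle at w0; the cascade is normalized to unity
    gain at DC.
    """
    h = np.array([1.0])
    for f in notch_hz:
        w0 = 2.0 * np.pi * f / fs
        h = np.convolve(h, [1.0, -2.0 * np.cos(w0), 1.0])
    return h / h.sum()                     # H(z=1) = sum of taps

def gain(h, f, fs):
    """Magnitude response |H(e^{jw})| at frequency f."""
    w = 2.0 * np.pi * f / fs
    z = np.exp(-1j * w * np.arange(len(h)))
    return abs(np.dot(h, z))
```

Because the zeros sit exactly on the unit circle, the attenuation at each notch is total in exact arithmetic, while the DC gain is exactly one; a practical design for an RLG would additionally shape the passband and account for the filter's group delay, which is the phase-lag issue the paper analyzes.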


    Directory of Open Access Journals (Sweden)

    Ming-Chang LEE


    Full Text Available In order to meet commercial banks' liquidity, safety and profitability objectives, loan portfolio optimization decisions based on risk analysis provide a rational allocation of assets. Risk analysis and asset allocation are key technologies of banking and risk management. The aim of this paper is to build a loan portfolio optimization model based on risk analysis. By constraining the loan portfolio rate of return with Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), the optimization decision model reflects the bank's risk tolerance and gives direct control over the bank's potential loss. The paper analyzes a general risk management model applied to portfolio problems with VaR and CVaR risk measures using the Lagrangian algorithm, and solves this difficult problem by a matrix operation method. The resulting formulation makes it easy to see that the efficient frontier of the portfolio problem with VaR and CVaR risk measures is a hyperbola in mean-standard deviation space, and the proposed method is easy to compute.
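    The two risk measures used as constraints can be computed empirically from a sample of portfolio losses: VaR is a loss quantile, and CVaR is the mean loss in the tail beyond VaR. A minimal numpy sketch of the sample estimators, independent of the paper's Lagrangian derivation:

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical Value-at-Risk and Conditional Value-at-Risk.

    VaR_alpha  : the alpha-quantile of the loss distribution
    CVaR_alpha : the average of losses at or beyond VaR_alpha
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses))) - 1   # index of the alpha-quantile
    var = losses[k]
    cvar = losses[k:].mean()                    # mean of the tail
    return var, cvar
```

By construction CVaR is never smaller than VaR, which is why CVaR-constrained models give the "direct control of potential loss" mentioned above: bounding the tail average bounds the quantile as well.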

  18. Research on cloud-based remote measurement and analysis system (United States)

    Gao, Zhiqiang; He, Lingsong; Su, Wei; Wang, Can; Zhang, Changfan


    The promising potential of cloud computing and its convergence with technologies such as cloud storage, cloud push and mobile computing allows for the creation and delivery of new types of cloud service. Following the principles of cloud computing, this paper presents a cloud-based remote measurement and analysis system. The system mainly consists of three parts: a signal acquisition client, a web server deployed on the cloud service, and a remote client. The system is a special website developed using Flex RIA technology, which resolves the trade-off between the two monitoring modes, B/S and C/S. The platform, deployed on the cloud server, supplies customers with condition monitoring and data analysis services over the Internet. The signal acquisition device is responsible for collecting data (sensor data, audio, video, etc.) and pushes the monitoring data to the cloud storage database regularly. Data acquisition equipment in this system only requires data collection and network functions, such as a smartphone or smart sensor. The system's scale can adjust dynamically according to the number of applications and users, so it does not waste resources. As a representative case study, we developed a prototype system based on the Ali cloud service using a rotor test rig as the research object. Experimental results demonstrate that the proposed system architecture is feasible.

  19. Rank-Based Analysis of Unbalanced Repeated Measures Data

    Directory of Open Access Journals (Sweden)

    M. Mushfiqur Rashid


    Full Text Available In this article, we have developed a rank-based (intra-subject) analysis of clinical trials with unbalanced repeated measures data. We assume that the errors within each patient are exchangeable and continuous random variables. This rank-based inference is valid when the unbalanced data are missing either completely at random or by design. A drop-in-dispersion test is developed for general linear hypotheses. A numerical example is given to illustrate the procedure.

  20. Video-based measurements for wireless capsule endoscope tracking (United States)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.


    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by applying this method to wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute to the planning of more accurate surgical interventions.
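    The displacement-and-rotation step of such a pipeline, after features have been matched and outliers rejected, amounts to fitting a rigid 2-D transform to the matched interest points. A minimal numpy sketch using the least-squares Kabsch/Procrustes solution on synthetic matches; the SURF extraction and RANSAC stages are omitted, and this is only an illustration of the registration math, not the authors' code:

```python
import numpy as np

def rigid_transform(p, q):
    """Least-squares rotation R and translation t with q ≈ p @ R.T + t.

    p, q : (n, 2) arrays of matched point coordinates in two frames.
    Uses the Kabsch/Procrustes SVD solution on centered points.
    """
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t

def rotation_angle(R):
    """Rotation angle (radians) of a 2-D rotation matrix."""
    return np.arctan2(R[1, 0], R[0, 0])
```

Accumulating the recovered translations frame-to-frame is what yields the travel-distance estimate discussed in the record.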

  1. Comparison of pharmacy-based and diagnosis-based comorbidity measures from medical administrative data. (United States)

    Cortaredona, Sébastien; Pambrun, Elodie; Verdoux, Hélène; Verger, Pierre


    Health status is sometimes quantified by chronic condition (CC) scores calculated from medical administrative data. We sought to modify two pharmacy-based comorbidity measures and compare their performance in predicting hospitalization and/or death. The reference was a diagnosis-based score. One of the two measures applied an updated approach linking specific ATC codes of dispensed drugs to 22 CCs; the other used a list of 37 drug categories, without linking them to specific CCs. Using logistic regressions that took repeated measures into account and hospitalization and/or death the following year as the outcome, we assigned weights to each CC/drug category. Comorbidity scores were calculated as the weighted sum of the 22 CCs/37 drug categories. We compared the performance of both measures in predicting hospitalization and/or death with that of a diagnosis-based score based on 30 groups of long-term illnesses (LTIs), a status granted in France to exempt beneficiaries with chronic diseases from copayments. We assessed the predictive performance of the scores with the quasi-likelihood under the independence model criterion (QIC), the c statistic and the Brier score. The two pharmacy-based scores performed better than the LTI score, with lower QIC and Brier scores and higher c statistics. Their predictive performance was very similar. While there is no clear consensus or recommendations about the optimal choice of comorbidity measure, both pharmacy-based scores may be useful for limiting confounding in observational studies among general populations of adults from health insurance databases. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Physiological Signal-Based Method for Measurement of Pain Intensity

    Directory of Open Access Journals (Sweden)

    Yaqi Chu


    Full Text Available The standard method for predicting the absence and presence of pain has long been self-report. However, for patients with major cognitive or communicative impairments, it would be better if clinicians could quantify pain without having to rely on the patient's self-description. Here, we present a new pain intensity measurement method based on multiple physiological signals, including blood volume pulse (BVP), electrocardiogram (ECG), and skin conductance level (SCL), all of which are induced by external electrical stimulation. The proposed pain prediction system consists of signal acquisition and preprocessing, feature extraction, feature selection and feature reduction, and three types of pattern classifiers. The feature extraction phase is devised to extract pain-related characteristics from short-segment signals. A hybrid procedure of genetic algorithm-based feature selection and principal component analysis-based feature reduction was established to obtain high-quality feature combinations with significant discriminatory information. Three types of classification algorithms (linear discriminant analysis, the k-nearest neighbor algorithm, and the support vector machine) are adopted in various scenarios, including multi-signal, multi-subject, between-subject, and multi-day scenarios. The classifiers gave correct classification ratios much higher than chance probability, with an overall average accuracy above 75% for four pain intensity levels. Our experimental results demonstrate that the proposed method can provide an objective and quantitative evaluation of pain intensity. The method might be used to develop a wearable device suitable for daily use in clinical settings.

  3. Fusion of PCA-Based and LDA-Based Similarity Measures for Face Verification

    Directory of Open Access Journals (Sweden)

    Kittler Josef


    Full Text Available The problem of fusing similarity-measure-based classifiers is considered in the context of face verification. The performance of face verification systems using different similarity measures in two well-known appearance-based representation spaces, namely Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), is experimentally studied. The study is performed for both manually and automatically registered face images. The experimental results confirm that our optimised Gradient Direction (GD) metric within the LDA feature space outperforms the other adopted metrics. Different methods of selection and fusion of the similarity-measure-based classifiers are then examined. The experimental results demonstrate that the combined classifiers outperform any individual verification algorithm. In our studies, Support Vector Machines (SVMs) and weighted averaging of similarity measures appear to be the best fusion rules. Another interesting achievement of the work is that although features derived from the LDA approach lead to better results than those of the PCA algorithm for all the adopted scoring functions, fusing the PCA- and LDA-based scores improves the performance of the system.

  4. (n,p) emission channeling measurements on ion-implanted beryllium

    CERN Multimedia

    Jakubek, J; Uher, J


    We propose to perform emission-channeling measurements using thermal neutron induced proton emission from ion-implanted $^{7}$Be. The physics questions addressed concern the beryllium doping of III-V and II-VI semiconductors and the host dependence of the electron capture half-life of $^{7}$Be.

  5. Sorption isotherms: A review on physical bases, modeling and measurement

    Energy Technology Data Exchange (ETDEWEB)

    Limousin, G. [Atomic Energy Commission, Tracers Technology Laboratory, 38054 Grenoble Cedex (France) and Laboratoire d' etude des Transferts en Hydrologie et Environnement (CNRS-INPG-IRD-UJF), BP 53, 38041 Grenoble Cedex (France)]. E-mail:; Gaudet, J.-P. [Laboratoire d' etude des Transferts en Hydrologie et Environnement (CNRS-INPG-IRD-UJF), BP 53, 38041 Grenoble Cedex (France); Charlet, L. [Laboratoire de Geophysique Interne et Techtonophysique - CNRS-IRD-LCPC-UJF-Universite de Savoie, BP 53, 38041 Grenoble Cedex (France); Szenknect, S. [Atomic Energy Commission, Tracers Technology Laboratory, 38054 Grenoble Cedex (France); Barthes, V. [Atomic Energy Commission, Tracers Technology Laboratory, 38054 Grenoble Cedex (France); Krimissa, M. [Electricite de France, Division Recherche et Developpement, Laboratoire National d' Hydraulique et d' Environnement - P78, 6 quai Watier, 78401 Chatou (France)


    The retention (or release) of a liquid compound on a solid controls the mobility of many substances in the environment and has been quantified in terms of the 'sorption isotherm'. This paper does not review the different sorption mechanisms. It presents the physical bases underlying the definition of a sorption isotherm, different empirical or mechanistic models, and details several experimental methods to acquire a sorption isotherm. For appropriate measurements and interpretations of isotherm data, this review emphasizes 4 main points: (i) the adsorption (or desorption) isotherm does not automatically provide any information about the reactions involved in the sorption phenomenon, so mechanistic interpretations must be carefully verified. (ii) Among studies, the range of reaction times is extremely wide, and this can lead to misinterpretations regarding the irreversibility of the reaction: a pseudo-hysteresis of the release compared with the retention is often observed. Comparing the mean characteristic time of the reaction with the mean residence time of the mobile phase in the natural system makes it possible to determine whether the studied retention/release phenomenon should be considered an instantaneous reversible or almost irreversible phenomenon, or whether reaction kinetics must be taken into account. (iii) When the concentration of the retained substance is low enough, the composition of the bulk solution remains constant and a single-species isotherm is often sufficient, although it remains strongly dependent on the background medium. At higher concentrations, sorption may be driven by the competition between several species that affect the composition of the bulk solution. (iv) The measurement method has a great influence. In particular, the background ionic medium, the solid/solution ratio and the use of a flow-through or closed reactor are of major importance. The chosen method should balance easy-to-use features and representativity of the studied

  6. GIS Based Measurement and Regulatory Zoning of Urban Ecological Vulnerability

    Directory of Open Access Journals (Sweden)

    Xiaorui Zhang


    Full Text Available Urban ecological vulnerability is measured on the basis of ecological sensitivity and resilience, building on a concept analysis of vulnerability. GIS-based multicriteria decision analysis (GIS-MCDA) methods are used, supported by the spatial analysis tools of GIS, to define different levels of vulnerability for areas of the urban ecology. These areas are further classified into different types of regulatory zones. Taking the city of Hefei in China as the empirical research site, this study uses GIS-MCDA, including the index system, index weights and overlay rules, to measure the degree of its ecological vulnerability on the GIS platform. There are eight indices in the system. Ranking and analytical hierarchy process (AHP) methods are used to calculate index weights according to the characteristics of the index system. The integrated overlay rule, including selection of the maximum value and weighted linear combination (WLC), is applied as the overlay rule. In this way, five types of vulnerability areas have been classified: very low vulnerability, low vulnerability, medium vulnerability, high vulnerability and very high vulnerability. They can be further grouped into three types of regulatory zone: ecological green line, ecological grey line and ecological red line. The study demonstrates that ecological green line areas are the largest (53.61% of the total study area) and can be intensively developed; ecological grey line areas (19.59% of the total area) can serve as the ecological buffer zone; and ecological red line areas (26.80%) cannot be developed and must be protected. The results indicate that ecological green line areas may provide sufficient room for future urban development in Hefei city. Finally, the respective regulatory countermeasures are put forward. This research provides a scientific basis for decision-making around urban ecological protection, construction and sustainable development. It also provides theoretical method

  7. Measures of frailty in population-based studies: an overview (United States)


    Background Although research productivity in the field of frailty has risen exponentially in recent years, there remains a lack of consensus regarding the measurement of this syndrome. This overview offers three services: first, we provide a comprehensive catalogue of current frailty measures; second, we evaluate their reliability and validity; third, we report on their popularity of use. Methods In order to identify relevant publications, we searched MEDLINE (from its inception in 1948 to May 2011); scrutinized the reference sections of the retrieved articles; and consulted our own files. An indicator of the frequency of use of each frailty instrument was based on the number of times it had been utilized by investigators other than the originators. Results Of the initially retrieved 2,166 papers, 27 original articles described separate frailty scales. The number (range: 1 to 38) and type of items (range of domains: physical functioning, disability, disease, sensory impairment, cognition, nutrition, mood, and social support) included in the frailty instruments varied widely. Reliability and validity had been examined in only 26% (7/27) of the instruments. The predictive validity of these scales for mortality varied: for instance, hazard ratios/odds ratios (95% confidence interval) for mortality risk for frail relative to non-frail people ranged from 1.21 (0.78; 1.87) to 6.03 (3.00; 12.08) for the Phenotype of Frailty and 1.57 (1.41; 1.74) to 10.53 (7.06; 15.70) for the Frailty Index. Among the 150 papers which we found to have used at least one of the 27 frailty instruments, 69% (n = 104) reported on the Phenotype of Frailty, 12% (n = 18) on the Frailty Index, and 19% (n = 28) on one of the remaining 25 instruments. Conclusions Although there are numerous frailty scales currently in use, reliability and validity have rarely been examined. The most evaluated and frequently used measure is the Phenotype of Frailty. PMID:23786540

  8. First direct observation of bound-state beta-decay. Measurements of branching and lifetime of {sup 207}Tl{sup 81+} fragments

    Energy Technology Data Exchange (ETDEWEB)

    Boutin, D.


    The first experimental observation of bound-state beta-decay showed that, due solely to electron stripping, a stable nuclide, e.g. {sup 163}Dy, became unstable. A drastic modification of the half-life of bare {sup 187}Re, from 4.12(2) x 10{sup 10} years down to 32.9(20) years, could also be observed. It was mainly due to the possibility for the mother nuclide to decay into a previously inaccessible nuclear level of the daughter nuclide. It was proposed to study a nuclide in which this decay mode competes with continuum-state beta-decay, in order to measure their respective branchings. The ratio {beta}{sub b}/{beta}{sub c} could also be evaluated for the first time. {sup 207}Tl was chosen due to its high atomic number and its Q-value of about 1.4 MeV, small enough to enhance the {beta}{sub b} probability and large enough to allow the use of time-resolved Schottky Mass Spectrometry (SMS) to study the evolution of mother and bound-state beta-decay daughter ions. The decay properties of the ground state and isomeric state of {sup 207}Tl{sup 81+} have been investigated at the GSI accelerator facility in two separate experiments. For the first time, beta-decay in which the electron could go either to a bound state (atomic orbitals), leading to {sup 207}Pb{sup 81+} as the daughter nuclide, or to a continuum state, leading to {sup 207}Pb{sup 82+}, has been observed. The respective branchings of these two processes could be measured as well. The deduced total nuclear half-life of 255(17) s for {sup 207}Tl{sup 81+} was slightly modified with respect to the half-life of the neutral atom of 286(2) s. It was nevertheless in very good agreement with calculations based on the assumption that the beta-decay follows an allowed type of transition. The branching {beta}{sub b}/{beta}{sub c}=0.192(20) was also in very good agreement with the same calculations. The application of stochastic precooling additionally made it possible to observe the 1348 keV short-lived isomeric state of {sup


    Directory of Open Access Journals (Sweden)

    N. A. Degotinsky


    Full Text Available Subject of Research. We studied a method of estimating object distance on the basis of a single defocused photograph. The method is based on the analysis of image defocus at the contour points corresponding to the borders of photographed objects. It is supposed that the brightness drop in a non-defocused image of a border can be modeled with an ideal step function, the Heaviside function. Method. The contours corresponding to local maxima of the brightness gradient are detected in the initial image to be analyzed and recorded for further analysis. The initial image is then subjected to additional defocusing by a Gaussian filter with a dispersion parameter whose value is defined in advance. The ratios of local gradient values for the initial and additionally defocused images are then calculated at the contour points, and the defocus values of the initial image at the points of object borders are estimated from these ratios. A sparse map of relative remoteness is built from these estimates for the border points of photographed objects, and a dense depth map of relative distances is then calculated using a special interpolation technique. Main Results. The efficiency of the described technique is illustrated with the results of distance estimation in photographs of real environments. Practical Relevance. In contrast to the widely applied stereo techniques and distance measurement methods that analyze sets of defocused images, the investigated approach can deal with a single photograph acquired in a standard way without any additional conditions and limitations. If the relative remoteness of objects may be estimated instead of their absolute distances, no special calibration is needed for the camera, and pictures taken previously in diverse situations can be analyzed using the considered technique.
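    The gradient-ratio step can be sketched in 1-D: for a step edge blurred by an unknown σ₁, an extra Gaussian blur of known σ₂ reduces the peak edge gradient by the factor r = √(1 + σ₂²/σ₁²), so σ₁ = σ₂/√(r² − 1). A minimal numpy illustration on a synthetic edge, under these stated assumptions and not the paper's implementation:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Sampled, normalized 1-D Gaussian kernel."""
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def estimate_blur(signal, sigma2):
    """Estimate the unknown edge blur sigma1 from the gradient ratio.

    The signal is re-blurred with a known sigma2; the ratio of peak edge
    gradients before/after equals sqrt(1 + sigma2**2 / sigma1**2).
    """
    reblur = np.convolve(signal, gaussian_kernel(sigma2), mode="same")
    trim = len(signal) // 4                     # ignore border artifacts
    g1 = np.abs(np.diff(signal))[trim:-trim].max()
    g2 = np.abs(np.diff(reblur))[trim:-trim].max()
    r = g1 / g2
    return sigma2 / np.sqrt(r**2 - 1.0)
```

In 2-D the same ratio is evaluated at each detected contour point, which is what produces the sparse relative-remoteness map described above.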

  10. Injection quality measurements with diamond based particle detectors

    CERN Document Server

    Stein, Oliver; CERN. Geneva. ATS Department


    During the re-commissioning phase of the LHC after Long Shutdown 1, very high beam losses were observed at the TDI during beam injection. The losses reached up to 90% of the dump threshold. To decrease the stress induced on accelerator components by these beam losses, the loss levels need to be reduced. Measurements with diamond-based particle detectors (dBLMs), which have nanosecond time resolution, revealed that the majority of these losses come from recaptured SPS beam surrounding the nominal bunch train. In this MD, the injection loss patterns and loss intensities were investigated in greater detail. Calibration shots on the TDI (internal beam absorber for injection) gave a conversion factor from impacting particle intensities to signal in the dBLMs (0.1 Vs per 10^9 protons). Using the SPS tune kicker to clean the recaptured beam in the SPS and changing the LHC injection kicker settings resulted in a reduction of the injection losses. For 144-bunch injections the loss levels were decreased...

  11. EMG Processing Based Measures of Fatigue Assessment during Manual Lifting (United States)

    Marhaban, M. H.; Abdullah, A. R.


    Manual lifting is one of the common practices used in the industries to transport or move objects to a desired place. Nowadays, even though mechanized equipment is widely available, manual lifting is still considered as an essential way to perform material handling task. Improper lifting strategies may contribute to musculoskeletal disorders (MSDs), where overexertion contributes as the highest factor. To overcome this problem, electromyography (EMG) signal is used to monitor the workers' muscle condition and to find maximum lifting load, lifting height and number of repetitions that the workers are able to handle before experiencing fatigue to avoid overexertion. Past researchers have introduced several EMG processing techniques and different EMG features that represent fatigue indices in time, frequency, and time-frequency domain. The impact of EMG processing based measures in fatigue assessment during manual lifting are reviewed in this paper. It is believed that this paper will greatly benefit researchers who need a bird's eye view of the biosignal processing which are currently available, thus determining the best possible techniques for lifting applications. PMID:28303251
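    A classic frequency-domain fatigue index covered by such reviews is the median frequency (MDF) of the EMG power spectrum, which shifts downward as a muscle fatigues. A minimal numpy sketch of the MDF computation, offered as a generic illustration rather than any specific study's pipeline:

```python
import numpy as np

def median_frequency(x, fs):
    """Median frequency of the one-sided power spectrum: the frequency
    that splits the total spectral power into two equal halves."""
    x = np.asarray(x, dtype=float) - np.mean(x)   # remove DC offset
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cum = np.cumsum(psd)
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]
```

Applied to successive windows of an EMG recording during repeated lifts, a downward trend in MDF is the spectral signature of developing fatigue; mean frequency (MNF) is computed analogously as the power-weighted average.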

  12. Fuzzy Fireworks Algorithm Based on a Sparks Dispersion Measure

    Directory of Open Access Journals (Sweden)

    Juan Barraza


    Full Text Available The main goal of this paper is to improve the performance of the Fireworks Algorithm (FWA). To improve the performance of the FWA we propose three modifications: the first is to change the stopping criterion; that is, where previously the number of function evaluations was used as the stopping criterion, we decided to specify a particular number of iterations instead. The second and third modifications consist of introducing a dispersion metric (dispersion percent), and both were made with the goal of achieving dynamic adaptation of two parameters of the algorithm. The parameters that are controlled are the explosion amplitude and the number of sparks, and it is worth mentioning that the control of these parameters is based on a fuzzy logic approach. To measure the impact of these modifications, we performed experiments with 14 benchmark functions, and a comparative study shows the advantage of the proposed approach. We call the proposed algorithms the Iterative Fireworks Algorithm (IFWA) and two variants of the Dispersion Percent Iterative Fuzzy Fireworks Algorithm (DPIFWA-I and DPIFWA-II), respectively.

  13. Measurement-Based Performance Evaluation of Advanced MIMO Transceiver Designs

    Directory of Open Access Journals (Sweden)

    Schneider Christian


    Full Text Available This paper describes the methodology and the results of performance investigations on a multiple-input multiple-output (MIMO) transceiver scheme for frequency-selective radio channels. The method relies on offline simulations and employs real-time MIMO channel sounder measurement data to ensure a realistic channel modeling. Thus it can be classified in between the performance evaluation using some predefined channel models and the evaluation of a prototype hardware in field experiments. New aspects for the simulation setup are discussed, which are frequently ignored when using simpler model-based evaluations. Example simulations are provided for an iterative ("turbo") MIMO equalizer concept. The dependency of the achievable bit error rate performance on the propagation characteristics and on the variation in some system design parameters is shown, whereas the antenna constellation is of particular concern for MIMO systems. Although in many of the considered constellations turbo MIMO equalization appears feasible in real field scenarios, there exist cases with poor performance as well, indicating that in practical applications link adaptation of the transmitter and receiver processing to the environment is necessary.

  14. Measurement-Based Performance Evaluation of Advanced MIMO Transceiver Designs (United States)

    Trautwein, Uwe; Schneider, Christian; Thomä, Reiner


    This paper describes the methodology and the results of performance investigations on a multiple-input multiple-output (MIMO) transceiver scheme for frequency-selective radio channels. The method relies on offline simulations and employs real-time MIMO channel sounder measurement data to ensure a realistic channel modeling. Thus it can be classified in between the performance evaluation using some predefined channel models and the evaluation of a prototype hardware in field experiments. New aspects for the simulation setup are discussed, which are frequently ignored when using simpler model-based evaluations. Example simulations are provided for an iterative ("turbo") MIMO equalizer concept. The dependency of the achievable bit error rate performance on the propagation characteristics and on the variation in some system design parameters is shown, whereas the antenna constellation is of particular concern for MIMO systems. Although in many of the considered constellations turbo MIMO equalization appears feasible in real field scenarios, there exist cases with poor performance as well, indicating that in practical applications link adaptation of the transmitter and receiver processing to the environment is necessary.

  15. Online monitoring of Mezcal fermentation based on redox potential measurements. (United States)

    Escalante-Minakata, P; Ibarra-Junquera, V; Rosu, H C; De León-Rodríguez, A; González-García, R


    We describe an algorithm for the continuous monitoring of the biomass and ethanol concentrations as well as the growth rate in the Mezcal fermentation process. The algorithm performs its task having available only the online measurements of the redox potential. The procedure combines an artificial neural network (ANN) that relates the redox potential to the ethanol and biomass concentrations with a nonlinear observer-based algorithm that uses the ANN biomass estimations to infer the growth rate of this fermentation process. The results show that the redox potential is a valuable indicator of the metabolic activity of the microorganisms during Mezcal fermentation. In addition, the estimated growth rate can be considered direct evidence of mixed-culture growth in the process. The presence of mixed microbial cultures is usually intuitively clear in this kind of process; however, total biomass data alone do not provide definite evidence. In this paper, the detailed design of the software sensor as well as its experimental application at the laboratory level is presented.
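    The software-sensor idea in this record (a regressor mapping redox potential to biomass, followed by an estimator for the growth rate) can be sketched as follows. The calibration function and all numbers are hypothetical, and a finite-difference slope stands in for the paper's nonlinear observer:

```python
import numpy as np

# Toy stand-in for the trained ANN (coefficients invented, not from the paper):
def biomass_from_redox(orp_mv):
    """Map redox potential (mV) to biomass concentration (g/L); placeholder calibration."""
    return 0.5 + 0.01 * (300.0 - np.asarray(orp_mv, dtype=float))

def growth_rate(orp_series, dt):
    """Estimate mu = d(ln X)/dt from redox-inferred biomass.
    Finite differences stand in for the paper's nonlinear observer."""
    x = biomass_from_redox(orp_series)
    return np.gradient(np.log(x), dt)

# Synthetic check: redox falls linearly while biomass grows
t = np.arange(0.0, 10.0, 0.5)          # h
orp = 300.0 - 5.0 * t                   # fabricated redox trajectory, mV
mu = growth_rate(orp, 0.5)              # specific growth rate estimate, 1/h
```

As expected for this trajectory, the estimated specific growth rate is positive and declines as biomass accumulates.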

  16. Observing Tsunamis in the Ionosphere Using Ground Based GPS Measurements (United States)

    Galvan, D. A.; Komjathy, A.; Song, Y. Tony; Stephens, P.; Hickey, M. P.; Foster, J.


    Ground-based Global Positioning System (GPS) measurements of ionospheric Total Electron Content (TEC) show variations consistent with atmospheric internal gravity waves caused by ocean tsunamis following recent seismic events, including the Tohoku tsunami of March 11, 2011. We observe fluctuations correlated in time, space, and wave properties with this tsunami in TEC estimates processed using JPL's Global Ionospheric Mapping Software. These TEC estimates were band-pass filtered to remove ionospheric TEC variations with periods outside the typical range of internal gravity waves caused by tsunamis. Observable variations in TEC appear correlated with the Tohoku tsunami near the epicenter, at Hawaii, and near the west coast of North America. Disturbance magnitudes are 1-10% of the background TEC value. Observations near the epicenter are compared to estimates of expected tsunami-driven TEC variations produced by Embry-Riddle Aeronautical University's Spectral Full Wave Model, an atmosphere-ionosphere coupling model, and found to be in good agreement. The potential exists to apply these detection techniques to real-time GPS TEC data, providing estimates of tsunami speed and amplitude that may be useful for future early warning systems.
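    The band-pass step described above can be sketched with a simple FFT filter that keeps only periods typical of tsunami-driven gravity waves. The 10 to 30 minute band and all signal parameters below are illustrative, not from the record:

```python
import numpy as np

def bandpass_periods(x, dt_s, p_min_s, p_max_s):
    """FFT band-pass keeping only components with period in [p_min_s, p_max_s]."""
    X = np.fft.rfft(x - x.mean())
    f = np.fft.rfftfreq(len(x), d=dt_s)
    period = 1.0 / np.maximum(f, 1e-12)          # avoid divide-by-zero at DC
    keep = (f > 0) & (period >= p_min_s) & (period <= p_max_s)
    X[~keep] = 0.0
    return np.fft.irfft(X, n=len(x))

# Synthetic TEC series: slow background variation plus a 15-min "tsunami" wave
dt = 30.0                                        # 30 s sampling
t = np.arange(0, 6 * 3600, dt)                   # 6 h record
wave = 0.2 * np.sin(2 * np.pi * t / 900.0)       # 15-min period, 0.2 TECU
background = 20.0 + 5.0 * np.sin(2 * np.pi * t / 21600.0)  # one cycle over record
filtered = bandpass_periods(background + wave, dt, 600.0, 1800.0)
```

On this synthetic series the filter isolates the 15-minute component almost exactly, since the slow background lies entirely outside the retained band.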

  17. EMG Processing Based Measures of Fatigue Assessment during Manual Lifting

    Directory of Open Access Journals (Sweden)

    E. F. Shair


    Full Text Available Manual lifting is one of the common practices used in industry to transport or move objects to a desired place. Nowadays, even though mechanized equipment is widely available, manual lifting is still considered an essential way to perform material handling tasks. Improper lifting strategies may contribute to musculoskeletal disorders (MSDs), to which overexertion is the largest contributing factor. To overcome this problem, the electromyography (EMG) signal is used to monitor the workers’ muscle condition and to find the maximum lifting load, lifting height and number of repetitions that the workers are able to handle before experiencing fatigue, thereby avoiding overexertion. Past researchers have introduced several EMG processing techniques and different EMG features that represent fatigue indices in the time, frequency, and time-frequency domains. The impact of EMG-processing-based measures on fatigue assessment during manual lifting is reviewed in this paper. It is believed that this paper will greatly benefit researchers who need a bird’s-eye view of the biosignal processing techniques currently available, thus determining the best possible techniques for lifting applications.
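    Two of the classic frequency-domain fatigue indices covered by such reviews, the mean frequency (MNF) and median frequency (MDF) of the EMG power spectrum, can be computed as follows. The signals are synthetic; in real recordings a downward shift of both indices over time signals fatigue:

```python
import numpy as np

def fatigue_indices(emg, fs):
    """Mean (MNF) and median (MDF) frequency of the EMG power spectrum."""
    psd = np.abs(np.fft.rfft(emg - emg.mean())) ** 2
    f = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    mnf = np.sum(f * psd) / np.sum(psd)            # spectral centroid
    cum = np.cumsum(psd)
    mdf = f[np.searchsorted(cum, 0.5 * cum[-1])]   # frequency splitting power in half
    return mnf, mdf

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
fresh = np.sin(2 * np.pi * 120 * t)      # spectral energy near 120 Hz
fatigued = np.sin(2 * np.pi * 60 * t)    # spectrum compressed toward 60 Hz
mnf_fresh, mdf_fresh = fatigue_indices(fresh, fs)
mnf_fat, mdf_fat = fatigue_indices(fatigued, fs)
```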

  18. Measurement-Based Vehicle Load Model for Urban Expressway Bridges

    Directory of Open Access Journals (Sweden)

    Bin Chen


    Full Text Available Significant changes in vehicle loads have occurred in China due to the development of the automobile industry and transportation within the past two decades, particularly the rapid increase in traffic flow and the large-scale emergence of heavy trucks. However, research into vehicle loadings on urban bridges is not well developed. In this study, based on traffic flow data collected using a weigh-in-motion system installed on an expressway in an urban logistics zone, we analyzed the traffic flow, vehicle types, and gross vehicle weight (GVW) features and developed models for the vehicle load and fatigue load. According to the axle space, axle types, and axle number, the trucks in the traffic flow were classified into 10 representative vehicle types. The probability distribution of the GVW was fitted to a three-class mixed log-normal distribution. Using the improved Gumbel method, we determined the extreme value distribution of the vehicle loadings in the intended reference period and assessed the vehicle loadings of urban bridges. In addition, using the equivalent damage theory, six equivalent vehicle models were established according to the measurements of the axle weight and axle space, thereby obtaining a simplified model of fatigue vehicle loadings on urban expressway bridges.
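    The extreme-value step can be illustrated with a toy version of the procedure: sample GVWs from a three-class log-normal mixture, take daily block maxima, and fit a Gumbel distribution. All weights and parameters below are invented, and a simple method-of-moments fit stands in for the paper's improved Gumbel method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative three-class log-normal GVW mixture (kN); parameters are made up
weights = [0.6, 0.3, 0.1]                      # light / medium / heavy share
mus = [np.log(20), np.log(150), np.log(400)]   # log-medians
sigmas = [0.25, 0.30, 0.20]

def sample_gvw(n):
    comp = rng.choice(3, size=n, p=weights)
    return np.exp(rng.normal(np.take(mus, comp), np.take(sigmas, comp)))

# Daily block maxima -> Gumbel fit by the method of moments
daily_max = np.array([sample_gvw(2000).max() for _ in range(200)])
beta = np.sqrt(6.0) * daily_max.std() / np.pi      # Gumbel scale
mu_loc = daily_max.mean() - 0.5772 * beta          # Gumbel location
# Characteristic maximum load over a T-day reference period
T = 365 * 50
char_max = mu_loc + beta * np.log(T)
```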

  19. Toward a theoretically based measurement model of the good life. (United States)

    Cheung, C K


    A theoretically based conceptualization of the good life should differentiate four dimensions: the hedonist good life, the dialectical good life, the humanist good life, and the formalist good life. These four dimensions incorporate previous fragmentary measures, such as life satisfaction, depression, work alienation, and marital satisfaction, to produce an integrative view. In the present study, 276 Hong Kong Chinese husbands and wives responded to a survey of 13 indicators for these 4 good life dimensions. Confirmatory hierarchical factor analysis showed that these indicators identified the 4 dimensions of the good life, which in turn converged to identify a second-order factor of the overall good life. The model demonstrates discriminant validity in that the first-order factors had high loadings on the overall good life factor despite being linked by a social desirability factor. Analysis further showed that the second-order factor model applied equally well to husbands and wives. Thus, the conceptualization appears to be theoretically and empirically adequate in incorporating previous conceptualizations of the good life.

  20. A Transformer Partial Discharge Measurement System Based on Fluorescent Fiber

    Directory of Open Access Journals (Sweden)

    Fan Liu


    Full Text Available Based on the physical phenomenon of optical emission produced by partial discharge (PD) and on the ability of fluorescent fiber to sense weak fluorescent signals, a PD measurement system using a fluorescent fiber sensor was designed. The main parameters of the sensing system were calculated, an experimental testing platform for PD simulation was established in the lab, and PD signals were then detected through ultra-high-frequency (UHF) and optical methods under a needle-plate discharge model. PD optical pulses in transformer oil contained single-peak and multi-peak pulse waveforms. Compared with the UHF detection results, the number of PD pulses and the elapsed PD pulse phase time showed a good correspondence. The PD signal amplitudes, however, showed the opposite, indicating that PD UHF signals reflect the pulse amplitude value, whereas PD optical signals reflect the pulse energy magnitude. The n-u-φ three-dimensional distributions indicated that most of the PD signals concentrated near the power-frequency voltage peak. Overall, the proposed fluorescent fiber sensing system design can be used successfully in transformer PD signal detection.

  1. Ground-based measurements of UV Index (UVI) at Helwan

    Directory of Open Access Journals (Sweden)

    H. Farouk


    Full Text Available In October 2010, ground-based measurements of the UV Index (UVI) were carried out by the weather station at the solar laboratory of NRIAG, Helwan. The daily variation has maximum values on spring and summer days and minimum values on autumn and winter days. A low UVI level, between 2.55 and 2.825, was found in December, January and February. A moderate level, between 3.075 and 5.6, was found in March, October and November. A high level, between 6.7 and 7.65, was found in April, May and September. A very high level, between 8 and 8.6, was found in June, July and August. According to the relation UVI = a[SZA]^b, the UVI increases with decreasing solar zenith angle (SZA), by 82% on a daily scale and 88% on a monthly scale. Helwan is thus exposed to a high level of radiation over 6 months per year, including 3 months with a very high UVI level, so it is advisable to avoid direct exposure to the sun from 11 am to 2 pm.
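    The power-law relation UVI = a[SZA]^b quoted above can be fitted by ordinary least squares in log-log space. The data below are synthetic (the record does not publish its fitted coefficients):

```python
import numpy as np

# Hypothetical daily-scale data: UVI falls as the solar zenith angle grows
sza = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])   # degrees
uvi = 2000.0 * sza ** -1.8                              # synthetic, true b = -1.8

# Fit UVI = a * SZA**b: take logs, so log(UVI) = log(a) + b*log(SZA)
b, log_a = np.polyfit(np.log(sza), np.log(uvi), 1)
a = np.exp(log_a)
```

A negative exponent b recovers the stated behavior: UVI increases as SZA decreases.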

  2. Spatial representativeness of ground-based solar radiation measurements (United States)

    Zyta Hakuba, Maria; Folini, Doris; Wild, Martin


    The validation of gridded surface solar radiation (SSR) data, i.e., satellite-derived or climate model calculated, relies on the comparison with ground-based in-situ measurements. Detached from any modeling or temporal averaging biases, the question remains how representative a point measurement is for a larger-scale grid cell. In the present study, we make extensive use of high-resolution (0.03°) SSR data from the Satellite Application Facility on climate monitoring (CM SAF) to study in detail: 1) the spatial variability in SSR over Europe, 2) the sub-grid variability within an example grid of 1° resolution, 3) the representativeness of 143 surface sites (BSRN and GEBA) for their corresponding 1° grid cells, and 4) the point-centered and grid-independent surface sites' representativeness for larger grid cells up to 3°. These analyses are done on a climatological annual mean basis over the period 2001-2005. Annually, the spatial variability as given in the CM SAF data set is largest in regions of sudden changes in weather conditions and topography, e.g., in Northern Spain, the Alpine region, the Carpathians, and the Adriatic coast. The 1° sub-grid variability (mean absolute deviation from the grid cell mean, relative to the grid cell mean, RMAD) is on average 1.64% (2.43 Wm-2) over European land, with maximum RMAD of up to 10% in Northern Spain. The surface sites' (GEBA and BSRN) representativeness for larger grid cells is highly dependent on region and grid size. The difference between the CM SAF value at the GEBA site's location and the grid cell mean (calculated from CM SAF data) can vary from almost 0% to more than 10% for a 1° grid cell, and up to 15% for a 3° grid cell. On average, this spatial sampling error is below 5% even for grid cells of 3° resolution. We show that the latitudinal shift of a point relative to the larger-grid cell center may account for a spatial sampling error of up to ±1.81 Wm-2 (for a maximum distance of ±0.5° within a 1° grid cell)
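    The sub-grid variability metric used above, the relative mean absolute deviation (RMAD), is straightforward to compute. The 33 x 33 high-resolution cell below is synthetic and all numbers are illustrative:

```python
import numpy as np

def rmad(highres_cell):
    """Relative mean absolute deviation (%) of sub-grid SSR values
    from the grid-cell mean, used to quantify sub-grid variability."""
    m = highres_cell.mean()
    return 100.0 * np.mean(np.abs(highres_cell - m)) / m

# Toy 1-degree cell sampled at ~0.03 degree: 33 x 33 SSR values in W/m^2
rng = np.random.default_rng(0)
cell = 180.0 + rng.normal(0.0, 5.0, size=(33, 33))

# Spatial sampling error of a single "station" pixel vs. the cell mean (%)
sampling_error = 100.0 * abs(cell[16, 16] - cell.mean()) / cell.mean()
```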

  3. A Raspberry Pi Based Portable Endoscopic 3D Measurement System

    Directory of Open Access Journals (Sweden)

    Jochen Schlobohm


    Full Text Available Geometry measurements are very important for monitoring a machine part’s health and performance. Optical measurement systems have several advantages for acquiring a part’s geometry: measurement speed, precision, point density and contactless operation. Measuring parts inside assembled machines is also desirable to keep maintenance costs low. The Raspberry Pi is a small and cost-efficient computer that creates new opportunities for compact measurement systems. We have developed a fringe projection system which is capable of measuring in very limited space. A Raspberry Pi 2 is used to generate the projection patterns, acquire the images and reconstruct the geometry. Together with a small LED projector, the measurement system is compact and easy to handle. It consists of off-the-shelf products which are nonetheless capable of measuring with an uncertainty of less than 100 μm.
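    The record does not state which fringe-analysis algorithm the system uses; a common choice for fringe projection is four-step phase shifting, sketched here on synthetic images:

```python
import numpy as np

# Classic 4-step phase shifting: capture the fringe pattern at phase shifts
# of 0, pi/2, pi, 3pi/2 and recover the wrapped phase per pixel.
def wrapped_phase(i0, i1, i2, i3):
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic camera images of a projected sinusoidal fringe
x = np.linspace(0, 4 * np.pi, 256)     # true phase ramp across the image
a, b = 0.5, 0.4                        # background intensity and modulation
imgs = [a + b * np.cos(x + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*imgs)             # wrapped phase, to be unwrapped and
                                       # triangulated into height in a real system
```

The arctangent form cancels both the background a and the modulation b, which is why phase shifting is robust to uneven illumination.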

  4. Dynamic portfolio management based on complex quantile risk measures

    Directory of Open Access Journals (Sweden)

    Ekaterina V. Tulupova


    Full Text Available The article focuses on evaluating the effectiveness of combined measures of financial risk, namely convex combinations of the measures VaR and CVaR and of their analogues for the right tail of the distribution function of portfolio returns.
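    A minimal historical-simulation sketch of such a combined measure; the sample, confidence level, and mixing weight below are illustrative, not the article's:

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Historical VaR and CVaR of a loss sample at confidence level alpha."""
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()     # mean loss beyond VaR
    return var, cvar

def combined_measure(losses, lam=0.5, alpha=0.95):
    """Convex combination lam*VaR + (1-lam)*CVaR of the two quantile measures."""
    var, cvar = var_cvar(losses, alpha)
    return lam * var + (1.0 - lam) * cvar

rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, 100_000)      # toy standardized loss distribution
var, cvar = var_cvar(losses)
rho = combined_measure(losses)
```

By construction the blend always lies between the two component measures, so the weight lam tunes how much of the tail severity (CVaR) enters the risk number.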

  5. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya


    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  6. Adult oral health inequalities described using area-based and household-based socioeconomic status measures. (United States)

    Jamieson, Lisa M; Thomson, W Murray


    To describe adult oral health inequalities using an area-based and household-based measure of socioeconomic status (SES). Self-report questionnaires (seeking information on sociodemographic characteristics, oral health and oral self-care) were sent to a random sample of adults from the Dunedin South Electorate, New Zealand. Household- and area-based SES measures were collected. The main outcome measures were edentulism prevalence, average-to-poor self-rated oral health and not having visited a dentist for 2+ years. Data were weighted to produce population-based estimates. The response rate was 78.2%; the sample mean age was 47 years (sd, 17; range 18-92 years) and females comprised 54.0%. Edentulism was most prevalent among those from low-SES households who were resident in high-deprivation areas (P<0.0001). Poor self-rated oral health (P<0.0001) and 2+ years since the last dental visit (P<0.0001) were also most prevalent among these same individuals. In contrast, respondents from high-SES households located in the least deprived areas had the lowest prevalence of edentulism, poor self-reported oral health or 2+ years since their last dental visit. Those from the other household/area SES combinations occupied intermediate positions. There may be added value to dental public health in using a dual socioeconomic measurement approach to population research, with greater oral health gains perhaps being possible by concentrating resources and clinical effort on people living in low-SES households in highly deprived areas, rather than those living in low-SES households in areas that are not deprived.

  7. Microfluidic heavy metal immunoassay based on absorbance measurement. (United States)

    Date, Yasumoto; Terakado, Shingo; Sasaki, Kazuhiro; Aota, Arata; Matsumoto, Norio; Shiku, Hitoshi; Ino, Kosuke; Watanabe, Yoshitomo; Matsue, Tomokazu; Ohmura, Naoya


    A simple and rapid flow-based multioperation immunoassay for heavy metals using a microfluidic device was developed. Antigen-immobilized microparticles in a sub-channel were introduced as the solid phase into the main channel through a channel flow mechanism and packed into a detection area enclosed by dam-like structures in the microfluidic device. A mixture of a heavy metal and the corresponding gold nanoparticle-labeled antibody was made to flow through the main channel and make brief contact with the solid phase. A small portion of the free antibody was captured and accumulated on the packed solid phase. The measured absorbance of the gold label was proportional to the free antibody portion and, thus, to the metal concentration. Each of the monoclonal antibodies specific for cadmium-EDTA, chromium-EDTA, or lead-DTPA was applied to the single-channel microfluidic device. Under optimized conditions of flow rate, volume, and antibody concentration, the theoretical (antibody K(d)-limited) detection levels of the three heavy metal species were achieved within only 7 min. The dynamic range for cadmium, chromium, and lead was 0.57-60.06 ppb, 0.03-0.97 ppb, and 0.04-5.28 ppb, respectively. An integrated microchannel device for simultaneous multiflow was also successfully developed and evaluated. The multiplex cadmium immunoassay of four samples was completed within 8 min for a dynamic range of 0.42-37.48 ppb. The present microfluidic heavy metal immunoassays satisfied the Japanese environmental standards for cadmium, chromium, and lead, as provided in the Soil Contamination Countermeasures Act. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. New critical dimension uniformity measurement concept based reticle inspection tool (United States)

    Seo, Kangjoon; Kim, MunSik; Kim, Sang Chul; Shin, JaeCheon; Kim, ChangYeol; Miller, John; Dayal, Aditya; Hutchinson, Trent; Park, KiHun


    The Critical Dimension Uniformity (CDU) specification on photo-masks is getting tighter with each successive node. The ITRS roadmap for optical masks indicates that the CDU (3 sigma) for dense lines on binary or attenuated phase-shift masks is 3.4 nm for the 45 nm half-pitch (45HP) node and will go down to 2.4 nm for the 32HP node. The current variability in mask shop processes results in CDU variation across the photo-mask of ~2-3 nm. Hence, we are entering a phase where the mask CDU specification is approaching the limit of the capability of the current POR (process of record). As a result, mask shops have started exploring more active mechanisms to improve or compensate for the CDU of the masks. A typical application is feeding back the CDU data to adjust the mask writer dose and compensate for non-uniformity in the CDs, resulting in improved quality of subsequent masks. Another option is to feed the CD uniformity information forward into the wafer fab and adjust the scanner dose to correct for reticle non-uniformity. For these purposes mask makers prefer a dense measurement of CDs across the reticle in a short time. Mask makers are currently using CD-SEM tools for data collection. While the resolution of SEM data ensures its position as the industry standard, an output map of CDU from a reticle inspection tool has the advantage of denser sampling over larger areas of the mask. High-NA reticle inspection systems scan the entire reticle at high throughput and are ideally suited for collecting CDU data on a dense grid. In this paper, we describe the basic theory of a new, reticle inspection-based CDU tool, and results on advanced memory masks. We discuss possible applications of CDU maps for optimizing the mask manufacturing and wafer production processes.

  9. Material Spectral Emissivity Measurement Based on Two Reference Blackbodies (United States)

    Cai, Jing; Yang, Yongjun; Liao, Li; Lyu, Guoyi


    Spectral emissivity is one of the important physical properties of materials. Emissivity measurement is critical for accurate temperature measurements and for evaluating the stealth performance of materials. In this paper, a Fourier transform infrared spectrometer and an energy comparison method are used to study material emissivity measurements. Two reference blackbodies are employed for real-time measurement and correction of the spectrometer background function to enhance the emissivity measurement accuracy; the design of a three-parabolic-mirror optical system is improved to enlarge the optical field of view and meet the measurement requirements. The linearity of the system is measured using a mercury cadmium telluride detector and a deuterated triglycine sulfate detector. The results indicate that the linear range of the system meets the emissivity measurement requirements for the temperature range from 50°C to 1000°C. The effective radiation surface is introduced as a parameter of the reference blackbodies to reduce the influence of the measurement distance. The Fourier transform infrared spectrometer is used to measure the spectral emissivity of a conductive silica film and of SiC at different temperatures in the wavelength range of 1 μm to 25 μm. The expanded uncertainty is less than 5%.

  10. Channel selection based on phase measurement in P300-based brain-computer interface.

    Directory of Open Access Journals (Sweden)

    Minpeng Xu

    Full Text Available Most EEG-based brain-computer interface (BCI) paradigms include specific electrode positions. As the structures and activities of the brain vary from individual to individual, the contributing channels should be chosen based on original recordings of the BCI. Phase measurement is an important approach in EEG analysis but is seldom used for channel selection. In this paper, the phase locking and concentrating value-based recursive feature elimination approach (PLCV-RFE) is proposed to produce robust EEG channel selections in a P300 speller. The PLCV-RFE, deriving from the phase-resetting mechanism, measures the phase relation between EEG channels and ranks channels by a recursive strategy. Data recorded from 32 electrodes on 9 subjects are used to evaluate the proposed method. The results show that the PLCV-RFE substantially reduces the channel set and improves recognition accuracy significantly. Moreover, compared with other state-of-the-art feature selection methods (SSNRSF and SVM-RFE), the PLCV-RFE achieves better performance. Thus phase measurement is usable for channel selection in BCIs, and it may be indirect evidence that phase resetting is at least one cause of ERP generation.
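    The record does not give the exact PLCV formula; the standard phase-locking value (PLV), computed from the instantaneous phases of the analytic signal, conveys the underlying idea and is sketched here with numpy only:

```python
import numpy as np

def analytic(x):
    """Analytic signal via the frequency-domain Hilbert construction
    (even-length input assumed for simplicity)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0      # double positive frequencies
    h[n // 2] = 1.0        # Nyquist bin
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value between two signals; 1 means perfectly locked phases."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)                 # 500 samples (even)
rng = np.random.default_rng(3)
locked = np.sin(2*np.pi*10*t), np.sin(2*np.pi*10*t + 0.7)  # constant phase lag
noise = rng.normal(size=t.size), rng.normal(size=t.size)   # unrelated signals
```

Two signals with a constant phase lag score near 1; independent noise scores far lower, which is the contrast a PLV-based channel ranking exploits.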

  11. Tracking of urban aerosols using combined LIDAR-based remote sensing and ground-based measurements

    Directory of Open Access Journals (Sweden)

    T.-Y. He


    Full Text Available A measuring campaign was performed over the neighboring towns of Nova Gorica in Slovenia and Gorizia in Italy on 24 and 25 May 2010, to investigate the concentration and distribution of urban aerosols. Tracking of two-dimensional spatial and temporal aerosol distributions was performed using a scanning elastic LIDAR operating at 1064 nm. In addition, PM10 particle concentrations, NOx concentrations and meteorological data were continuously monitored within the LIDAR scanning region. Based on the data we collected, we investigated the flow dynamics and the aerosol concentrations within the lower troposphere and found evidence for daily aerosol cycles. We observed a number of cases with spatially localized increased LIDAR returns, which are associated with the presence of point sources of particulate matter. Daily aerosol concentration cycles were also clearly visible, with a peak in aerosol concentration during the morning rush hours and a daily plateau at around 17:00 Central European Time. We also found that the horizontal atmospheric extinction at a height of 200 m, averaged over a limited region with a radius of 300 m directly above the ground-based measuring site, was linearly correlated with the PM10 concentration, with a correlation coefficient of 0.84. When considering the average of the horizontal atmospheric extinction over the entire scanning region, a strong dependence on traffic conditions (concentration of NOx) in the vicinity of the ground-based measuring site was observed.

  12. The computer-based model of quantum measurements (United States)

    Sevastianov, L. A.; Zorin, A. V.


    Quantum theory of measurements is an extremely important part of quantum mechanics. Currently, perturbations by quantum measurements of observable quantities of atomic systems are rarely taken into account in computing algorithms and calculations. In previous studies, the authors developed a constructive model of quantum measurements and implemented it in the form of symbolic and numerical calculations for hydrogen-like atoms. This work describes a generalization of these results to the alkali metal atoms.

  13. The Relationship among Measures of Written Expression Using Curriculum-Based Measurement and the Arizona Instrument to Measure Skills (AIMS) at the Middle School Level (United States)

    Lopez, Francesca A.; Thompson, Sandra S.


    The authors examined the predictor-criterion relationship between measures of written expression using spring curriculum-based measures (W-CBM) and the spring administration of the state-mandated high-stakes test, the Arizona Instrument to Measure Standards (AIMS), in writing. Students (N = 83) in Grades 6, 7, and 8 wrote expressive narratives for 3…

  14. Measurement model equivalence in web- and paper-based surveys ...

    African Journals Online (AJOL)

    The aim of this research is to investigate whether web-based and paper-based organisational climate surveys can be regarded as equivalent techniques of data collection. Due to the complex geographical placement of various units of the participating organisation and limited internet access, both paper-based and

  15. Operational measure of entanglement based on experimental consequences.

    Energy Technology Data Exchange (ETDEWEB)

    Grondalski, J. P. (John P.); James, D. F. (Daniel F.)


    The maximum eigenvalue of the real part of the density matrix expressed in a maximally entangled basis with a particular phase relationship can be used as an operational measure of entanglement. This measure is related to the fidelity, maximized with a local unitary operating on either subsystem, of a standard dense coding, teleportation, or entanglement swapping protocol.
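    A small numerical check of this measure, assuming the "particular phase relationship" is the Hill-Wootters magic-basis convention (an assumption on our part): a Bell state scores 1, while a product state scores 1/2.

```python
import numpy as np

# Magic (maximally entangled) basis for two qubits, scalar amplitudes in the
# computational basis |00>,|01>,|10>,|11>; phases follow the Hill-Wootters choice
s2 = 1.0 / np.sqrt(2.0)
phi_p = s2 * np.array([1.0, 0.0, 0.0, 1.0])     # (|00>+|11>)/sqrt(2)
phi_m = s2 * np.array([1.0, 0.0, 0.0, -1.0])
psi_p = s2 * np.array([0.0, 1.0, 1.0, 0.0])
psi_m = s2 * np.array([0.0, 1.0, -1.0, 0.0])
magic = np.array([phi_p, 1j * phi_m, 1j * psi_p, psi_m])

def measure(rho):
    """Max eigenvalue of the real part of rho expressed in the magic basis."""
    rho_m = magic.conj() @ rho @ magic.T        # <e_i| rho |e_j>
    sym = (rho_m.real + rho_m.real.T) / 2.0     # enforce exact symmetry
    return np.linalg.eigvalsh(sym).max()

bell = np.outer(phi_p, phi_p).astype(complex)   # maximally entangled
prod = np.zeros((4, 4), dtype=complex)          # |00><00|, a product state
prod[0, 0] = 1.0
m_bell = measure(bell)
m_prod = measure(prod)
```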

  16. A Quantitative Measure of Relevance Based on Kelly Gambling Theory

    NARCIS (Netherlands)

    Madsen, M.W.; Colinet, M.; Katrenko, S.; Rendsvig, R.K.


    This paper proposes a quantitative measure of relevance which can quantify the difference between useful and useless facts. This measure evaluates sources of information according to how they affect the expected logarithmic utility of an agent. A number of reasons are given why this is often preferable
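    The idea can be illustrated for an even-odds binary bet: the relevance of a perfectly informative tip equals the gain in optimal expected log-growth it buys, which in this toy case works out to the outcome entropy. This is a simplified special case, not the paper's general definition:

```python
import numpy as np

def kelly_growth(p):
    """Optimal expected log-growth for an even-odds binary bet won w.p. p."""
    f = max(2.0 * p - 1.0, 0.0)                 # Kelly fraction
    return p * np.log1p(f) + (1.0 - p) * np.log1p(-f)

def relevance(p):
    """Kelly-style relevance of a perfectly informative tip: the increase in
    expected log-utility over betting on the prior alone."""
    informed = np.log(2.0)      # knowing the outcome, stake everything and double
    return informed - kelly_growth(p)

def entropy(p):
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)
```

When the prior is already decisive (p near 1) the tip is nearly worthless; when the outcome is maximally uncertain (p = 0.5) the tip is worth a full ln 2 of log-growth.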

  17. Bread Water Content Measurement Based on Hyperspectral Imaging

    DEFF Research Database (Denmark)

    Liu, Zhi; Møller, Flemming


    Water content is one of the most important properties of bread for taste assessment or storage monitoring. Traditional bread water content measurement methods are mostly performed manually, which is destructive and time-consuming. This paper proposes an automated water content measurement...

  18. Ergonomic measures in construction work: enhancing evidence-based implementation

    NARCIS (Netherlands)

    Visser, S.


    Despite the development and availability of ergonomic measures in the construction industry, the number of construction workers reporting high physical work demands remains high. A reduction of the high physical work demands can be achieved by using ergonomic measures. However, these ergonomic

  19. High-spin states and a new band based on the isomeric state in {sup 152}Nd

    Energy Technology Data Exchange (ETDEWEB)

    Yeoh, E.Y.; Wang, J.G.; Ding, H.B.; Gu, L.; Xu, Q.; Xiao, Z.G. [Tsinghua University, Department of Physics, Beijing (China); Zhu, S.J. [Tsinghua University, Department of Physics, Beijing (China); Vanderbilt University, Department of Physics, Nashville, TN (United States); Hamilton, J.H.; Ramayya, A.V.; Hwang, J.K.; Liu, S.H.; Li, K. [Vanderbilt University, Department of Physics, Nashville, TN (United States); Yang, Y.C.; Sun, Y. [Shanghai Jiao Tong University, Department of Physcis, Shanghai (China); Luo, Y.X. [Vanderbilt University, Department of Physics, Nashville, TN (United States); Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Rasmussen, J.O.; Lee, I.Y. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Ma, W.C. [Mississippi State University, Department of Physics, Mississippi State, MS (United States)


    High-spin states of the neutron-rich {sup 152}Nd nucleus have been reinvestigated by measuring the prompt {gamma}-rays in the spontaneous fission of {sup 252}Cf. The ground-state band and a side negative-parity band have been updated. A new band based on the 2243.7 keV isomeric state has been identified. The half-life of the isomeric state has been measured to be 63(7) ns. The projected shell model is employed to study the band structure of this nucleus. The results show that the calculated levels of the bands are in good agreement with the experimental ones, and that the isomeric state and the negative-parity band are based on the proton {pi}5/2{sup -}[532] x {pi}9/2{sup +}[404] and neutron {nu}3/2{sup -}[521] x {nu}5/2{sup +}[642] two-quasiparticle configurations, respectively. (orig.)

  20. Spacecraft Angular Velocity Estimation Algorithm Based on Orientation Quaternion Measurements

    Directory of Open Access Journals (Sweden)

    M. V. Li


    Full Text Available The spacecraft (SC) mission involves providing the appropriate orientation and stabilization of the associated axes in space. One of the main sources of information for the attitude control system is the angular rate sensor block. One way to improve the reliability of the system is to provide a backup of the control algorithms in case of failure of these blocks. To estimate the SC angular velocity vector in the inertial coordinate system when information from the angular rate sensors is lacking, we propose to use orientation data from the star sensors, available at each clock cycle of the onboard digital computer. Equations in quaternions are used to describe the kinematics of rotary motion. Their approximate solution is used to estimate the angular velocity vector. Methods of modal control and multi-dimensional decomposition of a control object are used to solve the problem of observation and identification of the angular rates. These methods enabled us to synthesize the SC angular velocity vector estimation algorithm and to obtain the equations which relate the error quaternion to the calculated estimate of the angular velocity. Mathematical modeling was carried out to test the algorithm. Cases of different initial conditions were simulated. The time between orientation quaternion measurements and the angular velocity of the model were varied. The algorithm was compared with a more accurate algorithm built on more complete equations. Graphs of the difference in angular velocity estimation, calculated between the results of the synthesized algorithm and those of the more accurate equations, are presented as a function of the number of iterations. Graphs of the error distribution for angular velocity estimation under changing initial conditions are also presented, and standard deviations of the estimation errors are calculated. The synthesized algorithm is inferior in accuracy assessment to
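    The core kinematic step, recovering the body angular rate from two successive orientation quaternions, can be sketched as follows (scalar-first convention; the rate, sampling interval, and small-angle approximation are illustrative, not the paper's full observer):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of scalar-first quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def omega_from_quats(q1, q2, dt):
    """Small-angle body-rate estimate from two successive star-sensor
    quaternions: omega ~ 2 * vec(q1^-1 * q2) / dt."""
    dq = qmul(qconj(q1), q2)
    if dq[0] < 0.0:
        dq = -dq                       # take the short rotation
    return 2.0 * dq[1:] / dt

# Check against a known constant rate about the body z axis
w_true = np.array([0.0, 0.0, 0.05])    # rad/s
dt = 0.1                               # s between quaternion measurements
ang = np.linalg.norm(w_true) * dt
axis = w_true / np.linalg.norm(w_true)
dq_true = np.concatenate(([np.cos(ang / 2.0)], np.sin(ang / 2.0) * axis))
q1 = np.array([1.0, 0.0, 0.0, 0.0])
q2 = qmul(q1, dq_true)
w_est = omega_from_quats(q1, q2, dt)
```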

  1. Measurements of DSD Second Moment Based on Laser Extinction (United States)

    Lane, John E.; Jones, Linwood; Kasparis, Takis C.; Metzger, Philip


    Using a technique recently developed for estimating the density of surface dust dispersed during a rocket landing, measuring the extinction of a laser passing through rain (or dust, in the rocket case) yields an estimate of the 2nd moment of the particle cloud, i.e., of the rainfall drop size distribution (DSD) in the terrestrial meteorological case. With the exception of disdrometers, instruments that measure rainfall make indirect measurements of the DSD. The most common of these instruments is the rainfall rate gauge, measuring the 11/3th moment (when assuming a D^(2/3) dependency of terminal velocity). Instruments that scatter microwaves off hydrometeors, such as the WSR-88D, vertical wind profilers, and microwave disdrometers, measure the 6th moment of the DSD. By projecting a laser onto a target, changes in brightness of the laser spot against the target background during rain yield a measurement of the DSD 2nd moment via the Beer-Lambert law. In order to detect the laser attenuation within the 8-bit resolution of most camera image arrays, a minimum path length is required, depending on the intensity of the rainfall. For moderate to heavy rainfall, a laser path length of 100 m is sufficient to measure variations in optical extinction using a digital camera. A photo-detector could replace the camera for automated installations. In order to spatially correlate the 2nd moment measurements to a collocated disdrometer or tipping bucket, the laser's beam path can be reflected multiple times using mirrors to restrict the spatial extent of the measurement. In cases where a disdrometer is not available, complete DSD estimates can be produced by parametric fitting of a DSD model to the 2nd moment data in conjunction with tipping bucket data. In cases where a disdrometer is collocated, the laser extinction technique may yield a significant improvement to in situ disdrometer validation and calibration strategies.
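    The Beer-Lambert inversion described above can be sketched as a round trip, assuming a geometric-optics extinction efficiency Q_ext of about 2 and an illustrative exponential DSD (parameters are not from the record):

```python
import numpy as np

def second_moment_from_extinction(i_rain, i_clear, path_m, q_ext=2.0):
    """Invert Beer-Lambert for the DSD 2nd moment M2 = integral D^2 N(D) dD.
    Extinction coefficient: k = q_ext * (pi/4) * M2 (geometric optics)."""
    k = -np.log(i_rain / i_clear) / path_m       # extinction coefficient, 1/m
    return 4.0 * k / (q_ext * np.pi)             # M2 in 1/m

# Round trip with a synthetic exponential (Marshall-Palmer-like) DSD
D = np.linspace(1e-4, 6e-3, 2000)                # drop diameter, m
N = 8.0e6 * np.exp(-2000.0 * D)                  # N(D), m^-4
m2_true = np.sum(D**2 * N) * (D[1] - D[0])       # numerical 2nd moment
L = 100.0                                        # 100 m laser path, as in the record
T = np.exp(-2.0 * (np.pi / 4.0) * m2_true * L)   # predicted transmission
m2_est = second_moment_from_extinction(T * 1.0, 1.0, L)
```

For this DSD the optical depth over 100 m is of order 0.3, i.e., a transmission change easily visible even to an 8-bit camera, consistent with the path-length argument above.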

  2. Disordered Speech Assessment Using Automatic Methods Based on Quantitative Measures

    Directory of Open Access Journals (Sweden)

    Christine Sapienza


    Full Text Available Speech quality assessment methods are necessary for evaluating and documenting treatment outcomes of patients suffering from degraded speech due to Parkinson's disease, stroke, or other disease processes. Subjective methods of speech quality assessment are more accurate and more robust than objective methods but are time-consuming and costly. We propose a novel objective measure of speech quality assessment that builds on traditional speech processing techniques such as dynamic time warping (DTW and the Itakura-Saito (IS distortion measure. Initial results show that our objective measure correlates well with the more expensive subjective methods.
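The Itakura-Saito (IS) distortion named above has a compact closed form; a minimal sketch for two discrete power spectra (not the authors' implementation, which pairs it with DTW alignment):

```python
import numpy as np

def itakura_saito(p, q):
    """Itakura-Saito distortion between power spectra p and q:
    d_IS = sum(p/q - log(p/q) - 1).  Non-negative, zero iff p == q,
    and deliberately asymmetric in its arguments."""
    r = np.asarray(p, float) / np.asarray(q, float)
    return float(np.sum(r - np.log(r) - 1.0))
```

The asymmetry is intentional: the measure penalizes under- and over-estimation of spectral power differently, which is why the ordering of reference and test spectra matters in speech assessment.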

  3. Neutrosophic Cubic MCGDM Method Based on Similarity Measure

    Directory of Open Access Journals (Sweden)

    Surapati Pramanik


    Full Text Available The notion of a neutrosophic cubic set originates from the hybridization of the concepts of neutrosophic set and interval-valued neutrosophic set. We define a similarity measure for neutrosophic cubic sets and prove some of its basic properties.

  4. An Antenna Measurement System Based on Optical Feeding

    Directory of Open Access Journals (Sweden)

    Ryohei Hosono


    The advantage of the system is demonstrated by measuring an ultra-wideband (UWB) antenna with both the optical and electrical feeding systems and comparing the results with a calculated result. Ripples in the radiation pattern due to electrical feeding are successfully suppressed by the optical feeding. For example, in a radiation measurement on the azimuth plane at 3 GHz, a ripple amplitude of 1.0 dB that appeared with the electrical feeding is reduced to 0.3 dB. In addition, a circularly polarized (CP) antenna is successfully measured by the proposed system, showing that the system is suitable not only for amplitude but also for phase measurements.

  5. Evidence conflict measure based on OWA operator in open world

    National Research Council Canada - National Science Library

    Wen Jiang; Shiyu Wang; Xiang Liu; Hanqing Zheng; Boya Wei


    .... In this paper, a new method which combines generalized conflict coefficient, generalized evidence distance, and generalized interval correlation coefficient based on ordered weighted averaging (OWA...

  6. Determination of the half-life of ametryn in Red-Yellow Ultisol and Red-Yellow Latosol with different pH values

    Directory of Open Access Journals (Sweden)

    S.R.B. Andrade


    Full Text Available The objective of this study was to determine the half-life (t½) of the herbicide ametryn in Red-Yellow Ultisol and Red-Yellow Latosol with different pH values. Pots lined internally with plastic film were filled with 330.0 g of samples of the soils under study (Red-Yellow Latosol - LVA with pH values adjusted to 4.4, 4.9 and 5.8, and Red-Yellow Ultisol - PVA with pH 5.9). The soil samples were collected from degraded pastures free of previous herbicide application. Ametryn was applied to these samples at a dose of 2.5 L ha-1. Twelve hours after application, the first soil samples were taken from the pots to determine the concentration at time zero; every five days thereafter, new samples were taken from other pots to determine the ametryn concentration over time. Ametryn was extracted from the soil matrix by solid-liquid extraction with low-temperature partition (SLE-LTP), and the herbicide was quantified by high-performance liquid chromatography (HPLC). In parallel, a bioassay was carried out for indirect determination of the herbicide's persistence. Analysis of the data indicated half-lives (t½) of ametryn in the evaluated soils of 26, 19, 12 and 11 days for LVA pH 4.4, LVA pH 4.9, LVA pH 5.8 and PVA pH 5.9, respectively. Both methods (chromatography and bioassay) used to evaluate the persistence of ametryn in the soils showed that degradation of this herbicide is strongly influenced by soil pH and organic matter content.
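The half-life values above amount to fitting first-order decay kinetics to the measured concentration series; a hedged sketch with synthetic data (the 5-day sampling mirrors the study design; the function is illustrative, not the authors' code):

```python
import math

def half_life_days(times, concentrations):
    """First-order decay fit: ln(C) = ln(C0) - k*t, then t1/2 = ln(2)/k.
    Plain least-squares on the log-transformed data (no SciPy needed)."""
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    k = -sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs)) / \
        sum((t - t_mean) ** 2 for t in times)
    return math.log(2) / k

# Synthetic series with a true half-life of 19 days (as found for LVA pH 4.9)
k_true = math.log(2) / 19.0
t = [0, 5, 10, 15, 20, 25]
c = [100.0 * math.exp(-k_true * ti) for ti in t]
print(round(half_life_days(t, c), 1))   # → 19.0
```

With real chromatographic data the points scatter around the line, and the regression slope gives the best-fit decay constant.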

  7. The Surprisingly Low Motivational Power of Future Rewards: Comparing Conventional Money-Based Measures of Discounting with Motivation-Based Measures (United States)

    Ebert, Jane E. J.


    Temporal discount rates are often poor predictors of behaviors that we expect will be motivated by the future. The current research suggests this may be because conventional discounting measures are poor measures of the motivational value of future rewards. In six studies, I develop motivation-based measures of the present value (PV) of future…

  8. Schedule performance measurement based on statistical process control charts


    Oleghe, Omogbai; Salonitis, Konstantinos


    In a job-shop manufacturing environment, achieving a schedule that is on target is difficult due to the dynamism of factors affecting the system, and this makes schedule performance measurement systems hard to design and implement. In the present paper, Statistical Process Control charts are directly applied to a scheduling process for the purpose of objectively measuring schedule performance. SPC charts provide an objective and timely approach to designing, implementing and mo...

  9. Network Monitoring and Diagnosis Based on Available Bandwidth Measurement (United States)



  10. Indicators of hypertriglyceridemia from anthropometric measures based on data mining. (United States)

    Lee, Bum Ju; Kim, Jong Yeol


    The best indicator for the prediction of hypertriglyceridemia derived from anthropometric measures of body shape remains a matter of debate. The objectives are to determine the strongest predictor of hypertriglyceridemia from anthropometric measures and to investigate whether a combination of measures can improve the prediction accuracy compared with individual measures. A total of 5517 subjects aged 20-90 years participated in this study. The numbers of normal and hypertriglyceridemia subjects were 3022 and 653 females, respectively, and 1306 and 536 males, respectively. We evaluated 33 anthropometric measures for the prediction of hypertriglyceridemia using statistical analysis and data mining. In the 20-90-year-old groups, age in women was the variable that exhibited the highest predictive power; however, this was not the case in men in all age groups. Of the anthropometric measures, the waist-to-height ratio (WHtR) was the best predictor of hypertriglyceridemia in women. In men, the rib-to-forehead circumference ratio (RFcR) was the strongest indicator. The use of a combination of measures provides better predictive power compared with individual measures in both women and men. However, in the subgroups of ages 20-50 and 51-90 years, the strongest indicators for hypertriglyceridemia were rib circumference in the 20-50-year-old group and WHtR in the 51-90-year-old group in women, and RFcR in the 20-50-year-old group and BMI in the 51-90-year-old group in men. Our results demonstrated that the best predictor of hypertriglyceridemia may differ according to gender and age. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.


    Energy Technology Data Exchange (ETDEWEB)



    We address the question of how to identify and measure the degree of intelligence in systems. We define the presence of intelligence as equivalent to the presence of a control relation. We contrast the distinct atomic semiotic definitions of models and controls, and discuss hierarchical and anticipatory control. We conclude with a suggestion about moving towards quantitative measures of the degree of such control in systems.

  12. A Vision-Based Sensor for Noncontact Structural Displacement Measurement

    Directory of Open Access Journals (Sweden)

    Dongming Feng


    Full Text Available Conventional displacement sensors have limitations in practical applications. This paper develops a vision sensor system for remote measurement of structural displacements. An advanced template matching algorithm, referred to as the upsampled cross correlation, is adopted and further developed into a software package for real-time displacement extraction from video images. By simply adjusting the upsampling factor, better subpixel resolution can be easily achieved to improve the measurement accuracy. The performance of the vision sensor is first evaluated through a laboratory shaking table test of a frame structure, in which the displacements at all the floors are measured by using one camera to track either high-contrast artificial targets or low-contrast natural targets on the structural surface such as bolts and nuts. Satisfactory agreements are observed between the displacements measured by the single camera and those measured by high-performance laser displacement sensors. Then field tests are carried out on a railway bridge and a pedestrian bridge, through which the accuracy of the vision sensor in both time and frequency domains is further confirmed in realistic field environments. Significant advantages of the noncontact vision sensor include its low cost, ease of operation, and flexibility to extract structural displacement at any point from a single measurement.
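A one-dimensional sketch of the upsampled cross-correlation idea follows; the actual software package operates on 2-D video frames, and all names here are illustrative. The cross-power spectrum is zero-padded so its inverse FFT samples the correlation on a grid `upsample` times finer, giving 1/upsample-pixel resolution:

```python
import numpy as np

def subpixel_shift(ref, sig, upsample=20):
    """1-D sketch of upsampled cross-correlation: zero-pad the
    cross-power spectrum so the inverse FFT evaluates the correlation
    on a grid `upsample` times finer (1/upsample px resolution)."""
    n = len(ref)
    cross = np.fft.fft(ref) * np.conj(np.fft.fft(sig))
    padded = np.zeros(n * upsample, dtype=complex)
    half = n // 2
    padded[:half] = cross[:half]           # low positive frequencies
    padded[-half:] = cross[-half:]         # low negative frequencies
    corr = np.abs(np.fft.ifft(padded))
    shift = np.argmax(corr) / upsample     # peak location in original px
    if shift > n / 2:                      # unwrap circular shifts
        shift -= n
    return -shift                          # displacement of sig vs ref

# Demo: a Gaussian blob shifted by 2.5 px via an exact Fourier phase ramp
x = np.arange(64)
ref = np.exp(-((x - 32.0) ** 2) / 18.0)
ramp = np.exp(-2j * np.pi * np.fft.fftfreq(64) * 2.5)
sig = np.real(np.fft.ifft(np.fft.fft(ref) * ramp))
print(subpixel_shift(ref, sig))            # ≈ 2.5 (within 1/20 px)
```

Increasing the upsampling factor refines the grid on which the peak is located, which is the "better subpixel resolution" knob described in the abstract.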

  13. An EIRP Measurement Method for Base-Station Antennas Using Field Strengths Measured along a Single Straight Line

    Directory of Open Access Journals (Sweden)

    Soon-Soo Oh


    Full Text Available We describe an EIRP measurement technique for a base-station antenna. The proposed method can be applied, in particular, to base-station antennas installed in real environments. The Fresnel-region measurement method is an optimal technique to avoid far-field multipath interference and, furthermore, it can shorten the measurement time. Since only the field strengths along a single straight line are detected, we also propose a simple phase-retrieval method. For verification, a simulation and an experiment have been performed. An anechoic chamber was utilized in this paper before the real-environment test with the outdoor measurement system. The transformed far-field pattern and EIRP agree closely with the reference data within a valid angle. The proposed method can be applied to in situ EIRP measurements without moving a vehicle carrying the EIRP measurement apparatus.

  14. Mobile platform of altitude measurement based on a smartphone (United States)

    Roszkowski, Paweł; Kowalczyk, Marcin


    The article presents a low-cost, fully functional meter of altitude and pressure changes in the form of a mobile application running on Android OS. The measurements are possible thanks to the pressure sensor built into the majority of modern mobile phones, known as smartphones. Using their computing capabilities and other equipment components, such as the GPS receiver, in connection with data from the sensor enabled the authors to create a sophisticated handheld measuring platform with many unique features. One of them is an altitude-map drawing mode, in which the user can create maps of altitude changes simply by moving around the examined area. Another is a convenient mode for altitude measurement, extended with analysis tools that make it possible to compare measured values by displaying the data in the form of plots. The platform includes an external backup server where the user can secure all gathered data. Moreover, the results of an accuracy examination carried out after building the solution are presented. Finally, the realized altimeter is compared with other popular altimeters currently available on the market.
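The pressure-to-altitude conversion such an app relies on is typically the international barometric formula; a sketch under standard-atmosphere assumptions (in practice the reference sea-level pressure p0 would be user-calibrated, as the abstract's backup/calibration features suggest):

```python
def pressure_to_altitude_m(p_hpa, p0_hpa=1013.25):
    """International barometric formula (ISA, valid below ~11 km):
    h = 44330 * (1 - (p/p0)^(1/5.255)), pressures in hPa."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

print(round(pressure_to_altitude_m(1013.25), 1))  # → 0.0 (sea level)
print(round(pressure_to_altitude_m(899.0)))       # ≈ 998 m
```

An uncalibrated p0 shifts all readings by a constant offset, which is why such apps track pressure *changes* more reliably than absolute altitude.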

  15. Bite force measurement based on fiber Bragg grating sensor (United States)

    Padma, Srivani; Umesh, Sharath; Asokan, Sundarrajan; Srinivas, Talabattula


    The maximum level of voluntary bite force, which results from the combined action of the muscles of mastication, joints, and teeth, i.e., the craniomandibular structure, is considered one of the major indicators of the functional state of the masticatory system. Measurement of voluntary bite force provides useful data on jaw muscle function and activity, along with assessment of prosthetics. This study proposes an in vivo methodology for the dynamic measurement of bite force employing a fiber Bragg grating (FBG) sensor, known as the bite force measurement device (BFMD). The BFMD is a noninvasive intraoral device that transduces the bite force exerted at the occlusal surface into strain variations on a metal plate. These strain variations are acquired by the FBG sensor bonded over it. The BFMD facilitates adjustment of the distance between the biting platforms, which is essential to capture the maximum voluntary bite force at three different teeth positions, namely the incisor, premolar, and molar sites. The clinically relevant bite forces measured at the incisor, molar, and premolar positions are compared against each other. Furthermore, the bite forces measured for all subjects are segregated according to gender and compared.

  16. Distance-based functional diversity measures and their decomposition: a framework based on Hill numbers. (United States)

    Chiu, Chun-Huo; Chao, Anne


    Hill numbers (or the "effective number of species") are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify "the effective number of equally abundant and (functionally) equally distinct species" in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species to all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional generalizations of the
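The three measures can be written down compactly. A sketch following the formulas in Chiu and Chao's framework, for q ≠ 1 (the q → 1 limit needs a separate expression; variable names are ours):

```python
import numpy as np

def functional_hill(p, d, q=2.0):
    """Functional Hill number of order q, with Rao's quadratic entropy Q
    as the abundance-weighted mean pairwise distance.
    p : relative abundances (sum to 1); d : pairwise distance matrix.
    Returns (D, MD, FD): the functional Hill number, the mean functional
    diversity MD = D*Q, and the total functional diversity FD = D*MD."""
    p = np.asarray(p, float)
    d = np.asarray(d, float)
    Q = float(p @ d @ p)                    # Rao's quadratic entropy
    pij = np.outer(p, p)
    D = np.sum(d / Q * pij ** q) ** (1.0 / (2.0 * (1.0 - q)))
    MD = D * Q                              # mean functional diversity
    FD = D * MD                             # total functional diversity
    return D, MD, FD
```

For three equally abundant, equally distinct species (unit distances), D equals the ordinary Hill number 3, and FD equals 6, the sum of all pairwise distances, matching the reductions stated in the abstract.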

  17. Novel ultrasonic distance measuring system based on correlation method

    Directory of Open Access Journals (Sweden)

    Gądek K.


    Full Text Available This paper presents an innovative method for measuring the time delay of ultrasonic waves. The pulse methods used in previous studies were characterized by latency; the phase-correlation method presented in this article is free from this disadvantage. Thanks to phase encoding with Walsh functions, the presented method achieves better precision than previous methods. An algorithm to measure the delay of the reflected wave, using an ARM Cortex M4 microprocessor linked to a PC, has been developed and tested. The method uses the signal from the ultrasonic probe to precisely determine the time delay caused by propagation in the medium. In order to verify the effectiveness of the method, part of the measuring system was implemented in LabVIEW. The presented method proved to be effective, as shown by the simulation results presented.
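The core of any correlation-based delay estimator is locating the lag that maximizes the cross-correlation between the emitted code and the echo; a minimal sketch with a pseudo-random ±1 code standing in for the Walsh-function encoding described above:

```python
import numpy as np

def delay_samples(tx, rx):
    """Estimate the propagation delay (in samples) between an emitted
    burst (tx) and the received signal (rx) as the lag maximizing the
    cross-correlation."""
    corr = np.correlate(rx, tx, mode="full")
    return int(np.argmax(corr)) - (len(tx) - 1)

# Toy example: a short coded burst delayed by 120 samples in noise
rng = np.random.default_rng(0)
burst = np.sign(rng.standard_normal(64))     # pseudo-random +/-1 code
echo = np.zeros(1024)
echo[120:120 + 64] = 0.5 * burst
echo += 0.1 * rng.standard_normal(1024)
print(delay_samples(burst, echo))            # → 120
```

Dividing the estimated delay by the sampling rate and multiplying by the speed of sound in the medium gives the round-trip distance.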

  18. Testing the FOODBANCS hypothesis: Seasonal variations in near-bottom particle flux, bioturbation intensity, and deposit feeding based on 234Th measurements (United States)

    McClintic, Mark A.; DeMaster, David J.; Thomas, Carrie J.; Smith, Craig R.


    Naturally occurring 234Th (24-d half-life) was used on the West Antarctic continental shelf to evaluate temporal variations in the flux of particulate material reaching the seabed, bioturbation intensity, the seasonal continuity of feeding by benthic fauna, and trends in particle selection during ingestion for six common detritivores (four surface deposit feeders and two subsurface deposit feeders). These measurements were made at three stations during the five FOODBANCS cruises (December 1999, March, June, and October 2000, and March 2001) to assess the nature of pelagic-benthic coupling on the shelf and to evaluate the seabed as a potential food bank for deposit feeders when surface primary production is minimal. Two summer regimes were sampled (March 2000 and March 2001), with the latter exhibiting a distinct 1-2-cm-thick phytodetritus layer in nearly all sediment core samples. At site B, the 234Th fluxes into the near-bottom (150/170 mab) sediment traps were indistinguishable for the December-March 2000, March-June 2000, and June-October 2000 sampling intervals (fluxes ranging from 170 to 280 dpm m-2 d-1). However, the sediment-trap 234Th flux measured for the October 2000-March 2001 interval (1000 dpm m-2 d-1) was ~5-fold greater than during the other three sampling periods, consistent with the deposition of a phytodetritus layer. The steady-state 234Th fluxes derived from seabed inventories at site B were 2.4-2.7 times greater than the sediment-trap 234Th fluxes, indicating substantial scavenging of this particle-reactive radiotracer in the bottom 150 m of the water column and/or lateral transport near the seabed. The seabed 234Th inventories at the three stations showed no variation during the first four cruises, but were significantly greater during cruise FB-V (March 2001), when the phytodetritus layer occurred. Based on 234Th distributions in the seabed, bioturbation intensities (quantified using the diffusive mixing coefficient, Db) varied from 0
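For reference, the steady-state seabed flux quoted above follows from balancing tracer supply against radioactive decay of the inventory; a one-function sketch (the "24-d" half-life is taken here as 24.1 d, and units follow the abstract):

```python
import math

def steady_state_flux(inventory_dpm_m2, half_life_days=24.1):
    """Steady state for a particle-reactive tracer: the flux into the
    seabed balances decay of the seabed inventory, F = lambda * I,
    with lambda = ln(2) / t_half.  Result in dpm m^-2 d^-1."""
    return math.log(2) / half_life_days * inventory_dpm_m2
```

For example, a flux of about 250 dpm m-2 d-1 (the trap range reported above) corresponds at equilibrium to an inventory of roughly 8700 dpm m-2.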

  19. Measurement techniques for UMTS signals radiated by radio base stations

    Energy Technology Data Exchange (ETDEWEB)

    Buscaglia, F.; Gianola, P


    In most European countries, radio coverage for the third mobile radio generation, i.e. UMTS (Universal Mobile Telecommunications System), will soon be started. In the past few years, national laws specifying limits on exposure to electromagnetic fields have drawn much attention to electromagnetic test beds and measurement procedures for mobile radio equipment and systems. An overview is given of the UMTS system, showing the main characteristics of the radio access network UTRAN (UMTS Terrestrial Radio Access Network). An analysis is also provided of the measurement techniques and related instrumentation for the electric field intensity radiated by a UMTS radio station. (author)

  20. Measurements of Electromagnetic Fields Emitted from Cellular Base Stations in

    National Research Council Canada - National Science Library

    K. J. Ali


    .... The aim of this work is to determine the safe and unsafe ranges and discuss damage caused by radiation emitted from Asia cell base stations in Shirqat city and discusses the best ways in which can...

  1. A measurement-based technique for incipient anomaly detection

    KAUST Repository

    Harrou, Fouzi


    Fault detection is essential for safe operation of various engineering systems. Principal component analysis (PCA) has been widely used in monitoring highly correlated process variables. Conventional PCA-based methods, nevertheless, often fail to detect small or incipient faults. In this paper, we develop new PCA-based monitoring charts, combining PCA with multivariate memory control charts, such as the multivariate cumulative sum (MCUSUM) and multivariate exponentially weighted moving average (MEWMA) monitoring schemes. The multivariate control charts with memory are sensitive to small and moderate faults in the process mean, which significantly improves the performance of PCA methods and widens their applicability in practice. Using simulated data, we demonstrate that the proposed PCA-based MEWMA and MCUSUM control charts are more effective in detecting small shifts in the mean of the multivariate process variables, and outperform the conventional PCA-based monitoring charts. © 2015 IEEE.
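The mechanism can be illustrated in simplified form: learn the principal subspace on in-control data, compute a residual statistic for new samples, and smooth it with a memory chart so small persistent shifts accumulate. The sketch below uses a univariate EWMA on the Q (SPE) statistic; the paper's MEWMA/MCUSUM charts are multivariate, and all names and data here are illustrative:

```python
import numpy as np

def pca_ewma_monitor(train, test, n_comp=2, lam=0.2):
    """Project out the principal subspace learned from in-control data,
    compute the residual Q (SPE) statistic per test sample, and pass it
    through an EWMA so small, persistent faults accumulate."""
    mu = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
    P = Vt[:n_comp].T                        # retained principal loadings
    E = (test - mu) - (test - mu) @ P @ P.T  # residuals off the PC subspace
    q = np.sum(E ** 2, axis=1)               # Q (SPE) statistic per sample
    ewma = np.empty_like(q)
    z = q[0]
    for i, qi in enumerate(q):
        z = lam * qi + (1 - lam) * z         # EWMA recursion
        ewma[i] = z
    return ewma

# Demo: 4 correlated variables; a small shift enters halfway through
rng = np.random.default_rng(1)
lat = rng.standard_normal((300, 2))
A = np.array([[1.0, 0.0, 1.0, 1.0], [0.0, 1.0, 1.0, -1.0]])
X = lat @ A + 0.1 * rng.standard_normal((300, 4))
train, monitored = X[:200], X[200:].copy()
monitored[50:, 0] += 0.5                     # small shift in one variable
chart = pca_ewma_monitor(train, monitored)
```

The shift breaks the learned correlation structure, so it lands largely in the residual space and the EWMA rises well above its in-control level.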

  2. Contactless Quality Monitoring Sensor Based on Electrical Conductivity Measurements

    Directory of Open Access Journals (Sweden)

    Armin SATZ


    Full Text Available A first prototype of a contactless conductivity sensor for AdBlue® quality monitoring is presented. Based on a detailed sensor mode analysis, it is shown that capacitive sensors can be designed to sense electrical liquid conductivity. The sensor design process is based on a sensor model, which allows simulating capacitive sensor responses for arbitrary electrode and liquid tank geometries. Finally, temperature-induced errors are estimated.

  3. Deterministic Predictions of Vessel Responses Based on Past Measurements

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Jensen, Jørgen Juncher


    The paper deals with a prediction procedure from which global wave-induced responses can be deterministically predicted a short time, 10-50 s, ahead of current time. The procedure relies on the autocorrelation function and takes into account prior measurements only; i.e. knowledge about wave cond...

  4. Nonlinear ultrasonic measurements based on cross-correlation filtering techniques (United States)

    Yee, Andrew; Stewart, Dylan; Bunget, Gheorghe; Kramer, Patrick; Farinholt, Kevin; Friedersdorf, Fritz; Pepi, Marc; Ghoshal, Anindya


    Cyclic loading of mechanical components promotes the formation of dislocation dipoles in metals, which can serve as precursors to crack nucleation and ultimately lead to failure. In the laboratory setting, an acoustic nonlinearity parameter has been assessed as an effective indicator for characterizing the progression of fatigue damage precursors. However, the need to use monochromatic waves of medium-to-high acoustic energy has presented a constraint, making it problematic for use in field applications. This paper presents a potential approach for field measurement of acoustic nonlinearity by using general purpose ultrasonic pulser-receivers. Nonlinear ultrasonic measurements during fatigue testing were analyzed using the contact and immersion pulse-through methods. A novel cross-correlation filtering technique was developed to extract the fundamental and higher harmonic waves from the signals. As in the case of classic harmonic generation, the nonlinearity parameters of the second and third harmonics indicate a strong correlation with fatigue cycles. Consideration was given to potential nonlinearities in the measurement system, and tests have confirmed that measured second harmonic signals exhibit a linear dependence on the input signal strength, further affirming the conclusion that this parameter relates to damage precursor formation from cyclic loading.
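The acoustic nonlinearity parameter referred to above is conventionally estimated from the fundamental and second-harmonic amplitudes of the received signal; a hedged sketch using a plain FFT rather than the paper's cross-correlation filtering (scaling constants and names are ours):

```python
import numpy as np

def nonlinearity_parameter(signal, fs, f0):
    """Relative acoustic nonlinearity parameter beta' ~ A2 / A1^2,
    from the fundamental (f0) and second-harmonic (2*f0) amplitudes
    of a received tone burst.  Absolute scaling constants omitted."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    a1 = spec[np.argmin(np.abs(freqs - f0))]        # fundamental amplitude
    a2 = spec[np.argmin(np.abs(freqs - 2 * f0))]    # 2nd-harmonic amplitude
    return a2 / a1 ** 2

# Demo: synthetic received signal with a weak second harmonic
fs, f0 = 100e6, 5e6
t = np.arange(2000) / fs
sig = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.02 * np.sin(2 * np.pi * 2 * f0 * t)
beta = nonlinearity_parameter(sig, fs, f0)
```

Tracking beta over fatigue cycles is what reveals the accumulation of damage precursors; the linearity check in the abstract rules out instrument-generated harmonics masquerading as material nonlinearity.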

  5. Height estimations based on eye measurements throughout a gait cycle

    DEFF Research Database (Denmark)

    Yang, Sylvia X M; Larsen, Peter K; Alkjær, Tine


    (EH) measurement, on the other hand, is less prone to concealment. The purpose of the present study was to investigate: (1) how the eye height varies during the gait cycle, and (2) how the eye height changes with head position. The eyes were plotted manually in APAS for 16 test subjects during...

  6. COMET: A multimedia internet based platform for education in measurement

    NARCIS (Netherlands)

    Grattan, K.T.V.; Regtien, Paulus P.L.; Halaj, M; Kureková, E.; Gabko, P


    The project COMET provides a multimedia training package for metrology and measurement. The package is developed by a consortium of 10 institutes from 7 European countries. It consists of 31 modules, each dealing with a particular aspect of metrology, and is available in English, German, French and

  7. Quality measures for HRR alignment based ISAR imaging algorithms

    CSIR Research Space (South Africa)

    Janse van Rensburg, V


    Full Text Available Some Inverse Synthetic Aperture Radar (ISAR) algorithms form the image in a two-step process of range alignment and phase conjugation. This paper discusses a comprehensive set of measures used to quantify the quality of range alignment, with the aim...

  8. UV ground based measurements in Río Gallegos, Argentina (United States)

    Wolfram, Elian A.; Salvador, Jacobo; D'Elía, Raúl; Quel, Eduardo


    CEILAP's Lidar Division has established an atmospheric remote sensing site in Río Gallegos (51°55' S, 69°14' W) in the southern region of Argentina, where the SOLAR Campaign was held during 2005-2006. The main objectives of this experiment were to measure stratospheric ozone profiles and surface UV radiation in a subpolar region, where the influence of the polar vortex and the Antarctic ozone hole is remarkable. The site has lidar instruments and passive sensors to measure solar UV irradiance. In this paper we focus on the passive remote sensing sensors and the erythemal irradiances recorded at Río Gallegos during 2005-2006. The time evolution of the UV index was derived from these measurements, and the influence of ozone-depleted air masses passing over Río Gallegos is highlighted. This Patagonian region is characterized by high cloud cover during the day, which strongly changes the distribution of UV radiation reaching the ground surface. For that reason, some overpasses of the ozone hole are masked by cloud cover, suppressing the expected increase in UVB radiation. Conversely, on some occasions cloud edges increase the surface UV radiation. Both effects are analyzed in this work, and the reduction or increase of ultraviolet radiation is quantified by comparing measured and modeled UV radiation. In addition, the time evolution of daily UV exposures is presented.

  9. Comet: An internet based platform for education in measurement

    NARCIS (Netherlands)

    Regtien, Paulus P.L.; Halaj, Martin; Kureková, Eva; Gabko, Peter


    The project COMET provides a multimedia training package for metrology and measurement. The package is developed by a consortium of 10 institutes from 7 European countries. It consists of 31 modules, each dealing with a particular aspect of metrology, and is available in English, German, French and

  10. Onboard sea state estimation based on measured ship motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Stredulinsky, David C.


    It is possible to obtain estimates of the sea state at the specific position of an advancing vessel by processing measurements of the vessel’s wave-induced responses. The analogy to a wave rider buoy is clear, although the situation of an advancing ship is more complex due to forward speed. The p...

  11. A microprocessor-based system for measurement of gas exchange. (United States)

    Jaffe, M B; Frick, G; Wilson, D; Johnston, M; Reid, H; Foster, S; Norton, A C


    The basic physical measurements for determining gas exchange are difficult to make accurately even in a well-equipped, human-performance laboratory with experienced personnel. A fully automated system has been developed to achieve the accuracy of standard laboratory measurements. The application of this instrument extends from critical care to stress-testing. Real-time, multitasking software integrates the data collected from several transducers and analyzers and calculates up to several dozen physiological variables, which are range-checked for reasonableness. The operator is provided with user-friendly means to tailor the data-reporting and -collection functions of the system to his own needs and requirements. Because the instrument is controlled by software, the functions of calibration, measurement, timing, reporting, plotting, and data quality assurance are highly cost-effective. Extensive use of formal test procedures permits verifying all systems and data reliability; it also assures meeting the desired specifications. The ease of operation and high-quality results inherent in this system make it unsurpassed in gas-exchange measurements.

  12. Clustering the objective interestingness measures based on tendency of variation in statistical implications

    Directory of Open Access Journals (Sweden)

    Nghia Quoc Phan


    Full Text Available In recent years, research on clustering objective interestingness measures has developed rapidly in order to assist users in choosing the appropriate measure for their application. Researchers in this field mainly focus on three directions: clustering based on the properties of the measures, clustering based on the behavior of the measures, and clustering based on the tendency of variation in statistical implications. In this paper we propose a new approach to clustering objective interestingness measures based on the tendency of variation in statistical implications. We built the statistical implication data of 31 objective interestingness measures based on an examination of the partial derivatives with respect to four parameters. From these data, two distance matrices of interestingness measures were established based on the Euclidean and Manhattan distances. Similarity trees were then built from the distance matrices, yielding clusterings of the 31 measures at two different clustering thresholds.

  13. Spectral Entropy Based Neuronal Network Synchronization Analysis Based on Microelectrode Array Measurements. (United States)

    Kapucu, Fikret E; Välkki, Inkeri; Mikkonen, Jarno E; Leone, Chiara; Lenk, Kerstin; Tanskanen, Jarno M A; Hyttinen, Jari A K


    Synchrony and asynchrony are essential aspects of the functioning of interconnected neuronal cells and networks. New information on neuronal synchronization can be expected to aid in understanding these systems. Synchronization provides insight in the functional connectivity and the spatial distribution of the information processing in the networks. Synchronization is generally studied with time domain analysis of neuronal events, or using direct frequency spectrum analysis, e.g., in specific frequency bands. However, these methods have their pitfalls. Thus, we have previously proposed a method to analyze temporal changes in the complexity of the frequency of signals originating from different network regions. The method is based on the correlation of time varying spectral entropies (SEs). SE assesses the regularity, or complexity, of a time series by quantifying the uniformity of the frequency spectrum distribution. It has been previously employed, e.g., in electroencephalogram analysis. Here, we revisit our correlated spectral entropy method (CorSE), providing evidence of its justification, usability, and benefits. Here, CorSE is assessed with simulations and in vitro microelectrode array (MEA) data. CorSE is first demonstrated with a specifically tailored toy simulation to illustrate how it can identify synchronized populations. To provide a form of validation, the method was tested with simulated data from integrate-and-fire model based computational neuronal networks. To demonstrate the analysis of real data, CorSE was applied on in vitro MEA data measured from rat cortical cell cultures, and the results were compared with three known event based synchronization measures. Finally, we show the usability by tracking the development of networks in dissociated mouse cortical cell cultures. The results show that temporal correlations in frequency spectrum distributions reflect the network relations of neuronal populations. 
In the simulated data, CorSE unraveled the
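The core steps of the CorSE method described above, windowed spectral entropy followed by correlation across channels, can be sketched as follows. The window length, signal model, and channel construction are illustrative assumptions for a toy demonstration, not the authors' parameters:

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum of one window,
    scaled to [0, 1] by the maximum possible entropy."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(len(psd))

def corse(sig_a, sig_b, win=256):
    """Correlate the time-varying spectral entropies of two channels."""
    n = min(len(sig_a), len(sig_b)) // win
    se_a = [spectral_entropy(sig_a[i * win:(i + 1) * win]) for i in range(n)]
    se_b = [spectral_entropy(sig_b[i * win:(i + 1) * win]) for i in range(n)]
    return np.corrcoef(se_a, se_b)[0, 1]

rng = np.random.default_rng(0)
drive = rng.standard_normal(8192)              # shared network activity
a = drive + 0.05 * rng.standard_normal(8192)   # two "synchronized" channels
b = drive + 0.05 * rng.standard_normal(8192)
c = rng.standard_normal(8192)                  # independent channel
print(corse(a, b), corse(a, c))
```

With a shared drive, the two dependent channels yield a high SE correlation, while the independent channel yields a value near zero.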

  14. Pose measurement of Anterior Pelvic Plane based on inertial measurement unit in total hip replacement surgeries. (United States)

    Zhe Cao; Shaojie Su; Hong Chen; Hao Tang; Yixin Zhou; Zhihua Wang


    In Total Hip Replacement (THR), inaccurate measurement of the Anterior Pelvic Plane (APP), which is usually used as a reference plane, leads to malposition of the acetabular prosthesis. As a result, the risk of impingement, dislocation and wear increases and the safe range of motion is limited. To acquire an accurate pose of the APP, a measurement system is designed in this paper that consists of two parts: one estimates the initial pose of the APP and the other tracks its dynamic motion. Both parts are composed of an Inertial Measurement Unit (IMU) and magnetometer sensors. An Extended Kalman Filter (EKF) is adopted to fuse the data from the IMU and the magnetometer sensors to estimate the orientation of the pelvis. The test results show that the error angle between the calculated axis and the true axis of the pelvis in the geodetic coordinate frame is less than 1.2 degrees, which meets the requirement of the surgery.
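The EKF itself is beyond a short sketch, but the static measurement model such a filter typically fuses, roll and pitch from the gravity vector plus a tilt-compensated heading from the magnetometer, can be illustrated as below. Axis conventions and signs are assumptions; the paper's actual filter and coordinate frames may differ:

```python
import numpy as np

def tilt_and_heading(acc, mag):
    """Roll/pitch from the accelerometer's gravity reading and a
    tilt-compensated yaw from the magnetometer (static-pose sketch)."""
    ax, ay, az = acc / np.linalg.norm(acc)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    # rotate the magnetometer reading into the horizontal plane
    mx, my, mz = mag
    xh = (mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch)
          + mz * np.cos(roll) * np.sin(pitch))
    yh = my * np.cos(roll) - mz * np.sin(roll)
    yaw = np.arctan2(-yh, xh)
    return np.degrees([roll, pitch, yaw])

# level sensor, magnetic field along +x: all three angles should be ~0
angles = tilt_and_heading(np.array([0.0, 0.0, 9.81]),
                          np.array([0.2, 0.0, -0.4]))
print(angles)
```

An EKF would treat these quantities as measurement updates and blend them with gyroscope integration in the prediction step.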

  15. Site testing study based on weather balloons measurements (United States)

    Aristidi, E.; Agabi, A.; Azouit, M.; Fossat, E.; Vernin, J.; Sadibekova, T.; Travouillon, T.; Lawrence, J. S.; Halter, B.; Roth, W. L.; Walden, V. P.

    We present wind and temperature profiles at Dome C measured during the polar summer by balloon-borne sondes. Data from 197 flights have been processed for 4 campaigns between 2000 and 2004. We show the exceptional wind conditions at Dome C: the average ground wind speed is 3.6 m s-1. We noticed in mid-November the presence of strong high-altitude winds (40 m s-1), probably due to the polar vortex, which disappears in summer. These winds seem to have no effect on seeing measurements made with a DIMM during the same period. Temperature profiles exhibit a minimum at a height of 5500 m (above the snow surface) that defines the tropopause. The surface-layer temperature profile has a negative gradient in the first 50 m above ground in the afternoon and a strong inversion layer (5°C over 50 m) around midnight. Wind profiles are compared with those of other astronomical sites, and with a meteorological model from Meteo France.

  16. Remote measurement of microwave distribution based on optical detection

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Zhong; Ding, Wenzheng; Yang, Sihua; Chen, Qun; Xing, Da [MOE Key Laboratory of Laser Life Science and Institute of Laser Life Science, South China Normal University, Guangzhou 510631 (China)


    In this letter, we present the development of a remote microwave measurement system. The method employs an arc discharge lamp that serves as an energy converter from microwave to visible light, which can propagate without a transmission medium. Observed with a charge-coupled device, a quantitative microwave power distribution can be obtained while the operators and electronic instruments remain at a distance from the high-power region, reducing the potential risk. We performed experiments using pulsed microwaves, and the results show that the system response depends on the microwave intensity over a certain range. Most importantly, the microwave distribution can be monitored in real time by optical observation of the response of a one-dimensional lamp array. The characteristics of low cost, a wide detection bandwidth, remote measurement, and room-temperature operation make the system a preferred detector for microwave applications.

  17. Recruitment recommendation system based on fuzzy measure and indeterminate integral (United States)

    Yin, Xin; Song, Jinjie


    In this study, we propose a comprehensive evaluation approach based on the indeterminate integral. By introducing the related concepts of the indeterminate integral and their formulas into the recruitment recommendation system, we can calculate the suitability of each job for different applicants, taking as prerequisites the defined importance of each criterion listed in the job advertisements, the associations between criteria, and subjective assessments. We can then make recommendations to the applicants, ranking jobs by suitability score from high to low. Finally, we exemplify the usefulness and practicality of this system with samples.

  18. EPR-based distance measurements at ambient temperature (United States)

    Krumkacheva, Olesya; Bagryanskaya, Elena


    Pulsed dipolar (PD) EPR spectroscopy is a powerful technique allowing for distance measurements between spin labels in the range of 2.5-10.0 nm. It was proposed more than 30 years ago, and nowadays is widely used in biophysics and materials science. Until recently, PD EPR experiments were limited to cryogenic temperatures (T biomolecules, the influence of a linker between the spin probe and biomolecule, and future opportunities.

  19. Energy Performance Contracting Methodology Based upon Simulation and Measurement


    Ligier, Simon; Robillart, Maxime; Schalbart, Patrick; Peuportier, Bruno


    Discrepancies between ex-ante energy performance assessment and actual consumption of buildings hinder the development of energy performance contracting (EPC). To address this issue, uncertainty integration in simulation as well as measurement and verification (M&V) strategies have been studied. In this article, we propose a methodology combining detailed energy performance simulation and M&V anticipation. Statistical studies using Monte-Carlo analysis allow a guarant...

  20. Charge measurements in stratiform cloud from a balloon based sensor

    Energy Technology Data Exchange (ETDEWEB)

    Nicoll, K A; Harrison, R G [Department of Meteorology, University of Reading, Earley Gate, Reading, Berkshire, UK, RG6 6BB (United Kingdom)


    The electrification of stratiform clouds has been little investigated in comparison with thunderstorms and fair-weather atmospheric electricity. Theory indicates that, at the upper and lower horizontal boundaries of layer clouds, charging will arise from the vertical flow of cosmogenic ions in the global atmospheric electric circuit. Charge is transferred to droplets and particles, affecting cloud microphysical processes such as collision and droplet activation. Owing to the lack of in-situ measurements, the magnitude and distribution of charge in stratiform clouds is not well known. A sensitive, inexpensive, balloon-borne charge sensor has been developed to make in-situ measurements of edge charging in stratiform cloud using a standard meteorological radiosonde system. The charge sensor has now been flown through over 20 stratiform clouds and frequently detected charge up to 200 pC m{sup -3} near cloud edges. These results are compared with measurements from the same sensor used to investigate charge in particle layers, such as volcanic ash from the Eyjafjallajoekull eruption and Saharan dust in the Cape Verde Isles.

  1. Real-time temperature field measurement based on acoustic tomography (United States)

    Bao, Yong; Jia, Jiabin; Polydorides, Nick


    Acoustic tomography can be used to measure the temperature field from the time-of-flight (TOF). In order to capture real-time temperature field changes and accurately yield quantitative temperature images, two improvements to the conventional acoustic tomography system are studied: simultaneous acoustic transmission and TOF collection along multiple ray paths, and an offline iteration reconstruction algorithm. During system operation, all the acoustic transceivers send modulated and filtered wideband Kasami sequences simultaneously to facilitate fast and accurate TOF measurements using cross-correlation detection. For image reconstruction, the iteration process is separated and executed offline beforehand to shorten computation time for online temperature field reconstruction. The feasibility and effectiveness of the developed methods are validated in the simulation study. The simulation results demonstrate that the proposed method can reduce the processing time per frame from 160 ms to 20 ms, while the reconstruction error remains less than 5%. Hence, the proposed method has great potential in the measurement of rapid temperature change with good temporal and spatial resolution.
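The TOF estimation step, cross-correlating the received signal against a known wideband probe sequence and taking the peak lag, can be sketched as follows. The sample rate, probe signal, and noise level are illustrative stand-ins; the paper uses modulated and filtered Kasami sequences:

```python
import numpy as np

fs = 50_000                        # sample rate in Hz (illustrative)
rng = np.random.default_rng(1)
probe = rng.standard_normal(512)   # stand-in for a wideband Kasami sequence
true_delay = 137                   # time of flight, in samples

# synthesize a noisy received signal containing the delayed probe
received = np.zeros(4096)
received[true_delay:true_delay + len(probe)] = probe
received += 0.2 * rng.standard_normal(4096)

# TOF = lag of the cross-correlation peak with the known probe
xcorr = np.correlate(received, probe, mode="valid")
tof_samples = int(np.argmax(xcorr))
tof_seconds = tof_samples / fs
print(tof_samples)
```

The sharp autocorrelation of a wideband sequence is what makes the peak unambiguous even with several transceivers transmitting simultaneously.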

  2. Oscillatory motion based measurement method and sensor for measuring wall shear stress due to fluid flow (United States)

    Armstrong, William D [Laramie, WY; Naughton, Jonathan [Laramie, WY; Lindberg, William R [Laramie, WY


    A shear stress sensor for measuring fluid wall shear stress on a test surface is provided. The wall shear stress sensor is comprised of an active sensing surface and a sensor body. An elastic mechanism mounted between the active sensing surface and the sensor body allows movement between the active sensing surface and the sensor body. A driving mechanism forces the shear stress sensor to oscillate. A measuring mechanism measures displacement of the active sensing surface relative to the sensor body. The sensor may be operated under periodic excitation where changes in the nature of the fluid properties or the fluid flow over the sensor measurably changes the amplitude or phase of the motion of the active sensing surface, or changes the force and power required from a control system in order to maintain constant motion. The device may be operated under non-periodic excitation where changes in the nature of the fluid properties or the fluid flow over the sensor change the transient motion of the active sensor surface or change the force and power required from a control system to maintain a specified transient motion of the active sensor surface.

  3. Urban air quality measurements using a sensor-based system (United States)

    Ródenas, Mila; Hernández, Daniel; Gómez, Tatiana; López, Ramón; Muñoz, Amalia


    Air pollution levels in urban areas have raised interest not only in the scientific community but also among the general public, at both the regional and the European level. This interest has grown in parallel with the development of miniaturized sensors, which only very recently have become suitable for air quality measurements. Certainly, their small size and price allow them to be used as a network of sensors capable of providing high temporal and spatial frequency measurements to characterize an area or city, with increasing potential, under certain considerations, as a complement to conventional methods. Within the frame of the LIFE PHOTOCITYTEX project (use of photocatalytic textiles to help reduce air pollution), CEAM has developed a system to measure gaseous compounds of importance for urban air quality characterization. This system, which allows an autonomous power supply, uses commercial NO, NO2, O3 and CO2 small sensors and incorporates measurements of temperature and humidity. A first version, using XBee boards (radiofrequency) for communications, has been installed in the urban locations defined by the project (a tunnel and a school), permitting the long-term air quality characterization of sites in the presence of the textiles. An improved second version of the system, which also comprises a sensor for measuring particles and uses GPRS for communications, has been developed and successfully installed in the city center of Valencia. Data are sent to a central server where they can be accessed by citizens online in near real time, and, in general, they can be utilized in air quality characterization, for decision-making related to decontamination (traffic regulation, photocatalytic materials, etc.), in air quality models or in mobile applications of interest to citizens. Within this work, temporal trends obtained with this system in different urban locations will be shown, discussing the impact of the characteristics of the

  4. Improving indoor localisation of firefighters based on inertial measurements

    NARCIS (Netherlands)

    Bonestroo, W.J. (Wilco); Leeuwen, van H. (Henk); Wassing, A. (Andre); Zebel, J. (Joris)


    Knowing firefighters’ locations in a burning building would dramatically improve their safety. In this study from the Saxion Research Centre for Design and Technology in the Firebee project, an algorithm was developed and tested to enhance the estimation of a person’s location, based on

  5. Using coherence-based measures to predict query difficulty

    NARCIS (Netherlands)

    He, J.; Larson, M.; de Rijke, M.


    We investigate the potential of coherence-based scores to predict query difficulty. The coherence of a document set associated with each query word is used to capture the quality of a query topic aspect. A simple query coherence score, QC-1, is proposed that requires the average coherence

  6. Smartphone-based quantitative measurements on holographic sensors.

    Directory of Open Access Journals (Sweden)

    Gita Khalili Moghaddam

    Full Text Available The research reported herein integrates a generic holographic sensor platform and a smartphone-based colour quantification algorithm in order to standardise and improve the determination of the concentration of analytes of interest. The utility of this approach has been exemplified by analysing the replay colour of the captured image of a holographic pH sensor in near real time. Personalised image encryption followed by a wavelet-based image compression method were applied to secure the image transfer across a bandwidth-limited network to the cloud. The decrypted and decompressed image was processed through four principal steps: recognition of the hologram in the image with a complex background using a template-based approach; conversion of device-dependent RGB values to device-independent CIEXYZ values using a polynomial model of the camera and computation of the CIEL*a*b* values; use of the colour coordinates of the captured image to segment the image, select the appropriate colour descriptors and, ultimately, locate the region of interest (ROI), i.e. the hologram in this case; and finally, application of a machine-learning-based algorithm to correlate the colour coordinates of the ROI to the analyte concentration. Integrating holographic sensors and the colour image processing algorithm potentially offers a cost-effective platform for the remote monitoring of analytes in real time in readily accessible body fluids by minimally trained individuals.
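The device-dependent-to-device-independent colour conversion step can be illustrated with the standard sRGB (D65) transfer function and matrix below; the paper instead fits a polynomial model per camera, so this fixed matrix is only a stand-in:

```python
import numpy as np

# Standard sRGB (D65) linear-RGB -> CIEXYZ matrix
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb_255):
    """Convert 8-bit sRGB values to CIEXYZ (Y of white = 1.0)."""
    rgb = np.asarray(rgb_255, dtype=float) / 255.0
    # undo the sRGB gamma to obtain linear RGB
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    return M @ lin

xyz_white = srgb_to_xyz([255, 255, 255])
print(xyz_white)  # ~ D65 white point (0.9505, 1.0000, 1.0890)
```

CIEL*a*b* values then follow from XYZ by the usual cube-root formulas relative to the white point.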

  7. Measurement of hepatic steatosis based on magnetic resonance images (United States)

    Tkaczyk, Adam; Jańczyk, Wojciech; Chełstowska, Sylwia; Socha, Piotr; Mulawka, Jan


    The subject of this work is the usage of digital image processing to measure hepatic steatosis. To calculate this value manually it requires a lot of time and precision from the radiologist. In order to resolve this issue, a C++ application has been created. This paper describes the algorithms that have been used to solve the problem. The next chapter presents the application architecture and introduces graphical user interface. The last section describes all the tests which have been carried out to check the correctness of the results.

  8. Reachability-based impact as a measure for insiderness

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof


    Insider threats pose a difficult problem for many organisations. While organisations in principle would like to judge the risk posed by a specific insider threat, this is in general not possible. This limitation is caused partly by the lack of models for human behaviour, partly by restrictions...... on how much and what may be monitored, and by our inability to identify relevant features in large amounts of logged data. To overcome this, the notion of insiderness has been proposed, which measures the degree of access an actor has to a certain resource. We extend this notion with the concept...

  9. Intelligent tires for improved tire safety based on strain measurements (United States)

    Matsuzaki, Ryosuke; Todoroki, Akira


    Intelligent tires, equipped with sensors for monitoring applied strain, are effective in improving reliability and control systems such as anti-lock braking systems (ABSs). However, since a conventional foil strain gage has high stiffness, it causes the analyzed region to behave unnaturally. The present study proposes a novel rubber-based strain sensor fabricated using photolithography. The rubber base has the same mechanical properties as the tire surface; thereby the sensor does not interfere with the tire deformation and can accurately monitor the behavior of the tire. We also investigate the application of the strain data to optimized braking control and a road-condition warning system. Optimized braking control can be achieved by keeping the slip ratio constant, and the road-condition warning would be actuated if the recorded friction coefficient at a certain slip ratio is lower than a 'safe' reference value.
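The slip ratio held constant by such a braking controller has a simple definition; a minimal sketch with illustrative numbers follows:

```python
# Slip ratio from vehicle speed and wheel speed; braking control aims to
# hold this near the value that maximizes friction (numbers illustrative).
def slip_ratio(v_vehicle, omega_wheel, r_wheel):
    """(vehicle speed - wheel surface speed) / vehicle speed."""
    return (v_vehicle - omega_wheel * r_wheel) / v_vehicle

v = 20.0       # vehicle speed, m/s
r = 0.3        # effective wheel radius, m
omega = 56.0   # wheel angular speed, rad/s -> surface speed 16.8 m/s
s = slip_ratio(v, omega, r)
print(s)
```

A measured friction coefficient at a known slip ratio can then be compared against a reference curve to trigger the road-condition warning.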


    Directory of Open Access Journals (Sweden)

    K. Yu


    Full Text Available In this study the focus is on ocean surface altimetry using the signals transmitted from GNSS (Global Navigation Satellite System satellites. A low-altitude airborne experiment was recently conducted off the coast of Sydney. Both a LiDAR experiment and a GNSS reflectometry (GNSS-R experiment were carried out in the same aircraft, at the same time, in the presence of strong wind and rather high wave height. The sea surface characteristics, including the surface height, were derived from processing the LiDAR data. A two-loop iterative method is proposed to calculate sea surface height using the relative delay between the direct and the reflected GNSS signals. The preliminary results indicate that the results obtained from the GNSS-based surface altimetry deviate from the LiDAR-based results significantly. Identification of the error sources and mitigation of the errors are needed to achieve better surface height estimation performance using GNSS signals.
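A single-reflection specular geometry links the relative delay between direct and reflected signals to the surface height. A minimal sketch of that inversion follows; it is not the paper's two-loop iterative method, which accounts for additional geometric and error terms:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def surface_height(delay_s, elev_deg, rx_height_m):
    """Invert the specular geometry: the reflected path exceeds the
    direct path by 2*(rx_height - surface_height)*sin(elevation)."""
    path_diff = delay_s * C
    return rx_height_m - path_diff / (2.0 * math.sin(math.radians(elev_deg)))

# round trip: synthesize a delay from a known geometry, then invert it
true_surface, rx, elev = 12.0, 500.0, 40.0
delay = 2.0 * (rx - true_surface) * math.sin(math.radians(elev)) / C
h = surface_height(delay, elev, rx)
print(h)  # recovers ~12.0 m
```

At low aircraft altitudes the delays are tens of nanoseconds, which is why delay estimation errors translate into large height errors.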

  11. GNSS-Based Space Weather Systems Including COSMIC Ionospheric Measurements (United States)

    Komjathy, Attila; Mandrake, Lukas; Wilson, Brian; Iijima, Byron; Pi, Xiaoqing; Hajj, George; Mannucci, Anthony J.


    The presentation outline includes University Corporation for Atmospheric Research (UCAR) and Jet Propulsion Laboratory (JPL) product comparisons, assimilating ground-based global positioning satellites (GPS) and COSMIC into JPL/University of Southern California (USC) Global Assimilative Ionospheric Model (GAIM), and JPL/USC GAIM validation. The discussion of comparisons examines Abel profiles and calibrated TEC. The JPL/USC GAIM validation uses Arecibo ISR, Jason-2 VTEC, and Abel profiles.

  12. A framework for grouping nanoparticles based on their measurable characteristics (United States)

    Sayes, Christie M; Smith, P Alex; Ivanov, Ivan V


    Background: There is a need to take a broader look at nanotoxicological studies. Eventually, the field will demand that some generalizations be made. To begin to address this issue, we posed a question: are metal colloids on the nanometer-size scale a homogeneous group? In general, most people can agree that the physicochemical properties of nanomaterials can be linked and related to their induced toxicological responses. Methods: The focus of this study was to determine how a set of selected physicochemical properties of five specific metal-based colloidal materials on the nanometer-size scale – silver, copper, nickel, iron, and zinc – could be used as nanodescriptors that facilitate the grouping of these metal-based colloids. Results: The example of the framework pipeline processing provided in this paper shows the utility of specific statistical and pattern recognition techniques in grouping nanoparticles based on experimental data about their physicochemical properties. Interestingly, the results of the analyses suggest that a seemingly homogeneous group of nanoparticles could be separated into sub-groups depending on interdependencies observed in their nanodescriptors. Conclusion: These particles represent an important category of nanomaterials that are currently mass produced. Each has been reputed to induce toxicological and/or cytotoxicological effects. Here, we propose an experimental methodology coupled with mathematical and statistical modeling that can serve as a prototype for a rigorous framework that aids in the ability to group nanomaterials together and to facilitate the subsequent analysis of trends in data based on quantitative modeling of nanoparticle-specific structure–activity relationships. The computational part of the proposed framework is rather general and can be applied to other groups of nanomaterials as well. PMID:24098078
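The grouping idea, standardizing measured descriptors and then clustering on distances between them, can be sketched as below. The descriptor values are entirely hypothetical and the fixed-threshold rule is a crude stand-in for the statistical and pattern recognition techniques used in the paper:

```python
import numpy as np

# Hypothetical descriptor vectors per particle type, chosen only to
# illustrate the pipeline: [diameter_nm, zeta_potential_mV, solubility_index]
data = {
    "silver": [20.0, -30.0, 0.1],
    "copper": [25.0, -28.0, 0.9],
    "nickel": [22.0, -31.0, 0.2],
    "iron":   [80.0,  -5.0, 0.8],
    "zinc":   [85.0,  -4.0, 0.7],
}
names = list(data)
X = np.array([data[n] for n in names])
Z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each descriptor

# crude grouping: join a particle to the first group whose representative
# lies within a fixed standardized distance
dist = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
groups = []
for i, name in enumerate(names):
    for g in groups:
        if dist[i, names.index(g[0])] < 1.5:
            g.append(name)
            break
    else:
        groups.append([name])
print(groups)
```

Even with made-up numbers, the seemingly homogeneous set splits into sub-groups once several descriptors are considered jointly, which is the qualitative point of the framework.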



    N. A. Degotinsky; V. R. Lutsiv


    Subject of Research. We studied a method of estimating the distance to an object from a single defocused photograph of it. The method is based on the analysis of image defocus at the contour points corresponding to the borders of photographed objects. It is assumed that the brightness transition in a non-defocused image of a border can be modeled by an ideal step function, the Heaviside function. Method. The contours corresponding to local maxima of the brightness gradient are detected in the initial ima...

  14. Refractive Index Measurement of Liquids Based on Microstructured Optical Fibers

    Directory of Open Access Journals (Sweden)

    Susana Silva


    Full Text Available This review is focused on microstructured optical fiber sensors developed in recent years for liquid RI sensing. The review is divided into three parts: the first section introduces a general view of the most relevant refractometric sensors reported over the last thirty years. Section 2 discusses several microstructured optical fiber designs, namely suspended-core fiber, photonic crystal fiber, large-core air-clad photonic crystal fiber, and others; these are divided into two main groups, the interferometric-based and the resonance-based configurations. The sensing methods rely either on full/selective filling of the microstructured fiber air holes with a liquid analyte or on simply immersing the sensing fiber into the liquid analyte. The sensitivities and resolutions are tabulated at the end of this section, followed by a brief discussion of the obtained results. The last section concludes with some remarks about the microstructured fiber-based configurations developed for RI sensing and their potential for future applications.

  15. Optimizing laboratory-based radon flux measurements for sediments. (United States)

    Chanyotha, Supitcha; Kranrod, Chutima; Kritsananuwat, Rawiwan; Lane-Smith, Derek; Burnett, William C


    Radon flux via diffusion from sediments and other materials may be determined in the laboratory by circulating air through the sample and a radon detector in a closed loop. However, this approach is complicated by the necessity of determining the total air volume in the system and accounting for any small air leaks that can arise when using extended measurement periods. We designed a simple open-loop configuration that includes a measured mass of wet sediment and water inside a gas-tight reaction flask connected to a drying system and a radon-in-air analyzer. Ambient air flows through two charcoal columns before entering the reaction vessel to eliminate incoming radon. After traveling through the reaction flask, the air passes through the drier and the radon analyzer and is then vented. After some time, the radon activity reaches a steady state that depends upon the airflow rate. With this approach, the radon flux via diffusion is simply the product of the steady-state radon activity (Bq/m(3)) multiplied by the airflow rate (mL/min). We demonstrated that this setup could produce good results for materials that produce relatively high radon fluxes. We also show that a modified closed-system approach, including radon removal of the incoming air by charcoal filtration in a bypass, can produce very good results, including for samples with very low emission rates.
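The steady-state flux calculation described above is a one-line computation; a sketch with illustrative numbers (not values from the study) follows:

```python
# Open-loop steady state: emission rate = C_ss (Bq/m^3) * Q (m^3/min).
c_ss_bq_per_m3 = 150.0    # steady-state radon activity (illustrative)
flow_ml_per_min = 500.0   # airflow through the reaction flask (illustrative)

q_m3_per_min = flow_ml_per_min * 1e-6          # mL/min -> m^3/min
emission_bq_per_min = c_ss_bq_per_m3 * q_m3_per_min
print(emission_bq_per_min)  # 0.075 Bq/min emitted by the sample
```

Dividing by the sediment mass or surface area then yields a flux suitable for comparison between samples.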

  16. Automatic anatomical structures location based on dynamic shape measurement (United States)

    Witkowski, Marcin; Rapp, Walter; Sitnik, Robert; Kujawinska, Malgorzata; Vander Sloten, Jos; Haex, Bart; Bogaert, Nico; Heitmann, Kjell


    New image processing methods and active photonics apparatus have made possible the development of relatively inexpensive optical systems for complex shape and object measurements. We present a dynamic 360° scanning method for the analysis of human lower-body biomechanics, with an emphasis on the knee joint. An anatomical structure of high medical interest that can be scanned and analyzed is the patella. Tracking the position and orientation of the patella under dynamic conditions may allow the detection of pathological patella movements and help in diagnosing knee-joint disease. The processed data are obtained from a dynamic laser-triangulation surface measurement system able to capture slow to normal movements with a scan frequency between 15 and 30 Hz. These frequency rates are sufficient to capture the controlled movements used, e.g., for medical examination purposes. The purpose of the work presented is to develop surface analysis methods that may support the diagnosis of motor abilities of the lower limbs. The paper presents the algorithms used to process the acquired lower-limb surface data in order to find the position and orientation of the patella. The implemented algorithms include input data preparation, curvature description methods, knee region discrimination, and calculation of the assumed patella position/orientation. Additionally, a method of 4D (3D + time) medical data visualization is proposed, and some exemplary results are presented.

  17. Measure for Measure: How Proficiency-Based Accountability Systems Affect Inequality in Academic Achievement (United States)

    Jennings, Jennifer; Sohn, Heeju


    How do proficiency-based accountability systems affect inequality in academic achievement? This article reconciles mixed findings in the literature by demonstrating that three factors jointly determine accountability's impact. First, by analyzing student-level data from a large urban school district, we find that when educators face accountability…

  18. Microscopic oxygen imaging based on fluorescein bleaching efficiency measurements

    DEFF Research Database (Denmark)

    Beutler, Martin; Heisterkamp, Ines M.; Piltz, Bastian


    Photobleaching of the fluorophore fluorescein in an aqueous solution is dependent on the oxygen concentration. Therefore, the time-dependent bleaching behavior can be used to measure dissolved oxygen concentrations. The method can be combined with epi-fluorescence microscopy. The molecular...... of fluorescein will fade out faster at low than at high oxygen concentration. Further simulation showed that a simple ratio function of different time-points during a fluorescence decay recorded during photobleaching could be used to describe oxygen concentrations in an aqueous solution. By careful choice of dye...... concentration and excitation light intensity the sensitivity in the oxygen concentration range of interest can be optimized. In the simulations, the estimation of oxygen concentration by the ratio function was very little affected by the pH value in the range of pH 6.5-8.5. Filming the fluorescence decay...
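Assuming for illustration a single-exponential bleaching decay whose rate falls with oxygen concentration, the ratio-function idea can be sketched as follows; the rate constants and time points are hypothetical, not calibration values from the study:

```python
import numpy as np

def bleach_ratio(k_bleach, t1=0.5, t2=5.0, f0=1.0):
    """Ratio of fluorescence at two time points during photobleaching,
    assuming a single-exponential decay F(t) = F0 * exp(-k * t)."""
    return (f0 * np.exp(-k_bleach * t2)) / (f0 * np.exp(-k_bleach * t1))

# toy calibration: lower oxygen -> faster bleaching -> smaller ratio
ratio_low_o2 = bleach_ratio(k_bleach=0.8)
ratio_high_o2 = bleach_ratio(k_bleach=0.2)
print(ratio_low_o2, ratio_high_o2)
```

Because the ratio depends only on the decay shape, it is insensitive to the absolute fluorescence intensity, which is what makes it usable pixel-by-pixel for imaging.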

  19. Measurement of Tank Cooling Airflow Based on Array Sensors (United States)

    Zhou, Hui; Han, Yan; Wang, Jianguo; Zhang, Pizhuang


    Research on the cooling-airflow characteristics of a tank helps optimize the design of the cooling system and is of great importance for improving the performance of armoured vehicles. According to the test requirements of the tank under actual working conditions, we studied and designed the proposed cooling-airflow measurement system. Importantly, the array sensors were assembled without any damage to the tank. In addition, the quantity and locations of the sensors were set according to the national standard, on the premise of avoiding interference with the airway. This paper describes the cooling-airflow signal processing and analyzes the air-pressure distribution at the exhaust port of the tank engine compartment, which is presented simultaneously as a three-dimensional surface graph and a plane graph obtained by a fitting algorithm.

  20. Prediction of propagated wave profiles based on point measurement

    Directory of Open Access Journals (Sweden)

    Sang-Beom Lee


    Full Text Available This study presents the prediction of propagated wave profiles using wave information at a fixed point, which can be fixed in either space or time. Wave information based on the linear wave theory can be expressed by a Fredholm integral equation of the first kind. The discretized matrix equation is usually an ill-conditioned system, so Tikhonov regularization was applied to overcome its instability. The regularization parameter is calculated using the L-curve method. The numerical results are compared with the experimental results, and the analysis shows that the Tikhonov regularization method is useful.

  1. Estimation of incidences of infectious diseases based on antibody measurements

    DEFF Research Database (Denmark)

    Simonsen, J; Mølbak, K; Falkenhorst, G


    Owing to under-ascertainment it is difficult if not impossible to determine the incidence of a given disease based on cases notified to routine public health surveillance. This is especially true for diseases that are often present in mild forms as for example diarrhoea caused by foodborne...... it was possible to estimate the time since last infection for each individual in the cross-sectional study. These time estimates were then converted into incidence estimates. Information about the incidence of Salmonella infections in Denmark was obtained by using blood samples from 1780 persons. The estimated...

  2. Utility of the Canadian Occupational Performance Measure as an admission and outcome measure in interdisciplinary community-based geriatric rehabilitation

    DEFF Research Database (Denmark)

    Larsen, Anette Enemark; Carlsson, Gunilla


    In a community-based geriatric rehabilitation project, the Canadian Occupational Performance Measure (COPM) was used to develop a coordinated, interdisciplinary, and client-centred approach focusing on occupational performance. The purpose of this study was to evaluate the utility of the COPM as an admission and outcome measure in an interdisciplinary geriatric rehabilitation context in Denmark. Eighteen occupational and physiotherapists administered the COPM among elderly citizens. Of 185 citizens referred to the study, 152 were admitted to rehabilitation based on health indices, and 124 completed...

  3. Requirements Content Goodness and Complexity Measurement Based On NP Chunks

    Directory of Open Access Journals (Sweden)

    Chao Y. Din


    Full Text Available In a typical software development project, a requirements document summarizes the results of the requirements analysis and becomes the basis for subsequent software development. In many cases, the quality of the requirements documents dictates the success of the software development. The need for determining the quality of requirements documents is particularly acute when the target applications are large, complicated, and mission critical. The purpose of this research is to develop quality indicators for requirements statements in a requirements document. To achieve this goal, the goodness properties of the requirements statements are adopted to represent their quality. A suite of complexity metrics for requirements statements is proposed as the quality indicators, developed from research on noun phrase (NP) chunks. A two-phase empirical case study is performed to evaluate the usage of the proposed metrics. By focusing on the complexity metrics based on NP chunks, the research aided the development of complexity indicators of low-quality requirements documents.

  4. The Technical Adequacy of Curriculum-Based and Rating-Based Measures of Written Expression for Elementary School Students (United States)

    Gansle, Kristin A.; VanDerHeyden, Amanda M.; Noell, George H.; Resetar, Jennifer L.; Williams, Kashunda L.


    Five hundred thirty-eight elementary school students participated in a study designed to examine the technical characteristics of curriculum-based measures (CBMs) for the assessment of writing. In addition, the study investigated rating-based measures of writing using the Six Trait model, an assessment instrument and writing program in use in many…

  5. Prediction of propagated wave profiles based on point measurement

    Directory of Open Access Journals (Sweden)

    Lee Sang-Beom


    Full Text Available This study presents the prediction of propagated wave profiles using the wave information at a fixed point. The point can be fixed in either space or time. Wave information based on the linear wave theory can be expressed by a Fredholm integral equation of the first kind. The discretized matrix equation is usually an ill-conditioned system. Tikhonov regularization was applied to the ill-conditioned system to overcome its instability. The regularization parameter is calculated by using the L-curve method. The numerical results are compared with the experimental results. The analysis of the numerical computation shows that the Tikhonov regularization method is useful.
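The regularization step described above can be sketched in a few lines; the kernel below is a hypothetical smooth (hence ill-conditioned) stand-in for the discretized Fredholm operator, not the paper's actual wave-theory matrix:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized solution: argmin ||A x - b||^2 + lam^2 ||x||^2,
    computed via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

# Hypothetical smooth kernel: discretizing it yields an ill-conditioned matrix.
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-5.0 * np.abs(t[:, None] - t[None, :])) / n
x_true = np.sin(2.0 * np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

# Sweeping lam traces out the L-curve (log residual norm vs. log solution
# norm); the corner of that curve gives the regularization parameter.
for lam in (1e-6, 1e-3, 1e-1):
    x = tikhonov_solve(A, b, lam)
    print(lam, np.linalg.norm(A @ x - b), np.linalg.norm(x))
```

Larger lam shrinks the solution norm at the cost of a larger residual, which is exactly the trade-off the L-curve visualizes.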

  6. Ranking the Online Documents Based on Relative Credibility Measures

    Directory of Open Access Journals (Sweden)

    Ahmad Dahlan


    Full Text Available Information searching is the most popular activity on the Internet. Usually the search engine provides the search results ranked by relevance. However, for a purpose that concerns information credibility, particularly citing information for scientific works, another approach to ranking the search engine results is required. This paper presents a study on developing a new ranking method based on the credibility of information. The method is built upon two well-known algorithms, PageRank and Citation Analysis. The experiment used the Spearman Rank Correlation Coefficient to compare the proposed rank (generated by the method) with the standard rank (generated manually by a group of experts) and showed that the average Spearman coefficient satisfied 0 < rS < critical value. This means a positive correlation exists but is not statistically significant. Hence the proposed rank does not yet match the standard, although its performance could be improved.
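For reference, the Spearman statistic the experiment relies on can be computed directly; the two rankings below are illustrative placeholders, not the paper's data:

```python
import numpy as np

def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for two rankings without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    a = np.asarray(rank_a, dtype=float)
    b = np.asarray(rank_b, dtype=float)
    n = len(a)
    d = a - b
    return float(1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1)))

# Hypothetical ranks of five documents: method rank vs. expert rank.
method_rank = [1, 2, 3, 4, 5]
expert_rank = [2, 1, 4, 3, 5]
rho = spearman_rho(method_rank, expert_rank)
print(rho)  # 0.8

# As in the abstract, 0 < rho < critical value means a positive but
# non-significant correlation (for n = 5 the tabulated critical value
# at the 5% level is about 0.9).
```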

  8. A Utility-Based Approach to Some Information Measures

    Directory of Open Access Journals (Sweden)

    Sven Sandow


    Full Text Available We review a decision theoretic, i.e., utility-based, motivation for entropy and Kullback-Leibler relative entropy, the natural generalizations that follow, and various properties of these generalized quantities. We then consider these generalized quantities in an easily interpreted special case. We show that the resulting quantities share many of the properties of entropy and relative entropy, such as the data processing inequality and the second law of thermodynamics. We formulate an important statistical learning problem – probability estimation – in terms of a generalized relative entropy. The solution of this problem reflects general risk preferences via the utility function; moreover, the solution is optimal in a sense of robust absolute performance.
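The two base quantities the review generalizes can be stated concretely; this is the standard (utility-free) definition, a sketch rather than the paper's generalized form:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i, in nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 log 0 = 0 by convention
    return float(-np.sum(p * np.log(p)))

def kl_divergence(p, q):
    """Kullback-Leibler relative entropy D(p||q) = sum_i p_i log(p_i/q_i)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

p = [0.5, 0.5]
q = [0.9, 0.1]
print(entropy(p))           # log 2, about 0.693
print(kl_divergence(p, q))  # > 0; zero only when p == q (Gibbs' inequality)
```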

  9. Magnetic Field Measurements Based on Terfenol Coated Photonic Crystal Fibers (United States)

    Quintero, Sully M. M.; Martelli, Cicero; Braga, Arthur M. B.; Valente, Luiz C. G.; Kato, Carla C.


    A magnetic field sensor based on the integration of a high-birefringence photonic crystal fiber and a composite material made of Terfenol particles and an epoxy resin is proposed. An in-fiber modal interferometer is assembled by evenly exciting both eigenmodes of the HiBi fiber. Changes in the cavity length as well as the effective refractive index are induced by exposing the sensor head to magnetic fields. The magnetic field sensor has a sensitivity of 0.006 nm/mT over a range from 0 to 300 mT with a resolution of about ±1 mT. A fiber Bragg grating magnetic field sensor is also fabricated and employed to characterize the response of the Terfenol composite to the magnetic field. PMID:22247655

  11. Glucose Monitoring System Based on Osmotic Pressure Measurements

    Directory of Open Access Journals (Sweden)

    Alexandra LEAL


    Full Text Available This paper presents the design and development of a prototype sensor unit for implementation in a long-term glucose monitoring system suitable for estimating glucose levels in people suffering from diabetes mellitus. The system utilizes osmotic pressure as the sensing mechanism and consists of a sensor prototype that is integrated with a pre-amplifier and data acquisition unit for both data recording and processing. The sensor prototype is based on an embedded silicon absolute pressure transducer and a semipermeable nanoporous membrane enclosed in the sensor housing. The glucose monitoring system facilitates the integration of a low-power microcontroller combined with a wireless inductively powered communication link. Experimental verification has shown that the system is capable of tracking osmotic pressure changes using albumin as a model compound, thereby providing a proof of concept for novel long-term tracking of blood glucose from remote sensor nodes.
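As background (not stated in the abstract), osmotic sensing of this kind rests on the van 't Hoff relation for dilute solutions; the glucose concentration below is an illustrative value:

```python
def osmotic_pressure_pa(conc_mol_per_m3, temperature_k, vant_hoff_i=1.0):
    """van 't Hoff relation for dilute solutions: pi = i * c * R * T,
    with c in mol/m^3 (1 mmol/L = 1 mol/m^3) and pi in pascals."""
    R = 8.314  # universal gas constant, J/(mol*K)
    return vant_hoff_i * conc_mol_per_m3 * R * temperature_k

# Illustrative: 5 mmol/L glucose (a normal blood level) at body temperature.
pi = osmotic_pressure_pa(5.0, 310.0)
print(pi)  # about 12.9 kPa
```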

  12. Evidence-based evaluation of therapeutic measures for sleep disorders

    Directory of Open Access Journals (Sweden)

    LI Juan


    Full Text Available Objective: To evaluate the therapeutic efficacy and side effects of various treatments for sleep disorders, in order to provide the best regimen for evidence-based treatment. Methods: "Sleep disorder", "insomnia", "restless legs syndrome (RLS)", "obstructive sleep apnea (OSA)", "narcolepsy", "REM behaviour disorder (RBD)" and "treatment or therapy" were used as retrieval words. The Cochrane Library, MEDLINE and ScienceDirect were searched, supplemented by manual searching. Related clinical guidelines, systematic reviews, randomized controlled clinical trials, retrospective case analyses, case-observation studies and reviews were collected and evaluated with the Jadad Scale. Results: Forty related articles were selected, as follows: 6 clinical guidelines, 12 systematic reviews, 5 randomized controlled trials, 2 retrospective case analyses, 1 case-observation study and 14 reviews. Thirty-three were of high quality and 7 of low quality by Jadad score. According to the evaluation of therapeutic efficacy and side effects of the various therapies, the following is suggested: (1) Insomnia is the most common sleep disorder; its treatment mainly includes drug therapy and cognitive behavioral treatment (CBT); the two therapies have their own advantages and disadvantages, and the combination of drugs and CBT is the best treatment plan. (2) The first-line treatment of primary RLS is dopamine agonists and anti-seizure drugs, whereas the treatment of secondary RLS is mainly etiologic. (3) The main treatments of OSA are nasal continuous positive airway pressure (nCPAP), oral orthotics and surgery, with nCPAP as the first-line treatment. (4) The medication for narcolepsy is mainly modafinil, sodium hydroxybutyrate and antidepressants, and the specific choice should accord with clinical classifications. (5) The main treatments of RBD include general measures such as avoiding triggers, ensuring the

  13. Efficient iris texture analysis method based on Gabor ordinal measures (United States)

    Tajouri, Imen; Aydi, Walid; Ghorbel, Ahmed; Masmoudi, Nouri


    With the remarkably increasing interest in the security dimension, iris recognition is considered one of the most versatile techniques for biometric identification and authentication, mainly because every individual's iris texture is unique. An efficient approach to the feature extraction process is proposed. First, the iris zigzag "collarette" is extracted from the rest of the image by means of the circular Hough transform, as it includes the most significant regions of the iris texture. Second, the linear Hough transform is used for eyelid detection while the median filter is applied for eyelash removal. Then, a technique combining the richness of Gabor features and the compactness of ordinal measures is implemented for feature extraction, so that a discriminative feature representation for every individual can be achieved. Subsequently, the modified Hamming distance is used for the matching process. The proposed procedure proves reliable compared to some state-of-the-art approaches, with recognition rates of 99.98%, 98.12%, and 95.02% on the CASIAV1.0, CASIAV3.0, and IIT Delhi V1 iris databases, respectively.
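The matching step can be illustrated with the standard normalized Hamming distance on binary iris codes; the abstract's "modified" variant is not specified, so this is the plain form with an optional occlusion mask, and the 8-bit codes are toy data:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask=None):
    """Normalized Hamming distance between two binary iris codes.
    The mask marks valid bits (bits occluded by eyelids or eyelashes are
    excluded); the distance is the fraction of disagreeing valid bits."""
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    valid = np.ones_like(a) if mask is None else np.asarray(mask, dtype=bool)
    n = int(valid.sum())
    if n == 0:
        return 1.0  # no valid bits: treat as a complete mismatch
    return float(np.count_nonzero((a ^ b) & valid) / n)

# Toy 8-bit codes: 2 of 8 bits differ, so the distance is 0.25.
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 1, 1, 1, 0, 1, 1, 0]
print(hamming_distance(a, b))  # 0.25
```

A match is typically declared when the distance falls below a threshold tuned on the database at hand.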

  14. Intelligent Tires Based on Measurement of Tire Deformation (United States)

    Matsuzaki, Ryosuke; Todoroki, Akira

    From a traffic safety point of view, there is an urgent need for intelligent tires as a warning system for road conditions, for optimized braking control on poor road surfaces, and as a tire fault detection system. Intelligent tires, equipped with sensors for monitoring applied strain, are effective in improving reliability and control systems such as anti-lock braking systems (ABSs). In previous studies, we developed a direct tire deformation (strain) measurement system with sufficiently low stiffness and high elongation for practical use, and a battery-free wireless communication system between tires and vehicle. The present study investigates the application of strain data to an optimized braking control and road condition warning system. The relationships between strain sensor outputs and tire mechanical parameters, including braking torque, effective radius and contact patch length, are calculated using finite element analysis. Finally, we suggest the possibility of optimized braking control and road condition warning systems: optimized braking control can be achieved by keeping the slip ratio constant, and the road condition warning would be actuated if the recorded friction coefficient at a certain slip ratio is lower than a ‘safe' reference value.
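A minimal sketch of the quantities involved, assuming the usual braking-slip convention s = (v - omega R)/v and a hypothetical ‘safe' friction threshold (the paper's actual controller is not specified here):

```python
def slip_ratio(wheel_speed_rad_s, effective_radius_m, vehicle_speed_m_s):
    """Longitudinal slip ratio under braking: s = (v - omega * R) / v.
    s = 0 means free rolling; s = 1 means a fully locked wheel."""
    if vehicle_speed_m_s <= 0.0:
        return 0.0  # slip is undefined at standstill
    return (vehicle_speed_m_s
            - wheel_speed_rad_s * effective_radius_m) / vehicle_speed_m_s

def road_warning(mu_measured, mu_reference_safe):
    """Warn when the friction coefficient recorded at a given slip ratio
    falls below a 'safe' reference value."""
    return mu_measured < mu_reference_safe

# Vehicle at 20 m/s, wheel at 60 rad/s with 0.30 m effective radius:
s = slip_ratio(60.0, 0.30, 20.0)
print(s)                        # 0.1, i.e. 10% braking slip
print(road_warning(0.35, 0.7))  # True: low-friction road warning
```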


    Directory of Open Access Journals (Sweden)

    G. J. Grenzdörffer


    Full Text Available BRDF is a common problem in remote sensing and also in oblique photogrammetry. Common approaches to BRDF measurement with a field goniometer are costly and rather cumbersome. UAVs may offer an interesting alternative by using a special flight pattern of oblique and converging images. The main part of this paper is the description of a photogrammetric workflow to determine the anisotropic reflection properties of a given surface. Due to the relatively low flying heights, standard procedures from close-range photogrammetry were adopted for outdoor usage. The photogrammetric processing delivered automatic and highly accurate orientation information with the aid of coded targets. The interior orientation of the consumer-grade camera is more or less stable. The radiometrically corrected oblique images are converted into orthophotos, from which the azimuth and elevation angle of every point may be computed. The calculated anisotropy of a winter wheat plot is shown. A system of four diagonally-looking cameras (Four Vision) and an additional nadir-looking camera is under development; the multi-camera system is especially designed for a micro-UAV with a payload of at least 1 kg. The system is composed of five industrial digital frame cameras (1.3 Mpix CCD chips, 15 fps) with fixed lenses. Special problems with the construction of a lightweight housing for the multi-camera solution are also covered in the paper.

  16. Gamma-based Measurement of ``Dark Mix'' in ICF Capsules (United States)

    Meaney, Kevin; Herrmann, H.; Kim, Yh; Zylstra, Ab; Geppert-Kleinrath, H.; Hoffman, Nm; Yi, As


    Mix of capsule ablator material into the fusion fuel is a source of yield degradation in inertial confinement fusion. Jetting or chunk mix, such as the elusive ``meteors'' that have been observed at NIF, can be difficult to diagnose because the chunks may not get hot enough to excite dopant x-rays, nor atomized enough for separated reactants to fuse. Using the gamma reaction history (GRH-6m) diagnostic, (n,n') gammas from a strategically placed carbon layer within a beryllium capsule give a measure of the time-resolved areal density of this carbon during the burn and hence an indication of the compression and spatial distribution of this layer. As the carbon moves further from the fuel, the areal density nominally decreases as 1/r{sup 2} for unablated material. However, mix of this carbon into the cold dense fuel layer or hot spot will have a significant effect on the carbon gamma signal. Different types of mix (e.g., jetting, Rayleigh-Taylor fingers, diffusive, ...) as well as features that can seed this mix (e.g., tents, fill, ...) will be discussed along with their expected effect on the carbon signal. The design for upcoming OMEGA shots, which will demonstrate this technique, and the potential for use on the NIF will be presented.

  17. Grinding process monitoring based on electromechanical impedance measurements (United States)

    Marchi, Marcelo; Guimarães Baptista, Fabricio; de Aguiar, Paulo Roberto; Bianchi, Eduardo Carlos


    Grinding is considered one of the last processes in precision parts manufacturing, which makes it indispensable to have a reliable monitoring system to evaluate workpiece surface integrity. This paper proposes the use of the electromechanical impedance (EMI) method to monitor the surface grinding operation in real time, particularly the surface integrity of the ground workpiece. The EMI method stands out for its simplicity and for using low-cost components such as PZT (lead zirconate titanate) piezoelectric transducers. In order to assess the feasibility of applying the EMI method to the grinding process, experimental tests were performed on a surface grinder using a CBN grinding wheel and a SAE 1020 steel workpiece, with PZT transducers mounted on the workpiece and its holder. During the grinding process, the electrical impedance of the transducers was measured and damage indices conventionally used in the EMI method were calculated and compared with workpiece wear, indicating the surface condition of the workpiece. The experimental results indicate that the EMI method can be an efficient and cost-effective alternative for monitoring precision workpieces during the surface grinding process.
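One damage index conventionally used with the EMI method is the root-mean-square deviation (RMSD) between a baseline and a measured impedance signature; the signatures below are synthetic stand-ins for real PZT measurements:

```python
import numpy as np

def rmsd_index(z_baseline, z_measured):
    """RMSD damage index over a frequency band:
    sqrt( sum (Z_m - Z_b)^2 / sum Z_b^2 ). Zero for identical signatures;
    grows as the measured signature departs from the baseline."""
    zb = np.asarray(z_baseline, dtype=float)
    zm = np.asarray(z_measured, dtype=float)
    return float(np.sqrt(np.sum((zm - zb) ** 2) / np.sum(zb ** 2)))

# Synthetic real-part impedance signatures of a PZT transducer.
freqs = np.linspace(20e3, 120e3, 500)       # Hz
baseline = 100.0 + 10.0 * np.sin(freqs / 1e4)   # pristine workpiece
measured = baseline + 2.0 * np.random.default_rng(1).standard_normal(500)

print(rmsd_index(baseline, baseline))  # 0.0 for identical signatures
print(rmsd_index(baseline, measured))  # > 0, tracking surface change
```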

  18. Detecting concealed information in less than a second: response latency-based measures

    NARCIS (Netherlands)

    Verschuere, B.; de Houwer, J.; Verschuere, B.; Ben-Shakhar, G.; Meijer, E.


    Concealed information can be accurately assessed with physiological measures. To overcome the practical limitations of physiological measures, an assessment using response latencies has been proposed. At first sight, research findings on response latency-based concealed information tests seem

  19. Potentiometric measurement of polymer-membrane electrodes based on lanthanum

    Energy Technology Data Exchange (ETDEWEB)

    Saefurohman, Asep; Buchari; Noviandri, Indra [Department of Chemistry, Bandung Institute of Technology (Indonesia)]; Syoni [Department of Metallurgy Engineering, Bandung Institute of Technology (Indonesia)]


    Inductively coupled plasma atomic emission spectroscopy (ICP-AES) is considered the standard method for quantitative analysis of rare earth elements, with high accuracy and detection limits on the order of ppm. However, the instrument is expensive and analyses are costly to perform. In this study, a selective electrode for the potentiometric determination of rare earth ions was made and characterized. The membrane fabrication technique studied is based on immersion (liquid impregnated membrane) in PTFE of 0.5 pore size. Tributyl phosphate (TBP) and bis(2-ethylhexyl) hydrogen phosphate were used as ionophores; there is no previous report of TBP being used as an ionophore in a lanthanum-based polymeric membrane. Parameters that affect the performance of the membrane electrode, such as membrane composition, membrane thickness, and type of membrane material, were studied in this research. An ion-selective electrode (ISE) for lanthanum (La) was fabricated by impregnating the membrane with TBP in kerosene solution and showed good performance as an ISE-La. FTIR spectra of the 0.5 pore size PTFE impregnated with TBP and of blank PTFE differed at the peaks 1257 cm{sup −1}, 1031 cm{sup −1} and 794.7 cm{sup −1}, corresponding to P=O stretching and P−O−C stretching of the −O−P=O group. The shift of the P=O stretching wavenumber of the −O−P=O group in the PTFE-TBP mixture to the peak at 1230 cm{sup −1} indicated that there is no bonding interaction between hydroxyl groups of the molecules and the phosphoryl group of TBP (R{sub 3}P=O). The membrane had stable responses in the pH range between 1 and 9. Good responses were obtained using a 10{sup −3} M La(III) internal solution, which produced relatively high potentials. The ISE-La showed relatively good performance: a response time of 29 ± 4.5 seconds and a usable lifetime of 50 days. The linear range was between 10{sup −5} and 10{sup −1} M.

  20. A New Instantaneous Frequency Measure Based on The Stockwell Transform (United States)

    yedlin, M. J.; Ben-Horrin, Y.; Fraser, J. D.


    We propose the use of a new transform, the Stockwell transform [1], as a means of creating time-frequency maps and applying them to distinguish blasts from earthquakes. The Stockwell transform can be considered a variant of the continuous wavelet transform that preserves the absolute phase; it employs a complex Morlet mother wavelet. The novelty of this transform lies in its resolution properties: high frequencies in the candidate signal are well resolved in time but poorly resolved in frequency, while the converse is true for low-frequency signal components. The goal of this research is to obtain the instantaneous frequency as a function of time for both the earthquakes and the blasts. Two methods will be compared. In the first method, we will compute the analytic signal, the envelope and the instantaneous phase as a function of time [2]; the instantaneous phase derivative will yield the instantaneous angular frequency. The second method will be based on time-frequency analysis using the Stockwell transform, computed in non-redundant fashion using a dyadic representation [3]. For each time point, the frequency centroid will be computed -- a representation for the most likely frequency at that time. A detailed comparison will be presented for both approaches to the computation of the instantaneous frequency. An advantage of the Stockwell approach is that no differentiation is applied; the Hilbert transform method can be less sensitive to edge effects. The aim is to see whether the new Stockwell-based method could be used as a discriminant between earthquakes and blasts. References: [1] Stockwell, R.G., Mansinha, L. and Lowe, R.P., "Localization of the complex spectrum: the S transform", IEEE Trans. Signal Processing, vol. 44, no. 4, pp. 998-1001 (1996). [2] Taner, M.T., Koehler, F., "Complex seismic trace analysis", Geophysics, vol. 44, issue 6, pp. 1041-1063 (1979). [3] Brown, R
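The first of the two methods (analytic signal via the Hilbert transform, then the phase derivative) can be sketched as follows; the 10 Hz test tone is illustrative, not the seismic data of the study:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Analytic signal via the Hilbert transform, then the derivative of
    the unwrapped instantaneous phase, converted to Hz. Returns one fewer
    sample than the input because of the finite difference."""
    analytic = hilbert(x)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2.0 * np.pi)

# 10 Hz test tone sampled at 1 kHz; the median is used because the
# finite-difference estimate degrades near the record edges.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 10.0 * t)
f_inst = instantaneous_frequency(x, fs)
print(np.median(f_inst))  # close to 10 Hz
```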