High Count Rate Single Photon Counting Detector Array Project
National Aeronautics and Space Administration — An optical communications receiver requires efficient and high-rate photon-counting capability so that the information from every photon received at the aperture...
Count rate performance of a silicon-strip detector for photon-counting spectral CT
Liu, X.; Grönberg, F.; Sjölin, M.; Karlsson, S.; Danielsson, M.
2016-08-01
A silicon-strip detector is developed for spectral computed tomography. The detector operates in photon-counting mode and allows pulse-height discrimination with 8 adjustable energy bins. In this work, we evaluate the count-rate performance of the detector in a clinical CT environment. The output counts of the detector are measured for x-ray tube currents up to 500 mA at 120 kV tube voltage, which produces a maximum photon flux of 485 Mphotons/s/mm^2 for the unattenuated beam. The corresponding maximum count-rate loss of the detector is around 30% and there are no saturation effects. A near-linear relationship between the input and output count rates can be observed up to 90 Mcps/mm^2, at which point only 3% of the input counts are lost. This means that the loss in the diagnostically relevant count-rate region is negligible. A semi-nonparalyzable dead-time model is used to describe the count-rate performance of the detector, which shows good agreement with the measured data. The nonparalyzable dead time τ_n for 150 evaluated detector elements is estimated to be 20.2 ± 5.2 ns.
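For an ideal nonparalyzable counter, the model above reduces to m = n/(1 + nτ), with n the input rate, m the recorded rate, and τ the dead time. A minimal sketch of that relationship, using the abstract's fitted τ ≈ 20.2 ns but illustrative per-element input rates (not the paper's per-mm² fluxes):

```python
def recorded_rate(n_cps, tau_s):
    """Output rate of an ideal nonparalyzable counter, m = n / (1 + n*tau)."""
    return n_cps / (1.0 + n_cps * tau_s)

def fractional_loss(n_cps, tau_s):
    """Fraction of input counts lost to dead time."""
    return 1.0 - recorded_rate(n_cps, tau_s) / n_cps

tau = 20.2e-9  # s, the fitted nonparalyzable dead time from the abstract
for n in (1e6, 1e7):  # illustrative per-element input rates, cps
    print(f"n = {n:.0e} cps -> loss = {100 * fractional_loss(n, tau):.1f}%")
```

At these assumed per-element rates the model predicts losses of roughly 2% and 17%, illustrating how quickly dead-time losses grow once nτ is no longer small.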
High Count Rate Electron Probe Microanalysis
Geller, Joseph D.; Herrington, Charles
2002-01-01
Reducing the measurement uncertainty of quantitative analyses made using electron probe microanalyzers (EPMA) requires a careful study of the individual uncertainties from each definable step of the measurement. Those steps include measuring the incident electron beam current and voltage, knowing the angle between the electron beam and the sample (takeoff angle), collecting the emitted x rays from the sample, comparing the emitted x-ray flux to known standards (to determine the k-ratio) and transformation of the k-ratio to concentration using algorithms which include, as a minimum, the atomic number, absorption, and fluorescence corrections. This paper discusses the collection and counting of the emitted x rays, which are diffracted into the gas-flow or sealed proportional x-ray detectors. The relative uncertainty in the number of collected x rays decreases as the number of counts increases. The uncertainty of the collected signal is fully described by Poisson statistics. Increasing the number of x rays collected involves either counting longer or at a higher counting rate. Counting longer means the analysis time increases and may become excessive to get to the desired uncertainty. Instrument drift also becomes an issue. Counting at higher rates has its limitations, which are a function of the detector physics and the detecting electronics. Since the beginning of EPMA analysis, analog electronics have been used to amplify and discriminate the x-ray induced ionizations within the proportional counter. This paper will discuss the use of digital electronics for this purpose. These electronics are similar to those used for energy dispersive analysis of x rays with either Si(Li) or Ge(Li) detectors except that the shaping time constants are much smaller. PMID:27446749
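The Poisson-statistics point above can be made concrete: the relative standard uncertainty of N accumulated counts is 1/√N, so halving the uncertainty requires four times the counts, whether obtained by counting longer or at a higher rate. A minimal sketch (the numbers are illustrative, not from the paper):

```python
import math

def relative_uncertainty(counts):
    """Relative standard uncertainty of a Poisson count, 1/sqrt(N)."""
    return 1.0 / math.sqrt(counts)

def counts_needed(target):
    """Smallest count N whose relative uncertainty is at most `target`."""
    return math.ceil(1.0 / target ** 2)

print(relative_uncertainty(10_000))  # 1% relative uncertainty at 10^4 counts
print(counts_needed(0.001))          # ~10^6 counts needed to reach 0.1%
```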
Characterization of the count rate performance of modern gamma cameras
Silosky, M.; Johnson, V.; Beasley, C.; Cheenu Kappadath, S.
2013-01-01
No significant difference was observed between the estimates of τ using the decay or dual source methods under identical experimental conditions (p = 0.13). Estimates of τ increased as a power-law function with decreasing ratio of counts in the photopeak to the total counts. Also, estimates of τ increased linearly as spectral effective energy decreased. No significant difference was observed between the dependences of τ on energy window definition or incident spectrum between the decay and dual source methods. Estimates of τ using the dual source method varied quadratically with the ratio of the single-source to combined-source activities and linearly with total activity. Conclusions: The CRP curves for three modern gamma camera models have been characterized, demonstrating unexpected behavior that necessitates the determination of both τ and the maximum count rate to fully characterize the CRP curve. τ was estimated under a variety of experimental conditions, based on which guidelines for the performance of CRP testing in a clinical setting have been proposed. PMID:23464339
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were as little as half that (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Counting losses due to saturation effects of scintillation counters at high count rates
Hashimoto, K
1999-01-01
The counting statistics of a scintillation counter, with a preamplifier saturated by an overloading input, are investigated. First, the formulae for the variance and the mean number of counts, accumulated within a given gating time, are derived by considering counting-loss effects originating from the saturation and a finite resolving time of the electronic circuit. Numerical examples based on the formulae indicate that the saturation makes a positive contribution to the variance-to-mean ratio and that the contribution increases with count rate. Next the ratios are measured under high count rates when the preamplifier saturation can be observed. By fitting the present formula to the measured data, the counting-loss parameters can be evaluated. Corrections based on the parameters are made for various count rates measured in a nuclear reactor. As a result of the corrections, the linearity between count rate and reactor power can be restored.
High counting rate resistive-plate chamber
Peskov, V.; Anderson, D. F.; Kwan, S.
1993-05-01
Parallel-plate avalanche chambers (PPAC) are widely used in physics experiments because they are fast (less than 1 ns) and have a very simple construction: just two parallel metallic plates or mesh electrodes. Depending on the applied voltage they may work either in spark mode or avalanche mode. The advantage of the spark mode of operation is a large signal amplitude from the chamber; the disadvantage is a large dead time (ms) for the entire chamber after an event. The main advantage of the avalanche mode is a high rate capability of about 10^5 counts/mm^2. A resistive-plate chamber (RPC) is similar to the PPAC in construction except that one or both of the electrodes are made from high-resistivity (>10^10 Ω·cm) materials. In practice RPCs are usually used in the spark mode. Resistive electrodes are charged by sparks, locally reducing the actual electric field in the gap. The size of the charged surface is about 10 mm^2, leaving the rest of the detector unaffected. Therefore, the rate capability of such detectors in the spark mode is considerably higher than that of conventional spark counters. Among the different glasses tested, the best results were obtained with electron-type conductive glasses, which obey Ohm's law. Most of the work with such glasses was done with high-pressure (10 atm) parallel-plate chambers for time-of-flight measurements. Resistive glasses have been expensive and produced only in small quantities. Now resistive glasses are commercially available, although they are still expensive in small-scale production. From the positive experience of different groups working with resistive glasses, it was decided to revisit the old idea of using this glass for the RPC. This work has investigated the possibility of using the RPC at 1 atm and in the avalanche mode. This has several advantages: simplicity of construction, high rate capability, low-voltage operation, and the ability to work with non-flammable gases.
Count rate performance study of the Lausanne ClearPET scanner demonstrator
Rey, M. [LPHE, Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland)]. E-mail: martin.rey@epfl.ch; Jan, S. [Service Hospitalier Frederic Joliot, CEA, F-91401 Orsay (France); Vieira, J.-M. [LPHE, Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Mosset, J.-B. [LPHE, Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Krieguer, M. [IIHE, Vrije Universiteit Brussel, B-1050 Brussels (Belgium); Comtat, C. [Service Hospitalier Frederic Joliot, CEA, F-91401 Orsay (France); Morel, C. [CPPM, CNRS-IN2P3, Universite de la Mediterranee Aix-Marseille II, F-13288 Marseille (France)
2007-02-01
This paper presents the count rate measurements obtained with the Lausanne partial-ring ClearPET scanner demonstrator and compares them against GATE Monte Carlo simulations. For the present detector setup, a maximum single event count rate of 1.1 Mcps is measured for a 250-750 keV energy window. This corresponds to a coincidence count rate of approximately 22 kcps. Good agreement is observed between measured and simulated data. Count rate performance, including Noise Equivalent Count (NEC) curves, is determined and extrapolated for a full-ring ClearPET design using GATE Monte Carlo simulations. For a full-ring design with three rings of detector modules, the NEC peaks at about 70 kcps for 20 MBq.
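A sketch of the NEC figure of merit mentioned above, using the conventional definition NEC = T²/(T + S + kR), with T, S, R the true, scattered, and random coincidence rates and k = 1 or 2 depending on how randoms are estimated. The input rates below are assumed values for illustration, not the ClearPET measurements:

```python
def nec(trues, scatters, randoms, k=2):
    """Noise Equivalent Count rate, NEC = T^2 / (T + S + k*R)."""
    total = trues + scatters + k * randoms
    return trues ** 2 / total if total > 0 else 0.0

# Assumed split of a ~22 kcps coincidence rate into components (illustrative).
print(f"NEC = {nec(15e3, 4e3, 3e3):.0f} cps")
```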
Relationship between salivary flow rates and Candida albicans counts.
Navazesh, M; Wood, G J; Brightman, V J
1995-09-01
Seventy-one persons (48 women, 23 men; mean age, 51.76 years) were evaluated for salivary flow rates and Candida albicans counts. Each person was seen on three different occasions. Samples of unstimulated whole, chewing-stimulated whole, acid-stimulated parotid, and candy-stimulated parotid saliva were collected under standardized conditions. An oral rinse was also obtained and evaluated for Candida albicans counts. Unstimulated and chewing-stimulated whole flow rates were negatively and significantly correlated with Candida albicans counts of ≥ 500. Differences in stimulated parotid flow rates were not significant among different levels of Candida counts. The results of this study reveal that whole saliva is a better predictor than parotid saliva in identification of persons with high Candida albicans counts.
Preset time count rate meter using adaptive digital signal processing
Žigić Aleksandar D.
2005-01-01
Two methods were developed to improve classical preset-time count rate meters by using adaptable signal-processing tools. An optimized detection algorithm that senses changes in the mean count rate was implemented in both methods. Three low-pass filters of various structures, with adaptable parameters that control the mean count rate error by suppressing fluctuations in a controllable way, were considered and one of them was implemented in both methods. An adaptation algorithm for preset time interval calculation, executed after the low-pass filter, was devised and implemented in the first method; it makes it possible to obtain shorter preset time intervals at higher stationary mean count rates. An adaptation algorithm for preset time interval calculation executed before the low-pass filter was devised and implemented in the second method; it enables sensing of a rapid change of the mean count rate before fluctuation suppression is carried out. Some parameters were fixed to their optimum values after an appropriate optimization procedure. The low-pass filters have a variable number of stationary coefficients depending on the specified error and the mean count rate. The simulated and realized methods, using the developed algorithms, guarantee that the response time does not exceed 2 s for mean count rates higher than 2 s⁻¹ and that the controllable mean count rate error remains within the range of ±4% to ±10%.
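The scheme described above can be caricatured in a few lines: a raw rate from counts per preset interval, a low-pass filter to suppress fluctuations, and a next preset interval that shrinks as the estimated rate grows. The single-pole filter constant, target count, and interval bounds here are assumptions for illustration, not the paper's optimized values:

```python
def next_interval(rate_est, target_counts=100, t_min=0.05, t_max=2.0):
    """Shorter preset intervals at higher estimated mean count rates."""
    if rate_est <= 0:
        return t_max
    return min(t_max, max(t_min, target_counts / rate_est))

def filtered_rate(rate_prev, counts, interval, alpha=0.2):
    """Single-pole low-pass update of the rate estimate."""
    raw = counts / interval
    return (1 - alpha) * rate_prev + alpha * raw

rate, t = 0.0, 2.0
for counts in (4, 5, 3, 6, 5):  # simulated counts per preset interval
    rate = filtered_rate(rate, counts, t)
    t = next_interval(rate)
    print(f"rate = {rate:.2f} cps, next interval = {t:.2f} s")
```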
Reducing the Teen Death Rate. KIDS COUNT Indicator Brief
Shore, Rima; Shore, Barbara
2009-01-01
Life continues to hold considerable risk for adolescents in the United States. In 2006, the teen death rate stood at 64 deaths per 100,000 teens (13,739 teens) (KIDS COUNT Data Center, 2009). Although it has declined by 4 percent since 2000, the rate of teen death in this country remains substantially higher than in many peer nations, based…
Tremsin, A.S., E-mail: ast@ssl.berkeley.edu; Vallerga, J.V.; McPhate, J.B.; Siegmund, O.H.W.
2015-07-01
Many high-resolution event counting devices process one event at a time and cannot register simultaneous events. In this article a frame-based readout event counting detector consisting of a pair of microchannel plates and a quad Timepix readout is described. More than 10^4 simultaneous events can be detected with a spatial resolution of ~55 µm, while >10^3 simultaneous events can be detected with <10 µm spatial resolution when event centroiding is implemented. The fast readout electronics is capable of processing >1200 frames/s, while the global count rate of the detector can exceed 5×10^8 particles/s when no timing information on every particle is required. For the first-generation Timepix readout, the timing resolution is limited by the Timepix clock to 10-20 ns. Optimization of the MCP gain, rear field voltage and Timepix threshold levels is crucial for the device performance, and that is the main subject of this article. These devices can be very attractive for applications where photon/electron/ion/neutron counting with high spatial and temporal resolution is required, such as energy-resolved neutron imaging, time-of-flight experiments in lidar applications, experiments on photoelectron spectroscopy and many others.
Trueb, P.; Sobott, B. A.; Schnyder, R.; Loeliger, T.; Schneebeli, M.; Kobas, M.; Rassool, R. P.; Peake, D. J.; Broennimann, C.
2013-03-01
PILATUS systems are well established as X-ray detectors at most synchrotrons. Their single photon counting capability ensures precise measurements, but introduces a short dead time after each hit, which becomes significant for photon rates above a million per second and pixel. The resulting loss in the number of counted photons can be corrected for by applying corresponding rate correction factors. This article presents a Monte-Carlo simulation, which computes the correction factors taking into account the detector settings as well as the time structure of the X-ray beam at the synchrotron. For the PILATUS2 detector series the simulation shows good agreement with experimentally determined correction factors for various detector settings at different synchrotrons. The application of more accurate rate correction factors will improve the X-ray data quality at high photon fluxes. Furthermore we report on the simulation of the rate correction factors for the new PILATUS3 systems. The successor of the PILATUS2 detector avoids the paralysation of the counter, and allows for measurements up to a rate of ten million photons per second and pixel. For fast detector settings the simulation is capable of reproducing the data within one to two percent at an incoming photon rate of one million per second and pixel.
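A stripped-down version of such a Monte-Carlo rate-correction estimate for a single pixel, covering both the paralyzable (PILATUS2-like) and nonparalyzable (PILATUS3-like) counting modes. A continuous Poisson beam and a single assumed dead time are used here, whereas the actual simulation also models the synchrotron bunch structure and detector settings:

```python
import random

def correction_factor(rate, tau, paralyzable, t_total=0.2, seed=1):
    """Monte-Carlo ratio of incoming to counted photons for one pixel."""
    rng = random.Random(seed)
    t, dead_until, n_in, n_out = 0.0, -1.0, 0, 0
    while True:
        t += rng.expovariate(rate)        # Poisson-distributed arrival gaps
        if t > t_total:
            break
        n_in += 1
        if t >= dead_until:
            n_out += 1                    # photon is counted
            dead_until = t + tau          # counter goes dead for tau
        elif paralyzable:
            dead_until = t + tau          # pile-up retriggers the dead time
    return n_in / n_out

tau = 200e-9  # assumed single-pixel dead time, s
print(correction_factor(1e6, tau, paralyzable=False))  # ~1 + rate*tau
print(correction_factor(1e6, tau, paralyzable=True))   # ~exp(rate*tau)
```

With the same arrival sequence, the paralyzable counter always records at most as many photons as the nonparalyzable one, so its correction factor is at least as large, matching the analytic limits noted in the comments.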
Maximum likelihood estimation for the double-count method with independent observers
Manly, Bryan F.J.; McDonald, Lyman L.; Garner, Gerald W.
1996-01-01
Data collected under a double-count protocol during line transect surveys were analyzed using new maximum likelihood methods combined with Akaike's information criterion to provide estimates of the abundance of polar bear (Ursus maritimus Phipps) in a pilot study off the coast of Alaska. Visibility biases were corrected by modeling the detection probabilities using logistic regression functions. Independent variables that influenced the detection probabilities included perpendicular distance of bear groups from the flight line and the number of individuals in the groups. A series of models were considered which vary from (1) the simplest, where the probability of detection was the same for both observers and was not affected by either distance from the flight line or group size, to (2) models where probability of detection is different for the two observers and depends on both distance from the transect and group size. Estimation procedures are developed for the case when additional variables may affect detection probabilities. The methods are illustrated using data from the pilot polar bear survey and some recommendations are given for design of a survey over the larger Chukchi Sea between Russia and the United States.
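In the simplest special case, with detection probability constant across groups (no distance or group-size covariates), the two-observer maximum likelihood solution reduces to the familiar Lincoln-Petersen-type estimates. A sketch with made-up survey counts:

```python
def double_count_estimates(n1, n2, m):
    """Constant-probability double-count estimates from two independent
    observers: n1, n2 groups seen by each observer, m seen by both."""
    p1 = m / n2             # observer 1 detection probability
    p2 = m / n1             # observer 2 detection probability
    n_groups = n1 * n2 / m  # estimated number of groups present
    return p1, p2, n_groups

# Hypothetical survey: 40 and 35 detections, 28 in common.
p1, p2, n_hat = double_count_estimates(n1=40, n2=35, m=28)
print(p1, p2, round(n_hat, 1))
```

The paper's full method generalizes this by modeling p1 and p2 with logistic regressions on perpendicular distance and group size, and comparing candidate models via AIC.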
Mean square convergence rates for maximum quasi-likelihood estimator
Arnoud V. den Boer
2015-03-01
In this note we study the behavior of maximum quasi-likelihood estimators (MQLEs) for a class of statistical models in which only knowledge about the first two moments of the response variable is assumed. This class includes, but is not restricted to, generalized linear models with general link function. Our main results are related to guarantees on existence, strong consistency and mean square convergence rates of MQLEs. The rates are obtained from first principles and are stronger than known almost-sure rates. Our results find important application in sequential decision problems with parametric uncertainty arising in dynamic pricing.
许有国
2005-01-01
Most people began to count in tens because they had ten fingers on their hands. But in some countries, people counted on one hand and used the three parts of their four fingers. So they counted in twelves, not in tens.
The tropical lapse rate steepened during the Last Glacial Maximum.
Loomis, Shannon E; Russell, James M; Verschuren, Dirk; Morrill, Carrie; De Cort, Gijs; Sinninghe Damsté, Jaap S; Olago, Daniel; Eggermont, Hilde; Street-Perrott, F Alayne; Kelly, Meredith A
2017-01-01
The gradient of air temperature with elevation (the temperature lapse rate) in the tropics is predicted to become less steep during the coming century as surface temperature rises, enhancing the threat of warming in high-mountain environments. However, the sensitivity of the lapse rate to climate change is uncertain because of poor constraints on high-elevation temperature during past climate states. We present a 25,000-year temperature reconstruction from Mount Kenya, East Africa, which demonstrates that cooling during the Last Glacial Maximum was amplified with elevation and hence that the lapse rate was significantly steeper than today. Comparison of our data with paleoclimate simulations indicates that state-of-the-art models underestimate this lapse-rate change. Consequently, future high-elevation tropical warming may be even greater than predicted.
The tropical lapse rate steepened during the Last Glacial Maximum
Loomis, Shannon E.; Russell, James M.; Verschuren, Dirk; Morrill, Carrie; De Cort, Gijs; Sinninghe Damsté, Jaap S.; Olago, Daniel; Eggermont, Hilde; Street-Perrott, F. Alayne; Kelly, Meredith A.
2017-01-01
The gradient of air temperature with elevation (the temperature lapse rate) in the tropics is predicted to become less steep during the coming century as surface temperature rises, enhancing the threat of warming in high-mountain environments. However, the sensitivity of the lapse rate to climate change is uncertain because of poor constraints on high-elevation temperature during past climate states. We present a 25,000-year temperature reconstruction from Mount Kenya, East Africa, which demonstrates that cooling during the Last Glacial Maximum was amplified with elevation and hence that the lapse rate was significantly steeper than today. Comparison of our data with paleoclimate simulations indicates that state-of-the-art models underestimate this lapse-rate change. Consequently, future high-elevation tropical warming may be even greater than predicted. PMID:28138544
Maximum orbit plane change with heat-transfer-rate considerations
Lee, J. Y.; Hull, D. G.
1990-01-01
Two aerodynamic maneuvers are considered for maximizing the plane change of a circular orbit: gliding flight with a maximum thrust segment to regain lost energy (aeroglide) and constant altitude cruise with the thrust being used to cancel the drag and maintain a high energy level (aerocruise). In both cases, the stagnation heating rate is limited. For aeroglide, the controls are the angle of attack, the bank angle, the time at which the burn begins, and the length of the burn. For aerocruise, the maneuver is divided into three segments: descent, cruise, and ascent. During descent the thrust is zero, and the controls are the angle of attack and the bank angle. During cruise, the only control is the assumed-constant angle of attack. During ascent, a maximum thrust segment is used to restore lost energy, and the controls are the angle of attack and bank angle. The optimization problems are solved with a nonlinear programming code known as GRG2. Numerical results for the Maneuverable Re-entry Research Vehicle with a heating-rate limit of 100 Btu/ft^2-s show that aerocruise gives a maximum plane change of 2 deg, which is only 1 deg larger than that of aeroglide. On the other hand, even though aerocruise requires two thrust levels, the cruise characteristics of constant altitude, velocity, thrust, and angle of attack are easy to control.
Maximum, minimum, and optimal mutation rates in dynamic environments
Ancliff, Mark; Park, Jeong-Man
2009-12-01
We analyze the dynamics of the parallel mutation-selection quasispecies model with a changing environment. For an environment with the sharp-peak fitness function in which the most fit sequence changes by k spin flips every period T , we find analytical expressions for the minimum and maximum mutation rates for which a quasispecies can survive, valid in the limit of large sequence size. We find an asymptotic solution in which the quasispecies population changes periodically according to the periodic environmental change. In this state we compute the mutation rate that gives the optimal mean fitness over a period. We find that the optimal mutation rate per genome, k/T , is independent of genome size, a relationship which is observed across broad groups of real organisms.
Use of Feedback to Maximize Photon Count Rate in XRF Spectroscopy
Lucas, Benjamin A
2016-01-01
The effective bandwidth of an energy-dispersive x-ray fluorescence (ED-XRF) spectroscopy system is limited by the timing of incident photons. When multiple photons strike the detector within its processing time, photon pile-up occurs and the signal received by the detector during this interval must be discarded. In conventional ED-XRF systems the probability of a photon being incident upon the detector is uniform over time, and thus pile-up follows Poisson statistics. In this paper we present a mathematical treatment of the relationship between photon timing statistics and the count rate of an XRF system. We show that it is possible to increase the maximum count rate by applying feedback from the detector to the x-ray source to alter the timing statistics of photon emission. Monte-Carlo simulations show that this technique can increase the maximum count rate of an XRF spectroscopy system by a factor of 2.94 under certain circumstances.
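The Poisson baseline that such feedback improves on can be written down directly: for a paralyzable detector with processing time τ and Poisson-timed photons, the recorded rate is r_out = r_in·exp(−r_in·τ), which peaks at r_in = 1/τ with a maximum of 1/(eτ). A minimal sketch (the 1 µs processing time is an assumed value):

```python
import math

def poisson_throughput(r_in, tau):
    """Recorded rate of a paralyzable detector under Poisson arrivals."""
    return r_in * math.exp(-r_in * tau)

tau = 1e-6                  # assumed detector processing time, s
r_peak = 1 / tau            # input rate that maximizes throughput
print(poisson_throughput(r_peak, tau))  # maximum 1/(e*tau) ≈ 3.68e5 cps
```

Feedback that regularizes photon emission in time (the paper's idea) changes the arrival statistics away from Poisson and can push the recorded rate above this ceiling.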
Predicting the solar maximum with the rising rate
Du, Z L
2011-01-01
The growth rate of solar activity in the early phase of a solar cycle has been known to be well correlated with the subsequent amplitude (solar maximum). It provides very useful information for a new solar cycle, as its variation reflects the temporal evolution of the dynamic process of solar magnetic activity from the initial phase to the peak phase of the cycle. The correlation coefficient between the solar maximum (Rmax) and the rising rate (β_a) at Δm months after the solar minimum (Rmin) is studied and shown to increase as the cycle progresses, with an inflection point (r = 0.83) at about Δm = 20 months. The prediction error of Rmax based on β_a is found to lie within the estimated bounds at the 90% confidence level, and the relative prediction error will be less than 20% when Δm ≥ 20. From the above relationship, the current cycle (24) is preliminarily predicted to peak around October 2013 with a size of Rmax = 84 ± 33 at the 90% confidence level.
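The core step of such a prediction scheme is a linear regression of Rmax on the rising rate β_a measured Δm months into the cycle. A minimal sketch with entirely hypothetical cycle data; the paper's actual regression coefficients and data are not reproduced here:

```python
def linfit(xs, ys):
    """Ordinary least squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

beta_a = [2.1, 3.4, 4.0, 5.2, 6.1]  # hypothetical rising rates
r_max = [75, 105, 118, 146, 165]    # hypothetical cycle maxima
a, b = linfit(beta_a, r_max)
print(f"Rmax ≈ {a:.1f} + {b:.1f} * beta_a")
```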
Measurement and relevance of maximum metabolic rate in fishes.
Norin, T; Clark, T D
2016-01-01
Maximum (aerobic) metabolic rate (MMR) is defined here as the maximum rate of oxygen consumption (M˙O2max) that a fish can achieve at a given temperature under any ecologically relevant circumstance. Different techniques exist for eliciting MMR of fishes, of which swim-flume respirometry (critical swimming speed tests and burst-swimming protocols) and exhaustive chases are the most common. Available data suggest that the most suitable method for eliciting MMR varies with species and ecotype, and depends on the propensity of the fish to sustain swimming for extended durations as well as its capacity to simultaneously exercise and digest food. MMR varies substantially (>10 fold) between species with different lifestyles (i.e. interspecific variation), and to a lesser extent within species (i.e. intraspecific variation). Because MMR sets the upper bound of aerobic scope, interest in measuring this trait has spread across disciplines in attempts to predict effects of climate change on fish populations. Here, various techniques used to elicit and measure MMR in different fish species with contrasting lifestyles are outlined and the relevance of MMR to the ecology, fitness and climate change resilience of fishes is discussed.
Acconcia, G.; Labanca, I.; Rech, I.; Gulinatti, A.; Ghioni, M.
2017-02-01
The minimization of Single Photon Avalanche Diodes (SPADs) dead time is a key factor to speed up photon counting and timing measurements. We present a fully integrated Active Quenching Circuit (AQC) able to provide a count rate as high as 100 MHz with custom technology SPAD detectors. The AQC can also operate the new red enhanced SPAD and provide the timing information with a timing jitter Full Width at Half Maximum (FWHM) as low as 160 ps.
Veberic, Darko
2011-01-01
We present a novel method for combining the analog and photon-counting measurements of lidar transient recorders into reconstructed photon returns. The method takes into account the statistical properties of the two measurement modes and estimates the most likely number of arriving photons and the most likely values of the acquisition parameters describing the two measurement modes. It extends and improves the standard combining ("gluing") methods and does not rely on any ad hoc definition of the overlap region or on any background subtraction method.
Akiba, M; Tsujino, K; Sato, K; Sasaki, M
2009-09-14
Multipixel silicon avalanche photodiodes (Si APDs) are novel photodetectors used as silicon photomultipliers (SiPMs) or multipixel photon counters (MPPCs) because they have fast response, photon-number resolution, and a high count rate; one drawback, however, is the high dark count rate. We developed a system for cooling an MPPC to liquid nitrogen temperature, thus reducing the dark count rate. Our system achieved dark count rates of <0.2 cps. Here we present the afterpulse probability, counting capability, timing jitter, and photon-number resolution of our system at 78.5 K and 295 K.
The mechanics of granitoid systems and maximum entropy production rates.
Hobbs, Bruce E; Ord, Alison
2010-01-13
A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10(4)-10(7) years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy and topography-driven flow over all length scales, resulting locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus, enabling finite thickness plutons to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate.
Ahmad, Mirza Sultan; Waheed, Abdul
2014-05-01
To determine frequency of thrombocytopenia and thrombocytosis, the MPV (mean platelet volume) and PDW (platelet distribution width) in patients with probable and culture proven neonatal sepsis and determine any association between platelet counts and mortality rate. Descriptive analytical study. NICU, Fazle Omar Hospital, from January 2011 to December 2012. Cases of culture proven and probable neonatal sepsis, admitted in Fazle Omar Hospital, Rabwah, were included in the study. Platelet counts, MPV and PDW of the cases were recorded. Mortality was documented. Frequencies of thrombocytopenia (platelet count < 150,000/mm3) and thrombocytosis (> 450,000/mm3) were ascertained. Mortality rates in different groups according to platelet counts were calculated and compared by chi-square test to check association. Four hundred and sixty nine patients were included; 68 (14.5%) of them died. One hundred and thirty six (29%) had culture proven sepsis, and 333 (71%) were categorized as probable sepsis. Thrombocytopenia was present in 116 (24.7%), and thrombocytosis was present in 36 (7.7%) cases. Median platelet count was 213.0/mm3. Twenty eight (27.7%) patients with thrombocytopenia, and 40 (12.1%) cases with normal or raised platelet counts died (p < 0.05). Thrombocytopenia is common in neonatal sepsis. Those with thrombocytopenia have higher mortality rate. No significant difference was present between PDW and MPV of the cases who survived and died.
Reducing the Child Poverty Rate. KIDS COUNT Indicator Brief
Shore, Rima; Shore, Barbara
2009-01-01
In 2007, nearly one in five or 18 percent of children in the U.S. lived in poverty (KIDS COUNT Data Center, 2009). Many of these children come from minority backgrounds. African American (35 percent), American Indian (33 percent) and Latino (27 percent) children are more likely to live in poverty than their white (11 percent) and Asian (12…
VANSTEENIS, HG; TULEN, JHM; MULDER, LJM
1994-01-01
This paper compares two methods to estimate heart rate variability spectra, i.e., the spectrum of counts and the instantaneous heart rate spectrum. Contrary to Fourier techniques based on equidistant sampling of the interbeat intervals, the spectrum of counts and the instantaneous heart rate spectrum
47 CFR 65.700 - Determining the maximum allowable rate of return.
2010-10-01
... CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Maximum Allowable Rates of Return § 65.700 Determining the maximum allowable rate of return. (a) The maximum allowable rate of return for any exchange carrier's earnings on any access service category shall...
Kuracina Richard
2015-06-01
Full Text Available The article deals with the measurement of maximum explosion pressure and the maximum rate of explosion pressure rise of wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds. Part 1: Determination of the maximum explosion pressure pmax of dust clouds and the maximum rate of explosion pressure rise according to STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds - Part 2: Determination of the maximum rate of explosion pressure rise (dp/dtmax of dust clouds. The wood dust cloud in the chamber is achieved mechanically. The testing of explosions of wood dust clouds showed that the maximum value of the pressure was reached at the concentration of 450 g/m3 and its value is 7.95 bar. The fastest increase of pressure was observed at the concentration of 450 g/m3 and its value was 68 bar/s.
The tropical lapse rate steepened during the Last Glacial Maximum
Loomis, S.E.; Russell, J.M.; Verschuren, D.; Morrill, C.; De Cort, G.; Sinninghe Damsté, J.S.; Olago, D.; Eggermont, H.; Street-Perrott, F.A.; Kelly, M.A.
2017-01-01
The gradient of air temperature with elevation (the temperature lapse rate) in the tropics is predicted to become less steep during the coming century as surface temperature rises, enhancing the threat of warming in high-mountain environments. However, the sensitivity of the lapse rate to climate
A Maximum Information Rate Quaternion Filter for Spacecraft Attitude Estimation
Reijneveld, J.; Maas, A.; Choukroun, D.; Kuiper, J.M.
2011-01-01
Building on previous works, this paper introduces a novel continuous-time stochastic optimal linear quaternion estimator under the assumptions of rate gyro measurements and of vector observations of the attitude. A quaternion observation model, whose observation matrix is rank degenerate, is reduced
78 FR 13999 - Maximum Interest Rates on Guaranteed Farm Loans
2013-03-04
... have removed the term. Comment: Don't remove the ``average agricultural loan customer'' definition. The... the following methods: Federal eRulemaking Portal: Go to http://www.regulations.gov . Follow the.... Comment: FSA should let the market dictate what interest rate lenders charge guaranteed borrowers, rather...
General Theory of Decoy-State Quantum Cryptography with Dark Count Rate Fluctuation
GAO Xiang; SUN Shi-Hai; LIANG Lin-Mei
2009-01-01
The existing theory of decoy-state quantum cryptography assumes that the dark count rate is a constant, but in practice there exists fluctuation. We develop a new scheme of the decoy state, achieve a more practical key generation rate in the presence of fluctuation of the dark count rate, and compare the result with the result of the decoy state without fluctuation. It is found that the key generation rate and maximal secure distance will be decreased under the influence of the fluctuation of the dark count rate.
Instrumental oscillations in RHESSI count rates during solar flares
Inglis, A R; Dennis, B R; Kontar, E P; Nakariakov, V M; Struminsky, A B; Tolbert, A K
2011-01-01
Aims: We seek to illustrate the analysis problems posed by RHESSI spacecraft motion by studying persistent instrumental oscillations found in the lightcurves measured by RHESSI's X-ray detectors in the 6-12 keV and 12-25 keV energy range during the decay phase of the flares of 2004 November 4 and 6. Methods: The various motions of the RHESSI spacecraft which may contribute to the manifestation of oscillations are studied. The response of each detector in turn is also investigated. Results: We find that on 2004 November 6 the observed oscillations correspond to the nutation period of the RHESSI instrument. These oscillations are also of greatest amplitude for detector 5, while in the lightcurves of many other detectors the oscillations are small or undetectable. We also find that the variation in detector pointing is much larger during this flare than the counterexample of 2004 November 4. Conclusions: Sufficiently large nutation motions of the RHESSI spacecraft lead to clearly observable oscillations in count...
Note: A high count rate real-time digital processing method for PGNAA data acquisition system
Liu, Yuzhe; Chen, Lian; Li, Feng; Liang, Futian; Jin, Ge
2017-07-01
The prompt gamma neutron activation analysis (PGNAA) technique is a real-time online method to analyze the composition of industrial materials. This paper presents a data acquisition system with a high count rate and real-time digital processing method for PGNAA. Limited by the decay time of the detector, the ORTEC multi-channel analyzer (MCA) can normally achieve an average count rate of 100 kcps. However, this system uses an electrical technique to increase the average count rate and reduce dead time, and guarantees good accuracy. Since the measuring time is usually limited to about 120 s, in order to accelerate the accumulation of the spectrum and reduce the statistical error, the average count rate is expected to reach more than 500 kcps.
Kappler, S.; Hölzer, S.; Kraft, E.; Stierstorfer, K.; Flohr, T.
2011-03-01
The application of quantum-counting detectors in clinical Computed Tomography (CT) is challenged by extreme X-ray fluxes provided by modern high-power X-ray tubes. Scanning of small objects or sub-optimal patient positioning may lead to situations where those fluxes impinge on the detector without attenuation. Even in operation modes optimized for high-rate applications, with small pixels and high bias voltage, CdTe/CdZnTe detectors deliver pulses in the range of several nanoseconds. This can result in severe pulse pile-up causing detector paralysis and ambiguous detector signals. To overcome this problem we introduce the pile-up trigger, a novel method that provides unambiguous detector signals in rate regimes where classical rising-edge counters run into count-rate paralysis. We present detailed CT image simulations assuming ideal sensor material not suffering from polarization effects at high X-ray fluxes. This way we demonstrate the general feasibility of the pile-up trigger method and quantify resulting imaging properties such as contrasts, image noise and dual-energy performance in the high-flux regime of clinical CT devices.
Bone and gallium scans in mastocytosis: correlation with count rates, radiography, and microscopy
Ensslen, R.D. (Cross Cancer Inst., Edmonton, Alberta); Jackson, F.I.; Reid, A.M.
1983-07-01
Mastocytosis (urticaria pigmentosa) was proven in a patient suffering from severe back pain. A bone scan showed diffusely increased bone activity. Count rates were also abnormally elevated over several areas of the skeleton. Radiographs were consistent with mastocytosis in bone.
9 CFR 381.68 - Maximum inspection rates-New turkey inspection system.
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Maximum inspection rates-New turkey inspection system. 381.68 Section 381.68 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE... Procedures § 381.68 Maximum inspection rates—New turkey inspection system. (a) The maximum inspection...
Werner, Jan; Griebeler, Eva Maria
2014-01-01
We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of
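The fixed-slope regression described above reduces to estimating one intercept per group: with the Metabolic Theory slope b = 0.75 imposed, the least-squares intercept for a group is simply the mean of log(rate) − 0.75·log(mass) over that group. A minimal sketch with invented data (function names are ours, not the authors'):

```python
def group_intercepts(log_mass, log_rate, groups, slope=0.75):
    """Least-squares intercept per group under a shared, fixed slope.

    Model: log_rate = a_group + slope * log_mass.  With the slope fixed,
    the ML/least-squares intercept is the mean residual within each group.
    """
    intercepts = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        resid = [log_rate[i] - slope * log_mass[i] for i in idx]
        intercepts[g] = sum(resid) / len(resid)
    return intercepts
```

The difference between two group intercepts (in log10 units) then directly gives the fold-difference in growth rate at equal body mass, which is how statements like "reptiles grow ~10 times slower than mammals" are read off.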
Development of Fast High-Resolution Muon Drift-Tube Detectors for High Counting Rates
INSPIRE-00287945; Dubbert, J.; Horvat, S.; Kortner, O.; Kroha, H.; Legger, F.; Richter, R.; Adomeit, S.; Biebel, O.; Engl, A.; Hertenberger, R.; Rauscher, F.; Zibell, A.
2011-01-01
Pressurized drift-tube chambers are efficient detectors for high-precision tracking over large areas. The Monitored Drift-Tube (MDT) chambers of the muon spectrometer of the ATLAS detector at the Large Hadron Collider (LHC) reach a spatial resolution of 35 microns and almost 100% tracking efficiency with 6 layers of 30 mm diameter drift tubes operated with Ar:CO2 (93:7) gas mixture at 3 bar and a gas gain of 20000. The ATLAS MDT chambers are designed to cope with background counting rates due to neutrons and gamma-rays of up to about 300 kHz per tube, which will be exceeded for LHC luminosities larger than the design value of 10^34 cm^-2 s^-1. Decreasing the drift-tube diameter to 15 mm while keeping the other parameters, including the gas gain, unchanged reduces the maximum drift time from about 700 ns to 200 ns and the drift-tube occupancy by a factor of 7. New drift-tube chambers for the endcap regions of the ATLAS muon spectrometer have been designed. A prototype chamber consisting of 12 times 8 l...
Scheike, Thomas Harder
2002-01-01
We use the additive risk model of Aalen (Aalen, 1980) as a model for the rate of a counting process. Rather than specifying the intensity, that is the instantaneous probability of an event conditional on the entire history of the relevant covariates and counting processes, we present a model for the rate function, i.e., the instantaneous probability of an event conditional on only a selected set of covariates. When the rate function for the counting process is of Aalen form we show that the usual Aalen estimator can be used and gives almost unbiased estimates. The usual martingale-based variance estimator is incorrect and an alternative estimator should be used. We also consider the semi-parametric version of the Aalen model as a rate model (McKeague and Sasieni, 1994) and show that the standard errors that are computed based on an assumption of intensities are incorrect and give a different...
A Calibration of NICMOS Camera 2 for Low Count-Rates
Rubin, D; Amanullah, R; Barbary, K; Dawson, K S; Deustua, S; Faccioli, L; Fadeyev, V; Fakhouri, H K; Fruchter, A S; Gladders, M D; de Jong, R S; Koekemoer, A; Krechmer, E; Lidman, C; Meyers, J; Nordin, J; Perlmutter, S; Ripoche, P; Schlegel, D J; Spadafora, A; Suzuki, N; The Supernova Cosmology Project
2015-01-01
NICMOS 2 observations are crucial for constraining distances to most of the existing sample of z > 1 SNe Ia. Unlike the conventional calibration programs, these observations involve long exposure times and low count rates. Reciprocity failure is known to exist in HgCdTe devices and a correction for this effect has already been implemented for high and medium count-rates. However observations at faint count-rates rely on extrapolations. Here instead, we provide a new zeropoint calibration directly applicable to faint sources. This is obtained via inter-calibration of NIC2 F110W/F160W with WFC3 in the low count-rate regime using z ~ 1 elliptical galaxies as tertiary calibrators. These objects have relatively simple near-IR SEDs, uniform colors, and their extended nature gives superior signal-to-noise at the same count rate than would stars. The use of extended objects also allows greater tolerances on PSF profiles. We find ST magnitude zeropoints (after the installation of the NICMOS cooling system, NCS) of 25....
A real-time phoneme counting algorithm and application for speech rate monitoring.
Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava
2017-03-01
Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm which implements spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his/her speech rate for self-practice and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting of speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice.
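The boundary-detection idea above (estimating phoneme boundaries from spectral transitions, then dividing by duration to get a speaking rate) can be sketched crudely: frame the signal, compare normalized magnitude spectra of consecutive frames, and count large jumps. This is a toy stand-in for the authors' spectral transition measure; the frame size, threshold and function names are invented:

```python
import numpy as np

def count_phoneme_boundaries(signal, sr, frame_s=0.02, thresh=0.5):
    """Count large spectral transitions, a crude proxy for phoneme boundaries."""
    hop = int(frame_s * sr)
    window = np.hanning(hop)
    frames = [signal[i:i + hop] * window
              for i in range(0, len(signal) - hop, hop)]
    # unit-normalised magnitude spectra, one per frame
    specs = []
    for f in frames:
        s = np.abs(np.fft.rfft(f))
        specs.append(s / (np.linalg.norm(s) + 1e-12))
    # spectral transition measure: distance between consecutive spectra
    flux = [np.linalg.norm(b - a) for a, b in zip(specs, specs[1:])]
    return int(sum(f > thresh for f in flux))

def speaking_rate(signal, sr):
    """Boundaries per second, a rough analogue of phonemes per second."""
    return count_phoneme_boundaries(signal, sr) / (len(signal) / sr)
```

A real implementation would use overlapping frames, mel or cepstral features and peak-picking rather than a hard threshold, but the structure — framing, spectral distance, boundary counting, division by time — is the same.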
Linear-log counting-rate meter uses transconductance characteristics of a silicon planar transistor
Eichholz, J. J.
1969-01-01
Counting rate meter compresses a wide range of data values, or decades of current. Silicon planar transistor, operating in the zero collector-base voltage mode, is used as a feedback element in an operational amplifier to obtain the log response.
Cooper, R.J., E-mail: rjcooper@lbl.gov [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Amman, M.; Luke, P.N. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Vetter, K. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Department of Nuclear Engineering, University of California, Berkeley, CA 94720 (United States)
2015-09-21
Where energy resolution is paramount, High Purity Germanium (HPGe) detectors continue to provide the optimum solution for gamma-ray detection and spectroscopy. Conventional large-volume HPGe detectors, however, are typically limited to count rates on the order of ten thousand counts per second, limiting their effectiveness for high count rate applications. To address this limitation, we have developed a novel prototype HPGe detector designed to be capable of achieving fine energy resolution and high event throughput at count rates in excess of one million counts per second. We report here on the concept, design, and initial performance of the first prototype device.
Daniel L. Rabosky
2006-01-01
Full Text Available Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
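A stripped-down sketch of the kind of likelihood contrast LASER performs: a constant-rate model is compared against a model whose rate changes at a breakpoint. This replaces the full birth-death machinery with simple exponential waiting times between successive speciation events, so treat it only as an illustration of rate-constant vs. rate-variable model comparison; the function names are ours:

```python
import math

def exp_loglik(waits, rate):
    """Log-likelihood of exponential waiting times at a given rate."""
    return sum(math.log(rate) - rate * w for w in waits)

def constant_rate_fit(waits):
    """ML rate for exponential waiting times is n / sum(waits)."""
    rate = len(waits) / sum(waits)
    return rate, exp_loglik(waits, rate)

def two_rate_fit(waits):
    """Best breakpoint and per-segment ML rates for a one-shift model."""
    best = (None, -math.inf)
    for k in range(1, len(waits)):          # try every breakpoint
        r1, ll1 = constant_rate_fit(waits[:k])
        r2, ll2 = constant_rate_fit(waits[k:])
        if ll1 + ll2 > best[1]:
            best = ((k, r1, r2), ll1 + ll2)
    return best
```

Twice the log-likelihood difference between the two fits gives the likelihood-ratio statistic that rate-shift tests of this kind evaluate (with a correction for the extra parameters).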
13 CFR 107.845 - Maximum rate of amortization on Loans and Debt Securities.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum rate of amortization on... ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES Financing of Small Businesses by Licensees Structuring... rate of amortization on Loans and Debt Securities. The principal of any Loan (or the loan portion...
An automatic attenuator device for x-ray detectors at high counting rate
Alvarez, J.; Paiser, E.; Capitan, M. J.
2002-07-01
In this article we describe an attenuator device for reducing/controlling the pulse detector counting losses at a high counting rate. The electronics are based on a direct measure of the detector dead time from the analog output signal at the end of the detection chain. Taking into account this parameter the attenuator device decides to reduce/enhance the number of photons that arrive at the detector by inserting/extracting the necessary number of attenuation foils in the x-ray beam path. In that way the number of events in the incoming signal are reduced and the "apparent dynamic range" of the detector is increased.
Statistical analysis of dark count rate in Geiger-mode APD FPAs
Itzler, Mark A.; Krishnamachari, Uppili; Chau, Quan; Jiang, Xudong; Entwistle, Mark; Owens, Mark; Slomkowski, Krystyna
2014-10-01
We present a temporal statistical analysis of the array-level dark count behavior of Geiger-mode avalanche photodiode (GmAPD) focal plane arrays that distinguishes between Poissonian intrinsic dark count rate and non-Poissonian crosstalk counts by considering "inter-arrival" times between successive counts from the entire array. For 32 x 32 format sensors with 100 μm pixel pitch, we show the reduction of crosstalk for smaller active area sizes within the pixel. We also compare the inter-arrival time behavior for arrays with narrow band (900 - 1100 nm) and broad band (900 - 1600 nm) spectral response. We then consider a similar analysis of larger format 128 x 32 arrays. As a complement to the temporal analysis, we describe the results of a spatial analysis of crosstalk events. Finally, we propose a simple model for the impact of crosstalk events on the Poissonian statistics of intrinsic dark counts that provides a qualitative explanation for the results of the inter-arrival time analysis for arrays with varying degrees of crosstalk.
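The inter-arrival-time analysis above can be illustrated with a toy simulation: intrinsic dark counts are Poissonian, so their waiting times are exponential, while crosstalk injects a surplus of very short inter-arrival times. All rates, probabilities and the crosstalk delay below are invented:

```python
import random

def simulate_timestamps(rate, duration, crosstalk_prob=0.0, delay=1e-8, seed=1):
    """Poissonian dark-count timestamps, optionally with correlated crosstalk."""
    random.seed(seed)
    t, stamps = 0.0, []
    while t < duration:
        t += random.expovariate(rate)        # primary Poisson dark count
        stamps.append(t)
        if random.random() < crosstalk_prob:  # correlated crosstalk event
            stamps.append(t + delay)
    return sorted(stamps)

def short_fraction(stamps, cut):
    """Fraction of inter-arrival times below `cut`; near zero for pure Poisson."""
    gaps = [b - a for a, b in zip(stamps, stamps[1:])]
    return sum(g < cut for g in gaps) / len(gaps)
```

For a pure Poisson process at rate r, the expected fraction of gaps below a cut c is 1 − exp(−r·c); an excess above that baseline is the crosstalk signature the abstract describes.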
Solar models of low neutrino-counting rate - The depleted Maxwellian tail
Clayton, D. D.; Dwek, E.; Newman, M. J.; Talbot, R. J., Jr.
1975-01-01
Evolutionary sequences for the sun are presented which confirm that the Cl-37 neutrino counting rate will be greatly reduced if the high-energy tail of the Maxwellian distribution of relative energies is progressively depleted. Thermonuclear reaction rates and pressure are reevaluated for a distribution function modified by the correction factor suggested by Clayton (1974), and the effect of the results on solar models calculated with a simple Henyey code is discussed. It is shown that if the depletion is characterized by a certain exponential dependence on the distribution function, the counting rate will fall below 1 SNU for a distribution function of not less than 0.01. Suggestions are made for measuring the distribution function in the sun by means of neutrino spectroscopy and photography.
The Scaling of Maximum and Basal Metabolic Rates of Mammals and Birds
Barbosa, Lauro A.; Garcia, Guilherme J. M.; Silva, Jafferson K. L. da
2004-01-01
Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts that maximum metabolic rate scales as $M^{6/7}$, maximum heart rate as $M^{-1/7}$, and muscular capillary density as $M^{-1/7}$, in agreement with data.
Dose and dose rate effects of irradiation on blood count and cytokine assay in mice
Kim, Joong Sun [Research center, Dongnam institute of radiological and Medical Sciences (DIRAMS), Busan (Korea, Republic of)
2013-11-15
The possible role of exposure to radiation as a risk factor for human health has been of increasing public concern following the series of explosions at earthquake-damaged nuclear reactors in Japan. Current events throughout the world underscore the growing threat of different forms of accidental exposure to radiation, including nuclear accidents, atomic weapons use and testing, and the side effects of cancer therapy. A large range of dose rates of ionizing radiation could be encountered in accidental radiation situations. Nevertheless, most studies of radiation effects have only examined a high dose rate. In this study, we investigated the blood count and the cytokine levels in the serum of mice exposed to a high or low dose rate of radiation. The precise molecular mechanism underlying the low dose rate of radiation remains unclear, but differential hematopoietic effects of radiation delivered at a high versus a low dose rate were observed in peripheral blood counts and serum cytokines. These data suggest that chronic low dose rate exposure stimulated the hematopoietic system, unlike higher dose rate exposure. Our data suggest that the dose rate, rather than the total dose, may be more critical in causing damage to the cellular hematopoietic compartments of the body.
Improved count rate corrections for highest data quality with PILATUS detectors.
Trueb, P; Sobott, B A; Schnyder, R; Loeliger, T; Schneebeli, M; Kobas, M; Rassool, R P; Peake, D J; Broennimann, C
2012-05-01
The PILATUS detector system is widely used for X-ray experiments at third-generation synchrotrons. It is based on a hybrid technology combining a pixelated silicon sensor with a CMOS readout chip. Its single-photon-counting capability ensures precise and noise-free measurements. The counting mechanism introduces a short dead-time after each hit, which becomes significant for rates above 10⁶ photons s⁻¹ pixel⁻¹. The resulting loss in the number of counted photons is corrected for by applying corresponding rate correction factors. This article presents the results of a Monte Carlo simulation which computes the correction factors taking into account the detector settings as well as the time structure of the X-ray beam at the synchrotron. The results of the simulation show good agreement with experimentally determined correction factors for various detector settings at different synchrotrons. The application of accurate rate correction factors improves the X-ray data quality acquired at high photon fluxes. Furthermore, it is shown that the use of fast detector settings in combination with an optimized time structure of the X-ray beam allows for measurements up to rates of 10⁷ photons s⁻¹ pixel⁻¹.
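The rate corrections described above can be illustrated with the two textbook dead-time models. The published PILATUS correction factors come from a detailed Monte Carlo simulation; the sketch below (Python) is only the idealized first-order picture, and the 120 ns dead time is an assumed example value, not taken from the abstract:

```python
import math

def observed_rate(n_true, tau, paralyzable=True):
    """Observed count rate for a detector with dead time tau (s).

    Paralyzable:     m = n * exp(-n * tau)
    Nonparalyzable:  m = n / (1 + n * tau)
    """
    if paralyzable:
        return n_true * math.exp(-n_true * tau)
    return n_true / (1.0 + n_true * tau)

def correct_rate(m_obs, tau, paralyzable=True, iters=60):
    """Recover the true rate from the observed one; the rate correction
    factor is then n_true / m_obs.  The paralyzable model is inverted by
    fixed-point iteration on its lower branch (valid for n * tau < 1)."""
    if not paralyzable:
        return m_obs / (1.0 - m_obs * tau)
    n = m_obs
    for _ in range(iters):
        n = m_obs * math.exp(n * tau)
    return n

# Example with an assumed 120 ns dead time at 10^6 photons/s/pixel:
tau = 120e-9
m = observed_rate(1.0e6, tau)   # roughly 11% of hits are lost
n_rec = correct_rate(m, tau)    # recovers approximately 1.0e6
```

The fixed-point inversion is one simple choice; any root finder on m = n·exp(-n·tau) would do equally well at these rates.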
Morrison, Glenn; Shaughnessy, Richard; Shu, Shi
2011-02-01
A Monte Carlo analysis of indoor ozone levels in four cities was applied to provide guidance to regulatory agencies on setting maximum ozone emission rates from consumer appliances. Measured distributions of air exchange rates, ozone decay rates and outdoor ozone levels at monitoring stations were combined with a steady-state indoor air quality model, resulting in emission rate distributions (mg h⁻¹) as a function of the percentage of building hours protected from exceeding a target maximum indoor concentration of 20 ppb. Whole-year, summer and winter results for Elizabeth, NJ, Houston, TX, Windsor, ON, and Los Angeles, CA exhibited strong regional differences, primarily due to differences in air exchange rates. Infiltration of ambient ozone at higher average air exchange rates significantly reduces allowable emission rates, even though air exchange also dilutes emissions from appliances. For Houston, TX and Windsor, ON, which have lower average residential air exchange rates, emission rates ranged from -1.1 to 2.3 mg h⁻¹ for scenarios that protect 80% or more of building hours from experiencing ozone concentrations greater than 20 ppb in summer. For Los Angeles, CA and Elizabeth, NJ, with higher air exchange rates, only negative emission rates were allowable to provide the same level of protection. For the 80th percentile residence, we estimate that an 8-h average limit concentration of 20 ppb would be exceeded, even in the absence of an indoor ozone source, on 40 or more days per year in any of the cities analyzed. The negative emission rates emerging from the analysis suggest that only a zero-emission rate standard is prudent for Los Angeles, Elizabeth, NJ and other regions with higher summertime air exchange rates. For regions such as Houston with lower summertime air exchange rates, the higher allowable emission rates would likely increase occupant exposure to the undesirable products of ozone reactions, thus reinforcing the need for a zero-emission rate standard.
Characteristic Count Rate Profiles for a Rotating Modulator Gamma-Ray Imager
Budden, Brent S; Case, Gary L; Cherry, Michael L
2011-01-01
Rotating modulation is a technique for indirect imaging in the hard X-ray and soft gamma-ray energy bands, which may offer an advantage over coded aperture imaging at high energies. A rotating modulator (RM) consists of a single mask of co-planar parallel slats, typically spaced equidistantly, suspended above an array of circular non-imaging detectors. The mask rotates, temporally modulating the transmitted image of the object scene. The measured count rate profiles of each detector are folded modulo the mask rotational period, and the object scene is reconstructed using pre-determined characteristic modulation profiles. The use of Monte Carlo simulation to derive the characteristic count rate profiles is accurate but computationally expensive; an analytic approach is preferred for its speed of computation. We present both the standard and a new advanced characteristic formula describing the modulation pattern of the RM; the latter is a more robust description of the instrument response developed as part ...
17 CFR 148.7 - Rulemaking on maximum rates for attorney fees.
2010-04-01
Rulemaking on maximum rates for attorney fees. Section 148.7, Commodity and Securities Exchanges, COMMODITY FUTURES TRADING... increase in the cost of living or by special circumstances (such as limited availability of...
The 220-age equation does not predict maximum heart rate in children and adolescents
Verschuren, Olaf; Maltais, Desiree B.; Takken, Tim
2011-01-01
Our primary purpose was to provide maximum heart rate (HR(max)) values for ambulatory children with cerebral palsy (CP). The secondary purpose was to determine the effects of age, sex, ambulatory ability, height, and weight on HR(max). In 362 ambulatory children and adolescents with CP (213 males an
Investigation of Detector Behaviour At High Count Rates for the Purple Crow Lidar
Sica, R. J.; McCullough, E. M.; Jalali, A.; Hartery, S.; Farhani, G.; Argall, P.; Argall, S.
2013-12-01
Temperature measurements in the middle and upper atmosphere are an important complement to similar measurements in the lower atmosphere. Even modest-size Rayleigh-scatter lidars are capable of high quality measurements of temperature in the stratosphere (above 25 km) and lower mesosphere. The most commonly reported uncertainty, that due to counting statistics, is well understood and affects temperatures at the greatest heights (i.e. lowest signal rates). Counting statistics have a lesser effect on temperatures at the lower range of measurements, where the photocount rate is larger. However, if a lidar's dynamic range is increased by combining analog and digital counting profiles into a 'glued' profile, the gluing introduces a systematic uncertainty. In this presentation we will show the effect of the uncertainty due to gluing on our temperature measurements. The Purple Crow Lidar (PCL), located at The University of Western Ontario's Echo Base Field Station near London, Canada, has undergone considerable modifications to its transmitter (now a Litron Nd:YAG laser outputting 1 J/pulse at 532 nm with a repetition rate of 30 Hz) as well as to its data acquisition system. The PCL has retained its 2.6 m diameter liquid mercury mirror, giving the system a large power-aperture product. Such a large throughput requires simultaneous analog-digital detection to obtain Rayleigh-scatter temperatures from 25 to above 100 km. The analog and digital profiles must be combined into a single continuous profile, a process called gluing. Several excellent methods for gluing profiles have been presented, but prior to now systematic uncertainties due to the procedure have not been quantified. We will present a detailed characterization of the analog and digital counting channels, using a variety of tests which will show the effect of the gluing procedure on the retrieved temperature.
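One common gluing approach is to fit a linear scaling between the analog and digital channels over an overlap region where both are assumed valid, then switch channels at a count-rate threshold. This is a hedged sketch in Python: the PCL's actual procedure and its uncertainty budget are more involved, and the overlap region and threshold below are illustrative assumptions, not values from the abstract:

```python
def glue_profiles(analog, digital, overlap, max_digital):
    """Combine an analog profile and a photon-counting (digital) profile.

    analog, digital: per-altitude-bin signals from the two channels.
    overlap:         bin indices where both channels are assumed linear.
    max_digital:     digital rate above which pile-up is assumed; below it
                     the digital channel is used directly.
    """
    # Least-squares fit digital ~ a * analog + b over the overlap region
    xs = [analog[i] for i in overlap]
    ys = [digital[i] for i in overlap]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    # Glued profile: digital counts where trustworthy, scaled analog elsewhere
    return [d if d < max_digital else a * x + b
            for x, d in zip(analog, digital)]
```

The systematic uncertainty the abstract discusses enters precisely through the fitted a and b, so propagating their covariance into the glued profile is the natural next step.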
Flanagan, Jonathan M.; Alvarez, Ofelia A.; Nelson, Stephen C.; Aygun, Banu; Nottage, Kerri A.; George, Alex; Roberts, Carla W.; Piccone, Connie M.; Howard, Thad A.; Davis, Barry R.; Ware, Russell E.
2016-01-01
Discovery and validation of genetic variants that influence disease severity in children with sickle cell anemia (SCA) could lead to early identification of high-risk patients, better screening strategies, and intervention with targeted and preventive therapy. We hypothesized that newly identified genetic risk factors for the general African American population could also impact laboratory biomarkers known to contribute to the clinical disease expression of SCA, including variants influencing the white blood cell count and the development of albuminuria and abnormal glomerular filtration rate. We first investigated candidate genetic polymorphisms in well-characterized SCA pediatric cohorts from three prospective NHLBI-supported clinical trials: HUSTLE, SWiTCH, and TWiTCH. We also performed whole exome sequencing to identify novel genetic variants, using both a discovery and a validation cohort. Among candidate genes, DARC rs2814778 polymorphism regulating Duffy antigen expression had a clear influence with significantly increased WBC and neutrophil counts, but did not affect the maximum tolerated dose of hydroxyurea therapy. The APOL1 G1 polymorphism, an identified risk factor for non-diabetic renal disease, was associated with albuminuria. Whole exome sequencing discovered several novel variants that maintained significance in the validation cohorts, including ZFHX4 polymorphisms affecting both the leukocyte and neutrophil counts, as well as AGGF1, CYP4B1, CUBN, TOR2A, PKD1L2, and CD163 variants affecting the glomerular filtration rate. The identification of robust, reliable, and reproducible genetic markers for disease severity in SCA remains elusive, but new genetic variants provide avenues for further validation and investigation. PMID:27711207
Maximum initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability
Dell, Z. R.; Pandian, A.; Bhowmick, A. K.; Swisher, N. C.; Stanic, M.; Stellingwerf, R. F.; Abarzhi, S. I.
2017-09-01
We focus on the classical problem of the dependence of the initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability (RMI) on the initial conditions, by developing a novel empirical model and by employing rigorous theories and Smoothed Particle Hydrodynamics simulations to describe the simulation data with statistical confidence in a broad parameter regime. For the given values of the shock strength, fluid density ratio, and wavelength of the initial perturbation of the fluid interface, we find the maximum value of the RMI initial growth-rate, the corresponding amplitude scale of the initial perturbation, and the maximum fraction of interfacial energy. This amplitude scale is independent of the shock strength and density ratio and is a characteristic quantity of RMI dynamics. We discover an exponential decay of the ratio of the initial and linear growth-rates of RMI with the initial perturbation amplitude, in excellent agreement with available data.
Low-noise multichannel ASIC for high count rate X-ray diffractometry applications
Szczygiel, R. [AGH University of Science and Technology, Department of Measurement and Instrumentation, al. Mickiewicza 30, Krakow (Poland)], E-mail: robert.szczygiel@agh.edu.pl; Grybos, P.; Maj, P. [AGH University of Science and Technology, Department of Measurement and Instrumentation, al. Mickiewicza 30, Krakow (Poland); Tsukiyama, A.; Matsushita, K.; Taguchi, T. [Rigaku Corporation, 3-9-12 Matsubara-cho, Akishima-shi, Tokyo (Japan)
2009-08-01
RG64 is a 64-channel ASIC designed for the silicon strip detector readout and optimized for high count rate X-ray imaging applications. In this paper we report on the test results referring to the RG64 noise level, channel uniformity and the operation with a high rate of input signals. The parameters of the RG64-based diffractometry system are compared with the ones based on the scintillation counter. Diffractometry measurement results with silicon strip detectors of different strip lengths and strip pitch are also presented.
Curtis, Tyler E; Roeder, Ryan K
2017-07-06
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
Grande, L.A.
1966-01-31
The purpose of this study was to evaluate radiological conditions that exist on the riverbank of the Columbia River. Included was a comparative study of the suitability of three instruments to measure the dose rates. These instruments were a NaI(Tl) scintillation counter normally used for aerial monitoring, a bioplastic scintillation counter normally used as a road monitor, and a portable 40 liter ionization chamber normally used to measure very low gamma dose rates. The selection of representative sites for the comparative study was based on an initial GM survey of the general areas in question. Seven sites were studied--from Vernita Ferry Landing above the Hanford project to Sacajawea Park below Pasco.
Effects of electric field on the maximum electro-spinning rate of silk fibroin solutions.
Park, Bo Kyung; Um, In Chul
2017-02-01
Owing to the excellent cyto-compatibility of silk fibroin (SF) and the simple fabrication of nano-fibrous webs, electro-spun SF webs have attracted much research attention in numerous biomedical fields. Because the production rate of electro-spun webs depends strongly on the electro-spinning rate, increasing the electro-spinning rate is of practical importance. In the present study, to improve the electro-spinning rate of SF solutions, various electric fields were applied during electro-spinning of SF, and their effects on the maximum electro-spinning rate of the SF solution, as well as on the diameters and molecular conformations of the electro-spun SF fibers, were examined. As the electric field was increased, the maximum electro-spinning rate of the SF solution also increased. The maximum electro-spinning rate of a 13% SF solution could be increased 12× by increasing the electric field from 0.5 kV/cm (0.25 mL/h) to 2.5 kV/cm (3.0 mL/h). The dependence of the fiber diameter on the applied electric field was not significant when using less-concentrated SF solutions (7-9% SF). On the other hand, at higher SF concentrations the electric field had a greater effect on the resulting fiber diameter. The electric field had a minimal effect on the molecular conformation and crystallinity index of the electro-spun SF webs. Copyright © 2016 Elsevier B.V. All rights reserved.
Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei
2016-07-30
In Rechargeable Wireless Sensor Networks (R-WSNs), achieving the maximum data collection rate requires sensors to operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state most of the time in order to recharge its battery and use energy prudently. In addition, a sensor cannot always conserve energy if the network is able to harvest excessive energy from the environment, owing to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on the application scenario. Since a high, ideally maximum, data collection rate is the ultimate objective of sensor deployment, the surplus energy of a node can be utilized to strengthen packet delivery efficiency and improve the data generating rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes an upper bound on the data generation rate of a network by formulating its maximization as a linear programming problem. A dual problem is then constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology-control scheme is adopted to improve the network's performance. Through extensive simulations and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks.
On the rate of convergence of the maximum likelihood estimator of a K-monotone density
Gao, FuChang; Wellner, Jon A.
2009-01-01
Bounds for the bracketing entropy of the classes of bounded K-monotone functions on [0, A] are obtained under both the Hellinger distance and the Lp(Q) distance, where 1 ≤ p < ∞ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a K-monotone density.
Kühl, P.; Banjac, S.; Heber, B.; Labrenz, J.; Müller-Mellin, R.; Terasa, C.
Forbush (1937) was the first to observe intensity decreases lasting a few days, utilizing ionization chambers. A number of studies on Forbush decreases (FDs) have since been performed utilizing neutron monitors and space instrumentation. The amplitude of these variations can be as low as a few per mil, so intensity measurements need to be of high statistical accuracy. Richardson et al. (1996) therefore suggested utilizing the single-counter measurements of the guard counters of the IMP 8 and Helios E6 instruments. Like the above-mentioned instruments, the Electron Proton Helium INstrument (EPHIN) provides single counting rates. During the extended solar minimum in 2009 its guard detector counted about 25000 counts/minute, allowing intensity variations of less than 2 per mil to be determined using 30-minute averages. We performed a GEANT4 simulation of the instrument in order to determine the energy response of all single detectors. It is shown here that their energy thresholds are much lower than those of neutron monitors, and we therefore developed a criterion that allows FDs to be investigated during quiet time periods.
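The quoted sensitivity follows directly from Poisson counting statistics; a quick check of the 2 per mil figure for 30-minute averages:

```python
import math

counts_per_minute = 25_000               # EPHIN guard detector, 2009 solar minimum
N = counts_per_minute * 30               # counts accumulated in a 30-minute average
rel_uncertainty = 1.0 / math.sqrt(N)     # Poisson relative uncertainty: sigma/N = 1/sqrt(N)
# About 1.2 per mil, comfortably below the 2 per mil sensitivity quoted above.
```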
A physics investigation of deadtime losses in neutron counting at low rates with Cf252
Evans, Louise G [Los Alamos National Laboratory; Croft, Stephen [CANBERRA INDUSTRIES, INC.
2009-01-01
²⁵²Cf spontaneous fission sources are used for the characterization of neutron counters and the determination of calibration parameters, including both neutron coincidence counting (NCC) and neutron multiplicity deadtime (DT) parameters. Even at low event rates, temporally-correlated neutron counting using ²⁵²Cf suffers a deadtime effect, meaning that, in contrast to counting a random neutron source (e.g. AmLi, to a close approximation), DT losses do not vanish in the low rate limit. This is because neutrons are emitted from spontaneous fission events in time-correlated 'bursts', and are detected over a short period commensurate with their lifetime in the detector (characterized by the system die-away time, τ). Thus, even when detected neutron events from different spontaneous fissions are unlikely to overlap in time, neutron events within the detected 'burst' are subject to intrinsic DT losses. Intrinsic DT losses for dilute Pu will be lower since the multiplicity distribution is softer, but real items also experience self-multiplication, which can increase the 'size' of the bursts. Traditional NCC DT correction methods do not include the intrinsic (within-burst) losses. We have proposed new forms of the traditional NCC Singles and Doubles DT correction factors. In this work, we apply Monte Carlo neutron pulse train analysis to investigate the functional form of the deadtime correction factors for an updating deadtime. Modeling is based on a high-efficiency ³He neutron counter with short die-away time, representing an ideal ³He-based detection system. The physics of deadtime losses at low rates is explored and presented. It is observed that the new forms are applicable and offer more accurate correction than the traditional forms.
Reber, T. J.; Plumb, N. C.; Waugh, J. A.; Dessau, D. S. [Department of Physics, University of Colorado, Boulder, Colorado 80309-0390 (United States)
2014-04-15
Detector counting rate nonlinearity, though a known problem, is commonly ignored in the analysis of angle resolved photoemission spectroscopy where modern multichannel electron detection schemes using analog intensity scales are used. We focus on a nearly ubiquitous “inverse saturation” nonlinearity that makes the spectra falsely sharp and beautiful. These artificially enhanced spectra limit accurate quantitative analysis of the data, leading to mistaken spectral weights, Fermi energies, and peak widths. We present a method to rapidly detect and correct for this nonlinearity. This algorithm could be applicable for a wide range of nonlinear systems, beyond photoemission spectroscopy.
Study of the counting rate capability of MRPC detectors built with soda lime glass
Forster, R.; Margoto Rodríguez, O.; Park, W.; Rodríguez Rodríguez, A.; Williams, M. C. S.; Zichichi, A.; Zuyeuski, R.
2016-09-01
We report the results of three MRPC detectors built with soda lime glass and tested in the T10 beam line at CERN. The detectors consist of a stack of 280 μm thick glass sheets with 6 gaps of 220 μm. We built two identical MRPCs, except that one had the edges of the glass treated with resistive paint. A third detector was built with one HV electrode painted as strips. The detectors' efficiency and time resolution were studied at different particle fluxes in a pulsed beam environment. The results do not show any improvement with the painted-edge technique at higher particle flux. We heated the MRPCs up to 40 °C to evaluate the influence of temperature on the rate capability; the results indicate that warming improves the rate capability. The dark count rates show a significant dependence on the temperature.
A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.
Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming
2008-01-01
This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. The ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or from the motion of the wearer, such that the signal quality may degrade dramatically. To overcome these obstacles, the proposed heart-rate estimator first employs a subspace approach to remove the wandering baseline, then uses a simple nonlinear absolute-value operation to reduce high-frequency noise contamination, and finally applies maximum likelihood estimation to estimate the interval between R-R peaks. A parameter derived as a byproduct of the maximum likelihood estimation is also proposed as an indicator of signal quality. To achieve real-time operation, we develop a simple adaptive algorithm based on the numerical power method to realize the subspace filter, and apply the fast Fourier transform (FFT) for the realization of the correlation technique, such that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.
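The core of such an estimator can be sketched as a maximum-correlation search for the R-R period. This is a toy Python version under stated assumptions: the paper's subspace baseline filter, FFT realization and FPGA details are omitted, the direct-sum autocorrelation stands in for the FFT correlation, and the 40-200 bpm band limits are hypothetical defaults:

```python
def estimate_rr_interval(signal, fs, min_bpm=40, max_bpm=200):
    """Estimate the R-R interval (s) as the lag, within a physiological
    band, that maximizes the signal's autocorrelation."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]            # remove DC offset
    lag_lo = int(fs * 60.0 / max_bpm)         # shortest plausible period
    lag_hi = min(int(fs * 60.0 / min_bpm), n - 1)
    best_lag, best_corr = lag_lo, float("-inf")
    for lag in range(lag_lo, lag_hi + 1):
        c = sum(x[i] * x[i + lag] for i in range(n - lag))
        if c > best_corr:
            best_corr, best_lag = c, lag
    return best_lag / fs

# Synthetic train of R peaks every 0.8 s sampled at 100 Hz (75 bpm)
ecg = [1.0 if i % 80 == 0 else 0.0 for i in range(800)]
rr = estimate_rr_interval(ecg, fs=100)        # close to 0.8 s
```

Replacing the O(n·lags) direct sum with an FFT-based autocorrelation is exactly the optimization the paper exploits for its hardware implementation.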
Seymour, Roger S
2010-09-01
The effect of the size of inflorescences, flowers and cones on the maximum rate of heat production is analysed allometrically in 23 species of thermogenic plants having diverse structures and ranging between 1.8 and 600 g. Total respiration rate (μmol s⁻¹) scales allometrically with spadix mass (M, g) in 15 species of Araceae. Thermal conductance (C, mW °C⁻¹) for spadices scales according to C = 18.5 M^0.73. Mass does not significantly affect the difference between floral and air temperature. Aroids with exposed appendices with high surface area have high thermal conductance, consistent with the need to vaporize attractive scents. True flowers have significantly lower heat production and thermal conductance, because closed petals retain heat that benefits resident insects. The florets on aroid spadices, either within a floral chamber or spathe, have intermediate thermal conductance, consistent with mixed roles. Mass-specific rates of respiration are variable between species, but reach 900 nmol s⁻¹ g⁻¹ in aroid male florets, exceeding the rates of all other plants and even most animals. Maximum mass-specific respiration appears to be limited by oxygen delivery through individual cells. Reducing mass-specific respiration may be one selective influence on the evolution of large size in thermogenic flowers.
Miura, Shota; Odashima, Satoshi
2016-03-01
Stable quality in the delivery of 18F-fluoro-2-deoxy-D-glucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) requires a suitable acquisition time, which can be obtained from an accurate true count of 18F-FDG. However, the true count is influenced by body mass index (BMI) and by attenuation of 18F-FDG. In order to remove these influences, we have developed a new method (the actual measurement method) to measure the actual true count rate at the sub-pubic thigh, which allows us to calculate a suitable acquisition time. In this study, we aimed to verify the acquisition count obtained through our new method in terms of two categories: (1) the accuracy of the acquisition count and (2) evaluation of clinical images using physical indices. Our actual measurement method was designed to obtain a suitable acquisition time through the following procedure: a true count rate of the sub-pubic thigh was measured by the PET detector and used as a standard true count rate, which was then converted into an acquisition time. This method was retrospectively applied to 150 patients who received 18F-FDG administrations of 109.7 to 336.8 MBq and whose body weight ranged from 37 to 95.4 kg. The accuracy of the true count was evaluated by comparing the relationships of the true count to BMI and to the administered dose of 18F-FDG. The PET/CT images obtained by our actual measurement method were assessed using physical indices. Our new method resulted in an accurate true count that was influenced by neither BMI nor the administered dose of 18F-FDG, and yielded PET/CT images satisfying the recommended criteria of the physical indices in all patients.
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard
2008-01-01
The specific growth rate for P. aeruginosa and four mutator strains, mutT, mutY, mutM and mutY–mutM, is estimated by a suggested Maximum Likelihood (ML) method which takes the autocorrelation of the observations into account. For each bacteria strain, six wells of optical density (OD) measurements ... The model that best describes the data is a model taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded that the specific growth rate is the same for all bacteria strains. This study highlights the importance of carrying out an explorative examination of residuals in order to make a correct parametrization of a model, including the covariance structure. The ML method is shown to be a strong tool as it enables ...
Rondeau, M; Rouleau, M
1981-06-01
Using semen from bull, boar and stallion, as well as different spectrophotometers, we established the calibration curves relating the optical density of a sperm sample to the sperm count obtained with a hemacytometer. The results show that, for a given spectrophotometer, the calibration curve is not characteristic of the animal species we studied. The differences in size of the spermatozoa are probably too small to account for the anticipated species-specificity of the calibration curve. Furthermore, the fact that different dilution rates must be used, because of the vastly different concentrations of spermatozoa characteristic of these species, has no effect on the calibration curves, since the effect of the dilution rate is shown to be artefactual. On the other hand, for a given semen, the calibration curve varies depending upon the spectrophotometer used. However, if two instruments have the same characteristics in terms of spectral bandwidth, their calibration curves are not statistically different.
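Constructing such a calibration curve is an ordinary least-squares line fit of count against optical density. A minimal Python sketch, with made-up illustrative numbers rather than the paper's data:

```python
def fit_calibration(od, count):
    """Least-squares line count = a * od + b relating optical density
    to the hemacytometer sperm count."""
    n = len(od)
    mx, my = sum(od) / n, sum(count) / n
    sxx = sum((x - mx) ** 2 for x in od)
    sxy = sum((x - mx) * (y - my) for x, y in zip(od, count))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical readings: OD versus count (10^6 spermatozoa/mL),
# constructed to lie exactly on count = 550 * OD.
od = [0.10, 0.20, 0.35, 0.50, 0.80]
count = [55.0, 110.0, 192.5, 275.0, 440.0]
a, b = fit_calibration(od, count)
```

Comparing the fitted (a, b) pairs across instruments, with confidence intervals, is what the statistical comparison in the abstract amounts to.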
Performance of Drift-Tube Detectors at High Counting Rates for High-Luminosity LHC Upgrades
Bittner, Bernhard; Kortner, Oliver; Kroha, Hubert; Manfredini, Alessandro; Nowak, Sebastian; Ott, Sebastian; Richter, Robert; Schwegler, Philipp; Zanzi, Daniele; Biebel, Otmar; Hertenberger, Ralf; Ruschke, Alexander; Zibell, Andre
2016-01-01
The performance of pressurized drift-tube detectors at very high background rates has been studied at the Gamma Irradiation Facility (GIF) at CERN and in an intense 20 MeV proton beam at the Munich Van de Graaff tandem accelerator for applications in large-area precision muon tracking at high-luminosity upgrades of the Large Hadron Collider (LHC). The ATLAS muon drift-tube (MDT) chambers with 30 mm tube diameter have been designed to cope with γ and neutron background hit rates of up to 500 Hz/cm². Background rates of up to 14 kHz/cm² are expected at LHC upgrades. The test results with standard MDT readout electronics show that the reduction of the drift-tube diameter to 15 mm, while leaving the operating parameters unchanged, vastly increases the rate capability well beyond the requirements. The development of new small-diameter muon drift-tube (sMDT) chambers for LHC upgrades is completed. Further improvements of tracking efficiency and spatial resolution at high counting rates will be achieved with ...
Determination of zero-coupon and spot rates from treasury data by maximum entropy methods
Gzyl, Henryk; Mayoral, Silvia
2016-08-01
An interesting and important inverse problem in finance consists of the determination of spot rates or prices of zero coupon bonds when the only information available consists of the prices of a few coupon bonds. A variety of methods have been proposed to deal with this problem. Here we present variants of a non-parametric method to treat such problems, which neither imposes an analytic form on the rates or bond prices, nor imposes a model for the (random) evolution of the yields. The procedure consists of transforming the problem of the determination of the prices of the zero coupon bonds into a linear inverse problem with convex constraints, and then applying the method of maximum entropy in the mean. This method is flexible enough to provide a possible solution to a mispricing problem.
Perkell, J S; Hillman, R E; Holmberg, E B
1994-08-01
In previous reports, aerodynamic and acoustic measures of voice production were presented for groups of normal male and female speakers [Holmberg et al., J. Acoust. Soc. Am. 84, 511-529 (1988); J. Voice 3, 294-305 (1989)] that were used as norms in studies of voice disorders [Hillman et al., J. Speech Hear. Res. 32, 373-392 (1989); J. Voice 4, 52-63 (1990)]. Several of the measures were extracted from glottal airflow waveforms that were derived by inverse filtering a high-time-resolution oral airflow signal. Recently, the methods have been updated and a new study of additional subjects has been conducted. This report presents previous (1988) and current (1993) group mean values of sound pressure level, fundamental frequency, maximum airflow declination rate, ac flow, peak flow, minimum flow, ac-dc ratio, inferred subglottal air pressure, average flow, and glottal resistance. Statistical tests indicate overall group differences and differences for values of several individual parameters between the 1988 and 1993 studies. Some inter-study differences in parameter values may be due to sampling effects and minor methodological differences; however, a comparative test of 1988 and 1993 inverse filtering algorithms shows that some lower 1988 values of maximum flow declination rate were due at least in part to excessive low-pass filtering in the 1988 algorithm. The observed differences should have had a negligible influence on the conclusions of our studies of voice disorders.
Optimization of the ATLAS (s)MDT readout electronics for high counting rates
Kortner, Oliver; Kroha, Hubert; Nowak, Sebastian; Schmidt-Sommerfeld, Korbinian [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Foehringer Ring 6, 80805 Muenchen (Germany)
2016-07-01
In the ATLAS muon spectrometer, Monitored Drift Tube (MDT) chambers are used for precise muon track measurement. For the high background rates expected at the HL-LHC, which are mainly due to neutrons and photons produced by interactions of the proton collision products in the detector and shielding, new small-diameter muon drift-tube (sMDT) chambers with half the drift-tube diameter of the MDT chambers and ten times higher rate capability have been developed. The standard MDT readout electronics uses bipolar shaping in front of a discriminator. This shaping leads to an undershoot of equal charge but opposite polarity following each pulse. With increasing count rates, the probability that a subsequent pulse falls into the undershoot of a preceding background hit also increases, deteriorating the signal pulses and leading to losses in muon efficiency and drift-tube spatial resolution. In order to mitigate these so-called signal pile-up effects, new readout electronics with active baseline restoration (BLR) is under development. Discrete prototype electronics with BLR functionality has been tested in laboratory measurements and under high γ-irradiation rates at CERN's Gamma Irradiation Facility, including muon beamtime measurements. Results of these measurements are presented.
A model of the high count rate performance of NaI(Tl)-based PET detectors
Wear, J.A.; Karp, J.S.; Freifelder, R. [Univ. of Pennsylvania, Philadelphia, PA (United States). Dept. of Radiology; Mankoff, D.A. [Univ. of Washington, Seattle, WA (United States). Dept. of Radiology; Muehllehner, G. [UGM Medical Systems, Philadelphia, PA (United States)
1998-06-01
A detailed model of the response of large-area NaI(Tl) detectors used in PET and of their triggering and data-acquisition electronics has been developed. It allows one to examine the limits that detector performance places on the imaging system, namely degradation from light pile-up and from deadtime in triggering and event processing. Comparisons of simulation results to measurements from the HEAD PENN-PET scanner have been performed to validate the Monte Carlo model. The model was then used to predict improvements in the high count rate performance of the HEAD PENN-PET scanner with different signal integration times, light response functions, and detectors.
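The full Monte Carlo model is not reproduced here, but the two textbook dead-time limits that detector count-rate models of this kind interpolate between can be sketched as follows (the dead-time value is illustrative, not from the paper):

```python
import math

def nonparalyzable(n, tau):
    # Observed rate for true rate n when each *recorded* event blocks
    # the detector for a fixed dead time tau: m = n / (1 + n*tau).
    return n / (1.0 + n * tau)

def paralyzable(n, tau):
    # Observed rate when every event, recorded or not, extends the
    # dead period: m = n * exp(-n*tau).
    return n * math.exp(-n * tau)

tau = 200e-9  # 200 ns, an illustrative dead time
for n in (1e5, 1e6, 1e7):
    print(f"{n:.0e}: NP {nonparalyzable(n, tau):.3e}  P {paralyzable(n, tau):.3e}")
```

The nonparalyzable curve saturates at 1/τ, while the paralyzable curve peaks at n = 1/τ and then rolls over; hybrid ("semi-nonparalyzable") models mix the two behaviors.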
Two dimensional localization of electrons and positrons under high counting rate
Barbosa, A.F.; Anjos, J.C.; Sanchez-Hernandez, A. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Pepe, I.M.; Barros, N. [Bahia Univ., Salvador, BA (Brazil). Inst. de Fisica
1997-12-01
The construction of two wire chambers for the experiment E831 at Fermilab is reported. Each chamber includes three wire planes - one anode and two orthogonal cathodes - in which the wires operate as independent proportional counters. One of the chambers is rotated with respect to the other, so that four position coordinates may be encoded for a charged particle crossing both chambers. Spatial resolution is determined by the wire pitch: 1 mm for cathodes, 2 mm for anodes. 320 electronic channels are involved in the detection system readout. Global counting rates in excess of 10^7 events per second have been measured, while the average electron-positron beam intensity may be as high as 3 x 10^7 events per second. (author) 5 refs., 9 figs.
Inoue, Kazumasa; Kurosawa, Hideo; Tanaka, Takashi; Fukushi, Masahiro; Moriyama, Noriyuki; Fujii, Hirofumi
2012-07-01
The optimal injection dose for imaging of the pelvic region in 3D FDG PET tests was investigated based on the noise-equivalent count (NEC) rate with use of an anthropomorphic pelvis phantom. Count rates obtained from an anthropomorphic pelvis phantom were compared with those of pelvic images of 60 patients. The correlation between single photon count rates obtained from the pelvic regions of patients and the doses per body weight was also evaluated. The radioactivity at the maximum NEC rate was defined as an optimal injection dose, and the optimal injection dose for the body weight was evaluated. The image noise of a phantom was also investigated. Count rates obtained from an anthropomorphic pelvis phantom corresponded well with those from the human pelvis. The single photon count rate obtained from the phantom was 9.9 Mcps at the peak NEC rate. The coefficient of correlation between the single photon count rate and the dose per weight obtained from patient data was 0.830. The optimal injection doses for a patient weighing 60 kg were estimated to be 375 MBq (6.25 MBq/kg) and 435 MBq (7.25 MBq/kg) for uptake periods of 60 and 90 min, respectively. The image noise was minimal at the peak NEC rate. We successfully estimated the optimal injection dose based on the NEC rate in the pelvic region in 3D FDG PET tests using an anthropomorphic pelvis phantom.
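The peak-NEC logic used here can be sketched with the standard NEC definition. The activity-dependence coefficients below are invented for illustration (they are not the phantom data from the study); they only reproduce the qualitative behavior that trues saturate with dead time while randoms grow quadratically, so NEC peaks at an intermediate activity.

```python
import math

def nec_rate(trues, scatter, randoms, k=2.0):
    # Standard noise-equivalent count rate; k = 2 for delayed-window
    # randoms subtraction, k = 1 for a noiseless randoms estimate.
    return trues ** 2 / (trues + scatter + k * randoms)

def counts_at(activity):
    # Toy model: trues and scatter scale linearly with activity but
    # suffer paralyzable dead-time losses; randoms grow quadratically.
    live = math.exp(-0.004 * activity)
    return 10.0 * activity * live, 4.0 * activity * live, 0.02 * activity ** 2

peak = max(range(1, 1001), key=lambda a: nec_rate(*counts_at(a)))
print("NEC peaks at activity index", peak)
```

Injecting beyond the peak only adds noise, which is why the study defines the optimal dose at the NEC maximum.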
Michael D. Hare
2014-09-01
A field trial in northeast Thailand during 2011–2013 compared the establishment and growth of two Panicum maximum cultivars, Mombasa and Tanzania, sown at seeding rates of 2, 4, 6, 8, 10 and 12 kg/ha. In the first 3 months of establishment, higher sowing rates produced significantly more DM than sowing at 2 kg/ha, but thereafter there were no significant differences in total DM production between sowing rates of 2–12 kg/ha. Lower sowing rates produced fewer tillers/m² than higher sowing rates, but these fewer tillers were significantly heavier than the more numerous smaller tillers produced by higher sowing rates. Mombasa produced 23% more DM than Tanzania in successive wet seasons (7,060 vs. 5,712 kg DM/ha from 16 June to 1 November 2011; and 16,433 vs. 13,350 kg DM/ha from 25 April to 24 October 2012). Both cultivars produced similar DM yields in the dry seasons (November–April), averaging 2,000 kg DM/ha in the first dry season and 1,750 kg DM/ha in the second dry season. Mombasa produced taller tillers (104 vs. 82 cm), longer leaves (60 vs. 47 cm), wider leaves (2 vs. 1.8 cm) and heavier tillers (1 vs. 0.7 g) than Tanzania, but fewer tillers/m² (260 vs. 304). If farmers improve soil preparation and place more emphasis on sowing techniques, there is potential to dramatically reduce seed costs. Keywords: Guinea grass, tillering, forage production, seeding rates, Thailand. DOI: 10.17138/TGFT(2)246-253
Nuclear photonics at ultra-high counting rates and higher multipole excitations
Thirolf, P. G.; Habs, D.; Filipescu, D.; Gernhaeuser, R.; Guenther, M. M.; Jentschel, M.; Marginean, N.; Pietralla, N. [Fakultaet f. Physik, Ludwig-Maximilians-Universitaet Muenchen, Garching (Germany); Fakultaet f. Physik, Ludwig-Maximilians-Universitaet Muenchen, Garching, Germany and Max-Planck-Institute f. Quantum Optics, Garching (Germany); IFIN-HH, Bucharest-Magurele (Romania); Physik Department E12,Technische Universitaet Muenchen, Garching (Germany); Max-Planck-Institute f. Quantum Optics, Garching (Germany); Institut Laue-Langevin, Grenoble (France); Physik Department E12,Technische Universitaet Muenchen, Garching (Germany); Institut f. Kernphysik, Technische Universitaet Darmstadt (Germany)
2012-07-09
Next-generation γ beams from laser Compton-backscattering facilities like ELI-NP (Bucharest) or MEGa-Ray (Livermore) will drastically exceed the photon flux presently available at existing facilities, reaching or even exceeding 10^13 γ/s. The beam structure presently foreseen for MEGa-Ray and ELI-NP builds on macro-pulses (~120 Hz) for the electron beam, accelerated with X-band technology at 11.5 GHz, resulting in a micro-structure of 87 ps spacing between the electron pulses that act as mirrors for a counterpropagating intense laser. Every 8.3 ms, a γ-pulse series with a duration of about 100 ns will impinge on the target, resulting in an instantaneous photon flux of about 10^18 γ/s and thus introducing major pile-up challenges. Novel γ optics will be applied to monochromatize the γ beam to ultimately ΔE/E ≈ 10^-6. Thus level-selective spectroscopy of higher multipole excitations will become accessible with good contrast for the first time. Fast-responding γ detectors, e.g. based on advanced scintillator technology such as LaBr3(Ce), allow for measurements at count rates as high as 10^6-10^7 γ/s without significant drop of performance. Data handling adapted to the beam conditions could be performed by fast digitizing electronics able to sample data traces during the micro-pulse duration, while the subsequent macro-pulse gap of ca. 8 ms leaves ample time for data readout. A ball of LaBr3 detectors with digital readout appears to be best suited for this novel type of nuclear photonics at ultra-high counting rates.
Evangelia Karagianni
2016-04-01
By utilizing meteorological data such as relative humidity, temperature, pressure, rain rate and precipitation duration at eight (8) stations in the Aegean Archipelagos over six recent years (2007–2012), the effect of the weather on electromagnetic wave propagation is studied. The EM wave propagation characteristics depend on atmospheric refractivity and consequently on rain rate, which vary randomly in time and space. Therefore the statistics of radio refractivity, rain rate and related propagation effects are of main interest. This work investigates the maximum value of rain rate in monthly rainfall records for a 5-min interval, comparing it with different values of integration time as well as different percentages of time. The main goal is to determine the attenuation level for microwave links based on local rainfall data for various sites in Greece (L-zone), namely the Aegean Archipelagos, with a view to improved accuracy as compared with the more generic zone data available. A measurement of rain attenuation for a link in the S-band has been carried out, and the data compared with predictions based on the standard ITU-R method.
Construction and Test of Muon Drift Tube Chambers for High Counting Rates
Schwegler, Philipp; Dubbert, Jörg
2010-01-01
Since the start of operation of the Large Hadron Collider (LHC) at CERN on 20 November 2009, the instantaneous luminosity is steadily increasing. The muon spectrometer of the ATLAS detector at the LHC is instrumented with trigger and precision tracking chambers in a toroidal magnetic field. Monitored Drift-Tube (MDT) chambers are employed as precision tracking chambers, complemented by Cathode Strip Chambers (CSC) in the very forward region where the background counting rate due to neutrons and γ's produced in shielding material and detector components is too high for the MDT chambers. After several upgrades of the CERN accelerator system over the coming decade, the instantaneous luminosity is expected to be raised to about five times the LHC design luminosity. This necessitates replacement of the muon chambers in the regions with the highest background radiation rates in the so-called Small Wheels, which constitute the innermost layers of the muon spectrometer end-caps, by new detectors with higher rate cap...
Kalafut, Bennett; Visscher, Koen
2008-10-01
Optical tweezers experiments allow us to probe the role of force and mechanical work in a variety of biochemical processes. However, observable states do not usually correspond in a one-to-one fashion with the internal state of an enzyme or enzyme-substrate complex. Different kinetic pathways yield different distributions for the dwells in the observable states. Furthermore, the dwell-time distribution will be dependent upon force, and upon where in the biochemical pathway force acts. I will present a maximum-likelihood method for identifying rate constants and the locations of force-dependent transitions in transcription initiation by T7 RNA Polymerase. This method is generalizable to systems with more complicated kinetic pathways in which there are two observable states (e.g. bound and unbound) and an irreversible final transition.
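The talk's multi-state, force-dependent likelihood is not reproducible from the abstract, but the core idea (pick the rate constants that maximize the likelihood of the observed dwell times) can be sketched for the simplest one-step case, where the maximum-likelihood estimate has a closed form. The rate constant and sample size below are invented for illustration.

```python
import math
import random

random.seed(7)
k_true = 2.5  # illustrative rate constant, s^-1

# Simulated dwell times for a single-exponential process.
dwells = [random.expovariate(k_true) for _ in range(20000)]

# Maximizing the log-likelihood sum(log(k) - k*t) over k gives
# k_hat = 1 / mean(dwells).  Multi-state kinetic schemes and
# force-dependent transitions require numerical maximization instead,
# but the principle is the same.
k_hat = len(dwells) / sum(dwells)
print(f"k_hat = {k_hat:.3f} (true {k_true})")
```

The standard error of this estimate scales as k/√N, which is why long dwell-time records are needed to resolve nearby kinetic models.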
Asymptotic correctability of Bell-diagonal quantum states and maximum tolerable bit error rates
Ranade, Kedar S.; Alber, Gernot
2005-01-01
The general conditions are discussed which quantum state purification protocols have to fulfill in order to be capable of purifying Bell-diagonal qubit-pair states, provided they consist of steps that map Bell-diagonal states to Bell-diagonal states and they finally apply a suitably chosen Calderbank-Shor-Steane code to the outcome of such steps. As a main result a necessary and a sufficient condition on asymptotic correctability are presented, which relate this problem to the magnitude of a characteristic exponent governing the relation between bit and phase errors under the purification steps. These conditions allow a straightforward determination of maximum tolerable bit error rates of quantum key distribution protocols whose security analysis can be reduced to the purification of Bell-diagonal states.
Phylogenetic prediction of the maximum per capita rate of population growth.
Fagan, William F; Pearson, Yanthe E; Larsen, Elise A; Lynch, Heather J; Turner, Jessica B; Staver, Hilary; Noble, Andrew E; Bewick, Sharon; Goldberg, Emma E
2013-07-22
The maximum per capita rate of population growth, r, is a central measure of population biology. However, researchers can only directly calculate r when adequate time series, life tables and similar datasets are available. We instead view r as an evolvable, synthetic life-history trait and use comparative phylogenetic approaches to predict r for poorly known species. Combining molecular phylogenies, life-history trait data and stochastic macroevolutionary models, we predicted r for mammals of the Caniformia and Cervidae. Cross-validation analyses demonstrated that, even with sparse life-history data, comparative methods estimated r well and outperformed models based on body mass. Values of r predicted via comparative methods were in strong rank agreement with observed values and reduced mean prediction errors by approximately 68 per cent compared with two null models. We demonstrate the utility of our method by estimating r for 102 extant species in these mammal groups with unknown life-history traits.
Statistical properties of the maximum Lyapunov exponent calculated via the divergence rate method.
Franchi, Matteo; Ricci, Leonardo
2014-12-01
The embedding of a time series provides a basic tool to analyze dynamical properties of the underlying chaotic system. To this purpose, the choice of the embedding dimension and lag is crucial. Although several methods have been devised to tackle the issue of the optimal setting of these parameters, a conclusive criterion to make the most appropriate choice is still lacking. An accepted procedure to rank different embedding methods relies on the evaluation of the maximum Lyapunov exponent (MLE) out of embedded time series that are generated by chaotic systems with explicit analytic representation. The MLE is evaluated as the local divergence rate of nearby trajectories. Given a system, embedding methods are ranked according to how close such MLE values are to the true MLE. This is provided by the so-called standard method in a way that exploits the mathematical description of the system and does not require embedding. In this paper we study the dependence of the finite-time MLE evaluated via the divergence rate method on the embedding dimension and lag in the case of time series generated by four systems that are widely used as references in the scientific literature. We develop a completely automatic algorithm that provides the divergence rate and its statistical uncertainty. We show that the uncertainty can provide useful information about the optimal choice of the embedding parameters. In addition, our approach allows us to find which systems provide suitable benchmarks for the comparison and ranking of different embedding methods.
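The divergence-rate evaluation of the MLE described above can be sketched on a map with a known exponent. This is a minimal Benettin-style two-trajectory estimate for the fully chaotic logistic map (true MLE = ln 2), not the authors' automatic algorithm, and it skips the embedding step since the map state is observed directly; the seed, separation d0 and step count are illustrative choices.

```python
import math

def f(x):
    return 4.0 * x * (1.0 - x)  # logistic map at r = 4, MLE = ln 2

d0 = 1e-9       # renormalization distance for the companion trajectory
x = 0.1234      # arbitrary generic seed
y = x + d0
lam_sum = 0.0
steps = 40000
for _ in range(steps):
    x, y = f(x), f(y)
    # one-step stretching of the separation; guard against exact
    # floating-point cancellation near the critical point x = 0.5
    d = abs(y - x) or d0 * 1e-6
    lam_sum += math.log(d / d0)
    # renormalize the companion back to distance d0, toward [0, 1]
    y = x + d0 if x < 0.5 else x - d0

lam = lam_sum / steps
print(f"estimated MLE = {lam:.4f}  (ln 2 = {math.log(2):.4f})")
```

The statistical spread of the per-step stretching terms is exactly the kind of uncertainty the paper turns into a criterion for choosing embedding parameters.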
High-voltage integrated active quenching circuit for single photon count rate up to 80 Mcounts/s.
Acconcia, Giulia; Rech, Ivan; Gulinatti, Angelo; Ghioni, Massimo
2016-08-01
Single photon avalanche diodes (SPADs) have seen rapid improvement in recent years. In particular, custom technologies specifically developed to fabricate SPAD devices give the designer the freedom to pursue the best detector performance required by applications. A significant breakthrough in this field is the recent introduction of a red-enhanced SPAD (RE-SPAD) technology, capable of attaining a good photon detection efficiency in the near-infrared range (e.g. 40% at a wavelength of 800 nm) while maintaining a remarkable timing resolution of about 100 ps full width at half maximum. Being planar, the RE-SPAD custom technology opened the way to the development of SPAD arrays particularly suited for demanding applications in the life sciences. However, to achieve such excellent performance, custom SPAD detectors must be operated with a purpose-designed external active quenching circuit (AQC). Next steps toward compact and practical multichannel systems will require a new generation of monolithically integrated AQC arrays. In this paper we present a new, fully integrated AQC fabricated in a high-voltage 0.18 µm CMOS technology, able to provide quenching pulses up to 50 V with fast leading and trailing edges. Although specifically designed for optimal operation of RE-SPAD devices, the new AQC is quite versatile: it can be used with any SPAD detector, regardless of its fabrication technology, reaching remarkable count rates up to 80 Mcounts/s and generating a photon detection pulse with a timing jitter as low as 119 ps full width at half maximum. The compact design of our circuit has been specifically laid out to make this IC a suitable building block for monolithically integrated AQC arrays.
Preamplifier development for high count-rate, large dynamic range readout of inorganic scintillators
Keshelashvili, Irakli; Erni, Werner; Steinacher, Michael; Krusche, Bernd; Collaboration: PANDA-Collaboration
2013-07-01
Electromagnetic calorimeters are central components of many experiments in nuclear and particle physics. Modern "triggerless" detectors run at very high count rates and require good time and energy resolution as well as a large dynamic range. In addition, photosensors and preamplifiers must work in hostile environments (magnetic fields). Due to the latter constraint, mainly avalanche photodiodes (APDs), vacuum phototriodes (VPTs) and vacuum phototetrodes (VPTTs) are used. A disadvantage is their low gain, which together with the other requirements is a challenge for the preamplifier design. Our group has developed a special low-noise/low-power (LNP) preamplifier for this purpose. It will be used to equip the PANDA EMC forward end-cap (dynamic range 15,000, rate 1 MHz), where the PWO-II crystals and preamplifiers have to run in an environment cooled down to -25°C. A further application is the upgrade of the Crystal Barrel detector at the Bonn ELSA accelerator with APD readout, for which temperature compensation of the APD gain and good time resolution are necessary. The development and all test procedures after mass production, carried out by our group at Basel University during the past several years, will be reported.
Alvah C. Stahlnecker IV
2008-12-01
A percentage of either measured or predicted maximum heart rate is commonly used to prescribe and measure exercise intensity. However, maximum heart rate in athletes may be greater during competition or training than during laboratory exercise testing. Thus, the aim of the present investigation was to determine if endurance-trained runners train and compete at or above laboratory measures of 'maximum' heart rate. Maximum heart rates were measured utilising a treadmill graded exercise test (GXT) in a laboratory setting using 10 female and 10 male National Collegiate Athletic Association (NCAA) division 2 cross-country and distance-event track athletes. Maximum training and competition heart rates were measured during a high-intensity interval training day (TR HR) and during competition (COMP HR) at an NCAA meet. TR HR (207 ± 5.0 b·min-1; means ± SEM) and COMP HR (206 ± 4 b·min-1) were significantly (p < 0.05) higher than maximum heart rates obtained during the GXT (194 ± 2 b·min-1). The heart rate at the ventilatory threshold measured in the laboratory occurred at 83.3 ± 2.5% of the heart rate at VO2max, with no differences between the men and women. However, the heart rate at the ventilatory threshold measured in the laboratory was only 77% of the maximal COMP HR or TR HR. In order to optimize training-induced adaptation, training intensity for NCAA division 2 distance-event runners should not be based on laboratory assessment of maximum heart rate, but instead on maximum heart rate obtained either during training or during competition.
Photon-counting X-ray imaging at kilohertz frame rates
Ponchut, Cyril; Rigal, J M; Papillon, E; Vallerga, J; LaMarra, D; Mikulec, B
2007-01-01
A kilohertz frame rate readout system for Medipix2 chips is being developed at the European Synchrotron Radiation Facility (ESRF). This work was initiated with the aim of meeting the growing demand for fast and noise-free two-dimensional X-ray detection, particularly on synchrotron beamlines. Medipix2 is a photon-counting readout ASIC of 256×256 pixels with 55 μm pitch, developed in the framework of the Medipix collaboration managed by CERN. The ESRF readout system is based on a custom interface board named Parallel Readout Image Acquisition for Medipix (PRIAM), a fast PCI interface, and a Linux PC. The PRIAM board, implementing fast FIFOs and a programmable gate array, can read up to five Medipix2 circuits simultaneously in less than 0.3 ms using the 32-bit parallel readout port of Medipix2 and a 100 MHz clock frequency. This paper describes the architecture of the PRIAM board, reports on the first test results, and mentions some of the targeted applications.
Why does steady-state magnetic reconnection have a maximum local rate of order 0.1?
Liu, Yi-Hsin; Guo, F; Daughton, W; Li, H; Cassak, P A; Shay, M A
2016-01-01
Simulations suggest collisionless steady-state magnetic reconnection of Harris-type current sheets proceeds with a rate of order 0.1, independent of dissipation mechanism. We argue this long-standing puzzle is a result of constraints at the magnetohydrodynamic (MHD) scale. We perform a scaling analysis of the reconnection rate as a function of the opening angle made by the upstream magnetic fields, finding a maximum reconnection rate close to 0.2. The predictions compare favorably to particle-in-cell simulations of relativistic electron-positron and non-relativistic electron-proton reconnection. The fact that simulated reconnection rates are close to the predicted maximum suggests reconnection proceeds near the most efficient state allowed at the MHD-scale. The rate near the maximum is relatively insensitive to the opening angle, potentially explaining why reconnection has a similar fast rate in differing models.
Frigo, Everton [University of Sao Paulo, USP, Institute of Astronomy, Geophysics and Atmospheric Sciences, IAG/USP, Department of Geophysics, Sao Paulo, SP (Brazil); Savian, Jairo Francisco [Space Science Laboratory of Santa Maria, LACESM/CT, Southern Regional Space Research Center, CRS/INPE, MCT, Santa Maria, RS (Brazil); Silva, Marlos Rockenbach da; Lago, Alisson dal; Trivedi, Nalin Babulal [National Institute for Space Research, INPE/MCT, Division of Space Geophysics, DGE, Sao Jose dos Campos, SP (Brazil); Schuch, Nelson Jorge, E-mail: efrigo@iag.usp.br, E-mail: savian@lacesm.ufsm.br, E-mail: njschuch@lacesm.ufsm.br, E-mail: marlos@dge.inpe.br, E-mail: dallago@dge.inpe.br, E-mail: trivedi@dge.inpe.br [Southern Regional Space Research Center, CRS/INPE, MCT, Santa Maria, RS (Brazil)
2007-07-01
An analysis of geomagnetic storm variations and the count rate of cosmic ray muons recorded at the Brazilian Southern Space Observatory - OES/CRS/INPE-MCT - in Sao Martinho da Serra, RS, during November 2004 is presented in this paper. The geomagnetic measurements are made by a three-component low-noise fluxgate magnetometer, and the count rates of cosmic ray muons are recorded by a muon scintillator telescope (MST), both instruments installed at the Observatory. The fluxgate magnetometer measures variations in the three orthogonal components of Earth's magnetic field, H (North-South), D (East-West) and Z (Vertical), with a data sampling rate of 0.5 Hz. The muon scintillator telescope records hourly count rates. The arrival of a solar disturbance can be identified by observing the decrease in the muon count rate. The goal of this work is to describe the physical morphology and phenomenology observed during the geomagnetic storm of November 2004, using the H component of the geomagnetic field and the vertical channel V of the multi-directional muon detector in southern Brazil. (author)
Benício, Kadja; Dias, Fernando A. L.; Gualdi, Lucien P.; Aliverti, Andrea; Resqueti, Vanessa R.; Fregonezi, Guilherme A. F.
2015-01-01
OBJECTIVE: To assess the influence of diaphragmatic activation control (diaphC) on Sniff Nasal-Inspiratory Pressure (SNIP) and Maximum Relaxation Rate of inspiratory muscles (MRR) in healthy subjects. METHOD: Twenty subjects [9 male; age: 23 (SD=2.9) years; BMI: 23.8 (SD=3) kg/m²; FEV1/FVC: 0.9 (SD=0.1)] performed 5 sniff maneuvers at two different moments: with or without instruction on diaphC. Before the first maneuver, a brief explanation was given to the subjects on how to perform the sniff test. For the sniff test with diaphC, subjects were instructed to perform intense diaphragm activation. The best SNIP and MRR values were used for analysis. MRR was calculated as the first derivative of pressure over time (dP/dtmax) and was normalized by dividing it by the peak pressure (SNIP) from the same maneuver. RESULTS: SNIP values were significantly different between maneuvers with and without diaphC [without diaphC: -100 (SD=27.1) cmH2O; with diaphC: -72.8 (SD=22.3) cmH2O; p<0.0001], while normalized MRR values were not statistically different [without diaphC: -9.7 (SD=2.6); with diaphC: -8.9 (SD=1.5); p=0.19]. Without diaphC, 40% of the sample did not reach the appropriate sniff criteria found in the literature. CONCLUSION: Diaphragmatic control performed during the SNIP test influences the obtained inspiratory pressure, which is lower when diaphC is performed. However, there was no influence on normalized MRR. PMID:26578254
Ren Jingming; Wang Rusong
2004-01-01
Having argued the importance of China's sustainable development for global sustainability, the authors review China's achievements in sustainable development, especially its institutional construction. Counting the environment in officials' political performance rating system is regarded as a new institutional mechanism in China that facilitates its sustainable development and thereby global sustainability. Its significance is then discussed and future prospects are outlined. Finally, concrete and practical suggestions for the rating system are given.
Bulaevskaya, Vera
2010-01-01
This paper analyzes the sensitivity of antineutrino count rate measurements to changes in the fissile content of civil power reactors. Such measurements may be useful in IAEA reactor safeguards applications. We introduce a hypothesis testing procedure to identify statistically significant differences between the antineutrino count rate evolution of a standard 'baseline' fuel cycle and that of an anomalous cycle, in which plutonium is removed and replaced with an equivalent fissile worth of uranium. The test would allow an inspector to detect anomalous reactor activity, or to positively confirm that the reactor is operating in a manner consistent with its declared fuel inventory and power level. We show that with a reasonable choice of detector parameters, the test can detect replacement of 73 kg of plutonium in 90 days with 95% probability, while controlling the false positive rate at 5%. We show that some improvement on this level of sensitivity may be expected by various means, including use of the method i...
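The hypothesis-testing idea can be illustrated with the simplest building block: a two-sample z-test comparing Poisson counts accumulated over equal live times. The counts below are invented round numbers, not the paper's simulated antineutrino rates, and the paper's procedure (tracking the full count-rate evolution over a cycle) is considerably richer than this single comparison.

```python
import math

def anomalous(n_baseline, n_observed, z_crit=1.96):
    # Under H0 (same underlying rate) the difference of two Poisson
    # counts has variance ~ n_baseline + n_observed, so a normal
    # approximation gives a two-sided test at ~5% false-positive rate
    # for z_crit = 1.96.
    z = (n_baseline - n_observed) / math.sqrt(n_baseline + n_observed)
    return abs(z) > z_crit, z

# A fuel-inventory change shifts the detected count; a ~0.75% shift on
# 4e5 counts is flagged, while a 0.12% shift is within fluctuations.
flag_big, z_big = anomalous(400_000, 397_000)
flag_small, z_small = anomalous(400_000, 399_500)
print(flag_big, round(z_big, 2), flag_small, round(z_small, 2))
```

Achieving a stated detection probability (e.g. 95% power at 5% false positives, as in the abstract) then becomes a question of how many counts, i.e. how much detector mass and live time, are needed to make the expected shift several standard deviations wide.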
Optimum poultry litter rates for maximum profit vs. yield in cotton production
Cotton lint yield responds well to increasing rates of poultry litter fertilization, but little is known of how optimum rates for yield compare with optimum rates for profit. The objectives of this study were to analyze cotton lint yield response to poultry litter application rates, determine and co...
[no author listed]
2008-01-01
Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimator (MQLE) in QLNM is obtained. In an important case this rate is O(n^{-1/2}(log log n)^{1/2}), which is exactly the rate given by the law of the iterated logarithm for partial sums of i.i.d. variables, and thus cannot be improved.
On the maximum rate of change in sunspot number growth and the size of the sunspot cycle
Wilson, Robert M.
1990-01-01
Statistically significant correlations exist between the size (maximum amplitude) of the sunspot cycle and, especially, the maximum value of the rate of rise during the ascending portion of the sunspot cycle, where the rate of rise is computed either as the difference in the month-to-month smoothed sunspot number values or as the 'average rate of growth' in smoothed sunspot number from sunspot minimum. Based on the observed values of these quantities (equal to 10.6 and 4.63, respectively) as of early 1989, it is inferred that cycle 22's maximum amplitude will be about 175 ± 30 or 185 ± 10, respectively, where the error bars represent approximately twice the average error found during cycles 10-21 from the two fits.
California Environmental Health Tracking Program — This dataset contains case counts, rates, and confidence intervals of unintentional carbon monoxide poisoning (CO) inpatient hospitalizations and emergency...
FROST: a low-noise high-rate photon counting ASIC for X-ray applications
Prest, M. E-mail: prest@ts.infn.it; Vallazza, E.; Chiavacci, M.; Mariani, R.; Motto, S.; Neri, M.; Scantamburlo, N.; Arfelli, F.; Conighi, A.; Longo, R.; Olivo, A.; Pani, S.; Poropat, P.; Rashevsky, A.; Rigon, L.; Tromba, G.; Castelli, E
2001-04-01
FRONTier RADiography is an R&D project to assess the feasibility of digital mammography with synchrotron radiation at the ELETTRA Light Source in Trieste. In order to reach an acceptable duration of the exam, a fast, low-noise photon counting ASIC, called FROST (Frontrad ReadOut SysTem), has been developed in collaboration with Aurelia Microelettronica. It is a multichannel counting system, each channel consisting of a low-noise charge-sensitive preamplifier optimized for the X-ray energy range (10-100 keV), a CR-RC² shaper, a discriminator and a 16-bit counter. To set the discriminator threshold, a global 6-bit DAC and a local (per-channel) 3-bit DAC have been implemented within the ASIC. We report on the measurements done with the 8-channel prototype chip and the comparison with simulation results.
Ambarita, Himsar; Kishinami, Koki; Daimaruya, Mashashi; Tokura, Ikuo; Kawai, Hideki; Suzuki, Jun; Kobiyama, Mashayosi; Ginting, Armansyah
The present paper is a study of the optimum plate-to-plate spacing for maximum heat transfer rate in a flat-plate heat exchanger consisting of a number of parallel flat plates. The working fluids flow under the same operational conditions, either fixed pressure head or fixed fan power input, and both parallel and counter flow directions were considered. While the volume of the heat exchanger is kept constant, the plate number is varied; hence the spacing between plates, and with it the heat transfer rate, varies, and there exists a maximum heat transfer rate. The objective of this paper is to find the optimum plate-to-plate spacing for maximum heat transfer rate. To solve the problem, analytical and numerical solutions have been carried out. In the analytical solution, correlations for the optimum plate-to-plate spacing as a function of non-dimensional parameters were developed, and numerical simulation was then carried out to evaluate the correlations. The results show that the optimum plate-to-plate spacing for a counter flow heat exchanger is smaller than that for a parallel flow one, while the maximum heat transfer rate for a counter flow heat exchanger is greater.
Matsukiyo, Hiroshi; Sato, Eiichi; Oda, Yasuyuki; Yamaguchi, Satoshi; Sato, Yuichi; Hagiwara, Osahiko; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya
2017-09-10
To obtain four kinds of tomograms at four different X-ray energy ranges simultaneously, we have constructed a quad-energy (QE) X-ray photon counter with a cadmium telluride (CdTe) detector and four sets of comparators and microcomputers (MCs). X-ray photons are detected using the CdTe detector, and the event pulses produced using amplifiers are sent to four comparators simultaneously to regulate four threshold energies of 20, 33, 50 and 65 keV. Using this counter, the energy ranges are 20-33, 33-50, 50-65 and 65-100 keV; the maximum energy corresponds to the tube voltage. We performed QE computed tomography (QE-CT) at a tube voltage of 100 kV. Using a 0.5-mm-diameter lead pinhole, four tomograms were obtained simultaneously at the four energy ranges. K-edge CT using iodine and gadolinium media was carried out utilizing the two energy ranges of 33-50 and 50-65 keV, respectively. At a tube voltage of 100 kV and a current of 60 μA, the count rate was 15.2 kilocounts per second (kcps), and the minimum count rates after penetrating objects in QE-CT were regulated to approximately 2 kcps by the tube current. Copyright © 2017 Elsevier Ltd. All rights reserved.
Maximum Acceptable Vibrato Excursion as a Function of Vibrato Rate in Musicians and Non-musicians
Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels H.
2014-01-01
and, in most listeners, exhibited a peak at medium vibrato rates (5–7 Hz). Large across-subject variability was observed, and no significant effect of musical experience was found. Overall, most listeners were not solely sensitive to the vibrato excursion and there was a listener-dependent rate...
7 CFR 1.187 - Rulemaking on maximum rates for attorney fees.
2010-01-01
... the types of proceedings in which the rate should be used. It also should explain fully the reasons... certain types of proceedings), the Department may adopt regulations providing that attorney fees may be awarded at a rate higher than $125 per hour in some or all of the types of proceedings covered by...
Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds
Conroy, M.J.; Morgan, B.J.T.; North, P.M.
1985-01-01
It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate what this reporting rate is, by comparison of recoveries of rings offering a monetary reward, to ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio. A FORTRAN program, available from the author, is used to perform the computations.
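The core estimator behind such reward studies is simple. Below is a hedged sketch with made-up numbers (the paper's FORTRAN program additionally stratifies recoveries by geography and time and fits by maximum likelihood; the function name is ours):

```python
def reporting_rate(n_std, r_std, n_rew, r_rew):
    """Point estimate of the ring reporting rate: ratio of the recovery
    rate of standard (non-reward) rings to that of reward rings, under
    the usual assumption that reward rings are always reported.
    n_*: rings released; r_*: rings recovered and reported."""
    return (r_std / n_std) / (r_rew / n_rew)
```

For example, 30 reports from 1000 standard rings against 12 reports from 200 reward rings gives `reporting_rate(1000, 30, 200, 12)` = 0.5, i.e. hunters report about half of the ordinary rings they find.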
Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam [Dept. of Nuclear Medicine, Severance Hospital, Yonsei University, Seoul (Korea, Republic of); Park, Hoon Hee [Dept. of Radiological Technology, Shingu college, Sungnam (Korea, Republic of)
2013-12-15
This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on this effect. We produced a ⁹⁹ᵐTcO₄⁻ point source of 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images at 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in the average count rates of the 18.5 to 92.5 MBq sources in the analysis of 10 to 60 seconds/frame at 10-second intervals in the first experiment (p>0.05), but average count rates were significantly low for sources over 111 MBq at 60 seconds/frame (p<0.01). According to the linear regression analysis of the count rates of the 5 gamma cameras acquired over 90 minutes, the counting efficiency of the fourth gamma camera was the lowest, at 0.0064%, while its gradient and coefficient of variation were the highest, at 0.0042 and 0.229, respectively. We found no abnormal fluctuation in the χ² test of count rates (p>0.02), and Levene's F-test showed homogeneity of variance among the gamma cameras (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, calculation of the T1/2 error for gradient changes from -0.25% to +0.25% showed that the error increases when T1/2 is relatively long or the gradient is high. Estimating this for the fourth camera, which has the highest gradient, no T1/2 error was observed within 60 minutes. In conclusion, it is necessary for scintillation gamma cameras in the medical field to be rigorously managed for the quality of radiation
CLARO: an ASIC for high rate single photon counting with multi-anode photomultipliers
Baszczyk, M.; Carniti, P.; Cassina, L.; Cotta Ramusino, A.; Dorosz, P.; Fiorini, M.; Gotti, C.; Kucewicz, W.; Malaguti, R.; Pessina, G.
2017-08-01
The CLARO is a radiation-hard 8-channel ASIC designed for single photon counting with multi-anode photomultiplier tubes. Each channel outputs a digital pulse when the input signal from the photomultiplier crosses a configurable threshold. The fast return to baseline, typically within 25 ns and below 50 ns in all conditions, allows counting up to 10⁷ hits/s on each channel, with a power consumption of about 1 mW per channel. The ASIC presented here is a much improved version of the first 4-channel prototype. The threshold can be precisely set in a wide range, between 30 ke⁻ (5 fC) and 16 Me⁻ (2.6 pC). The noise of the amplifier with a 10 pF input capacitance is 3.5 ke⁻ (0.6 fC) RMS. All settings are stored in a 128-bit configuration and status register, protected against soft errors with triple modular redundancy. The paper describes the design of the ASIC at transistor level and demonstrates its performance on the test bench.
McCarthy, C M; Taylor, M A; Dennis, M W
1987-01-01
Mycobacterium avium is a human pathogen which may cause either chronic or disseminated disease, and the organism exhibits a slow rate of growth. This study provides information on the growth rate of the organism in chronically infected mice and its maximal growth rate in vitro. M. avium was grown in continuous culture, limited for nitrogen with 0.5 mM ammonium chloride, at dilution rates ranging from 0.054 to 0.153 h⁻¹. The steady-state concentrations of ammonia nitrogen and M. avium cells were determined for each dilution rate. The bacterial saturation constant for growth-limiting ammonia was 0.29 mM (4 μg nitrogen/ml) and, from this, the maximal growth rate for M. avium was estimated to be 0.206 h⁻¹, corresponding to a doubling time of 3.4 h. BALB/c mice were infected intravenously with 3 × 10⁶ colony-forming units and a chronic infection resulted, typical of virulent M. avium strains. During a period of 3 months, the number of mycobacteria remained constant in the lungs but increased 30-fold and 8,900-fold, respectively, in the spleen and mesenteric lymph nodes. The latter increase appeared to be due to proliferation in situ. The generation time of M. avium in the mesenteric lymph nodes was estimated to be 7 days.
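The in vitro estimates quoted above follow standard Monod chemostat kinetics. A small sketch using the reported constants (the function names are ours):

```python
import math

MU_MAX = 0.206  # h^-1, maximal specific growth rate estimated in the paper
K_S = 0.29      # mM, saturation constant for growth-limiting ammonia

def monod_growth_rate(s_mM):
    """Specific growth rate (h^-1) at ammonia concentration s_mM (Monod model)."""
    return MU_MAX * s_mM / (K_S + s_mM)

def doubling_time_h(mu):
    """Doubling time (h) for a specific growth rate mu (h^-1)."""
    return math.log(2.0) / mu
```

Consistent with the abstract, `doubling_time_h(MU_MAX)` gives ln 2 / 0.206 ≈ 3.4 h, and at the half-saturation concentration s = K_S the growth rate is exactly μ_max/2.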
Guerrero, C. [Centro de Investigaciones Medioambientales, Energéticas y Tecnológicas (CIEMAT), Madrid (Spain); Departamento de Física Atómica, Molecular y Nuclear, Universidad de Sevilla (Spain); Cano-Ott, D.; Mendoza, E. [Centro de Investigaciones Medioambientales, Energéticas y Tecnológicas (CIEMAT), Madrid (Spain); Wright, T. [University of Manchester, Manchester (United Kingdom)
2015-03-21
The effect of dead-time and pile-up in counting experiments may become a significant source of uncertainty if not properly taken into account. Although analytical solutions to this problem have been proposed for simple set-ups with one or two detectors, these are limited when it comes to arrays where time correlation between the detector modules is used, and also in situations of variable counting rates. In this paper we describe the dead-time and pile-up corrections applied to the n-TOF Total Absorption Calorimeter (TAC), a 4π γ-ray detector made of 40 BaF₂ modules operating at the CERN n-TOF facility. Our method is based on the simulation of the complete signal detection and event reconstruction processes and can be applied as well in the case of rapidly varying counting rates. The method is discussed in detail and then we present its successful application to the particular case of the measurement of ²³⁸U(n,γ) reactions with the TAC detector.
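For orientation, the two classical analytical dead-time models that the abstract says become insufficient for correlated detector arrays are easy to state. A sketch of the textbook formulas (not the paper's simulation-based correction; function names are ours):

```python
import math

def nonparalyzable_output(n_true, tau_s):
    """Measured rate (cps) for a nonparalyzable dead time tau_s (s)
    at true input rate n_true (cps): each recorded event blocks the
    detector for tau_s, but events during dead time do not extend it."""
    return n_true / (1.0 + n_true * tau_s)

def paralyzable_output(n_true, tau_s):
    """Measured rate (cps) for a paralyzable (extending) dead time:
    only events preceded by a gap longer than tau_s are recorded."""
    return n_true * math.exp(-n_true * tau_s)
```

At any given input rate the paralyzable model loses at least as many counts as the nonparalyzable one, which is why mixed ("semi-nonparalyzable") models like the one in the head entry are sometimes fitted to real detectors.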
Quinn, T Alexander; Kohl, Peter
2016-12-01
Mechanical stimulation (MS) represents a readily available, non-invasive means of pacing the asystolic or bradycardic heart in patients, but benefits of MS at higher heart rates are unclear. Our aim was to assess the maximum rate and sustainability of excitation by MS vs. electrical stimulation (ES) in the isolated heart under normal physiological conditions. Trains of local MS or ES at rates exceeding intrinsic sinus rhythm (overdrive pacing; lowest pacing rates 2.5±0.5 Hz) were applied to the same mid-left ventricular free-wall site on the epicardium of Langendorff-perfused rabbit hearts. Stimulation rates were progressively increased, with a recovery period of normal sinus rhythm between each stimulation period. Trains of MS caused repeated focal ventricular excitation from the site of stimulation. The maximum rate at which MS achieved 1:1 capture was lower than during ES (4.2±0.2 vs. 5.9±0.2 Hz, respectively). At all overdrive pacing rates for which repetitive MS was possible, 1:1 capture was reversibly lost after a finite number of cycles, even though same-site capture by ES remained possible. The number of MS cycles until loss of capture decreased with rising stimulation rate. If interspersed with ES, the number of MS to failure of capture was lower than for MS only. In this study, we demonstrate that the maximum pacing rate at which MS can be sustained is lower than that for same-site ES in isolated heart, and that, in contrast to ES, the sustainability of successful 1:1 capture by MS is limited. The mechanism(s) of differences in MS vs. ES pacing ability, potentially important for emergency heart rhythm management, are currently unknown, thus warranting further investigation. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Cardiology.
HEPS-BPIX, a single photon counting pixel detector with a high frame rate for the HEPS project
Wei, Wei; Zhang, Jie; Ning, Zhe; Lu, Yunpeng; Fan, Lei; Li, Huaishen; Jiang, Xiaoshan; Lan, Allan K.; Ouyang, Qun; Wang, Zheng; Zhu, Kejun; Chen, Yuanbo; Liu, Peng
2016-11-01
China's next generation light source, named the High Energy Photon Source (HEPS), is currently under construction. HEPS-BPIX (HEPS-Beijing PIXel) is a dedicated pixel readout chip that operates in single photon counting mode for X-ray applications in HEPS. Designed using CMOS 0.13 μm technology, the chip contains a matrix of 104×72 pixels. Each pixel measures 150 μm×150 μm and has a counting depth of 20 bits. A bump-bonded prototyping detector module with a 300-μm thick silicon sensor was tested in the beamline of Beijing Synchrotron Radiation Facility. A fast stream of X-ray images was demonstrated, and a frame rate of 1.2 kHz was proven, with a negligible dead time. The test results showed an equivalent noise charge of 115 e⁻ rms after bump bonding and a threshold dispersion of 55 e⁻ rms after calibration.
Maximum Rate of Growth of Enstrophy in Solutions of the Fractional Burgers Equation
Yun, Dongfang
2016-01-01
This investigation is a part of a research program aiming to characterize the extreme behavior possible in hydrodynamic models by probing the sharpness of estimates on the growth of certain fundamental quantities. We consider here the rate of growth of the classical and fractional enstrophy in the fractional Burgers equation in the subcritical, critical and supercritical regime. First, we obtain estimates on these rates of growth and then show that these estimates are sharp up to numerical prefactors. In particular, we conclude that the power-law dependence of the enstrophy rate of growth on the fractional dissipation exponent has the same global form in the subcritical, critical and parts of the supercritical regime. This is done by numerically solving suitably defined constrained maximization problems and then demonstrating that for different values of the fractional dissipation exponent the obtained maximizers saturate the upper bounds in the estimates as the enstrophy increases. In addition, nontrivial be...
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification and some other smooth conditions,it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate of this solution tending to the true value is determined. In an important special case, this rate is the same as specified in the LIL for iid partial sums and thus cannot be improved anymore.
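For reference, the LIL benchmark invoked here is the standard law of the iterated logarithm for i.i.d. variables with mean μ and variance σ² (a textbook result, not derived in the paper):

```latex
\limsup_{n\to\infty}\,
\frac{\left|\sum_{i=1}^{n}(X_i-\mu)\right|}{\sqrt{2n\log\log n}}
= \sigma \quad \text{a.s.}
\qquad\Longrightarrow\qquad
\bar{X}_n-\mu = O\!\left(n^{-1/2}(\log\log n)^{1/2}\right)\ \text{a.s.}
```

The quasi-likelihood solution attaining this same a.s. rate is therefore optimal in the sense that no estimator built from n i.i.d.-like observations can converge faster along every sequence.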
Riisgård, Hans Ulrik; Larsen, Poul Scheel; Pleissner, Daniel
2014-01-01
rate (F, l h⁻¹), W (g), and L (mm) as described by the equations F(W) = aW^b and F(L) = cL^d, respectively. This is done by using available and new experimental laboratory data on M. edulis obtained by members of the same research team using different methods and controlled diets of cultivated algal cells...
Maximum organic loading rate for the single-stage wet anaerobic digestion of food waste.
Nagao, Norio; Tajima, Nobuyuki; Kawai, Minako; Niwa, Chiaki; Kurosawa, Norio; Matsuyama, Tatsushi; Yusoff, Fatimah Md; Toda, Tatsuki
2012-08-01
Anaerobic digestion of food waste was conducted at high organic loading rates (OLR) from 3.7 to 12.9 kg-VS m⁻³ day⁻¹ for 225 days, with periods without organic loading arranged between the loading periods. Stable operation at an OLR of 9.2 kg-VS (15.0 kg-COD) m⁻³ day⁻¹ was achieved with a high VS reduction (91.8%) and high methane yield (455 mL g-VS⁻¹). The cell density increased in the periods without organic loading and reached 10.9×10¹⁰ cells mL⁻¹ on day 187, around 15 times higher than that of the seed sludge. There was a significant correlation between OLR and saturated TSS in the sludge (y = 17.3e^(0.1679x), r² = 0.996, P<0.05). A theoretical maximum OLR of 10.5 kg-VS (17.0 kg-COD) m⁻³ day⁻¹ was obtained for mesophilic single-stage wet anaerobic digestion able to maintain stable operation with high methane yield and VS reduction.
Performance of a GM tube based environmental dose rate monitor operating in the Time-To-Count mode
Zickefoose, J.; Kulkarni, T.; Martinson, T.; Phillips, K.; Voelker, M. [Canberra Industries Inc. (United States)
2015-07-01
The events at the Fukushima Daiichi power plant in the aftermath of a natural disaster underline the importance of a large array of networked environmental monitors to cover areas around nuclear power plants. These monitors should meet a few basic criteria: a uniform response over a wide range of gamma energies, a uniform response over a wide range of incident angles, and a large dynamic range. Many of these criteria are met if the probe is qualified to the international standard IEC 60532 (Radiation protection instrumentation - Installed dose rate meters, warning assemblies and monitors - X and gamma radiation of energy between 50 keV and 7 MeV), which specifically addresses energy response, angle of incidence, dynamic range, response time, and a number of environmental characteristics. EcoGamma is a dual GM tube environmental gamma radiation monitor designed specifically to meet the requirements of IEC 60532 and to operate in the most extreme conditions. EcoGamma utilizes two energy-compensated GM tubes operating with a Time-To-Count (TTC) collection algorithm. The TTC algorithm significantly extends the lifetime and range of a GM tube and allows the dual GM tube probe to achieve linearity over approximately 10 decades of gamma dose rate (from the nSv/hr range to 100 Sv/hr). In the TTC mode of operation, the GM tube is not maintained in a biased condition continuously. This differs from a traditional counting system, where the GM tube is held at a constant bias continuously and the total number of strikes that the tube registers is counted. The traditional approach allows for good sensitivity, but does not lend itself to a long tube lifetime and is susceptible to linearity issues at high count rates. TTC, on the other hand, only biases the tube for short periods of time and in effect measures the time between events, which is statistically representative of the total strike rate. Since the tube is not continually biased, the life of the tube is extended.
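The statistical idea behind Time-To-Count can be sketched in a few lines. This is an illustrative simulation under simplifying assumptions (ideal Poisson arrivals, no per-cycle bias/recovery overheads; the function name is ours, not EcoGamma's algorithm):

```python
import random

def ttc_rate_estimate(true_rate_cps, n_cycles=20000, seed=42):
    """Illustration of the Time-To-Count principle: instead of counting
    strikes under constant bias, repeatedly bias the tube, wait for the
    first event, and record the waiting time. For Poisson arrivals the
    waits are exponentially distributed, so the event rate is recovered
    as 1 / mean(waiting time)."""
    rng = random.Random(seed)
    waits = [rng.expovariate(true_rate_cps) for _ in range(n_cycles)]
    return n_cycles / sum(waits)
```

Because each cycle only needs the tube alive long enough to register one event, the tube spends most of its life unbiased, which is the source of the lifetime and linearity gains described above.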
Validity of heart rate based nomograms for estimation of maximum oxygen uptake in Indian population.
Kumar, S Krishna; Khare, P; Jaryal, A K; Talwar, A
2012-01-01
Maximal oxygen uptake (VO2max) during a graded maximal exercise test is the objective method to assess cardiorespiratory fitness. Maximal oxygen uptake testing is limited to only a few laboratories as it requires trained personnel and strenuous effort by the subject. At the population level, submaximal tests have been developed to derive VO2max indirectly from heart-rate-based nomograms, or it can be calculated using anthropometric measures. These heart-rate-based prediction standards have been developed for western populations and are used routinely to predict VO2max in the Indian population. In the present study VO2max was directly measured by a maximal exercise test on a bicycle ergometer and was compared with VO2max derived from recovery heart rate in the Queen's College step test (QCST) (PVO2max I) and with VO2max derived from the Wasserman equation based on anthropometric parameters and age (PVO2max II) in a well defined age group of healthy male adults from New Delhi. The directly measured VO2max showed no significant correlation either with the VO2max estimated by QCST or with the VO2max predicted by the Wasserman equation. The Bland-Altman approach showed that the limits of agreement between directly measured VO2max and PVO2max I or PVO2max II were large, indicating that prediction equations developed for western populations are not applicable to the population under study. Thus there is an urgent need to develop a nomogram for the Indian population, perhaps even for different ethnic sub-populations in the country.
Longitudinal Examination of Age-Predicted Symptom-Limited Exercise Maximum Heart Rate
Zhu, Na; Suarez, Jose; Sidney, Steve; Sternfeld, Barbara; Schreiner, Pamela J.; Carnethon, Mercedes R.; Lewis, Cora E.; Crow, Richard S.; Bouchard, Claude; Haskell, William; Jacobs, David R.
2010-01-01
Purpose: To estimate the association of age with maximal heart rate (MHR). Methods: Data were obtained in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Participants were black and white men and women aged 18-30 in 1985-86 (year 0). A symptom-limited maximal graded exercise test was completed at years 0, 7, and 20 by 4969, 2583, and 2870 participants, respectively. After exclusions, 9622 eligible tests remained. Results: In all 9622 tests, estimated MHR (eMHR, beats/minute) had a quadratic relation to age in the age range 18 to 50 years, eMHR = 179 + 0.29·age − 0.011·age². The age-MHR association was approximately linear in the restricted age ranges of consecutive tests. In 2215 people who completed both year 0 and 7 tests (age range 18 to 37), eMHR = 189 − 0.35·age; and in 1574 people who completed both year 7 and 20 tests (age range 25 to 50), eMHR = 199 − 0.63·age. In the lowest baseline BMI quartile, the rate of decline was 0.20 beats/minute/year between years 0-7 and 0.51 beats/minute/year between years 7-20, while in the highest baseline BMI quartile there was a linear rate of decline of approximately 0.7 beats/minute/year over the full age range of 18 to 50 years. Conclusion: Clinicians making exercise prescriptions should be aware that the loss of symptom-limited MHR is much slower in young adulthood and more pronounced in later adulthood. In particular, MHR loss is very slow in those with the lowest BMI below age 40. PMID:20639723
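The reported regression equations are straightforward to encode. A sketch with the coefficients taken directly from the abstract (function names are ours):

```python
def emhr_full(age):
    """Estimated maximal heart rate (beats/min), quadratic CARDIA fit, ages 18-50."""
    return 179 + 0.29 * age - 0.011 * age ** 2

def emhr_young(age):
    """Linear fit from the year 0 and 7 tests, ages 18-37."""
    return 189 - 0.35 * age

def emhr_older(age):
    """Linear fit from the year 7 and 20 tests, ages 25-50."""
    return 199 - 0.63 * age
```

Where the age ranges overlap the fits roughly agree: at age 30 the quadratic gives 177.8 beats/min and the younger-cohort linear fit gives 178.5.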
Low-Noise Free-Running High-Rate Photon-Counting for Space Communication and Ranging
Lu, Wei; Krainak, Michael A.; Yang, Guan; Sun, Xiaoli; Merritt, Scott
2016-01-01
We present performance data for a low-noise free-running high-rate photon counting method for space optical communication and ranging. NASA GSFC is testing the performance of two types of novel photon-counting detectors: 1) a 2x8 mercury cadmium telluride (HgCdTe) avalanche array made by DRS Inc., and 2) a commercial 2880-element silicon avalanche photodiode (APD) array. We successfully measured real-time communication performance using both the 2-detected-photon threshold and logic AND-gate coincidence methods. Use of these methods allows mitigation of dark count, after-pulsing and background noise effects without using other methods such as time gating. The HgCdTe APD array routinely demonstrated very high photon detection efficiencies (50%) at near-infrared wavelengths. The commercial silicon APD array exhibited a fast output with rise times of 300 ps and pulse widths of 600 ps. On-chip individually filtered signals from the entire array were multiplexed onto a single fast output. NASA GSFC has tested both detectors for their potential application to space communications and ranging. We developed and compared their performance using both the 2-detected-photon threshold and coincidence methods.
A Count for Quality: Child Care Center Directors on Rating and Improvement Systems
Schulman, Karen; Matthews, Hannah; Blank, Helen; Ewen, Danielle
2012-01-01
Quality Rating and Improvement Systems (QRIS)--a strategy to improve families' access to high-quality child care--assess the quality of child care programs, offer incentives and assistance to programs to improve their ratings, and give information to parents about the quality of child care. These systems are operating in a growing number of…
Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard
2008-01-01
…are used for parameter estimation. The data is log-transformed such that a linear model can be applied. The transformation changes the variance structure, and hence an OD-dependent variance is implemented in the model. The autocorrelation in the data is demonstrated, and a correlation model with an exponentially decaying function of the time between observations is suggested. A model with a full covariance structure containing OD-dependent variance and an autocorrelation structure is compared to a model with variance only and with no variance or correlation implemented. It is shown that the model that best describes data is a model taking into account the full covariance structure. An inference study is made in order to determine whether the growth rate of the five bacteria strains is the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded…
Garde, Eva; Heide-Jørgensen, Mads Peter; Ditlevsen, Susanne
2012-01-01
Ages of marine mammals have traditionally been estimated by counting dentinal growth layers in teeth. However, this method is difficult to use on narwhals (Monodon monoceros) because of their special tooth structures. Alternative methods are therefore needed. The aspartic acid racemization (AAR) technique has been used in age estimation studies of cetaceans, including narwhals. The purpose of this study was to estimate a species-specific racemization rate for narwhals by regressing aspartic acid D/L ratios in eye lens nuclei against growth layer groups in tusks (n=9). Two racemization rates were … rate and (D/L)0 value be used in future AAR age estimation studies of narwhals, but also recommend the collection of tusks and eyes of narwhals for further improving the (D/L)0 and 2kAsp estimates obtained in this study.
Kubota, Hiroyuki; Makino, Hiroshi; Gawad, Agata; Kushiro, Akira; Ishikawa, Eiji; Sakai, Takafumi; Akiyama, Takuya; Matsuda, Kazunori; Martin, Rocio; Knol, Jan; Oishi, Kenji
2016-01-01
Asymptomatic infant carriers of toxigenic Clostridium difficile are suggested to play a role in the transmission of C. difficile infection (CDI) in adults. However, the mode of C. difficile carriage in infants remains to be fully elucidated. We investigated longitudinal changes in carriage rates,
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Suvaila, Rares; Osvath, Iolanda; Sima, Octavian
2013-11-01
In this work a method is developed for evaluating the activity of a ⁶⁰Co point source located in an unknown position within a sample. The method can be applied if the count rate in the 2,505 keV sum peak has an acceptable uncertainty. It is based on the correlation between the apparent efficiency for the 1,173 keV peak and the ratio of the count rates in the 2,505 keV sum peak and the 1,332 keV peak. The correlation was observed in measurements of a ⁶⁰Co point source located in various positions in a soil sample, done with a 47% efficiency n-type HPGe detector. The correlation is also observed in measurements and simulations done with a Compton-suppressed spectrometer having a 100% n-type HPGe detector. The results obtained with the proposed method are less affected by the uncertainty of the position of the point source than the results obtained using the standard methods of activity evaluation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Mangeard, P.-S.; Ruffolo, D.; Sáiz, A.; Nuntiyakul, W.; Bieber, J. W.; Clem, J.; Evenson, P.; Pyle, R.; Duldig, M. L.; Humble, J. E.
2016-12-01
Neutron monitors are the premier instruments for precisely tracking time variations in the Galactic cosmic ray flux at GeV-range energies above the geomagnetic cutoff at the location of measurement. Recently, a new capability has been developed to record and analyze the neutron time delay distribution (related to neutron multiplicity) to infer variations in the cosmic ray spectrum as well. In particular, from time delay histograms we can determine the leader fraction L, defined as the fraction of neutrons that did not follow a previous neutron detection in the same tube from the same atmospheric secondary particle. Using data taken during 2000-2007 by a shipborne neutron monitor latitude survey, we observe a strong dependence of the count rate and L on the geomagnetic cutoff. We have modeled this dependence using Monte Carlo simulations of cosmic ray interactions in the atmosphere and in the neutron monitor. We present new yield functions for the count rate of a neutron monitor at sea level. The simulation results show a variation of L with geomagnetic cutoff as observed by the latitude survey, confirming that these changes in L can be attributed to changes in the cosmic ray spectrum arriving at Earth's atmosphere. We also observe a variation in L with time at a fixed cutoff, which reflects the evolution of the cosmic ray spectrum with the sunspot cycle, known as solar modulation.
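The leader fraction described above can be estimated directly from per-tube detection timestamps. The sketch below is not from the paper; it is a minimal illustration that assumes a single time-delay threshold separating "leader" counts from counts that follow a previous detection in the same tube within that window (the actual analysis works from the full time-delay histogram):

```python
def leader_fraction(timestamps, window):
    """Estimate the leader fraction L from sorted detection times (s)
    in a single counter tube: a count is a 'leader' if no previous
    count occurred within `window` seconds before it."""
    leaders = 0
    prev = None
    for t in timestamps:
        if prev is None or (t - prev) > window:
            leaders += 1
        prev = t
    return leaders / len(timestamps)

# toy example: counts at 0 s, 0.1 ms (a follower), 1 s, 2 s; 1 ms window
print(leader_fraction([0.0, 1e-4, 1.0, 2.0], 1e-3))  # prints 0.75
```

A lower L indicates more multiple neutron detections per atmospheric secondary, i.e. a harder cosmic ray spectrum.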
Snelling, Edward P; Seymour, Roger S; Matthews, Philip G D; Runciman, Sue; White, Craig R
2011-10-01
The hemimetabolous migratory locust Locusta migratoria progresses through five instars to the adult, increasing in size from 0.02 to 0.95 g, a 45-fold change. Hopping locomotion occurs at all life stages and is supported by aerobic metabolism and provision of oxygen through the tracheal system. This allometric study investigates the effect of body mass (Mb) on oxygen consumption rate (MO2, μmol h⁻¹) to establish resting metabolic rate (MRO2), maximum metabolic rate during hopping (MMO2) and maximum metabolic rate of the hopping muscles (MMO2,hop) in first instar, third instar, fifth instar and adult locusts. Oxygen consumption rates increased throughout development according to the allometric equations MRO2 = 30.1Mb^(0.83±0.02), MMO2 = 155Mb^(1.01±0.02), MMO2,hop = 120Mb^(1.07±0.02) and, if adults are excluded, MMO2,juv = 136Mb^(0.97±0.02) and MMO2,juv,hop = 103Mb^(1.02±0.02). Increasing body mass by 20-45% with attached weights did not increase mass-specific MMO2 significantly at any life stage, although mean mass-specific hopping MO2 was slightly higher (ca. 8%) when juvenile data were pooled. The allometric exponents for all measures of metabolic rate are much greater than 0.75, and therefore do not support West, Brown and Enquist's optimised fractal network model, which predicts that metabolism scales with a 3⁄4-power exponent owing to limitations in the rate at which resources can be transported within the body.
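The reported allometric fits can be evaluated directly. The sketch below uses only the central values of the abstract's coefficients and exponents (uncertainties ignored) to show one consequence of MMO2 scaling more steeply than MRO2: the factorial aerobic scope implied by these equations roughly doubles from the first instar to the adult body mass:

```python
def allometric_rate(a, b, mass_g):
    """MO2 (umol/h) = a * Mb^b, with body mass Mb in grams as in the abstract."""
    return a * mass_g ** b

MR = lambda m: allometric_rate(30.1, 0.83, m)   # resting metabolic rate
MM = lambda m: allometric_rate(155.0, 1.01, m)  # maximum rate during hopping

# factorial scope MMO2/MRO2 across the reported size range (0.02 g to 0.95 g)
for m in (0.02, 0.95):
    print(round(MM(m) / MR(m), 2))  # prints 2.55 then 5.1
```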
2Kx2K resolution element photon counting MCP sensor with >200 kHz event rate capability
Vallerga, J V
2000-01-01
Siegmund Scientific undertook a NASA Small Business Innovation Research (SBIR) contract to develop a versatile, high-performance photon (or particle) counting detector combining recent technical advances in all aspects of Microchannel Plate (MCP) detector development in a low cost, commercially viable package that can support a variety of applications. The detector concept consists of a set of MCPs whose output electron pulses are read out with a crossed delay line (XDL) anode and associated high-speed event encoding electronics. The delay line anode allows high-resolution photon event centroiding at very high event rates and can be scaled to large formats (>40 mm) while maintaining good linearity and high temporal stability. The optimal sensitivity wavelength range is determined by the choice of opaque photocathodes. Specific achievements included: event rates of >200 000 events s⁻¹; local rates of >100 events s⁻¹ per resolution element; event timing of <1 ns; and low background ...
Free-running InGaAs single photon detector with 1 cps dark count rate at 10% efficiency
Korzh, Boris; Lunghi, Tommaso; Gisin, Nicolas; Zbinden, Hugo
2013-01-01
We present a free-running single photon detector for telecom wavelengths based on a negative feedback avalanche photodiode (NFAD). A dark count rate as low as 1 cps was obtained at a detection efficiency of 10%, with an afterpulse probability of 2.2% for 20 μs of dead time. This was achieved by using an active hold-off circuit and cooling the NFAD with a free-piston Stirling cooler down to temperatures of −110 °C. We integrated two detectors into a practical, 625 MHz clocked quantum key distribution system. Stable, real-time key distribution in the presence of 30 dB channel loss was possible, yielding a secret key rate of 350 bps.
Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; Smith, Emily A; Vaswani, Namrata; Petrich, Jacob W
2016-03-10
The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as "residual minimization" (RM) and "maximum likelihood" (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of "photon counts" was approximately 20, 200, 1000, 3000, and 6000 and there were about 2-200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson's weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. The robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
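A minimal sketch of the maximum-likelihood idea contrasted here with residual minimization: for Poisson-distributed bin counts, the lifetime is chosen to minimize the Poisson negative log-likelihood rather than a weighted sum of squared residuals. The code below is illustrative only, not the in-house routine; it uses a coarse grid search and omits the convolution with the instrument response function that the study always included:

```python
import math

def neg_log_likelihood(tau, counts, bin_centers, total):
    """Poisson NLL (up to a constant) for a single-exponential decay.
    Model expectation per bin: mu_i = total * w_i / sum(w), with w_i the
    exponential weight at the bin center."""
    weights = [math.exp(-t / tau) for t in bin_centers]
    norm = sum(weights)
    nll = 0.0
    for n, w in zip(counts, weights):
        mu = total * w / norm
        nll += mu - n * math.log(mu)
    return nll

def fit_lifetime(counts, bin_centers, taus):
    """Grid-search ML estimate of the lifetime (ps)."""
    total = sum(counts)
    return min(taus, key=lambda tau: neg_log_likelihood(tau, counts, bin_centers, total))

# sparse synthetic decay: true lifetime 530 ps, 50 ps bins, ~200 total counts
true_tau = 530.0
centers = [25.0 + 50.0 * i for i in range(40)]
counts = [round(20 * math.exp(-t / true_tau)) for t in centers]
grid = [400 + 5 * k for k in range(61)]  # candidate lifetimes, 400..700 ps
print(fit_lifetime(counts, centers, grid))
```

Even with only a couple of hundred counts, the Poisson ML estimate lands close to the true lifetime, which is the behavior the study reports for sparse data sets.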
Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier
2011-10-01
Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general-purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/
Kruse, Marcelo Lapa; Kruse, José Cláudio Lupi; Leiria, Tiago Luiz Luz; Pires, Leonardo Martins; Gensas, Caroline Saltz; Gomes, Daniel Garcia; Boris, Douglas; Mantovani, Augusto; Lima, Gustavo Glotz de
2014-12-01
Occurrences of asymptomatic atrial fibrillation (AF) are common. It is important to identify AF because it increases morbidity and mortality. 24-hour Holter has been used to detect paroxysmal AF (PAF). The objective of this study was to investigate the relationship between occurrence of PAF in 24-hour Holter and the symptoms of the population studied. Cross-sectional study conducted at a cardiology hospital. 11,321 consecutive 24-hour Holter tests performed at a referral service were analyzed. Patients with pacemakers or with AF throughout the recording were excluded. There were 75 tests (0.67%) with PAF. The mean age was 67 ± 13 years and 45% were female. The heart rate (HR) over the 24 hours was a minimum of 45 ± 8 bpm, mean of 74 ± 17 bpm and maximum of 151 ± 32 bpm. Among the tests showing PAF, only 26% had symptoms. The only factor tested that showed a correlation with symptomatic AF was maximum HR (165 ± 34 versus 147 ± 30 bpm) (P = 0.03). Use of beta blockers had a protective effect against occurrence of PAF symptoms (odds ratio: 0.24, P = 0.031). PAF is a rare event in 24-hour Holter. The maximum HR during the 24 hours was the only factor correlated with symptomatic AF, and use of beta blockers had a protective effect against AF symptom occurrence.
Optimization of statistical methods for HpGe gamma-ray spectrometer used in wide count rate ranges
Gervino, G.; Mana, G.; Palmisano, C.
2016-07-01
The need to perform γ-ray measurements with HpGe detectors is common to many fields, such as nuclear physics, radiochemistry, nuclear medicine and neutron activation analysis. HpGe detectors are chosen in situations where isotope identification is needed because of their excellent resolution. Our challenge is to obtain the "best" spectroscopy data possible in every measurement situation, where "best" is a combination of statistical quality (number of counts) and spectral quality (peak width and position) over a wide range of counting rates. In this framework, we applied Bayesian methods and Ellipsoidal Nested Sampling (a multidimensional integration technique) to study the most likely distribution for the shape of HpGe spectra. In treating these experiments, the prior information suggests modeling the likelihood function as a product of Poisson distributions. We present the efforts made to optimize the statistical methods for HpGe detector outputs, with the aim of evaluating the detector efficiency, the absolute measured activity and the spectral background to a higher order of precision. Reaching a more precise knowledge of the statistical and systematic uncertainties of the measured physical observables is the final goal of this research project.
Leukocyte Count and Erythrocyte Sedimentation Rate as Diagnostic Factors in Febrile Convulsion
Ali Akbar Rahbarimanesh
2011-07-01
Febrile convulsion (FC) is the most common seizure disorder in childhood. White blood cell (WBC) count and erythrocyte sedimentation rate (ESR) are commonly measured in FC. Trauma, vomiting and bleeding can also elevate WBC and ESR, so these blood tests must be interpreted carefully by the clinician. In this cross-sectional study, 410 children (163 with FC), aged 6 months to 5 years, admitted to Bahrami Children's Hospital within the first 48 hours of their febrile disease, either with or without seizure, were evaluated over an 18-month period. Age, sex, temperature; history of vomiting, bleeding or trauma; and WBC, ESR and hemoglobin were recorded for all children. There was a significant increase of WBC (P<0.001) in children with FC, so we can deduce that the leukocytosis encountered in children with FC can be due to the convulsion itself. There was no significant difference in ESR (P=0.113) between the two groups; in fact, an elevated ESR is a result of underlying pathology. In stable patients without any indication for lumbar puncture, there is no need to assess WBC and ESR as indicators of underlying infection. If the patient is transferred to the pediatric ward and there is still no reason to suspect a bacterial infection, there is no need for a WBC test.
Karia Ritesh M
2012-04-01
Objective: The objectives of this study were to study the effect of smoking on Peak Expiratory Flow Rate and Maximum Voluntary Ventilation in apparently healthy tobacco smokers and non-smokers, and to compare the results of both groups to assess the effects of smoking. Method: The present study was carried out with the computerized pulmonary function test software 'Spiro Excel' on 50 non-smokers and 50 smokers. Smokers were divided into three groups. The full series of tests takes 4 to 5 minutes. Results were compared between the smoker and non-smoker groups by the unpaired t test; statistical significance was indicated by a p value < 0.05. Results: The actual values of Peak Expiratory Flow Rate and Maximum Voluntary Ventilation were significantly lower in all smoker groups than in non-smokers. The difference in actual mean values increases as the degree of smoking increases. [National J of Med Res 2012; 2(2): 191-193]
Siegler, Jason C; Marshall, Paul W M; Raftry, Sean; Brooks, Cristy; Dowswell, Ben; Romero, Rick; Green, Simon
2013-12-01
The purpose of this investigation was to assess the influence of sodium bicarbonate supplementation on maximal force production, rate of force development (RFD), and muscle recruitment during repeated bouts of high-intensity cycling. Ten male and female (n = 10) subjects completed two fixed-cadence, high-intensity cycling trials. Each trial consisted of a series of 30-s efforts at 120% peak power output (maximum graded test) that were interspersed with 30-s recovery periods until task failure. Prior to each trial, subjects consumed 0.3 g/kg sodium bicarbonate (ALK) or placebo (PLA). Maximal voluntary contractions were performed immediately after each 30-s effort. Maximal force (F max) was calculated as the greatest force recorded over a 25-ms period throughout the entire contraction duration while maximal RFD (RFD max) was calculated as the greatest 10-ms average slope throughout that same contraction. F max declined similarly in both the ALK and PLA conditions, with baseline values (ALK: 1,226 ± 393 N; PLA: 1,222 ± 369 N) declining nearly 295 ± 54 N [95% confidence interval (CI) = 84-508 N; P force vs. maximum rate of force development during a whole body fatiguing task.
Larson, Eric D.; St. Clair, Joshua R.; Sumner, Whitney A.; Bannister, Roger A.; Proenza, Cathy
2013-01-01
An inexorable decline in maximum heart rate (mHR) progressively limits human aerobic capacity with advancing age. This decrease in mHR results from an age-dependent reduction in “intrinsic heart rate” (iHR), which is measured during autonomic blockade. The reduced iHR indicates, by definition, that pacemaker function of the sinoatrial node is compromised during aging. However, little is known about the properties of pacemaker myocytes in the aged sinoatrial node. Here, we show that depressed excitability of individual sinoatrial node myocytes (SAMs) contributes to reductions in heart rate with advancing age. We found that age-dependent declines in mHR and iHR in ECG recordings from mice were paralleled by declines in spontaneous action potential (AP) firing rates (FRs) in patch-clamp recordings from acutely isolated SAMs. The slower FR of aged SAMs resulted from changes in the AP waveform that were limited to hyperpolarization of the maximum diastolic potential and slowing of the early part of the diastolic depolarization. These AP waveform changes were associated with cellular hypertrophy, reduced current densities for L- and T-type Ca2+ currents and the “funny current” (If), and a hyperpolarizing shift in the voltage dependence of If. The age-dependent reduction in sinoatrial node function was not associated with changes in β-adrenergic responsiveness, which was preserved during aging for heart rate, SAM FR, L- and T-type Ca2+ currents, and If. Our results indicate that depressed excitability of individual SAMs due to altered ion channel activity contributes to the decline in mHR, and thus aerobic capacity, during normal aging. PMID:24128759
Goasduff, A., E-mail: Alain.Goasduff@csnsm.in2p3.fr [Université de Strasbourg, IPHC, 23 rue du Loess, F-67037 Strasbourg (France); CNRS, UMR 7178, F-67037 Strasbourg (France); CSNSM, UMR 8609, IN2P3-CNRS, Université Paris-Sud 11, F-91405 Orsay Cedex (France); Valiente-Dobón, J.J. [Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro, I-35020 Legnaro (Italy); Lunardi, S. [Dipartimento di Fisica e Astronomia, Università di Padova and INFN, Sezione di Padova, I-35131 Padova (Italy); Haas, F. [Université de Strasbourg, IPHC, 23 rue du Loess, F-67037 Strasbourg (France); CNRS, UMR 7178, F-67037 Strasbourg (France); Gadea, A. [Instituto de Física Corpuscular, CSIC-Universitat de València, E-46980 Valencia (Spain); Angelis, G. de [Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro, I-35020 Legnaro (Italy); Bazzacco, D. [Dipartimento di Fisica e Astronomia, Università di Padova and INFN, Sezione di Padova, I-35131 Padova (Italy); Courtin, S. [Université de Strasbourg, IPHC, 23 rue du Loess, F-67037 Strasbourg (France); CNRS, UMR 7178, F-67037 Strasbourg (France); Farnea, E. [Dipartimento di Fisica e Astronomia, Università di Padova and INFN, Sezione di Padova, I-35131 Padova (Italy); Gottardo, A. [Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro, I-35020 Legnaro (Italy); Michelagnoli, C. [Dipartimento di Fisica e Astronomia, Università di Padova and INFN, Sezione di Padova, I-35131 Padova (Italy); and others
2014-09-11
The differential Recoil Distance Doppler Shift (RDDS) method after multinucleon transfer (MNT) reactions, used to measure lifetimes of excited states in neutron-rich nuclei, requires a thick energy degrader for the recoiling ejectiles, which are then detected in a spectrometer. This type of measurement greatly benefits from the new generation of segmented γ-ray detectors, such as the AGATA demonstrator, which offers unprecedented energy and angular resolution. In order to make an optimized choice of the material and thickness of the degrader for lifetime measurements using the RDDS method after MNT, an experiment was performed with the AGATA demonstrator. Counting rate measurements for different degraders are presented.
Loyka, Sergey; Gagnon, Francois
2009-01-01
Motivated by a recent surge of interest in convex optimization techniques, convexity/concavity properties of error rates of the maximum likelihood detector operating in the AWGN channel are studied and extended to frequency-flat slow-fading channels. Generic conditions are identified under which the symbol error rate (SER) is convex/concave for arbitrary multi-dimensional constellations. In particular, the SER is convex in SNR for any one- and two-dimensional constellation, and also in higher dimensions at high SNR. Pairwise error probability and bit error rate are shown to be convex at high SNR, for arbitrary constellations and bit mapping. Universal bounds for the SER 1st and 2nd derivatives are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) error rate, an implication for fading channels ("fa...
Macdonald, L R; Schmitz, R E; Alessio, A M; Wollenweber, S D; Stearns, C W; Ganin, A; Harrison, R L; Lewellen, T K; Kinahan, P E
2008-07-21
We measured count rates and scatter fraction on the Discovery STE PET/CT scanner in conventional 2D and 3D acquisition modes, and in a partial collimation mode between 2D and 3D. As part of the evaluation of using partial collimation, we estimated global count rates using a scanner model that combined computer simulations with an empirical live-time function. Our measurements followed the NEMA NU2 count rate and scatter-fraction protocol to obtain true, scattered and random coincidence events, from which noise equivalent count (NEC) rates were calculated. The effect of patient size was considered by using 27 cm and 35 cm diameter phantoms, in addition to the standard 20 cm diameter cylindrical count-rate phantom. Using the scanner model, we evaluated two partial collimation cases: removing half of the septa (2.5D) and removing two-thirds of the septa (2.7D). Based on predictions of the model, a 2.7D collimator was constructed. Count rates and scatter fractions were then measured in 2D, 2.7D and 3D. The scanner model predicted relative NEC variation with activity, as confirmed by measurements. The measured 2.7D NEC was equal or greater than 3D NEC for all activity levels in the 27 cm and 35 cm phantoms. In the 20 cm phantom, 3D NEC was somewhat higher (approximately 15%) than 2.7D NEC at 100 MBq. For all higher activity concentrations, 2.7D NEC was greater and peaked 26% above the 3D peak NEC. The peak NEC in 2.7D mode occurred at approximately 425 MBq, and was 26-50% greater than the peak 3D NEC, depending on object size. NEC in 2D was considerably lower, except at relatively high activity concentrations. Partial collimation shows promise for improved noise equivalent count rates in clinical imaging without altering other detector parameters.
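The NEC figure of merit used above follows the NEMA NU2 definition, NEC = T²/(T + S + kR) for true (T), scattered (S) and random (R) coincidence rates. A one-line sketch follows; the choice of k depends on how randoms are estimated and is an assumption here, not a detail taken from the paper:

```python
def noise_equivalent_count_rate(trues, scatter, randoms, k=2):
    """NEC = T^2 / (T + S + k*R); k=2 assumes online delayed-window
    randoms subtraction, k=1 a noiseless randoms estimate."""
    return trues ** 2 / (trues + scatter + k * randoms)

# illustrative (not measured) rates in kcps
print(noise_equivalent_count_rate(100.0, 40.0, 30.0))  # prints 50.0
```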
Rezaeian Mahdi
2015-01-01
Containment of a transport cask during both normal and accident conditions is important to the health and safety of the public and of the operators. Based on IAEA regulations, the releasable activity and maximum permissible volumetric leakage rate within the cask containing fuel samples of the Tehran Research Reactor enclosed in an irradiated capsule are calculated. The contributions to the total activity from the four sources of gas, volatiles, fines, and corrosion products are treated separately. These calculations are necessary to identify an appropriate leak test that must be performed on the cask, and the results can be utilized as the source term for dose evaluation in the safety assessment of the cask.
Isacco, L; Thivel, D; Duclos, M; Aucouturier, J; Boisseau, N
2014-06-01
Fat mass localization affects lipid metabolism differently at rest and during exercise in overweight and normal-weight subjects. The aim of this study was to investigate the impact of a low vs. high ratio of abdominal to lower-body fat mass (an index of adipose tissue distribution) on the exercise intensity (Lipox(max)) that elicits the maximum lipid oxidation rate in normal-weight women. Twenty-one normal-weight women (22.0 ± 0.6 years, 22.3 ± 0.1 kg m⁻²) were separated into two groups with either a low or a high abdominal to lower-body fat mass ratio [L-A/LB (n = 11) or H-A/LB (n = 10), respectively]. Lipox(max) and maximum lipid oxidation rate (MLOR) were determined during a submaximum incremental exercise test. Abdominal and lower-body fat mass were determined from DXA scans. The two groups did not differ in aerobic fitness, total fat mass, or total and localized fat-free mass. Lipox(max) and MLOR were significantly lower in H-A/LB vs. L-A/LB women (43 ± 3% VO(2max) vs. 54 ± 4% VO(2max), and 4.8 ± 0.6 mg min⁻¹ kg FFM⁻¹ vs. 8.4 ± 0.9 mg min⁻¹ kg FFM⁻¹, respectively). In normal-weight women, a predominantly abdominal fat mass distribution compared with a predominantly peripheral fat mass distribution is thus associated with a lower capacity to maximize lipid oxidation during exercise, as evidenced by their lower Lipox(max) and MLOR. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Mocroft, Amanda; Phillips, Andrew N; Gatell, Jose
2013-01-01
CD4 cell count and viral loads are used in clinical trials as surrogate endpoints for assessing the efficacy of newly available antiretrovirals. If antiretrovirals act through other pathways or increase the risk of disease, this would not be identified prior to licensing. The aim of this study was to investigate the CD4 cell count and viral load-specific rates of fatal and nonfatal AIDS and non-AIDS events according to current antiretrovirals.
Eva Garde
2012-11-01
Ages of marine mammals have traditionally been estimated by counting dentinal growth layers in teeth. However, this method is difficult to use on narwhals (Monodon monoceros) because of their special tooth structures, so alternative methods are needed. The aspartic acid racemization (AAR) technique has been used in age estimation studies of cetaceans, including narwhals. The purpose of this study was to estimate a species-specific racemization rate for narwhals by regressing aspartic acid D/L ratios in eye lens nuclei against growth layer groups in tusks (n=9). Two racemization rates were estimated: one by linear regression (r²=0.98), based on the assumption that age was known without error, and one based on a bootstrap study taking into account the uncertainty in the age estimation (r² between 0.88 and 0.98). The two estimated 2k_Asp values were identical to two significant figures. The 2k_Asp value from the bootstrap study was found to be 0.00229 ± 0.000089 SE, which corresponds to a racemization rate of 0.00114 yr⁻¹ ± 0.000044 SE. The intercept of 0.0580 ± 0.00185 SE corresponds to twice the (D/L)₀ value, which is then 0.0290 ± 0.00093 SE. We propose that this species-specific racemization rate and (D/L)₀ value be used in future AAR age estimation studies of narwhals, but also recommend the collection of tusks and eyes of narwhals to further improve the (D/L)₀ and 2k_Asp estimates obtained in this study.
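Given the reported slope and intercept, an age estimate follows from the standard linearized racemization relation ln[(1 + D/L)/(1 − D/L)] = 2k·age + 2(D/L)₀. The sketch below uses the study's central values only (uncertainties ignored); the function name and example D/L ratio are illustrative:

```python
import math

TWO_K_ASP = 0.00229   # regression slope reported for narwhal eye lens nuclei
INTERCEPT = 0.0580    # reported intercept, equal to 2 * (D/L)0

def aar_age(dl_ratio):
    """Age (years) from an eye-lens aspartic acid D/L ratio via the
    linearized racemization relation."""
    y = math.log((1 + dl_ratio) / (1 - dl_ratio))
    return (y - INTERCEPT) / TWO_K_ASP

# hypothetical D/L ratio of 0.05
print(round(aar_age(0.05), 1))  # prints 18.4
```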
Jorge Cuadrado Reyes
2011-05-01
This research developed an algorithm for calculating the maximum heart rate (max HR) of players in team sports in game situations. The sample was made up of thirteen players (aged 24 ± 3) from a Division Two handball team. Max HR was initially measured by the Course Navette test. Later, twenty-one training sessions were conducted in which HR and Rate of Perceived Exertion (RPE) were continuously monitored in each task. A linear regression analysis was done to derive a max HR prediction equation from the max HR of the three highest-intensity sessions. Results from this equation correlate significantly with data obtained in the Course Navette test and with those obtained by other indirect methods. The conclusion of this research is that this equation provides a very useful and easy way to measure max HR in real game situations, avoiding non-specific analytical tests and, therefore, laboratory testing. Key words: workout control, functional evaluation, prediction equation.
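The regression step can be sketched with ordinary least squares. The data below are entirely hypothetical and stand in for the per-player session maxima and test values described above; the point is only the mechanics of fitting a max HR prediction equation:

```python
def linfit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# hypothetical per-player data, in bpm:
# x = mean of the three highest session max HRs, y = Course Navette max HR
x = [182.0, 188.0, 191.0, 176.0, 185.0]
y = [185.0, 190.0, 194.0, 180.0, 187.0]
a, b = linfit(x, y)
print(round(a, 2), round(b, 1))  # prints 0.91 20.2
```

The fitted (a, b) pair then predicts match max HR from routinely monitored training data, avoiding a dedicated laboratory test.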
Mocroft, Amanda; Phillips, Andrew N; Ledergerber, Bruno;
2010-01-01
BACKGROUND: Patients receiving combination antiretroviral therapy (cART) might continue treatment with a virologically failing regimen. We sought to identify annual change in CD4(+) T-cell count according to levels of viraemia in patients on cART. METHODS: A total of 111,371 CD4(+) T-cell counts ...
Kuhn, Margaret G; Lenke, Lawrence G; Bridwell, Keith H; O'Donnell, June C; Luhmann, Scott J
2012-03-01
The erythrocyte sedimentation rate (ESR) and white blood cell (WBC) count are frequently obtained in the work-up of post-operative fever. However, their diagnostic utility depends upon comparison with normative peri-operative trends which have not yet been described. The purpose of this study is to define a range of erythrocyte sedimentation rates and white blood cell counts following spinal instrumentation and fusion in non-infected patients. Seventy-five patients underwent spinal instrumentation and fusion. The erythrocyte sedimentation rate and white blood cell count were recorded pre-operatively, at 3 and 7 days post-operatively, and at 1 and 3 months post-operatively. Both erythrocyte sedimentation rate and white blood cell count trends demonstrated an early peak, followed by a gradual return to normal. Peak erythrocyte sedimentation rates occurred within the first week post-operatively in 98% of patients. Peak white blood cell counts occurred with the first week in 85% of patients. In the absence of infection, the erythrocyte sedimentation rate was abnormally elevated in 78% of patients at 1 month and in 53% of patients at 3 months post-operatively. The white blood cell count was abnormally elevated in only 6% of patients at 1 month post-operatively. Longer surgical time was associated with elevated white cell count at 1 week post-operatively. The fusion of more vertebral levels had a negative relationship with elevated erythrocyte sedimentation rate at 1 week post-operatively. The anterior surgical approach was associated with significantly lower erythrocyte sedimentation rate at 1 month post-operatively and with lower white cell count at 1 week post-operatively. In non-infected spinal fusion surgeries, erythrocyte sedimentation rates are in the abnormal range in 78% of patients at 1 month and in 53% of patients at 3 months post-operatively, suggesting that the erythrocyte sedimentation rate is of limited diagnostic value in the early post-operative period.
Battaile, Brian C; Trites, Andrew W
2013-01-01
We propose a method to model the physiological link between somatic survival and reproductive output that reduces the number of parameters that need to be estimated by models designed to determine combinations of birth and death rates that produce historic counts of animal populations. We applied our Reproduction and Somatic Survival Linked (RSSL) method to the population counts of three species of North Pacific pinnipeds (harbor seals, Phoca vitulina richardii (Gray, 1864); northern fur seals, Callorhinus ursinus (L., 1758); and Steller sea lions, Eumetopias jubatus (Schreber, 1776))--and found our model outperformed traditional models when fitting vital rates to common types of limited datasets, such as those from counts of pups and adults. However, our model did not perform as well when these basic counts of animals were augmented with additional observations of ratios of juveniles to total non-pups. In this case, the failure of the ratios to improve model performance may indicate that the relationship between survival and reproduction is redefined or disassociated as populations change over time or that the ratio of juveniles to total non-pups is not a meaningful index of vital rates. Overall, our RSSL models show advantages to linking survival and reproduction within models to estimate the vital rates of pinnipeds and other species that have limited time-series of counts.
Shinohara, K., E-mail: shinohara.koji@jaea.go.jp; Ochiai, K.; Sukegawa, A. [Japan Atomic Energy Agency, Naka, Ibaraki 311-0193 (Japan); Ishii, K.; Kitajima, S. [Department of Quantum Science and Energy Engineering, Tohoku University, Sendai, Miyagi 980-8579 (Japan); Baba, M. [Cyclotron and Radioisotope Center, Tohoku University, Sendai, Miyagi 980-8578 (Japan); Sasao, M. [Organization for Research Initiatives and Development, Doshisha University, Kyoto 602-8580 (Japan)
2014-11-15
In order to increase the count rate capability of a neutron detection system as a whole, we propose a multi-stage neutron detection system. Experiments to test the effectiveness of this concept were carried out at the Fusion Neutronics Source. Comparing four alignment configurations, we found that the influence of an anterior stage on a posterior stage was negligible for the pulse height distribution. The two-stage system using 25 mm-thick scintillators achieved about 1.65 times the count rate capability of a single-detector system for d-D neutrons and about 1.8 times for d-T neutrons. These results suggest that the concept of a multi-stage detection system will work in practice.
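The throughput gain reported above can be sketched with a simple dead-time model. This is an illustrative calculation, not the paper's analysis: it assumes each stage behaves as an independent nonparalyzable counter with an assumed dead time, and that each anterior stage attenuates the flux seen by the posterior one by an assumed fraction.

```python
# Sketch (not from the paper): combined throughput of a multi-stage
# detector stack, assuming each stage is an independent nonparalyzable
# counter with dead time TAU and each stage attenuates the beam by ATT.

TAU = 1e-6   # dead time per stage, s (assumed)
ATT = 0.5    # fraction of neutrons surviving each stage (assumed)

def observed_rate(true_rate, tau=TAU):
    """Nonparalyzable dead-time model: m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * tau)

def stack_rate(incident_rate, n_stages):
    """Sum the observed rates of successive stages; each posterior
    stage sees the flux attenuated by the anterior ones."""
    total, rate = 0.0, incident_rate
    for _ in range(n_stages):
        total += observed_rate(rate)
        rate *= ATT
    return total

one = stack_rate(2e6, 1)
two = stack_rate(2e6, 2)
print(f"two-stage / single-stage throughput: {two / one:.2f}")
```

With these assumed parameters the two-stage gain lands between 1 and 2, in the same spirit as the measured factors of 1.65-1.8.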
Niu, Xiaofeng; Ye, Hongwei; Xia, Ting; Asma, Evren; Winkler, Mark; Gagnon, Daniel; Wang, Wenli
2015-07-07
Quantitative PET imaging is widely used in clinical diagnosis in oncology and neuroimaging. Accurate normalization correction for the efficiency of each line-of-response is essential for accurate quantitative PET image reconstruction. In this paper, we propose a normalization calibration method by using the delayed-window coincidence events from the scanning phantom or patient. The proposed method could dramatically reduce the 'ring' artifacts caused by mismatched system count-rates between the calibration and phantom/patient datasets. Moreover, a modified algorithm for mean detector efficiency estimation is proposed, which could generate crystal efficiency maps with more uniform variance. Both phantom and real patient datasets are used for evaluation. The results show that the proposed method could lead to better uniformity in reconstructed images by removing ring artifacts, and more uniform axial variance profiles, especially around the axial edge slices of the scanner. The proposed method also has the potential benefit to simplify the normalization calibration procedure, since the calibration can be performed using the on-the-fly acquired delayed-window dataset.
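The delayed-window idea rests on the fact that random coincidences factorize over detector pairs. Below is a minimal sketch of recovering relative crystal efficiencies from delayed counts under the standard randoms model r_ij ≈ 2τ·s_i·s_j; the paper's actual algorithm is a modified variant, and all numbers here are synthetic.

```python
import numpy as np

# Sketch (assumed model, not the paper's exact algorithm): delayed-window
# randoms between crystals i and j follow C_ij ~ 2*tau*s_i*s_j, so crystal
# singles (and hence relative efficiencies) can be recovered from row sums.

rng = np.random.default_rng(0)
true_eff = 1.0 + 0.1 * rng.standard_normal(64)   # hypothetical crystal efficiencies
singles = 1e4 * true_eff                         # singles rate per crystal, cps
tau = 6e-9                                       # coincidence window, s (assumed)

C = 2 * tau * np.outer(singles, singles)         # expected delayed-count matrix
np.fill_diagonal(C, 0.0)                         # no self-coincidences

row = C.sum(axis=1)                              # row sums ~ s_i * (S - s_i) * 2*tau
est = row / row.mean()                           # relative efficiency estimate
est *= true_eff.mean() / est.mean()              # fix overall scale for comparison

print("max relative error:", np.max(np.abs(est - true_eff) / true_eff))
```

Because the row sum is only approximately proportional to s_i (the small -s_i term), the estimate carries a sub-percent bias for many crystals, which is why refined fan-sum variants exist.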
Alternative Optimizations of X-ray TES Arrays: Soft X-rays, High Count Rates, and Mixed-Pixel Arrays
Kilbourne, C. A.; Bandler, S. R.; Brown, A.-D.; Chervenak, J. A.; Figueroa-Feliciano, E.; Finkbeiner, F. M.; Iyomoto, N.; Kelley, R. L.; Porter, F. S.; Smith, S. J.
2007-01-01
We are developing arrays of superconducting transition-edge sensors (TES) for imaging spectroscopy telescopes such as the XMS on Constellation-X. While our primary focus has been on arrays that meet the XMS requirements (foremost of which is an energy resolution of 2.5 eV at 6 keV and a bandpass from approx. 0.3 keV to 12 keV), we have also investigated other optimizations that might be used to extend the XMS capabilities. In one of these optimizations, improved resolution below 1 keV is achieved by reducing the heat capacity. Such pixels can be based on our XMS-style TESs with the separate absorbers omitted. These pixels can be added to an array with broadband response either as a separate array or interspersed, depending on other factors that include telescope design and science requirements. In one version of this approach, we have designed and fabricated a composite array of low-energy and broad-band pixels to provide high spectral resolving power over a broader energy bandpass than could be obtained with a single TES design. The array consists of alternating pixels with and without overhanging absorbers. To explore optimizations for higher count rates, we are also optimizing the design and operating temperature of pixels that are coupled to a solid substrate. We will present the performance of these variations and discuss other optimizations that could be used to enhance the XMS or enable other astrophysics experiments.
Lee, D.; Lim, K.; Park, K.; Lee, C.; Alexander, S.; Cho, G.
2017-03-01
In this study, an innovative fast X-ray photon-counting pixel for high X-ray flux applications is proposed. A computed tomography system typically uses X-ray fluxes up to 10^8 photons/mm2/sec at the detector, so a fast read-out is required to process individual X-ray photons; otherwise, pulse pile-up can occur at the output of the signal processing unit. The superimposed signals distort the number of incident X-ray photons, leading to count loss. To minimize such losses, a cross detection method was implemented in the photon-counting pixel. A maximum count rate under an X-ray tube voltage of 90 kV was acquired, reflecting the electrical test results of the proposed photon-counting pixel. A maximum count rate of 780 kcps was achieved with a conventional photon-counting pixel at a pulse processing time of 500 ns, which is the time for a pulse to return to the baseline from the initial rise. In contrast, a maximum count rate of about 8.1 Mcps was achieved with the proposed photon-counting pixel. From these results, it is clear that the maximum count rate was increased by approximately a factor of 10 by adopting the cross detection method. It is therefore an effective method for reducing count loss from pulse pile-up in a photon-counting pixel while maintaining the pulse processing time.
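The benefit of shortening the effective pulse-processing time can be illustrated with the standard paralyzable dead-time model; this is a generic sketch, not a model of the pixel's actual electronics.

```python
import math

# Sketch (assumed paralyzable dead-time model): the observed rate
# m = n*exp(-n*tau) peaks at n = 1/tau with m_max = 1/(e*tau), so cutting
# the effective pulse-processing time tau raises the maximum achievable
# count rate proportionally.

def observed(n, tau):
    """Observed rate for true rate n under paralyzable dead time tau."""
    return n * math.exp(-n * tau)

def max_rate(tau):
    """Peak observed rate, reached at true rate n = 1/tau."""
    return 1.0 / (math.e * tau)

print(f"tau = 500 ns: peak rate {max_rate(500e-9) / 1e3:.0f} kcps")
print(f"tau =  50 ns: peak rate {max_rate(50e-9) / 1e6:.2f} Mcps")
```

Under this assumed model a 10x shorter effective processing time yields a 10x higher peak rate, the same order of improvement the abstract reports for the cross detection method.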
Surti, S; Karp, J S [Department of Radiology, University of Pennsylvania, 110 Donner Building (HUP), 3400 Spruce Street, Philadelphia, PA 19104 (United States)
2005-12-07
A high count-rate simulation (HCRSim) model has been developed so that all results are derived from fundamental physics principles. Originally developed to study the behaviour of continuous sodium iodide (NaI(Tl)) detectors, this model is now applied to PET scanners based on pixelated Anger-logic detectors using lanthanum bromide (LaBr3), gadolinium orthosilicate (GSO) and lutetium orthosilicate (LSO) scintillators. This simulation has been used to study the effect on scanner deadtime and pulse pileup at high activity levels due to the scintillator stopping power (μ), decay time (τ) and energy resolution. Simulations were performed for a uniform 20 cm diameter x 70 cm long cylinder (NEMA NU2-2001 standard) in a whole-body scanner with an 85 cm ring diameter and a 25 cm axial field-of-view. Our results for these whole-body scanners demonstrate the potential of a pixelated Anger-logic detector and the relationship of its performance with the scanner NEC rate. Faster signal decay and a short coincidence timing window lead to a reduction in deadtime and randoms fraction in the LaBr3 and LSO scanners compared to GSO. The excellent energy resolution of LaBr3 leads to the lowest scatter fraction of all scanners and helps compensate for its reduced sensitivity compared to the GSO and LSO scanners, leading to the highest NEC values at high activity concentrations. The LSO scanner has the highest sensitivity of all the scanner designs investigated here, therefore leading to the highest peak NEC value, but at a lower activity concentration than that of LaBr3.
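The NEC figure of merit used to compare these scanner designs is commonly computed as NEC = T²/(T + S + R) from the trues (T), scatter (S) and randoms (R) rates; the operating points below are hypothetical, for illustration only.

```python
# Sketch: the noise-equivalent count (NEC) rate used to compare scanner
# designs, NEC = T^2 / (T + S + R), with hypothetical trues (T), scatter
# (S) and randoms (R) rates in kcps.

def nec(trues, scatter, randoms):
    """Noise-equivalent count rate, same units as the inputs."""
    return trues**2 / (trues + scatter + randoms)

# hypothetical operating points for two scintillators at one activity level:
# design B collects fewer trues but its better energy resolution and timing
# cut scatter and randoms, which can still win on NEC
print(f"design A: {nec(300.0, 120.0, 150.0):.1f} kcps")
print(f"design B: {nec(250.0, 70.0, 60.0):.1f} kcps")
```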
J Vijay Kumar
2015-01-01
Objectives: To determine whether long-term highly active antiretroviral therapy (HAART) alters salivary flow rate, and to compare its relation to CD4 count with unstimulated and stimulated whole saliva. Materials and Methods: A cross-sectional study was performed on 150 individuals divided into three groups. Group I: 50 human immunodeficiency virus (HIV) seropositive patients not on HAART; Group II: 50 HIV-infected subjects on HAART for less than 3 years (short-term HAART); Group III: 50 HIV-infected subjects on HAART for 3 years or more (long-term HAART). The spitting method proposed by Navazesh and Kumar was used for the measurement of unstimulated and stimulated salivary flow rates. The chi-square test and analysis of variance (ANOVA) were used for statistical analysis. Results: The mean CD4 count was 424.78±187.03, 497.82±206.11 and 537.6±264.00 in the respective groups. The majority of patients in all groups had a CD4 count between 401 and 600. Both unstimulated and stimulated whole salivary (UWS and SWS) flow rates in Group I were significantly higher than in Group II (P < 0.05). Unstimulated salivary flow rates between Group II and III subjects also differed significantly (P < 0.05). ANOVA between CD4 count and unstimulated and stimulated whole saliva in each group demonstrated a statistically significant relationship in Group II (P < 0.05). No significant relationship was found between CD4 count and stimulated whole saliva in any group. Conclusion: The reduction in CD4 cell counts was significantly associated with the salivary flow rates of HIV-infected individuals on long-term HAART.
2010-07-01
... PREPARING TOMORROW'S TEACHERS TO USE TECHNOLOGY § 614.6 What is the maximum indirect cost rate for all... requirements; or (3) Charged by the grantee to another Federal award. (Authority: 20 U.S.C. 6832)...
Wen, Xianfei; Enqvist, Andreas
2017-09-01
Cs2LiYCl6:Ce3+ (CLYC) detectors have demonstrated the capability to simultaneously detect γ-rays and thermal and fast neutrons with medium energy resolution, reasonable detection efficiency, and high pulse-shape discrimination performance. A disadvantage of CLYC detectors is their long scintillation decay times, which cause pulse pile-up at moderate input count rates. Pulse processing algorithms were developed based on triangular and trapezoidal filters to discriminate between neutrons and γ-rays at high count rates. The algorithms were first tested using low-rate data, where they exhibit a pulse-shape discrimination performance comparable to that of the charge comparison method. They were then evaluated at high count rates: neutrons and γ-rays were adequately identified with high throughput at rates of up to 375 kcps. The algorithm based on the triangular filter exhibits marginally higher discrimination capability than the trapezoidal-filter-based algorithm at both low and high rates. The algorithms have low computational complexity and are executable on an FPGA in real time. They are also suitable for other radiation detectors whose pulses pile up at high rates owing to long scintillation decay times.
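A trapezoidal shaper of the general kind named above can be written as a moving-average difference. The sketch below uses the ideal step-input form with illustrative parameters; real CLYC pulses have a finite decay time and would additionally need pole-zero correction.

```python
import numpy as np

def trapezoidal(x, k, g):
    """Moving-average-difference trapezoidal shaper:
    y[n] = y[n-1] + x[n] - x[n-k] - x[n-k-g] + x[n-2k-g], zero-padded,
    normalized so a unit step input peaks at 1 (rise k, flat top g)."""
    x = np.asarray(x, dtype=float)
    p = np.pad(x, (2 * k + g, 0))               # zero-pad so early indices exist
    d = (p[2 * k + g:] - p[k + g:len(p) - k]
         - p[k:len(p) - k - g] + p[:len(p) - 2 * k - g])
    return np.cumsum(d) / k

# a clean step of height 1 produces a trapezoid: rise over k samples,
# flat top of g samples, then a symmetric fall back to zero
step = np.concatenate([np.zeros(20), np.ones(60)])
y = trapezoidal(step, k=8, g=4)
print("flat-top value:", y.max())
```

The recursive form (one add per difference term plus an accumulator) is what makes this filter cheap enough for real-time FPGA use.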
Sada, H
1978-10-01
Effects of phentolamine (13.3, 26.5 and 53.0 µM), alprenolol (3.5, 7.0 and 17.5 µM) and prenylamine (2.4, 4.8 and 11.9 µM) on the transmembrane potential were studied in isolated guinea-pig papillary muscles superfused with Tyrode's solution. 1. Phentolamine, alprenolol and prenylamine reduced the maximum rate of rise of the action potential (V̇max) dose-dependently. Higher concentrations of phentolamine and prenylamine caused a loss of plateau in the majority of the preparations. Resting potential was not altered by any of the drugs. Readmission of drug-free Tyrode's solution reversed the changes induced by 13.3 µM phentolamine and all concentrations of alprenolol almost completely, but those induced by higher concentrations of phentolamine and all concentrations of prenylamine only slightly. 2. V̇max at steady state was increased with decreasing driving frequencies (0.5 and 0.25 Hz) and decreased with increasing ones (2-5 Hz) in comparison with that at 1 Hz. These changes were all exaggerated by the above drugs, particularly by prenylamine. 3. Prenylamine and, to a lesser degree, phentolamine and alprenolol dose-dependently delayed the recovery process of V̇max in premature responses. 4. V̇max in the first response after interruption of stimulation recovered toward the predrug value in the presence of the three drugs. The time constants of the recovery process ranged between 10.5 and 15.0 s for phentolamine and between 4.5 and 15.5 s for alprenolol; the time constant of the main component of the recovery process with prenylamine was estimated to be approximately 2 s. 5. On the basis of the model proposed by Hondeghem and Katzung (1977), it is suggested that the drug molecules associate with open sodium channels and dissociate slowly from closed channels, and that the inactivation parameter in the drug-associated channels is shifted in the hyperpolarizing direction.
Mazhar A. Memon
2016-04-01
ABSTRACT Objective: To evaluate the correlation between visual prostate score (VPSS) and maximum flow rate (Qmax) in men with lower urinary tract symptoms. Material and Methods: This cross-sectional study was conducted at a university hospital. Sixty-seven adult male patients >50 years of age were enrolled in the study after signing informed consent. Qmax and voided volume were recorded from the uroflowmetry graph, and VPSS was assessed at the same time. Education level was assessed in defined groups. The Pearson correlation coefficient was computed for VPSS and Qmax. Results: Mean age was 66.1±10.1 years (median 68). The mean voided volume on uroflowmetry was 268±160 mL (median 208) and the mean Qmax was 9.6±4.96 mL/s (median 9.0). The mean VPSS score was 11.4±2.72 (median 11.0). In the univariate linear regression analysis there was a strong negative Pearson's correlation between VPSS and Qmax (r = -0.848, p < 0.001). In the multiple linear regression analysis there was a significant correlation between VPSS and Qmax after adjusting for the effects of age, voided volume (V.V) and level of education. Multiple linear regression analysis of the independent variables showed no significant correlation between VPSS and the independent factors age (p = 0.27), level of education (p = 0.941) and V.V (p = 0.082). Conclusion: There is a significant negative correlation between VPSS and Qmax. The VPSS can be used in lieu of the IPSS score. Men even with limited educational background can complete the VPSS without assistance.
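The statistic at the heart of this study is Pearson's correlation coefficient. A minimal sketch with made-up data points (not the study's data):

```python
import math

# Sketch: Pearson's correlation coefficient as used for VPSS vs. Qmax;
# the data points below are hypothetical, for illustration only.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

vpss = [8, 9, 10, 12, 14, 15]      # hypothetical symptom scores
qmax = [18, 16, 14, 10, 7, 5]      # hypothetical flow rates, mL/s
print(f"r = {pearson_r(vpss, qmax):.3f}")   # strongly negative, as in the study
```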
Silva, H G; Lopes, I
Heliospheric modulation of galactic cosmic rays links solar cycle activity with the neutron monitor count rate on Earth. A less direct relation holds between neutron monitor count rate and the atmospheric electric field, because different atmospheric processes, including fluctuations in the ionosphere, are involved. Although a full quantitative model is still lacking, this link is supported by solid statistical evidence. Thus, a connection between solar cycle activity and the atmospheric electric field is expected. To gain a deeper insight into these relations, sunspot area (NOAA, USA), neutron monitor count rate (Climax, Colorado, USA), and atmospheric electric field (Lisbon, Portugal) are presented here in a phase space representation. The period considered covers two solar cycles (21, 22) and extends from 1978 to 1990. Two solar maxima were observed in this dataset, one in 1979 and another in 1989, as well as one solar minimum in 1986. The two main observations of the present study were: (1) similar short-term topological features of the phase space representations of the three variables; (2) a long-term phase space radius synchronization between solar cycle activity, neutron monitor count rate, and potential gradient (confirmed by absolute correlation values above ~0.8). Finally, the methodology proposed here can be used to obtain the relations between other atmospheric parameters (e.g., solar radiation) and solar cycle activity.
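A phase-space representation of the kind described can be built from a signal and its rate of change. The sketch below uses synthetic, amplitude-modulated stand-ins for the sunspot and neutron-monitor series (not the actual datasets) and correlates their phase-space radii.

```python
import numpy as np

def phase_radius(x):
    """Radius in a 2-D phase space of (normalized signal, normalized rate
    of change); for an amplitude-modulated signal it tracks the envelope."""
    x = (x - x.mean()) / x.std()
    v = np.gradient(x)
    v = (v - v.mean()) / v.std()
    return np.hypot(x, v)

t = np.linspace(0.0, 2.0, 1000)
# synthetic stand-ins: fast oscillations under slowly varying, nearly
# synchronized envelopes (the neutron proxy is anticorrelated in sign,
# which the radius is insensitive to)
sunspots = (1.0 + 0.5 * np.sin(np.pi * t)) * np.sin(10 * np.pi * t)
neutrons = -(1.0 + 0.5 * np.sin(np.pi * t + 0.1)) * np.sin(10 * np.pi * t + 0.1)

r = np.corrcoef(phase_radius(sunspots), phase_radius(neutrons))[0, 1]
print(f"phase-space radius correlation: {r:.2f}")
```

The radius discards the sign of the oscillation, which is one way a long-term "radius synchronization" can show up even between anticorrelated series.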
Park, Woo-Chul; Seo, Inho; Kim, Shin-Hye; Lee, Yong-Jae; Ahn, Song Vogue
2017-01-01
Inflammation is an important underlying mechanism in the pathogenesis of atherosclerosis, and an elevated resting heart rate underlies the process of atherosclerotic plaque formation. We hypothesized an association between resting heart rate and subclinical inflammation. Resting heart rate was recorded at baseline in the KoGES-ARIRANG (Korean Genome and Epidemiology Study on Atherosclerosis Risk of Rural Areas in the Korean General Population) cohort study, and was then divided into quartiles. Subclinical inflammation was measured by white blood cell count and high-sensitivity C-reactive protein. We used progressively adjusted regression models with terms for muscle mass, body fat proportion, and adiponectin in the fully adjusted models. We examined inflammatory markers as both continuous and categorical variables, using the clinical cut point of the highest quartile of white blood cell count (≥7,900/mm3) and ≥3 mg/dL for high-sensitivity C-reactive protein. Participants had a mean age of 56.3±8.1 years and a mean resting heart rate of 71.4±10.7 beats/min; 39.1% were men. In a fully adjusted model, an increased resting heart rate was significantly associated with a higher white blood cell count and higher levels of high-sensitivity C-reactive protein in both continuous (P for trend <0.001) and categorical (P for trend <0.001) models. An increased resting heart rate is associated with a higher level of subclinical inflammation among healthy Korean people.
Carb counting; Carbohydrate-controlled diet; Diabetic diet; Diabetes-counting carbohydrates ... Many foods contain carbohydrates (carbs), including: Fruit and fruit juice Cereal, bread, pasta, and rice Milk and milk products, soy milk Beans, legumes, ...
National Oceanic and Atmospheric Administration, Department of Commerce — Database of seal counts from aerial photography. Counts by image, site, species, and date are stored in the database along with information on entanglements and...
... their spleen removed surgically Use of birth control pills (oral contraceptives) Some conditions may cause a temporary (transitory) increased ... increased platelet counts include estrogen and birth control pills (oral contraceptives). Mildly decreased platelet counts may be seen in ...
欧阳习; 尹吉林; 李小华
2009-01-01
Objective: To measure the background counts of a SIEMENS LSO-crystal PET/CT. Methods: The LSO crystal background counts of a SIEMENS Biograph 16HR PET/CT were measured using the PET Monitor tool of the scanner's Syngo software, or by performing a single-bed-position blank PET/CT scan. Results: The LSO crystal background count rates of the SIEMENS Biograph 16HR PET/CT were approximately: net trues 4.5 counts/s, randoms 545 counts/s, prompts 550 counts/s, and singles rate 745,750 counts/s, with standard deviations of 0.26, 2.52, 1.53 and 7656.24, respectively. Conclusion: A single 6-minute, one-bed-position blank PET/CT scan is a simple and practical method for measuring the background count rate of an LSO-crystal PET/CT.
Maurin, D; Derome, L; Ghelfi, A; Hubert, G
2014-01-01
Particle count rates at a given Earth location and altitude result from the convolution of (i) the interstellar (IS) cosmic-ray fluxes outside the solar cavity, (ii) the time-dependent modulation of IS into top-of-atmosphere (TOA) fluxes, (iii) the rigidity cut-off (or geomagnetic transmission function) and grammage at the counter location, (iv) the atmosphere's response to incoming TOA cosmic rays (shower development), and (v) the counter's response to the various particles/energies in the shower. Count rates from neutron monitors or muon counters are therefore a proxy for solar activity. In this paper, we review all these ingredients, discuss how their uncertainties impact count rate calculations, and how they translate into variations/uncertainties in the level of solar modulation φ (in the simple Force-Field approximation). The main uncertainty for neutron monitors is related to the yield function. However, many other effects have a significant impact, at the 5-10% level, on φ values. We find no clear ranking...
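The Force-Field approximation mentioned here has a closed form for protons. The sketch below uses a toy power-law interstellar flux (not a fitted model) to show how a larger modulation level φ suppresses the low-energy TOA flux.

```python
# Sketch of the Force-Field approximation: the top-of-atmosphere (TOA)
# proton flux follows from the interstellar (IS) flux via
#   J_TOA(E) = J_IS(E + phi) * E(E + 2m) / ((E + phi)(E + phi + 2m)),
# where E is kinetic energy, m the proton rest mass, and phi the
# modulation level (for protons, Z/A = 1, the shift in GeV equals phi in GV).

M_P = 0.938  # proton rest mass, GeV

def j_is(ekin):
    """Toy interstellar proton flux (power law); NOT a fitted IS model."""
    return 1.0e4 * (ekin + M_P) ** -2.7

def j_toa(ekin, phi):
    e_is = ekin + phi
    factor = ekin * (ekin + 2 * M_P) / (e_is * (e_is + 2 * M_P))
    return j_is(e_is) * factor

# stronger modulation (solar maximum) suppresses the low-energy flux more
for phi in (0.4, 1.0):
    print(f"phi = {phi} GV -> J_TOA(1 GeV) = {j_toa(1.0, phi):.1f}")
```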
Petriş, M; Caragheorgheopol, G.; Deppner, I.; Frühauf, J.; Herrmann, N.; Kiš, M.; Loizeau, P-A.; Petrovici, M.; Rǎdulescu, L.; Simion, V.; Simon, C.
2016-01-01
Multi-gap RPC prototypes with readout on a multi-strip electrode were developed for the small polar angle region of the CBM-TOF subdetector, the most demanding zone in terms of granularity and counting rate. The prototypes are based on low-resistivity (~10^10 Ω·cm) glass electrodes for performing in a high counting rate environment. The strip width/pitch size was chosen to fulfill the impedance matching with the front-end electronics and the granularity requirements of the innermost zone of the CBM-TOF wall. The in-beam tests using secondary particles produced in heavy ion collisions on a Pb target at SIS18 - GSI Darmstadt and SPS - CERN were focused on the performance of the prototype in conditions similar to the ones expected at SIS100/FAIR. An efficiency larger than 98% and a system time resolution of the order of 70-80 ps were obtained in a high counting rate and high multiplicity environment.
Petriş, M.; Bartoş, D.; Caragheorgheopol, G.; Deppner, I.; Frühauf, J.; Herrmann, N.; Kiš, M.; Loizeau, P.-A.; Petrovici, M.; Rădulescu, L.; Simion, V.; Simon, C.
2016-09-01
Multi-gap RPC prototypes with a multi-strip-electrode readout were developed for the small polar angle region of the CBM-TOF subdetector, the most demanding zone in terms of granularity and counting rate. The prototypes are based on low-resistivity (~10^10 Ω·cm) glass electrodes for performing in a high counting rate environment. The strip width/pitch size was chosen to fulfill the impedance matching with the front-end electronics and the granularity requirements of the innermost zone of the CBM-TOF wall. The in-beam tests using secondary particles produced in heavy ion collisions on a Pb target at SIS18 - GSI Darmstadt and SPS - CERN were focused on the performance of the prototypes in conditions similar to the ones expected at SIS100/FAIR. An efficiency larger than 98% and a system time resolution of the order of 70-80 ps were obtained in a high counting rate and high multiplicity environment.
Effect of a biological activated carbon filter on particle counts
Su-hua WU; Bing-zhi DONG; Tie-jun QIAO; Jin-song ZHANG
2008-01-01
Due to the importance of biological safety in drinking water quality, and the disadvantages of traditional methods for detecting typical microorganisms such as Cryptosporidium and Giardia, it is necessary to develop an alternative. Particle counting is a quantitative measurement of particulate matter in water. The removal rate of particle counts was previously used as an indicator of the effectiveness of a biological activated carbon (BAC) filter in removing Cryptosporidium and Giardia. The particle counts in a BAC filter effluent over one operational period, and the effects of BAC filter construction and operational parameters, were investigated with a 10 m3/h pilot plant. The results indicated that the maximum particle count in backwash remnant water was as high as 1296 counts/mL, and about 1.5 h was needed for it to fall from the maximum to less than 50 counts/mL. During the standard filtration period, particle counts stayed constant at less than 50 counts/mL for 5 d, except when influenced by sand filter backwash remnant water. The removal rates of particle counts in the BAC filter are related to the characteristics of the carbon. For example, a columned carbon and a sand bed removed 33.3% and 8.5% of particles, respectively, while the particle counts in the effluent from a cracked BAC filter were higher than those of the influent. There was no significant difference among particle removal rates at different filtration rates. A high post-ozone dosage (>2 mg/L) plays an important role in particle count removal; when the dosage was 3 mg/L, the removal rates by the carbon layers and sand beds decreased by 17.5% and increased by 9.5%, respectively, compared with a 2 mg/L dosage.
34 CFR 694.9 - What is the maximum indirect cost rate for an agency of a State or local government?
2010-07-01
... for an agency of a State or local government? Notwithstanding 34 CFR 75.560-75.562 and 34 CFR 80.22, the maximum indirect cost rate that an agency of a State or local government receiving funds under... a State or local government? 694.9 Section 694.9 Education Regulations of the Offices of...
Lee, Sang-Yong; Ortega, Antonio
2000-04-01
We address the problem of online rate control in digital cameras, where the goal is to achieve near-constant distortion for each image. Digital cameras usually have a pre-determined number of images that can be stored for the given memory size, and they require limited time delay and constant quality for each image. Due to time delay restrictions, each image should be stored before the next image is received. Therefore, we need to define an online rate control based on the amount of memory used by previously stored images, the current image, and the estimated rate of future images. In this paper, we propose an algorithm for online rate control in which an adaptive reference, a 'buffer-like' constraint, and a minimax criterion (as a distortion metric to achieve near-constant quality) are used. The adaptive reference is used to estimate future images, and the 'buffer-like' constraint is required to keep enough memory for future images. We show that using our algorithm to select the online bit allocation for each image in a randomly given set of images provides near-constant quality. Also, we show that our result is near optimal when a minimax criterion is used, i.e., it achieves a performance close to that obtained by applying an off-line rate control that assumes exact knowledge of the images. Suboptimal behavior is only observed in situations where the distribution of images is not truly random (e.g., if most of the 'complex' images are captured at the end of the sequence). Finally, we propose a T-step delay rate control algorithm and, using the result of the 1-step delay rate control algorithm, we show that this algorithm removes the suboptimal behavior.
2010-01-01
... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Young chicken and squab slaughter... INSPECTION REGULATIONS Operating Procedures § 381.67 Young chicken and squab slaughter inspection rate... inspector per minute under the traditional inspection procedure for the different young chicken and...
Ye, Jia-kai; Zhang, Jin-tao; Kong, Yan; Xu, Tan; Zou, Ting-ting; Zhang, Yong-hong; Zhang, Shao-yan
2012-09-01
To investigate the relationship between white blood cell count, neutrophils ratio, erythrocyte sedimentation rate and short-term outcomes among patients with acute ischemic stroke at admission to the hospital. A total of 2675 acute ischemic stroke patients were included in this study. Data on demographic characteristics, lifestyle, history of disease, white blood cell count (WBC), neutrophils ratio (NEUR), erythrocyte sedimentation rate (ESR) and clinical outcomes were collected for all participants. Poor clinical outcome was defined as neurologic deficiency (NIHSS ≥ 5) at discharge or death during hospitalization. White blood cell count, neutrophils ratio and erythrocyte sedimentation rate were higher in patients with a poor outcome than in those without. According to the quartile ranges, WBC, NEUR and ESR at admission were divided into four levels. After multivariate adjustment, compared with WBC ≤ 5.6×10(9)/L, the odds ratio (95% confidence interval) of poor outcome with WBC ≥ 8.7×10(9)/L was 1.883 (1.306-2.716). Compared with NEUR ≤ 0.56, the odds ratios (95% confidence intervals) of poor outcome with NEUR 0.57-0.64 and NEUR ≥ 0.74 were 1.572 (1.002-2.466) and 2.577 (1.698-3.910), respectively. Compared with ESR ≤ 4 mm/h, the odds ratio (95% confidence interval) of poor outcome with ESR ≥ 17 mm/h was 2.426 (1.233-4.776). Elevated WBC count and NEUR at admission were significantly and positively associated with poor clinical outcomes among patients with acute ischemic stroke (trend test P < 0.05), whereas no significant trend was observed for ESR (trend test P > 0.05). There appeared to be associations between WBC, NEUR, ESR and poor outcome among patients with acute ischemic stroke at admission to the hospital; both elevated WBC count and NEUR showed significantly positive associations with poor clinical outcomes.
Bártová, H.; Kučera, J.; Musílek, L.; Trojek, T.
2014-11-01
In order to evaluate the age from the equivalent dose and to obtain an optimized and efficient procedure for thermoluminescence (TL) dating, it is necessary to obtain the values of both the internal and the external dose rates from dated samples and from their environment. The measurements described and compared in this paper refer to bricks from historic buildings and a fine-grain dating method. The external doses are therefore negligible, if the samples are taken from a sufficient depth in the wall. However, both the alpha dose rate and the beta and gamma dose rates must be taken into account in the internal dose. The internal dose rate to fine-grain samples is caused by the concentrations of natural radionuclides 238U, 235U, 232Th and members of their decay chains, and by 40K concentrations. Various methods can be used for determining trace concentrations of these natural radionuclides and their contributions to the dose rate. The dose rate fraction from 238U and 232Th can be calculated, e.g., from the alpha count rate, or from the concentrations of 238U and 232Th, measured by neutron activation analysis (NAA). The dose rate fraction from 40K can be calculated from the concentration of potassium measured, e.g., by X-ray fluorescence analysis (XRF) or by NAA. Alpha counting and XRF are relatively simple and are accessible for an ordinary laboratory. NAA can be considered as a more accurate method, but it is more demanding regarding time and costs, since it needs a nuclear reactor as a neutron source. A comparison of these methods allows us to decide whether the time- and cost-saving simpler techniques introduce uncertainty that is still acceptable.
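The quantities discussed feed into the standard TL age equation, age = equivalent dose / annual dose rate, with the alpha component weighted by an effectiveness factor. All values below are illustrative, not from the paper.

```python
# Sketch of the TL age equation behind the abstract: age = equivalent
# dose / total annual dose rate, where the alpha contribution is scaled
# by an effectiveness factor k. All numbers are hypothetical.

K_ALPHA = 0.1          # alpha effectiveness for fine grains (assumed)

def tl_age(equiv_dose_gy, d_alpha, d_beta, d_gamma):
    """Age in years from equivalent dose (Gy) and dose rates (Gy/year)."""
    annual = K_ALPHA * d_alpha + d_beta + d_gamma
    return equiv_dose_gy / annual

# hypothetical brick sample: 2.1 Gy equivalent dose; dose rates in Gy/year
age = tl_age(2.1, d_alpha=2.0e-2, d_beta=2.5e-3, d_gamma=1.3e-3)
print(f"estimated age: {age:.0f} years")
```

This also shows why the trace-element measurements matter: an error in any one dose-rate component propagates directly into the denominator and hence the age.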
High quantum efficiency S-20 photocathodes for photon counting applications
Orlov, Dmitry A; Pinto, Serge Duarte; Glazenborg, Rene; Kernen, Emilie
2016-01-01
Based on conventional S-20 processes, a new series of high quantum efficiency (QE) photocathodes has been developed that can be specifically tuned for use in the ultraviolet, blue or green regions of the spectrum. The QE values exceed 30% at maximum response, and the dark count rate is found to be as low as 30 Hz/cm2 at room temperature. This combination of properties along with a fast temporal response makes these photocathodes ideal for application in photon counting detectors.
Asymmetry in the effect of magnetic field on photon detection and dark counts in bended nanostrips
Semenov, A; Lusche, R; Ilin, K; Siegel, M; Hubers, H -W; Bralovic, N; Dopf, K; Vodolazov, D Yu
2015-01-01
Current crowding in the bends of superconducting nano-structures not only restricts the measurable critical current in such structures but also redistributes the local probabilities for dark and light counts to appear. Using structures made from strips in the form of a square spiral, which contain bends with the very same curvature with respect to the directions of the bias current and the external magnetic field, we have shown that dark counts as well as light counts at small photon energies originate from areas around the bends. The minimum in the rate of dark counts reproduces the asymmetry of the maximum critical current density as a function of the magnetic field. In contrast, the minimum in the rate of light counts demonstrates the opposite asymmetry. The rate of light counts becomes symmetric at large currents and fields. Comparing locally computed absorption probabilities for photons and the simulated threshold detection current, we found the approximate location of the areas near the bends which deliver asymmetric light counts. Any asymmetry is a...
Henzl, Vladimir [Los Alamos National Laboratory; Croft, Stephen [Los Alamos National Laboratory; Swinhoe, Martyn T. [Los Alamos National Laboratory; Tobin, Stephen J. [Los Alamos National Laboratory
2012-07-13
Inspired by the approach of Bignan and Martin-Didier (ESARDA 1991), we introduce a novel (instrument-independent) approach based on multiplication and the passive neutron count rate. Based on simulations of SFL-1, the accuracy of determination of ^{tot}Pu content with the new approach is ~1.3-1.5%. The method is applicable to the DDA instrument, since it can measure both multiplication and the passive neutron count rate. A comparison of the pros and cons of measuring/determining ^{239}Pu_{eff} and ^{tot}Pu suggests a potential for enhanced diversion detection sensitivity.
Santavicca, Daniel F; Prober, Daniel E; 10.1117/12.883979
2012-01-01
We describe a superconducting transition edge sensor based on a nanoscale niobium detector element. This device is predicted to be capable of energy-resolved near-IR single-photon detection with a GHz count rate. The increased speed and sensitivity of this device compared to traditional transition edge sensors result from the very small electronic heat capacity of the nanoscale detector element. In the present work, we calculate the predicted thermal response time and energy resolution. We also discuss approaches for achieving efficient optical coupling to the sub-wavelength detector element using a resonant near-IR antenna.
Geist, William H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-12-01
This set of slides begins by giving background and a review of neutron counting; three attributes of a verification item are discussed: ^{240}Pu_{eff} mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and leakage multiplication. It then takes up neutron detector systems – theory & concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.
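The singles/doubles/triples construction mentioned above can be sketched via the reduced factorial moments of a measured multiplicity histogram. This is a minimal illustration only; it omits the dead-time, gate-fraction and point-model corrections a real multiplicity analysis applies.

```python
from math import comb

def factorial_moments(hist, kmax=3):
    """Reduced factorial moments m_k = sum_n C(n, k) P(n) of a measured
    neutron multiplicity distribution, where hist[n] is the number of gates
    in which n neutrons were recorded. In multiplicity counting, the singles,
    doubles and triples rates are proportional to m1, m2 and m3 (before
    dead-time and gate-fraction corrections)."""
    total = sum(hist)
    return [sum(comb(n, k) * c for n, c in enumerate(hist)) / total
            for k in range(1, kmax + 1)]
```

For example, a toy histogram `[10, 5, 3, 2]` (gates with 0, 1, 2, 3 counts) gives m1 = 0.85, m2 = 0.45, m3 = 0.1.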
Dang, Cuong Cao; Le, Vinh Sy; Gascuel, Olivier; Hazes, Bart; Le, Quang Si
2014-10-24
Amino acid replacement rate matrices are a crucial component of many protein analysis systems such as sequence similarity search, sequence alignment, and phylogenetic inference. Ideally, the rate matrix reflects the mutational behavior of the actual data under study; however, estimating amino acid replacement rate matrices requires large protein alignments and is computationally expensive and complex. As a compromise, sub-optimal pre-calculated generic matrices are typically used for protein-based phylogeny. Sequence availability has now grown to a point where problem-specific rate matrices can often be calculated if the computational cost can be controlled. The most time-consuming step in estimating rate matrices by maximum likelihood is building maximum-likelihood phylogenetic trees from protein alignments. We propose a new procedure, called FastMG, to overcome this obstacle. The key innovation is the alignment-splitting algorithm that splits alignments with many sequences into non-overlapping sub-alignments prior to estimating amino acid replacement rates. Experiments with different large data sets showed that the FastMG procedure was an order of magnitude faster than without splitting. Importantly, there was no apparent loss in matrix quality if an appropriate splitting procedure was used. FastMG is a simple, fast and accurate procedure to estimate amino acid replacement rate matrices from large data sets. It enables researchers to study the evolutionary relationships for specific groups of proteins or taxa with optimized, data-specific amino acid replacement rate matrices. The programs, data sets, and the new mammalian mitochondrial protein rate matrix are available at http://fastmg.codeplex.com.
Analog multivariate counting analyzers
Nikitin, A V; Armstrong, T P
2003-01-01
Characterizing rates of occurrence of various features of a signal is of great importance in numerous types of physical measurements. Such signal features can be defined as certain discrete coincidence events, e.g. crossings of a signal with a given threshold, or occurrence of extrema of a certain amplitude. We describe measuring rates of such events by means of analog multivariate counting analyzers. Given a continuous scalar or multicomponent (vector) input signal, an analog counting analyzer outputs a continuous signal with the instantaneous magnitude equal to the rate of occurrence of certain coincidence events. The analog nature of the proposed analyzers allows us to reformulate many problems of the traditional counting measurements, and cast them in a form which is readily addressed by methods of differential calculus rather than by algebraic or logical means of digital signal processing. Analog counting analyzers can be easily implemented in discrete or integrated electronic circuits, do not suffer fro...
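The discrete coincidence events described above (e.g. threshold crossings) are conventionally counted digitally; a minimal sketch of the rate of upward threshold crossings of a uniformly sampled signal, for comparison with the analog approach:

```python
def upward_crossing_rate(samples, threshold, dt):
    """Rate of upward crossings of `threshold` in a uniformly sampled signal
    with sample spacing `dt`. A crossing is counted when the signal passes
    from below to at-or-above the threshold between consecutive samples."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if a < threshold <= b)
    return crossings / (dt * (len(samples) - 1))
```

The analog analyzer of the abstract instead outputs this rate as a continuous signal, which is what makes it amenable to differential-calculus methods.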
Kite, Edwin S.; Mayer, David P.
2017-04-01
Small-crater counts on Mars light-toned sedimentary rock are often inconsistent with any isochron; these data are usually plotted then ignored. We show (using an 18-HiRISE-image, >10^4-crater dataset) that these non-isochron crater counts are often well-fit by a model where crater production is balanced by crater obliteration via steady exhumation. For these regions, we fit erosion rates. We infer that Mars light-toned sedimentary rocks typically erode at ~10^2 nm/yr, when averaged over 10 km^2 scales and 10^7-10^8 yr timescales. Crater-based erosion-rate determination is consistent with independent techniques, but can be applied to nearly all light-toned sedimentary rocks on Mars. Erosion is swift enough that radiolysis cannot destroy complex organic matter at some locations (e.g. paleolake deposits at SW Melas), but radiolysis is a severe problem at other locations (e.g. Oxia Planum). The data suggest that the relief of the Valles Marineris mounds is currently being reduced by wind erosion, and that dust production on Mars < 3 Gya greatly exceeds the modern reservoir of mobile dust.
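The production-obliteration balance above can be sketched in one line: in steady state the observed crater density equals the production rate times the survival lifetime, and the lifetime of a crater is roughly its depth divided by the erosion rate. The depth/diameter ratio and the input numbers below are illustrative assumptions, not the paper's fitted values.

```python
def erosion_rate_nm_per_yr(crater_density_per_km2, production_per_km2_per_yr,
                           diameter_m, depth_to_diameter=0.2):
    """Steady-state balance: observed density N = production rate * lifetime,
    with lifetime = crater depth / erosion rate. Solving for the erosion
    rate E = production_rate * depth / N; returned in nm/yr."""
    depth_m = depth_to_diameter * diameter_m
    lifetime_yr = crater_density_per_km2 / production_per_km2_per_yr
    return depth_m / lifetime_yr * 1e9
```

With an assumed density of 100 craters/km^2 of 10 m diameter and a production rate of 1e-5 craters/km^2/yr, this gives 200 nm/yr, consistent in order of magnitude with the ~10^2 nm/yr quoted above.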
Mikulec, Bettina; McPhate, J B; Tremsin, A S; Siegmund, O H W; Clark, Allan G; CERN. Geneva
2005-01-01
The future of ground-based optical astronomy lies with advancements in adaptive optics (AO) to overcome the limitations that the atmosphere places on high resolution imaging. A key technology for AO systems on future very large telescopes is the wavefront sensor (WFS), which detects the optical phase error and sends corrections to deformable mirrors. Telescopes with >30 m diameters will require WFS detectors that have large pixel formats (512x512), low noise (<3 e-/pixel) and very high frame rates (~1 kHz). These requirements have led to the idea of a bare CMOS active pixel device (the Medipix2 chip) functioning in counting mode as an anode with noiseless readout for a microchannel plate (MCP) detector at a 1 kHz continuous frame rate. First measurement results obtained with this novel detector are presented both for UV photons and beta particles.
Kamate, Wasim Ismail; Vibhute, Nupura Aniket; Baad, Rajendra Krishna
2017-04-01
Pregnancy, the period from conception till birth, causes changes in the functioning of the human body as a whole, and specifically in the oral cavity, that may favour the emergence of dental caries. Many studies have shown pregnant women to be at increased risk for dental caries; however, the specific salivary caries risk factors and the particular period of pregnancy at heightened risk for dental caries are yet to be explored and give scope for further research in this area. The aim of the present study was to assess the severity of dental caries in pregnant women compared to non-pregnant women by evaluating parameters like the Decayed, Missing, Filled Teeth (DMFT) index, salivary Streptococcus mutans count, flow rate, pH and total calcium content. A total of 50 first-time pregnant women in the first trimester were followed during their second trimester, third trimester and postpartum period for the evaluation of DMFT by World Health Organization (WHO) scoring criteria, salivary flow rate by the drooling method, salivary pH by pH meter, salivary total calcium content by a bioassay test kit and salivary Streptococcus mutans count by semiautomatic counting of colonies grown on Mitis Salivarius (MS) agar supplemented with 0.2 U/ml of bacitracin and 10% sucrose. The observations of the pregnant women were then compared with the same parameters evaluated in 50 non-pregnant women. Paired t-test and Wilcoxon signed-rank test were performed to assess the association between the study parameters. Evaluation of the different caries risk factors between pregnant and non-pregnant women clearly showed that pregnant women were at a higher risk for dental caries. Comparison of caries risk parameters during the three trimesters and the postpartum period showed that the salivary Streptococcus mutans count increased significantly in the second trimester, third trimester and postpartum period, while the mean pH and mean salivary total calcium content decreased in the third trimester and postpartum period. These changes
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2012-01-01
We analyze the relationship between maximum cluster mass, M_max, and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H2) and star formation rate (Sigma_SFR) in the flocculent galaxy M33, using published gas data and a catalog of more than 600 young star clusters in its disk. By comparing the radial distributions of gas and most massive cluster masses, we find that M_max is proportional to Sigma_gas^4.7, M_max is proportional to Sigma_H2^1.3, and M_max is proportional to Sigma_SFR^1.0. We rule out that these correlations result from the size of the sample; hence, the change of the maximum cluster mass must be due to physical causes.
Gian Paolo Beretta
2008-08-01
A rate equation for a discrete probability distribution is discussed as a route to describe smooth relaxation towards the maximum entropy distribution compatible at all times with one or more linear constraints. The resulting dynamics follows the path of steepest entropy ascent compatible with the constraints. The rate equation is consistent with the Onsager theorem of reciprocity and the fluctuation-dissipation theorem. The mathematical formalism was originally developed to obtain a quantum theoretical unification of mechanics and thermodynamics. It is presented here in a general, non-quantal formulation as part of an effort to develop tools for the phenomenological treatment of non-equilibrium problems with applications in engineering, biology, sociology, and economics. The rate equation is also extended to include the case of assigned time-dependences of the constraints and the entropy, such as for modeling non-equilibrium energy and entropy exchanges.
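A schematic form of the steepest-entropy-ascent rate equation described above (a hedged reconstruction for illustration; the exact operator form is given in Beretta's papers) can be written for probabilities $p_j$ subject to linear constraints $\sum_j a_{jk}\, p_j = c_k$:

```latex
\frac{dp_j}{dt} = -\frac{1}{\tau}\, p_j \left( \ln p_j + \alpha + \sum_k \beta_k\, a_{jk} \right),
```

where $\tau$ sets the relaxation time scale and the multipliers $\alpha$, $\beta_k$ are determined at each instant so that normalization and the constraints are conserved along the motion. The stationary condition $\ln p_j = -\alpha - \sum_k \beta_k a_{jk}$ is precisely the maximum-entropy distribution compatible with the constraints, consistent with the relaxation behavior the abstract describes.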
California Environmental Health Tracking Program — This dataset contains case counts, rates, and confidence intervals of asthma (ICD9-CM 493.0-493.9) and myocardial infarction (ICD9-CM 410) inpatient hospitalizations...
A Mocroft
2012-11-01
Background: CD4 counts and viral loads are used in clinical trials as surrogate endpoints for assessing the efficacy of newly available antiretrovirals. If antiretrovirals act through other pathways or negatively affect the risk of disease, this would not be identified prior to licensing. The aims of this study were to investigate the CD4- and viral-load-specific rates of fatal and non-fatal AIDS and non-AIDS events according to current antiretrovirals. Methods: Poisson regression was used to compare overall events (fatal or non-fatal AIDS, non-AIDS or death), AIDS events (fatal and non-fatal) or non-AIDS events (fatal or non-fatal) for specific nucleoside pairs and third drugs used with >1000 person-years of follow-up (PYFU) after January 1st 2001. Results: 9801 patients were included. The median baseline date was January 2004 (interquartile range [IQR] January 2001-February 2007), age was 40.4 (IQR 34.6-47.3) years, and time since starting cART was 3.3 (IQR 0.9-5.1) years. At baseline, the median nadir CD4 was 162 (IQR 71-257)/mm3, baseline CD4 was 390 (IQR 249-571)/mm3, viral load was 1.9 (IQR 1.7-3.3) log10 copies/ml, and 2961 (30.2%) had a prior AIDS diagnosis, a median of 6.4 years prior to baseline. During 42372.5 PYFU, 1203 events (437 AIDS and 766 non-AIDS) occurred. The overall event rate was 2.8 per 100 PYFU (95% confidence interval [CI] 2.7-3.0), the rate of AIDS events was 1.0 (95% CI 0.9-1.1) and of non-AIDS events was 1.8 (95% CI 1.7-1.9). Of the AIDS events, 53 (12.1%) were fatal, as were 239 (31.2%) of the non-AIDS events. After adjustment, there was weak evidence of a difference in the overall event rates between nucleoside pairs (global p-value=0.084) and third drugs (global p-value=0.031). Compared to zidovudine/lamivudine, patients taking abacavir/lamivudine (adjusted incidence rate ratio [aIRR] 1.22; 95% CI 0.99-1.49) and abacavir plus one other nucleoside (aIRR 1.51; 95% CI 1.14-2.02) had an increased incidence of overall events. Comparing the third drugs
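The event rates above are simple incidence rates per 100 person-years. A minimal sketch using the common log-scale Poisson approximation for the confidence interval reproduces the overall figure:

```python
import math

def incidence_rate_ci(events, person_years, per=100, z=1.96):
    """Incidence rate per `per` person-years with a log-scale (Poisson)
    confidence interval: rate * exp(+/- z / sqrt(events))."""
    rate = events / person_years * per
    half = z / math.sqrt(events)
    return rate, rate * math.exp(-half), rate * math.exp(half)
```

With the abstract's totals (1203 events over 42372.5 PYFU) this gives 2.8 (2.7-3.0) per 100 PYFU, matching the reported overall rate; the adjusted rate ratios come from the Poisson regression model, not from this crude calculation.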
Magdich, Salwa; Jarboui, Raja; Rouina, Béchir Ben; Boukhris, Makki; Ammar, Emna
2012-07-15
The effects of spraying olive mill wastewater (OMW) onto olive-tree fields were investigated. Three OMW levels (50, 100 and 200 m(3)ha(-1)year(-1)) were applied over six successive years. Olive-crop yields, the progress of phenolic compounds, phytotoxicity and microbial counts were studied at different soil depths. Olive yield improved with the OMW level applied. Soil polyphenolic content increased progressively in relation to OMW levels in all the investigated layers. However, no significant difference was noted at the lowest treatment rate compared to the control field. In the upper soil layers (0-40 cm), five phenolic compounds were identified over six consecutive years of OMW spraying. In all the soil layers, the radish germination index exceeded 85%. However, tomato germination test values decreased with the applied OMW amount. For all treatments, microbial counts increased with OMW quantities and spraying frequency. Matrix correlation showed a strong relationship between soil polyphenol content and microorganisms, and a negative one with the tomato germination index.
Lovell, Dale I; Cuneo, Ross; Gass, Greg C
2010-06-01
This study examined the effect of strength training (ST) and short-term detraining on maximum force and rate of force development (RFD) in previously sedentary, healthy older men. Twenty-four older men (70-80 years) were randomly assigned to a ST group (n = 12) and a C group (control, n = 12). Training consisted of three sets of six to ten repetitions on an incline squat at 70-90% of one-repetition maximum, three times per week for 16 weeks, followed by 4 weeks of detraining. Regional muscle mass was assessed before and after training by dual-energy X-ray absorptiometry. Training increased RFD, maximum bilateral isometric force, force in 500 ms, upper leg muscle mass and strength above pre-training values (14, 25, 22, 7 and 90%, respectively; P < 0.05). These results indicate that ST increases the maximum force and RFD of older men. However, older individuals may lose some neuromuscular performance after a period of short-term detraining, and resistance exercise should be performed on a regular basis to maintain training adaptations.
Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo [Institut Laue Langevin, Grenoble (France)
2015-07-01
A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transits smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras, as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology, which have resulted in increased quantum efficiency, lower noise, and frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, as such allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras does not allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results. (authors)
Salim, Arwa, E-mail: arwa.salim@eee.strath.ac.uk [University of Strathclyde, Scotland (United Kingdom); Crockett, Louise [University of Strathclyde, Scotland (United Kingdom); McLean, John; Milne, Peter [D-TACQ Solutions, Scotland (United Kingdom)
2012-12-15
Highlights: • The development of a new digital signal processing platform is described. • The system will allow users to configure the real-time signal processing through software routines. • The architecture of the DRUID system and signal processing elements is described. • A prototype of the DRUID system has been developed for the digital chopper-integrator. • The results of acquisition on 96 channels at 500 kSamples/s per channel are presented. - Abstract: Real-time signal processing in plasma fusion experiments is required for control and for data reduction as plasma pulse times grow longer. The development time and cost for these high-rate, multichannel signal processing systems can be significant. This paper proposes a new digital signal processing (DSP) platform for the data acquisition system that will allow users to easily customize real-time signal processing systems to meet their individual requirements. The D-TACQ reconfigurable user in-line DSP (DRUID) system carries out the signal processing tasks in hardware co-processors (CPs) implemented in an FPGA, with an embedded microprocessor (µP) for control. In the fully developed platform, users will be able to choose co-processors from a library and configure programmable parameters through the µP to meet their requirements. The DRUID system is implemented on a Spartan 6 FPGA, on the new rear transition module (RTM-T), a field upgrade to existing D-TACQ digitizers. As proof of concept, a multiply-accumulate (MAC) co-processor has been developed, which can be configured as a digital chopper-integrator for long pulse magnetic fusion devices. The DRUID platform allows users to set options for the integrator, such as the number of masking samples. Results from the digital integrator are presented for a data acquisition system with 96 channels simultaneously acquiring data.
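As a software sketch of the chopper-integrator behavior described above (the real implementation is a MAC co-processor in the FPGA; the function and parameter names here are hypothetical), the configured number of masking samples after each chopper transition is excluded from the running integral:

```python
def chopped_integral(samples, chop_edges, mask_len, dt):
    """Digital chopper-integrator sketch: integrate `samples` (spacing `dt`),
    skipping `mask_len` masking samples after each chopper transition index
    in `chop_edges` so that switching transients do not corrupt the sum."""
    masked = set()
    for edge in chop_edges:
        masked.update(range(edge, edge + mask_len))
    return sum(s * dt for i, s in enumerate(samples) if i not in masked)
```

In hardware the same accumulate-with-mask logic runs per channel at the full sample rate, which is what the MAC co-processor provides.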
Perry, Mike; Kader, Gary
1998-01-01
Presents an activity on the simplification of penguin counting by employing the basic ideas and principles of sampling to teach students to understand and recognize its role in statistical claims. Emphasizes estimation, data analysis and interpretation, and central limit theorem. Includes a list of items for classroom discussion. (ASK)
Damonte, Kathleen
2004-01-01
Scientists use sampling to get an estimate of things they cannot easily count. A population is made up of all the organisms of one species living together in one place at the same time. All of the people living together in one town are considered a population. All of the grasshoppers living in a field are a population. Scientists keep track of the…
Malekifarsani, A; Skachek, M A
2009-10-01
shown that the concentrations of the following radionuclides are limited by solubility and precipitate around the waste and buffer: U, Np, Ra, Sm, Zr, Se, Tc, and Pd. The sensitivity of maximum release rates to precipitation shows that some nuclides, such as Cs-135, Nb-94, Nb-93m, Zr-93, Sn-126, Th-230, Pu-240, Pu-242, Pu-239, Cm-245, Am-243, U-233, Ac-227, Pb-210, Pa-231 and Th-229, change very little when the maximum release rate from the EBS is computed with precipitation in the buffer material eliminated. Other nuclides, such as Se-79, Tc-99, Pd-107, Th-232, U-236, U-233, Ra-226, Np-237, U-235, U-234, and U-238, change markedly in maximum release rate compared to the case that takes precipitation into account. In the sensitivity of maximum release rates to the inclusion of stable isotopes (according to the inventory table), only some nuclides have stable isotopes present in the vitrified waste, and the calculation shows that the release rates of Pd-107 and Se-79 increase greatly when the stable isotopes are eliminated. The sensitivity of maximum release rates to retardation by sorption shows that some nuclides, such as Pu-240, Pu-241, Pu-239, Cm-245, Am-241, Cm-246, and Am-243, increase at some times when the maximum release rate from the EBS is computed with retardation in the buffer material eliminated. Some nuclides, such as U-235, U-233 and U-236, decrease slightly in maximum release rate because their parents are short-lived and are released from the EBS before decaying to their daughters. If the characteristic time taken for a nuclide to diffuse across the buffer exceeds its half-life, then the release rate of that nuclide from the EBS will be attenuated by radioactive decay. Thus, the retardation of the diffusion process due to sorption tends to reduce the release rates of short-lived nuclides more effectively than those of long-lived ones. For example, the release rates of Pu-240, Cm-246 and Am-241, which are relatively short-lived and strongly sorbing, are very small
Nocente, M., E-mail: massimo.nocente@mib.infn.it [EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom); Dipartimento di Fisica, Università di Milano-Bicocca, Milano (Italy); Istituto di Fisica del Plasma “Piero Caldirola,” Milano (Italy); Rigamonti, D.; Croci, G.; Gorini, G. [Dipartimento di Fisica, Università di Milano-Bicocca, Milano (Italy); Istituto di Fisica del Plasma “Piero Caldirola,” Milano (Italy); Perseo, V. [Dipartimento di Fisica, Università di Milano-Bicocca, Milano (Italy); Tardocchi, M.; Cremona, A.; Muraro, A. [Istituto di Fisica del Plasma “Piero Caldirola,” Milano (Italy); Boltruczyk, G.; Broslawski, A.; Gosk, M.; Korolczuk, S.; Zychor, I. [Narodowe Centrum Badan Jadrowych (NCBJ), Otwock-Swierk (Poland); Kiptily, V. [Culham Centre for Fusion Energy, Culham (United Kingdom); Mazzocco, M.; Strano, E. [Dipartimento di Fisica, Istituto Nazionale di Fisica Nucleare, Padova (Italy); Collaboration: EUROfusion Consortium, JET, Culham Science Centre, Abingdon (United Kingdom)
2016-11-15
Gamma-ray spectroscopy measurements at MHz counting rates have been carried out, for the first time, with a compact spectrometer based on a LaBr3 scintillator and silicon photomultipliers. The instrument, which is also insensitive to magnetic fields, has been developed in view of the upgrade of the gamma-ray camera diagnostic for α particle measurements in deuterium-tritium plasmas of the Joint European Torus. Spectra were measured up to 2.9 MHz with a projected energy resolution of 3%-4% in the 3-5 MeV range, of interest for fast ion physics studies in fusion plasmas. The results reported here pave the way to first time measurements of the confined α particle profile in high power plasmas of the next deuterium-tritium campaign at the Joint European Torus.
Foudray, Angela M K; Habte, Frezghi; Chinn, Garry; Zhang, Jin; Levin, Craig S
2006-01-01
We are investigating a high-sensitivity, high-resolution positron emission tomography (PET) system for clinical use in the detection, diagnosis and staging of breast cancer. Using conventional figures of merit, design parameters were evaluated for count rate performance, module dead time, and construction complexity. The detector system modeled comprises extremely thin position-sensitive avalanche photodiodes coupled to lutetium oxyorthosilicate scintillation crystals. Previous investigations of detector geometries with Monte Carlo indicated that one of the largest impacts on sensitivity is local scintillation crystal density when considering systems having the same average scintillation crystal density (same crystal packing fraction and system solid-angle coverage). Our results show the system has very good scatter and randoms rejection at clinical activity ranges (approximately 200 μCi).
Henzl, Vladimir [Los Alamos National Laboratory; Croft, Stephen [Los Alamos National Laboratory; Swinhoe, Martyn T. [Los Alamos National Laboratory; Tobin, Stephen J. [Los Alamos National Laboratory
2012-07-18
A key objective of the Next Generation Safeguards Initiative (NGSI) is to evaluate and develop non-destructive assay (NDA) techniques to determine the elemental plutonium content in a commercial-grade nuclear spent fuel assembly (SFA) [1]. Within this framework, we investigate by simulation a novel analytical approach based on combined information from a passive measurement of the total neutron count rate of an SFA and its multiplication determined by active interrogation using an instrument based on the Differential Die-Away technique (DDA). We use detailed MCNPX simulations across an extensive set of SFA characteristics to establish the approach and demonstrate its robustness. It is predicted that the Pu content can be determined by the proposed method to within a few percent.
Luis Eduardo Cruz-Martínez
2014-10-01
Full Text Available Background. Formulas to predict maximum heart rate (MHR) have been used for many years in different populations. Objective. To verify the significance and the association of the Tanaka and 220-age formulas when compared to real maximum heart rate. Materials and methods. 30 subjects (22 men, 8 women) between 18 and 30 years of age were evaluated on a cycle ergometer, and their real MHR values were statistically compared with the values of the formulas currently used to predict MHR. Results. Neither Tanaka (p=0.0026) nor 220-age (p=0.000003) predicts real MHR, nor does a linear association exist between them. Conclusions. Because these formulas overestimate the real MHR value, we suggest a correction of 6 bpm to the final result; this value represents the median of the difference between the Tanaka value and the real MHR. Both Tanaka (r=0.272) and 220-age (r=0.276) are inadequate predictors of MHR during exercise at the elevation of Bogotá in subjects of 18 to 30 years of age, although further study with a larger sample size is suggested.
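The two prediction formulas under test, plus the suggested 6 bpm correction, can be written down directly. A minimal sketch (the Tanaka formula 208 - 0.7 x age is the standard published form; the correction is subtracted because the abstract reports that both formulas overestimate real MHR):

```python
def mhr_220(age):
    """Classical estimate: 220 - age (bpm)."""
    return 220 - age

def mhr_tanaka(age):
    """Tanaka et al. estimate: 208 - 0.7 * age (bpm)."""
    return 208 - 0.7 * age

def mhr_corrected(age):
    """Tanaka estimate minus the 6 bpm correction suggested in the abstract."""
    return mhr_tanaka(age) - 6

# For a 25-year-old subject
print(mhr_220(25), mhr_tanaka(25), mhr_corrected(25))  # 195 190.5 184.5
```

The subject age is illustrative; the study's cohort spanned 18 to 30 years.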
Drescher, A; Yoho, M; Landsberger, S; Durbin, M; Biegalski, S; Meier, D; Schwantes, J
2017-04-01
A radiation detection system consisting of two cerium-doped lanthanum bromide (LaBr3:Ce) scintillation detectors in a gamma-gamma coincidence configuration has been used to demonstrate the advantages that coincident detection provides relative to a single detector, and the advantages that LaBr3:Ce detectors provide relative to high-purity germanium (HPGe) detectors. Signal-to-noise ratios of select photopeak pairs have been compared between the two detector types in both single and coincident configurations in order to quantify the performance of each configuration. The efficiency and energy resolution of LaBr3:Ce detectors have been determined and compared to HPGe detectors. Coincident gamma-ray pairs from the radionuclides (152)Eu and (133)Ba have been identified in a sample dominated by (137)Cs. Gamma-gamma coincidence successfully reduced the Compton continuum from the large (137)Cs peak, revealed several coincident gamma energies characteristic of these nuclides, and improved the signal-to-noise ratio relative to single detector measurements. LaBr3:Ce detectors performed at count rates several times higher than can be achieved with HPGe detectors. The standard background spectrum consisting of peaks associated with transitions within the LaBr3:Ce crystal has also been significantly reduced. It is shown that LaBr3:Ce detectors have the unique capability to perform gamma-gamma coincidence measurements in very high count rate scenarios, which can potentially benefit nuclear safeguards in situ measurements of spent nuclear fuel.
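The coincidence filtering described above amounts to pairing events from the two detectors whose timestamps fall within a short window. A minimal sketch using a two-pointer scan over time-ordered event lists; the window width and timestamps are illustrative assumptions, not values from the study:

```python
def coincident_pairs(t1, t2, window_ns=50.0):
    """Pair events from two detectors whose timestamps (ns) fall within a
    coincidence window, using a two-pointer scan over sorted event lists."""
    t1, t2 = sorted(t1), sorted(t2)
    pairs = []
    i = j = 0
    while i < len(t1) and j < len(t2):
        dt = t1[i] - t2[j]
        if abs(dt) <= window_ns:
            pairs.append((t1[i], t2[j]))
            i += 1
            j += 1
        elif dt > window_ns:
            j += 1  # detector-2 event too early; advance it
        else:
            i += 1  # detector-1 event too early; advance it
    return pairs

# Toy timestamps: two true coincidences and one unpaired event per detector
print(coincident_pairs([100.0, 500.0, 900.0], [110.0, 700.0, 905.0]))
```

In practice the pulse-height of each paired event would then be histogrammed to build the coincidence spectrum.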
Shaw, A; Takács, I; Pagilla, K R; Murthy, S
2013-10-15
The Monod equation is often used to describe biological treatment processes and is the foundation for many activated sludge models. The Monod equation includes a "half-saturation coefficient" to describe the effect of substrate limitations on the process rate, and it is customary to consider this parameter a constant for a given system. The purpose of this study was to develop a methodology and use it to show that the half-saturation coefficient for denitrification is not constant but is in fact a function of the maximum denitrification rate. A 4-step procedure is developed to investigate the dependency of half-saturation coefficients on the maximum rate, and two different models are used to describe this dependency: (a) an empirical linear model and (b) a deterministic model based on Fick's law of diffusion. Both models prove better at describing denitrification kinetics than assuming a fixed K(NO3) at low nitrate concentrations. The empirical model is more utilitarian, whereas the model based on Fick's law has a fundamental basis that enables the intrinsic K(NO3) to be estimated. In this study, data from 56 denitrification rate tests were analyzed, and it was found that the extant K(NO3) varied between 0.07 mgN/L and 1.47 mgN/L (5th and 95th percentiles, respectively) with an average of 0.47 mgN/L. In contrast to this, the intrinsic K(NO3) estimated for the diffusion model was 0.01 mgN/L, which indicates that the extant K(NO3) is greatly influenced by, and mostly describes, diffusion limitations.
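As a minimal sketch of the kinetics discussed above: the Monod rate with an empirical linear dependence of the half-saturation coefficient on the maximum rate. The fit constants a and b below are illustrative assumptions, not values from the study:

```python
def monod_rate(S, r_max, K):
    """Monod kinetics: process rate at substrate concentration S (mgN/L)."""
    return r_max * S / (K + S)

def k_no3_linear(r_max, a=0.05, b=0.05):
    """Empirical linear model of the study's form: K(NO3) increases with the
    maximum rate. Coefficients a and b are illustrative only."""
    return a + b * r_max

# At low nitrate, a faster-denitrifying sludge sees a larger extant K(NO3)
S = 0.5  # mgN/L, in the low-nitrate range discussed above
for r_max in (2.0, 8.0):  # illustrative maximum rates
    K = k_no3_linear(r_max)
    print(round(monod_rate(S, r_max, K), 3))
```

With a fixed K the two rates would scale exactly with r_max; the rate-dependent K damps that scaling at low substrate, which is the effect the study quantifies.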
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Rosewarne, P J; Wilson, J M; Svendsen, J C
2016-01-01
Metabolic rate is one of the most widely measured physiological traits in animals and may be influenced by both endogenous (e.g. body mass) and exogenous factors (e.g. oxygen availability and temperature). Standard metabolic rate (SMR) and maximum metabolic rate (MMR) are two fundamental physiological variables providing the floor and ceiling in aerobic energy metabolism. The total amount of energy available between these two variables constitutes the aerobic metabolic scope (AMS). A laboratory exercise aimed at an undergraduate-level physiology class, which details the appropriate data acquisition methods and calculations to measure oxygen consumption rates in rainbow trout Oncorhynchus mykiss, is presented here. Specifically, the teaching exercise employs intermittent flow respirometry to measure SMR and MMR, derives AMS from the measurements and demonstrates how AMS is affected by environmental oxygen. Students' results typically reveal a decline in AMS in response to environmental hypoxia. The same techniques can be applied to investigate the influence of other key factors on metabolic rate (e.g. temperature and body mass). Discussion of the results develops students' understanding of the mechanisms underlying these fundamental physiological traits and the influence of exogenous factors. More generally, the teaching exercise outlines essential laboratory concepts in addition to metabolic rate calculations, data acquisition and unit conversions that enhance competency in quantitative analysis and reasoning. Finally, the described procedures are generally applicable to other fish species or aquatic breathers such as crustaceans (e.g. crayfish) and provide an alternative to using higher (or more derived) animals to investigate questions related to metabolic physiology.
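The core calculation in the exercise, oxygen consumption rate from the slope of oxygen decline during a closed respirometry phase and aerobic scope from SMR and MMR, can be sketched as follows. The chamber volume, fish mass and slopes are illustrative assumptions, not data from the article:

```python
def mo2(o2_slope, volume_l, mass_kg):
    """Oxygen consumption rate (mg O2 / kg / h) from the rate of oxygen
    decline (mg O2 / L / h) measured while the respirometer is sealed."""
    return o2_slope * volume_l / mass_kg

def aerobic_scope(smr, mmr):
    """Absolute aerobic metabolic scope: the window between SMR and MMR."""
    return mmr - smr

# Illustrative trout measurements (assumed values)
smr = mo2(1.0, 2.5, 0.25)   # resting oxygen decline
mmr = mo2(5.0, 2.5, 0.25)   # oxygen decline immediately after exercise
print(smr, mmr, aerobic_scope(smr, mmr))  # 10.0 50.0 40.0
```

Under hypoxia MMR typically falls while SMR stays roughly constant, so the computed AMS shrinks, which is the pattern students are expected to observe.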
Ma, Jingxing; Mungoni, Lucy Jubeki; Verstraete, Willy; Carballa, Marta
2009-07-01
The maximum propionic acid (HPr) removal rate (R(HPr)) was investigated in two lab-scale Upflow Anaerobic Sludge Bed (UASB) reactors. Two feeding strategies were applied, by modifying the hydraulic retention time (HRT) in the UASB(HRT) and the influent HPr concentration in the UASB(HPr), respectively. The experiment was divided into three main phases: phase 1, influent with only HPr; phase 2, HPr with macro-nutrient supplementation; and phase 3, HPr with macro- and micro-nutrient supplementation. During phase 1, the maximum R(HPr) achieved was less than 3 g HPr-COD L(-1) d(-1) in both reactors. However, the subsequent supplementation of macro- and micro-nutrients during phases 2 and 3 allowed the R(HPr) to be increased up to 18.1 and 32.8 g HPr-COD L(-1) d(-1), respectively, corresponding to an HRT of 0.5 h in the UASB(HRT) and an influent HPr concentration of 10.5 g HPr-COD L(-1) in the UASB(HPr). Therefore, the high operational capacity of these reactor systems, specifically converting HPr at high throughput and high influent HPr levels, was demonstrated. Moreover, the presence of macro- and micro-nutrients is clearly essential for stable and high HPr removal in anaerobic digestion.
Gonzalez-Lopezlira, Rosa A; Kroupa, Pavel
2013-01-01
We analyze the relationship between maximum cluster mass and surface densities of total gas (Sigma_gas), molecular gas (Sigma_H_2), neutral gas (Sigma_HI) and star formation rate (Sigma_SFR) in the grand design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. We find for clusters older than 25 Myr that M_3rd, the median of the 5 most massive clusters, is proportional to Sigma_HI^0.4. There is no correlation with Sigma_gas, Sigma_H2, or Sigma_SFR. For clusters younger than 10 Myr, M_3rd is proportional to Sigma_HI^0.6, M_3rd is proportional to Sigma_gas^0.5; there is no correlation with either Sigma_H_2 or Sigma_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but M_3rd is proportional to Sigma_g...
Hony, S; Galliano, F; Galametz, M; Cormier, D; Chen, C -H R; Dib, S; Hughes, A; Klessen, R S; Roman-Duval, J; Smith, L; Bernard, J -P; Bot, C; Carlson, L; Gordon, K; Indebetouw, R; Lebouteiller, V; Lee, M -Y; Madden, S C; Meixner, M; Oliveira, J; Rubio, M; Sauvage, M; Wu, R
2015-01-01
The rate at which interstellar gas is converted into stars, and its dependence on environment, is one of the pillars on which our understanding of the visible Universe is built. We present a comparison of the surface density of young stars (Sigma_*) and dust surface density (Sigma_d) across NGC346 (N66) in 115 independent pixels of 6x6 pc^2. We find a correlation between Sigma_* and Sigma_d with a considerable scatter. A power law fit to the data yields a steep relation with an exponent of 2.6+-0.2. We convert Sigma_d to gas surface density (Sigma_g) and Sigma_* to star formation rate (SFR) surface densities (Sigma_SFR), using simple assumptions for the gas-to-dust mass ratio and the duration of star formation. The derived total SFR ((4+-1)x10^-3 M_sun/yr) is consistent with SFR estimated from the Ha emission integrated over the Ha nebula. On small scales the Sigma_SFR derived using Ha systematically underestimates the count-based Sigma_SFR, by up to a factor of 10. This is due to ionizing photons escaping the ...
Hony, S.; Gouliermis, D. A.; Galliano, F.; Galametz, M.; Cormier, D.; Chen, C.-H. R.; Dib, S.; Hughes, A.; Klessen, R. S.; Roman-Duval, J.; Smith, L.; Bernard, J.-P.; Bot, C.; Carlson, L.; Gordon, K.; Indebetouw, R.; Lebouteiller, V.; Lee, M.-Y.; Madden, S. C.; Meixner, M.; Oliveira, J.; Rubio, M.; Sauvage, M.; Wu, R.
2015-04-01
The rate at which interstellar gas is converted into stars, and its dependence on environment, is one of the pillars on which our understanding of the visible Universe is built. We present a comparison of the surface density of young stars (Σ⋆) and dust surface density (Σdust) across NGC 346 (N66) in 115 independent pixels of 6 × 6 pc2. We find a correlation between Σ⋆ and Σdust with a considerable scatter. A power-law fit to the data yields a steep relation with an exponent of 2.6 ± 0.2. We convert Σdust to gas surface density (Σgas) and Σ⋆ to star formation rate (SFR) surface densities (ΣSFR), using simple assumptions for the gas-to-dust mass ratio and the duration of star formation. The derived total SFR ((4 ± 1) × 10-3 M⊙ yr-1) is consistent with SFR estimated from the Hα emission integrated over the Hα nebula. On small scales the ΣSFR derived using Hα systematically underestimates the count-based ΣSFR, by up to a factor of 10. This is due to ionizing photons escaping the area where the stars are counted. We find that individual 36 pc2 pixels fall systematically above integrated disc galaxies in the Schmidt-Kennicutt diagram by on average a factor of ˜7. The NGC 346 average SFR over a larger area (90 pc radius) lies closer to the relation but remains high by a factor of ˜3. The fraction of the total mass (gas plus young stars) locked in young stars is systematically high (˜10 per cent) within the central 15 pc and systematically lower outside (2 per cent), which we interpret as variations in star formation efficiency. The inner 15 pc is dominated by young stars belonging to a centrally condensed cluster, while the outer parts are dominated by a dispersed population. Therefore, the observed trend could reflect a change of star formation efficiency between clustered and non-clustered star formation.
Alfredo Tomasetta
2010-06-01
Full Text Available Timothy Williamson supports the thesis that every possible entity necessarily exists, and so he needs to explain how a possible son of Wittgenstein's, for example, exists in our world: he exists as a merely possible object (MPO), a pure locus of potential. Williamson presents a short argument for the existence of MPOs: how many knives can be made by fitting together two blades and two handles? Four: at most two are concrete objects, the others being merely possible knives and merely possible objects. This paper defends the idea that one can avoid reference and ontological commitment to MPOs. My proposal is that MPOs can be dispensed with by using the notion of rules of knife-making. I first present a solution according to which we count lists of instructions (selected by the rules) describing physical combinations between components. This account, however, has its own difficulties, and I eventually suggest that one can find a way out by admitting possible worlds, entities which are more commonly accepted (at least by philosophers) than MPOs. I maintain that, in answering Williamson's questions, we count classes of physically possible worlds in which the same instance of a general rule is applied.
Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc
2015-09-01
This paper presents two new concepts for the discrimination of signals of different complexity. The first focused on solving the problem of setting entropy descriptors by varying the pattern size instead of the tolerance, which led to a search for the optimal pattern size that maximized the similarity entropy. The second paradigm was based on the n-order similarity entropy, which encompasses the 1-order similarity entropy. To improve statistical stability, n-order fuzzy similarity entropy was proposed. Fractional Brownian motion was simulated to validate the different methods proposed, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases, it was found to be possible to discriminate time series of different complexity, such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, it was shown that the optimal pattern size and the maximum similarity measurement were related to intrinsic features of the time series.
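The first paradigm, scanning the pattern size rather than the tolerance, can be sketched with a basic sample-entropy estimator. This is a generic sample entropy, not the authors' exact similarity-entropy definition, and the toy series is illustrative:

```python
import math

def sample_entropy(x, m, r):
    """Sample entropy: -ln(A/B), where B counts template pairs of length m and
    A pairs of length m+1 matching within tolerance r (Chebyshev distance)."""
    n = len(x)
    def matches(length):
        t = [x[i:i + length] for i in range(n - length + 1)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r)
    b, a = matches(m), matches(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)

# Vary the pattern size m at fixed tolerance and keep the m that maximizes
# the entropy, as in the first paradigm described above
x = [0, 2, 1, 3, 0, 2, 1, 3, 0, 2, 1, 3]
best_m = max((1, 2, 3), key=lambda m: sample_entropy(x, m, 0.5))
print(best_m)
```

Real use would apply this to fetal heart rate series far longer than the toy sequence, where the O(n^2) template comparison becomes the dominant cost.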
1993-07-01
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
Abadi, Ali Salehi Sahl; Mazlomi, Adel; Saraji, Gebraeil Nasl; Zeraati, Hojjat; Hadian, Mohammad Reza; Jafari, Amir Homayoun
2015-10-01
In spite of the widespread use of automation in industry, manual material handling (MMH) is still performed in many occupational settings. The emphasis on ergonomics in MMH tasks is due to the potential risks of workplace accidents and injuries. This study aimed to assess the effect of box size, frequency of lift, and height of lift on the maximum acceptable weight of lift (MAWL) and on the heart rates of male university students in Iran. This experimental study was conducted in 2015 with 15 male students recruited from Tehran University of Medical Sciences. Each participant performed 18 different lifting tasks that involved three lifting frequencies (1 lift/min, 4.3 lifts/min and 6.67 lifts/min), three lifting heights (floor to knuckle, knuckle to shoulder, and shoulder to arm reach), and two box sizes. Each set of experiments was conducted during a 20 min work period using the free-style lifting technique. The working heart rates (WHR) were recorded for the entire duration. In this study, we used SPSS version 18 software and descriptive statistical methods, analysis of variance (ANOVA), and the t-test for data analysis. The results of the ANOVA showed that there was a significant difference between the means of MAWL across the frequencies of lifts (p = 0.02). Tukey's post hoc test indicated that there was a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.01). There was a significant difference between the mean heart rates across the frequencies of lifts (p = 0.006), and Tukey's post hoc test indicated a significant difference between the frequencies of 1 lift/minute and 6.67 lifts/minute (p = 0.004). However, there was no significant difference between the mean MAWL and the mean heart rate across lifting heights (p > 0.05). The results of the t-test showed that there was a significant difference between the mean MAWL and the mean heart rate for the two box sizes (p < 0.001). Based on the results of
Blok, Chris; Jackson, Brian E; Guo, Xianfeng; de Visser, Pieter H B; Marcelis, Leo F M
2017-01-01
Growing on rooting media other than soils in situ (i.e., substrate-based growing) allows for higher yields than soil-based growing, as transport rates of water, nutrients, and oxygen in substrate surpass those in soil. Possibly water-based growing allows for even higher yields, as transport rates of water and nutrients in water surpass those in substrate, even though the transport of oxygen may be more complex. Transport rates can only limit growth when they are below a rate corresponding to maximum plant uptake. Our first objective was to compare Chrysanthemum growth performance for three water-based growing systems with different irrigation. We compared: multi-point irrigation into a pond (DeepFlow); one-point irrigation resulting in a thin film of running water (NutrientFlow); and multi-point irrigation as droplets through air (Aeroponic). Our second objective was to compare press pots as propagation medium with nutrient solution as propagation medium. The comparison included DeepFlow water-rooted cuttings with either the stem 1 cm into the nutrient solution or with the stem 1 cm above the nutrient solution. Measurements included fresh weight, dry weight, length, water supply, nutrient supply, and oxygen levels. To account for differences in radiation sum received, crop performance was evaluated with Radiation Use Efficiency (RUE), expressed as dry weight over the sum of Photosynthetically Active Radiation. The reference, DeepFlow with substrate-based propagation, showed the highest RUE, even while the oxygen supply provided by irrigation was potentially growth limiting. DeepFlow with water-based propagation showed 15-17% lower RUEs than the reference. NutrientFlow showed 8% lower RUE than the reference, in combination with potentially limiting irrigation supply of nutrients and oxygen. Aeroponic showed RUE levels similar to the reference, and Aeroponic had non-limiting irrigation supply of water, nutrients, and oxygen. Water-based propagation affected the subsequent
Photon counting digital holography
Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario; Pavičić, Mladen
2016-05-01
Digital holography uses electronic sensors for hologram recording and numerical methods for hologram reconstruction, thus enabling the development of advanced holography applications. However, in some cases the useful information is concealed in a very wide dynamic range of illumination intensities, and successful recording requires an appropriate dynamic range of the sensor. An effective solution to this problem is the use of a photon-counting detector. Such detectors possess counting rates of the order of tens to hundreds of millions of counts per second, but the conditions for recording holograms have to be investigated in greater detail. Here, we summarize our main findings on this problem. First, conditions for optimum recording of digital holograms, for detecting a signal significantly below the detector's noise, are analyzed in terms of the most important holographic measures. Second, for time-averaged digital holograms, optimum recordings were investigated for exposures shorter than the vibration cycle. In both cases, these conditions are studied by simulations and experiments.
Sharpe, A N; Hearn, E M; Kovacs-Nolan, J
2000-01-01
Food suspensions prepared by Pulsifier contained less debris and filtered 1.3x to 12x faster through hydrophobic grid membrane filters (HGMFs) than those prepared by Stomacher 400. Coliform and Escherichia coli counts made by an HGMF method yielded 84 and 36 paired samples, respectively, positive by both suspending methods. Overall counts of pulsificates and stomachates did not differ significantly for either analysis, though coliform counts by Pulsifier were significantly higher in mushrooms and significantly lower in ground pork (P = 0.05). Regression equations for log10 counts of coliform and E. coli by Pulsifier and Stomacher were: Pulsifier = 0.12 + 0.97 x Stomacher, and Pulsifier = 0.01 + 1.01 x Stomacher, respectively.
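Applied as stated, the regression equations convert a Stomacher log10 count into the expected Pulsifier log10 count. A minimal sketch using the coefficients reported above:

```python
def pulsifier_coliform(stomacher_log10):
    """Coliforms: Pulsifier = 0.12 + 0.97 x Stomacher (log10 counts)."""
    return 0.12 + 0.97 * stomacher_log10

def pulsifier_ecoli(stomacher_log10):
    """E. coli: Pulsifier = 0.01 + 1.01 x Stomacher (log10 counts)."""
    return 0.01 + 1.01 * stomacher_log10

# A Stomacher coliform count of 10^3 CFU/g maps to roughly 10^3.03 by Pulsifier
print(round(pulsifier_coliform(3.0), 2), round(pulsifier_ecoli(3.0), 2))
```

Both slopes are near 1 with near-zero intercepts, which is the numerical form of the abstract's conclusion that overall counts by the two methods did not differ significantly.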
Walker, Anthony P.; Quaife, Tristan; Van Bodegom, Peter M.; De Kauwe, Martin G.; Keenan, Trevor F.; Joiner, Joanna; Lomas, Mark R.; MacBean, Natasha; Xu, Chongang; Yang, Xiaojuan;
2017-01-01
The maximum photosynthetic carboxylation rate (Vcmax) is an influential plant trait that has multiple scaling hypotheses, which is a source of uncertainty in predictive understanding of global gross primary production (GPP). Four trait-scaling hypotheses (plant functional type, nutrient limitation, environmental filtering, and plant plasticity) with nine specific implementations were used to predict global Vcmax distributions and their impact on global GPP in the Sheffield Dynamic Global Vegetation Model (SDGVM). Global GPP varied from 108.1 to 128.2 petagrams of carbon (PgC) per year, 65 percent of the range of a recent model intercomparison of global GPP. The variation in GPP propagated through to a 27 percent coefficient of variation in net biome productivity (NBP). All hypotheses produced global GPP that was highly correlated (r equals 0.85-0.91) with three proxies of global GPP. Plant functional type-based nutrient limitation, underpinned by a core SDGVM hypothesis that plant nitrogen (N) status is inversely related to increasing costs of N acquisition with increasing soil carbon, adequately reproduced global GPP distributions. Further improvement could be achieved with accurate representation of water sensitivity and agriculture in SDGVM. Mismatch between environmental filtering (the most data-driven hypothesis) and GPP suggested that greater effort is needed to understand Vcmax variation in the field, particularly in northern latitudes.
Modal dispersion, pulse broadening and maximum transmission rate in GRIN optical fibers encompass a central dip in the core index profile
El-Diasty, Fouad; El-Hennawi, H. A.; El-Ghandoor, H.; Soliman, Mona A.
2013-12-01
Intermodal and intramodal dispersion constitute one of the problems in graded-index multi-mode optical fibers (GRIN) used for LAN communication systems and for sensing applications. A central index dip (depression) in the profile of the core refractive index may occur due to the CVD fabrication processes. The index dip may also be intentionally designed to broaden the fundamental mode field profile toward a plateau-like distribution, which has advantages for fiber-source connections, fiber amplifiers and self-imaging applications. The effect of the core central index dip on the propagation parameters of a GRIN fiber, such as intermodal dispersion, intramodal dispersion and root-mean-square broadening, is investigated. Conventional methods usually study optical signal propagation in optical fiber in terms of mode characteristics and the number of modes, but in this work multiple-beam Fizeau interferometry is proposed as an alternative methodology, affording a radial approach to determining dispersion, pulse broadening and maximum transmission rate in a GRIN optical fiber having a central index dip.
Su, Yu-min; Makinia, Jacek; Pagilla, Krishna R
2008-04-01
The autotrophic maximum specific growth rate constant, μA,max, is the critical parameter for design and performance of nitrifying activated sludge systems. In literature reviews (e.g., Henze et al., 1987; Metcalf and Eddy, 1991), a wide range of μA,max values have been reported (0.25 to 3.0 days(-1)); however, recent data from several wastewater treatment plants across North America revealed that the estimated μA,max values remained in the narrow range 0.85 to 1.05 days(-1). In this study, long-term operation of a laboratory-scale sequencing batch reactor system was investigated for estimating this coefficient according to the low food-to-microorganism ratio bioassay and simulation methods, as recommended in the Water Environment Research Foundation (Alexandria, Virginia) report (Melcer et al., 2003). The estimated μA,max values using steady-state model calculations for four operating periods ranged from 0.83 to 0.99 day(-1). The International Water Association (London, United Kingdom) Activated Sludge Model No. 1 (ASM1) dynamic model simulations revealed that a single value of μA,max (1.2 days(-1)) could be used, despite variations in the measured specific nitrification rates. However, the average μA,max was gradually decreasing during the activated sludge chlorination tests, until it reached the value of 0.48 day(-1) at the dose of 5 mg chlorine/(g mixed liquor suspended solids x d). Significant discrepancies between the predicted XA/YA ratios were observed. In some cases, the ASM1 predictions were approximately two times higher than the steady-state model predictions. This implies that estimating this ratio from a complex activated sludge model and using it in simple steady-state model calculations should be accepted with great caution and requires further investigation.
High quantum efficiency S-20 photocathodes in photon counting detectors
Orlov, D. A.; DeFazio, J.; Duarte Pinto, S.; Glazenborg, R.; Kernen, E.
2016-04-01
Based on conventional S-20 processes, a new series of high quantum efficiency (QE) photocathodes has been developed that can be specifically tuned for use in the ultraviolet, blue or green regions of the spectrum. The QE values exceed 30% at maximum response, and the dark count rate is found to be as low as 30 Hz/cm2 at room temperature. This combination of properties along with a fast temporal response makes these photocathodes ideal for application in photon counting detectors, which is demonstrated with an MCP photomultiplier tube for single and multi-photoelectron detection.
Gonzalez-Lopezlira, Rosa A. [On sabbatical leave from the Centro de Radioastronomia y Astrofisica, UNAM, Campus Morelia, Michoacan, C.P. 58089, Mexico. (Mexico); Pflamm-Altenburg, Jan; Kroupa, Pavel, E-mail: r.gonzalez@crya.unam.mx [Argelander Institut fuer Astronomie, Universitaet Bonn, Auf dem Huegel 71, D-53121 Bonn (Germany)
2013-06-20
We analyze the relationship between maximum cluster mass and surface densities of total gas (Σ_gas), molecular gas (Σ_H2), neutral gas (Σ_HI), and star formation rate (Σ_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the cluster mass distribution function completeness limit. By comparing the two-dimensional distribution of cluster masses and gas surface densities, we find for clusters older than 25 Myr that M_3rd ∝ Σ_HI^(0.4±0.2), where M_3rd is the median of the five most massive clusters. There is no correlation with Σ_gas, Σ_H2, or Σ_SFR. For clusters younger than 10 Myr, M_3rd ∝ Σ_HI^(0.6±0.1) and M_3rd ∝ Σ_gas^(0.5±0.2); there is no correlation with either Σ_H2 or Σ_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but we have determined M_3rd ∝ Σ_gas^(3.8±0.3), M_3rd ∝ Σ_H2^(1.2±0.1), and M_3rd ∝ Σ_SFR^(0.9±0.1). For the older sample in M51, the lack of tight correlations is probably due to the combination of strong azimuthal variations in the surface densities of gas and star formation rate, and the cluster ages. These two facts mean that neither the azimuthal average of the surface densities at a given radius nor the surface densities at the present-day location of a stellar cluster represent the true surface densities at the place and time of cluster formation. In the case of the younger sample, even if the clusters have not yet
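Scaling relations of the form M_3rd ∝ Σ^a are typically estimated by least-squares fitting in log-log space. A minimal sketch on synthetic data, not the paper's pipeline:

```python
import math

def power_law_fit(x, y):
    """Fit y = c * x**a by ordinary least squares on (log x, log y);
    returns the exponent a and prefactor c."""
    lx, ly = [math.log(v) for v in x], [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    c = math.exp(my - a * mx)
    return a, c

# Synthetic surface densities and cluster masses with a known exponent of 0.4
sigma = [1.0, 2.0, 4.0, 8.0]
m3rd = [10.0 * s ** 0.4 for s in sigma]
a, c = power_law_fit(sigma, m3rd)
print(round(a, 3), round(c, 3))  # 0.4 10.0
```

Real fits would also propagate the measurement scatter into the quoted exponent uncertainty (e.g. the ±0.2 above), which this sketch omits.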
Chiba Shigeru
2007-09-01
Full Text Available Abstract. Background. Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. The motion sickness induced by a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. Methods. An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of the motion sickness they suffered using a subjective score and the physiological index ρmax, which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect autonomic nervous activity. Results. The results showed adaptation to visually induced motion sickness with the repetitive presentation of the same image, in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was thus possible to identify the part of the video image related to motion sickness by analyzing changes in ρmax over time. Conclusion. The physiological index ρmax will be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
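The index ρmax, the maximum cross-correlation coefficient between two physiological series, can be sketched as a scan of the Pearson correlation over integer lags. This is a generic illustration, not the authors' exact procedure, and the toy series are assumptions:

```python
import math

def rho_max(x, y, max_lag=10):
    """Maximum cross-correlation coefficient between two equal-length series
    (e.g. heart rate vs pulse wave transmission time), scanned over lags."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        va = sum((u - ma) ** 2 for u in a)
        vb = sum((v - mb) ** 2 for v in b)
        return cov / math.sqrt(va * vb)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        a = x[lag:] if lag >= 0 else x[:lag]
        b = y[:len(y) - lag] if lag >= 0 else y[-lag:]
        if len(a) > 2:
            best = max(best, corr(a, b))
    return best

# Two toy series that are identical up to a 3-sample shift
base = [math.sin(i / 2.0) for i in range(40)]
x, y = base[3:33], base[:30]
print(rho_max(x, y) > 0.999)  # True
```

In the study ρmax was computed on sliding windows over the recording, so that its time course could be matched against the video content.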
Hubbard, S. M.; Coutts, D. S.; Matthews, W.; Guest, B.; Bain, H.
2015-12-01
In basins adjacent to continually active arcs, detrital zircon geochronology can be used to establish a high-resolution chronostratigraphic framework for deep-time strata. Large-n U-Pb geochronological datasets can yield a statistically significant signature from the youngest sub-population of detrital zircons, from which we derive maximum depositional age (MDA) calculations. MDA is determined through numerous methods; the mean age of three or more overlapping grain ages at 2σ error is favored in this analysis. Positive identification of the youngest detrital zircon population in a rock is the limiting factor on precision and resolution. The Campanian-Paleogene Nanaimo Group of B.C., Canada, was deposited in a forearc basin, outboard of the Coast Mountain Batholith. The record of a deep-water sediment-routing system is exhumed at Denman and Hornby islands; sandstone- and conglomerate-dominated strata compose a composite sedimentary unit 20 km across and 1.5 km thick, in strike section. Volcanic ashes are absent from the succession, which has been constrained biostratigraphically. Eleven detrital zircon samples were analyzed to define stratigraphic architecture and provide insight into sedimentation rates. Our dataset (n=3081) constrains the overall duration of channelization to ~18 Ma. A series of at least five distinct composite channel fills, 3-6 km wide and 400-600 m thick, are identified. The MDAs of these units are statistically distinct and constrained to better than 3% precision. Sedimentation rates amongst the channel fills increase upward, from 60-100 m/Ma to >500 m/Ma. This is likely linked to the tendency of a slope channel system to be dominated by sediment bypass early in its evolution, and later by aggradation as large-scale levees develop. Channel processes were not continuous, with the longest hiatus ~6 Ma. The large-n detrital zircon dataset provides unprecedented insight into long-term sediment routing, evidence for which is
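The favored MDA metric above (mean of three or more grain ages overlapping at 2σ) can be sketched as follows. The grouping rule and inverse-variance weighting are simplified assumptions of this sketch; production workflows add MSWD screening and other checks:

```python
import numpy as np

def mda_youngest_cluster(ages, errs_2s, min_n=3):
    """Maximum depositional age as the error-weighted mean of the youngest
    group of >= min_n grain ages whose 2-sigma intervals all overlap.
    Simplified sketch of the 'youngest cluster at 2-sigma' approach."""
    order = np.argsort(ages)
    ages = np.asarray(ages, dtype=float)[order]
    errs = np.asarray(errs_2s, dtype=float)[order]
    for i in range(len(ages) - min_n + 1):
        grp_a, grp_e = ages[i:i + min_n], errs[i:i + min_n]
        # all intervals overlap if the highest lower bound does not
        # exceed the lowest upper bound
        if (grp_a - grp_e).max() <= (grp_a + grp_e).min():
            w = 1.0 / grp_e ** 2
            return float(np.sum(w * grp_a) / np.sum(w))
    return None  # no qualifying cluster of overlapping young grains

# Hypothetical grain ages (Ma) with 2-sigma errors: youngest three overlap
print(round(mda_youngest_cluster([70.2, 70.8, 71.1, 75.0, 80.3],
                                 [1.0, 1.0, 1.2, 1.0, 1.5]), 2))
```

As the abstract notes, the precision of such an MDA is limited by how confidently the youngest population can be identified.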
Mocroft, A.; Phillips, A.N.; Gatell, J.; Horban, A.; Ledergerber, B.; Zilmer, K.; Jevtovic, D.; Maltez, F.; Podlekareva, D.; Lundgren, J.D.; Burger, D.M.
2013-01-01
BACKGROUND: CD4 cell count and viral loads are used in clinical trials as surrogate endpoints for assessing the efficacy of newly available antiretrovirals. If antiretrovirals act through other pathways or increase the risk of disease, this would not be identified prior to licensing. The aim of this study
Koenig, Serena P.; Bernard, Daphne; Dévieux, Jessy G.; Atwood, Sidney; McNairy, Margaret L.; Severe, Patrice; Marcelin, Adias; Julma, Pierrot; Apollon, Alexandra; Pape, Jean W.
2016-01-01
Background High attrition during the period from HIV testing to antiretroviral therapy (ART) initiation is widely reported. Though treatment guidelines have changed to broaden ART eligibility and services have been widely expanded over the past decade, data on the temporal trends in pre-ART outcomes are limited; such data would be useful to guide future policy decisions. Methods We evaluated temporal trends and predictors of retention for each step from HIV testing to ART initiation over the past decade at the GHESKIO clinic in Port-au-Prince Haiti. The 24,925 patients >17 years of age who received a positive HIV test at GHESKIO from March 1, 2003 to February 28, 2013 were included. Patients were followed until they remained in pre-ART care for one year or initiated ART. Results 24,925 patients (61% female, median age 35 years) were included, and 15,008 (60%) had blood drawn for CD4 count within 12 months of HIV testing; the trend increased over time from 36% in Year 1 to 78% in Year 10 (p500 cells/mm3, respectively. The trend increased over time for each CD4 strata, and in Year 10, 94%, 95%, 79%, and 74% were retained in pre-ART care or initiated ART for each CD4 strata. Predictors of pre-ART attrition included male gender, low income, and low educational status. Older age and tuberculosis (TB) at HIV testing were associated with retention in care. Conclusions The proportion of patients completing assessments for ART eligibility, remaining in pre-ART care, and initiating ART have increased over the last decade across all CD4 count strata, particularly among patients with CD4 count ≤350 cells/mm3. However, additional retention efforts are needed for patients with higher CD4 counts. PMID:26901795
N. Alavizadeh
2017-01-01
Aims: Apelin is an adipokine which is secreted from adipose tissue and has positive effects against insulin resistance. The aim of this study was to investigate the effect of 8 weeks of aerobic exercise on apelin levels and the insulin resistance index in sedentary men. Materials & Methods: In this semi-experimental study with a controlled pre/post-test design in 2015, 27 healthy sedentary men living in Mashhad City, Iran, were selected by convenience sampling. They were divided into two groups: an experimental group (n=14) and a control group (n=13). In the experimental group, the volunteers participated in 8 weeks of aerobic exercise, 3 days/week (equivalent to 75-85% of maximum oxygen consumption), for 60 minutes per session. The research variables were assessed before and after the intervention in both groups. The collected data were analyzed with SPSS 20 software using paired and independent-sample t-tests. Findings: The 8-week aerobic exercise significantly decreased weight, BMI and levels of apelin, insulin and the insulin resistance index, and increased maximum oxygen consumption, in the sedentary men of the experimental group (p<0.05). Moreover, there were significant differences in the levels of FBS, insulin, apelin, the insulin resistance index and maximum oxygen consumption between the experimental and control groups (p<0.05). Conclusion: 8 weeks of aerobic exercise reduces apelin levels and the insulin resistance index in sedentary men.
高茜; 管莹; 米其利; 李雪梅; 缪明明; 夭建华
2012-01-01
Using CHO bioengineering cells as the target, the application of a flow cytometer to cell counting and cell survival rate calculation was explored in this paper. The results showed that cell counting and survival rate calculation could be performed accurately by the flow cytometer through the setting of three parameters: side scatter (SS), electronic volume (EV), and fluorescence intensity (FL3). The results were essentially consistent with the blood cell counting plate method, but the flow cytometer method was more efficient and stable, with faster operation and a lower SD value. Therefore, to improve production efficiency and the reliability of toxicological evaluation, the flow cytometer method is recommended for cell counting and survival rate calculation in large-scale experiments.
Kurz, Christopher, E-mail: Christopher.Kurz@physik.uni-muenchen.de; Bauer, Julia [Heidelberg Ion-Beam Therapy Center and Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg 69120 (Germany); Conti, Maurizio; Guérin, Laura; Eriksson, Lars [Siemens Healthcare Molecular Imaging, Knoxville, Tennessee 37932 (United States); Parodi, Katia [Heidelberg Ion-Beam Therapy Center and Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg 69120, Germany and Department of Experimental Physics – Medical Physics, Ludwig-Maximilians-University, Munich 85748 (Germany)
2015-07-15
Purpose: External beam radiotherapy with protons and heavier ions enables a tighter conformation of the applied dose to arbitrarily shaped tumor volumes with respect to photons, but is more sensitive to uncertainties in the radiotherapeutic treatment chain. Consequently, an independent verification of the applied treatment is highly desirable. For this purpose, the irradiation-induced β⁺-emitter distribution within the patient is detected shortly after irradiation by a commercial full-ring positron emission tomography/x-ray computed tomography (PET/CT) scanner installed next to the treatment rooms at the Heidelberg Ion-Beam Therapy Center (HIT). A major challenge to this approach is posed by the small number of detected coincidences. This contribution aims at characterizing the performance of the used PET/CT device and identifying the best-performing reconstruction algorithm under the particular statistical conditions of PET-based treatment monitoring. Moreover, this study addresses the impact of radiation background from the intrinsically radioactive lutetium-oxyorthosilicate (LSO)-based detectors at low counts. Methods: The authors have acquired 30 subsequent PET scans of a cylindrical phantom emulating a patient-like activity pattern and spanning the entire patient counting regime in terms of true coincidences and random fractions (RFs). Accuracy and precision of activity quantification, image noise, and geometrical fidelity of the scanner have been investigated for various reconstruction algorithms and settings in order to identify a practical, well-suited reconstruction scheme for PET-based treatment verification. Truncated list-mode data have been utilized for separating the effects of small true count numbers and high RFs on the reconstructed images. A corresponding simulation study enabled extending the results to an even wider range of counting statistics and to additionally investigate the impact of scatter coincidences. Eventually, the recommended
Blok, Chris; Jackson, Brian E.; Guo, Xianfeng; Visser, De Pieter H.B.; Marcelis, Leo F.M.
2017-01-01
Growing on rooting media other than soils in situ (i.e., substrate-based growing) allows for higher yields than soil-based growing, as transport rates of water, nutrients, and oxygen in substrate surpass those in soil. Possibly water-based growing allows for even higher yields, as transport rates of
Sander, Pia; Mouritsen, L; Andersen, J Thorup
2002-01-01
OBJECTIVE: The aim of this study was to evaluate the value of routine measurements of urinary flow rate and residual urine volume, as part of a "minimal care" assessment programme for women with urinary incontinence, in detecting clinically significant bladder-emptying problems. MATERIAL AND METHOD...... female urinary incontinence. Thus, primary health care providers can assess women based on simple guidelines without expensive equipment for assessment of urine flow rate and residual urine.
Maj, Piotr; Grybos, P.; Szczgiel, R.; Kmon, P.; Drozd, A.; Deptuch, G.
2013-11-07
We present a prototype chip in 40 nm CMOS technology for the readout of hybrid pixel detectors. The prototype chip has a matrix of 18 x 24 pixels with a pixel pitch of 100 µm. It can operate both in single photon counting (SPC) mode and in C8P1 mode. In SPC mode the measured ENC is 84 e⁻ rms (for a peaking time of 48 ns), while the effective offset spread is below 2 mV rms. In the C8P1 mode the chip reconstructs the full charge deposited in the detector, even in the case of charge sharing, and identifies the pixel with the largest charge deposition. The chip architecture and preliminary measurements are reported.
Schiefelbein, Sarah; Fröhlich, Alexander; John, Gernot T; Beutler, Falco; Wittmann, Christoph; Becker, Judith
2013-08-01
Dissolved oxygen plays an essential role in aerobic cultivation, especially due to its low solubility. Under unfavorable conditions of mixing and vessel geometry it can become limiting. This, however, is difficult to predict, and thus the right choice of an optimal experimental set-up is challenging. To overcome this, we developed a method which allows a robust prediction of the dissolved oxygen concentration during aerobic growth. It integrates newly established mathematical correlations for determining the volumetric gas-liquid mass transfer coefficient (kLa) in disposable shake flasks from the filling volume, the vessel size and the agitation speed. Tested for the industrial production organism Corynebacterium glutamicum, this enabled a reliable design of culture conditions and allowed prediction of the maximum possible cell concentration without oxygen limitation.
Strasser, Barbara; Schwarz, Joachim; Haber, Paul; Schobersberger, Wolfgang
2011-12-01
The aim of this study was to establish reliable guide values for heart rate (HF) and blood pressure (RR) at defined submaximal exertion levels, considering age, gender and body mass. One hundred and eighteen healthy but untrained subjects (38 women, 80 men) were included in the study. For the final analysis, data from 28 women and 59 men were used. We found gender differences for HF and RR. Further, we noted significant correlations between HF and age, as well as between RR and body mass, at all exercise levels. We established formulas for gender-specific calculation of reliable guide values for HF and RR at submaximal exercise levels.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
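The background-only versus background-plus-source comparison described above can be illustrated with a toy Poisson log-likelihood ratio. The real MLE tool fits PSF-convolved Gaussian source models with Sherpa across stacked observations; the flat one-dimensional models and count values below are purely illustrative assumptions:

```python
import math

def log_likelihood(counts, expected):
    """Poisson log-likelihood of observed counts given model expectations,
    dropping the data-only log(n!) term, which cancels in ratios."""
    return sum(n * math.log(mu) - mu for n, mu in zip(counts, expected))

counts = [3, 4, 20, 5, 3]          # observed counts per pixel (toy data)
bkg = [4.0] * 5                    # background-only hypothesis
src = [4.0, 4.0, 19.0, 4.0, 4.0]   # background plus a central point source

# Twice the log-likelihood ratio: large positive values favor the source
delta = 2 * (log_likelihood(counts, src) - log_likelihood(counts, bkg))
print(delta > 0)
```

A detection threshold on such a statistic is what lets aggressive source finding keep the false-source rate acceptable.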
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data come from a uniform binary source. Second, we determine the minimum amo...
Blazevich, Anthony J; Horne, Sara; Cannavan, Dale
2008-01-01
This study examined the effects of slow-speed resistance training involving concentric (CON, n = 10) versus eccentric (ECC, n = 11) single-joint muscle contractions on contractile rate of force development (RFD) and neuromuscular activity (EMG), and its maintenance through detraining. Isokinetic knee extension training was performed 3 x week(-1) for 10 weeks. Maximal isometric strength (+11.2%) and RFD (measured from 0-30/50/100/200 ms, respectively; +10.5%-20.5%) increased after 10 weeks (P ... training mode. Peak EMG amplitude and rate of EMG rise were not significantly altered with training or detraining. Subjects with below-median normalized RFD (RFD/MVC) at 0 weeks significantly increased RFD after 5 and 10 weeks of training, which was associated with increased neuromuscular activity. Subjects who maintained their higher RFD after detraining...
Thornley, John H M; Parsons, Anthony J
2014-02-07
Treating resource allocation within plants, and between plants and associated organisms, is essential for plant, crop and ecosystem modelling. However, it is still an unresolved issue. It is also important to consider quantitatively when it is efficient and to what extent a plant can invest profitably in a mycorrhizal association. A teleonomic model is used to address these issues. A six state-variable model giving exponential growth is constructed. This represents carbon (C), nitrogen (N) and phosphorus (P) substrates with structure in shoot, root and mycorrhiza. The shoot is responsible for uptake of substrate C, the root for substrates N and P, and the mycorrhiza also for substrates N and P. A teleonomic goal, maximizing proportional growth rate, is solved analytically for the allocation fractions. Expressions allocating new dry matter to shoot, root and mycorrhiza are derived which maximize growth rate. These demonstrate several key intuitive phenomena concerning resource sharing between plant components and associated mycorrhizae. For instance, if root uptake rate for phosphorus is equal to that achievable by mycorrhiza and without detriment to root uptake rate for nitrogen, then this gives a faster growing mycorrhizal-free plant. However, if root phosphorus uptake is below that achievable by mycorrhiza, then a mycorrhizal association may be a preferred strategy. The approach offers a methodology for introducing resource sharing between species into ecosystem models. Applying teleonomy may provide a valuable short-term means of modelling allocation, avoiding the circularity of empirical models, and circumventing the complexities and uncertainties inherent in mechanistic approaches. However it is subjective and brings certain irreducible difficulties with it.
Coplestone-Loomis, Lenny
1981-01-01
Pumpkin seeds are counted after students convert pumpkins to jack-o-lanterns. Among the activities involved, pupils learn to count by 10s, make estimates, and to construct a visual representation of 1,000. (MP)
L. Ocola
2008-01-01
Post-disaster reconstruction management of urban areas requires timely information on the ground-response microzonation to strong levels of ground shaking, to minimize the vulnerability of the rebuilt environment to future earthquakes. In this paper, a procedure is proposed to quantitatively estimate the severity of ground response in terms of peak ground acceleration, computed from macroseismic rating data, soil properties (acoustic impedance) and the predominant frequency of shear waves at a site. The basic mathematical relationships are derived from the properties of wave propagation in a homogeneous and isotropic medium. We define a Macroseismic Intensity Scale I_{MS} as the logarithm of the quantity of seismic energy that flows through a unit area normal to the direction of wave propagation in unit time. The derived constants that relate the I_{MS} scale and peak acceleration agree well with coefficients derived from a linear regression between MSK macroseismic ratings and peak ground acceleration for historical earthquakes recorded at a strong-motion station at IGP's former headquarters since 1954. The procedure was applied to the 3 October 1974 Lima macroseismic intensity data at places where geotechnical data and predominant ground frequency information were available. The observed and computed peak acceleration values at nearby sites agree well.
An alternative calibration method for counting P-32 reactor monitors
Quirk, T.J. [Applied Nuclear Technologies, Sandia National Laboratories, MS 1143, PO Box 5800, Albuquerque, NM 87185-1143 (United States); Vehar, D.W. [Sandia National Laboratories, Albuquerque, NM 87185-1143 (United States)
2011-07-01
Radioactivation of sulfur is a common technique used to measure fast-neutron fluences in test and research reactors. Elemental sulfur can be pressed into pellets and used as monitors. The ³²S(n,p)³²P reaction has a practical threshold of about 3 MeV, and its cross section and associated uncertainties are well characterized [1]. The product ³²P emits a beta particle with a maximum energy of 1710 keV [2]. This energetic beta particle allows pellets to be counted intact. ASTM Standard Test Method for Measuring Reaction Rates and Fast-Neutron Fluences by Radioactivation of Sulfur-32 (E265) [3] details a method of calibration for counting systems and subsequent analysis of results. This method requires irradiation of sulfur monitors in a fast-neutron field whose spectrum and intensity are well known. The resultant decay-corrected count rate is then correlated to the known fast-neutron fluence. The Radiation Metrology Laboratory (RML) at Sandia has traditionally performed calibration irradiations of sulfur pellets using the ²⁵²Cf spontaneous fission neutron source at the National Institute of Standards and Technology (NIST) [4] as a transfer standard. However, decay has reduced the intensity of NIST's source, thus lowering the practical upper limits of available fluence. As of May 2010, neutron emission rates had decayed to approximately 3×10⁸ n/s. In practice, this degradation of capabilities precludes calibrations at the highest fluence levels produced at test reactors and limits the useful range of count rates that can be measured. Furthermore, the reduced availability of replacement ²⁵²Cf threatens the long-term viability of the NIST ²⁵²Cf facility for sulfur pellet calibrations. In lieu of correlating count rate to neutron fluence in a reference field, the total quantity of ³²P produced in a pellet can be determined by absolute counting methods. This offers an attractive alternative to extended ²⁵²Cf exposures because
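The decay-correction step of an E265-style analysis (referring a measured ³²P count rate back to the end of irradiation before correlating it with fluence) can be sketched as follows; the half-life is the standard ³²P value, and the function name is an illustrative assumption:

```python
import math

T_HALF_P32_DAYS = 14.268  # 32P half-life in days

def decay_corrected_rate(count_rate, days_since_irradiation):
    """Correct a measured 32P count rate for decay, referring it back
    to the end of irradiation: R0 = R * exp(lambda * t)."""
    lam = math.log(2.0) / T_HALF_P32_DAYS  # decay constant (1/day)
    return count_rate * math.exp(lam * days_since_irradiation)

# After exactly one half-life, the corrected rate is twice the measured one
print(round(decay_corrected_rate(100.0, T_HALF_P32_DAYS), 6))  # -> 200.0
```

In the standard method this corrected rate is then mapped to fluence via the calibration field; the absolute-counting alternative proposed above bypasses the reference field entirely.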
Kemmler, Wolfgang; Schliffka, Rebecca; Mayhew, Jerry L; von Stengel, Simon
2010-07-01
We evaluated the effect of whole-body electromyostimulation (WB-EMS) during dynamic exercises over 14 weeks on anthropometric, physiological, and muscular parameters in postmenopausal women. Thirty women (64.5 ± 5.5 years) with experience in physical training (>3 years) were randomly assigned either to a control group (CON, n = 15) that maintained their general training program (2 × 60 min/wk of endurance and dynamic strength exercise) or to an electromyostimulation group (WB-EMS, n = 15) that additionally performed a 20-minute WB-EMS training (2 × 20 min/10 d). Resting metabolic rate (RMR) determined from spirometry was selected to indicate muscle mass. In addition, body circumferences, subcutaneous skinfolds, strength, power, and dropout and adherence values were assessed. Resting metabolic rate was maintained in WB-EMS (-0.1 ± 4.8 kcal/h) and decreased in CON (-3.2 ± 5.2 kcal/h, p = 0.038); although group differences were not significant (p = 0.095), there was a moderately strong effect size (ES = 0.62). Sum of skinfolds (28.6%) and waist circumference (22.3%) significantly decreased in WB-EMS, whereas both parameters increased in CON (1.4% and 0.1%, respectively; p = 0.001, ES = 1.37 and 1.64, respectively). Isometric strength changes of the trunk extensors and leg extensors differed significantly (p ≤ 0.006) between WB-EMS and CON (9.9% vs. -6.4%, ES = 1.53; 9.6% vs. -4.5%, ES = 1.43, respectively). In summary, adjunct WB-EMS training significantly exceeds the effect of isolated endurance and resistance type exercise on fitness and fatness parameters. Further, we conclude that for elderly subjects unable or unwilling to perform dynamic strength exercises, electromyostimulation may be a smooth alternative to maintain lean body mass, strength, and power.
Eduardo Marcel Fernandes Nascimento
2011-08-01
The objective of this study was to analyze the heart rate (HR) profile plotted against incremental workloads (IWL) during a treadmill test using three mathematical models [linear, linear with 2 segments (Lin2), and sigmoidal], and to determine the best model for identifying the HR threshold that could be used as a predictor of the ventilatory thresholds (VT1 and VT2). Twenty-two men underwent a treadmill incremental test (retest group: n=12) at an initial speed of 5.5 km·h⁻¹, with increments of 0.5 km·h⁻¹ at 1-min intervals until exhaustion. HR and gas exchange were continuously measured and subsequently converted to 5-s and 20-s averages, respectively. The best model was chosen based on the residual sum of squares and mean square error. The HR/IWL ratio was better fitted with the Lin2 model in the test and retest groups (p<0.05). During a treadmill incremental test, the HR/IWL ratio seems to be better fitted with a Lin2 model, which permits determination of the HR threshold that coincides with VT1.
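A two-segment ("Lin2") fit of the kind described above can be sketched by brute-force search over candidate breakpoints, minimizing the residual sum of squares; the breakpoint is the HR-threshold estimate. The synthetic data below are illustrative, not the study's measurements:

```python
import numpy as np

def lin2_fit(x, y):
    """Fit a two-segment linear ('Lin2') model by exhaustive search over
    breakpoints, minimizing the total residual sum of squares.
    Returns (breakpoint, rss). Sketch only: the study also compared
    single-linear and sigmoidal models on the same criterion."""
    best = (None, np.inf)
    for i in range(2, len(x) - 2):  # keep >= 3 points in each segment
        p1 = np.polyfit(x[:i + 1], y[:i + 1], 1)
        p2 = np.polyfit(x[i:], y[i:], 1)
        rss = (np.sum((np.polyval(p1, x[:i + 1]) - y[:i + 1]) ** 2)
               + np.sum((np.polyval(p2, x[i:]) - y[i:]) ** 2))
        if rss < best[1]:
            best = (float(x[i]), rss)
    return best

# Synthetic HR vs. workload data with a slope change at workload 10
x = np.arange(5.5, 15.5, 0.5)
y = np.where(x < 10, 60 + 8 * (x - 5.5), 96 + 4 * (x - 10))
bp, rss = lin2_fit(x, y)
print(bp)  # recovered breakpoint, i.e. the HR-threshold workload
```

Model selection between linear, Lin2 and sigmoidal candidates then proceeds by comparing their residual sums of squares, as the abstract describes.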
Nuclear counting filter based on a centered Skellam test and a double exponential smoothing
Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan; Rohee, Emmanuel; Normand Stephane [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette, (France)
2015-07-01
Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the application in view. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST), giving a local maximum-likelihood estimation of the signal under a Poisson distribution assumption. This nonlinear approach smooths the counting signal while maintaining a fast response when abrupt changes in activity occur. The filter has been improved by the implementation of Brown's double Exponential Smoothing (BES). The filter has been validated and compared to other state-of-the-art smoothing filters. The CST-BES filter shows a significant improvement compared to all tested smoothing filters. (authors)
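The two ingredients of the CST-BES idea can be sketched separately: a Skellam-based homogeneity test on successive counts (the difference of two Poisson counts is Skellam-distributed, with mean 0 under a constant rate) and Brown's double exponential smoothing. The simple reset logic combining them below is an assumption of this sketch, not the authors' exact algorithm:

```python
import math

def skellam_homogeneity(n1, n2, k=3.0):
    """Flag a rate change when |n1 - n2| exceeds k standard deviations of
    the Skellam difference; under a constant rate Var(n1 - n2) = n1 + n2."""
    sigma = math.sqrt(max(n1 + n2, 1))
    return abs(n1 - n2) > k * sigma

def brown_des(data, alpha=0.3):
    """Brown's double exponential smoothing; returns level estimates."""
    s1 = s2 = data[0]
    out = []
    for x in data:
        s1 = alpha * x + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        out.append(2 * s1 - s2)  # bias-corrected level estimate
    return out

# Smooth while the signal is statistically stationary; restart at the new
# level when the Skellam test detects an abrupt activity change.
signal = [100, 104, 98, 101, 300, 305, 298]
smoothed, prev = [], signal[0]
for x in signal:
    if skellam_homogeneity(prev, x):
        smoothed.append(float(x))            # abrupt change: fast response
    else:
        smoothed.append(brown_des([prev, x])[-1])
    prev = x
print(smoothed[-1] > 250)  # the tracker has jumped to the new level
```

This captures the trade-off the abstract targets: strong smoothing while stationary, fast tracking after a step change.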
Gomez-Paccard, Miriam; Osete, Maria Luisa; Chauvin, Annick; Pérez-Asensio, Manuel; Jimenez-Castillo, Pedro
2014-05-01
Available European data indicate that during the past 2500 years there have been periods of rapid geomagnetic intensity fluctuations interspersed with periods of little change. The challenge now is to precisely describe these rapid changes. Given the difficulty of obtaining precisely dated heated materials for a high-resolution description of past geomagnetic field intensity changes, new high-quality archeomagnetic data from archeological heated materials found in well-defined superposed stratigraphic units are particularly valuable. In this work we report the archeomagnetic study of several groups of ceramic fragments from southeastern Spain that belong to 14 superposed stratigraphic levels corresponding to a surface no bigger than 3 m by 7 m. Between four and eight ceramic fragments were selected per stratigraphic unit. The ages of the pottery fragments range from the second half of the 7th to the 11th centuries. The dates were established by three radiocarbon dates and by archeological/historical constraints, including typological comparisons and well-controlled stratigraphic constraints. Between two and four specimens per pottery fragment were studied. The classical Thellier and Thellier method, including pTRM checks and TRM anisotropy and cooling rate corrections, was used to estimate paleointensities at the specimen level. All accepted results correspond to well-defined single components of magnetization going toward the origin and to high-quality paleointensity determinations. From these experiments nine new high-quality mean intensities have been obtained. The new data provide an improved description of the sharp, abrupt intensity changes that took place in this region between the 7th and the 11th centuries. The results confirm that several rapid intensity changes (of ~15-20 µT/century) took place in Western Europe during the recent history of the Earth.
Croft, Stephen [Oak Ridge National Laboratory (ORNL), One Bethel Valley Road, Oak Ridge, TN (United States); Burr, Tom [International Atomic Energy Agency (IAEA), Vienna (Austria); Favalli, Andrea [Los Alamos National Laboratory (LANL), MS E540, Los Alamos, NM 87545 (United States); Nicholson, Andrew [Oak Ridge National Laboratory (ORNL), One Bethel Valley Road, Oak Ridge, TN (United States)
2016-03-01
The declared linear density of ²³⁸U and ²³⁵U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of ²³⁵U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
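The data transformation mentioned above can be demonstrated on a saturating Padé-type response. The functional form R = a·m/(1 + b·m) and the coefficients below are illustrative stand-ins (the actual UNCL Padé equation and parameters are not given in the abstract), and note the paper's conclusion that, with large predictor errors, it is preferable not to transform:

```python
import numpy as np

# A saturating Pade-type response R(m) = a*m / (1 + b*m) linearizes as
#     m / R = 1/a + (b/a) * m,
# so ordinary linear least squares can recover a and b from (m, R) pairs.
a_true, b_true = 2.0, 0.05                 # illustrative coefficients
m = np.array([5.0, 10.0, 20.0, 40.0, 80.0])  # linear density of 235U (toy)
R = a_true * m / (1 + b_true * m)            # noiseless coincidence rate

slope, intercept = np.polyfit(m, m / R, 1)   # fit the transformed model
a_est, b_est = 1.0 / intercept, slope / intercept
print(round(a_est, 3), round(b_est, 3))
```

With noiseless data the transformation recovers the parameters exactly; the paper's point is that once realistic errors enter the predictor, the transformed fit behaves worse than direct nonlinear fitting.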
Sauzay, G. [Commissariat a l' Energie Atomique, 91 - Saclay (France). Centre d' Etudes Nucleaires
1967-11-01
Radioactive tracers are applied to the direct measurement of the sediment transport rate of sand beds. The theoretical measurement formula is derived: the count-rate balance varies inversely with the transport thickness. In parallel, the representativeness of the tracer is studied critically, and the minimum quantity of tracer that has to be injected to obtain a correct statistical definition of the count rate, given the low number of grains 'seen' by the detector, is determined. A field experiment allowed the technological conditions for applying this method to be studied: only the treatment of results is new; the experiment itself is carried out with conventional techniques applied with great care. (author)
1970-01-01
The Health Physics counting room, where the quantity of induced radioactivity in materials is determined. This information is used to evaluate possible radiation hazards from the material investigated.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Generalized Entropy Concentration for Counts
Oikonomou, Kostas N
2016-01-01
We consider the phenomenon of entropy concentration under linear constraints in a discrete setting, using the "balls and bins" paradigm, but without the assumption that the number of balls allocated to the bins is known. Therefore instead of frequency vectors and ordinary entropy, we have count vectors with unknown sum, and a certain generalized entropy. We show that if the constraints bound the allowable sums, this suffices for concentration to occur even in this setting. The concentration can be either in terms of deviation from the maximum generalized entropy value, or in terms of the norm of the difference from the maximum generalized entropy vector. Without any asymptotic considerations, we quantify the concentration in terms of various parameters, notably a tolerance on the constraints which ensures that they are always satisfied by an integral vector. Generalized entropy maximization is not only compatible with ordinary MaxEnt, but can also be considered an extension of it, as it allows us to address...
Li, Hang Wun Raymond; Lee, Vivian Chi Yan; Lau, Estella Yee Lan; Yeung, William Shu Biu; Ho, Pak Chung; Ng, Ernest Hung Yu
2014-01-01
To evaluate ovarian response and cumulative live birth rate of women undergoing in-vitro fertilization (IVF) treatment who had discordant baseline serum anti-Mullerian hormone (AMH) level and antral follicle count (AFC). This is a retrospective cohort study on 1,046 women undergoing the first IVF cycle in Queen Mary Hospital, Hong Kong. Subjects receiving standard IVF treatment with the GnRH agonist long protocol were classified according to their quartiles of baseline AMH and AFC measurements after GnRH agonist down-regulation and before commencing ovarian stimulation. The number of retrieved oocytes, ovarian sensitivity index (OSI) and cumulative live-birth rate for each classification category were compared. Among our studied subjects, 32.2% were discordant in their AMH and AFC quartiles. Among them, those having higher AMH within the same AFC quartile had higher number of retrieved oocytes and cumulative live-birth rate. Subjects discordant in AMH and AFC had intermediate OSI which differed significantly compared to those concordant in AMH and AFC on either end. OSI of those discordant in AMH and AFC did not differ significantly whether either AMH or AFC quartile was higher than the other. When AMH and AFC are discordant, the ovarian responsiveness is intermediate between that when both are concordant on either end. Women having higher AMH within the same AFC quartile had higher number of retrieved oocytes and cumulative live-birth rate.
Monitoring Milk Somatic Cell Counts
Gheorghe Şteţca
2014-11-01
The presence of somatic cells in milk is a widely disputed issue in the milk production sector. The somatic cell count in raw milk is a marker for specific cow diseases such as mastitis (swollen udder). A high level of somatic cells causes physical and chemical changes to milk composition and nutritional value, as well as to milk products. Also, mastitic milk is not fit for human consumption due to its contribution to the spread of certain diseases and food poisoning. In view of these effects, EU Regulations established the maximum threshold of admitted somatic cells in raw milk at 400,000 cells/mL starting with 2014. This study was carried out to examine raw milk samples obtained from small farms, industrial-type farms and milk processing units. There are several ways to count somatic cells in milk, but the accepted reference method is the microscopic method described by SR EN ISO 13366-1/2008. Generally, samples registered values in accordance with the admissible limit. By periodical monitoring of the somatic cell count, certain technological process issues are avoided and consumer health is ensured.
Anarthria impairs subvocal counting.
Cubelli, R; Nichelli, P; Pentore, R
1993-12-01
We studied subvocal counting in two pure anarthric patients. Analysis showed that they performed markedly worse than normal subjects who were free to articulate subvocally, and their scores were at the lower bound of the performance of subjects suppressing articulation. These results suggest that subvocal counting is impaired in anarthria.
Phillip P. Allen
2014-05-01
Techniques that analyze biological remains from sediment sequences for environmental reconstructions are well established and widely used. Yet, identifying, counting, and recording biological evidence such as pollen grains remain a highly skilled, demanding, and time-consuming task. Standard procedure requires the classification and recording of between 300 and 500 pollen grains from each representative sample. Recording the data from a pollen count requires significant effort and focused resources from the palynologist. However, when an adaptation to the recording procedure is utilized, efficiency and time economy improve. We describe EcoCount, which represents a development in environmental data recording procedure. EcoCount is a voice activated fully customizable digital count sheet that allows the investigator to continuously interact with a field of view during the data recording. Continuous viewing allows the palynologist the opportunity to remain engaged with the essential task, identification, for longer, making pollen counting more efficient and economical. EcoCount is a versatile software package that can be used to record a variety of environmental evidence and can be installed onto different computer platforms, making the adoption by users and laboratories simple and inexpensive. The user-friendly format of EcoCount allows any novice to be competent and functional in a very short time.
Sublattice Counting and Orbifolds
Hanany, Amihay; Reffert, Susanne
2010-01-01
Abelian orbifolds of C^3 are known to be encoded by hexagonal brane tilings. To date it is not known how to count all such orbifolds. We fill this gap by employing number theoretic techniques from crystallography, and by making use of Polya's Enumeration Theorem. The results turn out to be beautifully encoded in terms of partition functions and Dirichlet Series. The same methods apply to counting orbifolds of any toric non-compact Calabi-Yau singularity. As additional examples, we count the orbifolds of the conifold, of the L^{aba} theories, and of C^4.
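A toy version of the underlying enumeration (not taken from the paper): counting abelian orbifolds reduces to counting sublattices of a fixed index in a lattice. For Z^2, every index-n sublattice has a unique upper-triangular Hermite normal form [[a, b], [0, d]] with a·d = n and 0 ≤ b < d, so the number of such sublattices is the divisor sum σ(n):

```python
# Toy illustration: index-n sublattices of Z^2 via Hermite normal forms.
# Each sublattice corresponds to exactly one form [[a, b], [0, d]] with
# a*d = n and 0 <= b < d, so the count is sigma(n), the sum of divisors.

def sublattices(n):
    """Enumerate Hermite normal forms of index-n sublattices of Z^2."""
    forms = []
    for d in range(1, n + 1):
        if n % d == 0:
            a = n // d
            for b in range(d):
                forms.append(((a, b), (0, d)))
    return forms

counts = [len(sublattices(n)) for n in range(1, 7)]
print(counts)  # [1, 3, 4, 7, 6, 12] -- the divisor-sum sequence sigma(n)
```

The paper's number-theoretic and Pólya-enumeration machinery plays the analogous role for the hexagonal lattice of C^3 orbifolds, where symmetries of the lattice must additionally be quotiented out.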
邓春亮; 胡南辉
2012-01-01
In this paper, we study the solution β̂n of the quasi-maximum likelihood equation for generalized linear models (GLMs). Under the assumption of an unnatural link function and some other mild regularity conditions, we prove the weak consistency of the solution and show that its rate of convergence to the true value β0 is Op(λn^{-1/2}), where λn (resp. λ̄n) denotes the smallest (resp. largest) eigenvalue of the matrix Sn = Σ_{i=1}^n Xi Xi^T.
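For concreteness, the likelihood equations of a GLM can be solved numerically by Newton's method. The minimal sketch below uses the canonical log link of a Poisson model (the paper treats non-natural links); the data are synthetic, with each y_i set to its exact mean so that the score vanishes at the true parameters and Newton recovers them.

```python
import math

# Minimal sketch: Newton's method for the (quasi-)likelihood equations of
# a Poisson GLM with E[y_i] = exp(b0 + b1*x_i). Synthetic, noiseless data.

def fit_poisson_glm(x, y, iters=25):
    b0 = math.log(sum(y) / len(y))  # intercept-only starting value
    b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Score vector U and Fisher information H (2x2) at current beta
        u0 = sum(yi - mi for yi, mi in zip(y, mu))
        u1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        # Newton step: beta += H^{-1} U, via the explicit 2x2 inverse
        b0 += (h11 * u0 - h01 * u1) / det
        b1 += (h00 * u1 - h01 * u0) / det
    return b0, b1

x = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [math.exp(0.3 + 0.7 * xi) for xi in x]  # y_i set to its exact mean
b0_hat, b1_hat = fit_poisson_glm(x, y)
print(b0_hat, b1_hat)  # converges to (0.3, 0.7)
```

The abstract's asymptotics concern exactly the eigenvalues of the information-type matrix Sn computed here: the smallest eigenvalue λn governs how fast β̂n approaches β0.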
US Fish and Wildlife Service, Department of the Interior — The goal of St. Vincent National Wildlife Refuge's Track Count Protocol is to provide an index to the population size of game animals inhabiting St. Vincent Island.
Your blood contains red blood cells (RBC), white blood cells (WBC), and platelets. Blood count tests measure the number and types of cells in your blood. This helps doctors check on your overall health. ...
Kersting, Kristian; Natarajan, Sriraam
2012-01-01
A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with a lot of symmetries not reflected in the graphical structure and hence not exploitable by efficient inference techniques such as belief propagation (BP). In this paper, we present a new and simple BP algorithm, called counting BP, that exploits such additional symmetries. Starting from a given factor graph, counting BP first constructs a compressed factor graph of cluster nodes and cluster factors, corresponding to sets of nodes and factors that are indistinguishable given the evidence. Then it runs a modified BP algorithm on the compressed graph that is equivalent to running BP on the original factor graph. Our experiments show that counting BP is applicable to a variety of important AI tasks such as (dynamic) relational models and boolean model counting, and that significant efficiency gains are obtainable, often by orders of magnitude.
Department of Housing and Urban Development — This report displays the data communities reported to HUD about the nature of their dedicated homeless inventory, referred to as their Housing Inventory Count (HIC)....
Allegheny County Traffic Counts
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Traffic sensors at over 1,200 locations in Allegheny County collect vehicle counts for the Pennsylvania Department of Transportation. Data included in the Health...
Carlsson, Sten
1993-01-01
In liquid scintillation counting (LSC) we use the process of luminescence to detect ionising radiation emitted from a radionuclide. Luminescence is emission of visible light of nonthermal origin. It was early found that certain organic molecules have luminescent properties and such molecules are used in LSC. Today LSC is the most widespread method to detect pure beta-emitters like tritium and carbon-14. It has unique properties in its efficient counting geometry, detectability and the lack of...
2015-01-01
In this paper we consider an elementary, and largely unexplored, combinatorial problem in low-dimensional topology. Consider a real 2-dimensional compact surface $S$, and fix a number of points $F$ on its boundary. We ask: how many configurations of disjoint arcs are there on $S$ whose boundary is $F$? We find that this enumerative problem, counting curves on surfaces, has a rich structure. For instance, we show that the curve counts obey an effective recursion, in the general framework of to...
Gukov, Sergei
2016-01-01
Interpreting renormalization group flows as solitons interpolating between different fixed points, we ask various questions that are normally asked in soliton physics but not in renormalization theory. Can one count RG flows? Are there different "topological sectors" for RG flows? What is the moduli space of an RG flow, and how does it compare to familiar moduli spaces of (supersymmetric) domain walls? Analyzing these questions in a wide variety of contexts --- from counting RG walls to AdS/C...
房祥忠; 陈家鼎
2011-01-01
The model of nonhomogeneous Poisson processes with a time-varying intensity function is applied in many fields. The best convergence rate of the maximum likelihood estimate (MLE) for the exponential polynomial model, a widely used class of nonhomogeneous Poisson processes, is obtained as the observation time tends to infinity.
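A nonhomogeneous Poisson process with an exponential-polynomial intensity can be simulated by Lewis–Shedler thinning, which is a convenient way to generate test data for such MLE studies. This is an illustrative sketch with invented parameters, not code from the paper:

```python
import math, random

# Illustrative sketch: simulating a nonhomogeneous Poisson process with
# intensity lambda(t) = exp(a + b*t) on [0, T] by Lewis-Shedler thinning.

def simulate_nhpp(a, b, T, rng):
    lam = lambda t: math.exp(a + b * t)
    lam_max = max(lam(0.0), lam(T))  # valid bound: lam is monotone in t
    events, t = [], 0.0
    while True:
        # candidate event from a homogeneous process at rate lam_max ...
        t += rng.expovariate(lam_max)
        if t > T:
            return events
        # ... accepted with probability lam(t)/lam_max
        if rng.random() < lam(t) / lam_max:
            events.append(t)

rng = random.Random(42)
times = simulate_nhpp(a=1.0, b=0.5, T=4.0, rng=rng)
# Expected count is the integral of lam over [0, 4] = 2*(e^3 - e) ~ 34.7
print(len(times))
```

Fitting (a, b) back from `times` by maximizing the NHPP log-likelihood Σ log λ(t_i) − ∫ λ(t) dt is the estimation problem whose convergence rate the paper analyzes.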
李晓丽; 葛良全; 杨佳; 熊盛青
2015-01-01
The distribution of thorium on the lunar surface provides important evidence for lunar evolution history and the chemical characteristics of lunar surface rocks. It is difficult to obtain this distribution from the Chang'E-2 gamma-ray spectrometer (CE2-GRS) because of noise in the spectrum. A denoising method based on cluster noise-adjusted singular value decomposition (NASVD) is proposed. Through spectrum preprocessing, denoising, background subtraction and net peak area calculation, a global counting-rate map of thorium on the lunar surface is obtained from CE2-GRS data. The map shows a surface thorium distribution in general agreement with measurements from LP-GRS and SELENE GRS, which have better accuracy. The cluster NASVD method removes statistical fluctuation noise and extracts the characteristic peak information of the CE2-GRS spectra more effectively than traditional gamma-ray spectrum smoothing methods.
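The core of plain NASVD can be sketched in a few lines (the paper's cluster variant partitions the spectra first; this generic sketch omits that step): scale each channel by the square root of its mean count so that Poisson noise becomes approximately uniform, keep only the top-k singular components, and scale back.

```python
import numpy as np

# Generic noise-adjusted SVD (NASVD) smoothing sketch for count spectra.

def nasvd_smooth(spectra, k):
    """spectra: (n_spectra, n_channels) array of counts."""
    w = np.sqrt(np.maximum(spectra.mean(axis=0), 1e-12))  # noise adjustment
    u, s, vt = np.linalg.svd(spectra / w, full_matrices=False)
    s[k:] = 0.0  # discard low-variance components, assumed to be noise
    return (u * s) @ vt * w

# Rank-2 synthetic "spectra": two source shapes in varying proportions
rng = np.random.default_rng(0)
shapes = np.array([[5.0, 4.0, 3.0, 2.0, 1.0],
                   [1.0, 2.0, 3.0, 4.0, 5.0]])
mix = rng.uniform(1.0, 3.0, size=(20, 2))
clean = mix @ shapes
smoothed = nasvd_smooth(clean, k=2)
print(np.max(np.abs(smoothed - clean)))  # ~0: rank-2 data is reproduced
```

On real spectra the retained rank k trades noise suppression against loss of weak features such as the thorium characteristic peaks discussed above.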
Novel Photon-Counting Detectors for Free-Space Communication
Krainak, Michael A.; Yang, Guan; Sun, Xiaoli; Lu, Wei; Merritt, Scott; Beck, Jeff
2016-01-01
We present performance data for novel photon counting detectors for free space optical communication. NASA GSFC is testing the performance of three novel photon counting detectors: 1) a 2x8 mercury cadmium telluride avalanche array made by DRS Inc., 2) a commercial 2880-element silicon avalanche photodiode array, and 3) a prototype resonant cavity silicon avalanche photodiode array. We will present and compare dark count, photon detection efficiency, wavelength response and communication performance data for these detectors. We discuss system wavelength trades and architectures for optimizing overall communication link sensitivity, data rate and cost performance. Photon detection efficiencies of greater than 50% were routinely demonstrated across 5 HgCdTe APD arrays, with one array reaching a maximum PDE of 70%. High resolution pixel-surface spot scans were performed and the junction diameters of the diodes were measured. The junction diameter was decreased from 31 µm to 25 µm, resulting in a 2x increase in e-APD gain, from 470 on the 2010 array to 1100 on the array delivered to NASA GSFC. Mean single photon SNRs of over 12 were demonstrated at excess noise factors of 1.2-1.3. The commercial silicon APD array has a fast output with rise times of 300 ps and pulse widths of 600 ps. Received and filtered signals from the entire array are multiplexed onto this single fast output. The prototype resonant cavity silicon APD array is being developed for use at 1 micron wavelength.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
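The maximum entropy principle mentioned here is easiest to see in Jaynes' classic dice problem (a standard textbook illustration, not specific to this article): among all distributions on the faces 1..6 with a prescribed mean of 4.5, the least-biased (maximum entropy) one has the exponential form p_i ∝ exp(−λ·i), with the multiplier λ fixed by the mean constraint.

```python
import math

# Jaynes' dice problem: maximum entropy distribution on faces 1..6 with
# mean 4.5. The maxent solution is p_i ~ exp(-lam*i); lam is found by
# bisection, since the mean is strictly decreasing in lam.

def maxent_die(target_mean, lo=-10.0, hi=10.0, iters=100):
    faces = range(1, 7)
    def mean(lam):
        w = [math.exp(-lam * i) for i in faces]
        return sum(i * wi for i, wi in zip(faces, w)) / sum(w)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean(mid) > target_mean:
            lo = mid   # need a larger multiplier
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(-lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)
print([round(pi, 4) for pi in p])  # weights increase toward face 6
```

The same recipe (exponential family plus Lagrange multipliers on the constraints) underlies the drug-discovery applications the article surveys.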
Tamura, Takako
2015-12-01
The circulating blood volume accounts for 8% of the body weight, of which 45% comprises cellular components (blood cells) and 55% liquid components. In a complete blood count (CBC), we can measure the number and morphological features of blood cells (leukocytes, red blood cells, platelets) and the amount of hemoglobin. Blood counts are often used to detect diseases such as infection, anemia, and a bleeding tendency, and for abnormal cell screening in blood disease. This count is widely used as a basic data item of health examinations. In recent years, clinical tests before consultation have become common among outpatient clinics, and the influence of laboratory values on consultation has grown. CBC, which is intended to count the number of raw cells and to check morphological features, is easily influenced by the environment, techniques, etc., during specimen collection procedures and transportation. Therefore, special attention is necessary when reading laboratory data. Providing correct test values that accurately reflect a patient's condition from the laboratory to the clinical side is crucial. Inappropriate medical treatment caused by erroneous values resulting from altered specimens should be avoided. In order to provide correct test values, the daily management of devices is a matter of course, and comprehending data variables and actively providing information to the clinical side are important. In this chapter, concerning sampling collection, blood collection tubes, dealing with specimens, transportation, and storage, I will discuss their effects on CBC, along with management and handling methods.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Soeker, H. [Deutsches Windenergie-Institut (Germany)
1996-09-01
As state of the art method the rainflow counting technique is presently applied everywhere in fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in wind energy industries as it is used, most of the times, as a mere data reduction technique disregarding some of the inherent information of the rainflow counting results. The ideas described in the following aim at exploitation of this information and making it available for use in the design and verification process. (au)
张庆
2015-01-01
Objective: To investigate the effect of platelet count on the platelet aggregation test and to standardize the detection of platelet aggregation rate, in order to improve detection quality and ensure the reliability of platelet aggregation measurements. Methods: Venous whole blood samples from 211 healthy people were collected and centrifuged to obtain platelet-rich plasma (PRP) and platelet-poor plasma (PPP). The PRP was diluted with autologous PPP to obtain PRP test samples with different platelet concentrations, and the platelet concentration of each PRP sample was confirmed. Platelet aggregation rate was detected by turbidimetry, and its correlation with platelet concentration was examined. Results: Adenosine diphosphate (ADP)- and arachidonic acid (AA)-induced platelet aggregation rates were within the laboratory reference range. With decreasing platelet concentration, the aggregation rate decreased significantly (P<0.05); when the concentration fell below 95×10^9/L, the measured aggregation rate was below the established reference range. Platelet concentrations of 90-350×10^9/L were significantly correlated with aggregation (rAA=0.67, rADP=0.69); other concentrations showed no correlation. Conclusion: Platelet concentration can affect the platelet aggregation rate; the aggregation rate should be determined at concentrations above 95×10^9/L, which reflects the relationship between concentration and aggregation and accurately represents aggregation at that concentration.
Dougherty Stahl, Katherine A.
2014-01-01
Each disciplinary community has its own criteria for determining what counts as evidence of knowledge in their academic field. The criteria influence the ways that a community's knowledge is created, communicated, and evaluated. Situating reading, writing, and language instruction within the content areas enables teachers to explicitly…
Stuart P. Green
2016-08-01
What counts, or should count, as prostitution? In the criminal law today, prostitution is understood to involve the provision of sexual services in exchange for money or other benefits. But what exactly is a ‘sexual service’? And what exactly is the nature of the required ‘exchange’? The key to answering these questions is to recognize that how we choose to define prostitution will inevitably depend on why we believe one or more aspects of prostitution are wrong or harmful, or should be criminalized or otherwise deterred, in the first place. These judgements, in turn, will often depend on an assessment of the contested empirical evidence on which they rest. This article describes a variety of real-world contexts in which the ‘what counts as prostitution’ question has arisen, surveys a range of leading rationales for deterring prostitution, and demonstrates how the answer to the definition question depends on the answer to the normative question. The article concludes with some preliminary thoughts on how analogous questions about what should count as sexual conduct arise in the context of consensual offences such as adultery and incest, as well as non-consensual offences such as sexual assault.
Optimal allocation of point-count sampling effort
Barker, R.J.; Sauer, J.R.; Link, W.A.
1993-01-01
Both unlimited and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and sampling characteristics of population size. Using information about the relationship between proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data of cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
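The trade-off described above can be sketched numerically. This is an illustrative sketch under an assumed exponential detection curve p(t) = 1 − exp(−t/τ) (one of the forms the authors fit); the time constants are invented. With total field time T and travel time c between points, surveying n = T/(t + c) points for t minutes each gives an expected total count proportional to n·p(t), which we maximize by a grid scan.

```python
import math

# Optimizing per-point count duration t for maximum expected total count,
# assuming p(t) = 1 - exp(-t/tau) and fixed travel time between points.

def optimal_duration(tau, travel, total, step=0.001):
    def expected_total(t):
        n_points = total / (t + travel)      # points visited in time 'total'
        return n_points * (1.0 - math.exp(-t / tau))
    best = max((expected_total(t), t)
               for t in (i * step for i in range(1, int(20 * tau / step))))
    return best[1]

# Hypothetical numbers: detection time constant 3 min, 5 min travel time
t_opt = optimal_duration(tau=3.0, travel=5.0, total=300.0)
print(round(t_opt, 2))  # around 4 minutes per point
```

As the abstract notes, optimizing a different quantity (e.g. precision of estimated population change rather than total count) can shift the optimal duration substantially.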
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Bulaevskaya, Vera; Bernstein, Adam
2011-06-01
This paper analyzes the sensitivity of antineutrino count rate measurements to changes in the fissile content of civil power reactors. Such measurements may be useful in IAEA reactor safeguards applications. We introduce a hypothesis testing procedure to identify statistically significant differences between the antineutrino count rate evolution of a standard "baseline" fuel cycle and that of an anomalous cycle, in which plutonium is removed and replaced with an equivalent fissile worth of uranium. The test would allow an inspector to detect anomalous reactor activity, or to positively confirm that the reactor is operating in a manner consistent with its declared fuel inventory and power level. We show that with a reasonable choice of detector parameters, the test can detect replacement of 82 kg of plutonium in 90 days with 95% probability, while controlling the false positive rate at 5%. We show that some improvement on this level of sensitivity may be obtained by various means, including use of the method in conjunction with existing reactor safeguards methods. We also identify a necessary and sufficient minimum daily antineutrino count rate and a maximum tolerable background rate to achieve the quoted sensitivity, and list examples of detectors in which such rates have been attained.
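A back-of-envelope version of such a test (the paper's actual procedure is more elaborate) compares total antineutrino counts from a baseline cycle and a test cycle of equal length using a normal approximation to the difference of two Poisson totals. All rates below are hypothetical:

```python
import math

# Sketch: z-statistic for a deficit in total counts between a baseline
# cycle and a test cycle of equal duration. For two Poisson totals N1, N2,
# Var(N1 - N2) = N1 + N2, so z = (N1 - N2) / sqrt(N1 + N2).

def z_statistic(baseline_per_day, observed_per_day, days):
    n1 = baseline_per_day * days
    n2 = observed_per_day * days
    return (n1 - n2) / math.sqrt(n1 + n2)

# e.g. a 1% count-rate deficit observed over a 90-day comparison period
z = z_statistic(baseline_per_day=5000.0, observed_per_day=4950.0, days=90)
print(round(z, 2))  # well above the 1.64 one-sided 5% threshold
```

This illustrates why the abstract emphasizes a minimum daily count rate: the detectable fractional deficit shrinks only as the square root of the total number of counts accumulated.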
32-channel single photon counting module for ultrasensitive detection of DNA sequences
Gudkov, Georgiy; Dhulla, Vinit; Borodin, Anatoly; Gavrilov, Dmitri; Stepukhovich, Andrey; Tsupryk, Andrey; Gorbovitski, Boris; Gorfinkel, Vera
2006-10-01
We continue our work on the design and implementation of multi-channel single photon detection systems for highly sensitive detection of ultra-weak fluorescence signals, for high-performance, multi-lane DNA sequencing instruments. A fiberized, 32-channel single photon detection (SPD) module based on single photon avalanche diode (SPAD), model C30902S-DTC, from Perkin Elmer Optoelectronics (PKI) has been designed and implemented. Unavailability of high performance, large area SPAD arrays and our desire to design high performance photon counting systems drives us to use individual diodes. Slight modifications in our quenching circuit has doubled the linear range of our system from 1MHz to 2MHz, which is the upper limit for these devices and the maximum saturation count rate has increased to 14 MHz. The detector module comprises of a single board computer PC-104 that enables data visualization, recording, processing, and transfer. Very low dark count (300-1000 counts/s), robust, efficient, simple data collection and processing, ease of connectivity to any other application demanding similar requirements and similar performance results to the best commercially available single photon counting module (SPCM from PKI) are some of the features of this system.
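The linear range and saturation behaviour discussed above are commonly described with dead-time models. The sketch below shows the standard nonparalyzable dead-time correction as a generic illustration; the 50 ns dead time is an assumed value, not this module's specification.

```python
# Nonparalyzable dead-time model: a counter with dead time tau measuring
# rate m was actually hit at rate n = m / (1 - m*tau); conversely the
# measured rate is m = n / (1 + n*tau) and saturates at 1/tau.

def true_rate(measured_hz, dead_time_s):
    return measured_hz / (1.0 - measured_hz * dead_time_s)

def measured_rate(true_hz, dead_time_s):
    return true_hz / (1.0 + true_hz * dead_time_s)

tau = 50e-9                      # assumed 50 ns dead time
m = measured_rate(2.0e6, tau)    # 2 MHz input -> ~1.82 MHz measured
print(round(true_rate(m, tau)))  # correction recovers 2000000
```

Applying such a correction in software is one way photon-counting systems extend their usable linear range toward the hard saturation limit set by the quenching electronics.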
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Valter Abrantes Pereira da Silva
2007-03-01
OBJECTIVE: This study sought to compare maximum heart rate (HRmax) values measured during a graded exercise test (GXT) with those calculated from prediction equations in Brazilian elderly women. METHODS: A treadmill maximal graded exercise test in accordance with the modified Bruce protocol was used to obtain reference values for maximum heart rate (HRmax) in 93 elderly women (mean age 67.1 ± 5.16 years). Measured values were compared with those estimated from the "220 - age" and Tanaka et al formulas using repeated-measures ANOVA. Correlation and agreement between measured and estimated values were tested. Also evaluated was the correlation between measured HRmax and volunteers’ age. RESULTS: Results were as follows: 1) mean HRmax reached during GXT was 145.5 ± 12.5 beats per minute (bpm); 2) both the "220 - age" and Tanaka et al (2001) equations significantly overestimated (p < 0.001) HRmax by a mean difference of 7.4 and 15.5 bpm, respectively; 3)
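The two prediction equations compared in the study are simple linear functions of age; Tanaka et al. (2001) is HRmax = 208 − 0.7·age. Evaluating both at the cohort's mean age reproduces the reported mean overestimates:

```python
# The two HRmax prediction equations compared above, evaluated at the
# cohort's mean age (67.1 years) against the measured mean (145.5 bpm).

def hrmax_220(age):
    return 220.0 - age

def hrmax_tanaka(age):
    return 208.0 - 0.7 * age  # Tanaka et al. (2001)

mean_age, measured = 67.1, 145.5
d220 = round(hrmax_220(mean_age) - measured, 1)
dtanaka = round(hrmax_tanaka(mean_age) - measured, 1)
print(d220, dtanaka)  # 7.4 and 15.5 bpm, matching the abstract
```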
Yuan-Hong Jiang
OBJECTIVES: The aim of this study was to investigate the predictive values of the total International Prostate Symptom Score (IPSS-T) and voiding to storage subscore ratio (IPSS-V/S) in association with total prostate volume (TPV) and maximum urinary flow rate (Qmax) in the diagnosis of bladder outlet-related lower urinary tract dysfunction (LUTD) in men with lower urinary tract symptoms (LUTS). METHODS: A total of 298 men with LUTS were enrolled. Video-urodynamic studies were used to determine the causes of LUTS. Differences in IPSS-T, IPSS-V/S ratio, TPV and Qmax between patients with bladder outlet-related LUTD and bladder-related LUTD were analyzed. The positive and negative predictive values (PPV and NPV) for bladder outlet-related LUTD were calculated using these parameters. RESULTS: Of the 298 men, bladder outlet-related LUTD was diagnosed in 167 (56%). We found that IPSS-V/S ratio was significantly higher among those patients with bladder outlet-related LUTD than patients with bladder-related LUTD (2.28±2.25 vs. 0.90±0.88, p<0.001). When IPSS-V/S >1 or >2 was factored into the equation instead of IPSS-T, PPV were 91.4% and 97.3%, respectively, and NPV were 54.8% and 49.8%, respectively. CONCLUSIONS: Combination of IPSS-T with TPV and Qmax increases the PPV of bladder outlet-related LUTD. Furthermore, including IPSS-V/S>1 or >2 into the equation results in a higher PPV than IPSS-T. IPSS-V/S>1 is a stronger predictor of bladder outlet-related LUTD than IPSS-T.
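For reference, PPV and NPV are computed from a 2×2 confusion table; the sketch below uses hypothetical counts, not figures from the study:

```python
# Generic PPV/NPV computation from a 2x2 table (hypothetical counts).

def ppv_npv(tp, fp, tn, fn):
    ppv = tp / (tp + fp)  # P(condition present | test positive)
    npv = tn / (tn + fn)  # P(condition absent  | test negative)
    return ppv, npv

# e.g. a cutoff that flags 160 of 200 true cases and 30 of 100 non-cases
ppv, npv = ppv_npv(tp=160, fp=30, tn=70, fn=40)
print(round(ppv, 3), round(npv, 3))  # 0.842 0.636
```

As in the study, raising the cutoff (e.g. IPSS-V/S >2 instead of >1) typically trades a higher PPV against a lower NPV.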
Photon-Counting Arrays for Time-Resolved Imaging
I. Michel Antolovic
2016-06-01
The paper presents a camera comprising 512 × 128 pixels capable of single-photon detection and gating with a maximum frame rate of 156 kfps. The photon capture is performed through a gated single-photon avalanche diode that generates a digital pulse upon photon detection and through a digital one-bit counter. Gray levels are obtained through multiple counting and accumulation, while time-resolved imaging is achieved through a 4-ns gating window controlled with subnanosecond accuracy by a field-programmable gate array. The sensor, which is equipped with microlenses to enhance its effective fill factor, was electro-optically characterized in terms of sensitivity and uniformity. Several examples of capture of fast events are shown to demonstrate the suitability of the approach.
Imaging by photon counting with 256x256 pixel matrix
Tlustos, Lukas; Campbell, Michael; Heijne, Erik H. M.; Llopart, Xavier
2004-09-01
Using 0.25 µm standard CMOS we have developed 2-D semiconductor matrix detectors with sophisticated functionality integrated inside each pixel of a hybrid sensor module. One of these sensor modules is a matrix of 256×256 square 55 µm pixels intended for X-ray imaging. This device is called 'Medipix2' and features a fast amplifier and two-level discrimination for signals between 1000 and 100000 equivalent electrons, with overall signal noise ~150 e⁻ rms. Signal polarity and comparator thresholds are programmable. A maximum count rate of nearly 1 MHz per pixel can be achieved, which corresponds to an average flux of 3×10¹⁰ photons per cm². The selected signals can be accumulated in each pixel in a 13-bit register. The serial readout takes 5-10 ms. A parallel readout of ~300 µs could also be used. Housekeeping functions such as local dark current compensation, test pulse generation, silencing of noisy pixels and threshold tuning in each pixel contribute to the homogeneous response over a large sensor area. The sensor material can be adapted to the energy of the X-rays. Best results have been obtained with high-resistivity silicon detectors, but also CdTe and GaAs detectors have been used. The lowest detectable X-ray energy was about 4 keV. Background measurements have been made, as well as measurements of the uniformity of imaging by photon counting. Very low photon count rates are feasible and noise-free at room temperature. The readout matrix can be used also with visible photons if an energy or charge intensifier structure is interposed such as a gaseous amplification layer or a microchannel plate or acceleration field in vacuum.
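The quoted rate and flux figures can be cross-checked with a few lines of arithmetic (our calculation, not from the paper): a 1 MHz per-pixel rate over 55 µm square pixels corresponds to roughly the stated 3×10¹⁰ photons per cm² per second.

```python
# Cross-check (ours): convert the maximum per-pixel count rate into a
# photon flux per unit area for 55 um square pixels.
pixel_pitch_cm = 55e-4                 # 55 um expressed in cm
pixel_area_cm2 = pixel_pitch_cm ** 2   # ~3.0e-5 cm^2 per pixel
max_rate_per_pixel = 1e6               # ~1 MHz per pixel (from the abstract)

flux = max_rate_per_pixel / pixel_area_cm2   # photons / cm^2 / s
print(f"{flux:.2e} photons/cm^2/s")
```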
The right to count does not always count
Sodemann, Morten
2013-01-01
The best prescription against illness is learning to read and to count. People who are unable to count have a harder time learning to read. People who have difficulty counting make poorer decisions, are less able to combine information and are less likely to have a strategy for life...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...
High red blood cell count. By Mayo Clinic Staff. A high red blood cell count is an increase in oxygen-carrying cells in your bloodstream. Red blood cells transport oxygen from your lungs to tissues throughout ...
Counting and Topological Order
陈阳军
1997-01-01
The counting method is a simple and efficient method for processing linear recursive datalog queries. Its time complexity is bounded by O(n·e), where n and e denote the numbers of nodes and edges, respectively, in the graph representing the input relations. In this paper, the concepts of heritage appearance function and heritage selection function are introduced, and an evaluation algorithm based on the computation of such functions in topological order is developed. This new algorithm requires only linear time in the case of non-cyclic data.
Lodwick, Rebecca K; Sabin, Caroline A; Porter, Kholoud;
2010-01-01
Whether people living with HIV who have not received antiretroviral therapy (ART) and have high CD4 cell counts have higher mortality than the general population is unknown. We aimed to examine this by analysis of pooled data from industrialised countries....
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
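The Poisson maximum-likelihood fit the abstract describes can be sketched in a few lines (our illustration, not CORA's code): given counts c_i, a fixed line profile g_i and a known background b, one maximizes logL(A) = Σ [c_i·ln(A·g_i + b) − (A·g_i + b)] over the line amplitude A. Here we simply scan A rather than solving CORA's fixed-point equation.

```python
import math

# Sketch of a Poisson maximum-likelihood line-amplitude fit (our
# construction, assuming a Gaussian profile and known flat background).

def gaussian_profile(n, center, sigma):
    return [math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(n)]

def log_likelihood(A, counts, profile, b):
    # Poisson log-likelihood up to a constant: sum c*ln(m) - m, m = A*g + b
    return sum(c * math.log(A * g + b) - (A * g + b)
               for c, g in zip(counts, profile))

def fit_amplitude(counts, profile, b, lo=0.1, hi=100.0, steps=2000):
    # Coarse scan over A; CORA instead iterates a fixed-point equation.
    best_A, best_L = lo, float("-inf")
    for k in range(steps + 1):
        A = lo + (hi - lo) * k / steps
        L = log_likelihood(A, counts, profile, b)
        if L > best_L:
            best_A, best_L = A, L
    return best_A

# Synthetic low-count spectrum: true amplitude 12, background 1 count/bin.
profile = gaussian_profile(21, center=10, sigma=2.0)
counts = [round(12 * g + 1) for g in profile]     # noise-free for the demo
A_hat = fit_amplitude(counts, profile, b=1.0)
print(round(A_hat, 1))
```

The scan recovers an amplitude close to the true value of 12 even at these low counts, which is the regime where Poisson (rather than Gaussian, chi-square) fitting matters.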
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
徐革锋; 尹家胜; 韩英; 刘洋; 牟振波
2014-01-01
This study examined the effects of water temperature on the metabolic characteristics and aerobic exercise capacity of juvenile manchurian trout, Brachymystax lenok (Pallas). The resting metabolic rate (RMR), maximum metabolic rate (MMR), metabolic scope (MS) and critical swimming speed (UCrit) of juveniles were measured at different temperatures (4, 8, 12, 16, 20°C). The results showed that both the RMR and the MMR increased significantly with the increasing of water temperature (P<0.05). Compared with the test group at 4°C, the RMR for 8°C, 12°C, 16°C and 20°C increased by 62%, 165%, 390%, 411%, respectively, and the MMR increased by 3%, 34%, 111%, 115%, respectively. However, the MS decreased with the increasing of water temperature, with the highest MS occurring at 4°C. UCrit was significantly affected by water temperature (P<0.05), but the variations of UCrit didn't follow a certain pattern with temperature. In the test of aerobic exercise, the MMR for each temperature level occurred at the swimming speed of 70% UCrit, probably due to the start of anaerobic metabolism, which caused excessive creatine in the body and consequently hindered the aerobic metabolism.
Counting coalescent histories.
Rosenberg, Noah A
2007-04-01
Given a species tree and a gene tree, a valid coalescent history is a list of the branches of the species tree on which coalescences in the gene tree take place. I develop a recursion for the number of valid coalescent histories that exist for an arbitrary gene tree/species tree pair, when one gene lineage is studied per species. The result is obtained by defining a concept of m-extended coalescent histories, enumerating and counting these histories, and taking the special case of m = 1. As a sum over valid coalescent histories appears in a formula for the probability that a random gene tree evolving along the branches of a fixed species tree has a specified labeled topology, the enumeration of valid coalescent histories can considerably reduce the effort required for evaluating this formula.
Oscillations in counting statistics
Wilk, Grzegorz
2016-01-01
The very large transverse momenta and large multiplicities available in present LHC experiments on pp collisions allow a much closer look at the corresponding distributions. Some time ago we discussed a possible physical meaning of apparent log-periodic oscillations showing up in p_T distributions (suggesting that the exponent of the observed power-like behavior is complex). In this talk we concentrate on another example of oscillations, this time connected with multiplicity distributions P(N). We argue that some combinations of the experimentally measured values of P(N) (satisfying the recurrence relations used in the description of cascade-stochastic processes in quantum optics) exhibit distinct oscillatory behavior, not observed in the usual Negative Binomial Distributions used to fit data. These oscillations provide yet another example of oscillations seen in counting statistics in many different, apparently very disparate branches of physics further demonstrating the universality of this phenomenon.
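The recurrence-relation diagnostic the abstract alludes to can be made concrete (our sketch, with assumed parameter values): for a Negative Binomial Distribution the combination g(N) = (N+1)·P(N+1)/P(N) is exactly linear in N, so oscillations of the measured g(N) around a straight line signal a departure from the NBD.

```python
from math import comb

# For an NBD with P(N) = C(N+k-1, N) p^N (1-p)^k, the recurrence ratio
# g(N) = (N+1) P(N+1) / P(N) reduces algebraically to p*(N+k): a straight
# line in N, with no oscillation. Measured P(N) that oscillate in g(N)
# therefore cannot be fit by a plain NBD.

def nbd_pmf(n, k, p):
    return comb(n + k - 1, n) * p ** n * (1 - p) ** k

k, p = 3, 0.4                      # example NBD parameters (assumed)
g = [(n + 1) * nbd_pmf(n + 1, k, p) / nbd_pmf(n, k, p) for n in range(10)]
print([round(x, 3) for x in g])    # linear: p*(N+k)
```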
Counting Statistics and Ion Interval Density in AMS
Vogel, J S; Ognibene, T; Palmblad, M; Reimer, P
2004-08-03
Confidence in the precisions of AMS and decay measurements must be comparable for the application of the ¹⁴C calibration to age determinations using both technologies. We confirmed the random nature of the temporal distribution of ¹⁴C ions in an AMS spectrometer for a number of sample counting rates and properties of the sputtering process. The temporal distribution of ion counts was also measured to confirm the applicability of traditional counting statistics.
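The two signatures of a "random in time" pulse train that such a check rests on can be simulated in a few lines (our illustration, not the paper's data): exponential inter-arrival intervals with mean 1/rate, and counts in fixed windows whose variance equals their mean.

```python
import random

# Simulate a Poisson pulse train and verify both counting-statistics
# signatures: mean interval = 1/rate, and variance/mean of windowed
# counts (the Fano factor) = 1.
random.seed(42)
rate = 1000.0                                   # events per second
intervals = [random.expovariate(rate) for _ in range(200_000)]
mean_interval = sum(intervals) / len(intervals)

# Build arrival times and histogram them into 1 ms counting windows.
times, t = [], 0.0
for dt in intervals:
    t += dt
    times.append(t)
window = 1e-3
n_windows = int(t / window)
counts = [0] * n_windows
for s in times:
    i = int(s / window)
    if i < n_windows:
        counts[i] += 1
mean_c = sum(counts) / n_windows
var_c = sum((c - mean_c) ** 2 for c in counts) / n_windows
print(round(mean_interval * rate, 3), round(var_c / mean_c, 3))
```

Both quantities come out close to 1, as they should for a Poisson process; a dead-time-affected or correlated source would show a Fano factor below or above 1.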
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
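The classic (point-value) MaxEnt step the abstract builds on can be made concrete with Jaynes' standard worked example (our sketch): the maximum-entropy distribution over a die's faces subject to a prescribed mean has the Gibbs form p_i ∝ exp(λ·i), with λ fixed by the mean constraint.

```python
import math

# Classic MaxEnt with a single moment constraint: find p_1..p_6 maximizing
# entropy subject to sum p_i = 1 and sum i*p_i = target_mean. The solution
# is p_i ∝ exp(lam*i); we solve for lam by bisection, since the implied
# mean is monotone in lam.

def maxent_die(target_mean, lo=-5.0, hi=5.0, iters=100):
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)               # a die "loaded" to average 4.5
print([round(x, 3) for x in p])
```

The generalized approach in the paper would replace the exact target mean 4.5 with a density over possible means and propagate that density through this map.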
Moraes, D; Nygård, E
2008-01-01
This ASIC is a counting-mode front-end electronics optimized for the readout of CdZnTe/CdTe and silicon sensors, for possible use in applications where the flux of ionizing radiation is high. The chip is implemented in 0.25 μm CMOS technology. The circuit comprises 128 channels, each equipped with a transimpedance amplifier followed by a gain shaper stage with 21 ns peaking time, two discriminators and two 18-bit counters. The channel architecture is optimized for the detector characteristics in order to achieve the best energy resolution at counting rates of up to 5 Mcounts/s. The amplifier shows a linear sensitivity of 118 mV/fC and an equivalent noise charge of about 711 e−, for a detector capacitance of 5 pF. Complete evaluation of the circuit is presented using electronic pulses and pixel detectors.
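The quoted front-end figures imply a definite pulse height and signal-to-noise per photon; a back-of-envelope check (ours, with the 4.43 eV per electron-hole pair for CdTe assumed, not stated in the abstract):

```python
# Back-of-envelope check of the quoted front-end numbers: output pulse
# height (118 mV/fC sensitivity) and signal-to-noise (~711 e- ENC) for a
# photon fully absorbed in CdTe/CdZnTe.
E_PAIR_EV = 4.43                 # eV per e-h pair in CdTe (assumed value)
Q_E = 1.602e-19                  # elementary charge, C

def pulse_height_mV(photon_keV, sens_mV_per_fC=118.0):
    n_pairs = photon_keV * 1e3 / E_PAIR_EV
    q_fC = n_pairs * Q_E * 1e15  # collected charge in fC
    return sens_mV_per_fC * q_fC

def snr(photon_keV, enc_electrons=711.0):
    n_pairs = photon_keV * 1e3 / E_PAIR_EV
    return n_pairs / enc_electrons

# A 60 keV photon: ~13500 pairs, ~2.2 fC, ~256 mV pulse, SNR ~ 19.
print(round(pulse_height_mV(60.0), 1), round(snr(60.0), 1))
```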
Alixon David Reyes Rodríguez
2011-06-01
theoretical points of reference that responded to scientific needs before, but which are insufficient now. It has been observed in national and international conferences, seminars, research encounters, in our universities and in different kinds of scientific meetings that some obsolete assumptions are still being taught, which slows down progress in Education Sciences and Sports Science. We recognize that some predictive formulas used to calculate the estimated maximum heart rate (EMHR) represented progress for Exercise Science and Exercise Physiology at some point; however, there are important aspects that should be considered. It is not that we despise them, but we intend to demonstrate and demystify the use of the traditional formula as almost the only calculation and measurement pattern for EMHR, and to offer, from the perspective of other researchers, better possibilities of exercise dosage for certain populations with particular characteristics.
Jia-Yu Tang; Zu-Hui Fan
2003-01-01
We study the counts of resolved SZE (Sunyaev-Zel'dovich effect) clusters expected from an interferometric survey in different cosmological models under different conditions. The self-similar universal gas model and Press-Schechter mass function are used. We take the observing frequency to be 90 GHz, and consider two dish diameters, 1.2 m and 2.5 m. We calculate the number density of the galaxy clusters dN/(dΩdz) at a high flux limit S_ν^lim = 100 mJy and at a relatively low S_ν^lim = 10 mJy. The total numbers of SZE clusters N in two low-Ω0 models are compared. The results show that the influence of the resolved effect depends not only on D, but also on S_ν^lim: at a given D, the effect is more significant for a high than for a low S_ν^lim. Also, the resolved effect for a flat universe is more impressive than that for an open universe. For D = 1.2 m and S_ν^lim = 10 mJy, the resolved effect is very weak. Considering the designed interferometers which will be used to survey SZE clusters, we find that the resolved effect is insignificant when estimating the expected yield of the SZE cluster surveys.
Multivariate ultrametric root counting
Avendano, Martin
2011-01-01
Let $K$ be a field, complete with respect to a discrete non-archimedian valuation and let $k$ be the residue field. Consider a system $F$ of $n$ polynomial equations in $K\\vars$. Our first result is a reformulation of the classical Hensel's Lemma in the language of tropical geometry: we show sufficient conditions (semiregularity at $w$) that guarantee that the first digit map $\\delta:(K^\\ast)^n\\to(k^\\ast)^n$ is a one to one correspondence between the solutions of $F$ in $(K^\\ast)^n$ with valuation $w$ and the solutions in $(k^\\ast)^n$ of the initial form system ${\\rm in}_w(F)$. Using this result, we provide an explicit formula for the number of solutions in $(K^\\ast)^n$ of a certain class of systems of polynomial equations (called regular), characterized by having finite tropical prevariety, by having initial forms consisting only of binomials, and by being semiregular at any point in the tropical prevariety. Finally, as a consequence of the root counting formula, we obtain the expected number of roots in $(K...
Making environmental DNA count.
Kelly, Ryan P
2016-01-01
The arc of reception for a new technology or method--like the reception of new information itself--can pass through predictable stages, with audiences' responses evolving from 'I don't believe it', through 'well, maybe' to 'yes, everyone knows that' to, finally, 'old news'. The idea that one can sample a volume of water, sequence DNA out of it, and report what species are living nearby has experienced roughly this series of responses among biologists, beginning with the microbial biologists who developed genetic techniques to reveal the unseen microbiome. 'Macrobial' biologists and ecologists--those accustomed to dealing with species they can see and count--have been slower to adopt such molecular survey techniques, in part because of the uncertain relationship between the number of recovered DNA sequences and the abundance of whole organisms in the sampled environment. In this issue of Molecular Ecology Resources, Evans et al. (2015) quantify this relationship for a suite of nine vertebrate species consisting of eight fish and one amphibian. Having detected all of the species present with a molecular toolbox of six primer sets, they consistently find DNA abundances are associated with species' biomasses. The strength and slope of this association vary for each species and each primer set--further evidence that there is no universal parameter linking recovered DNA to species abundance--but Evans and colleagues take a significant step towards being able to answer the next question audiences tend to ask: 'Yes, but how many are there?'
LAWRENCE RADIATION LABORATORY COUNTING HANDBOOK
Group, Nuclear Instrumentation
1966-10-01
The Counting Handbook is a compilation of operational techniques and performance specifications on counting equipment in use at the Lawrence Radiation Laboratory, Berkeley. Counting notes have been written from the viewpoint of the user rather than that of the designer or maintenance man. The only maintenance instructions that have been included are those that can easily be performed by the experimenter to assure that the equipment is operating properly.
Counting Frequencies from Zotero Items
Spencer Roberts
2013-04-01
In Counting Frequencies you learned how to count the frequency of specific words in a list using Python. In this lesson, we will expand on that topic by showing you how to get information from Zotero HTML items, save the content from those items, and count the frequencies of words. It may be beneficial to look over the previous lesson before we begin.
Photon counting modules using RCA silicon avalanche photodiodes
Lightstone, Alexander W.; Macgregor, Andrew D.; Macsween, Darlene E.; Mcintyre, Robert J.; Trottier, Claude; Webb, Paul P.
1989-01-01
Avalanche photodiodes (APD) are excellent small area, solid state detectors for photon counting. Performance possibilities include: photon detection efficiency in excess of 50 percent; wavelength response from 400 to 1000 nm; count rate to 10⁷ counts per sec; afterpulsing at negligible levels; timing resolution better than 1 ns. Unfortunately, these performance levels are not simultaneously available in a single detector amplifier configuration. By considering theoretical performance predictions and previous and new measurements of APD performance, the anticipated performance of a range of proposed APD-based photon counting modules is derived.
Social Security Administration — Staging Instance for all SUMs Counts related projects including: Redeterminations/Limited Issue, Continuing Disability Resolution, CDR Performance Measures, Initial...
Oda, Yasuyuki; Sato, Eiichi; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Osawa, Akihiro; Matsukiyo, Hiroshi; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya; Sugimura, Shigeaki; Endo, Haruyuki; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2011-07-01
High-speed X-ray photon counting is useful for discriminating photon energy, and the counting can be used for constructing an X-ray computed tomography (CT) system. A photon-counting X-ray CT system consists of an X-ray generator, a turntable, an oscillation linear detector, a two-stage controller, a multipixel photon counter (MPPC) module, a 1.0 mm-thick crystal (scintillator) of YAP(Ce) (cerium-doped yttrium aluminum perovskite), a counter card (CC), and a personal computer (PC). Tomography is accomplished by repeating the linear scanning and the rotation of an object, and projection curves of the object are obtained by the linear scanning using the detector consisting of an MPPC module, the YAP(Ce), and a scan stage. The pulses of the event signal from the module are counted by the CC in conjunction with the PC. Because the lower level of the photon energy was roughly determined by a comparator in the module, the average photon energy of the X-ray spectra increased with increase in the lower-level voltage of the comparator at a constant tube voltage. The maximum count rate was approximately 3 Mcps (mega counts per second), and photon-counting CT was carried out.
Counting Closed String States in a Box
Meana, M L; Peñalba, J P; Meana, Marco Laucelli; Peñalba, Jesús Puente
1997-01-01
The computation of the microcanonical density of states for a string gas in a finite volume needs a one by one count because of the discrete nature of the spectrum. We present a way to do it using geometrical arguments in phase space. We take advantage of this result in order to obtain the thermodynamical magnitudes of the system. We show that the results for an open universe exactly coincide with the infinite volume limit of the expression obtained for the gas in a box. For any finite volume the Hagedorn temperature is a maximum one, and the specific heat is always positive. We also present a definition of pressure compatible with R-duality seen as an exact symmetry, which allows us to make a study on the physical phase space of the system. Besides a maximum temperature the gas presents an asymptotic pressure.
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galecerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 & 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
Reference counting for reversible languages
Mogensen, Torben Ægidius
2014-01-01
deallocation. This requires the language to be linear: A pointer can not be copied and it can only be eliminated by deallocating the node to which it points. We overcome this limitation by adding reference counts to nodes: Copying a pointer to a node increases the reference count of the node and eliminating...
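The mechanism described above can be sketched in a few lines (our illustration, not the paper's reversible-language semantics): copying a pointer increments the target node's count, dropping a pointer decrements it, and the node is deallocated only when the last pointer is eliminated.

```python
# Minimal reference-counting heap sketch: nodes live until their last
# pointer is dropped, which relaxes the linearity restriction described
# in the abstract.

class Heap:
    def __init__(self):
        self.nodes = {}           # address -> [refcount, payload]
        self.next_addr = 0

    def alloc(self, payload):
        addr = self.next_addr
        self.next_addr += 1
        self.nodes[addr] = [1, payload]   # fresh pointer: count starts at 1
        return addr

    def copy_ptr(self, addr):
        self.nodes[addr][0] += 1          # copying a pointer bumps the count
        return addr

    def drop_ptr(self, addr):
        self.nodes[addr][0] -= 1
        if self.nodes[addr][0] == 0:      # last pointer gone: deallocate
            del self.nodes[addr]

heap = Heap()
a = heap.alloc("cons")
b = heap.copy_ptr(a)        # two live pointers to the same node
heap.drop_ptr(a)            # node survives: one pointer remains
alive_after_first_drop = a in heap.nodes
heap.drop_ptr(b)            # count reaches 0: node deallocated
alive_after_second_drop = a in heap.nodes
print(alive_after_first_drop, alive_after_second_drop)
```

In a reversible setting each of these operations has an obvious inverse (increment undoes decrement, allocation undoes deallocation), which is what makes reference counts a natural fit for the languages discussed.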
Coinductive counting with weighted automata
Rutten, J.J.M.M.
2002-01-01
A general methodology is developed to compute the solution of a wide variety of basic counting problems in a uniform way: (1) the objects to be counted are enumerated by means of an infinite weighted automaton; (2) the automaton is reduced by means of the quantitative notion of stream bisimulation;
Low white blood cell count. By Mayo Clinic Staff. A low white blood cell count (leukopenia) is a decrease in disease-fighting cells ... a decrease in a certain type of white blood cell (neutrophil). The definition of low white blood cell ...
Hanford whole body counting manual
Palmer, H.E.; Brim, C.P.; Rieksts, G.A.; Rhoads, M.C.
1987-05-01
This document, a reprint of the Whole Body Counting Manual, was compiled to train personnel, document operation procedures, and outline quality assurance procedures. The current manual contains information on: the location, availability, and scope of services of Hanford's whole body counting facilities; the administrative aspect of the whole body counting operation; Hanford's whole body counting facilities; the step-by-step procedure involved in the different types of in vivo measurements; the detectors, preamplifiers and amplifiers, and spectroscopy equipment; the quality assurance aspect of equipment calibration and recordkeeping; data processing, record storage, results verification, report preparation, count summaries, and unit cost accounting; and the topics of minimum detectable amount and measurement accuracy and precision. 12 refs., 13 tabs.
Computed neutron coincidence counting applied to passive waste assay
Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R. [Nuclear Research Centre, Mol (Belgium)
1997-11-01
Neutron coincidence counting applied for the passive assay of fissile material is generally realised with dedicated electronic circuits. This paper presents a software based neutron coincidence counting method with data acquisition via a commercial PC-based Time Interval Analyser (TIA). The TIA is used to measure and record all time intervals between successive pulses in the pulse train up to count-rates of 2 Mpulses/s. Software modules are then used to compute the coincidence count-rates and multiplicity related data. This computed neutron coincidence counting (CNCC) offers full access to all the time information contained in the pulse train. This paper will mainly concentrate on the application and advantages of CNCC for the non-destructive assay of waste. An advanced multiplicity selective Rossi-alpha method is presented and its implementation via CNCC demonstrated. 13 refs., 4 figs., 2 tabs.
Wholebody Radiation Counting System.
1985-05-01
The curves shown are for a Cesium-137 source and show that the OPAMP is achieving the multiplication of forty and that it is being operated in the non... voltage in the range of one to three volts. This was accomplished by using a Fairchild uA715 operational amplifier (OPAMP). This unit was selected... for its ability to operate at very high rates with little distortion. The Fairchild OPAMP has a slew rate of 400 volts per microsecond and the ability
Counting pairs of faint galaxies
Woods, David; Fahlman, Gregory G.; Richer, Harvey B.
1995-01-01
The number of close pairs of galaxies observed to faint magnitude limits, when compared to nearby samples, determines the interaction or merger rate as a function of redshift. The prevalence of mergers at intermediate redshifts is fundamental to understanding how galaxies evolve and the relative population of galaxy types. Mergers have been used to explain the excess of galaxies in faint blue counts above the numbers expected from no-evolution models. Using deep CFHT (I ≤ 24) imaging of a "blank" field we find a pair fraction which is consistent with the galaxies in our sample being randomly distributed, with no significant excess of "physical" close pairs. This is contrary to the pair fraction of 34% ± 9% found by Burkey et al. for similar magnitude limits and using an identical approach to the pair analysis. Various reasons for this discrepancy are discussed. Colors and morphologies of our close pairs are consistent with the bulk of them being random superpositions although, as indicators of int...
Dark-count-less photon-counting x-ray computed tomography system using a YAP-MPPC detector
Sato, Eiichi; Sato, Yuich; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2012-10-01
A highly sensitive X-ray computed tomography (CT) system is useful for decreasing the absorbed dose for patients, and a dark-count-less photon-counting CT system was developed. X-ray photons are detected using a YAP(Ce) [cerium-doped yttrium aluminum perovskite] single crystal scintillator and an MPPC (multipixel photon counter). Photocurrents are amplified by a high-speed current-voltage amplifier, and smooth event pulses from an integrator are sent to a high-speed comparator. Then, logical pulses are produced from the comparator and are counted by a counter card. Tomography is accomplished by repeated linear scans and rotations of an object, and projection curves of the object are obtained by the linear scan. The image contrast of gadolinium medium fell slightly with an increase in the lower-level voltage (Vl) of the comparator. The dark count rate was 0 cps, and the count rate for the CT was approximately 250 kcps.
An analytical model of crater count equilibrium
Hirabayashi, Masatoshi; Minton, David A.; Fassett, Caleb I.
2017-06-01
Crater count equilibrium occurs when new craters form at the same rate that old craters are erased, such that the total number of observable impacts remains constant. Despite substantial efforts to understand this process, there remain many unsolved problems. Here, we propose an analytical model that describes how a heavily cratered surface reaches a state of crater count equilibrium. The proposed model formulates three physical processes contributing to crater count equilibrium: cookie-cutting (simple, geometric overlap), ejecta-blanketing, and sandblasting (diffusive erosion). These three processes are modeled using a degradation parameter that describes the efficiency for a new crater to erase old craters. The flexibility of our newly developed model allows us to represent the processes that underlie crater count equilibrium problems. The results show that when the slope of the production function is steeper than that of the equilibrium state, the power law of the equilibrium slope is independent of that of the production function slope. We apply our model to the cratering conditions in the Sinus Medii region and at the Apollo 15 landing site on the Moon and demonstrate that a consistent degradation parameterization can successfully be determined based on the empirical results of these regions. Further developments of this model will enable us to better understand the surface evolution of airless bodies due to impact bombardment.
The origins of counting algorithms.
Cantlon, Jessica F; Piantadosi, Steven T; Ferrigno, Stephen; Hughes, Kelly D; Barnard, Allison M
2015-06-01
Humans' ability to count by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that nonhuman primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. First, they saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a time. At the point when the second set was approximately equal to the first set, the monkeys spontaneously moved to choose the second set even before that cache was completely baited. Using a novel Bayesian analysis, we show that the monkeys used an approximate counting algorithm for comparing quantities in sequence that is incremental, iterative, and condition controlled. This proto-counting algorithm is structurally similar to formal counting in humans and thus may have been an important evolutionary precursor to human counting. © The Author(s) 2015.
Vote Counting as Mathematical Proof
Schürmann, Carsten; Pattinson, Dirk
2015-01-01
Trust in the correctness of an election outcome requires proof of the correctness of vote counting. By formalising particular voting protocols as rules, correctness of vote counting amounts to verifying that all rules have been applied correctly. A proof of the outcome of any particular election......-based formalisation of voting protocols inside a theorem prover, we synthesise vote counting programs that are not only provably correct, but also produce independently verifiable certificates. These programs are generated from a (formal) proof that every initial set of ballots allows to decide the election winner...
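The core idea above, that a count should come with an independently verifiable certificate, can be caricatured far below the level of a theorem prover. The sketch below (illustrative only, not the paper's synthesized programs) counts a plurality election and returns the full tally as its certificate; a separate checker recounts the raw ballots and confirms the claimed winner is maximal, without trusting the counting code:

```python
from collections import Counter

def count_votes(ballots):
    """Plurality count returning (winner, certificate).

    The certificate is the full tally; ties are broken alphabetically.
    """
    tally = Counter(ballots)
    winner = max(sorted(tally), key=lambda c: tally[c])
    return winner, tally

def verify(ballots, winner, tally):
    """Independent check: recount the ballots and confirm the claim."""
    recount = Counter(ballots)
    return recount == tally and all(tally[winner] >= v for v in tally.values())

ballots = ["alice", "bob", "alice", "carol", "alice", "bob"]
winner, tally = count_votes(ballots)
print(winner)                            # alice
print(verify(ballots, winner, tally))    # True
```

The point is the separation of concerns: correctness of the outcome rests on the small `verify` function, not on however `count_votes` is implemented.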
White blood cell counting system
1972-01-01
The design, fabrication, and tests of a prototype white blood cell counting system for use in the Skylab IMSS are presented. The counting system consists of a sample collection subsystem, sample dilution and fluid containment subsystem, and a cell counter. Preliminary test results show the sample collection and the dilution subsystems are functional and fulfill design goals. Results for the fluid containment subsystem show the handling bags cause counting errors due to: (1) adsorption of cells to the walls of the container, and (2) inadequate cleaning of the plastic bag material before fabrication. It was recommended that another bag material be selected.
Development of an aerial counting system in oil palm plantations
Zulyma Miserque Castillo, Jhany; Laverde Diaz, Rubbermaid; Rueda Guzmán, Claudia Leonor
2016-07-01
This paper proposes the development of an aerial counting system capable of capturing, processing, and analyzing images of an oil palm plantation to register the number of cultivated palms. It begins with a study of the available UAV technologies to define the most appropriate model according to the project needs. As a result, a DJI Phantom 2 Vision+ is used to capture pictures that are processed by photogrammetry software to create orthomosaics from the areas of interest, which are handled by the developed software to calculate the number of palms contained in them. The implemented algorithm uses a sliding window technique in image pyramids to generate candidate windows, an LBP descriptor to model the texture of the picture, a logistic regression model to classify the windows, and a non-maximum suppression algorithm to refine the decision. The system was tested on different images than the ones used for training and for establishing the set point. As a result, the system showed a 95.34% detection rate with a 97.83% precision in mature palms and a 79.26% detection rate with a 97.53% precision in young palms, giving an F1 score of 0.97 for mature palms and 0.87 for the small ones. The results are satisfactory, yielding the census and high-quality images from which it is possible to get more information from the area of interest. All this is achieved through a low-cost system capable of working even in cloudy conditions.
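The final refinement step in the pipeline above, non-maximum suppression, is a standard technique worth sketching. The greedy IoU-based variant below is illustrative (the paper's threshold and box format are not specified): keep the highest-scoring candidate window, then drop any remaining window that overlaps it too much.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the best box, suppress
    any remaining box whose IoU with it exceeds `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

# two heavily overlapping detections of one palm, plus one separate palm
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

Without this step, one palm detected by several overlapping sliding windows would be counted several times.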
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Relationship of blood and milk cell counts with mastitic pathogens in Murrah buffaloes
C. Singh
2010-02-01
The present study was undertaken to see the effect of mastitic pathogens on the blood and milk cell counts of Murrah buffaloes. Milk and blood samples were collected from 9 mastitic Murrah buffaloes. The total leucocyte counts (TLC) and differential leucocyte counts (DLC) in blood were within the normal range, and there was a non-significant change in blood counts irrespective of different mastitic pathogens. Normal milk quarter samples had significantly (P<0.01) lower somatic cell counts (SCC). Lymphocytes were significantly higher in normal milk samples, whereas infected samples had a significant increase (P<0.01) in milk neutrophils. S. aureus infected buffaloes had maximum milk SCC, followed by E. coli and S. agalactiae. Influx of neutrophils in the buffalo mammary gland was maximum for S. agalactiae, followed by E. coli and S. aureus. The study indicated that the level of mastitis had no effect on blood counts but it influenced the milk SCC of normal quarters.
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — The Make My Trip Count (MMTC) commuter survey, conducted in September and October 2015 by GBA, the Pittsburgh 2030 District, and 10 other regional transportation...
Bagdziunaite, Dalia; Jensen, Anne Strande; Auning-Hansen, Julie
2014-01-01
of the relative strength of these effects, and how they may dynamically interact. To abate this problem, we conducted an experiment in which we recruited participants from three regions (Italy, France, and rest of world) to undergo wine tasting and rating of wine taste preference, and willingness to pay (WTP......) while being exposed to the CoO and price of each wine. Unbeknownst to the participants, they all tasted the same wine. To provide a better understanding of the underlying mechanisms of the observed effects, emotional arousal was assessed using pupillometry. Using a linear regression model, our results...... demonstrate that price and CoO individually have a significant effect on the hedonic experience of wine (R² = 0.11, p
Chanier, Thomas
2015-01-01
The Maya were known for their astronomical proficiency. This is demonstrated in the Mayan codices where ritual practices were related to astronomical events/predictions. Whereas Mayan mathematics were based on a vigesimal system, they used a different base when dealing with long periods of time, the Long Count Calendar (LCC), composed of different Long Count Periods: the Tun of 360 days, the Katun of 7200 days and the Baktun of 144000 days. There were two other calendars used in addition to t...
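The Long Count periods listed above form a mixed-radix number system (1 uinal = 20 kin, 1 tun = 18 uinal = 360 days, 1 katun = 20 tun = 7200 days, 1 baktun = 20 katun = 144000 days; the kin and uinal units are standard but not named in the abstract). Converting a day count into Long Count digits is then just repeated division:

```python
def to_long_count(days):
    """Convert a day count into Long Count digits (baktun, katun, tun, uinal, kin).

    Mixed radix: 1 uinal = 20 kin, 1 tun = 18 uinal = 360 days,
    1 katun = 20 tun = 7200 days, 1 baktun = 20 katun = 144000 days.
    """
    baktun, rem = divmod(days, 144000)
    katun, rem = divmod(rem, 7200)
    tun, rem = divmod(rem, 360)
    uinal, kin = divmod(rem, 20)
    return (baktun, katun, tun, uinal, kin)

# 144000 + 7200 + 360 + 20 + 1 days
print(to_long_count(151581))  # (1, 1, 1, 1, 1)
```

The irregular factor of 18 between uinal and tun is what distinguishes the Long Count from a pure vigesimal (base-20) system.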
Counting Word Frequencies with Python
William J. Turkel
2012-07-01
Your list is now clean enough that you can begin analyzing its contents in meaningful ways. Counting the frequency of specific words in the list can provide illustrative data. Python has an easy way to count frequencies, but it requires the use of a new type of variable: the dictionary. Before you begin working with a dictionary, consider the processes used to calculate frequencies in a list.
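The dictionary-based frequency count the lesson describes can be sketched as follows (the word list and variable names are illustrative, not the lesson's own):

```python
wordlist = ["it", "was", "the", "best", "of", "times",
            "it", "was", "the", "worst", "of", "times"]

# a dictionary maps each word to the number of times it has been seen
frequencies = {}
for word in wordlist:
    # look up the current count (defaulting to 0), then increment it
    frequencies[word] = frequencies.get(word, 0) + 1

# display word/count pairs, most frequent first
for word, count in sorted(frequencies.items(), key=lambda pair: pair[1], reverse=True):
    print(word, count)
```

The same result can be had from `collections.Counter(wordlist)`, but the explicit loop makes the dictionary mechanics visible, which is the lesson's point.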
Awoke, Tadesse; Worku, Alemayehu; Kebede, Yigzaw; Kasim, Adetayo; Birlie, Belay; Braekers, Roel; Zuma, Khangelani; Shkedy, Ziv
2016-01-01
Antiretroviral therapy has been shown to be effective in reducing morbidity and mortality in patients infected with HIV for the past couple of decades. However, there remains a need to better understand the characteristics of long-term treatment outcomes in resource-poor settings. The main aim of this study was to determine and compare the long-term response of patients on nevirapine- and efavirenz-based first-line antiretroviral therapy regimens in Ethiopia. A hospital-based retrospective cohort study was conducted from January 2009 to December 2013 at a university hospital located in Northwest Ethiopia. Human subject research approval for this study was received from the University of Gondar Research Ethics Committee and the medical director of the hospital. A Cox proportional hazards model was used to assess the effect of baseline covariates on the composite outcome, and a semi-parametric mixed effect model was used to investigate CD4 count response to treatments. A total of 2386 treatment-naive HIV/AIDS patients were included in this study. Nearly one in four patients experienced the events, of which death, loss to follow up, treatment substitution and discontinuation of Non-Nucleoside Reverse Transcriptase Inhibitors (NNRTI) accounted for 99 (26.8%), 122 (33.0%), 137 (37.0%) and 12 (3.2%), respectively. The hazard of composite outcome on nevirapine compared with efavirenz was 1.02 (95%CI: 0.52-1.99) with p-value = 0.96. Similarly, the hazard of composite outcome on tenofovir and stavudine compared with zidovudine were 1.87 (95%CI: 1.52-2.32), p-value HIV/AIDS patients in Ethiopia. There was a significant difference in risk of composite outcome between patients who were initiated with a tenofovir-containing ART regimen compared with zidovudine after controlling for NNRTI drug combinations.
Direct calibration of click-counting detectors
Bohmann, M.; Kruse, R.; Sperling, J.; Silberhorn, C.; Vogel, W.
2017-03-01
We introduce and experimentally implement a method for the detector calibration of photon-number-resolving time-bin multiplexing layouts based on the measured click statistics of superconducting nanowire detectors. In particular, the quantum efficiencies, the dark count rates, and the positive operator-valued measures of these measurement schemes are directly obtained with high accuracy. The method is based on the moments of the click-counting statistics for coherent states with different coherent amplitudes. The strength of our analysis is that we can directly conclude—on a quantitative basis—that the detection strategy under study is well described by a linear response function for the light-matter interaction and that it is sensitive to the polarization of the incident light field. Moreover, our method is further extended to a two-mode detection scenario. Finally, we present possible applications for such well-characterized detectors, such as sensing of atmospheric loss channels and phase sensitive measurements.
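For an N-bin time-multiplexed click detector illuminated by a coherent state, the click statistics are binomial, and the moments used in the calibration above derive from this distribution. The sketch below is an illustration of that standard click-counting model (parameter values are invented; η is the quantum efficiency and ν a per-bin dark-count parameter), not the calibration procedure of the paper:

```python
from math import comb, exp

def click_distribution(N, mu, eta=1.0, nu=0.0):
    """Probability of k clicks (k = 0..N) from an N-bin multiplexed
    click detector seeing a coherent state of mean photon number mu,
    with quantum efficiency eta and mean dark counts nu per bin."""
    # probability that a single bin registers at least one count
    p_click = 1.0 - exp(-(eta * mu / N + nu))
    return [comb(N, k) * p_click**k * (1 - p_click)**(N - k)
            for k in range(N + 1)]

probs = click_distribution(N=8, mu=2.0, eta=0.5, nu=0.01)
```

Fitting such distributions over several coherent amplitudes is what lets the efficiency and dark-count parameters be extracted from measured click statistics.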
张戈
2015-01-01
We study the issue raised by Reference [3]. Under appropriate assumptions and other smoothness conditions, and using a simpler method, we prove the asymptotic existence of solutions to the quasi-likelihood equations in the quasi-likelihood nonlinear model, and establish the rate at which the solution converges to the true value.
Scarcella, Carmelo; Tosi, Alberto, E-mail: alberto.tosi@polimi.it; Villa, Federica; Tisa, Simone; Zappa, Franco [Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy)
2013-12-15
We developed a single-photon counting multichannel detection system, based on a monolithic linear array of 32 CMOS SPADs (Complementary Metal-Oxide-Semiconductor Single-Photon Avalanche Diodes). All channels achieve a timing resolution of 100 ps (full-width at half maximum) and a photon detection efficiency of 50% at 400 nm. Dark count rate is very low even at room temperature, being about 125 counts/s for 50 μm active area diameter SPADs. Detection performance and microelectronic compactness of this CMOS SPAD array make it the best candidate for ultra-compact time-resolved spectrometers with single-photon sensitivity from 300 nm to 900 nm.
Photon Counting Using Edge-Detection Algorithm
Gin, Jonathan W.; Nguyen, Danh H.; Farr, William H.
2010-01-01
New applications such as high-data-rate, photon-starved, free-space optical communications require photon counting at flux rates into gigaphoton-per-second regimes coupled with subnanosecond timing accuracy. Current single-photon detectors that are capable of handling such operating conditions are designed in an array format and produce output pulses that span multiple sample times. In order to discern one pulse from another and not overcount the number of incoming photons, a detection algorithm must be applied to the sampled detector output pulses. As flux rates increase, the ability to implement such a detection algorithm becomes difficult within a digital processor that may reside within a field-programmable gate array (FPGA). Systems have been developed and implemented both to characterize gigahertz-bandwidth single-photon detectors and to process photon count signals at rates into gigaphotons per second, in order to implement communications links at SCPPM (serial concatenated pulse position modulation) encoded data rates exceeding 100 megabits per second with efficiencies greater than two bits per detected photon. A hardware edge-detection algorithm and corresponding signal combining and deserialization hardware were developed to meet these requirements at sample rates up to 10 GHz. The photon discriminator deserializer hardware board accepts four inputs, which allows for the ability to take inputs from a quad photon-counting detector, to support requirements for optical tracking with a reduced number of hardware components. The four inputs are hardware leading-edge detected independently. After leading-edge detection, the resultant samples are ORed together prior to deserialization. The deserialization is performed to reduce the rate at which data is passed to a digital signal processor, perhaps residing within an FPGA. The hardware implements four separate analog inputs that are connected through RF connectors. Each analog input is fed to a high-speed 1
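The leading-edge rule described above, counting only upward threshold crossings so that a pulse spanning several sample times is counted once, can be sketched in software (the hardware does this per channel before ORing; the trace and threshold here are illustrative):

```python
def count_leading_edges(samples, threshold):
    """Count upward threshold crossings, so that a pulse spanning
    several consecutive samples contributes exactly one count."""
    count = 0
    below = True  # assume the trace starts below threshold
    for s in samples:
        if below and s >= threshold:
            count += 1       # rising edge: a new photon event
            below = False
        elif s < threshold:
            below = True     # re-arm once the pulse has fallen away
    return count

# two pulses, each spanning multiple samples of the detector output
trace = [0, 1, 5, 6, 5, 1, 0, 7, 8, 2, 0]
print(count_leading_edges(trace, threshold=4))  # 2
```

A naive per-sample threshold comparison on the same trace would report six counts for two photons, which is the overcounting problem the edge detector solves.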
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
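The statement of Zipf's law quoted above, that the number of firms with size greater than S is inversely proportional to S, can be checked on idealized rank-size data in which the r-th largest firm has size 1/r (a purely illustrative construction, not the paper's growth model):

```python
def count_larger(sizes, s):
    """Number of firms with size strictly greater than s."""
    return sum(1 for x in sizes if x > s)

# idealized Zipf sample: the r-th largest firm has size 1/r
sizes = [1.0 / r for r in range(1, 1001)]

# N(S > s) ~ 1/s: sizes 1/r exceed s exactly when r < 1/s
print(count_larger(sizes, 0.01))  # 99
print(count_larger(sizes, 0.02))  # 49
```

Halving the size cutoff roughly doubles the count, which is exactly the inverse proportionality the law asserts.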
Jani, Ilesh V; Sitoe, Nádia E; Alfai, Eunice R; Chongo, Patrina L; Quevedo, Jorge I; Rocha, Beatriz M; Lehe, Jonathan D; Peter, Trevor F
2011-10-29
Loss to follow-up of HIV-positive patients before initiation of antiretroviral therapy can exceed 50% in low-income settings and is a challenge to the scale-up of treatment. We implemented point-of-care counting of CD4 cells in Mozambique and assessed the effect on loss to follow-up before immunological staging and treatment initiation. In this observational cohort study, data for enrolment into HIV management and initiation of antiretroviral therapy were extracted retrospectively from patients' records at four primary health clinics providing HIV treatment and point-of-care CD4 services. Loss to follow-up and the duration of each preparatory step before treatment initiation were measured and compared with baseline data from before the introduction of point-of-care CD4 testing. After the introduction of point-of-care CD4 the proportion of patients lost to follow-up before completion of CD4 staging dropped from 57% (278 of 492) to 21% (92 of 437) (adjusted odds ratio [OR] 0·2, 95% CI 0·15-0·27). Total loss to follow-up before initiation of antiretroviral treatment fell from 64% (314 of 492) to 33% (142 of 437) (OR 0·27, 95% CI 0·21-0·36) and the proportion of enrolled patients initiating antiretroviral therapy increased from 12% (57 of 492) to 22% (94 of 437) (OR 2·05, 95% CI 1·42-2·96). The median time from enrolment to antiretroviral therapy initiation reduced from 48 days to 20 days (pCD4 staging, which decreased from 32 days to 3 days (pantiretroviral therapy initiation did not change significantly (OR 0·84, 95% CI 0·49-1·45). Point-of-care CD4 testing enabled clinics to stage patients rapidly on-site after enrolment, which reduced opportunities for pretreatment loss to follow-up. As a result, more patients were identified as eligible for and initiated antiretroviral treatment. Point-of-care testing might therefore be an effective intervention to reduce pretreatment loss to follow-up. Absolute Return for Kids and UNITAID. Copyright © 2011 Elsevier
Stride Counting in Human Walking and Walking Distance Estimation Using Insole Sensors
Truong, Phuc Huu; Lee, Jinwook; Kwon, Ae-Ran; Jeong, Gu-Min
2016-01-01
This paper proposes a novel method of estimating walking distance based on a precise counting of walking strides using insole sensors. We use an inertial triaxial accelerometer and eight pressure sensors installed in the insole of a shoe to record walkers’ movement data. The data is then transmitted to a smartphone to filter out noise and determine stance and swing phases. Based on phase information, we count the number of strides traveled and estimate the movement distance. To evaluate the accuracy of the proposed method, we created two walking databases on seven healthy participants and tested the proposed method. The first database, which is called the short distance database, consists of collected data from all seven healthy subjects walking on a 16 m distance. The second one, named the long distance database, is constructed from walking data of three healthy subjects who have participated in the short database for an 89 m distance. The experimental results show that the proposed method performs walking distance estimation accurately with the mean error rates of 4.8% and 3.1% for the short and long distance databases, respectively. Moreover, the maximum difference of the swing phase determination with respect to time is 0.08 s and 0.06 s for starting and stopping points of swing phases, respectively. Therefore, the stride counting method provides a highly precise result when subjects walk. PMID:27271634
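The stride-counting idea above can be caricatured as peak detection with a refractory period on the acceleration magnitude. This is a deliberately simplified sketch (the paper's actual method fuses the accelerometer with insole pressure sensors to segment stance and swing phases; the threshold, refractory period, and trace below are invented):

```python
def count_strides(magnitude, threshold, refractory):
    """Count strides as upward threshold crossings separated by at least
    `refractory` samples, to avoid double-counting a single stride."""
    count = 0
    last = -refractory  # allow a crossing at sample 0
    below = True
    for i, a in enumerate(magnitude):
        if below and a >= threshold and i - last >= refractory:
            count += 1
            last = i
            below = False
        elif a < threshold:
            below = True
    return count

# synthetic acceleration-magnitude trace with five stride peaks
trace = [0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0]
print(count_strides(trace, threshold=2, refractory=2))  # 5
```

Multiplying the stride count by an estimated stride length then gives a walking-distance estimate, which is the quantity the paper evaluates.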
Submillimeter Number Counts From Statistical Analysis of BLAST Maps
Patanchon, Guillaume; Bock, James J; Chapin, Edward L; Devlin, Mark J; Dicker, Simon R; Griffin, Matthew; Gundersen, Joshua O; Halpern, Mark; Hargrave, Peter C; Hughes, David H; Klein, Jeff; Marsden, Gaelen; Mauskopf, Philip; Moncelsi, Lorenzo; Netterfield, Calvin B; Olmi, Luca; Pascale, Enzo; Rex, Marie; Scott, Douglas; Semisch, Christopher; Thomas, Nicholas; Truch, Matthew D P; Tucker, Carole; Tucker, Gregory S; Viero, Marco P; Wiebe, Donald V
2009-01-01
We describe the application of a statistical method to estimate submillimeter galaxy number counts from the confusion-limited observations of the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Our method is based on a maximum likelihood fit to the pixel histogram, sometimes called 'P(D)', an approach which has been used before to probe faint counts, the difference being that here we advocate its use even for sources with relatively high signal-to-noise ratios. This method has an advantage over standard techniques of source extraction in providing an unbiased estimate of the counts from the bright end down to flux densities well below the confusion limit. We specifically analyse BLAST observations of a roughly 10 sq. deg map centered on the Great Observatories Origins Deep Survey South field. We provide estimates of number counts at the three BLAST wavelengths, 250, 350, and 500 microns; instead of counting sources in flux bins, we estimate the counts at several flux density nodes connected with ...
Hanford whole body counting manual
Palmer, H.E.; Rieksts, G.A.; Lynch, T.P.
1990-06-01
This document describes the Hanford Whole Body Counting Program as it is administered by Pacific Northwest Laboratory (PNL) in support of the US Department of Energy--Richland Operations Office (DOE-RL) and its Hanford contractors. Program services include providing in vivo measurements of internally deposited radioactivity in Hanford employees (or visitors). Specific chapters of this manual deal with the following subjects: program operational charter, authority, administration, and practices, including interpreting applicable DOE Orders, regulations, and guidance into criteria for in vivo measurement frequency, etc., for the plant-wide whole body counting services; state-of-the-art facilities and equipment used to provide the best in vivo measurement results possible for the approximately 11,000 measurements made annually; procedures for performing the various in vivo measurements at the Whole Body Counter (WBC) and related facilities including whole body counts; operation and maintenance of counting equipment, quality assurance provisions of the program, WBC data processing functions, statistical aspects of in vivo measurements, and whole body counting records and associated guidance documents. 16 refs., 48 figs., 22 tabs.
Negative Avalanche Feedback Detectors for Photon-Counting Optical Communications
Farr, William H.
2009-01-01
Negative Avalanche Feedback photon counting detectors with near-infrared spectral sensitivity offer an alternative to conventional Geiger mode avalanche photodiode or phototube detectors for free space communications links at 1 and 1.55 microns. These devices demonstrate linear mode photon counting without requiring any external reset circuitry and may even be operated at room temperature. We have now characterized the detection efficiency, dark count rate, after-pulsing, and single photon jitter for three variants of this new detector class, as well as operated these uniquely simple to use devices in actual photon starved free space optical communications links.
Kami, Syouta; Sato, Eiichi; Kogita, Hayato; Numahata, Wataru; Hamaya, Tatsuki; Nihei, Shinichi; Arakawa, Yumeka; Oda, Yasuyuki; Kodama, Hajime; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya; Sato, Shigehiro; Ogawa, Akira
2014-07-01
To measure X-ray spectra and to perform photon-counting computed tomography (PC-CT) with high count rates, we developed a zero-dark-counting spectrometer using a short-decay-time scintillator. A method exploiting a YAP(Ce) [cerium-doped yttrium aluminum perovskite] single crystal scintillator with a decay time of 30 ns and an MPPC (multipixel photon counter) has been developed to count X-ray photons. The photocurrent from the MPPC was amplified by a high-speed current-voltage amplifier, and the event pulse was sent to a multichannel analyzer (MCA) to measure X-ray spectra. The MPPC was driven under pre-Geiger mode at a bias voltage of the MPPC of 70.7 V and a temperature of 23 °C. The PC-CT was accomplished by repeated linear scans and rotations of an object, and projection curves of the object were obtained by the linear scan at a tube current of 1.0 mA. The exposure time for obtaining a tomogram was 10 min at a scan step of 0.5 mm and a rotation step of 1.0°. At a tube voltage of 100 kV, the maximum count rate was 200 kcps. In the PC-CT using gadolinium media, we observed image-contrast variations with changes in lower-level discrimination voltage of the event pulse using a comparator.
Gervais, V.
2004-11-01
The subject of this report is the study and simulation of a model describing the infill of sedimentary basins on large scales in time and space. It simulates the evolution through time of the sediment layer in terms of geometry and rock properties. A parabolic equation is coupled to a hyperbolic equation by an input boundary condition at the top of the basin. The model also considers a unilaterality constraint on the erosion rate. In the first part of the report, the mathematical model is described and particular solutions are defined. The second part deals with the definition of numerical schemes and the simulation of the model. In the first chapter, finite volume numerical schemes are defined and studied. The Newton algorithm adapted to the unilateral constraint used to solve the schemes is given, followed by numerical results in terms of performance and accuracy. In the second chapter, a preconditioning strategy to solve the linear system by an iterative solver at each Newton iteration is defined, and numerical results are given. In the last part, a simplified model is considered in which a variable is decoupled from the other unknowns and satisfies a parabolic equation. A weak formulation is defined for the remaining coupled equations, for which the existence of a unique solution is obtained. The proof uses the convergence of a numerical scheme. (author)
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Christensen, Anette L; Lundbye-Christensen, Søren; Overvad, Kim; Rasmussen, Lars H; Dethlefsen, Claus
2012-11-20
Seasonal variation in the occurrence of cardiovascular diseases has been recognized for decades. In particular, incidence rates of hospitalization with atrial fibrillation (AF) and stroke have been shown to exhibit a seasonal variation. Stroke in AF patients is common and often severe. Obtaining a description of a possible seasonal variation in the occurrence of stroke in AF patients is crucial in clarifying risk factors for developing stroke and initiating prophylactic treatment. Using a dynamic generalized linear model we were able to model gradually changing seasonal variation in hospitalization rates of stroke in AF patients from 1977 to 2011. The study population consisted of all Danes registered with a diagnosis of AF, comprising 270,017 subjects. During follow-up, 39,632 subjects were hospitalized with stroke. Incidence rates of stroke in AF patients were analyzed assuming the seasonal variation to be a sum of two sinusoids and a local linear trend. The results showed that the peak-to-trough ratio decreased from 1.25 to 1.16 during the study period, and that the times of year for peak and trough changed slightly. The present study indicates that using dynamic generalized linear models provides a flexible modeling approach for studying changes in seasonal variation of stroke in AF patients and yields plausible results.
VersaCount: customizable manual tally software for cell counting
DeRisi Joseph L
2010-01-01
Background: The manual counting of cells by microscopy is a commonly used technique across biological disciplines. Traditionally, hand tally counters have been used to track event counts. Although this method is adequate, there are a number of inefficiencies which arise when managing large numbers of samples or large sample sizes. Results: We describe software that mimics a traditional multi-register tally counter. Full customizability allows operation on any computer with minimal hardware requirements. The efficiency of counting large numbers of samples and/or large sample sizes is improved through the use of a "multi-count" register that allows single keystrokes to correspond to multiple events. Automatically updated multi-parameter values are implemented as user-specified equations, reducing errors and time required for manual calculations. The user interface was optimized for use with a touch screen and numeric keypad, eliminating the need for a full keyboard and mouse. Conclusions: Our software provides an inexpensive, flexible, and productivity-enhancing alternative to manual hand tally counters.
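The multi-register, multi-count behavior described above is simple to mimic; a minimal sketch (class and method names are illustrative, not taken from VersaCount):

```python
class TallyCounter:
    """Multi-register tally counter with a 'multi-count' step: one keystroke
    records `multi` events, as when each keystroke represents several cells."""

    def __init__(self, multi=1):
        self.multi = multi      # events recorded per keystroke
        self.counts = {}        # register name -> accumulated count

    def press(self, register):
        self.counts[register] = self.counts.get(register, 0) + self.multi

    def total(self):
        return sum(self.counts.values())

    def fraction(self, register):
        """Automatically updated derived value, e.g. the dead-cell fraction."""
        return self.counts.get(register, 0) / self.total()

# Example: counting live/dead cells, 5 cells per keystroke.
c = TallyCounter(multi=5)
c.press("live"); c.press("live"); c.press("dead")
```

The derived `fraction` method stands in for the user-specified equations mentioned in the abstract.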
The Origins of Counting Algorithms
Cantlon, Jessica F.; Piantadosi, Steven T.; Ferrigno, Stephen; Hughes, Kelly D.; Barnard, Allison M.
2015-01-01
Humans’ ability to ‘count’ by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that non-human primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. Monkeys saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a...
Tree modules and counting polynomials
Kinser, Ryan
2011-01-01
We give a formula for counting tree modules for the quiver S_g with g loops and one vertex in terms of tree modules on its universal cover. This formula, along with work of Helleloid and Rodriguez-Villegas, is used to show that the number of d-dimensional tree modules for S_g is polynomial in g with the same degree and leading coefficient as the counting polynomial A_{S_g}(d, q) for absolutely indecomposables over F_q, evaluated at q=1.
Imaging by photon counting with 256 x 256 pixel matrix
Tlustos, Lukas; Heijne, Erik H M; Llopart-Cudie, Xavier
2004-01-01
Using a 0.25 μm standard CMOS process we have developed 2-D semiconductor matrix detectors with sophisticated functionality integrated inside each pixel of a hybrid sensor module. One of these sensor modules is a matrix of 256 × 256 square 55 μm pixels intended for X-ray imaging. This device is called 'Medipix2' and features a fast amplifier and two-level discrimination for signals between 1000 and 100000 equivalent electrons, with overall signal noise of ~150 e- rms. Signal polarity and comparator thresholds are programmable. A maximum count rate of nearly 1 MHz per pixel can be achieved, which corresponds to an average flux of 3 × 10^10 photons per cm². The selected signals can be accumulated in each pixel in a 13-bit register. The serial readout takes 5-10 ms. A parallel readout of ~300 μs could also be used. Housekeeping functions such as local dark current compensation, test pulse generation, silencing of noisy pixels and threshold tuning in each pixel contribute to t...
Calculation of Maximum Waste Heat and Recovery Rate of Liquid and Gas Fuels
丛永杰
2016-01-01
The consumption of various liquid oil and gas fuels grows rapidly in the Chinese energy structure. After these fuels are combusted, the discharged flue gas temperature is generally 160 °C~180 °C. This part of the energy can be used as secondary energy, even though its grade is low. Much of the hydrogen content of liquid and gas fuels ends up as water vapor, which is the flue gas's main ingredient. In this paper, the waste heat quantity and recovery rate of the flue gas of 0# light diesel oil and of natural gas are calculated as the temperature drops from 180 °C to 25 °C at 1 atm. In the 0# light diesel flue gas, the vapor's share of the residual heat is about 55.08%; in the natural gas flue gas, it is about 79.41%. Moreover, the vapor's latent heat accounts for about 3/4 of this. Therefore, recovering the latent heat of the vapor is of great significance for the recovery of low-temperature waste heat; effective recovery both reuses primary energy and accords with national energy-saving and emission-reduction policies during the 13th Five-Year Plan period.
Construction of a fast ionization chamber for high-rate particle identification
Chae, K.Y., E-mail: kchae@skku.edu [Department of Physics, Sungkyunkwan University, Suwon 440-746 (Korea, Republic of); Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Ahn, S. [Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996 (United States); Bardayan, D.W. [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Department of Physics, University of Notre Dame, Notre Dame, IN 46556 (United States); Chipps, K.A. [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996 (United States); Department of Physics, Colorado School of Mines, Golden, CO 80401 (United States); Manning, B. [Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 (United States); Pain, S.D. [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Peters, W.A. [Oak Ridge Associated Universities, Oak Ridge, TN 37831 (United States); Schmitt, K.T. [Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996 (United States); Smith, M.S. [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Strauss, S.Y. [Department of Physics, University of Notre Dame, Notre Dame, IN 46556 (United States); Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 (United States)
2014-07-01
A new gas-filled ionization chamber for high count rate particle identification has been constructed and commissioned at the Holifield Radioactive Ion Beam Facility (HRIBF) at Oak Ridge National Laboratory (ORNL). To enhance the response time of the ionization chamber, a design utilizing a tilted entrance window and tilted electrodes was adopted, which is modified from an original design by Kimura et al. [1]. A maximum counting rate of ∼700,000 particles per second has been achieved. The detector has been used for several radioactive beam measurements performed at the HRIBF.
Koop, G.; Dik, N.; Nielen, M.; Lipman, L. J. A.
2010-01-01
The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC ...
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
KIDS COUNT New Hampshire, 2000.
Shemitz, Ellen, Ed.
This Kids Count report presents statewide trends in the well-being of New Hampshire's children. The statistical report is based on 22 indicators of child well-being in 5 interrelated areas: (1) children and families (including child population, births, children living with single parent, and children experiencing parental divorce); (2) economic…
Counting a Culture of Mealworms
Ashbrook, Peggy
2007-01-01
Math is not the only topic that will be discussed when young children are asked to care for and count "mealworms," a type of insect larvae (just as caterpillars are the babies of butterflies, these larvae are babies of beetles). The following activity can take place over two months as the beetles undergo metamorphosis from larvae to adults. As the…
Verbal Counting in Bilingual Contexts
Donevska-Todorova, Ana
2015-01-01
Informal experiences in mathematics often include playful competitions among young children in counting numbers in as many as possible different languages. Can these enjoyable experiences result with excellence in the formal processes of education? This article discusses connections between mathematical achievements and natural languages within…
Shakespeare Live! and Character Counts.
Brookshire, Cathy A.
This paper discusses a live production of Shakespeare's "Macbeth" (in full costume but with no sets) for all public middle school and high school students in Harrisonburg and Rockingham, Virginia. The paper states that the "Character Counts" issues that are covered in the play are: decision making, responsibility and…
On Counting the Rational Numbers
Almada, Carlos
2010-01-01
In this study, we show how to construct a function from the set N of natural numbers that explicitly counts the set Q[superscript +] of all positive rational numbers using a very intuitive approach. The function has the appeal of Cantor's function and it has the advantage that any high school student can understand the main idea at a glance…
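One classical way to make such a counting function concrete (not necessarily the construction used in the article) is the Calkin-Wilf sequence, which visits every positive rational exactly once via a one-line successor rule:

```python
from math import floor
from fractions import Fraction

def next_rational(q):
    """Calkin-Wilf successor: q -> 1 / (2*floor(q) - q + 1).
    Starting from 1 and iterating, this enumerates all of Q+ with no repeats."""
    return 1 / (2 * Fraction(floor(q)) - q + 1)

seq = [Fraction(1)]
for _ in range(7):
    seq.append(next_rational(seq[-1]))
# Enumeration begins 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4, ...
```

Like Cantor's diagonal enumeration, the map from position in `seq` to rational is an explicit bijection from N onto Q+.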
Counting problems for number rings
Brakenhoff, Johannes Franciscus
2009-01-01
In this thesis we look at three counting problems connected to orders in number fields. First we study the probability that for a random polynomial f in Z[X] the ring Z[X]/f is the maximal order in Q[X]/f. Connected to this is the probability that a random polynomial has a squarefree discriminant. T
Cho, Hyo-Min; Ding, Huanjun; Molloi, Sabee, E-mail: symolloi@uci.edu [Department of Radiological Sciences, University of California, Irvine, California 92697 (United States); Barber, William C.; Iwanczyk, Jan S. [DxRay Inc., Northridge, California 91324 (United States)
2014-09-15
Purpose: The possible clinical applications which can be performed using a newly developed detector depend on the detector's characteristic performance in a number of metrics including the dynamic range, resolution, uniformity, and stability. The authors have evaluated a prototype energy resolved fast photon counting x-ray detector based on a silicon (Si) strip sensor used in an edge-on geometry with an application specific integrated circuit to record the number of x-rays and their energies at high flux and fast frame rates. The investigated detector was integrated with a dedicated breast spectral computed tomography (CT) system to make use of the detector's high spatial and energy resolution and low noise performance under conditions suitable for clinical breast imaging. The aim of this article is to investigate the intrinsic characteristics of the detector, in terms of maximum output count rate, spatial and energy resolution, and noise performance of the imaging system. Methods: The maximum output count rate was obtained with a 50 W x-ray tube with a maximum continuous output of 50 kVp at 1.0 mA. A ¹⁰⁹Cd source, with a characteristic x-ray peak at 22 keV from Ag, was used to measure the energy resolution of the detector. The axial plane modulation transfer function (MTF) was measured using a 67 μm diameter tungsten wire. The two-dimensional (2D) noise power spectrum (NPS) was measured using flat field images and noise equivalent quanta (NEQ) were calculated using the MTF and NPS results. The image quality parameters were studied as a function of various radiation doses and reconstruction filters. The one-dimensional (1D) NPS was used to investigate the effect of electronic noise elimination by varying the minimum energy threshold. Results: A maximum output count rate of 100 million counts per second per square millimeter (cps/mm²) has been obtained (1 million cps per 100 × 100 μm pixel). The electrical noise floor was less than 4 keV. The
Teaching Emotionally Disturbed Students to Count Feelings.
Bartels, Cynthia S.; Calkin, Abigail B.
The paper describes a program to teach high school students with emotional and behavior problems to count their feelings, thereby improving their self concept. To aid in instruction, a hierarchy was developed which involved four phases: counting tasks completed and tasks not completed, counting independent actions in class, counting perceptions of…
Energy-correction photon counting pixel for photon energy extraction under pulse pile-up
Lee, Daehee; Park, Kyungjin; Lim, Kyung Taek; Cho, Gyuseong
2017-06-01
A photon counting detector (PCD) has been proposed as an alternative to an energy-integrating detector (EID) in the medical imaging field due to its high resolution, high efficiency, and low noise. The PCD has expanded to a variety of fields such as spectral CT, k-edge imaging, and material decomposition owing to its capability to count incident photons and measure their energy. Nonetheless, pulse pile-up, which is a superimposition of pulses at the output of a charge sensitive amplifier (CSA) in each PC pixel, occurs frequently as the X-ray flux increases due to the finite pulse processing time (PPT) in CSAs. Pulse pile-up induces not only a count loss but also distortion in the measured X-ray spectrum from each PC pixel, and thus it is a main constraint on the use of PCDs in high flux X-ray applications. To minimize these effects, an energy-correction PC (ECPC) pixel is proposed to resolve pulse pile-up without cutting off the PPT by adding an energy correction logic (ECL) via a cross detection method (CDM). The ECPC pixel, with a size of 200×200 μm², was fabricated using a 6-metal 1-poly 0.18 μm CMOS process with a static power consumption of 7.2 μW/pixel. The maximum count rate of the ECPC pixel was extended to approximately three times that of a conventional PC pixel with a PPT of 500 ns. The X-ray spectrum at 90 kVp, filtered by a 3 mm Al filter, was measured as the X-ray tube current was increased, using CdTe and the ECPC pixel. As a result, the ECPC pixel dramatically reduced the energy spectrum distortion at 2 Mphotons/pixel/s when compared to a conventional PC pixel with the same 500 ns PPT.
Bruno Ramalho de Carvalho
2003-03-01
The alfa-1 acid glycoprotein and the erythrocyte sedimentation rate proved to have low sensitivity and specificity. CONCLUSIONS: The leukocyte count and the C-reactive protein are significantly altered in cases of acute appendicitis, regardless of sex or age group. The leukocyte count and, above all, the C-reactive protein should be considered in individuals with more than 24 hours of symptomatic evolution. Elevated values, however, should supplement and not replace the clinical evaluation of the examining physician. Measurements of the erythrocyte sedimentation rate and of alfa-1 acid glycoprotein do not aid the diagnosis of acute appendicitis. BACKGROUND: The diagnosis of acute appendicitis is clinical, but in some cases it can present unusual symptoms. The diagnostic difficulties still lead surgeons to unnecessary laparotomies, which reach rates from 15% to 40%. Laboratory exams, then, may become important to complement the appendicitis diagnosis. The leukocyte count seems to be the most important value, but measurement of acute phase proteins, especially the C-reactive protein, is the object of several studies. PATIENTS AND METHODS: This was a prospective study involving 63 patients submitted to appendectomies for suspicion of acute appendicitis, in the Hospital das Clínicas, Federal University of Uberlândia, MG, Brazil, in whose blood dosages of acute phase proteins and the leukocyte count were made. RESULTS: The sample was composed of 44 male and 19 female patients, the majority between 11 and 30 years of age. The phlegmonous type was the most frequent (52.4%). The leukocyte count was altered in 74.6% of the cases and C-reactive protein elevation was observed in 88.9%. The alfa-1 acid glycoprotein and the erythrocyte sedimentation rate were predominantly normal. The C-reactive protein was augmented in more than 80% of the cases in all ages. Leukocyte count and C-reactive protein were altered in 80% of the patients with the limit of 24
Graphite Nodule and Cell Count in Cast Iron
E Fraś
2007-07-01
In this work, a model is proposed for heterogeneous nucleation on substrates whose size distribution can be described by Weibull statistics. It is found that the nuclei density, Nnuc, can be given in terms of the maximum undercooling, ΔTm, by Nnuc = Ns exp(-b/ΔTm), where Ns is the density of nucleation sites in the melt and b is the nucleation coefficient (b > 0). When nucleation occurs on all the possible substrates, the graphite nodule density, NV,n, or eutectic cell density, NV, after solidification equals Ns. In this work, measurements of NV,n and NV values were carried out on experimental nodular and flake graphite iron castings processed under various inoculation conditions. The volumetric nodule count NV,n or graphite eutectic cell count NV was estimated from the area nodule count NA,n or eutectic cell count NA on polished cast iron surface sections by stereological means. In addition, maximum undercoolings ΔTm were measured using thermal analysis. The experimental outcome indicates that the volumetric nodule count NV,n or graphite eutectic cell count NV can be properly described by the proposed expression NV,n = NV = Ns exp(-b/ΔTm). Moreover, the Ns and b values were experimentally determined. In particular, the proposed model suggests that the size distribution of nucleation sites is exponential in nature.
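The fitted relation N = Ns·exp(-b/ΔTm) is easy to evaluate numerically; a sketch using illustrative parameter values (the paper's fitted Ns and b are not reproduced here):

```python
import math

def nuclei_density(Ns, b, dTm):
    """Active nuclei density at maximum undercooling dTm (K), with Ns the
    density of nucleation sites and b > 0 the nucleation coefficient:
    N = Ns * exp(-b / dTm)."""
    return Ns * math.exp(-b / dTm)

# Larger undercooling activates a larger fraction of the substrates
# (Ns normalized to 1 so the result reads as a fraction of sites):
frac_5K  = nuclei_density(1.0, 20.0, 5.0)    # exp(-4),  ~1.8% of sites
frac_10K = nuclei_density(1.0, 20.0, 10.0)   # exp(-2), ~13.5% of sites
```

As ΔTm grows without bound the count approaches Ns, matching the statement that the nodule or cell count equals Ns when all substrates nucleate.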
Experimental reconstruction of photon statistics without photon counting.
Zambra, Guido; Andreoni, Alessandra; Bondani, Maria; Gramegna, Marco; Genovese, Marco; Brida, Giorgio; Rossi, Andrea; Paris, Matteo G A
2005-08-05
Experimental reconstructions of photon number distributions of both continuous-wave and pulsed light beams are reported. Our scheme is based on on/off avalanche photo-detection assisted by maximum-likelihood estimation and does not involve photon counting. Reconstructions of the distribution for both semiclassical and quantum states of light are reported for single-mode as well as for multi-mode beams.
Predictive Model Assessment for Count Data
2007-09-05
critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and... (Figure 5: boxplots of various scores for the patent data count regressions. Table 1: four predictive models for larynx cancer counts in Germany, 1998-2002.)
44 CFR 208.12 - Maximum Pay Rate Table.
2010-10-01
... reimbursement and Backfill, for the System Member's actual compensation or the actual compensation of the... (Department of Homeland Security, Disaster Assistance, National Urban Search and Rescue Response System, General.)
Entrainment and maximum vapour flow rate of trays
Van Sinderen, AH; Wijn, EF; Zanting, RWJ
This is a report on free entrainment measurements in a small (0.20 m × 0.20 m) air-water column. An adjustable weir controlled the liquid height on a test tray. Several sieve and valve trays were studied. The results were interpreted with a two- or three-layer model of the two-phase mixture on the
Nitric-glycolic flowsheet testing for maximum hydrogen generation rate
Martino, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-01
The Defense Waste Processing Facility (DWPF) at the Savannah River Site is developing for implementation a flowsheet with a new reductant to replace formic acid. Glycolic acid has been tested over the past several years and found to effectively replace the function of formic acid in the DWPF chemical process. The nitric-glycolic flowsheet reduces mercury, significantly lowers the chemical generation of hydrogen and ammonia, allows purge reduction in the Sludge Receipt and Adjustment Tank (SRAT), stabilizes the pH and chemistry in the SRAT and the Slurry Mix Evaporator (SME), allows for effective adjustment of the SRAT/SME rheology, and is favorable with respect to melter flammability. The objective of this work was to perform DWPF Chemical Process Cell (CPC) testing at conditions that would bound the catalytic hydrogen production for the nitric-glycolic flowsheet.
MAXIMUM PRODUCTION OF TRANSMISSION MESSAGES RATE FOR SERVICE DISCOVERY PROTOCOLS
Intisar Al-Mejibli
2011-12-01
Minimizing the number of dropped User Datagram Protocol (UDP) messages in a network is regarded as a challenge by researchers. This issue presents serious problems for many protocols, particularly those that depend on sending messages as part of their strategy, such as service discovery protocols. This paper proposes and evaluates an algorithm to predict the minimum period of time required between two or more consecutive messages and suggests the minimum queue sizes for the routers, to manage the traffic and minimize the number of dropped messages caused by congestion, queue overflow, or both. The algorithm has been applied to the Universal Plug and Play (UPnP) protocol using the ns2 simulator. It was tested with the routers connected in two configurations, centralized and decentralized. The message length and the bandwidth of the links among the routers were taken into consideration. The results show a clear reduction in the number of dropped messages among the routers.
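The paper's prediction algorithm is not reproduced here, but the core constraint it manages can be sketched: a router's output link drains one message every L·8/B seconds, so consecutive messages must be spaced at least that far apart, scaled by the queued backlog, to avoid overflow. All names and numbers below are illustrative assumptions, not values from the paper:

```python
def min_message_gap(message_bytes, link_bps, backlog=1):
    """Back-of-the-envelope lower bound (seconds) on the period between
    consecutive UDP messages: the serialization time of one message on the
    slowest link, times the number of messages already queued ahead of it."""
    serialization = message_bytes * 8 / link_bps   # seconds to drain one message
    return serialization * max(1, backlog)

# 512-byte discovery message on a 1 Mbit/s link with 10 messages queued:
gap = min_message_gap(512, 1_000_000, backlog=10)   # 0.04096 s
```

Spacing sends at or above this gap keeps the queue from growing, which is the condition under which drops from overflow cannot occur in this simplified picture.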
Veteran athletes exercise at higher maximum heart rates than are ...
questionnaire, a full medical examination and a routine sECG. Thereafter ... activities than during stress testing in the laboratory (P < 0.01). After the risks and procedures involved ... for the first time in either rehabilitation or sporting activities.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find the series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras
Miklas S. Kristoffersen
2016-01-01
The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences.
High performance universal analog and counting photodetector for LIDAR applications
Linga, Krishna; Krutov, Joseph; Godik, Edward; Seemungal, Wayne; Shushakov, Dmitry; Shubin, V. E.
2005-08-01
We demonstrate the feasibility of applying the emerging technology of internal discrete amplification to create an efficient, ultra low noise, universal analog and counting photodetector for LIDAR remote sensing. Photodetectors with internal discrete amplification can operate in the linear detection mode with a gain-bandwidth product of up to 10^15 and in the photon counting mode with count rates of up to 10^9 counts/s. Detectors based on this mechanism could have performance parameters superior to those of conventional avalanche photodiodes and photomultiplier tubes. For silicon photodetector prototypes, the measured excess noise factor is as low as 1.02 at gains greater than 100,000. This gives the photodetectors and, consequently, the LIDAR systems new capabilities that could lead to important advances in LIDAR remote sensing.
Chanier, Thomas
2013-01-01
The Maya had a very elaborate and accurate calendar. First, the Mayan Long Count Calendar (LCC) was used to date historical events from a selected "beginning of time". It is also characterized by the existence of a religious month, the Tzolk'in of 260 days, and a civic year, the Haab' of 365 days. The LCC is supposed to begin on 11 August 3114 BC, known as the Goodman-Martinez-Thompson (GMT) correlation to the Gregorian calendar based on historical facts, and to end on 21 December 2012, corresponding to a period of approximately 5125 years, or 13 Baktun. We propose here to explain the origin of the 13 Baktun cycle, the Long Count Periods and the religious month Tzolk'in.
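The quoted spans follow from simple calendar arithmetic; a quick check using the standard values (1 Baktun = 144,000 days, mean Gregorian year = 365.2425 days):

```python
from math import gcd

BAKTUN_DAYS = 144_000                 # 20 * 20 * 360 days
lcc_days = 13 * BAKTUN_DAYS           # full 13-Baktun era in days
lcc_years = lcc_days / 365.2425       # converted to mean Gregorian years

# The Tzolk'in (260 d) and Haab' (365 d) realign after their least
# common multiple, the so-called Calendar Round:
calendar_round = 260 * 365 // gcd(260, 365)   # 18980 days = 52 Haab'
```

The era length comes out at 1,872,000 days, about 5125 years, matching the 11 August 3114 BC to 21 December 2012 span cited above.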
Counting Irreducible Double Occurrence Words
Burns, Jonathan
2011-01-01
A double occurrence word $w$ over a finite alphabet $\\Sigma$ is a word in which each alphabet letter appears exactly twice. Such words arise naturally in the study of topology, graph theory, and combinatorics. Recently, double occurrence words have been used for studying DNA recombination events. We develop formulas for counting and enumerating several elementary classes of double occurrence words such as palindromic, irreducible, and strongly-irreducible words.
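Such classes are small enough to enumerate directly for a few letters; a brute-force sketch (here "canonical" means letters first appear in increasing order, so the counts are the double factorials (2n-1)!! = 1, 3, 15, ...):

```python
from itertools import permutations

def double_occurrence_words(n):
    """All canonical double occurrence words over an n-letter alphabet:
    each letter appears exactly twice, first occurrences in increasing order."""
    words = set()
    for p in permutations(list(range(1, n + 1)) * 2):
        seen, canonical = {}, []
        for x in p:                      # relabel by order of first occurrence
            seen.setdefault(x, len(seen) + 1)
            canonical.append(seen[x])
        words.add(tuple(canonical))
    return sorted(words)

def is_palindromic(w):
    """A word is palindromic if it reads the same forwards and backwards."""
    return tuple(w) == tuple(reversed(w))
```

Filtering the enumeration through predicates like `is_palindromic` gives the class counts that the closed formulas in the paper describe.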
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
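The heart of such a fit is the Poisson log-likelihood of the binned counts. A minimal sketch with a toy spectrum and a crude grid search (both illustrative assumptions; CORA itself derives a fixed-point equation for the line fluxes rather than searching a grid):

```python
import math

def poisson_loglike(counts, model):
    """ln L = sum_i (c_i * ln m_i - m_i - ln c_i!) for observed counts c_i
    and model rates m_i (all m_i must be > 0)."""
    return sum(c * math.log(m) - m - math.lgamma(c + 1)
               for c, m in zip(counts, model))

# Toy spectrum: flat background b plus a single-bin line of amplitude a.
counts = [3, 2, 12, 4, 3]

def model(a, b):
    return [b + (a if i == 2 else 0) for i in range(len(counts))]

# Integer grid search for the maximum-likelihood estimate:
best = max(((a, b) for a in range(0, 20) for b in range(1, 10)),
           key=lambda ab: poisson_loglike(counts, model(*ab)))
# best recovers background 3 and line amplitude 9 (i.e. 12 - 3)
```

Maximizing this likelihood, rather than minimizing chi-square, is what keeps the fit valid at the low count numbers the abstract emphasizes.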
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved in the affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible, and therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.
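The basic move in such chains is the double edge swap, which preserves every vertex degree. A minimal sketch (one of the two possible rewirings per step; simplified relative to the chains analyzed in the paper) with support for a forbidden-edge set:

```python
import random

def double_edge_swap(edges, forbidden=frozenset(), rng=random):
    """One MCMC step: pick two edges (a,b), (c,d) and try to rewire them
    to (a,c), (b,d), which preserves all vertex degrees. The move is
    rejected (state unchanged) if it would create a self-loop, a
    multi-edge, or a forbidden edge. Edges are sorted 2-tuples."""
    edges = set(edges)
    (a, b), (c, d) = rng.sample(sorted(edges), 2)
    e1 = tuple(sorted((a, c)))
    e2 = tuple(sorted((b, d)))
    if a == c or b == d:
        return edges                                  # self-loop
    if e1 in edges or e2 in edges or e1 in forbidden or e2 in forbidden:
        return edges                                  # multi-edge / forbidden
    edges -= {(a, b), (c, d)}
    edges |= {e1, e2}
    return edges

def degrees(edges):
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d
```

Running the chain from any realization keeps the degree sequence invariant while exploring the realization space, and the rejection test keeps forbidden edges out.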
Approximate Counting of Graphical Realizations.
Péter L Erdős
Full Text Available In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007, for regular directed graphs (by Greenhill, 2011 and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013. Several heuristics on counting the number of possible realizations exist (via sampling processes, and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS for counting of all realizations.
Manual and automated reticulocyte counts.
Simionatto, Mackelly; de Paula, Josiane Padilha; Chaves, Michele Ana Flores; Bortoloso, Márcia; Cicchetti, Domenic; Leonart, Maria Suely Soares; do Nascimento, Aguinaldo José
2010-12-01
Manual reticulocyte counts were examined under light microscopy, using the property whereby supravital stain precipitates residual ribosomal RNA, versus the automated flow methods, with the suggestion that the latter offer greater precision and an ability to determine both mature and immature reticulocyte fractions. Three hundred and forty-one venous blood samples of patients were analyzed, of which 224 were from newborns and the rest from adults (51 males and 66 females), with ages between 0 and 89 years, as part of the laboratory routine for hematological examinations at the Clinical Laboratory of the Hospital Universitário do Oeste do Paraná. This work aimed to compare manual and automated methodologies for reticulocyte counts and to evaluate random and systematic errors. The results obtained showed that the difference between the two methods was very small, with an estimated 0.4% systematic error and 3.9% random error. Thus, it has been confirmed that both methods, when well conducted, can reflect precisely the reticulocyte counts for adequate clinical use.
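The systematic/random error decomposition used in method-comparison studies of this kind can be sketched from paired measurements: the systematic error is the mean difference between methods and the random error is the scatter of those differences (a Bland-Altman-style summary; the paired values below are hypothetical, not the study data):

```python
# Hypothetical paired reticulocyte counts (%) from two methods,
# used only to illustrate the error decomposition.
manual    = [1.2, 2.5, 0.8, 3.1, 1.9, 2.2]
automated = [1.1, 2.7, 0.9, 3.3, 2.0, 2.1]

diffs = [a - m for a, m in zip(automated, manual)]
n = len(diffs)
systematic = sum(diffs) / n                                   # mean bias
random_err = (sum((d - systematic) ** 2 for d in diffs) / (n - 1)) ** 0.5
print(round(systematic, 3), round(random_err, 3))             # -> 0.067 0.137
```

A small mean difference with a small spread, as reported in the abstract, indicates the methods agree for clinical purposes.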
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
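A minimal version of the extreme-value construction mentioned above: if event counts above a reference magnitude are Poisson with mean `rate` over the interval, and magnitudes follow a Gutenberg-Richter (exponential) law, the distribution of the interval maximum is closed-form. This is an illustrative textbook model, not necessarily the authors' exact formulation:

```python
import math

def p_max_below(m, rate, b=1.0, m0=4.0):
    """P(maximum magnitude in the interval <= m), assuming a Poisson
    number of events above reference magnitude m0 with mean `rate`,
    and i.i.d. Gutenberg-Richter magnitudes with b-value `b`:
    P = exp(-rate * 10**(-b*(m - m0)))."""
    tail = 10 ** (-b * (m - m0))       # P(a single event exceeds m)
    return math.exp(-rate * tail)

# Example: 100 expected events of M >= 4; chance the maximum stays below 6.
p = p_max_below(6.0, rate=100.0)       # = exp(-1) ~ 0.368
```

The strong dependence of this distribution on the assumed rate and b-value is exactly what makes rare-maximum estimates hard to test.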
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED multiview MED (MVMED) was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1( Θ1) over the first-view classifier parameter Θ1 and p2( Θ2) over the second-view classifier parameter Θ2 . We name the new MVMED framework as alternative MVMED (AMVMED), which enforces the posteriors of two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED, because compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from two views, and then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of the AMVMED, and comparisons with MVMED are also reported.
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
Photon counting chirped amplitude modulation lidar using an asymmetric triangular wave modulation
Zhang, Zijing; Cen, Longzhu; Zhang, Jiandong; Ma, Kun; Wang, Feng; Zhao, Yuan
2016-11-01
We propose a novel asymmetric triangular-wave modulation strategy for photon-counting chirped amplitude modulation (PCCAM) lidar. Earlier studies use symmetric triangular-wave modulation, with which velocity can be detected only when the Doppler shift caused by a moving target is greater than the Full Width at Half Maximum (FWHM) of the Intermediate Frequency (IF) peak. We use an alternative method, asymmetric triangular-wave modulation, in which the modulation rates of the up-ramp and the down-ramp differ. This new method avoids the overlapping of the up-ramp and down-ramp IF peaks, and breaks the FWHM limit of the IF peak to improve the velocity measuring sensitivity (also called the minimum detectable velocity). Finally, a proof-of-principle experiment is carried out in the laboratory. The experimental results agree well with the theoretical results and show the improvement of the minimum detectable velocity.
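The up/down-ramp recovery of delay and Doppler for a triangular chirp with possibly unequal slopes can be sketched as follows (standard FMCW relations under one assumed sign convention — approaching target, Doppler subtracting on the up-ramp; the function and parameters are illustrative, not the paper's notation):

```python
def range_doppler(f_up, f_down, k_up, k_down, wavelength, c=3e8):
    """Recover Doppler shift and round-trip delay from the up-ramp and
    down-ramp intermediate frequencies of a triangular chirp with
    (possibly different) slopes k_up, k_down [Hz/s].
    Model: f_up = k_up*tau - f_d,  f_down = k_down*tau + f_d."""
    f_d = (k_up * f_down - k_down * f_up) / (k_up + k_down)   # Doppler (Hz)
    tau = (f_up + f_d) / k_up                                  # delay (s)
    distance = c * tau / 2
    velocity = f_d * wavelength / 2
    return distance, velocity

# Example with unequal slopes (the asymmetric case): tau = 1 us, f_d = 1 kHz.
d, v = range_doppler(999_000.0, 2_001_000.0, 1e12, 2e12, wavelength=1.55e-6)
```

With equal slopes the formula reduces to the familiar symmetric-triangle result f_d = (f_down - f_up)/2; unequal slopes keep the two IF peaks separated even for small f_d, which is the point of the asymmetric scheme.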
Sun, Jindong; Feng, Zhaozhong; Leakey, Andrew D B; Zhu, Xinguang; Bernacchi, Carl J; Ort, Donald R
2014-09-01
The responses of CO2 assimilation to [CO2] (A/Ci) were investigated at two developmental stages (R5 and R6) and in several soybean cultivars grown under two levels of CO2, the ambient level of 370 μbar versus the elevated level of 550 μbar. The A/Ci data were analyzed and compared by either the combined iterations or the separated iterations of the Rubisco-limited photosynthesis (Ac) and/or the RuBP-limited photosynthesis (Aj) using various curve-fitting methods: the linear 2-segment model; the non-rectangular hyperbola model; the rectangular hyperbola model; the constant rate of electron transport (J) method and the variable J method. Inconsistency was found among the various methods for the estimation of the maximum rate of carboxylation (Vcmax), the mitochondrial respiration rate in the light (Rd) and mesophyll conductance (gm). The analysis showed that the inconsistency was due to inconsistent estimates of gm values, which decreased with an instantaneous increase in [CO2] and varied with the transition Ci cut-off between Rubisco-limited photosynthesis and RuBP-regeneration-limited photosynthesis, and due to over-parameterization of the non-linear curve fitting when gm is included. We propose an alternative solution to A/Ci curve-fitting for estimates of Vcmax, Rd, Jmax and gm with the various A/Ci curve-fitting methods. The study indicated that down-regulation of photosynthetic capacity by elevated [CO2] and leaf aging was due partially to the decrease in the maximum rate of carboxylation and partially to the decrease in gm. Mesophyll conductance lowered photosynthetic capacity by 18% on average for the case of soybean plants.
Koop, G.; Dik, N.; Nielen, M.; Lipman, L.J.A.
2010-01-01
The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms,
Avalanche photodiode photon counting receivers for space-borne lidars
Sun, Xiaoli; Davidson, Frederic M.
1991-01-01
Avalanche photodiodes (APD) are studied for use as photon counting detectors in spaceborne lidars. Non-breakdown APD photon counters, in which the APDs are biased below the breakdown point, are shown to outperform (1) conventional APD photon counters biased above the breakdown point and (2) APDs in analog mode when the received optical signal is extremely weak. Non-breakdown APD photon counters were shown experimentally to achieve an effective photon counting quantum efficiency of 5.0 percent at lambda = 820 nm with a dead time of 15 ns and a dark count rate of 7000/s, which agreed with the theoretically predicted values. The interarrival times of the counts followed an exponential distribution and the counting statistics appeared to follow a Poisson distribution with no afterpulsing. It is predicted that the effective photon counting quantum efficiency can be improved to 18.7 percent at lambda = 820 nm and 1.46 percent at lambda = 1060 nm with a dead time of a few nanoseconds by using more advanced commercially available electronic components.
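Counters with a fixed dead time like the one quoted above (15 ns) are commonly described by the standard nonparalyzable dead-time model, a sketch of which is below (a textbook idealization, not necessarily the exact detector response):

```python
def observed_rate(true_rate, dead_time):
    """Nonparalyzable dead-time model: each registered count blinds the
    counter for `dead_time` seconds, so observed m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * dead_time)

def true_rate(observed, dead_time):
    """Invert the model to correct measured rates (valid while
    observed < 1/tau)."""
    return observed / (1.0 - observed * dead_time)

# Example: at a true rate of 10 Mcps with tau = 15 ns, about 13% of
# counts are lost; at the 7000/s dark rate the loss is negligible.
m = observed_rate(1e7, 15e-9)
```

The round trip `true_rate(observed_rate(n, tau), tau)` recovers the input exactly, which is why this correction is routinely applied to photon-counting data.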
Optical People Counting for Demand Controlled Ventilation: A Pilot Study of Counter Performance
Fisk, William J.; Sullivan, Douglas
2009-12-26
This pilot scale study evaluated the counting accuracy of two people counting systems that could be used in demand controlled ventilation systems to provide control signals for modulating outdoor air ventilation rates. The evaluations included controlled challenges of the people counting systems using pre-planned movements of occupants through doorways, and evaluations of counting accuracies when naive occupants (i.e., occupants unaware of the counting systems) passed through the entrance doors of the building or room. The two people counting systems had high counting accuracies, with errors typically less than 10 percent, for typical non-demanding counting events. However, counting errors were high in some highly challenging situations, such as multiple people passing simultaneously through a door. Counting errors, for at least one system, can be very high if people stand in the field of view of the sensor. Both counting systems have limitations and would need to be used only at appropriate sites where the demanding situations that led to counting errors are rare.
Kargarfard, Mehdi; Poursafa, Parinaz; Rezanejad, Saber; Mousavinasab, Firouzeh
2011-07-01
The purpose of this study was to assess the effects of exercise on the aerobic power, serum lactate level, and blood cell counts among active individuals in environments with similar climatic characteristics but differing levels of air pollution. This trial comprised 20 volunteer students of physical education at The University of Isfahan, Iran. Two places with the same climate (altitude, temperature, and humidity), but with low and high levels of air pollutants, were selected in Isfahan, Iran. Participants underwent a field Cooper test with a 12-minute run for fitness assessment. Then the aerobic power, serum lactate, and blood cell counts were measured and compared between the two areas. The study participants had a mean (SD) age of 21.70 (2.10) years and body mass index (BMI) of 24.44 (2.32) kg/m2. We found a significant decrease in mean VO2 max, red blood cell count, hemoglobin, hematocrit, and mean corpuscular hemoglobin, as well as a significant increase in mean lactate level, white blood cell count and mean corpuscular volume in the higher-polluted than in the lower-polluted area. No significant difference was documented for other parameters such as platelet count or maximum heart rate. Exercise in highly polluted air resulted in a significant reduction in the performance at submaximal levels of physical exertion. Therefore, acute exposure to polluted air may cause a significant reduction in the performance of active individuals. The clinical importance of these findings should be assessed in longitudinal studies.
High counting rate, two-dimensional position sensitive timing RPC
Petrovici, M.; Simion, V; Bartos, D; Caragheorgheopol, G; Deppner, I; Adamczewski-Musch, J; Linev, S; Williams, MCS; Loizeau, P; Herrmann, N; Doroud, K; Radulescu, L; Constantin, F
2012-01-01
Resistive Plate Chambers (RPCs) are widely employed as muon trigger systems at the Large Hadron Collider (LHC) experiments. Their large detector volume and the use of a relatively expensive gas mixture make a closed-loop gas circulation unavoidable. The return gas of RPCs operated in conditions similar to the experimental background foreseen at LHC contains a large amount of impurities potentially dangerous for long-term operation. Several gas-cleaning agents, characterized during the past years, are currently in use. New tests allowed understanding of the properties and performance of a large number of purifiers. On that basis, an optimal combination of different filters consisting of Molecular Sieve (MS) 5Å and 4Å, and a Cu catalyst R11 has been chosen and validated by irradiating a set of RPCs at the CERN Gamma Irradiation Facility (GIF) for several years. A very important feature of this new configuration is the increase of the cycle duration for each purifier, which results in better system stability.
Signatures of synchrony in pairwise count correlations
Tatjana Tchumatchenko
2010-04-01
Concerted neural activity can reflect specific features of sensory stimuli or behavioral tasks. Correlation coefficients and count correlations are frequently used to measure correlations between neurons, design synthetic spike trains and build population models. But are correlation coefficients always a reliable measure of input correlations? Here, we consider a stochastic model for the generation of correlated spike sequences which replicates neuronal pairwise correlations in many important aspects. We investigate under which conditions the correlation coefficients reflect the degree of input synchrony and when they can be used to build population models. We find that correlation coefficients can be a poor indicator of input synchrony in some cases of input correlations. In particular, count correlations computed for large time bins can vanish despite the presence of input correlations. These findings suggest that network models or potential coding schemes of neural population activity need to incorporate temporal properties of correlated inputs and take into consideration the regimes of firing rates and correlation strengths to ensure that their building blocks are unambiguous measures of synchrony.
Alaska Steller Sea Lion Pup Count Database
National Oceanic and Atmospheric Administration, Department of Commerce — This database contains counts of Steller sea lion pups on rookeries in Alaska made between 1961 and 2015. Pup counts are conducted in late June-July. Pups are...
CalCOFI Egg Counts Positive Tows
National Oceanic and Atmospheric Administration, Department of Commerce — Fish egg counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets], and...
CalCOFI Larvae Counts Positive Tows
National Oceanic and Atmospheric Administration, Department of Commerce — Fish larvae counts and standardized counts for eggs captured in CalCOFI icthyoplankton nets (primarily vertical [Calvet or Pairovet], oblique [bongo or ring nets],...
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
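The Kirchhoff index can be computed directly from the Laplacian spectrum via the standard identity Kf(G) = n · sum(1/mu) over the nonzero Laplacian eigenvalues mu, which equals the sum of resistance distances over all vertex pairs. A small sketch:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum(1/mu) over the nonzero Laplacian eigenvalues mu,
    equivalent to the sum of resistance distances over all vertex pairs
    of a connected graph."""
    A = np.array(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    mu = np.linalg.eigvalsh(L)
    nonzero = mu[mu > 1e-9]                 # drop the zero eigenvalue
    return len(A) * float(np.sum(1.0 / nonzero))

# Path on 3 vertices: r12 = r23 = 1, r13 = 2, so Kf = 4.
path3 = [[0, 1, 0],
         [1, 0, 1],
         [0, 1, 0]]
print(round(kirchhoff_index(path3), 6))     # -> 4.0
```

For the triangle, each pairwise resistance is 2/3 (one edge in parallel with a two-edge path), giving Kf = 2, which the same function reproduces.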
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Jayanthi, Hari Kishan; Tulasi, Sai Krishna
2016-01-01
Introduction: Dengue is one of the common diseases presenting as fever with thrombocytopenia, also causing significant morbidity and complications. Objectives: Though the correlation between platelet count, bleeding manifestations and hemorrhagic complications has been extensively studied, less is known about the correlation between platelet count and non-hemorrhagic complications. This study was done to examine the correlation between platelet count and non-hemorrhagic complications, duration of hospital stay, and the additive effect of leucopenia with thrombocytopenia on complications. Methods: Our study is a prospective observational study done on 99 patients who had dengue fever with thrombocytopenia. Correlations were obtained using scatter plots and SPSS software (trial version). Results: Transaminitis (12.12%) was the most common complication followed by acute renal injury (2%). In our study we found that as the platelet count decreased, the complication rate increased (P = 0.0006). The duration of hospital stay increased (P = 0.00597) with decreasing platelet count, in contrast to another study where there was no correlation between the two. There was no correlation between thrombocytopenia with leucopenia and complications (P = 0.292), similar to the other study. Conclusion: Platelet count can be used to predict the complications and duration of hospital stay, and hence allow better use of resources. PMID:27453855
Kohei Honda
2013-01-01
Conclusions: These results suggest that pollen count levels may correlate with the rate of sensitization for JC pollinosis, but may not affect the rate of onset among sensitized children in northeast Japan.
DC KIDS COUNT e-Databook Indicators
DC Action for Children, 2012
2012-01-01
This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…
Monte Carlo Simulation of Counting Experiments.
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
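The binomial construction described above can be sketched directly: split the counting interval into many subintervals, each holding at most one count, and the resulting count distribution approaches Poisson as the subdivision is refined (an illustrative sketch of the derivation, not the original program):

```python
import random

def simulate_counts(mean_counts, subintervals, trials, rng):
    """Draw `trials` counts by splitting the counting interval into
    `subintervals` Bernoulli slots, each holding at most one count."""
    p = mean_counts / subintervals          # per-slot count probability
    return [sum(1 for _ in range(subintervals) if rng.random() < p)
            for _ in range(trials)]

rng = random.Random(42)
counts = simulate_counts(mean_counts=5.0, subintervals=500, trials=5000, rng=rng)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# As subintervals grow, the binomial tends to Poisson: mean ~ variance.
```

The hallmark of the Poisson limit is that the sample mean and sample variance of the simulated counts agree (here both near 5).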
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Digital coincidence counting - initial results
Butcher, K. S. A.; Watt, G. C.; Alexiev, D.; van der Gaast, H.; Davies, J.; Mo, Li; Wyllie, H. A.; Keightley, J. D.; Smith, D.; Woods, M. J.
2000-08-01
Digital Coincidence Counting (DCC) is a new technique in radiation metrology, based on the older method of analogue coincidence counting. It has been developed by the Australian Nuclear Science and Technology Organisation (ANSTO), in collaboration with the National Physical Laboratory (NPL) of the United Kingdom, as a faster, more reliable means of determining the activity of ionising radiation samples. The technique employs a dual channel analogue-to-digital converter acquisition system for collecting pulse information from a 4π beta detector and an NaI(Tl) gamma detector. The digitised pulse information is stored on a high-speed hard disk and timing information for both channels is also stored. The data may subsequently be recalled and analysed using software-based algorithms. In this letter we describe some recent results obtained with the new acquisition hardware being tested at ANSTO. The system is fully operational and is now in routine use. Results for 60Co and 22Na radiation activity calibrations are presented; initial results with 153Sm are also briefly mentioned.
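The principle behind 4π beta-gamma coincidence counting (digital or analogue) is that the detection efficiencies cancel: with beta efficiency eb and gamma efficiency eg, the beta, gamma and coincidence count totals are A·eb·t, A·eg·t and A·eb·eg·t, so the activity follows from the three rates alone. A deliberately idealized sketch (no background, dead-time or decay-scheme corrections, which real calibrations require):

```python
def coincidence_activity(n_beta, n_gamma, n_coinc, live_time):
    """Idealized 4*pi beta-gamma coincidence estimate of activity (Bq):
    Nb = A*eb*t, Ng = A*eg*t, Nc = A*eb*eg*t, so the unknown
    efficiencies eb, eg cancel in (Nb * Ng) / (Nc * t)."""
    return (n_beta * n_gamma) / (n_coinc * live_time)

# Consistency check with made-up efficiencies: A = 5 kBq, eb = 0.9,
# eg = 0.3, 100 s live time.
A, eb, eg, t = 5000.0, 0.9, 0.3, 100.0
est = coincidence_activity(A * eb * t, A * eg * t, A * eb * eg * t, t)
```

The estimate reproduces the assumed activity exactly, independent of the efficiencies, which is why the method serves as a primary standardization technique.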
Association between absolute blood eosinophil count and CKD stages among cardiac patients.
Ishii, Rui; Fujita, Shu-Ichi; Kizawa, Shun; Sakane, Kazushi; Morita, Hideaki; Ozeki, Michishige; Sohmiya, Koichi; Hoshiga, Masaaki; Ishizaka, Nobukazu
2016-02-01
Elevated eosinophil count was shown to be associated with the development of cholesterol embolization syndrome, a potentially life-threatening condition, after catheter-based procedures. We investigated the association between stages of chronic kidney disease (CKD) and the absolute eosinophil count (AEC) among cardiac patients. CKD stages were determined solely on the estimated glomerular filtration rate or requirement for hemodialysis. Eosinophilia is defined as an eosinophil count exceeding 500/μL. A total of 1022 patients were enrolled in the current study, and eosinophil counts (/μL) were grouped into quartiles; the association with CKD stage persisted after adjustment for covariates including blood pressure and total white blood cell count. Similarly, after adjustment for the same variables, eosinophilia was associated with severe renal dysfunction with an odds ratio of 2.60 (95 % confidence interval, 1.08-6.26). Eosinophil count was positively associated with higher CKD stages among cardiology patients, some fraction of which might be related to subclinical cholesterol embolization.
How much do women count if they are not counted?
Federica Taddia
2006-01-01
The condition of women throughout the world is marked by countless injustices and violations of the most fundamental rights established by the Universal Declaration of Human Rights, and every culture is potentially prone to commit discrimination against women in various forms. Women are worse fed, more exposed to physical violence, more exposed to diseases and less educated; they have less access to, or are excluded from, vocational training paths; they are the most vulnerable among prisoners of conscience, refugees and immigrants and the least considered within ethnic minorities; from their very childhood, women are humiliated, undernourished, sold, raped and killed; their work is generally less paid compared to men's work and in some countries they are victims of forced marriages. Such a condition is the result of old traditions that implicit, gender-differentiated education has long promoted through cultural models based on theories, practices and policies marked by discrimination and structured differentially for men and women. Within these cultural models, the basic educational institutions have played and still play a major role in perpetuating such traditions. Nevertheless, if we want to overcome inequalities and provide women with empowerment, we have to start right from the educational institutions and in particular from school, through the adoption of an intercultural approach to education: an approach based on active pedagogy and on methods of analysis, exchange and enhancement typical of socio-educational animation. The intercultural approach to education is attentive to promote the realisation of each individual and the dignity and right of everyone to express himself/herself in his/her own way. Such an approach will give women the opportunity to become actual agents of collective change and to get the strength and wellbeing necessary to count and be counted as human beings entitled to freedom and equality, and to have access to all
Neutron triples counting data for uranium
Croft, Stephen, E-mail: crofts@ornl.gov [Oak Ridge National laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); LaFleur, Adrienne M. [Los Alamos National Laboratory, Los Alamos , NM 87545 (United States); McElroy, Robert D. [Oak Ridge National laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); Swinhoe, Martyn T. [Los Alamos National Laboratory, Los Alamos , NM 87545 (United States)
2015-06-01
Correlated neutron counting using multiplicity shift register logic extracts the first three factorial moments from the detected neutron pulse train. The descriptive properties of the measurement item (mass, the ratio of (α,n) to spontaneous fission neutron production, and leakage self-multiplication) are related to the observed singles (S), doubles (D) and triples (T) rates, and this is the basis of the widely used multiplicity counting assay method. The factorial moments required to interpret and invert the measurement data in the framework of the point kinetics model may be calculated from the spontaneous fission prompt neutron multiplicity distribution P(ν). In the case of 238U very few measurements of P(ν) are available and the derived values, especially for the higher factorial moments, are not known with high accuracy. In this work, we report the measurement of the triples rate per gram of 238U based on the analysis of a set of measurements in which a collection of 10 cylinders of UO2F2, each containing about 230 g of compound, were measured individually and in groups. Special care was taken to understand and compensate the recorded multiplicity histograms for the effect of random cosmic-ray induced background neutrons, which also arrive in bursts and mimic fissions, but with a different and harder multiplicity distribution. We compare our fully corrected (deadtime, background, efficiency, multiplication) experimental results with first-principles expectations based on evaluated nuclear data. Based on our results we suspect that the current evaluated nuclear data is biased, which points to a need to undertake new basic measurements of the 238U prompt neutron multiplicity distribution.
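The factorial moments that link P(ν) to the S, D and T rates can be sketched directly, using the reduced factorial-moment convention (divided by k!) common in multiplicity analysis. The distribution below is a hypothetical toy, not evaluated 238U data:

```python
def factorial_moments(p):
    """First three reduced factorial moments nu1, nu2, nu3 of a neutron
    multiplicity distribution P(nu) = p[nu], as used to interpret the
    singles/doubles/triples rates in point-model multiplicity analysis:
    nu_k = sum_nu C(nu, k) * P(nu)."""
    nu1 = sum(n * pn for n, pn in enumerate(p))
    nu2 = sum(n * (n - 1) * pn for n, pn in enumerate(p)) / 2
    nu3 = sum(n * (n - 1) * (n - 2) * pn for n, pn in enumerate(p)) / 6

    return nu1, nu2, nu3

# Hypothetical toy distribution for illustration only.
p_toy = [0.2, 0.3, 0.3, 0.2]
nu1, nu2, nu3 = factorial_moments(p_toy)
```

In the point model, S scales with nu1, D with nu2 and T with nu3, which is why the poorly known higher moments of the 238U distribution feed directly into the triples-rate prediction discussed above.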
Automatic cell counting with ImageJ.
Grishagin, Ivan V
2015-03-15
Cell counting is an important routine procedure. However, to date there is no comprehensive, easy to use, and inexpensive solution for routine cell counting, and this procedure usually needs to be performed manually. Here, we report a complete solution for automatic cell counting in which a conventional light microscope is equipped with a web camera to obtain images of a suspension of mammalian cells in a hemocytometer assembly. Based on the ImageJ toolbox, we devised two algorithms to automatically count these cells. This approach is approximately 10 times faster and yields more reliable and consistent results compared with manual counting.
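The core of such a pipeline is thresholding followed by counting connected blobs, as in ImageJ's particle analysis. A toy stand-in on a pre-thresholded binary mask (a minimal flood-fill labeller, not the authors' ImageJ algorithms):

```python
def count_cells(mask):
    """Count 4-connected blobs in a binary mask (1 = cell pixel), a toy
    stand-in for threshold + particle counting on a hemocytometer image."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1                       # found a new cell
                stack = [(r, c)]                 # flood-fill its pixels
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return blobs

# Three separate blobs in a tiny synthetic mask.
mask = [[1, 1, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [0, 0, 1, 0, 0]]
print(count_cells(mask))  # -> 3
```

Real images additionally need illumination correction, thresholding and size filtering, which is where the ImageJ toolbox comes in.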
Peronio, P.; Acconcia, G.; Rech, I.; Ghioni, M. [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)
2015-11-15
Time-Correlated Single Photon Counting (TCSPC) has been long recognized as the most sensitive method for fluorescence lifetime measurements, but often requiring “long” data acquisition times. This drawback is related to the limited counting capability of the TCSPC technique, due to pile-up and counting loss effects. In recent years, multi-module TCSPC systems have been introduced to overcome this issue. Splitting the light into several detectors connected to independent TCSPC modules proportionally increases the counting capability. Of course, multi-module operation also increases the system cost and can cause space and power supply problems. In this paper, we propose an alternative approach based on a new detector and processing electronics designed to reduce the overall system dead time, thus enabling efficient photon collection at high excitation rate. We present a fast active quenching circuit for single-photon avalanche diodes which features a minimum dead time of 12.4 ns. We also introduce a new Time-to-Amplitude Converter (TAC) able to attain extra-short dead time thanks to the combination of a scalable array of monolithically integrated TACs and a sequential router. The fast TAC (F-TAC) makes it possible to operate the system towards the upper limit of detector count rate capability (∼80 Mcps) with reduced pile-up losses, addressing one of the historic criticisms of TCSPC. Preliminary measurements on the F-TAC are presented and discussed.
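The counting-loss argument can be illustrated with the standard nonparalyzable dead-time model; this sketch assumes that model applies and plugs in the 12.4 ns figure from the abstract, with illustrative rates:

```python
# Sketch: counts lost to dead time under the standard nonparalyzable
# model, m = n / (1 + n*tau). The 12.4 ns dead time is taken from the
# abstract; the rates below are illustrative.

def measured_rate(true_rate_cps, dead_time_s):
    """Nonparalyzable dead-time model: observed rate vs true rate."""
    return true_rate_cps / (1.0 + true_rate_cps * dead_time_s)

tau = 12.4e-9                      # 12.4 ns dead time (from the abstract)
for n in (1e6, 10e6, 80e6):        # true photon rates, counts/s
    m = measured_rate(n, tau)
    loss = 1 - m / n
    print(f"{n:.0e} cps -> {m:.3e} cps measured ({loss:.1%} lost)")
```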
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
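For a discrete classifier response, the mutual information being maximized can be estimated from a joint count table; a sketch (illustrative counts; the paper instead optimizes a continuous entropy estimate):

```python
# Sketch: mutual information between classifier responses and true
# labels, estimated from a 2-D table of joint counts. Illustrative
# numbers; the paper works with a differentiable entropy estimator.
import math

def mutual_information(joint):
    """I(X;Y) in nats from a table joint[i][j] of co-occurrence counts."""
    total = sum(sum(row) for row in joint)
    px = [sum(row) / total for row in joint]
    py = [sum(col) / total for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, n in enumerate(row):
            if n:
                pxy = n / total
                mi += pxy * math.log(pxy / (px[i] * py[j]))
    return mi

# Responses perfectly aligned with labels -> maximal MI for 2 classes
print(mutual_information([[50, 0], [0, 50]]))  # log(2) ≈ 0.693
```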
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
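For orientation, the strong maximum principle in its classical form can be stated as follows (a standard textbook formulation, paraphrased rather than quoted from the paper):

```latex
% Strong maximum principle (Hopf), standard statement for a uniformly
% elliptic operator; formulation paraphrased, not quoted from the paper.
\textbf{Theorem (Hopf).} Let $u \in C^2(\Omega)$ satisfy
\[
  Lu \;=\; \sum_{i,j} a_{ij}(x)\,\partial_{ij} u
        \;+\; \sum_i b_i(x)\,\partial_i u \;\ge\; 0
  \quad \text{in a domain } \Omega \subset \mathbb{R}^n,
\]
with $L$ uniformly elliptic and coefficients locally bounded.
If $u$ attains its supremum at an interior point of $\Omega$,
then $u$ is constant in $\Omega$.
```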
Assessing Rotation-Invariant Feature Classification for Automated Wildebeest Population Counts.
Colin J Torney
Accurate and on-demand animal population counts are the holy grail for wildlife conservation organizations throughout the world because they enable fast and responsive adaptive management policies. While the collection of image data from camera traps, satellites, and manned or unmanned aircraft has advanced significantly, the detection and identification of animals within images remains a major bottleneck, since counting is primarily conducted by dedicated enumerators or citizen scientists. Recent developments in the field of computer vision suggest a potential resolution to this issue through the use of rotation-invariant object descriptors combined with machine learning algorithms. Here we implement an algorithm to detect and count wildebeest from aerial images collected in the Serengeti National Park in 2009 as part of the biennial wildebeest count. We find that the per-image error rates are greater than, but comparable to, two separate human counts. For the total count, the algorithm is more accurate than both manual counts, suggesting that human counters have a tendency to systematically over- or under-count images. While the accuracy of the algorithm is not yet at an acceptable level for fully automatic counts, our results show this method is a promising avenue for further research, and we highlight specific areas where future research should focus in order to develop fast and accurate enumeration of aerial count data. If combined with a bespoke image collection protocol, this approach may yield a fully automated wildebeest count in the near future.
Discrete calculus methods for counting
Mariconda, Carlo
2016-01-01
This book provides an introduction to combinatorics, finite calculus, formal series, recurrences, and approximations of sums. Readers will find not only coverage of the basic elements of the subjects but also deep insights into a range of less common topics rarely considered within a single book, such as counting with occupancy constraints, a clear distinction between algebraic and analytical properties of formal power series, an introduction to discrete dynamical systems with a thorough description of Sarkovskii’s theorem, symbolic calculus, and a complete description of the Euler-Maclaurin formulas and their applications. Although several books touch on one or more of these aspects, precious few cover all of them. The authors, both pure mathematicians, have attempted to develop methods that will allow the student to formulate a given problem in a precise mathematical framework. The aim is to equip readers with a sound strategy for classifying and solving problems by pursuing a mathematically rigorous yet ...
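As a small taste of the Euler-Maclaurin material, the formula's first correction terms already reproduce a power sum exactly; a sketch for f(k) = k²:

```python
# Sketch: first-order Euler-Maclaurin approximation of sum_{k=a}^{b} f(k)
#   ≈ ∫_a^b f(x) dx + (f(a) + f(b))/2 + (f'(b) - f'(a))/12
# applied to f(k) = k^2. Since f''' = 0, these terms give the sum exactly.

def euler_maclaurin_sum_sq(a, b):
    """Approximate sum of k^2 for k in [a, b] via Euler-Maclaurin."""
    integral = (b**3 - a**3) / 3.0        # ∫ x^2 dx
    boundary = (a**2 + b**2) / 2.0        # trapezoid endpoint correction
    deriv = (2*b - 2*a) / 12.0            # B_2/2! * (f'(b) - f'(a))
    return integral + boundary + deriv

exact = sum(k * k for k in range(1, 101))  # closed form: n(n+1)(2n+1)/6
print(euler_maclaurin_sum_sq(1, 100), exact)
```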
Photon counting compressive depth mapping
Howland, Gregory A; Ware, Matthew R; Howell, John C
2013-01-01
We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 x 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 x 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second.
STUDY OF HERPES ZOSTER IN HIV PATIENTS IN RELATION TO CD4 COUNT
Nivedita Devi
2015-09-01
AIMS AND OBJECTIVES: 1. To study herpes zoster in HIV positive patients in relation to CD4 count. 2. To study the various clinical presentations, common sites, and demographic characteristics of herpes zoster in HIV. MATERIALS AND METHODS: A study was conducted on 94 HIV patients with a clinical diagnosis of herpes zoster attending the DVL OPD of Government General Hospital, Kakinada. Severity of rash was graded as mild, moderate, or severe depending on the number of lesions. Intensity of pain was assessed using a visual analog scale (VAS), a numerical rating scale marked from 0 to 10 in increasing order of severity. The relation of CD4 count to herpes zoster and the complications of zoster were recorded. RESULTS: The maximum incidence of herpes zoster was found in the sexually active age group, with a higher incidence in males (53.1%) and urban people (55.3%). Patients with severe rash were 57.4%, moderate rash 31.9%, and mild rash 10.6%. At the time of presentation, the majority (51.06%) had vesicular rash. The most common symptom was pricking pain, followed by burning sensation and stabbing pain. Most patients had thoracic dermatome involvement (51), cervical in 18, trigeminal nerve in 16, and lumbar in 9. The ophthalmic branch was involved in 7, maxillary in 5, and mandibular in 4. The majority, 18 (19.14%), were in the CD4 range 200-249, 15 between CD4 150-199, and 11 between CD4 350-399. Complications noted were post-herpetic neuralgia, secondary bacterial infection, scarring, conjunctivitis, facial palsy, and keratitis. Multidermatomal involvement was seen in 15.95%.
Expectation Maximization for Hard X-ray Count Modulation Profiles
Benvenuto, Federico; Piana, Michele; Massone, Anna Maria
2013-01-01
This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized for the analysis of count modulation profiles in solar hard X-ray imaging based on Rotating Modulation Collimators. The algorithm described in this paper solves the maximum likelihood problem iteratively while encoding a positivity constraint into the iterative optimization scheme. The result is therefore a classical Expectation Maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, ...
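The iteration described, maximum likelihood for Poisson data with positivity enforced multiplicatively, is the classical EM (Richardson-Lucy) update; a toy sketch on a hypothetical 2×2 response matrix, not actual RHESSI modulation profiles:

```python
# Sketch of the Expectation Maximization (Richardson-Lucy) update for
# Poisson data y = A x:  x <- x * (A^T (y / (A x))) / (A^T 1).
# Toy 2x2 system with a hypothetical response matrix, noise-free data.

def em_step(A, x, y):
    """One multiplicative EM update; positivity is preserved by construction."""
    ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(y))]
    ratio = [y[i] / ax[i] for i in range(len(y))]
    col_sum = [sum(A[i][j] for i in range(len(y))) for j in range(len(x))]
    back = [sum(A[i][j] * ratio[i] for i in range(len(y))) for j in range(len(x))]
    return [x[j] * back[j] / col_sum[j] for j in range(len(x))]

A = [[0.8, 0.2],
     [0.3, 0.7]]                        # hypothetical response matrix
x_true = [10.0, 5.0]
y = [0.8*10 + 0.2*5, 0.3*10 + 0.7*5]    # noise-free data = [9.0, 6.5]

x = [1.0, 1.0]                          # positive initial guess
for _ in range(500):
    x = em_step(A, x, y)
print(x)                                # converges toward [10, 5]
```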
Undercooling and nodule count in thin walled ductile iron castings
Pedersen, Karl Martin; Tiedje, Niels Skat
2007-01-01
Casting experiments have been performed with eutectic and hypereutectic castings with plate thicknesses from 2 to 8 mm, involving both temperature measurements during solidification and microstructural examination afterwards. The nodule count was the same for the eutectic and hypereutectic castings in the thin plates (≤4.3 mm), while in the 8 mm plate the nodule count was higher in the hypereutectic than in the eutectic castings. The minimum temperature prior to the eutectic recalescence (Tmin) was 15 to 20 °C lower for the eutectic than for the hypereutectic castings. This is due to nucleation of graphite nodules, which begins at a lower temperature in the eutectic than in the hypereutectic castings. The recalescence (ΔTrec) was, however, also larger for the eutectic casting, and in the thin plates the maximum temperature after recalescence (Tmax) was the same in the eutectic and hypereutectic plates...
Single ion counting with a MCP (microchannel plate) detector
Tawara, Hiroko; Sasaki, Shinichi; Miyajima, Mitsuhiro [National Lab. for High Energy Physics, Tsukuba, Ibaraki (Japan); Shibamura, Eido
1996-07-01
In this study, a single-ion-counting method using alpha-particle-impact ionization of Ar atoms is demonstrated and the preliminary {epsilon}{sub mcp} for Ar ions with incident energies of 3 to 4.7 keV is determined. Single-ion counting with the MCP is performed under the following experimental conditions: (1) A signal from the MCP can reasonably be identified as the incidence of a single Ar ion. (2) The counting rate of Ar ions is less than 1 s{sup -1}. (3) The incident Ar ions are not focused on a small part of the active area of the MCP; that is, {epsilon}{sub mcp} is determined with respect to the whole active area of the MCP. So far, no absolute detection efficiency has been reported under these conditions.
Design and construction of a photon counting system
Pérez, F. R.; Del Valle, C.; Reyes, L.; Tobón, J.; Barrero, C.; Velásquez, A.
2007-03-01
This article describes the design and implementation of a photon counting system built from low cost electronic devices. The system is connected to a spectrometer in order to study events related to low levels of luminance intensity. It uses a photo-multiplier tube (PMT) for photon detection. The photon counting system comprises 5 stages: the detector, a pre-amplifier, a pulse comparator, a pulse counter, and a communications interface to a PC. Data acquisition is done through the serial port. The system allows the detection of radiation from signals whose counting rates are several thousand pulses per second. As an application of the system, the Raman Stokes spectrum of polystyrene as well as the fluorescence band of an organic pigment on a poly-vinyl matrix is shown.
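The comparator and counter stages amount to counting rising-edge threshold crossings; a software sketch on a synthetic sampled signal (the real system does this in analog hardware):

```python
# Sketch of the comparator + counter stages: count pulses in a sampled
# PMT signal that cross a discriminator threshold (rising edges only).
# The waveform below is synthetic and purely illustrative.

def count_pulses(samples, threshold):
    """Count upward threshold crossings in a sampled signal."""
    count = 0
    above = False
    for v in samples:
        if v > threshold and not above:
            count += 1           # rising edge: one detected pulse
        above = v > threshold
    return count

signal = [0, 1, 6, 7, 2, 0, 0, 5, 8, 1, 0, 6, 0]
print(count_pulses(signal, 4))  # three pulses cross the threshold
```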
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
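A minimal daisyworld iteration, using the commonly quoted Watson-Lovelock parameter values, can be sketched as follows; this is an illustrative toy, not one of the specific models the paper analyzes:

```python
# Minimal daisyworld sketch (Watson & Lovelock 1983 style): black and
# white daisies modify planetary albedo, which feeds back on the
# temperature that controls their growth. Parameter values are the
# commonly quoted ones; treat this as an illustrative toy model.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S = 917.0            # solar flux scale, W m^-2
Q = 2.06e9           # local-temperature redistribution constant, K^4
GAMMA = 0.3          # daisy death rate
ALBEDO = {"bare": 0.5, "white": 0.75, "black": 0.25}

def growth(T):
    """Parabolic growth rate, optimal at 295.5 K, zero far from it."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - T) ** 2)

def run(luminosity, steps=20000, dt=0.01):
    a_w, a_b = 0.01, 0.01                       # initial daisy areas
    for _ in range(steps):
        bare = 1.0 - a_w - a_b
        A = bare * ALBEDO["bare"] + a_w * ALBEDO["white"] + a_b * ALBEDO["black"]
        Te4 = S * luminosity * (1.0 - A) / SIGMA     # planetary T^4
        T_w = (Q * (A - ALBEDO["white"]) + Te4) ** 0.25
        T_b = (Q * (A - ALBEDO["black"]) + Te4) ** 0.25
        a_w += dt * a_w * (bare * growth(T_w) - GAMMA)
        a_b += dt * a_b * (bare * growth(T_b) - GAMMA)
    return a_w, a_b, Te4 ** 0.25

a_w, a_b, T = run(1.0)
print(f"white={a_w:.2f} black={a_b:.2f} T={T:.1f} K")
```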
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Multiplicity counting from fission chamber signals in the current mode
Pázsit, I.; Pál, L.; Nagy, L.
2016-12-01
In nuclear safeguards, estimation of sample parameters using neutron-based non-destructive assay methods is traditionally based on multiplicity counting with thermal neutron detectors in the pulse mode. These methods in general require multi-channel analysers and various dead time correction methods. This paper proposes and elaborates on an alternative method, which is based on fast neutron measurements with fission chambers in the current mode. A theory of "multiplicity counting" with fission chambers is developed by incorporating Böhnel's concept of superfission [1] into a master equation formalism, developed recently by the present authors for the statistical theory of fission chamber signals [2,3]. Explicit expressions are derived for the first three central auto- and cross moments (cumulants) of the signals of up to three detectors. These constitute the generalisation of the traditional Campbell relationships for the case when the incoming events represent a compound Poisson distribution. Because now the expressions contain the factorial moments of the compound source, they contain the same information as the singles, doubles and triples rates of traditional multiplicity counting. The results show that in addition to the detector efficiency, the detector pulse shape also enters the formulas; hence, the method requires a more involved calibration than the traditional method of multiplicity counting. However, the method has some advantages by not needing dead time corrections, as well as having a simpler and more efficient data processing procedure, in particular for cross-correlations between different detectors, than the traditional multiplicity counting methods.
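The classical Campbell relationships referred to above take a closed form for a simple pulse shape; a sketch with an exponential pulse and illustrative numbers (not the generalized compound-Poisson case derived in the paper):

```python
# Sketch: classical Campbell relations for a detector in current mode.
# For Poisson pulse arrivals at rate s0 with pulse shape f(t), the
# signal cumulants are kappa_n = s0 * integral f(t)^n dt. Evaluated
# analytically for f(t) = q * exp(-t/tau); numbers are illustrative.

def campbell_cumulants(rate, q, tau, orders=(1, 2, 3)):
    """kappa_n = rate * q^n * tau / n for f(t) = q * exp(-t / tau)."""
    return [rate * q**n * tau / n for n in orders]

rate = 1.0e6          # detection events per second (illustrative)
q, tau = 2.0, 1.0e-6  # pulse amplitude and decay time (illustrative)
k1, k2, k3 = campbell_cumulants(rate, q, tau)
print(k1, k2, k3)     # mean, variance, third cumulant of the signal
```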
Radium-228 analysis of natural waters by Cherenkov counting of Actinium-228
Aleissa, Khalid A.; Almasoud, Fahad I.; Islam, Mohammed S. [Atomic Energy Research Institute, King Abdul Aziz City for Science and Technology, P.O. Box 6086, Riyadh 11442 (Saudi Arabia); L' Annunziata, Michael F. [IAEA Expert, Montague Group, P.O. Box 5033, Oceanside, CA 92052-5033 (United States)], E-mail: mlannunziata@cox.net
2008-12-15
The activities of {sup 228}Ra in natural waters were determined by the Cherenkov counting of the daughter nuclide {sup 228}Ac. The radium was pre-concentrated on MnO{sub 2} and the radium purified via ion exchange and, after a 2-day period of incubation to allow for secular equilibrium between the parent-daughter {sup 228}Ra({sup 228}Ac), the daughter nuclide {sup 228}Ac was isolated by ion exchange according to the method of Nour et al. [2004. Radium-228 determination of natural waters via concentration on manganese dioxide and separation using Diphonix ion exchange resin. Appl. Radiat. Isot. 61, 1173-1178]. The Cherenkov photons produced by {sup 228}Ac were counted directly without the addition of any scintillation reagents. The optimum Cherenkov counting window, sample volume, and vial type were determined experimentally to achieve optimum Cherenkov photon detection efficiency and lowest background count rates. An optimum detection efficiency of 10.9{+-}0.1% was measured for {sup 228}Ac by Cherenkov counting with a very low Cherenkov photon background of 0.317{+-}0.013 cpm. The addition of sodium salicylate into the sample counting vial at a concentration of 0.1 g/mL yielded a more than 3-fold increase in the Cherenkov detection efficiency of {sup 228}Ac to 38%. Tests of the Cherenkov counting technique were conducted with several water standards of known activity and the results obtained compared closely with a conventional liquid scintillation counting technique. The advantages and disadvantages of Cherenkov counting compared to liquid scintillation counting methods are discussed. Advantages include much lower Cherenkov background count rates and consequently lower minimal detectable activities for {sup 228}Ra and no need for expensive environmentally unfriendly liquid scintillation cocktails. The disadvantages of the Cherenkov counting method include the need to measure {sup 228}Ac Cherenkov photon detection efficiency and optimum Cherenkov counting volume
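Converting a Cherenkov count rate to activity with the reported efficiency and background is a one-line calculation; a sketch using the abstract's figures and an illustrative gross count rate:

```python
# Sketch: converting a Cherenkov count rate to 228Ac activity using the
# abstract's figures (10.9% detection efficiency, 0.317 cpm background).
# The gross count rate below is illustrative, not measured data.

def activity_bq(gross_cpm, bg_cpm=0.317, efficiency=0.109):
    """Net activity in Bq from gross/background cpm and efficiency."""
    net_cps = (gross_cpm - bg_cpm) / 60.0
    return net_cps / efficiency

print(activity_bq(13.4))  # ≈ 2.0 Bq for a 13.4 cpm gross rate
```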
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational...
Complete Blood Count and Retinal Vessel Calibers
Gerald Liew; Jie Jin Wang; Elena Rochtchina; Tien Yin Wong; Paul Mitchell
2014-01-01
OBJECTIVE: The influence of hematological indices such as complete blood count on microcirculation is poorly understood. Retinal microvasculature can be directly visualized and vessel calibers are associated with a range of ocular and systemic diseases. We examined the association of complete blood count with retinal vessel calibers. METHODS: Cross-sectional population-based Blue Mountains Eye Study, n = 3009, aged 49+ years. Complete blood count was measured from fasting blood samples taken ...
Improvement of Delayed Neutron Counting System
YUAN Guo-jun; XIAO Cai-jin; YANG Wei; ZHANG Gui-ying; JIN Xiang-chun; WANG Ping-sheng; NI Bang-fa
2012-01-01
A new delayed neutron counting system, suited to qualitative and quantitative analysis of mixtures of fissionable nuclides, will be established at the China Advanced Research Reactor (CARR). We use 3He proportional counters to count the delayed neutrons after the samples are irradiated by reactor neutrons; the samples include U3O8 standard, uranium ore, and enriched uranium. The counting efficiency and detection limit of this system were then calculated.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Effects of lek count protocols on greater sage-grouse population trend estimates
Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.
2016-01-01
Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count
Vector perturbations of galaxy number counts
Durrer, Ruth; Tansella, Vittorio
2016-07-01
We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, with the Bardeen potentials replaced by line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations induced from scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.
Vector perturbations of galaxy number counts
Durrer, Ruth
2016-01-01
We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, with the Bardeen potentials replaced by line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations induced from scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.
郇浩; 陶选如; 陶然; 程小康; 董朝; 李鹏飞
2014-01-01
In high-dynamic environments the received signal exhibits a large Doppler frequency and Doppler frequency rate-of-change, and traditional carrier tracking loops struggle to reach a good compromise between dynamic performance and tracking accuracy. This paper proposes a fast maximum likelihood estimation method for the Doppler frequency rate-of-change, whose estimate is used to aid the carrier tracking loop. First, it is shown that maximum likelihood estimation of the Doppler frequency and its rate-of-change is equivalent to the Fractional Fourier Transform (FrFT). Second, to reduce the large computational load of the two-dimensional search over Doppler frequency and its rate-of-change, an estimation method combining instantaneous self-correlation with a segmental Discrete Fourier Transform (DFT) is proposed, and the resulting coarse estimate is used to narrow the search range. Finally, the estimate is applied in the carrier tracking loop to reduce dynamic stress and improve tracking accuracy. Theoretical analysis and computer simulation show that the search computation falls to 5.25 percent of the original amount at a Signal-to-Noise Ratio (SNR) of -30 dB, the Root Mean Square Error (RMSE) of the tracked frequency rate is only 8.46 Hz/s, and the tracking sensitivity is improved by more than 3 dB compared with the traditional carrier tracking method.
Genetic Regulatory Networks that count to 3
Lehmann, Martin; Sneppen, K.
2013-01-01
that contain repressive links, which we model by Michaelis-Menten terms. Interestingly, we find that counting to 3 does not require a hierarchy in Hill coefficients, in contrast to counting to 2, which is known from lambda phage. Furthermore, we find two main circuit architectures: one design also found...
Correcting Finger Counting to Snellen Acuity.
Karanjia, Rustum; Hwang, Tiffany Jean; Chen, Alexander Francis; Pouw, Andrew; Tian, Jack J; Chu, Edward R; Wang, Michelle Y; Tran, Jeffrey Show; Sadun, Alfredo A
2016-10-01
In this paper, the authors describe an online tool with which to convert and thus quantify count finger measurements of visual acuity into Snellen equivalents. It is hoped that this tool allows for the re-interpretation of retrospectively collected data that provide visual acuity in terms of qualitative count finger measurements.
Is It Counting, or Is It Adding?
Eisenhardt, Sara; Fisher, Molly H.; Thomas, Jonathan; Schack, Edna O.; Tassell, Janet; Yoder, Margaret
2014-01-01
The Common Core State Standards for Mathematics (CCSSI 2010) expect second grade students to "fluently add and subtract within 20 using mental strategies" (2.OA.B.2). Most children begin with number word sequences and counting approximations and then develop greater skill with counting. But do all teachers really understand how this…
It Is Time to Count Learning Communities
Henscheid, Jean M.
2015-01-01
As the modern learning community movement turns 30, it is time to determine just how many, and what type, of these programs exist at America's colleges and universities. This article first offers a rationale for counting learning communities followed by a description of how disparate counts and unclear definitions hamper efforts to embed these…
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
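The Levenberg-Marquardt/MLE idea described in the abstract above can be sketched as follows: minimize the Poisson negative log-likelihood with a damped Gauss-Newton update whose weights come from the Fisher information (1/m, the model value, rather than 1/n, the data). The exponential-decay model, parameter names, and damping schedule below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lm_poisson_fit(t, n, model, jac, theta0, lam=1e-3, iters=50):
    """Levenberg-Marquardt-style minimization of the Poisson negative
    log-likelihood L = sum(m - n*log(m)), the MLE objective for counts,
    instead of a least-squares measure.  Illustrative sketch."""
    theta = np.asarray(theta0, float)

    def nll(th):
        m = model(t, th)
        return np.sum(m - n * np.log(m))

    for _ in range(iters):
        m = model(t, theta)
        J = jac(t, theta)                       # (len(t), n_params) Jacobian of m
        g = J.T @ (1.0 - n / m)                 # gradient of L
        H = J.T @ (J / m[:, None])              # Fisher-information approximation
        while True:
            step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
            if nll(theta + step) < nll(theta):  # accept step, relax damping
                theta, lam = theta + step, max(lam / 10, 1e-12)
                break
            lam *= 10                           # reject step, damp harder
            if lam > 1e12:
                return theta
    return theta

# Toy example: histogram of an exponential decay, mean counts A*exp(-k*t)
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 50)
A_true, k_true = 200.0, 1.3
n = rng.poisson(A_true * np.exp(-k_true * t))
model = lambda t, th: th[0] * np.exp(-th[1] * t)
jac = lambda t, th: np.column_stack([np.exp(-th[1] * t),
                                     -th[0] * t * np.exp(-th[1] * t)])
A_hat, k_hat = lm_poisson_fit(t, n, model, jac, [100.0, 1.0])
print(A_hat, k_hat)   # estimates should land near A=200, k=1.3
```

The only change relative to an ordinary least-squares L-M loop is the objective and the 1/m weighting, which is what makes the fit unbiased for low-count Poisson bins.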
Reliability of the total lymphocyte count as a parameter of nutrition.
Forse, R A; Rompre, C; Crosilla, P; O-Tuitt, D; Rhode, B; Shizgal, H M
1985-05-01
To evaluate the total lymphocyte count as a means of nutritional assessment, body composition studies (a proven method of nutritional assessment) and total lymphocyte determinations were performed simultaneously in 153 patients. The total lymphocyte count correlated poorly with both the body cell mass and the nutritional state as measured by the exchangeable-sodium to exchangeable-potassium (Nae:Ke) ratio. For diagnosing malnutrition, the total lymphocyte count had a false-positive rate of 34% and a false-negative rate of 50%. In a group of 78 patients who received total parenteral nutrition for 2 weeks, the total lymphocyte count did not accurately reflect the nutritional changes. Because of its poor sensitivity and specificity, the total lymphocyte count is of no value as a measure of the nutritional state.
Material Screening with HPGe Counting Station for PandaX Experiment
Wang, Xuming; Fu, Changbo; Ji, Xiangdong; Liu, Xiang; Mao, Yajun; Wang, Hongwei; Wang, Siguang; Xie, Pengwei; Zhang, Tao
2016-01-01
A gamma counting station based on a high-purity germanium (HPGe) detector was set up for material screening for the PandaX dark matter experiments at the China Jinping Underground Laboratory. A low background gamma rate of 2.6 counts/min in the energy range of 20 to 2700 keV is achieved thanks to a well-designed passive shield. The sensitivities of the HPGe detector reach the mBq/kg level for isotopes such as K, U, and Th, and are even better for Co and Cs, owing to the low background rate and the high relative detection efficiency of 175%. The structure and performance of the counting station are described in this article. Detailed counting results for the radioactivity in materials used by the PandaX dark matter experiment are presented. The upgrade plan for the counting station is also discussed.
Material screening with HPGe counting station for PandaX experiment
Wang, X.; Chen, X.; Fu, C.; Ji, X.; Liu, X.; Mao, Y.; Wang, H.; Wang, S.; Xie, P.; Zhang, T.
2016-12-01
A gamma counting station based on a high-purity germanium (HPGe) detector was set up for material screening for the PandaX dark matter experiments at the China Jinping Underground Laboratory. A low background gamma rate of 2.6 counts/min in the energy range of 20 to 2700 keV is achieved thanks to a well-designed passive shield. The sensitivities of the HPGe detector reach the mBq/kg level for isotopes such as K, U, and Th, and are even better for Co and Cs, owing to the low background rate and the high relative detection efficiency of 175%. The structure and performance of the counting station are described in this article. Detailed counting results for the radioactivity in materials used by the PandaX dark matter experiment are presented. The upgrade plan for the counting station is also discussed.
Suenimeire Vieira
2012-12-01
INTRODUCTION: One of the benefits of regular physical exercise appears to be improved modulation of the heart by the autonomic nervous system. However, the role of physical activity as a determinant of heart rate variability (HRV) is not well established. The aim of this study was therefore to verify whether resting heart rate and the maximum workload reached in an exercise test correlate with HRV indices in elderly men. METHODS: Eighteen elderly men aged 60 to 70 years were studied. The following assessments were performed: a) a maximal exercise test on a cycle ergometer, using the Balke protocol, to evaluate aerobic capacity; b) recording of heart rate (HR) and R-R intervals for 15 minutes at rest in the supine position. The data were analyzed in the time domain, computing the RMSSD index, and in the frequency domain, computing the low-frequency (LF) and high-frequency (HF) indices and the LF/HF ratio. Pearson's correlation test (p < 0.05) was applied to check for association between the maximum workload reached in the exercise test and the HRV indices. CONCLUSION: The temporal and spectral heart rate variability indices studied are not indicators of the aerobic capacity of elderly men assessed on a cycle ergometer.
何宇梅
2015-01-01
OBJECTIVES: This study aimed to assess the diagnostic accuracy of inflammatory markers as an aid to the early grading of diabetic foot disease. METHODS: Erythrocyte sedimentation rate (ESR), leukocyte count, and C-reactive protein (CRP) were measured on admission in 68 patients with diabetic foot disease of varying severity, and the parameters were compared. RESULTS: As a single marker, CRP was the most informative for diabetic foot disease of grades 2 to 4 compared with non-diabetic foot disease (P < 0.05); in contrast, ESR and leukocyte count were not predictive of severity. CONCLUSIONS: Measurement of the inflammatory marker CRP could be a valuable predictive factor for grading severity in patients with diabetic foot disease.
Coulon, R.
2010-11-10
Sodium-cooled Fast Reactors (SFR) are under development for the fourth generation of nuclear reactors. Breeder reactors could address energy demand while preserving uranium resources; further aims are reducing radioactive waste production through transmutation of minor actinides and supporting non-proliferation through a closed fuel cycle. This thesis shows the safety and performance benefits that could be obtained from a new generation of gamma spectrometry systems for SFRs. The high count-rate capability now available allows new methods for accurate power measurement and faster detection of fuel-clad failure to be studied. Simulations were performed and an experimental test was carried out at the French Phenix SFR at CEA Marcoule, showing promising results for both measurements. (author)
Anuran Point Count Species Abundance and Frequency 2000 Wertheim National Wildlife Refuge
US Fish and Wildlife Service, Department of the Interior — These are summary sheets outlining the point count species abundance and frequency rates of anuran at Wertheim National Wildlife Refuge during the spring and summer...
Anuran Point Count Species Abundance and Frequency 1999 Wertheim National Wildlife Refuge
US Fish and Wildlife Service, Department of the Interior — These are summary sheets outlining the point count species abundance and frequency rates of anuran at Wertheim National Wildlife Refuge during the summer of 1999.
Anuran Point Count Species Abundance and Frequency 2006 Wertheim National Wildlife Refuge
US Fish and Wildlife Service, Department of the Interior — These are summary sheets outlining the point count species abundance and frequency rates of anuran at Wertheim National Wildlife Refuge during the spring and summer...
Anuran Point Count Species Abundance and Frequency 2002 Wertheim National Wildlife Refuge
US Fish and Wildlife Service, Department of the Interior — These are summary sheets outlining the point count species abundance and frequency rates of anuran at Wertheim National Wildlife Refuge during the spring and summer...
Anuran Point Count Species Abundance and Frequency 2001 Wertheim National Wildlife Refuge
US Fish and Wildlife Service, Department of the Interior — These are summary sheets outlining the point count species abundance and frequency rates of anuran at Wertheim National Wildlife Refuge during the spring and summer...
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented for estimating the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equations are solved with the Levinson algorithm to obtain the iterative formula for the error-predicting filter, from which the receiver function is estimated. During extrapolation the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. Maximizing the entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
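The Levinson recursion referred to in the abstract above can be sketched in a few lines: it solves the Toeplitz normal equations built from the autocorrelation sequence, producing the prediction-error filter and the reflection coefficients whose magnitude stays below 1 (the stability property the abstract mentions). This is a generic sketch of the algorithm, not the paper's code.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations by the Levinson recursion.
    Returns the prediction-error filter `a` (with a[0] = 1), the
    reflection coefficients, and the final prediction-error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = float(r[0])
    refl = []
    for m in range(1, order + 1):
        acc = a[:m] @ r[m:0:-1]          # filter convolved with autocorrelation
        k = -acc / err                   # reflection coefficient, |k| < 1
        refl.append(k)
        a_prev = a[:m].copy()
        a[1:m + 1] += k * a_prev[::-1]   # order-update of the filter
        err *= (1.0 - k * k)             # error power shrinks at each order
    return a, refl, err

# Autocorrelation of an AR(1) process with coefficient 0.5: r[tau] = 0.5**tau
r = np.array([1.0, 0.5, 0.25])
a, refl, err = levinson_durbin(r, 2)
print(a, refl, err)   # a = [1, -0.5, 0], reflection coeffs [-0.5, 0], err = 0.75
```

The recovered filter [1, -0.5] exactly whitens the AR(1) process, and the second reflection coefficient is zero because a first-order model already fits, illustrating how the recursion identifies model order as a by-product.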
Ziheng YANG
2004-01-01
Estimation of species divergence times is well known to be sensitive to violation of the molecular clock assumption (rate constancy over time). However, the molecular clock is almost always violated in comparisons of distantly related species, such as different orders of mammals. Thus it is important to take into account different rates among lineages when divergence times are estimated. The maximum likelihood method provides a framework for accommodating rate variation and can naturally accommodate heterogeneous datasets from multiple loci and fossil calibrations at multiple nodes. Previous implementations of the likelihood method require the researcher to assign branches to different rate classes. In this paper, I implement a heuristic rate-smoothing algorithm (the AHRS algorithm) to automate the assignment of branches to rate groups. The method combines features of previous likelihood, Bayesian and rate-smoothing methods. The likelihood algorithm is also improved to accommodate missing sequences at some loci in the combined analysis. The new method is applied to estimate the divergence times of Madagascar's mouse lemurs, and the results are compared with previous likelihood and Bayesian analyses [Acta Zoologica Sinica 50(4): 645-656, 2004].
Photon counting arrays for AO wavefront sensors
Vallerga, J; McPhate, J; Mikulec, Bettina; Clark, Allan G; Siegmund, O; CERN. Geneva
2005-01-01
Future wavefront sensors for AO on large telescopes will require a large number of pixels and must operate at high frame rates. Unfortunately for CCDs, there is a readout noise penalty for operating faster, and this noise can add up rather quickly when considering the number of pixels required for the extended shape of a sodium laser guide star observed with a large telescope. Imaging photon counting detectors have zero readout noise and many pixels, but have suffered in the past from low QE at longer wavelengths (>500 nm). Recent developments in GaAs photocathode technology, CMOS ASIC readouts and FPGA processing electronics have resulted in noiseless WFS detector designs that are competitive with silicon array detectors, though at ~40% the QE of CCDs. We review noiseless array detectors and compare their centroiding performance with CCDs using the best available characteristics of each. We show that for sub-aperture binning of 6x6 and greater, noiseless detectors have a smaller centroid error at flu...
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels collect solar radiation and convert it into electricity. One technique for maximizing the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is found by determining the voltage and current at maximum power; these quantities are obtained by differentiating the power equation and locating its maximum. Once the maximum is found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power itself are each plotted as a function of the time of day.
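The differentiation step described above can be sketched numerically: for a single-diode panel model, set dP/dV = I + V·dI/dV = 0 and solve for the maximum-power voltage by bisection. The model and its parameter values here are illustrative assumptions, not the project's actual panel data.

```python
import math

# Hypothetical single-diode panel model (all parameter values illustrative):
# I(V) = I_SC - I_0 * (exp(V / N_VT) - 1)
I_SC = 5.0     # short-circuit current (A)
I_0 = 1e-9     # diode saturation current (A)
N_VT = 1.0     # n * k * T / q summed over cells in series (V)

def current(v):
    return I_SC - I_0 * (math.exp(v / N_VT) - 1.0)

def dpower_dv(v):
    """dP/dV = I + V * dI/dV; the maximum-power voltage is its root."""
    di_dv = -(I_0 / N_VT) * math.exp(v / N_VT)
    return current(v) + v * di_dv

# dP/dV is positive at V=0 and negative at open circuit, so bisect the root.
v_lo = 0.0
v_hi = N_VT * math.log(I_SC / I_0 + 1.0)   # open-circuit voltage
for _ in range(100):
    v_mid = 0.5 * (v_lo + v_hi)
    if dpower_dv(v_mid) > 0:
        v_lo = v_mid
    else:
        v_hi = v_mid
v_mp = 0.5 * (v_lo + v_hi)
p_max = v_mp * current(v_mp)
print(v_mp, p_max)
```

Repeating this for the irradiance at each time of day (which changes I_SC) yields the maximum-power curves the abstract describes.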
Haitao Zhao; Yuning Dong; Hui Zhang; Nanjie Liu; Hongbo Zhu
2013-01-01
This paper proposes an environment-aware best-retransmission-count optimization control scheme for IEEE 802.11 multi-hop wireless networks. The proposed scheme predicts wireless resources using statistical channel state and optimizes the maximum retransmission count based on the wireless channel environment state to improve the packet delivery success ratio. The media access control (MAC) layer selects the best retransmission count by perceiving the type of packet loss on the wireless link, using the wireless channel characteristics and environment information, and adjusts packet forwarding adaptively to improve the packet retransmission probability. Simulation results show that the best-retransmission-count selection scheme achieves a higher packet delivery percentage and a lower packet collision probability than the corresponding traditional MAC transmission control protocols.
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
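The excess-zeros phenomenon described above is easy to demonstrate by simulation: a zero-inflated Poisson produces far more zeros than a plain Poisson with the same mean would predict. The mixing probability, rate, and sample size below are hypothetical values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Zero-inflated Poisson: with probability pi the count is a structural zero,
# otherwise it is drawn from Poisson(lam).  Parameters are illustrative.
pi, lam, n = 0.3, 4.0, 100_000
structural_zero = rng.random(n) < pi
counts = np.where(structural_zero, 0, rng.poisson(lam, n))

mean = counts.mean()                    # (1 - pi) * lam = 2.8
zero_frac = (counts == 0).mean()        # pi + (1 - pi) * exp(-lam) ~ 0.31
poisson_zero_frac = np.exp(-mean)       # what a plain Poisson fit predicts
print(mean, zero_frac, poisson_zero_frac)
```

Here the observed zero fraction is roughly five times what a Poisson with the same mean allows, which is exactly the mismatch that zero-inflated and zero-altered models are built to absorb.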
Ingham, S C; Hu, Y; Ané, C
2011-08-01
The objective of this study was to evaluate possible claims by advocates of small-scale dairy farming that milk from smaller Wisconsin farms is of higher quality than milk from larger Wisconsin farms. Reported bulk tank standard plate count (SPC) and somatic cell count (SCC) test results for Wisconsin dairy farms were obtained for February to December, 2008. Farms were sorted into 3 size categories using available size-tracking criteria: small (≤118 cows; 12,866 farms), large (119-713 cattle; 1,565 farms), and confined animal feeding operations (≥714 cattle; 160 farms). Group means were calculated (group=farm size category) for the farms' minimum, median, mean, 90th percentile, and maximum SPC and SCC. Statistical analysis showed that group means for median, mean, 90th percentile, and maximum SPC and SCC were almost always significantly higher for the small farm category than for the large farm and confined animal feeding operations farm categories. With SPC and SCC as quality criteria and the 3 farm size categories of ≤118, 119 to 713, and ≥714 cattle, the claim of Wisconsin smaller farms producing higher quality milk than Wisconsin larger farms cannot be supported.
Rarity, J G; Wall, T E; Ridley, K D; Owens, P C; Tapster, P R
2000-12-20
We evaluate the performance of various commercially available InGaAs/InP avalanche photodiodes for photon counting in the infrared at temperatures that can be reached by Peltier cooling. We find that dark count rates are high, and this can partially saturate devices before optimum performance is achieved. At low temperatures the dark count rate rises because of a strong contribution from correlated afterpulses. We discuss ways of suppressing these afterpulses for different photon-counting applications.
Counting Polyominoes on Twisted Cylinders
Barequet, Gill; Moffie, Micha; Ribó, Ares; Rote, Günter
2005-01-01
We improve the lower bounds on Klarner's constant, which describes the exponential growth rate of the number of polyominoes (connected subsets of grid squares) with a given number of squares. We achieve this by analyzing polyominoes on a different surface, a so-called twisted cylinder, by the transfer matrix method. A bijective representation of the "states" of partial solutions is crucial for allowing a compact representation of the successive iteration vec...
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
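The gamma-mixture construction in the abstract above can be checked numerically: drawing each observation's Poisson rate from a gamma distribution yields negative-binomial counts whose variance exceeds the mean by mean²/shape, while a fixed-rate Poisson with the same mean shows variance ≈ mean. Sample sizes and parameters below are chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gamma-mixed Poisson: a random rate per subject (between-subject variation)
# makes the marginal counts negative binomial, i.e. overdispersed.
shape, scale, n = 2.0, 5.0, 200_000
rates = rng.gamma(shape, scale, n)        # subject-specific Poisson rates
mixed = rng.poisson(rates)                # negative binomial marginally
plain = rng.poisson(shape * scale, n)     # same mean (10), no mixing

# Poisson: var = mean.  Gamma mixture: var = mean + mean**2 / shape.
print(plain.mean(), plain.var())   # both near 10
print(mixed.mean(), mixed.var())   # mean near 10, var near 10 + 100/2 = 60
```

The same two-stage recipe, applied to the event times of a Poisson process rather than to a single count, generates the mixed counting processes the abstract goes on to describe.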
Sun, Xiaoli; Krainak, Michael A.; Hasselbrack, William B.; La Rue, Ross A.
2007-05-01
We report the test results of a hybrid photomultiplier tube (HPMT) with a transfer electron (TE) InGaAsP photocathode and a GaAs Schottky avalanche photodiode (APD) anode. Unlike Geiger-mode InGaAsP APDs, these HPMTs (also known as intensified photodiodes (IPD), vacuum APDs, or hybrid photodetectors) operate in linear mode without the need for quenching and gating. Their greatest advantages are wide dynamic range, high speed, large photosensitive area, and the potential for dual photon-counting and analog detection operation. The photon detection efficiency we measured was 25% at 1064 nm wavelength with a dark count rate of 60,000/s at -22 degrees Celsius. The output pulse width in response to a single photon detection is about 0.9 ns. The maximum count rate was 90 Mcts/s and was limited solely by the speed of the discriminator used in the measurement (10 ns dead time). The spectral response of these devices extended from 900 to 1300 nm. We also measured the HPMT response to 60 ps laser pulses. The average output pulse amplitude increased monotonically with the input pulse energy, which suggests that we can resolve the photon number in an incident pulse. The jitter of the HPMT output was found to be about 0.5 ns standard deviation and depended on the bias voltage applied to the TE photocathode. To our knowledge, these HPMTs are the most sensitive non-gating photon detectors at 1064 nm wavelength, and they will have many applications in laser altimeters, atmospheric lidars, and free space laser communication systems.
An annual layer counted EDML time scale covering the past 16700 years
Vinther, B. M.; Clausen, H. B.; Kipfstuhl, S.; Fischer, H.; Bigler, M.; Oerter, H.; Wegner, A.; Wilhelms, F.; Severi, M.; Udisti, R.; Beer, J.; Steinhilber, F.; Muscheler, R.; Rasmussen, S. O.; Svensson, A.
2012-04-01
Using high-resolution chemical impurity and dielectric profiling data, annual layers have been counted on the EPICA ice core from Dronning Maud Land (EDML), Antarctica, spanning the past 16,700 years. The methodology used for counting Greenland ice cores and creating the Greenland Ice Core Chronology 2005 (GICC05) [Rasmussen et al., 2006] has also been implemented for the EDML counting. The estimated maximum counting error for the EDML counting is approx. 5%, but a preliminary volcanic matching with Greenland ice core records suggests differences of 1% or less during the Holocene between the EDML counting and GICC05. A comparison of cosmogenic isotope records from EDML and Greenland also suggests differences of less than 1% between the two annual-layer-counted chronologies. Reference: Rasmussen, S.O., Andersen, K.K., Svensson, A., Steffensen, J.P., Vinther, B.M., Clausen, H.B., Andersen, M.L.S., Johnsen, S.J., Larsen, L.B., Dahl-Jensen, D., Bigler, M., Röthlisberger, R., Fischer, H., Goto-Azuma, K., Hansson, M.E., Ruth, U., A new Greenland ice core chronology for the last glacial termination, Journal of Geophysical Research, Vol. 111, D06102, doi:10.1029/2005JD006079, 2006.