WorldWideScience

Sample records for integral counting method

  1. Determination of 222Rn in water samples from wells and springs in Tokyo by a modified integral counting method

    International Nuclear Information System (INIS)

    Homma, Y.; Murase, Y.; Handa, K.; Murakami, I.

    1997-01-01

    222Rn in 2 L water samples was extracted with 30 mL of toluene, and 21 mL of the toluene solution was transferred into a liquid scintillation vial in which PPO (2,5-diphenyloxazole) had been placed in advance. The total activity of 222Rn in the water sample was calculated from the Ostwald solubility coefficients of 222Rn in toluene and water at the temperature of the sample water and from the volumes of water and toluene. About 40% of the 222Rn dissolved in a 2 L water sample can be collected. After standing for 3.5 h, the equilibrium mixture of 222Rn and its daughters was measured with an Aloka liquid scintillation spectrometer using a modified integral counting method, which extrapolates the integral counting curve not to the zero pulse height but to the zero detection threshold, i.e. the average energy required to produce a measurable pulse in the liquid scintillation spectrometer. The conventional method, which agitates the water sample (usually about 10 mL) with a liquid scintillation cocktail, is practical when the 222Rn activity is high. Adding 10 mL of water sample, however, may also add variable amounts of quencher; in some cases the water sample is preserved with nitric acid. The slope of the integral counting rate curve increases as the quench level of the sample increases. The modified integral counting method therefore gives more accurate 222Rn concentrations for strongly quenched water samples than the conventional integral counting method. A 222Rn sample of 0.2 Bq/L can be determined within an overall uncertainty of 3.1%.
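
The toluene extraction step above can be sketched numerically. This is not code from the paper: the partition coefficient (the ratio of the Ostwald solubility coefficients for toluene and water) is an assumed round value of 50, chosen only because it reproduces the ~40% recovery quoted in the abstract.

```python
# Sketch (assumed numbers, not from the paper): fraction of dissolved 222Rn
# recovered when a 2 L water sample is equilibrated with 30 mL of toluene.
# K is the assumed toluene/water partition coefficient for radon.

def extracted_fraction(k_partition, v_toluene_ml, v_water_ml):
    """Fraction of dissolved radon that ends up in the toluene phase."""
    return k_partition * v_toluene_ml / (k_partition * v_toluene_ml + v_water_ml)

f = extracted_fraction(50.0, 30.0, 2000.0)
print(f"{f:.0%}")   # close to the ~40% recovery quoted in the abstract
```

Scaling the activity measured in the 21 mL toluene aliquot by this fraction (and by 21/30 for the aliquot) recovers the total activity in the water sample.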

  2. Limits of reliability for the measurement of integral count

    International Nuclear Information System (INIS)

    Erbeszkorn, L.

    1979-01-01

    A method is presented for the exact and approximate calculation of reliability limits of a measured nuclear integral count. The formulae are applicable under measuring conditions that assure a Poisson distribution of the counts. The coefficients of the approximate formulae for the 90, 95, 98 and 99 per cent reliability levels are given. The exact reliability limits for the 90 per cent reliability level are calculated up to 80 integral counts. (R.J.)
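
Approximate limits of this kind can be sketched as follows. These are not the report's tabulated coefficients: the exact chi-square quantiles that give Poisson confidence limits are replaced here by the Wilson-Hilferty approximation, using only the Python standard library.

```python
# Sketch of approximate two-sided reliability limits for a Poisson count,
# in the spirit of the report's approximate formulae (assumed form, not its
# actual coefficients).  Wilson-Hilferty approximates the chi-square quantile.

from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile function."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def poisson_limits(n, level=0.90):
    """Approximate (lower, upper) reliability limits for a Poisson count n."""
    alpha = 1 - level
    lower = 0.5 * chi2_quantile(alpha / 2, 2 * n) if n > 0 else 0.0
    upper = 0.5 * chi2_quantile(1 - alpha / 2, 2 * n + 2)
    return lower, upper

lo, hi = poisson_limits(80, level=0.90)
print(round(lo, 1), round(hi, 1))   # roughly 66 and 96 for n = 80
```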

  3. Discrete calculus methods for counting

    CERN Document Server

    Mariconda, Carlo

    2016-01-01

    This book provides an introduction to combinatorics, finite calculus, formal series, recurrences, and approximations of sums. Readers will find not only coverage of the basic elements of the subjects but also deep insights into a range of less common topics rarely considered within a single book, such as counting with occupancy constraints, a clear distinction between algebraic and analytical properties of formal power series, an introduction to discrete dynamical systems with a thorough description of Sarkovskii’s theorem, symbolic calculus, and a complete description of the Euler-Maclaurin formulas and their applications. Although several books touch on one or more of these aspects, precious few cover all of them. The authors, both pure mathematicians, have attempted to develop methods that will allow the student to formulate a given problem in a precise mathematical framework. The aim is to equip readers with a sound strategy for classifying and solving problems by pursuing a mathematically rigorous yet ...

  4. Counting master integrals: Integration by parts vs. differential reduction

    International Nuclear Information System (INIS)

    Kalmykov, Mikhail Yu.; Kniehl, Bernd A.

    2011-01-01

    The techniques of integration by parts and differential reduction differ in the counting of master integrals. This is illustrated using as an example the two-loop sunset diagram with on-shell kinematics. A new algebraic relation between the master integrals of the two-loop sunset diagram that does not follow from the standard integration-by-parts technique is found.

  5. Counting master integrals. Integration by parts vs. differential reduction

    International Nuclear Information System (INIS)

    Kalmykov, Mikhail Yu; Kniehl, Bernd A.

    2011-05-01

    The techniques of integration by parts and differential reduction differ in the counting of master integrals. This is illustrated using as an example the two-loop sunset diagram with on-shell kinematics. A new algebraic relation between the master integrals of the two-loop sunset diagram that does not follow from the integration-by-parts technique is found. (orig.)

  6. System and method of liquid scintillation counting

    International Nuclear Information System (INIS)

    Rapkin, E.

    1977-01-01

    A method of liquid scintillation counting utilizing a combustion step to overcome quenching effects comprises novel features of automatic sequential introduction of samples into a combustion zone and automatic sequential collection and delivery of combustion products into a counting zone. 37 claims, 13 figures

  7. Counting addressing method: Command addressable element and extinguishing module

    Directory of Open Access Journals (Sweden)

    Ristić Jovan D.

    2009-01-01

    The specific requirements that arise in addressable fire detection and alarm systems and the shortcomings of the existing addressing methods are discussed, and a new method of addressing detectors is proposed. The basic principles of addressing and of the response of a called element are stated. The extinguishing module is a specific subsystem in classic fire detection and alarm systems. The appearance of addressable fire detection and alarm systems did not essentially change the concept of the extinguishing module, because of the long calling period of such systems. An addressable fire security system based on the counting addressing method reaches high calling rates and enables integration of the extinguishing module into the addressable system. Solutions for the command addressable element and the integrated extinguishing module are given in this paper. The counting addressing method was developed for the specific requirements of fire detection and alarm systems, yet its speed and reliability justify its use in the acquisition of data on slowly varying parameters under industrial conditions.
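
The counting addressing principle can be illustrated with a toy simulation. The details below are assumed for illustration, not taken from the paper: the control panel broadcasts a reset and then a train of clock pulses, every element counts the pulses, and the element whose stored address equals the current count is the one being called.

```python
# Toy sketch of counting addressing (assumed mechanism, not the paper's
# circuit): elements share a clock line and each answers when its internal
# pulse counter reaches its stored address.

class Element:
    def __init__(self, address):
        self.address = address
        self.counter = 0

    def clock(self):
        """Count one addressing pulse; respond when the count hits our address."""
        self.counter += 1
        return self.counter == self.address

def call(elements, target_address):
    """Reset the loop, clock it target_address times, return the responder."""
    for e in elements:
        e.counter = 0                      # broadcast reset starts a new call
    responders = []
    for _ in range(target_address):
        responders = [e for e in elements if e.clock()]
    return responders[0] if responders else None

loop = [Element(a) for a in range(1, 9)]   # eight detectors on the loop
print(call(loop, 5).address)               # → 5: the called element answers
```

Because a call is just a pulse train plus one response, the calling rate is limited only by the clock speed, which is what lets the extinguishing module share the addressable loop.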

  8. The power counting theorem for Feynman integrals with massless propagators

    International Nuclear Information System (INIS)

    Lowenstein, J.H.

    2000-01-01

    Dyson's power counting theorem is extended to the case where some of the mass parameters vanish. Weinberg's ultraviolet convergence conditions are supplemented by infrared convergence conditions which combined are sufficient for the convergence of Feynman integrals. (orig.)

  9. The power counting theorem for Feynman integrals with massless propagators

    International Nuclear Information System (INIS)

    Lowenstein, J.H.

    1975-01-01

    Dyson's power counting theorem is extended to the case where some of the mass parameters vanish. Weinberg's ultraviolet convergence conditions are supplemented by infrared convergence conditions which combined are sufficient for the convergence of Feynman integrals. (orig.) [de

  10. Statistical Methods for Unusual Count Data

    DEFF Research Database (Denmark)

    Guthrie, Katherine A.; Gammill, Hilary S.; Kamper-Jørgensen, Mads

    2016-01-01

    microchimerism data present challenges for statistical analysis, including a skewed distribution, excess zero values, and occasional large values. Methods for comparing microchimerism levels across groups while controlling for covariates are not well established. We compared statistical models for quantitative...... microchimerism values, applied to simulated data sets and 2 observed data sets, to make recommendations for analytic practice. Modeling the level of quantitative microchimerism as a rate via Poisson or negative binomial model with the rate of detection defined as a count of microchimerism genome equivalents per...

  11. Ultra-fast photon counting with a passive quenching silicon photomultiplier in the charge integration regime

    Science.gov (United States)

    Zhang, Guoqing; Lina, Liu

    2018-02-01

    An ultra-fast photon counting method is proposed based on the charge integration of the output electrical pulses of passive quenching silicon photomultipliers (SiPMs). The results of numerical analysis with actual SiPM parameters show that the maximum photon counting rate of a state-of-the-art passive quenching SiPM can reach the THz level, which is much higher than that of existing photon counting devices. An experimental procedure based on this method is proposed. This photon counting regime of SiPMs is promising in many fields, such as the detection of light over a large dynamic range of power.
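
The counting principle in the charge integration regime can be sketched in a few lines. The gain value below is an assumed typical figure, not one from the paper: the photon number is inferred from the total integrated charge divided by the mean charge of a single fired microcell, Q1 = G·e.

```python
# Sketch of charge-integration photon counting (assumed numbers): divide the
# integrated output charge by the single-photoelectron charge G * e.

E_CHARGE = 1.602e-19          # elementary charge, C
GAIN = 1.0e6                  # assumed typical SiPM gain

def photon_count(total_charge_c, gain=GAIN):
    """Estimate the number of detected photons from the integrated charge."""
    return total_charge_c / (gain * E_CHARGE)

q = 1.602e-10                 # 0.16 nC of integrated charge (example value)
print(photon_count(q))        # ≈ 1000 photons detected
```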

  12. A power counting theorem for Feynman integrals on the lattice

    International Nuclear Information System (INIS)

    Reisz, T.

    1988-01-01

    A convergence theorem is proved, which states sufficient conditions for the existence of the continuum limit for a wide class of Feynman integrals on a space-time lattice. A new kind of a UV-divergence degree is introduced, which allows the formulation of the theorem in terms of power counting conditions. (orig.)

  13. Counting master integrals. Integration by parts vs. functional equations

    International Nuclear Information System (INIS)

    Kniehl, Bernd A.; Tarasov, Oleg V.

    2016-01-01

    We illustrate the usefulness of functional equations in establishing relationships between master integrals under the integration-by-parts reduction procedure by considering a certain two-loop propagator-type diagram as an example.

  14. Method of inspecting Raschig rings by neutron absorption counting

    International Nuclear Information System (INIS)

    Morris, R.N.; Murri, R.L.; Hume, M.W.

    1979-01-01

    A neutron counting method for inspecting borosilicate glass Raschig rings and an apparatus designed specifically for this method are discussed. The neutron count ratios for rings of a given thickness show a linear correlation to the boron oxide content of the rings. The count ratio also has a linear relationship to the thickness of rings of a given boron oxide content. Consequently, the experimentally-determined count ratio and physically-measured thickness of Raschig rings can be used to statistically predict their boron oxide content and determine whether or not they meet quality control acceptance criteria
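
The calibration described above can be sketched with a simple least-squares line. The calibration points below are invented for illustration; the abstract reports only that the count ratio is linear in boron oxide content for rings of a given thickness.

```python
# Hypothetical calibration sketch (invented data): fit the linear relation
# between neutron count ratio and B2O3 content, then predict content from a
# measured ratio, as the quality-control procedure in the abstract does.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

ratios = [0.95, 0.90, 0.85, 0.80]   # measured count ratios (invented)
b2o3   = [3.0, 4.0, 5.0, 6.0]       # known B2O3 content, wt% (invented)
slope, intercept = fit_line(ratios, b2o3)

def predict_b2o3(count_ratio):
    """Predicted B2O3 content for a ring of the calibrated thickness."""
    return slope * count_ratio + intercept

print(predict_b2o3(0.875))          # interpolated B2O3 content, wt% (≈ 4.5)
```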

  15. Advanced photon counting applications, methods, instrumentation

    CERN Document Server

    Kapusta, Peter; Erdmann, Rainer

    2015-01-01

    This volume focuses on Time-Correlated Single Photon Counting (TCSPC), a powerful tool allowing luminescence lifetime measurements to be made with high temporal resolution, even on single molecules. Combining spectrum and lifetime provides a "fingerprint" for identifying such molecules in the presence of a background. Used together with confocal detection, this permits single-molecule spectroscopy and microscopy in addition to ensemble measurements, opening up an enormous range of hot life science applications such as fluorescence lifetime imaging (FLIM) and measurement of Förster Resonant Energy Transfer (FRET) for the investigation of protein folding and interaction. Several technology-related chapters present both the basics and current state-of-the-art, in particular of TCSPC electronics, photon detectors and lasers. The remaining chapters cover a broad range of applications and methodologies for experiments and data analysis, including the life sciences, defect centers in diamonds, super-resolution micr...

  16. The Method of Recursive Counting: Can one go further?

    International Nuclear Information System (INIS)

    Creutz, M.; Horvath, I.

    1993-12-01

    After a short review of the Method of Recursive Counting, we introduce a general algebraic description of recursive lattice building. This provides a rigorous framework for discussing the method's limitations.

  17. Controlling a sample changer using the integrated counting system

    International Nuclear Information System (INIS)

    Deacon, S.; Stevens, M.P.

    1985-06-01

    Control of the sample changer from a counting system can be achieved by using a Scaler Timer type 6255 and a Sample Changer Control Interface type 6263. The interface used, however, has quite complex circuitry. The application therefore lends itself to the use of another 6000 Series module, the Integrated Counting System (ICS). With this unit, control is carried out through a control program written in BASIC for the Commodore PET (or any other device with an IEEE-488 interface). The ICS then controls the sample changer through a relatively simple interface unit. A brief description of how the ICS controls the sample changer is given. The control program is then described: first the running options are given, followed by a program description, listing and flowchart. (author)

  18. Controlling a sample changer using the integrated counting system

    Energy Technology Data Exchange (ETDEWEB)

    Deacon, S; Stevens, M P

    1985-06-01

    Control of the sample changer from a counting system can be achieved by using a Scaler Timer type 6255 and a Sample Changer Control Interface type 6263. The interface used, however, has quite complex circuitry. The application therefore lends itself to the use of another 6000 Series module, the Integrated Counting System (ICS). With this unit, control is carried out through a control program written in BASIC for the Commodore PET (or any other device with an IEEE-488 interface). The ICS then controls the sample changer through a relatively simple interface unit. A brief description of how the ICS controls the sample changer is given. The control program is then described: first the running options are given, followed by a program description, listing and flowchart.

  19. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  20. Numerical counting ratemeter with variable time constant and integrated circuits

    International Nuclear Information System (INIS)

    Kaiser, J.; Fuan, J.

    1967-01-01

    We present here the prototype of a numerical counting ratemeter which is a special version of a variable time-constant frequency meter (1). The originality of this work lies in the fact that the change of time constant is carried out automatically. Since the criterion for this change is the accuracy of the annunciated result, the integration time is varied as a function of the frequency. For the prototype described in this report, the time constant varies from 1 s to 1 ms for frequencies in the range 10 Hz to 10 MHz. This prototype is built entirely of Motorola MECL-type integrated circuits and is thus contained in two relatively small boxes. (authors) [fr

  1. Application of RI-counting method for posttraumatic CSF rhinorrhea

    International Nuclear Information System (INIS)

    Itoh, Takahiko; Terai, Yoshinori; Fujimoto, Shunichiro; Kawauchi, Masamitsu.

    1987-01-01

    Numerous techniques and procedures have been reported for the evaluation of CSF fistulas. Metrizamide CT cisternography and radioisotope (RI) cisternography in particular have been reported to be reliable for localizing the site of CSF leakage; in some cases, however, it has been difficult to diagnose the existence and the site of CSF leakage. The RI-counting method, which measures the RI count of intranasal cotton pledgets after the intrathecal injection of RI (111In-DTPA), is thought to be the most reliable method for detecting the presence of CSF leakage in such cases. We used six cotton pledgets, inserted into the upper, middle, and lower meatuses on both sides. Because the site of the pledget with the highest RI count bears an anatomical relationship to the opening of the paranasal sinus, the site of the dural defect and CSF leakage can be inferred. The RI-counting method was applied to two patients in whom it was difficult to diagnose the presence of CSF leakage with other procedures. The patients were unrestricted in position for four hours after the intrathecal injection of RI, and in the prone position for 30 minutes before RI counting of the intranasal cotton pledgets. After measuring the RI counts of the six pledgets, the counts were compared with each other. The cotton pledget inserted into the left middle meatus showed the highest count in both cases. From this result and the findings of conventional skull tomography, we inferred the site of CSF leakage to be the frontal or ethmoid sinus. Operation demonstrated an opening of the dura into the frontal sinus in one case and into the ethmoid sinus in the other. As mentioned above, the RI-counting method has the advantage of detecting both the existence and the site of CSF leakage. (author)

  2. A Method for Counting Moving People in Video Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Mario Vento

    2010-01-01

    People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect the people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following the second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm specifically takes into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.

  3. A Method for Counting Moving People in Video Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Conte Donatello

    2010-01-01

    People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect the people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following the second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm specifically takes into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.

  4. A Method for Counting Moving People in Video Surveillance Videos

    Science.gov (United States)

    Conte, Donatello; Foggia, Pasquale; Percannella, Gennaro; Tufano, Francesco; Vento, Mario

    2010-12-01

    People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect the people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following the second approach, that is based on the use of SURF features and of an ϵ-SVR regressor to provide an estimate of this count. The algorithm specifically takes into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.
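
The feature-to-count mapping at the heart of this approach can be sketched with a toy stand-in. The paper uses SURF interest points and an ϵ-SVR regressor; here a plain least-squares fit replaces the SVR, and all numbers are invented for illustration.

```python
# Conceptual sketch (invented data, least-squares in place of the paper's
# epsilon-SVR): learn a mapping from the number of moving feature points in a
# frame to the number of people, then apply it to a new frame.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# training frames: (number of moving SURF-like points, ground-truth count)
features = [40, 80, 120, 160]
people   = [ 5, 10,  15,  20]
slope, icept = fit_line(features, people)

def estimate_people(n_features):
    """Estimated people count for a frame with n_features moving points."""
    return round(slope * n_features + icept)

print(estimate_people(104))   # → 13 people for 104 moving feature points
```

A real implementation would weight features by estimated distance to correct for perspective, which is one of the problems the paper addresses.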

  5. Bioassay method for uranium in urine by delayed neutron counting

    International Nuclear Information System (INIS)

    Suratman; Purwanto; Sukarman-Aminjoyo

    1996-01-01

    A bioassay method for uranium in urine by delayed neutron counting has been studied. The aim of this research is to obtain a bioassay method for uranium in urine to be used for the determination of the internal dose of radiation workers. The bioassay was applied to artificially uranium-contaminated urine, with the weight of the contaminant varied. The uranium in the urine was irradiated in the Kartini reactor core through the pneumatic system, and the delayed neutrons were counted with a BF3 neutron counter. The recovery of the bioassay was between 69.8 and 88.8%, the standard deviation was less than 10%, and the minimum detectable amount was 0.387 μg.

  6. Investigation on Carbohydrate Counting Method in Type 1 Diabetic Patients

    Directory of Open Access Journals (Sweden)

    Osman Son

    2014-01-01

    Objective. The results of the Diabetes Control and Complications Trial (DCCT) have demonstrated the importance of medical nutrition therapy in the treatment of diabetes mellitus (DM). In this study, we investigated the positive effects of the carbohydrate (Kh) counting method on the success of type 1 DM treatment and on patients' quality of life. Methods. Of 37 type 1 DM patients who presented to Eskişehir Osmangazi University, Faculty of Medicine Hospital, Department of Endocrinology and Metabolism, 22 were treated by the Kh counting method, while 15, as a control group, were treated by multiple-dose intensive insulin therapy with a standard diabetic diet; both groups were under close follow-up for 6 months. Approval was obtained from the Ethical Committee of Eskişehir Osmangazi University, Medical Faculty, as well as informed consent from the patients. Body weight, body mass index, and body composition were analyzed in both groups at the beginning of the study and after the 6-month term. A short quality-of-life and medical research survey was administered. For statistical analysis, the t-test, chi-squared test, and Mann-Whitney U test were used. Results. No significant difference in glycemic control indicators was found between the Kh counting group and the group on a standard diabetic diet with multiple-dose insulin treatment. Conclusion. The Kh counting method, which offers diabetic individuals a flexible nutrition plan, is a practical method.

  7. Quality control methods in accelerometer data processing: identifying extreme counts.

    Directory of Open Access Journals (Sweden)

    Carly Rich

    Accelerometers are designed to measure plausible human activity; however, extremely high count values (EHCV) have been recorded in large-scale studies. Using population data, we develop methodological principles for establishing an EHCV threshold, propose a threshold to define EHCV in the ActiGraph GT1M, determine occurrences of EHCV in a large-scale study, identify device-specific error values, and investigate the influence of varying EHCV thresholds on daily vigorous physical activity (VPA). We estimated quantiles to analyse the distribution of all positive accelerometer count values obtained from 9005 seven-year-old children participating in the UK Millennium Cohort Study (MCS). A threshold to identify EHCV was derived by differentiating the quantile function. Data were screened for device-specific error count values and EHCV, and a sensitivity analysis was conducted to compare daily VPA estimates using three approaches to accounting for EHCV. Using our proposed threshold of ≥11,715 counts/minute to identify EHCV, we found that only 0.7% of all non-zero counts measured in MCS children were EHCV; in 99.7% of these children, EHCV comprised <1% of total non-zero counts. Only 11 MCS children (0.12% of the sample) returned accelerometers that contained negative counts; of 237 such values, 211 counts were equal to -32,768 in one child. The medians of daily minutes spent in VPA obtained without excluding EHCV, and when using a higher threshold (≥19,442 counts/minute), were, respectively, 6.2% and 4.6% higher than when using our threshold (6.5 minutes; p<0.0001). Quality control processes should be undertaken during accelerometer fieldwork and prior to analysing data in order to identify monitors recording error values and EHCV. The proposed threshold will improve the validity of VPA estimates in children's studies using the ActiGraph GT1M by ensuring that only plausible data are analysed. These methods can be applied to define appropriate EHCV thresholds for different accelerometer models.
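
The screening step described in the abstract can be sketched directly. The thresholds come from the abstract itself (11,715 counts/minute for EHCV, -32,768 as a device error value); the sample count series is invented.

```python
# Sketch of accelerometer data screening: flag device-error values (negative
# counts, e.g. -32768) and extremely high count values (EHCV) at the proposed
# ActiGraph GT1M threshold, then report EHCV as a share of non-zero counts.

EHCV_THRESHOLD = 11_715          # counts/minute, from the abstract

def screen(counts):
    """Return (error values, EHCV values, EHCV as % of non-zero counts)."""
    errors = [c for c in counts if c < 0]
    ehcv = [c for c in counts if c >= EHCV_THRESHOLD]
    nonzero = [c for c in counts if c > 0]
    pct_ehcv = 100 * len(ehcv) / len(nonzero) if nonzero else 0.0
    return errors, ehcv, pct_ehcv

counts = [0, 310, 5_400, 12_000, -32_768, 800]   # invented minute-level data
errors, ehcv, pct = screen(counts)
print(errors, ehcv, round(pct, 1))   # → [-32768] [12000] 25.0
```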

  8. High sensitivity neutron activation analysis using coincidence counting method

    International Nuclear Information System (INIS)

    Suzuki, Shogo; Okada, Yukiko; Hirai, Shoji

    1999-01-01

    Four standard samples, river sediment (NIES CRM No.16), Typical Japanese Diet, otoliths and river water, were irradiated in TRIGA-II (100 kW, 3.7×10^12 n cm^-2 s^-1) for 6 h. After irradiation and cooling, they were analyzed by the coincidence counting method and by conventional γ-ray spectrometry. Se, Ba and Hf were determined via 75Se 265 keV, 131Ba 496 keV and 181Hf 482 keV. For the river sediment sample, Ba and Hf showed the same values by the two methods, but the Se value obtained by the conventional method was contaminated by Ta, whereas the coincidence counting method could determine Se. For Typical Japanese Diet and otoliths, Se could be determined by both methods, while Ba and Hf could be determined by the coincidence counting method but not by the conventional method. The Se value in the river water agreed with the certified value. (S.Y.)

  9. A new method of quench monitoring in liquid scintillation counting

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1978-01-01

    The quench level of different liquid scintillation counting samples is measured by comparing the responses (pulse heights) produced by the same-energy electrons in each sample. The electrons utilized in the measurements are those of the maximum energy (Esub(max)) produced by the single Compton scattering process for the same-energy gamma-rays in each sample. The Esub(max) response produced in any sample is related to the Esub(max) response produced in an unquenched, sealed standard. The difference in response on a logarithmic response scale is defined as the ''H Number''. The H number is related to the counting efficiency of the desired radionuclide by measuring a set of standards with known amounts of the radionuclide and different amounts of quench (the standard quench curve). The concept of the H number has been shown to be theoretically valid. Based upon this proof, the features of the H number concept as embodied in the Beckman LS-8000 Series Liquid Scintillation Systems have been demonstrated. It has been shown that the H number is unique; it provides a method of instrument calibration and quench measurement over a wide dynamic range. Further, it has been demonstrated that the H number concept provides a universal quench parameter. Counting efficiency vs. H number plots are repeatable within statistical limits of ±1% counting efficiency. Through the use of the H number concept a very accurate method of automatic quench compensation (A.Q.C.) is possible. (T.G.)
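
The H number and its use with a quench curve can be sketched as follows. The pulse-height channels and the quench curve points below are invented for illustration; only the idea, the shift of the Compton-edge response on a logarithmic scale, looked up against a calibration curve, comes from the abstract.

```python
# Sketch (invented numbers): the H number is the shift, on a logarithmic
# pulse-height scale, of the Compton-edge (Emax) response of a quenched
# sample relative to an unquenched sealed standard; counting efficiency is
# then read off a standard quench curve by interpolation.

import math

def h_number(emax_standard, emax_sample):
    """Shift of the Compton-edge response on a log pulse-height scale."""
    return math.log(emax_standard / emax_sample)

# invented standard quench curve: (H number, counting efficiency) pairs
curve = [(0.0, 0.95), (0.5, 0.80), (1.0, 0.60), (1.5, 0.35)]

def efficiency(h):
    """Linear interpolation on the standard quench curve."""
    for (h0, e0), (h1, e1) in zip(curve, curve[1:]):
        if h0 <= h <= h1:
            return e0 + (e1 - e0) * (h - h0) / (h1 - h0)
    raise ValueError("H number outside calibrated range")

h = h_number(1000.0, 607.0)     # sample's Compton edge at channel 607
print(round(h, 2), round(efficiency(h), 2))   # H number, efficiency
```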

  10. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operation of instruments in particle physics experiments. Although some off-the-shelf TIGs can be employed, the need for a custom test or control system makes a TIG implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the Tapped Delay Line (TDL) architecture, whose delay cells are down to a few tens of picoseconds. In this context, an FPGA-based TIG with a fine delay step is preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range up to 8 s.
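
The hybrid counting idea can be reduced to a one-line formula. The coarse clock period below is an assumed example; only the 11 ps fine step is quoted from the abstract: a coarse counter running on the system clock covers the wide range, while the tapped delay line supplies the fine part below one clock period.

```python
# Sketch of the hybrid counting principle (clock period assumed): the
# generated interval is coarse clock periods plus fine delay-line taps.

CLOCK_PERIOD_PS = 4_000      # assumed 250 MHz coarse clock
TAP_DELAY_PS = 11            # fine step quoted in the abstract

def interval_ps(coarse_count, fine_taps):
    """Generated interval = coarse counter periods + delay-line taps."""
    return coarse_count * CLOCK_PERIOD_PS + fine_taps * TAP_DELAY_PS

print(interval_ps(2, 3))     # → 8033 ps
```

The multiplexer design discussed in the abstract matters because each tap must add the same delay regardless of its routing to the output, or the fine term above loses its uniform 11 ps step.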

  11. Design of time interval generator based on hybrid counting method

    International Nuclear Information System (INIS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-01-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operation of instruments in particle physics experiments. Although some off-the-shelf TIGs can be employed, the need for a custom test or control system makes a TIG implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the Tapped Delay Line (TDL) architecture, whose delay cells are down to a few tens of picoseconds. In this context, an FPGA-based TIG with a fine delay step is preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range up to 8 s.

  12. A New Method for Calculating Counts in Cells

    Science.gov (United States)

    Szapudi, István

    1998-04-01

    In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. Since one of the most successful methods of extracting the amplitude of higher-order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring that distribution with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.

  13. Analysis of electroperforated materials using the quadrat counts method

    Energy Technology Data Exchange (ETDEWEB)

    Miranda, E; Garzon, C; Garcia-Garcia, J [Departament d' Enginyeria Electronica, Universitat Autonoma de Barcelona, 08193 Bellaterra, Barcelona (Spain); MartInez-Cisneros, C; Alonso, J, E-mail: enrique.miranda@uab.cat [Departament de Quimica AnalItica, Universitat Autonoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)

    2011-06-23

    The electroperforation distribution in thin porous materials is investigated using the quadrat counts method (QCM), a classical statistical technique for evaluating deviations from complete spatial randomness (CSR). Perforations are created by means of electrical discharges generated by needle-like tungsten electrodes. The objective of perforating a thin porous material is to enhance its air permeability, a critical issue in many industrial applications involving paper, plastics, textiles, etc. Using image analysis techniques and specialized statistical software, it is shown that the perforation locations follow, beyond a certain length scale, a homogeneous 2D Poisson distribution.
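    A hedged sketch of the quadrat counts idea (not the paper's software): partition the observation window into a g × g grid, count perforations per quadrat, and compare the dispersion of the counts with that expected under a homogeneous Poisson process. Under CSR the chi-square statistic stays near its degrees of freedom, g·g − 1.

```python
import random

def quadrat_chi2(points, g):
    """Index-of-dispersion chi-square statistic over a g x g grid on the unit square."""
    counts = [0] * (g * g)
    for x, y in points:
        counts[min(int(x * g), g - 1) * g + min(int(y * g), g - 1)] += 1
    mean = sum(counts) / len(counts)
    return sum((c - mean) ** 2 for c in counts) / mean

# Synthetic CSR pattern: 400 uniform points; the statistic should stay
# near 24 (= 5*5 - 1 degrees of freedom).
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(400)]
stat = quadrat_chi2(pts, 5)
```

    A clustered pattern, by contrast, drives the statistic far above the degrees of freedom, which is the signal the QCM looks for.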

  14. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty
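    The propagation at issue can be stated compactly. Assuming Poisson counting statistics (var(N) = N) and independent gross and blank measurements, the net count rate and its standard deviation follow the standard formula sketched below; the numbers are illustrative, not from the paper.

```python
from math import sqrt

def net_rate_and_sigma(n_gross, t_gross, n_blank, t_blank):
    """Net count rate and its propagated Poisson standard deviation."""
    rate = n_gross / t_gross - n_blank / t_blank
    # var(rate) = var(n_gross)/t_gross^2 + var(n_blank)/t_blank^2
    sigma = sqrt(n_gross / t_gross**2 + n_blank / t_blank**2)
    return rate, sigma

rate, sigma = net_rate_and_sigma(900, 60.0, 400, 60.0)
# rate = 15.0 - 6.67 ≈ 8.33 counts/s; sigma = sqrt(1300/3600) ≈ 0.60 counts/s
```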

  15. An improved method for chromosome counting in maize.

    Science.gov (United States)

    Kato, A

    1997-09-01

    An improved method for counting chromosomes in maize (Zea mays L.) is presented. Application of cold treatment (5 °C, 24 h), heat treatment (42 °C, 5 min) and a second cold treatment (5 °C, 24 h) to root tips before fixation increased the number of condensed and dispersed countable metaphase chromosome figures. Fixed root tips were prepared by the enzymatic maceration-air drying method and preparations were stained with acetic orcein. Under favorable conditions, one preparation with 50-100 countable chromosome figures could be obtained in diploid maize using this method. Conditions affecting the dispersion of the chromosomes are described. This technique is especially useful for determining the somatic chromosome number in triploid and tetraploid maize lines.

  16. Change-Point Methods for Overdispersed Count Data

    National Research Council Canada - National Science Library

    Wilken, Brian A

    2007-01-01

    Although the Poisson model is often used to model count data, the two-parameter gamma-Poisson mixture parameterization of the negative binomial distribution is often a more adequate model for overdispersed count data...

  17. Integrating count and detection–nondetection data to model population dynamics

    Science.gov (United States)

    Zipkin, Elise F.; Rossman, Sam; Yackulic, Charles B.; Wiens, David; Thorson, James T.; Davis, Raymond J.; Grant, Evan H. Campbell

    2017-01-01

    There is increasing need for methods that integrate multiple data types into a single analytical framework as the spatial and temporal scale of ecological research expands. Current work on this topic primarily focuses on combining capture–recapture data from marked individuals with other data types into integrated population models. Yet, studies of species distributions and trends often rely on data from unmarked individuals across broad scales where local abundance and environmental variables may vary. We present a modeling framework for integrating detection–nondetection and count data into a single analysis to estimate population dynamics, abundance, and individual detection probabilities during sampling. Our dynamic population model assumes that site-specific abundance can change over time according to survival of individuals and gains through reproduction and immigration. The observation process for each data type is modeled by assuming that every individual present at a site has an equal probability of being detected during sampling processes. We examine our modeling approach through a series of simulations illustrating the relative value of count vs. detection–nondetection data under a variety of parameter values and survey configurations. We also provide an empirical example of the model by combining long-term detection–nondetection data (1995–2014) with newly collected count data (2015–2016) from a growing population of Barred Owl (Strix varia) in the Pacific Northwest to examine the factors influencing population abundance over time. Our model provides a foundation for incorporating unmarked data within a single framework, even in cases where sampling processes yield different detection probabilities. This approach will be useful for survey design and to researchers interested in incorporating historical or citizen science data into analyses focused on understanding how demographic rates drive population abundance.

  18. Proposals of counting method for bubble detectors and their intercomparisons

    International Nuclear Information System (INIS)

    Ramalho, Eduardo; Silva, Ademir X.; Bellido, Luis F.; Facure, Alessandro; Pereira, Mario

    2009-01-01

    The study of neutron spectrometry and dosimetry has become significantly easier thanks to relatively new devices called bubble detectors. Insensitive to gamma rays and composed of superheated emulsions, they are still the subject of much research in radiation physics and nuclear engineering. When bubble detectors are exposed to more intense neutron fields, or for longer times, more bubbles are produced and the statistical uncertainty of the dosimetric and spectrometric results is reduced. This work presents ways to perform bubble counting for such detectors, together with an updated procedure for imaging irradiated detectors that makes manual counting easier. Twelve BDS detectors were irradiated by the RDS111 cyclotron of the IEN (Instituto de Engenharia Nuclear) and photographed using an assembly specially designed for this experiment. Counting was first performed manually; simultaneously, ImagePro was used to count automatically. The manual and automatic bubble counts were compared, as were the time required to obtain them and their difficulty. After counting, the detectors' standardized responses were calculated in both cases according to the BDS manual, and these were also compared. The results show that counting becomes very hard at large numbers of bubbles, and that the variation between counts grows as well. Because of the good agreement between manual counting and the custom program, the latter proved a good alternative in practical and economic terms. Despite the good results, the custom program needs further adjustment to achieve higher accuracy at high bubble counts for neutron measurement applications. (author)

  19. Pressure-based impact method to count bedload particles

    Science.gov (United States)

    Antico, Federica; Mendes, Luís; Aleixo, Rui; Ferreira, Rui M. L.

    2017-04-01

-channel flow, was analysed. All tests featured a 90 s data-collection period. For a detailed description of the laboratory facilities and test conditions see Mendes et al. (2016). Results from the MiCas system were compared with those obtained from the analysis of high-speed video footage, and the two techniques showed good agreement. The measurements carried out show that the MiCas system is able to track particle impacts in real time within an error margin of 2.0%. Repeated tests under the same conditions established the repeatability of the MiCas system. Derived quantities such as bedload transport rates, Eulerian auto-correlation functions and structure functions are also in close agreement with measurements based on optical methods. The main advantages of the MiCas system relative to digital image processing methods are: a) independence from optical access, avoiding problems with light-intensity variations and oscillating free surfaces; b) the small volume of data associated with particle counting, which makes it possible to acquire very long data series (hours, days) of particle impacts. In the considered cases, it would take more than two hours to generate 1 MB of data; for the current validation tests, 90 s of acquisition generated 25 GB of images but only 11 kB of MiCas data. On the other hand, the time needed to process the digital images may amount to days, effectively limiting their use to short time series. c) the possibility of real-time measurements, allowing problems to be detected during the experiments and minimizing some post-processing steps. This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 granted by the National Foundation for Science and Technology (FCT). References: Mendes L., Antico F., Sanches P., Alegria F., Aleixo R., and Ferreira RML. (2016). A particle counting system for

  20. Edge detection of optical subaperture image based on improved differential box-counting method

    Science.gov (United States)

    Li, Yi; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-01-01

    Optical synthetic aperture imaging is an effective approach to improving imaging resolution. Compared with a monolithic mirror system, the image produced by an optical synthetic aperture system is often more complex at the edges, and the gaps between segments make stitching difficult, so extracting the edges of each subaperture image is necessary for effective stitching. Fractal dimension, as a measurable feature, can describe the texture characteristics of an image surface, which provides a new approach to edge detection. In our research, an improved differential box-counting method is used to calculate the fractal dimension of the image, and the obtained fractal dimension is then mapped to a grayscale image to detect edges. Compared with the original differential box-counting method, this method has two improvements. First, the box-counting mechanism is modified so that a box of fixed height is replaced by a box of adaptive height, which solves the problem of over-counting the number of boxes covering the image intensity surface. Second, an image reconstruction method based on a super-resolution convolutional neural network is used to enlarge small images, which solves the problem that the fractal dimension cannot be calculated accurately for small images, while maintaining the scale invariance of the fractal dimension. The experimental results show that the proposed algorithm effectively suppresses noise and has a lower false detection rate than traditional edge detection algorithms. In addition, the algorithm maintains the integrity and continuity of image edges while retaining important edge information.
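    A minimal version of the classical (non-improved) differential box-counting estimate for a square grayscale image might look as follows. The grid sizes and the fixed box height are the textbook choices, not the adaptive-height variant proposed in the paper.

```python
from math import ceil, log

def dbc_dimension(img, sizes=(2, 4, 8)):
    """Classical differential box-counting fractal dimension of a square
    grayscale image (list of lists, values 0..255)."""
    m = len(img)
    pts = []
    for s in sizes:
        h = s * 256 / m                      # fixed box height at this scale
        n_r = 0
        for bi in range(0, m, s):
            for bj in range(0, m, s):
                block = [img[i][j] for i in range(bi, bi + s)
                                   for j in range(bj, bj + s)]
                # boxes needed to cover the intensity surface over this block
                n_r += ceil((max(block) + 1) / h) - ceil((min(block) + 1) / h) + 1
        pts.append((log(m / s), log(n_r)))
    # least-squares slope of log(N_r) against log(1/r)
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

    A perfectly flat image is the usual sanity check: its intensity surface is a plane, so the estimated dimension is 2.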

  1. Recommended methods for monitoring change in bird populations by counting and capture of migrants

    Science.gov (United States)

    David J. T. Hussell; C. John Ralph

    2005-01-01

    Counts and banding captures of spring or fall migrants can generate useful information on the status and trends of the source populations. To do so, the counts and captures must be taken and recorded in a standardized and consistent manner. We present recommendations for field methods for counting and capturing migrants at intensively operated sites, such as bird...

  2. Detector Motion Method to Increase Spatial Resolution in Photon-Counting Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Daehee; Park, Kyeongjin; Lim, Kyung Taek; Cho, Gyuseong [Korea Advanced Institute of Science and Technology, Daejon (Korea, Republic of)

    2017-03-15

    Medical imaging requires high spatial resolution to identify fine lesions. Photon-counting detectors have been rapidly replacing energy-integrating detectors in medical imaging due to their high spatial resolution, high efficiency and low noise. Spatial resolution in a photon-counting image is determined by the pixel size: the smaller the pixel, the higher the spatial resolution of the image. However, reducing pixel size requires redesigning the detector, and an expensive fine process is required to integrate the signal processing unit into the reduced pixel. Furthermore, as pixel size decreases, charge sharing severely deteriorates spatial resolution. To increase spatial resolution, we propose a detector motion method using a large-pixel detector that is less affected by charge sharing. To verify the proposed method, we utilized a UNO-XRI photon-counting detector (1-mm CdTe, Timepix chip) at a maximum X-ray tube voltage of 80 kVp. Applying the proposed method to a 110-μm-pixel detector achieved a spatial resolution similar to that of a 55-μm-pixel image, with a higher signal-to-noise ratio. The proposed method can thus increase spatial resolution without a pixel redesign when pixels suffer severely from charge sharing as pixel size is reduced.

  3. Statistical Methods in Integrative Genomics

    Science.gov (United States)

    Richardson, Sylvia; Tseng, George C.; Sun, Wei

    2016-01-01

    Statistical methods in integrative genomics aim to answer important biology questions by jointly analyzing multiple types of genomic data (vertical integration) or aggregating the same type of data across multiple studies (horizontal integration). In this article, we introduce different types of genomic data and data resources, and then review statistical methods of integrative genomics, with emphasis on the motivation and rationale of these methods. We conclude with some summary points and future research directions. PMID:27482531

  4. Diverse methods for integrable models

    NARCIS (Netherlands)

    Fehér, G.

    2017-01-01

    This thesis is centered around three topics sharing integrability as a common theme, and explores different methods in the field of integrable models. The first two chapters concern integrable lattice models in statistical physics; the last chapter describes an integrable quantum chain.

  5. The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures

    Science.gov (United States)

    Stephenson, W. Kirk

    2009-01-01

    A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes. (Contains 4 notes.)
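    The two rules the method encodes can be sketched directly in code. This is an illustrative implementation of the standard significant-figure rules (not code from the article), and it ignores edge cases such as scientific notation.

```python
def sig_figs(num_str):
    """Count significant figures of a plain decimal string.
    With a decimal point ('dot'), significant digits run from the first
    nonzero digit to the end; without one, trailing zeros don't count."""
    digits = num_str.lstrip("+-")
    if "." in digits:
        return len(digits.replace(".", "").lstrip("0"))
    return len(digits.lstrip("0").rstrip("0"))

print(sig_figs("0.00520"))   # → 3
print(sig_figs("5200"))      # → 2
print(sig_figs("52.00"))     # → 4
```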

  6. Characterization of Sphinx1 ASIC X-ray detector using photon counting and charge integration

    Science.gov (United States)

    Habib, A.; Arques, M.; Moro, J.-L.; Accensi, M.; Stanchina, S.; Dupont, B.; Rohr, P.; Sicard, G.; Tchagaspanian, M.; Verger, L.

    2018-01-01

    Sphinx1 is a novel pixel architecture adapted for X-ray imaging; it detects radiation by photon counting and charge integration. In photon counting mode, each photon is compensated by one or more counter-charges, typically of 100 electrons (e-) each. The number of counter-charges required gives a measure of the incoming photon energy, thus allowing spectrometric detection. Pixels can also detect radiation by integrating the charges deposited by all incoming photons during one image frame and converting this analog value into a digital response with a least significant bit (LSB) of 100 electrons, based on the same counter-charge concept. A proof-of-concept test chip measuring 5 mm × 5 mm, with 200 μm × 200 μm pixels, has been produced and characterized. This paper details the architecture and the counter-charge design, and describes the two modes of operation: photon counting and charge integration. The first performance measurements for this test chip are presented. Noise was found to be ~80 e- rms in photon counting mode, with a power consumption of only 0.9 μW/pixel for the static analog part and 0.3 μW/pixel for the static digital part.

  7. Counting hard-to-count populations: the network scale-up method for public health

    Science.gov (United States)

    Bernard, H Russell; Hallett, Tim; Iovita, Alexandrina; Johnsen, Eugene C; Lyerla, Rob; McCarty, Christopher; Mahy, Mary; Salganik, Matthew J; Saliuk, Tetiana; Scutelniciuc, Otilia; Shelley, Gene A; Sirinirund, Petchsri; Weir, Sharon

    2010-01-01

    Estimating sizes of hidden or hard-to-reach populations is an important problem in public health. For example, estimates of the sizes of populations at highest risk for HIV and AIDS are needed for designing, evaluating and allocating funding for treatment and prevention programmes. A promising approach to size estimation, relatively new to public health, is the network scale-up method (NSUM), involving two steps: estimating the personal network size of the members of a random sample of a total population and, with this information, estimating the number of members of a hidden subpopulation of the total population. We describe the method, including two approaches to estimating personal network sizes (summation and known population). We discuss the strengths and weaknesses of each approach and provide examples of international applications of the NSUM in public health. We conclude with recommendations for future research and evaluation. PMID:21106509
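    The basic scale-up estimator described here is simple enough to sketch: if respondent i has personal network size c_i and reports m_i contacts in the hidden subpopulation, its size in a total population of N is estimated as N·Σm_i / Σc_i. All numbers below are invented for illustration.

```python
def nsum_estimate(m, c, population_size):
    """Basic network scale-up estimate of a hidden subpopulation's size."""
    return population_size * sum(m) / sum(c)

m = [1, 0, 2, 0, 1]            # reported hidden-population contacts per respondent
c = [250, 300, 400, 150, 300]  # estimated personal network sizes
est = nsum_estimate(m, c, 1_000_000)
print(round(est))              # → 2857
```

    In practice the c_i themselves are estimated first, by the summation or known-population approach the abstract mentions.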

  8. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'

    DEFF Research Database (Denmark)

    de Nijs, Robin

    2015-01-01

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half-count image had a severe impact on counting statistics...

  9. Variants of the Borda count method for combining ranked classifier hypotheses

    NARCIS (Netherlands)

    van Erp, Merijn; Schomaker, Lambert; Schomaker, Lambert; Vuurpijl, Louis

    2000-01-01

    The Borda count is a simple yet effective method of combining rankings. In pattern recognition, classifiers are often able to return a ranked set of results. Several experiments have been conducted to test the ability of the Borda count and two variant methods to combine these ranked classifier hypotheses.
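    The plain (unmodified) Borda count can be sketched as follows: each class earns points equal to the number of classes ranked below it in each classifier's ranking, and points are summed across classifiers. The variants studied in the paper are not reproduced here.

```python
def borda(rankings):
    """Combine ranked class lists: points = number of classes ranked below."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, label in enumerate(ranking):
            scores[label] = scores.get(label, 0) + (n - 1 - pos)
    return max(scores, key=scores.get)

# Three classifiers rank three classes; 'a' wins with 5 points.
print(borda([["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]))  # → a
```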

  10. A novel method for detecting and counting overlapping tracks in SSNTD by image processing techniques

    International Nuclear Information System (INIS)

    Ab Azar, N.; Babakhani, A.; Broumandnia, A.; Sepanloo, K.

    2016-01-01

    Overlapping object detection and counting is a challenge in image processing. A new method for detecting and counting overlapping circles is presented in this paper. The method is based on pattern recognition and feature extraction using "neighborhood values" in an object image, implemented with image processing techniques. Junction points are detected by assigning a value to each pixel in the image; as is shown, the neighborhood values at junction points are larger than those at other points. This distinction in neighborhood values is the main feature used to identify the junction points and to count the overlapping tracks. The method can be used for recognizing and counting charged particle tracks, blood cells and also cancer cells. It is called "Track Counting based on Neighborhood Values" and is symbolized by "TCNV". - Highlights: • A new method is introduced to recognize nuclear tracks by image processing. • The method identifies neighborhood pixels at junction points in overlapping tracks. • Enhanced counting of overlapping tracks. • The new counting system behaves linearly up to densities of 300,000 tracks per cm². • With the new method, tracks can be recognized even when 10 or more of them overlap.

  11. K-edge energy-based calibration method for photon counting detectors

    Science.gov (United States)

    Ge, Yongshuai; Ji, Xu; Zhang, Ran; Li, Ke; Chen, Guang-Hong

    2018-01-01

    In recent years, potential applications of energy-resolved photon counting detectors (PCDs) in the x-ray medical imaging field have been actively investigated. Unlike conventional x-ray energy integration detectors, PCDs count the number of incident x-ray photons within certain energy windows. For PCDs, the interactions between x-ray photons and photoconductor generate electronic voltage pulse signals. The pulse height of each signal is proportional to the energy of the incident photons. By comparing the pulse height with the preset energy threshold values, x-ray photons with specific energies are recorded and sorted into different energy bins. To quantitatively understand the meaning of the energy threshold values, and thus to assign an absolute energy value to each energy bin, energy calibration is needed to establish the quantitative relationship between the threshold values and the corresponding effective photon energies. In practice, the energy calibration is not always easy, due to the lack of well-calibrated energy references for the working energy range of the PCDs. In this paper, a new method was developed to use the precise knowledge of the characteristic K-edge energy of materials to perform energy calibration. The proposed method was demonstrated using experimental data acquired from three K-edge materials (viz., iodine, gadolinium, and gold) on two different PCDs (Hydra and Flite, XCounter, Sweden). Finally, the proposed energy calibration method was further validated using a radioactive isotope (Am-241) with a known decay energy spectrum.
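    The calibration step can be sketched as a linear fit of known K-edge energies against the threshold values at which those edges are observed. The K-edge energies below are the standard tabulated values; the threshold readings are invented for illustration.

```python
# Known K-edge energies (keV) of the three calibration materials.
K_EDGES_KEV = {"I": 33.17, "Gd": 50.24, "Au": 80.72}
# Hypothetical threshold (DAC) values at which each edge is observed.
thresholds = {"I": 111.9, "Gd": 168.8, "Au": 270.4}

xs = [thresholds[k] for k in K_EDGES_KEV]
ys = [K_EDGES_KEV[k] for k in K_EDGES_KEV]

# Least-squares line: energy = a * threshold + b
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # keV per threshold step
b = (sy - a * sx) / n

def threshold_to_kev(t):
    """Map a threshold setting to an effective photon energy."""
    return a * t + b
```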

  12. Division of methods for counting helminths’ eggs and the problem of efficiency of these methods

    Directory of Open Access Journals (Sweden)

    Katarzyna Jaromin-Gleń

    2017-03-01

    From the sanitary and epidemiological points of view, information concerning the developmental forms of intestinal parasites, especially helminth eggs present in the environment (in water, soil, sandpits, sewage sludge, and crops watered with wastewater), is very important. The methods described in the relevant literature may be classified in various ways: primarily by the methodology used to prepare samples from environmental matrices for analysis, and by the counting method itself and the chambers or instruments used for counting. The methods can also be classified by the manner and time of identification of the counted individuals, or by whether staining is required. Standard methods for identifying helminth eggs in environmental matrices are usually characterized by low efficiency, from about 30% to about 80%. The efficiency of a method may be measured in two ways, either with an internal standard or with the 'Split/Spike' method. By measuring both the efficiency of the method and the number of eggs in an examined object, the 'actual' number of eggs may be calculated by multiplying the number of helminth eggs found by the inverse of the efficiency.
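    The correction stated at the end of the abstract is a one-line computation: with a measured recovery efficiency e (typically 0.3 to 0.8 here), the 'actual' egg count is the observed count multiplied by 1/e. The function name and numbers below are illustrative.

```python
def corrected_egg_count(observed, efficiency):
    """'Actual' egg count = observed count times the inverse efficiency."""
    return observed / efficiency

# 24 eggs counted with a 50% recovery efficiency imply about 48 eggs present.
print(corrected_egg_count(24, 0.5))   # → 48.0
```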

  13. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    Science.gov (United States)

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice, when simulating half-count (or less) images from full-count images. It simulates correctly the statistical properties, also in the case of rounding off of the images.
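    The mechanism that makes Poisson resampling statistically exact is binomial thinning: keeping each of a pixel's n recorded counts independently with probability f turns a Poisson(λ) pixel into a Poisson(fλ) pixel exactly, whereas redrawing from a distribution fitted to f·n only approximates this. A minimal sketch (not the authors' Matlab code):

```python
import random

def thin_pixel(n, f, rng):
    """Keep each of n counts independently with probability f (binomial thinning)."""
    return sum(1 for _ in range(n) if rng.random() < f)

rng = random.Random(42)
full = [rng.randint(50, 150) for _ in range(1000)]    # stand-in for image pixels
half = [thin_pixel(n, 0.5, rng) for n in full]        # simulated half-count image
ratio = sum(half) / sum(full)                         # close to 0.5
```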

  14. Hybrid statistics-simulations based method for atom-counting from ADF STEM images

    Energy Technology Data Exchange (ETDEWEB)

    De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2017-06-15

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.

  15. Accuracy of Platelet Counting by Optical and Impedance Methods in Patients with Thrombocytopaenia and Microcytosis

    Directory of Open Access Journals (Sweden)

    Mohamed-Rachid Boulassel

    2015-11-01

    Objectives: Obtaining accurate platelet counts in microcytic blood samples is challenging, even with the most reliable automated haematology analysers. The CELL-DYN™ Sapphire (Abbott Laboratories, Chicago, Illinois, USA) analyser uses both optical density and electronic impedance methods for platelet counting. This study aimed to evaluate the accuracy of the optical density and electrical impedance methods in determining true platelet counts in thrombocytopaenic samples with microcytosis, as defined by a low mean corpuscular volume (MCV) of red blood cells. Additionally, the impact of microcytosis on platelet count accuracy was evaluated. Methods: This study was carried out between February and December 2014 at the Haematology Laboratory of the Sultan Qaboos University Hospital in Muscat, Oman. Blood samples were collected and analysed from 189 patients with thrombocytopaenia and MCV values of <76 femtolitres. Platelet counts were tested using both optical and impedance methods. Stained peripheral blood films for each sample were then reviewed as a reference method to confirm platelet counts. Results: The platelet counts estimated by the impedance method were on average 30% higher than those estimated by the optical method (P <0.001). The estimated intraclass correlation coefficient was 0.52 (95% confidence interval: 0.41–0.62), indicating moderate reliability between the methods. The degree of agreement between methods ranged from -85.5 to 24.3 with an estimated bias of -30, suggesting that these methods generate different platelet results. Conclusion: The impedance method significantly overestimated platelet counts in microcytic and thrombocytopaenic blood samples. Further attention is therefore needed to improve the accuracy of platelet counts, particularly for patients with conditions associated with microcytosis.

  16. A high count rate position decoding and energy measuring method for nuclear cameras using Anger logic detectors

    International Nuclear Information System (INIS)

    Wong, W.H.; Li, H.; Uribe, J.

    1998-01-01

    A new method for processing signals from Anger position-sensitive detectors used in gamma cameras and PET is proposed for very high count-rate imaging, where multiple-event pileups are the norm. This method is designed to sort out and recover every impinging event from multiple-event pileups while maximizing the collection of scintillation signal for every event, to achieve optimal accuracy in the measurement of energy and position. For every detected event, the method cancels the remnant signals from previous events and excludes the pileup of signals from following events. The remnant subtraction is exact even for multiple pileup events. A prototype circuit for energy recovery demonstrated that the maximum count rate can be increased by more than 10 times compared to the pulse-shaping method, while the energy resolution remains as good as pulse shaping (or fixed integration) at low count rates. At 2 × 10⁶ events/s on NaI(Tl), the true counts acquired with this method are 3.3 times higher than with the delay-line clipping method (256 ns clipping), owing to events recovered from pileups. Pulse-height spectra up to 3.5 × 10⁶ events/s have been studied. Monte Carlo simulation studies have been performed for image-quality comparisons between the different processing methods.

  17. A method to correct the inherent flaw of the asynchronization direct counting circuit

    International Nuclear Information System (INIS)

    Wang Renfei; Liu Congzhan; Jin Yongjie; Zhang Zhi; Li Yanguo

    2003-01-01

    As an inherent flaw of the asynchronization direct counting circuit, crosstalk, which results from the randomness of the timing signal, always exists between two adjacent channels. In order to reduce the counting error caused by this crosstalk, the authors propose an effective correction method after analysing the mechanism of the crosstalk.

  18. Standardization of Tc-99 by three liquid scintillation counting methods

    International Nuclear Information System (INIS)

    Wyngaardt, W.M. van; Staden, M.J. van; Lubbe, J.; Simpson, B.R.S.

    2014-01-01

    The NMISA participated in the international key comparison of the pure beta-emitter Technetium-99, CCRI(II)-K2.Tc-99. The comparison solution was standardized using three methods, namely the TDCR efficiency calculation method, the CIEMAT/NIST efficiency tracing method and the 4π(LS)β–γ coincidence tracing method with Co-60 as tracer. Excellent agreement between results obtained with the three methods confirmed the applicability of the beta spectral shape given by the latest (2011) DDEP evaluation of Tc-99 decay data, rather than the earlier (2004) evaluation. - Highlights: • Activity concentration of Tc-99 solution measured using three LSC methods. • Methods used are TDCR, CNET and 4π(LS)β–γ coincidence tracing. • Beta spectral shape confirmed by agreement between three methods

  19. A method for the determination of counting efficiencies in γ-spectrometric measurements with HPGe detectors

    International Nuclear Information System (INIS)

    Bolivar, J.P.; Garcia-Leon, M.

    1996-01-01

    In this paper a general method for γ-ray efficiency calibration is presented. The method takes into account the differences in density and counting geometry between the real sample and the calibration sample. It is based on the γ-transmission method and gives the correction factor f as a function of Eγ, the density and the counting geometry. Although developed for soil samples, its underlying working philosophy is useful for any sample whose geometry can be adequately reproduced. (orig.)
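The record does not give the explicit form of the correction factor f. A common transmission-based self-attenuation correction of this kind (slab geometry, Cutshall-style; an assumption here, not necessarily the authors' exact expression) is f = ln(T)/(T − 1), where T = I/I0 is the measured γ transmission through the sample at the energy of interest:

```python
import math

def transmission_correction(T):
    """Self-attenuation correction factor from a measured gamma
    transmission T = I/I0 through the sample (slab geometry).
    With attenuation coefficient mu and thickness d, T = exp(-mu*d),
    and the attenuated-to-unattenuated count ratio is
    (1 - exp(-mu*d)) / (mu*d); the correction factor is its inverse,
    which simplifies to ln(T) / (T - 1)."""
    if T >= 1.0:
        return 1.0          # no measurable attenuation, no correction
    return math.log(T) / (T - 1.0)

# Example: 60% transmission -> measured counts scaled up by ~28%
f = transmission_correction(0.60)
print(f"f = {f:.3f}")
```

Denser samples (lower T) give larger correction factors, consistent with f depending on both the photon energy and the sample density.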

  20. Integral equation methods for electromagnetics

    CERN Document Server

    Volakis, John

    2012-01-01

    This text/reference is a detailed look at the development and use of integral equation methods for electromagnetic analysis, specifically for antennas and radar scattering. Developers and practitioners will appreciate the broad-based approach to understanding and utilizing integral equation methods and the unique coverage of historical developments that led to the current state-of-the-art. In contrast to existing books, Integral Equation Methods for Electromagnetics lays the groundwork in the initial chapters so students and basic users can solve simple problems and work their way up to the mo

  1. The Goddard Integral Field Spectrograph at Apache Point Observatory: Current Status and Progress Towards Photon Counting

    Science.gov (United States)

    McElwain, Michael W.; Grady, Carol A.; Bally, John; Brinkmann, Jonathan V.; Bubeck, James; Gong, Qian; Hilton, George M.; Ketzeback, William F.; Lindler, Don; Llop Sayson, Jorge; Malatesta, Michael A.; Norton, Timothy; Rauscher, Bernard J.; Rothe, Johannes; Straka, Lorrie; Wilkins, Ashlee N.; Wisniewski, John P.; Woodgate, Bruce E.; York, Donald G.

    2015-01-01

    We present the current status and progress towards photon counting with the Goddard Integral Field Spectrograph (GIFS), a new instrument at the Apache Point Observatory's ARC 3.5m telescope. GIFS is a visible light imager and integral field spectrograph operating from 400-1000 nm over a 2.8' x 2.8' and 14' x 14' field of view, respectively. As an IFS, GIFS obtains over 1000 spectra simultaneously and its data reduction pipeline reconstructs them into an image cube that has 32 x 32 spatial elements and more than 200 spectral channels. The IFS mode can be applied to a wide variety of science programs including exoplanet transit spectroscopy, protostellar jets, the galactic interstellar medium probed by background quasars, Lyman-alpha emission line objects, and spectral imaging of galactic winds. An electron-multiplying CCD (EMCCD) detector enables photon counting in the high spectral resolution mode to be demonstrated at the ARC 3.5m in early 2015. The EMCCD work builds upon successful operational and characterization tests that have been conducted in the IFS laboratory at NASA Goddard. GIFS sets out to demonstrate an IFS photon-counting capability on-sky in preparation for future exoplanet direct imaging missions such as the AFTA-Coronagraph, Exo-C, and ATLAST mission concepts. This work is supported by the NASA APRA program under RTOP 10-APRA10-0103.

  2. Comparison of certain microbial counting methods which are ...

    African Journals Online (AJOL)

    STORAGESEVER

    2009-12-15

    Dec 15, 2009 ... tinuous food analyses. For example, the ... plate method for the enumeration of E. coli in foods. How- ... Although its preservation for 48 h is recommended in certain ..... Innovative Fungicide For Leather Industry: Essential Oil of.

  3. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.

    Science.gov (United States)

    Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-06-01

    This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.
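The tile counting method referred to above is the box-counting estimate of fractal dimension: tile the binary image at several tile sizes s, count the occupied tiles N(s), and take D as the negative slope of log N(s) versus log s. A minimal sketch on a binary (segmented trabecular) image, with tile sizes in pixels (mapping the paper's 0.132-0.396 mm range to pixels depends on the detector pitch, which is not given here):

```python
import numpy as np

def box_count_dimension(binary, sizes):
    """Estimate fractal dimension by the box (tile) counting method:
    count occupied tiles N(s) for each tile size s, then fit
    log N(s) = -D log s + c and return D."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        occupied = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if binary[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a completely filled square has dimension 2
img = np.ones((64, 64), dtype=bool)
d = box_count_dimension(img, [2, 4, 8, 16])
print(f"D = {d:.3f}")
```

Real trabecular patterns give non-integer D between 1 and 2, and, as the paper reports, the estimate is sensitive to the chosen range of tile sizes.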

  4. Pulse-shape discrimination in radioanalytical methods. Part I. Delayed fission neutron counting

    International Nuclear Information System (INIS)

    Posta, S.; Vacik, J.; Hnatowicz, V.; Cervena, J.

    1999-01-01

    In this study the principle of pulse shape discrimination (PSD) has been employed in delayed fission neutron counting (DNC) method. Effective elimination of unwanted gamma background signals in measured radiation spectra has been proved. (author)

  5. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method.

    Science.gov (United States)

    Veta, Mitko; van Diest, Paul J; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P W

    2016-01-01

    Tumor proliferation speed, most commonly assessed by counting mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. The development of automatic mitosis detection methods has received considerable interest in recent years. Automatic image analysis is viewed as a solution to the problem of the subjectivity of mitosis counting by pathologists. In this paper we describe the results of an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an "external" dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects of smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial

  6. A study of pile-up in integrated time-correlated single photon counting systems.

    Science.gov (United States)

    Arlt, Jochen; Tyndall, David; Rae, Bruce R; Li, David D-U; Richardson, Justin A; Henderson, Robert K

    2013-10-01

    Recent demonstration of highly integrated, solid-state, time-correlated single photon counting (TCSPC) systems in CMOS technology is set to provide significant increases in performance over existing bulky, expensive hardware. Arrays of single-photon avalanche diode (SPAD) detectors, timing channels, and signal processing can be integrated on a single silicon chip with a degree of parallelism and computational speed that is unattainable by discrete photomultiplier tube and photon counting card solutions. New multi-channel, multi-detector TCSPC sensor architectures with greatly enhanced throughput, due to minimal detector transit (dead) time or timing channel dead time, are now feasible. In this paper, we study the potential for future integrated, solid-state TCSPC sensors to exceed the photon pile-up limit, through analytic formulae and simulation. The results are validated using a 10% fill factor SPAD array and an 8-channel, 52 ps resolution time-to-digital conversion architecture with embedded lifetime estimation. It is demonstrated that pile-up-insensitive acquisition is attainable at greater than 10 times the pulse repetition rate, providing over 60 dB of extended dynamic range to the TCSPC technique. Our results predict future CMOS TCSPC sensors capable of live-cell transient observations in confocal scanning microscopy, improved resolution of near-infrared optical tomography systems, and fluorescence lifetime activated cell sorting.
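The pile-up limit discussed above follows from Poisson photon statistics: in classic single-stop TCSPC only the first photon per excitation cycle is timed, so any cycle with two or more detected photons biases the histogram toward early arrival times. A sketch of the resulting pile-up fraction under a simple Poisson model (an illustration of the limit, not the paper's full analysis):

```python
import math

def pileup_fraction(mu):
    """Fraction of recorded TCSPC events whose excitation cycle
    contained more than one detected photon, assuming Poisson photon
    statistics with mean mu detected photons per excitation pulse.
    In classic single-stop TCSPC only the first photon is timed, so
    this fraction measures the histogram distortion from pile-up."""
    p_ge1 = 1.0 - math.exp(-mu)                 # P(at least one photon)
    p_ge2 = p_ge1 - mu * math.exp(-mu)          # P(two or more photons)
    return p_ge2 / p_ge1

for mu in (0.01, 0.1, 1.0):
    print(f"mu = {mu:4}: pile-up fraction = {pileup_fraction(mu):.3f}")
```

At the classic operating point of about 0.01 detected photons per pulse the distortion stays near half a percent, while at one photon per pulse over 40% of recorded events are affected, which is why multi-channel architectures that time more than one photon per cycle extend the usable dynamic range.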

  7. A comparative analysis of OTF, NPS, and DQE in energy integrating and photon counting digital x-ray detectors

    International Nuclear Information System (INIS)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2010-01-01

    Purpose: One of the benefits of photon counting (PC) detectors over energy integrating (EI) detectors is the absence of many additive noise sources, such as electronic noise and secondary quantum noise. The purpose of this work is to demonstrate that thresholding voltage gains to detect individual x rays actually generates an unexpected source of white noise in photon counters. Methods: To distinguish the two detector types, their point spread function (PSF) is interpreted differently. The PSF of the energy integrating detector is treated as a weighting function for counting x rays, while the PSF of the photon counting detector is interpreted as a probability. Although this model ignores some subtleties of real imaging systems, such as scatter and the energy-dependent amplification of secondary quanta in indirect-converting detectors, it is useful for demonstrating fundamental differences between the two detector types. From first principles, the optical transfer function (OTF) is calculated as the continuous Fourier transform of the PSF, the noise power spectrum (NPS) is determined by the discrete space Fourier transform (DSFT) of the autocovariance of signal intensity, and the detective quantum efficiency (DQE) is found from combined knowledge of the OTF and NPS. To illustrate the calculation of the transfer functions, the PSF is modeled as the convolution of a Gaussian with the product of rect functions. The Gaussian reflects the blurring of the x-ray converter, while the rect functions model the sampling of the detector. Results: The transfer functions are first calculated assuming outside noise sources such as electronic noise and secondary quantum noise are negligible. It is demonstrated that while the OTF is the same for two detector types possessing an equivalent PSF, a frequency-independent (i.e., "white") difference in their NPS exists, such that NPS_PC ≥ NPS_EI and hence DQE_PC ≤ DQE_EI. The necessary and sufficient condition for equality is that the PSF
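The OTF computation described above (Fourier transform of the PSF) can be sketched numerically. Here only the 1-D Gaussian converter blur is modeled, with the detector-sampling rect factors omitted for brevity, and the result is checked against the analytic transform of a Gaussian:

```python
import numpy as np

# 1-D Gaussian PSF modelling the x-ray converter blur (sampling omitted)
x = np.linspace(-5.0, 5.0, 1024)
dx = x[1] - x[0]
sigma = 0.5
psf = np.exp(-x**2 / (2.0 * sigma**2))
psf /= psf.sum()                       # normalize to unit total weight

# OTF as the (discrete) Fourier transform of the PSF; MTF is its modulus
mtf = np.abs(np.fft.fftshift(np.fft.fft(psf)))

# Analytic check: the Fourier transform of a Gaussian is a Gaussian
freqs = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
analytic = np.exp(-2.0 * (np.pi * freqs * sigma)**2)
err = np.max(np.abs(mtf - analytic))
print(f"max deviation from analytic MTF: {err:.2e}")
```

The sampled transform matches the analytic Gaussian closely because the PSF is well resolved and decays to negligible values within the window; including the rect sampling factors would multiply in the corresponding sinc terms.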

  8. Validation parameters of instrumental method for determination of total bacterial count in milk

    Directory of Open Access Journals (Sweden)

    Nataša Mikulec

    2004-10-01

    Full Text Available The method of flow cytometry, as a rapid, instrumental and routine microbiological method, is used for determination of the total bacterial count in milk. The results of flow cytometry are expressed as individual bacterial cell counts. Problems regarding the interpretation of the results for the total bacterial count can be avoided by transforming the results of the flow cytometry method onto the scale of the reference method (HRN ISO 6610:2001). The method of flow cytometry, like any analytical method, requires validation and verification according to the HRN EN ISO/IEC 17025:2000 standard. This paper describes the validation parameters: accuracy, precision, specificity, range, robustness and measurement uncertainty for the method of flow cytometry.

  9. Amplitude distributions of dark counts and photon counts in NbN superconducting single-photon detectors integrated with the HEMT readout

    Energy Technology Data Exchange (ETDEWEB)

    Kitaygorsky, J. [Kavli Institute of Nanoscience Delft, Delft University of Technology, 2600 GA Delft (Netherlands); Department of Electrical and Computer Engineering and Laboratory for Laser Energetics, University of Rochester, Rochester, NY 14627-0231 (United States); Słysz, W., E-mail: wslysz@ite.waw.pl [Institute of Electron Technology, PL-02 668 Warsaw (Poland); Shouten, R.; Dorenbos, S.; Reiger, E.; Zwiller, V. [Kavli Institute of Nanoscience Delft, Delft University of Technology, 2600 GA Delft (Netherlands); Sobolewski, Roman [Department of Electrical and Computer Engineering and Laboratory for Laser Energetics, University of Rochester, Rochester, NY 14627-0231 (United States)

    2017-01-15

    Highlights: • A new operation regime of NbN superconducting single-photon detectors (SSPDs). • A better understanding of the origin of dark counts generated by the detector. • A promise of PNR functionality in SSPD measurements. - Abstract: We present a new operation regime of NbN superconducting single-photon detectors (SSPDs), integrating them with a low-noise cryogenic high-electron-mobility transistor and a high-load resistor. The integrated sensors are designed to give a better understanding of the origin of dark counts triggered by the detector, as our scheme allows us to distinguish the origin of dark pulses from the actual photon pulses in SSPDs. The presented approach is based on a statistical analysis of the amplitude distributions of recorded trains of SSPD photoresponse transients. It also makes it possible to obtain information on the energy of the incident photons, and demonstrates some photon-number-resolving capability of meander-type SSPDs.

  10. Liquid Scintillation Counting Standardization of 22NaCl by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Rodriguez Barquero, L.; Grau Carles, A.; Grau Malonda, A.

    1995-09-01

    We describe a procedure for preparing a stable solution of 22NaCl for liquid scintillation counting, and its counting stability and spectral evolution in Insta-Gel® are studied. The solution has been standardised in terms of activity concentration by the CIEMAT/NIST method, with discrepancies between experimental and computed efficiencies lower than 0.4% and an overall uncertainty of 0.35%

  11. Liquid Scintillation Counting Standardization of 22NaCl by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Rodriguez Barquero, L.; Grau Carles, A.; Grau Malonda, A.

    1995-01-01

    We describe a procedure for preparing a stable solution of 22NaCl for liquid scintillation counting, and its counting stability and spectral evolution in Insta-Gel® are studied. The solution has been standardised in terms of activity concentration by the CIEMAT/NIST method, with discrepancies between experimental and computed efficiencies lower than 0.4% and an overall uncertainty of 0.35%. (Author) 4 refs

  12. A statistical analysis of count normalization methods used in positron-emission tomography

    International Nuclear Information System (INIS)

    Holmes, T.J.; Ficke, D.C.; Snyder, D.L.

    1984-01-01

    As part of the Positron-Emission Tomography (PET) reconstruction process, annihilation counts are normalized for photon absorption, detector efficiency and detector-pair duty-cycle. Several normalization methods of time-of-flight and conventional systems are analyzed mathematically for count bias and variance. The results of the study have some implications on hardware and software complexity and on image noise and distortion

  13. Determination of plutonium-241 by liquid scintillation counting method and its application to environmental samples

    International Nuclear Information System (INIS)

    Watanabe, Miki; Amano, Hikaru

    1997-03-01

    Radionuclides are usually measured in gross counting mode in liquid scintillation counting (LSC), which measures both α and β pulses. This method can measure radioactivity easily, but its background counting rate is high. The recently reported α-β pulse shape discrimination method (α-β PSD method) in LSC, which distinguishes α pulses from β pulses, shows a low background counting rate and therefore a lower detection limit. The aim of this research is to develop the best method for the determination of 241 Pu, a β-emitter, and of the α-emitting Pu isotopes, which have long half-lives and long residence times in the body. In this research, measurements were carried out on two LSC machines with different scintillators, vial volumes, measurement modes and so on. The following was found. 1. The liquid scintillator based on naphthalene proved to be the best at separating α-rays from β-rays, because the energy transfer between the solvent and the aromatic compounds is fast. 2. The α-β PSD method makes the background counting rate ten times lower than the usual method, improving the measurement performance. 3. It is possible to determine 241 Pu in environmental samples from around Chernobyl by combining LSC with radiochemical separation methods. (author)

  14. Simple analytical technique for liquid scintillation counting of environmental carbon-14 using gel suspension method

    International Nuclear Information System (INIS)

    Okai, Tomio; Wakabayashi, Genichiro; Nagao, Kenjiro; Matoba, Masaru; Ohura, Hirotaka; Momoshima, Noriyuki; Kawamura, Hidehisa

    2000-01-01

    A simple analytical technique for liquid scintillation counting of environmental 14 C was developed. A commercially available gelling agent, N-lauroyl-L-glutamic-α,γ-dibutylamide, was used for gel formation of the samples (gel suspension method) and for the subsequent liquid scintillation counting of 14 C in the form of CaCO 3 . Our procedure for sample preparation is much simpler than the conventional methods and requires no special equipment. The self-absorption, stability and reproducibility of gel suspension samples were investigated in order to evaluate the characteristics of the gel suspension method for 14 C activity measurement. The self-absorption factor is about 70% and decreases slightly as the CaCO 3 weight increases. This is considered to be mainly due to the absorption of β-rays and scintillation light by the CaCO 3 sample itself. No change in the counting rate of the gel suspension samples was observed for more than 2 years after sample preparation. Four samples were used to check the reproducibility of the sample preparation method; the same counting rates of 14 C activity were obtained within the counting error. No change in the counting rate was observed for the 're-gelated' sample. These results show that the gel suspension method is appropriate for 14 C activity measurement by liquid scintillation counting and useful for long-term preservation of samples for repeated measurement. The above analytical technique was applied to actual environmental samples in Fukuoka prefecture, Japan. The results obtained were comparable with those of other researchers and appear to be reasonable. Therefore, the newly developed technique is useful for routine monitoring of environmental 14 C. (author)

  15. Platelet Counts in Insoluble Platelet-Rich Fibrin Clots: A Direct Method for Accurate Determination

    Directory of Open Access Journals (Sweden)

    Yutaka Kitamura

    2018-02-01

    Full Text Available Platelet-rich fibrin (PRF) clots have been used in regenerative dentistry most often, with the assumption that growth factor levels are concentrated in proportion to the platelet concentration. Platelet counts in PRF are generally determined indirectly by platelet counting in other liquid fractions. This study shows a method for direct estimation of platelet counts in PRF. To validate this method by determination of the recovery rate, whole-blood samples were obtained with an anticoagulant from healthy donors, and platelet-rich plasma (PRP) fractions were clotted with CaCl2 by centrifugation and digested with tissue plasminogen activator. Platelet counts were estimated before clotting and after digestion using an automatic hemocytometer. The method was then tested on PRF clots. The quality of the platelets was examined by scanning electron microscopy and flow cytometry. In PRP-derived fibrin matrices, the recovery rates of platelets and white blood cells were 91.6 and 74.6%, respectively, after 24 h of digestion. In PRF clots associated with small and large red thrombi, platelet counts were 92.6 and 67.2% of the respective total platelet counts. These findings suggest that our direct method is sufficient for estimating the number of platelets trapped in an insoluble fibrin matrix and for determining that platelets are distributed in PRF clots and red thrombi roughly in proportion to their individual volumes. Therefore, we propose this direct digestion method for more accurate estimation of platelet counts in most types of platelet-enriched fibrin matrix.

  16. Use of sum-peak and coincidence counting methods for activity standardization of {sup 22}Na

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, E.M. de, E-mail: estela@ird.gov.br [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI/IRD/CNEN), Av. Salvador Allende, s/n, Recreio, CEP 22780-160 Rio de Janeiro (Brazil); Iwahara, A.; Poledna, R. [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI/IRD/CNEN), Av. Salvador Allende, s/n, Recreio, CEP 22780-160 Rio de Janeiro (Brazil); Silva, M.A.L. da [Coordenacao Geral de Instalacoes Nucleares/Comissao Nacional de Energia Nuclear, R. Gal. Severiano, 90 - Botafogo, CEP 22290-901 Rio de Janeiro (Brazil); Tauhata, L. [Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ), Av. Erasmo Braga, 118-6° andar, CEP 20020-000 Centro, Rio de Janeiro (Brazil); Delgado, J.U. [Laboratorio Nacional de Metrologia das Radiacoes Ionizantes (LNMRI/IRD/CNEN), Av. Salvador Allende, s/n, Recreio, CEP 22780-160 Rio de Janeiro (Brazil); Lopes, R.T. [Laboratorio de Instrumentacao Nuclear (LIN/PEN/COPPE/UFRJ), Caixa Postal 68509, CEP 21945-970 Rio de Janeiro (Brazil)

    2012-09-21

    A solution containing the positron emitter 22Na has been absolutely standardized using the 4πβ-γ coincidence counting method and the sum-peak spectrometry counting method. In the 4πβ-γ coincidence method, the activity concentration was measured in two ways: gating on the 1275 keV photopeak and gating on the 1786 keV sum-peak, where knowledge of the β⁺ branching ratio is required. In the sum-peak method, the measurements were carried out using three experimental arrangements: the first composed of a well-type 5 in. × 5 in. NaI(Tl) scintillation crystal; the second a 3 in. × 3 in. NaI(Tl) scintillation crystal placed on top of the first, resulting in a 4π counting geometry; and the third a high-purity coaxial germanium detector. The results obtained by the two methods are compatible within the standard uncertainty values with a coverage factor of k=2 (≈95% confidence level). This means that sum-peak counting, with its much simpler experimental setup than the complex 4πβ-γ coincidence counting system, gives consistent results for the activity standardization of 22Na with smaller uncertainties. Moreover, the time required to obtain the standardization result was considerably shorter than for the coincidence measurements used in this work.
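The appeal of the sum-peak method noted above is that, for an idealized two-photon cascade (here the 511 keV annihilation quantum plus the 1275 keV γ), the textbook Brinkman-style relation gives the activity with the detection efficiencies cancelling out. A sketch with hypothetical count rates (illustrative values, not the paper's data):

```python
# Hypothetical count rates (s^-1) extracted from a 22Na spectrum
N1 = 1200.0    # net rate in the 511 keV photopeak
N2 = 800.0     # net rate in the 1275 keV photopeak
N12 = 150.0    # net rate in the 1786 keV sum peak
T = 5000.0     # total spectrum count rate

# Idealized two-photon cascade with full-energy efficiencies e1, e2:
#   N1 = A*e1*(1-e2), N2 = A*e2*(1-e1), N12 = A*e1*e2,
#   T  = A*(1 - (1-e1)*(1-e2))
# so A = T + N1*N2/N12 -- the efficiencies cancel.
A = T + N1 * N2 / N12
print(f"activity ~ {A:.0f} Bq")
```

Real 22Na standardizations require corrections beyond this idealization (two annihilation photons per decay, angular correlations, summing with Compton events), which is why the paper compares the result against 4πβ-γ coincidence counting.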

  17. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    Directory of Open Access Journals (Sweden)

    Min Hye Jang

    Full Text Available In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI, the average method vs. the hot spot method, and thus to determine which method is more appropriate for predicting the prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and the hot spot method of evaluating Ki-67 LI have good predictive performance for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.

  18. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    Science.gov (United States)

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon

    2017-01-01

    In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (Area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.

  19. A flowrate measurement method by counting of radioactive particles suspended in a liquid

    International Nuclear Information System (INIS)

    Daniel, G.

    1983-04-01

    By external counting of fine β-emitting radioactive particles suspended in a liquid, the flowrate in a system of pipes can be measured. The study comprises three phases: 1. The hydraulic validity of the method is demonstrated in laminar as well as in turbulent flow, under certain conditions of particle size and density and of liquid viscosity. 2. Radioactive labelling of microspheres of serum albumin or ion-exchange resins with indium-113m delivered by a tin-113 → indium-113m generator. 3. Counting with a scintillation detector: a threshold-crossing method is tested with a mechanical or electronic simulator; a statistical study of particle superposition under the detector enables a correction for the resulting counting losses to be proposed. The method provides absolute measurements, but is particularly suitable for measuring relative flowrates in a hydraulic network. It can be continuous and does not perturb the flow or the network. The accuracy of the method is analysed in detail [fr]

  20. Ballistic deficit correction methods for large Ge detectors-high counting rate study

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.

    1995-01-01

    This study presents different ballistic deficit correction methods versus input count rate (from 3 to 50 kcounts/s) using four large Ge detectors of about 70% relative efficiency. It turns out that the Tennelec TC245 linear amplifier in the BDC mode (Hinshaw method) is the best compromise for energy resolution throughout the count-rate range. All correction methods lead to narrow sum-peaks indistinguishable from single γ lines. The full energy peak throughput is found to be representative of the pile-up inspection dead time of the corrector circuits. This work also presents a new and simple representation, plotting simultaneously energy resolution and throughput versus input count rate. (TEC). 12 refs., 11 figs

  1. 17 CFR 275.203(b)(3)-2 - Methods for counting clients in certain private funds.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Methods for counting clients....203(b)(3)-2 Methods for counting clients in certain private funds. (a) For purposes of section 203(b)(3) of the Act (15 U.S.C. 80b-3(b)(3)), you must count as clients the shareholders, limited partners...

  2. Methods of counting ribs on chest CT: the modified sternomanubrial approach

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Kyung Sik; Kim, Sung Jin; Jeon, Min Hee; Lee, Seung Young; Bae, Il Hun [Chungbuk National University, Cheongju (Korea, Republic of)

    2007-08-15

    The purpose of this study was to evaluate the accuracy of each method of counting ribs on chest CT and to propose a new method: the anterior approach using the sternocostal joints. CT scans of 38 rib lesions in 27 patients were analyzed (fracture: 25, metastasis: 11, benign bone disease: 2). Each lesion was independently counted by three radiologists using three different methods for counting ribs: the sternoclavicular approach, the xiphisternal approach and the modified sternomanubrial approach. For the evaluation of each method, the rib lesions were divided into three parts according to the location of the lesion: the upper part (between the first and fourth thoracic vertebrae), the middle part (between the fifth and eighth) and the lower part (between the ninth and twelfth). The most accurate method was the modified sternomanubrial approach (99.1%). The accuracies of the xiphisternal approach and the sternoclavicular approach were 95.6% and 88.6%, respectively. The modified sternomanubrial approach showed the highest accuracy in all three parts (100%, 100% and 97.9%, respectively). We propose a new method for counting ribs, the modified sternomanubrial approach, which was more accurate than the known methods in all parts of the bony thorax, and it may be an easier and quicker method than the others in clinical practice.

  3. Ion counting method and its operational characteristics in gas chromatography-mass spectrometry

    International Nuclear Information System (INIS)

    Fujii, Toshihiro

    1976-01-01

    An ion counting method using a continuous channel electron multiplier, which affords direct detection of very small ion currents, was studied together with its operational characteristics in gas chromatography-mass spectrometry. This method was then applied to the single ion detection technique of GC-MS. The detection limit was measured using various standard samples of low-level concentration. (auth.)

  4. Method Verification Requirements for an Advanced Imaging System for Microbial Plate Count Enumeration.

    Science.gov (United States)

    Jones, David; Cundell, Tony

    2018-01-01

    The Growth Direct™ System that automates the incubation and reading of membrane filtration microbial counts on soybean-casein digest, Sabouraud dextrose, and R2A agar differs from the traditional method only in that micro-colonies on the membrane are counted using an advanced imaging system up to 50% earlier in the incubation. Based on the recommendations in USP Validation of New Microbiological Testing Methods, the system may be implemented in a microbiology laboratory after simple method verification and not a full method validation. LAY ABSTRACT: The Growth Direct™ System that automates the incubation and reading of microbial counts on membranes on solid agar differs from the traditional method only in that micro-colonies on the membrane are counted using an advanced imaging system up to 50% earlier in the incubation time. Based on the recommendations in USP Validation of New Microbiological Testing Methods, the system may be implemented in a microbiology laboratory after simple method verification and not a full method validation. © PDA, Inc. 2018.

  5. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    Science.gov (United States)

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Planktonic primary production evaluation by means of the 14C method with liquid scintillation counting

    International Nuclear Information System (INIS)

    Frangopol, T.P.; Bologa, S.A.

    1979-05-01

    Preliminary results on the planktonic primary production obtained for the first time with the 14 C method off the Romanian Black Sea coast (1977, 1978) and in the Sinoe, Mamaia and Bicaz lakes (1978) are presented, along with a review of this method with special reference to liquid scintillation counting. 140 Refs. (author)

  7. A counting-weighted calibration method for a field-programmable-gate-array-based time-to-digital converter

    International Nuclear Information System (INIS)

    Chen, Yuan-Ho

    2017-01-01

    In this work, we propose a counting-weighted calibration method for a field-programmable-gate-array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The performance of the proposed CWD-TDC with regard to linearity far exceeds that of a TDC with a traditional tapped delay line (TDL) architecture, without the need for non-linearity calibration. When implemented in a Xilinx Virtex-5 FPGA device, the proposed CWD-TDC achieved a time resolution of 60 ps with integral non-linearity (INL) and DNL of [−0.54, 0.24] and [−0.66, 0.65] least-significant-bits (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
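
The code-density idea behind this calibration can be sketched in a few lines: with a statistically uniform input, every TDC bin should collect the same number of hits, so per-bin deviations give the DNL and their running sum the INL. This is a minimal illustration of code-density analysis, not the paper's FPGA circuit, and the histogram values are invented.

```python
def code_density_nonlinearity(hits):
    """DNL and INL (in LSB) from a code-density histogram `hits`,
    one entry per TDC bin, collected under a uniform input."""
    mean = sum(hits) / len(hits)
    dnl = [h / mean - 1.0 for h in hits]   # relative bin-width error
    inl, acc = [], 0.0
    for d in dnl:                          # INL is the accumulated DNL
        acc += d
        inl.append(acc)
    return dnl, inl

dnl, inl = code_density_nonlinearity([100, 120, 80, 100])
# dnl = [0.0, 0.2, -0.2, 0.0]; inl = [0.0, 0.2, 0.0, 0.0]
```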

  8. A counting-weighted calibration method for a field-programmable-gate-array-based time-to-digital converter

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yuan-Ho, E-mail: chenyh@mail.cgu.edu.tw [Department of Electronic Engineering, Chang Gung University, Tao-Yuan 333, Taiwan (China); Department of Radiation Oncology, Chang Gung Memorial Hospital, Tao-Yuan 333, Taiwan (China); Center for Reliability Sciences and Technologies, Chang Gung University, Tao-Yuan 333, Taiwan (China)

    2017-05-11

    In this work, we propose a counting-weighted calibration method for a field-programmable-gate-array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The performance of the proposed CWD-TDC with regard to linearity far exceeds that of a TDC with a traditional tapped delay line (TDL) architecture, without the need for non-linearity calibration. When implemented in a Xilinx Virtex-5 FPGA device, the proposed CWD-TDC achieved a time resolution of 60 ps with integral non-linearity (INL) and DNL of [−0.54, 0.24] and [−0.66, 0.65] least-significant-bits (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.

  9. Clustering method for counting passengers getting in a bus with single camera

    Science.gov (United States)

    Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying

    2010-03-01

    Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in a highly crowded situation at the entrance of a traffic bus. The unique characteristics of the proposed system include the following. First, a novel feature-point-tracking and online-clustering-based passenger counting framework, which performs much better than background-modeling and foreground-blob-tracking-based methods. Second, a simple and highly accurate clustering algorithm is developed that projects the high-dimensional feature point trajectories into a 2-D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment were captured from a real traffic bus in Shanghai, China. The results show that the system can process two 320×240 video sequences at a frame rate of 25 fps simultaneously, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves accuracy rates of up to 96.5%.
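
The 2-D projection described above (clustering trajectories by their appearance and disappearance times) can be sketched with a greedy online clusterer. This is an illustrative reconstruction, not the authors' code, and the tolerance value is hypothetical.

```python
def count_passengers(tracks, tol=5.0):
    """tracks: (appearance_time, disappearance_time) per tracked feature point.
    Feature points on the same passenger appear and vanish at similar times,
    so each cluster in this 2-D space is counted as one person."""
    clusters = []                 # each cluster: [sum_a, sum_d, n_points]
    for a, d in tracks:
        for c in clusters:
            ca, cd = c[0] / c[2], c[1] / c[2]   # cluster centroid
            if abs(a - ca) <= tol and abs(d - cd) <= tol:
                c[0] += a; c[1] += d; c[2] += 1
                break
        else:                     # no nearby cluster: a new passenger
            clusters.append([a, d, 1])
    return len(clusters)

# Two passengers: three points tracked around t=0..11, two around t=30..41
n = count_passengers([(0, 10), (1, 11), (0.5, 9.5), (30, 40), (31, 41)])  # -> 2
```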

  10. A Lossy Counting-Based State of Charge Estimation Method and Its Application to Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2015-12-01

    Full Text Available Estimating the residual capacity or state-of-charge (SoC of commercial batteries on-line without destroying them or interrupting the power supply, is quite a challenging task for electric vehicle (EV designers. Many Coulomb counting-based methods have been used to calculate the remaining capacity in EV batteries or other portable devices. The main disadvantages of these methods are the cumulative error and the time-varying Coulombic efficiency, which are greatly influenced by the operating state (SoC, temperature and current. To deal with this problem, we propose a lossy counting-based Coulomb counting method for estimating the available capacity or SoC. The initial capacity of the tested battery is obtained from the open circuit voltage (OCV. The charging/discharging efficiencies, used for compensating the Coulombic losses, are calculated by the lossy counting-based method. The measurement drift, resulting from the current sensor, is amended with the distorted Coulombic efficiency matrix. Simulations and experimental results show that the proposed method is both effective and convenient.
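
The core Coulomb counting update, with a fixed charging-efficiency term standing in for the paper's lossy-counting compensation, can be sketched as below. The sign convention (positive current = discharge), the efficiency value and the pack capacity are assumptions for illustration.

```python
def update_soc(soc, currents_a, dt_s, capacity_ah, charge_eff=0.98):
    """Integrate measured current to track state of charge (0..1).
    Positive current = discharge; charge delivered while charging is scaled
    by charge_eff to compensate Coulombic losses."""
    for i in currents_a:
        dq_ah = i * dt_s / 3600.0            # charge moved this step, in Ah
        if i < 0:                            # charging: not all charge is stored
            dq_ah *= charge_eff
        soc -= dq_ah / capacity_ah
        soc = min(max(soc, 0.0), 1.0)        # clamp to the physical range
    return soc

# 10 A discharge for 6 minutes from a 10 Ah pack starting at 50% SoC
soc = update_soc(0.5, [10.0] * 360, 1.0, 10.0)   # -> 0.40
```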

  11. A methodology for on line fatigue life monitoring : rainflow cycle counting method

    International Nuclear Information System (INIS)

    Mukhopadhyay, N.K.; Dutta, B.K.; Kushwaha, H.S.

    1992-01-01

    The Green's function technique is used in on-line fatigue life monitoring because it converts plant data most efficiently into stress-versus-time data. To compute the fatigue usage factor, the actual number of cycles experienced by the component must first be determined from the stress-versus-time data; the usage factor is then computed from the number of cycles using material fatigue properties. Generally the stress response is very irregular in nature. To convert an irregular stress history into stress frequency spectra, the rainflow cycle counting method is used. This method has been shown to be superior to other counting methods and yields the best fatigue estimates. A code has been developed which computes the number of cycles experienced by the component from the stress time history using the rainflow cycle counting method. This postprocessor also computes the accumulated fatigue usage factor from material fatigue properties. The present report describes the development of a code to compute the fatigue usage factor using the rainflow cycle counting technique and presents a real-life case study. (author). 10 refs., 10 figs
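
A minimal rainflow counter (the four-point variant, with the leftover residue reported as half cycles) might look like the sketch below. The report's actual code is not shown in the abstract, so this is only an illustration of the counting rule.

```python
def rainflow(series):
    """Four-point rainflow counting on a stress history.
    Returns (full_cycle_ranges, residue_half_cycle_ranges)."""
    # Keep only turning points (local extrema) of the history.
    tp = []
    for x in series:
        if tp and x == tp[-1]:
            continue                      # drop exact repeats
        while len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp.pop()                      # same direction: not an extremum
        tp.append(x)
    stack, full = [], []
    for p in tp:
        stack.append(p)
        while len(stack) >= 4:
            a, b, c, d = stack[-4:]
            if abs(b - c) <= abs(a - b) and abs(b - c) <= abs(c - d):
                full.append(abs(b - c))   # inner pair (b, c) closes a full cycle
                del stack[-3:-1]          # remove b and c, keep a and d
            else:
                break
    residue = [abs(stack[i + 1] - stack[i]) for i in range(len(stack) - 1)]
    return full, residue

full, residue = rainflow([0, 4, 1, 3, 0])  # one full cycle of range 2
```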

  12. A new analytical method for 32P. Liquid scintillation counting with solvent extraction

    International Nuclear Information System (INIS)

    Liyanage, J.A.; Yonezawa, C.

    2003-01-01

    Trace determination of phosphorus has been studied using neutron activation analysis. Radioactivity of 32 P in tri-n-octylamine phosphomolybdate complex was measured using liquid scintillation counting by extracting the complex into xylene. Phosphorus can be quantitatively determined from 16.7 to 600 μg/10 ml by using the radiochemical analysis method described. (author)

  13. MOSS-5: A Fast Method of Approximating Counts of 5-Node Graphlets in Large Graphs

    KAUST Repository

    Wang, Pinghui

    2017-09-26

    Counting 3-, 4-, and 5-node graphlets in graphs is important for graph mining applications such as discovering abnormal/evolution patterns in social and biology networks. In addition, it has recently been widely used for computing similarities between graphs and for graph classification applications such as protein function prediction and malware detection. However, it is challenging to compute these metrics for a large graph or a large set of graphs due to the combinatorial nature of the problem. Despite recent efforts in counting triangles (a 3-node graphlet) and 4-node graphlets, little attention has been paid to characterizing 5-node graphlets. In this paper, we develop a computationally efficient sampling method to estimate 5-node graphlet counts. We not only provide fast sampling methods and unbiased estimators of graphlet counts, but also derive simple yet exact formulas for the variances of the estimators, which is of great value in practice: the variances can be used to bound the estimates' errors and determine the smallest necessary sampling budget for a desired accuracy. We conduct experiments on a variety of real-world datasets, and the results show that our method is several orders of magnitude faster than the state-of-the-art methods with the same accuracy.

  14. A matrix-inversion method for gamma-source mapping from gamma-count data - 59082

    International Nuclear Information System (INIS)

    Bull, Richard K.; Adsley, Ian; Burgess, Claire

    2012-01-01

    Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially-created arrays of count-data onto which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and to activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
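
The activity-from-counts inversion can be illustrated on a toy 2×2 case. The response matrix below is hypothetical; in practice each element would come from a shielding calculation such as Microshield.

```python
def invert_2x2(m):
    """Inverse of a 2x2 matrix by the cofactor formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    """Matrix-vector product for nested-list matrices."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

# R[i][j]: counts at detector position i per unit activity at source location j
R = [[1.0, 0.2],
     [0.3, 1.0]]
true_activity = [100.0, 20.0]
counts = matvec(R, true_activity)           # forward model: counts = R * activity
activity = matvec(invert_2x2(R), counts)    # inversion recovers the activity map
```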

  15. Fringe counting method for synthetic phase with frequency-modulated laser diodes

    International Nuclear Information System (INIS)

    Onodera, Ribun; Sakuyama, Munechika; Ishii, Yukihiro

    2007-01-01

    A fringe counting method using laser diodes (LDs) for displacement measurement has been constructed. Two LDs are frequency modulated by mutually inverted sawtooth currents in an unbalanced two-beam interferometer. The mutually inverted sawtooth-current modulation of the LDs produces interference fringe signals with opposite signs for the respective wavelengths. The two fringe signals are fed to an electronic mixer to produce a synthetic fringe signal with a reduced sensitivity at the synthetic wavelength. Synthetic fringe pulses derived from the synthetic fringe signal make a fringe counting system possible for faster movement of the tested mirror

  16. Counting and integrating microelectronics development for direct conversion X-ray imaging

    International Nuclear Information System (INIS)

    Kraft, E.

    2008-02-01

    A novel signal processing concept for X-ray imaging with directly converting pixelated semiconductor sensors is presented. The novelty of this approach compared to existing concepts is the combination of charge integration and photon counting in every single pixel. Simultaneous operation of both signal processing chains extends the dynamic range beyond the limits of the individual schemes and allows determination of the mean photon energy. Medical applications such as X-ray computed tomography can benefit from this additional spectral information through improved contrast and the ability to determine the hardening of the tube spectrum due to attenuation by the scanned object. A prototype chip in 0.35-micrometer technology has been successfully tested. The pixel electronics are designed using a low-swing differential current mode logic. Key element is a configurable feedback circuit for the charge sensitive amplifier which provides continuous reset, leakage current compensation and replicates the input signal for the integrator. The thesis focuses on the electronic characterization of a second generation prototype chip and gives a detailed discussion of the circuit design. (orig.)

  17. Counting and integrating microelectronics development for direct conversion X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kraft, E.

    2008-02-15

    A novel signal processing concept for X-ray imaging with directly converting pixelated semiconductor sensors is presented. The novelty of this approach compared to existing concepts is the combination of charge integration and photon counting in every single pixel. Simultaneous operation of both signal processing chains extends the dynamic range beyond the limits of the individual schemes and allows determination of the mean photon energy. Medical applications such as X-ray computed tomography can benefit from this additional spectral information through improved contrast and the ability to determine the hardening of the tube spectrum due to attenuation by the scanned object. A prototype chip in 0.35-micrometer technology has been successfully tested. The pixel electronics are designed using a low-swing differential current mode logic. Key element is a configurable feedback circuit for the charge sensitive amplifier which provides continuous reset, leakage current compensation and replicates the input signal for the integrator. The thesis focuses on the electronic characterization of a second generation prototype chip and gives a detailed discussion of the circuit design. (orig.)

  18. Characterization of the imaging performance of the simultaneously counting and integrating X-ray detector CIX

    Energy Technology Data Exchange (ETDEWEB)

    Fink, Johannes

    2010-01-15

    The CIX detector is a direct converting hybrid pixel detector designed for medical X-ray imaging applications. Its defining feature is the simultaneous operation of a photon counter as well as an integrator in every pixel cell. This novel approach offers a dynamic range of more than five orders of magnitude, as well as the ability to directly obtain the average photon energy from the measured data. Several CIX 0.2 ASICs have been successfully connected to CdTe, CdZnTe and Si sensors. These detector modules were tested with respect to the imaging performance of the simultaneously counting and integrating concept under X-ray irradiation. Apart from a characterization of the intrinsic benefits of the CIX concept, the sensor performance was also investigated. Here, the two parallel signal processing concepts offer valuable insights into material related effects like polarization and temporal response. The impact of interpixel coupling effects like charge-sharing, Compton scattering and X-ray fluorescence was evaluated through simulations and measurements. (orig.)

  19. Characterization of the imaging performance of the simultaneously counting and integrating X-ray detector CIX

    International Nuclear Information System (INIS)

    Fink, Johannes

    2010-01-01

    The CIX detector is a direct converting hybrid pixel detector designed for medical X-ray imaging applications. Its defining feature is the simultaneous operation of a photon counter as well as an integrator in every pixel cell. This novel approach offers a dynamic range of more than five orders of magnitude, as well as the ability to directly obtain the average photon energy from the measured data. Several CIX 0.2 ASICs have been successfully connected to CdTe, CdZnTe and Si sensors. These detector modules were tested with respect to the imaging performance of the simultaneously counting and integrating concept under X-ray irradiation. Apart from a characterization of the intrinsic benefits of the CIX concept, the sensor performance was also investigated. Here, the two parallel signal processing concepts offer valuable insights into material related effects like polarization and temporal response. The impact of interpixel coupling effects like charge-sharing, Compton scattering and X-ray fluorescence was evaluated through simulations and measurements. (orig.)

  20. ASPECTS OF INTEGRATION MANAGEMENT METHODS

    Directory of Open Access Journals (Sweden)

    Artemy Varshapetian

    2015-10-01

    Full Text Available For manufacturing companies to succeed in today's unstable economic environment, it is necessary to restructure the main components of their activities: designing innovative products, production using modern reconfigurable manufacturing systems, a business model that takes into account the global strategy, and management methods using modern management models and tools. The first three components are discussed in numerous publications, for example (Koren, 2010), and are therefore not considered in the article. A large number of publications are devoted to the methods and tools of production management, for example (Halevi, 2007). On this basis, the article discusses the possibility of integrating three methods that have received the widest use in recent years, namely: the Six Sigma method - SS (George et al., 2005) and its supplement, Design for Six Sigma - DFSS (Taguchi, 2003); Lean production, which developed into "Lean management" and further into "Lean thinking" - Lean (Hirano et al., 2006); and the Theory of Constraints, developed by E. Goldratt - TOC (Dettmer, 2001). The article investigates some aspects of this integration: applications in diverse fields, positive features, changes in management structure, etc.

  1. Method and system of simulating nuclear power plant count rate for training purposes

    International Nuclear Information System (INIS)

    Alliston, W.H.; Koenig, R.H.

    1975-01-01

    A method and system are described for the real-time simulation of the dynamic operation of a nuclear power plant, in which nuclear flux rate counters are provided for monitoring the rate of nuclear fission of the reactor. The system utilizes apparatus that includes digital computer means for calculating data relating to the rate of nuclear fission of a simulated reactor model, which rate is controlled in accordance with the operation of control panel devices. A digital number from the computer corresponding to the flux rate controls an oscillator-driven counter means to produce a pulse after a predetermined count. This pulse controls an oscillator-driven polynomial counter to count a random number that controls a third counter in accordance with pulses from the first counter, to produce a random fission count for operating the meters. (U.S.)

  2. A rapid method for counting nucleated erythrocytes on stained blood smears by digital image analysis

    Science.gov (United States)

    Gering, E.; Atkinson, C.T.

    2004-01-01

    Measures of parasitemia by intraerythrocytic hematozoan parasites are normally expressed as the number of infected erythrocytes per n erythrocytes and are notoriously tedious and time consuming to measure. We describe a protocol for generating rapid counts of nucleated erythrocytes from digital micrographs of thin blood smears that can be used to estimate intensity of hematozoan infections in nonmammalian vertebrate hosts. This method takes advantage of the bold contrast and relatively uniform size and morphology of erythrocyte nuclei on Giemsa-stained blood smears and uses ImageJ, a java-based image analysis program developed at the U.S. National Institutes of Health and available on the internet, to recognize and count these nuclei. This technique makes feasible rapid and accurate counts of total erythrocytes in large numbers of microscope fields, which can be used in the calculation of peripheral parasitemias in low-intensity infections.
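
The nucleus-counting step (threshold the image, then count connected dark regions) can be sketched without ImageJ as a flood fill over a grayscale grid. This stand-in illustrates the idea only; the threshold value is arbitrary.

```python
from collections import deque

def count_nuclei(image, threshold):
    """Count connected dark regions (candidate nuclei) in a grayscale grid.
    Pixels below `threshold` are foreground, since stained nuclei are dark;
    4-connectivity is used."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for r in range(h):
        for c in range(w):
            if image[r][c] < threshold and not seen[r][c]:
                blobs += 1                       # new region found
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                         # flood-fill the whole region
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and image[ny][nx] < threshold):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return blobs

# Toy 3x4 image with two dark regions
img = [[255, 255, 255, 255],
       [255,  10, 255,  20],
       [255,  10, 255, 255]]
n = count_nuclei(img, 100)   # -> 2
```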

  3. Methods for enhancing numerical integration

    International Nuclear Information System (INIS)

    Doncker, Elise de

    2003-01-01

    We give a survey of common strategies for numerical integration (adaptive, Monte-Carlo, Quasi-Monte Carlo), and attempt to delineate their realm of applicability. The inherent accuracy and error bounds for basic integration methods are given via such measures as the degree of precision of cubature rules, the index of a family of lattice rules, and the discrepancy of uniformly distributed point sets. Strategies incorporating these basic methods often use paradigms to reduce the error by, e.g., increasing the number of points in the domain or decreasing the mesh size, locally or uniformly. For these processes the order of convergence of the strategy is determined by the asymptotic behavior of the error, and may be too slow in practice for the type of problem at hand. For certain problem classes we may be able to improve the effectiveness of the method or strategy by such techniques as transformations, absorbing a difficult part of the integrand into a weight function, suitable partitioning of the domain, transformations and extrapolation or convergence acceleration. Situations warranting the use of these techniques (possibly in an 'automated' way) are described and illustrated by sample applications
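
Two of the basic strategies surveyed above can be contrasted in a few lines: a composite midpoint rule (deterministic, error of order h^2 for smooth integrands) and plain Monte Carlo (statistical error shrinking as 1/sqrt(n), independent of dimension). The test integral, the integral of x^2 over [0, 1] with exact value 1/3, is chosen only for illustration.

```python
import random

def midpoint_rule(f, a, b, n):
    """Composite midpoint rule with n equal subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def monte_carlo(f, a, b, n, seed=0):
    """Plain Monte Carlo estimate with n uniform samples (seeded for repeatability)."""
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

exact = 1.0 / 3.0
mid = midpoint_rule(lambda x: x * x, 0.0, 1.0, 1000)   # deterministic, tiny error
mc = monte_carlo(lambda x: x * x, 0.0, 1.0, 10000)     # error shrinks as 1/sqrt(n)
```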

  4. A gravimetric simplified method for nucleated marrow cell counting using an injection needle.

    Science.gov (United States)

    Saitoh, Toshiki; Fang, Liu; Matsumoto, Kiyoshi

    2005-08-01

    A simplified gravimetric marrow cell counting method for rats is proposed as a regular screening method. After fresh bone marrow was aspirated with an injection needle, the marrow cells were suspended in carbonate-buffered saline. The nucleated marrow cell count (NMC) was measured by an automated multi-blood cell analyzer. When this gravimetric method was applied to rats, the NMC of the left and right femurs had essentially identical values with careful handling. The NMC at 4 to 10 weeks of age in male and female Crj:CD(SD)IGS rats was 2.72 to 1.96 and 2.75 to 1.98 (x10^6 counts/mg), respectively. More useful information for evaluation could be obtained by using this gravimetric method in addition to myelogram examination. However, some difficulties with this method include a low NMC due to blood contamination and variation of the NMC due to handling. Therefore, the utility of this gravimetric method for screening will be clarified by the accumulation of data from myelotoxicity studies with this method.

  5. Accuracy of single count methods of WL determination for open-pit uranium mines

    International Nuclear Information System (INIS)

    Solomon, S.B.; Kennedy, K. N.

    1983-01-01

    A study of single count methods of WL determination was made using a database representative of Australian open pit uranium mine conditions. The aim of the study was to check the existence of the optimum time delay corresponding to the Rolle method, to determine the accuracy of the conversion factor for Australian conditions and to examine any systematic errors using databases of representative radon daughter concentrations

  6. An integrating factor matrix method to find first integrals

    International Nuclear Information System (INIS)

    Saputra, K V I; Quispel, G R W; Van Veen, L

    2010-01-01

    In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.

  7. Selection of non-destructive assay methods: Neutron counting or calorimetric assay?

    International Nuclear Information System (INIS)

    Cremers, T.L.; Wachter, J.R.

    1994-01-01

    The transition of DOE facilities from production to D&D has led to more measurements of product, waste, scrap, and other less attractive materials. Some of these materials are difficult to analyze by either neutron counting or calorimetric assay. To determine the most efficacious analysis method, a variety of materials, impure salts and hydrofluorination residues have been assayed by both calorimetric assay and neutron counting. New data will be presented together with a review of published data. The precision and accuracy of these measurements are compared to chemistry values and are reported. The contribution of the gamma-ray isotopic determination measurement to the overall error of the calorimetric assay or neutron assay is examined and discussed. Other factors affecting selection of the most appropriate non-destructive assay method are listed and considered

  8. Analytical method of Kr-85 determination, using cryogenic concentration and separation and liquid scintillation counting

    International Nuclear Information System (INIS)

    Heras Iniquez, M.C.; Perez Garcia, M.M.; Grau Malonda, A.

    1983-01-01

    The method used in the Laboratory of the JEN for the determination of Kr-85 levels in gaseous effluents of nuclear power plants and in the atmosphere is described. Samples of air, collected in metallic cylinders, are introduced into a gas-solid chromatographic separation system which resolves Kr from the other air components. The separated Kr is dissolved in a toluene-based scintillation cocktail, and the Kr-85 content is determined by liquid scintillation counting. (Author)

  9. Meningiomas: Objective assessment of proliferative indices by immunohistochemistry and automated counting method.

    Science.gov (United States)

    Chavali, Pooja; Uppin, Megha S; Uppin, Shantveer G; Challa, Sundaram

    2017-01-01

    The most reliable histological correlate of recurrence risk in meningiomas is increased mitotic activity. The proliferative index obtained with Ki-67 immunostaining is a helpful adjunct to manual counting, but both show considerable inter-observer variability. A new immunohistochemical method for counting mitotic figures, using an antibody against the phosphohistone H3 (PHH3) protein, has been introduced; similarly, a computer-based automated count of the Ki-67 labelling index (LI) is available. The aim was to study the use of these new techniques in the objective assessment of proliferation indices in meningiomas. This was a retrospective study of intracranial meningiomas diagnosed during the year 2013. The hematoxylin and eosin (H and E) sections and immunohistochemistry (IHC) with Ki-67 were reviewed by two pathologists. Photomicrographs of the representative areas were subjected to Ki-67 analysis by ImmunoRatio (IR) software, and mean Ki-67 LI values, both manual and by IR, were calculated. IHC with PHH3 was performed; PHH3-positive nuclei were counted and mean values calculated. Data analysis was done using SPSS software. A total of 64 intracranial meningiomas were diagnosed. Evaluation on H and E, PHH3 and Ki-67 LI (both manual and IR) was done in 32 cases (22 grade I and 10 grade II meningiomas). A statistically significant correlation was seen between the mitotic count in each grade and the PHH3 values, and also between the grade of the tumor and the values of Ki-67 and PHH3. Both techniques correlated well with, and offered advantages over, the existing techniques, and hence can be applied in routine use.

  10. Assessment guidance of carbohydrate counting method in patients with type 2 diabetes mellitus.

    Science.gov (United States)

    Martins, Michelle R; Ambrosio, Ana Cristina T; Nery, Marcia; Aquino, Rita de Cássia; Queiroz, Marcia S

    2014-04-01

    We evaluated the application of the carbohydrate counting method by 21 patients with type 2 diabetes, 1 year after they attended a guidance course. Participants answered a questionnaire to assess their adherence to carbohydrate counting, to identify habit changes and to gauge the method's applicability; values of glycated hemoglobin were also analyzed. Most participants (76%) were female, and 25% of them had degree III obesity. There was a statistically significant decrease in glycated hemoglobin, from 8.42±0.02% to 7.66±0.01%, comparing values before and after counseling. We observed that although patients stated that the method was difficult, they understood that carbohydrate counting allowed them to make choices and have more freedom in their meals; we also verified whether they accurately understood how to substitute foods used regularly in their diets, and most patients correctly chose replacements for the groups of bread (76%), beans (67%) and noodles (67%). We concluded that participation in the course led to improved blood glucose control with a significant reduction of glycated hemoglobin, better understanding of food groups and the adoption of healthier eating habits. Copyright © 2013 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.

  11. Statistical method for resolving the photon-photoelectron-counting inversion problem

    International Nuclear Information System (INIS)

    Wu Jinlong; Li Tiejun; Peng, Xiang; Guo Hong

    2011-01-01

    A statistical inversion method is proposed for the photon-photoelectron-counting statistics in quantum key distribution experiments. From the statistical viewpoint, this problem is equivalent to parameter estimation for an infinite binomial mixture model. The coarse-graining idea and Bayesian methods are applied to deal with this ill-posed problem, which serves as a simple, instructive example of the successful application of statistical methods to an inverse problem. Numerical results show the applicability of the proposed strategy. The coarse-graining idea for infinite mixture models is general and should find use in future work.

  12. Count rate balance method of measuring sediment transport of sand beds by radioactive tracers

    International Nuclear Information System (INIS)

    Sauzay, G.

    1968-01-01

    Radioactive tracers are applied to the direct measurement of the sediment transport rate of sand beds. The theoretical measurement formula is derived: the variation of the count rate balance is the inverse of that of the transport thickness. The representativeness of the tracer is also studied critically, together with the minimum quantity of tracer which must be injected in order to obtain a correct statistical definition of the count rate produced by the low number of grains 'seen' by the detector. A field experiment was carried out and served to establish the technological conditions for applying this method: only the treatment of the results is new; the experiment itself is carried out with conventional techniques applied with great care. (author) [fr

  13. Integral methods in low-frequency electromagnetics

    CERN Document Server

    Solin, Pavel; Karban, Pavel; Ulrych, Bohus

    2009-01-01

    A modern presentation of integral methods in low-frequency electromagnetics. This book provides state-of-the-art knowledge on integral methods in low-frequency electromagnetics. Blending theory with numerous examples, it introduces key aspects of the integral methods used in engineering as a powerful alternative to PDE-based models. Readers will get complete coverage of: the electromagnetic field and its basic characteristics; an overview of solution methods; solutions of electromagnetic fields by integral expressions; and integral and integrodifferential methods.

  14. Three counting methods agree on cell and neuron number in chimpanzee primary visual cortex

    Directory of Open Access Journals (Sweden)

    Daniel James Miller

    2014-05-01

    Full Text Available Determining the cellular composition of specific brain regions is crucial to our understanding of the function of neurobiological systems. It is therefore useful to identify the extent to which different methods agree when estimating the same properties of brain circuitry. In this study, we estimated the number of neuronal and non-neuronal cells in the primary visual cortex (area 17, or V1) of both hemispheres from a single chimpanzee. Specifically, we processed samples distributed across V1 of the right hemisphere after cortex was flattened into a sheet using two variations of the isotropic fractionator cell and neuron counting method. We processed the left hemisphere as serial brain slices for stereological investigation. The goal of this study was to evaluate the agreement between these methods in the most direct manner possible by comparing estimates of cell density across one brain region of interest in a single individual. In our hands, these methods produced similar estimates of the total cellular population (approximately 1 billion) as well as the number of neurons (approximately 675 million) in chimpanzee V1, providing evidence that both techniques estimate the same parameters of interest. In addition, our results indicate the strengths of each distinct tissue preparation procedure, highlighting the importance of attention to anatomical detail. In summary, we found that the isotropic fractionator and the stereological optical fractionator produced concordant estimates of the cellular composition of V1, and that this result supports the conclusion that chimpanzees conform to the primate pattern of exceptionally high packing density in V1. Ultimately, our data suggest that investigators can optimize their experimental approach by using any of these counting methods to obtain reliable cell and neuron counts.

  15. A gamma camera count rate saturation correction method for whole-body planar imaging

    Science.gov (United States)

    Hobbs, Robert F.; Baechler, Sébastien; Senthamizhchelvan, Srinivasan; Prideaux, Andrew R.; Esaias, Caroline E.; Reinhardt, Melvin; Frey, Eric C.; Loeb, David M.; Sgouros, George

    2010-02-01

    Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in pamphlet no 16. One of the issues not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between detector heads and imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed, which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count rate to activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was calculated to be 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. 
Accurate WB planar dosimetry of high activities relies on successfully compensating for these count rate saturation effects.
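The static building block of such a correction can be sketched as follows. This is not the paper's time-dependent algorithm, only the single-point inversion of the paralyzable dead-time model m = n·exp(−n·τ) by Newton's method, with purely illustrative numbers:

```python
import math

def true_rate(m_obs, tau, tol=1e-9, max_iter=50):
    """Invert the paralyzable dead-time model m = n*exp(-n*tau) for the
    true rate n (lower branch, n*tau < 1) using Newton's method."""
    n = m_obs  # the observed rate is a good starting guess for moderate losses
    for _ in range(max_iter):
        f = n * math.exp(-n * tau) - m_obs
        fp = math.exp(-n * tau) * (1.0 - n * tau)  # df/dn
        step = f / fp
        n -= step
        if abs(step) < tol * n:
            break
    return n

# Illustrative numbers (not from the paper): true rate 2e5 cps, tau = 1 us
tau = 1.0e-6
n_true = 2.0e5
m_obs = n_true * math.exp(-n_true * tau)   # what a saturated camera would report
n_est = true_rate(m_obs, tau)
```

The paper's algorithm additionally lets the observed rate, and hence this inversion, vary with time as the detector sweeps over the patient.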

  16. Optimization of statistical methods for HpGe gamma-ray spectrometer used in wide count rate ranges

    Energy Technology Data Exchange (ETDEWEB)

    Gervino, G., E-mail: gervino@to.infn.it [UNITO - Università di Torino, Dipartimento di Fisica, Turin (Italy); INFN - Istituto Nazionale di Fisica Nucleare, Sez. Torino, Turin (Italy); Mana, G. [INRIM - Istituto Nazionale di Ricerca Metrologica, Turin (Italy); Palmisano, C. [UNITO - Università di Torino, Dipartimento di Fisica, Turin (Italy); INRIM - Istituto Nazionale di Ricerca Metrologica, Turin (Italy)

    2016-07-11

    γ-ray measurement with HpGe detectors is a common technique in many fields such as nuclear physics, radiochemistry, nuclear medicine and neutron activation analysis. HpGe detectors are chosen in situations where isotope identification is needed because of their excellent resolution. Our challenge is to obtain the “best” spectroscopy data possible in every measurement situation, where “best” is a combination of statistical quality (number of counts) and spectral quality (peak width and position) over a wide range of counting rates. In this framework, we applied Bayesian methods and Ellipsoidal Nested Sampling (a multidimensional integration technique) to study the most likely distribution for the shape of HpGe spectra. In treating these experiments, the prior information suggests modelling the likelihood function as a product of Poisson distributions. We present the efforts made to optimize the statistical methods for HpGe detector outputs, with the aim of evaluating to a better order of precision the detector efficiency, the absolute measured activity and the spectral background. Reaching a more precise knowledge of the statistical and systematic uncertainties of the measured physical observables is the final goal of this research project.
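The product-of-Poissons likelihood mentioned above can be illustrated with a toy, noise-free spectrum. All numbers are invented, and this is a plain maximum-likelihood scan over one parameter, not the Bayesian nested-sampling machinery of the paper:

```python
import math

# Toy single-peak spectrum: flat background B plus a Gaussian peak of
# amplitude A at channel mu with width sigma (values purely illustrative).
B, A_true, mu, sigma = 10.0, 100.0, 50.0, 4.0
channels = range(100)

def expected(A):
    """Expected counts lambda_i in each channel for peak amplitude A."""
    return [B + A * math.exp(-((i - mu) ** 2) / (2 * sigma ** 2))
            for i in channels]

counts = expected(A_true)  # idealized (noise-free) observed counts

def log_likelihood(A):
    # log L = sum_i (k_i * ln(lambda_i) - lambda_i), dropping the ln(k_i!) term
    lam = expected(A)
    return sum(k * math.log(l) - l for k, l in zip(counts, lam))

# Crude 1-D maximum-likelihood scan over the peak amplitude (step 0.1)
best_A = max((a / 10.0 for a in range(500, 1500)), key=log_likelihood)
```

Because the "observed" counts here equal their expectations, the Poisson likelihood is maximized exactly at the true amplitude; with real noisy data the maximum scatters around it.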

  17. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method, which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference-source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased.

  18. Antiretroviral treatment cohort analysis using time-updated CD4 counts: assessment of bias with different analytic methods.

    Directory of Open Access Journals (Sweden)

    Katharina Kranzer

    Full Text Available Survival analysis using time-updated CD4+ counts during antiretroviral therapy is frequently employed to determine risk of clinical events. The time-point at which the CD4+ count is assumed to change potentially biases effect estimates, but the methods used to estimate this are infrequently reported. This study examined the effect of three different estimation methods: assuming (i) a constant CD4+ count from the date of measurement until the date of the next measurement, (ii) a constant CD4+ count from the midpoint of the preceding interval until the midpoint of the subsequent interval, and (iii) a linear interpolation between consecutive CD4+ measurements to provide additional midpoint measurements. Person-time, tuberculosis rates and hazard ratios by CD4+ stratum were compared using all available CD4+ counts (measurement frequency 1-3 months) and 6-monthly measurements from a clinical cohort. Simulated data were used to compare the extent of bias introduced by these methods. The midpoint method gave the closest fit to person-time spent with low CD4+ counts and for hazard ratios for outcomes, both in the clinical dataset and in the simulated data. The midpoint method presents a simple option to reduce bias in time-updated CD4+ analysis, particularly at low CD4+ cell counts and rapidly increasing counts after ART initiation.
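The carry-forward and midpoint assumptions can be sketched for a single hypothetical patient (dates and CD4+ values invented). Note how much person-time below 200 cells/µL each assumption attributes:

```python
# Hypothetical follow-up: (time in months since ART start, CD4+ count)
measurements = [(0, 80), (6, 180), (12, 320)]
end_of_followup = 12

def person_time_below(threshold, method):
    """Person-time (months) below `threshold` under two assumptions from
    the abstract: carry each value forward from its measurement date
    ('forward'), or hold it from the midpoint of the preceding interval
    to the midpoint of the subsequent interval ('midpoint')."""
    total = 0.0
    times = [t for t, _ in measurements]
    for i, (t, cd4) in enumerate(measurements):
        if method == "forward":
            start = t
            stop = times[i + 1] if i + 1 < len(measurements) else end_of_followup
        else:  # midpoint
            start = t if i == 0 else (times[i - 1] + t) / 2.0
            stop = (end_of_followup if i + 1 == len(measurements)
                    else (t + times[i + 1]) / 2.0)
        if cd4 < threshold:
            total += stop - start
    return total
```

For this patient, carrying values forward attributes all 12 months to counts below 200, while the midpoint rule attributes only 9, illustrating how the choice shifts person-time when counts rise rapidly after ART initiation.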

  19. New procedure for the determination of radium in water by extraction of radon and application of integral counting with a liquid scintillation counter

    Energy Technology Data Exchange (ETDEWEB)

    Horiuchi, K [Tokyo Metropolitan Univ. (Japan). Faculty of Science; Murakami, Y [Kitasato Univ. (Japan). School of Hygienic Sciences

    1981-05-01

    A new Ra determination method is devised: the sample is stored in a glass bottle with a Teflon stopper in an upside-down position, Rn is extracted with liquid scintillator solution, and integral counting with a liquid scintillation counter is applied. This method achieves a high sensitivity of 5 × 10⁻¹³ Ci of Ra, eliminates the tedious procedure of transferring Rn through a vacuum system to the detector, and makes possible repeated determinations of Ra on the same sample without any further chemical treatment except extraction.

  20. Enhanced coulomb counting method for estimating state-of-charge and state-of-health of lithium-ion batteries

    International Nuclear Information System (INIS)

    Ng, Kong Soon; Moo, Chin-Sien; Chen, Yi-Ping; Hsieh, Yao-Ching

    2009-01-01

    The coulomb counting method is expedient for state-of-charge (SOC) estimation of lithium-ion batteries with high charging and discharging efficiencies. The charging and discharging characteristics are investigated and reveal that the coulomb counting method is convenient and accurate for estimating the SOC of lithium-ion batteries. A smart estimation method based on coulomb counting is proposed to improve the estimation accuracy. The corrections are made by considering the charging and operating efficiencies. Furthermore, the state-of-health (SOH) is evaluated by the maximum releasable capacity. Through the experiments that emulate practical operations, the SOC estimation method is verified to demonstrate the effectiveness and accuracy.
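A minimal coulomb-counting sketch (all battery parameters illustrative, not from the paper): the SOC is updated from the integrated current, corrected by an efficiency factor as the paper proposes, and the SOH is taken as the maximum releasable capacity relative to the rated capacity:

```python
def update_soc(soc, current_a, dt_s, capacity_ah, eta=1.0):
    """One coulomb-counting step. Convention here: positive current means
    discharge. eta is the charging/operating efficiency correction; the
    value and the sign convention are assumptions for this sketch."""
    delta_ah = eta * current_a * dt_s / 3600.0
    return soc - delta_ah / capacity_ah

def state_of_health(max_releasable_ah, rated_ah):
    # SOH evaluated as maximum releasable capacity over rated capacity
    return max_releasable_ah / rated_ah

# Discharge a hypothetical 2 Ah cell at 1 A for 0.5 h in 1 s steps
soc = 1.0
for _ in range(1800):
    soc = update_soc(soc, 1.0, 1.0, capacity_ah=2.0)

soh = state_of_health(1.8, 2.0)  # cell now releases at most 1.8 of 2.0 Ah
```

Half an hour at 1 A removes 0.5 Ah, so the SOC drops from 1.0 to 0.75; in practice the paper's corrections adjust eta and the usable capacity as the cell ages.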

  1. Effect of the carbohydrate counting method on glycemic control in patients with type 1 diabetes

    Directory of Open Access Journals (Sweden)

    Dias Viviane M

    2010-08-01

    Full Text Available Abstract Background The importance of achieving and maintaining an appropriate metabolic control in patients with type 1 diabetes mellitus (DM1) has been established in many studies aiming to prevent the development of chronic complications. The carbohydrate counting method can be recommended as an additional tool in the nutritional treatment of diabetes, allowing patients with DM1 to have more flexible food choices. This study aimed to evaluate the influence of nutrition intervention and the use of multiple short-acting insulin doses according to the carbohydrate counting method on clinical and metabolic control in patients with DM1. Methods Our sample consisted of 51 patients with DM1 (32 females), aged 25.3 ± 1.55 years. A protocol of nutritional status evaluation was applied and laboratory analysis was performed at baseline and after a three-month intervention. After the analysis of the food records, a balanced diet was prescribed using the carbohydrate counting method, and short-acting insulin was prescribed based on the total amount of carbohydrate per meal (1 unit per 15 g of carbohydrate). Results A significant decrease in A1c levels was observed from baseline to the three-month evaluation after the intervention (10.40 ± 0.33% and 9.52 ± 0.32%, respectively; p = 0.000). An increase in the daily insulin dose was observed after the intervention (0.99 ± 0.65 IU/kg and 1.05 ± 0.05 IU/kg, respectively; p = 0.003). No significant differences were found regarding anthropometric evaluation (BMI, waist, hip or abdominal circumferences, and waist-to-hip ratio) after the intervention period. Conclusions The use of short-acting insulin based on the carbohydrate counting method after a short period of time resulted in a significant improvement of the glycemic control in patients with DM1 with no changes in body weight despite increases in the total daily insulin doses.
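The dosing rule used in the intervention, 1 unit of short-acting insulin per 15 g of carbohydrate, amounts to a one-line calculation. The rounding policy below is an assumption for illustration, not taken from the study:

```python
def short_acting_dose(carb_grams, ratio_g_per_unit=15.0):
    """Insulin units for a meal under the 1-unit-per-15-g rule used in
    the study. Rounding to the nearest whole unit is an assumption."""
    return round(carb_grams / ratio_g_per_unit)

dose = short_acting_dose(60)  # a meal containing 60 g of carbohydrate
```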

  2. Adaptive and automatic red blood cell counting method based on microscopic hyperspectral imaging technology

    Science.gov (United States)

    Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting

    2017-12-01

    Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are first preprocessed, and then a quadratic blind linear unmixing algorithm is used to obtain endmember abundance images. Based on mathematical morphological operations and an adaptive Otsu’s method, a binarization process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method performs well and has potential for clinical applications.
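The thresholding and labeling steps of such a pipeline can be sketched in miniature. The hyperspectral unmixing step is omitted, and the "image" below is a tiny synthetic stand-in for an abundance image, so everything here is illustrative:

```python
from collections import deque

def otsu_threshold(pixels):
    """Otsu's method on a flat list of 8-bit gray values: choose the
    threshold that maximizes the between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b, w_b = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        m_b = sum_b / w_b                        # background mean
        m_f = (sum_all - sum_b) / (total - w_b)  # foreground mean
        var = w_b * (total - w_b) * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def count_components(binary):
    """Count 4-connected foreground components by breadth-first search."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    n = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                n += 1
                q = deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return n

# Tiny synthetic "abundance image": three bright blobs on a dark field
blobs = {(1, 1), (1, 2), (4, 4), (4, 5), (5, 4), (7, 1)}
img = [[200 if (r, c) in blobs else 20 for c in range(8)] for r in range(8)]
t = otsu_threshold([p for row in img for p in row])
binary = [[p > t for p in row] for row in img]
n_cells = count_components(binary)
```

In the paper this per-pixel decision is adaptive and operates on unmixed abundance images, with the labeling parameters tied to the microscope magnification.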

  3. Combustion water purification techniques influence on OBT analysing using liquid scintillation counting method

    Energy Technology Data Exchange (ETDEWEB)

    Varlam, C.; Vagner, I.; Faurescu, I.; Faurescu, D. [National Institute for Cryogenics and Isotopic Technologies, Valcea (Romania)

    2015-03-15

    In order to determine organically bound tritium (OBT) in environmental samples, the samples must first be converted into water measurable by liquid scintillation counting (LSC). For this purpose we conducted experiments to determine the OBT level of a grass sample collected from an uncontaminated area. The grass sample was combusted in a Parr bomb. The usual interfering phenomena were identified: color or chemical quench, chemiluminescence, overlap with the tritium spectrum due to other radionuclides present as impurities (¹⁴C from organic compounds, ³⁶Cl as chloride and free chlorine, ⁴⁰K as potassium cations) and emulsion separation. Purification of the combustion water before scintillation counting therefore appeared to be essential. Five purification methods were tested: distillation with chemical treatment (Na₂O₂ and KMnO₄), lyophilization, chemical treatment (Na₂O₂ and KMnO₄) followed by lyophilization, azeotropic distillation with toluene, and treatment with a volcanic tuff followed by lyophilization. After the purification step each sample was measured, and the measured OBT concentration, together with physico-chemical analysis of the purified water, revealed that the most efficient purification method for the combustion water was chemical treatment followed by lyophilization.

  4. Optimised method for the routine determination of Technetium-99 in environmental samples by liquid scintillation counting

    International Nuclear Information System (INIS)

    Wigley, F.; Warwick, P.E.; Croudace, I.W.; Caborn, J.; Sanchez, A.L.

    1999-01-01

    A method has been developed for the routine determination of 99 Tc in a range of environmental matrices using 99m Tc (t 1/2 =6.06 h) as an internal yield monitor. Samples are ignited stepwise to 550 °C and the 99 Tc is extracted from the ignited residue with 8 M nitric acid. Many contaminants are co-precipitated with Fe(OH) 3 and the Tc in the supernatant is pre-concentrated and further purified using anion exchange chromatography. Final separation of Tc from Ru is achieved by extraction of Tc into 5% tri-n-octylamine in xylene from 2 M sulphuric acid. The xylene fraction is mixed directly with a commercial liquid scintillant cocktail. The chemical yield is determined through the measurement of 99m Tc by gamma spectrometry and the 99 Tc activity is measured using liquid scintillation counting after a further two weeks to allow decay of the 99m Tc activity. Typical recoveries for this method are of the order of 70-95%. The method has a detection limit of 1.7 Bq kg -1 based on a 2 h count time and a 10 g sample size. The chemical separation for 24 samples of sediment or marine biota can be completed by one analyst in a working week. A further week is required to allow the samples to decay before determination. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
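The two-week decay period for the 99m Tc yield monitor can be checked with a one-line decay calculation using the half-life quoted above (t 1/2 = 6.06 h):

```python
import math

T_HALF_99MTC_H = 6.06  # hours, as given in the abstract

def remaining_fraction(hours):
    """Fraction of 99mTc activity left after `hours` of decay."""
    return math.exp(-math.log(2.0) * hours / T_HALF_99MTC_H)

# After the two-week wait quoted in the method (about 55 half-lives)
frac = remaining_fraction(14 * 24)
```

The surviving fraction is on the order of 10⁻¹⁷, so the residual 99m Tc contribution to the 99 Tc liquid scintillation count is negligible.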

  5. Combustion water purification techniques influence on OBT analysing using liquid scintillation counting method

    International Nuclear Information System (INIS)

    Varlam, C.; Vagner, I.; Faurescu, I.; Faurescu, D.

    2015-01-01

    In order to determine organically bound tritium (OBT) in environmental samples, the samples must first be converted into water measurable by liquid scintillation counting (LSC). For this purpose we conducted experiments to determine the OBT level of a grass sample collected from an uncontaminated area. The grass sample was combusted in a Parr bomb. The usual interfering phenomena were identified: color or chemical quench, chemiluminescence, overlap with the tritium spectrum due to other radionuclides present as impurities (¹⁴C from organic compounds, ³⁶Cl as chloride and free chlorine, ⁴⁰K as potassium cations) and emulsion separation. Purification of the combustion water before scintillation counting therefore appeared to be essential. Five purification methods were tested: distillation with chemical treatment (Na₂O₂ and KMnO₄), lyophilization, chemical treatment (Na₂O₂ and KMnO₄) followed by lyophilization, azeotropic distillation with toluene, and treatment with a volcanic tuff followed by lyophilization. After the purification step each sample was measured, and the measured OBT concentration, together with physico-chemical analysis of the purified water, revealed that the most efficient purification method for the combustion water was chemical treatment followed by lyophilization.

  6. An integral whole circuit of amplifying and discriminating suited to high counting rate

    International Nuclear Information System (INIS)

    Dong Chengfu; Su Hong; Wu Ming; Li Xiaogang; Peng Yu; Qian Yi; Liu Yicai; Xu Sijiu; Ma Xiaoli

    2007-01-01

    A hybrid circuit consisting of a charge-sensitive preamplifier, main amplifier, discriminator and shaping circuit is described. The instrument features low power consumption, small volume, high sensitivity and portability, making it convenient for field use. Its output pulse is directly compatible with CMOS and TTL logic levels. The instrument is mainly used for counting measurements, for example with a high-sensitivity 3 He neutron detector, but may also be used with other heavy-ion detectors; the maximum counting rate can reach 10 6 /s. (authors)

  7. Rapid bioassay method for estimation of 90Sr in urine samples by liquid scintillation counting

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Chaudhary, Seema; Sawant, Pramilla D.

    2018-01-01

    Radiostrontium (Sr) is a by-product of the nuclear fission of uranium and plutonium in nuclear reactors and is an important radionuclide in spent nuclear fuel and radioactive waste. Rapid bioassay methods are required for estimating Sr in urine following internal contamination, since decisions regarding medical intervention, if any, can be based upon the results of urinalysis. The method currently used at the Bioassay Laboratory, Trombay, is based on the solid extraction chromatography (SEC) technique: the Sr separated from the urine sample is precipitated as SrCO 3 and analyzed gravimetrically. However, the gravimetric procedure is time consuming; therefore, in the present study, the feasibility of liquid scintillation counting for direct detection of radiostrontium in the effluent was explored. The results obtained were compared with those obtained using the gravimetric method.

  8. Current automated 3D cell detection methods are not a suitable replacement for manual stereologic cell counting

    Directory of Open Access Journals (Sweden)

    Christoph eSchmitz

    2014-05-01

    Full Text Available Stereologic cell counting has had a major impact on the field of neuroscience. A major bottleneck in stereologic cell counting is that the user must manually decide whether or not each cell is counted according to three-dimensional (3D) stereologic counting rules by visual inspection within hundreds of microscopic fields-of-view per investigated brain or brain region. Reliance on visual inspection forces stereologic cell counting to be very labor-intensive and time-consuming, and is the main reason why biased, non-stereologic two-dimensional (2D) cell counting approaches have remained in widespread use. We present an evaluation of the performance of modern automated cell detection and segmentation algorithms as a potential alternative to the manual approach in stereologic cell counting. The image data used in this study were 3D microscopic images of thick brain tissue sections prepared with a variety of commonly used nuclear and cytoplasmic stains. The evaluation compared the numbers and locations of cells identified unambiguously and counted exhaustively by an expert observer with those found by three automated 3D cell detection algorithms: nuclei segmentation from the FARSIGHT toolkit, nuclei segmentation by 3D multiple level set methods, and the 3D object counter plug-in for ImageJ. Of these methods, FARSIGHT performed best, with true-positive detection rates between 38% and 99% and false-positive rates from 3.6% to 82%. The results demonstrate that the current automated methods suffer from lower detection rates and higher false-positive rates than are acceptable for obtaining valid estimates of cell numbers. Thus, at present, stereologic cell counting with manual decision for object inclusion according to unbiased stereologic counting rules remains the only adequate method for unbiased cell quantification in histologic tissue sections.

  9. Can't Count or Won't Count? Embedding Quantitative Methods in Substantive Sociology Curricula: A Quasi-Experiment.

    Science.gov (United States)

    Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby

    2016-06-01

    This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through 'doing' quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a 'magic bullet' and that a wider programme of content and assessment diversification across the curriculum is preferential.

  10. Can’t Count or Won’t Count? Embedding Quantitative Methods in Substantive Sociology Curricula: A Quasi-Experiment

    Science.gov (United States)

    Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby

    2015-01-01

    This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through ‘doing’ quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a ‘magic bullet’ and that a wider programme of content and assessment diversification across the curriculum is preferential. PMID:27330225

  11. Mitosis counting in breast cancer : object-level interobserver agreement and comparison to an automatic method

    NARCIS (Netherlands)

    Veta, M.; van Diest, P.J.; Jiwa, M.; Al-Janabi, S.; Pluim, J.P.W.

    2016-01-01

    BACKGROUND: Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor

  13. Automatic numerical integration methods for Feynman integrals through 3-loop

    International Nuclear Information System (INIS)

    De Doncker, E; Olagbemi, O; Yuasa, F; Ishikawa, T; Kato, K

    2015-01-01

We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)

  14. NMT - A new individual ion counting method: Comparison to a Faraday cup

    Science.gov (United States)

    Burton, Michael; Gorbunov, Boris

    2018-03-01

Two sample detectors used to analyze the emission from Gas Chromatography (GC) columns are the Flame Ionization Detector (FID) and the Electron Capture Detector (ECD). Both of these detectors involve ionization of the sample molecules and then measurement of the electric current in the gas using a Faraday cup. In this paper a newly discovered method of ion counting, Nanotechnology Molecular Tagging (NMT), is tested as a replacement for the Faraday cup in GCs. In this method the effective physical volume of individual molecules is enlarged up to 1 billion times, enabling them to be detected by an optical particle counter. It was found that the sensitivity of NMT was considerably greater than that of the Faraday cup. The background in the NMT was circa 200 ions per cm³, corresponding to an extremely low electric current of ~10⁻¹⁷ A.

  15. A 'delayed' counting method to determine indoor Rn-222 levels indirectly

    CERN Document Server

    Iannopollo, V; Trimarchi, M; Tripepi, M G; Vermiglio, G

    2001-01-01

A new indirect and 'delayed' way is presented to determine the indoor concentration of Rn-222 by best-fitting methods. If rapid knowledge of Rn-222 levels is required and a detection system is not available in situ, it is possible to obtain the concentration of the radioactive gas by determining 'delayed' counts of Po-214. The 'delay' time is two to three hours. The method is based on the use of cellulose filters for particulate collection and on the analysis of samples by alpha spectroscopy. It is also possible to obtain concentrations of the short-lived radon daughters Po-218, Pb-214 and Bi-214, which are very important quantities in a medical framework.

  16. Short Communication: A Simple Method for Performing Worm-Egg Counts on Sodium Acetate Formaldehyde-Preserved Samples

    Directory of Open Access Journals (Sweden)

    Wayne Melrose

    2012-01-01

The Kato Katz method is the most common way of performing worm-egg counts on human faecal samples, but it must be done in the field using freshly collected samples. This makes it difficult to use in remote, poorly accessible situations. This paper describes a simple method for egg counts on preserved samples collected in the field and sent to a central location for further processing.

  17. Tower counts

    Science.gov (United States)

    Woody, Carol Ann; Johnson, D.H.; Shrier, Brianna M.; O'Neal, Jennifer S.; Knutzen, John A.; Augerot, Xanthippe; O'Neal, Thomas A.; Pearsons, Todd N.

    2007-01-01

Counting towers provide an accurate, low-cost, low-maintenance, low-technology, and easily mobilized escapement estimation program compared to other methods (e.g., weirs, hydroacoustics, mark-recapture, and aerial surveys) (Thompson 1962; Siebel 1967; Cousens et al. 1982; Symons and Waldichuk 1984; Anderson 2000; Alaska Department of Fish and Game 2003). Counting tower data have been found to be consistent with digital video counts (Edwards 2005). Counting towers do not interfere with natural fish migration patterns, nor are fish handled or stressed; however, their use is generally limited to clear rivers that meet specific site selection criteria. The data provided by counting tower sampling allow fishery managers to determine reproductive population size, estimate total return (escapement + catch) and its uncertainty, evaluate population productivity and trends, set harvest rates, determine spawning escapement goals, and forecast future returns (Alaska Department of Fish and Game 1974-2000 and 1975-2004). The number of spawning fish is determined by subtracting subsistence and sport-caught fish and prespawn mortality from the total estimated escapement. The methods outlined in this protocol for tower counts can be used to provide reasonable estimates (±6-10%) of reproductive salmon population size and run timing in clear rivers.
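The escapement bookkeeping described above is simple arithmetic; a minimal sketch with hypothetical numbers (only the ±6-10% precision figure comes from the abstract):

```python
def spawners(escapement, subsistence, sport_catch, prespawn_mortality):
    """Spawning population = total estimated escapement minus removals and losses."""
    return escapement - subsistence - sport_catch - prespawn_mortality

def escapement_interval(count, rel_error=0.08):
    """Bracket a tower-count estimate with a relative error in the 6-10% range."""
    return count * (1.0 - rel_error), count * (1.0 + rel_error)

n = spawners(escapement=50_000, subsistence=4_000,
             sport_catch=2_500, prespawn_mortality=500)
low, high = escapement_interval(50_000)
```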

  18. Comparison of multinomial and binomial proportion methods for analysis of multinomial count data.

    Science.gov (United States)

    Galyean, M L; Wester, D B

    2010-10-01

Simulation methods were used to generate 1,000 experiments, each with 3 treatments and 10 experimental units/treatment, in completely randomized (CRD) and randomized complete block designs. Data were counts in 3 ordered or 4 nominal categories from multinomial distributions. For the 3-category analyses, category probabilities were 0.6, 0.3, and 0.1, respectively, for 2 of the treatments, and 0.5, 0.35, and 0.15 for the third treatment. In the 4-category analysis (CRD only), probabilities were 0.3, 0.3, 0.2, and 0.2 for treatments 1 and 2 vs. 0.4, 0.4, 0.1, and 0.1 for treatment 3. The 3-category data were analyzed with generalized linear mixed models as an ordered multinomial distribution with a cumulative logit link or by regrouping the data (e.g., counts in 1 category/sum of counts in all categories), followed by analysis of single categories as binomial proportions. Similarly, the 4-category data were analyzed as a nominal multinomial distribution with a glogit link or by grouping data as binomial proportions. For the 3-category CRD analyses, empirically determined type I error rates based on pair-wise comparisons (F and Wald χ² tests) did not differ between multinomial and individual binomial category analyses with 10 (P = 0.38 to 0.60) or 50 (P = 0.19 to 0.67) sampling units/experimental unit. When analyzed as binomial proportions, power estimates varied among categories, with analysis of the category with the greatest counts yielding power similar to the multinomial analysis. Agreement between methods (percentage of experiments with the same results for the overall test for treatment effects) varied considerably among categories analyzed and sampling unit scenarios for the 3-category CRD analyses. Power (F-test) was 24.3, 49.1, 66.9, 83.5, 86.8, and 99.7% for 10, 20, 30, 40, 50, and 100 sampling units/experimental unit for the 3-category multinomial CRD analyses. Results with randomized complete block design simulations were similar to those with the CRD.
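A sketch of how one such simulated experiment can be generated and regrouped, using the 3-category probabilities stated in the abstract (the sampling code is an illustration, not the authors' mixed-model analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Design from the abstract: two treatments at (0.6, 0.3, 0.1), the third at
# (0.5, 0.35, 0.15); 10 experimental units per treatment, 50 samples per unit.
probs = [(0.6, 0.3, 0.1), (0.6, 0.3, 0.1), (0.5, 0.35, 0.15)]
units_per_trt, samples_per_unit = 10, 50

# counts[t, u] holds the 3-category multinomial count vector for one unit.
counts = np.array([[rng.multinomial(samples_per_unit, p)
                    for _ in range(units_per_trt)] for p in probs])

# "Binomial proportion" regrouping: counts in one category / total counts.
prop_cat1 = counts[..., 0] / samples_per_unit
```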

  19. A simple method for calibration of Lucas scintillation cell counting system for measurement of 226Ra and 222Rn

    Directory of Open Access Journals (Sweden)

    N.K. Sethy

    2014-10-01

A known quantity of radium from a high-grade ore solution was chemically separated and carefully placed inside the cavity of a Lucas cell (LC). The 222Rn gradually builds up and attains secular equilibrium with its parent 226Ra, giving a steady count rate after a suitable buildup period (>25 days). This secondary source was used to calibrate the radon counting system. The method was validated by comparison with identical measurements made with the AlphaGuard Aquakit. The radon counting system was used to evaluate dissolved radon in ground water samples by gross alpha counting in the LC, measuring the collected radon after a delay of >180 min. Simultaneous measurements were also carried out with the AlphaGuard Aquakit under identical conditions; the AlphaGuard measures dissolved radon from the water sample by constant aeration in a closed circuit without any delay. The two methods agree with a correlation coefficient of >0.9, which validates the calibration of the Lucas scintillation cell counting system by the designed encapsulated source. This study provides an alternative for calibration in the absence of the costly radon sources available on the market.
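The >25-day buildup period quoted above follows directly from the 222Rn ingrowth law; a short check (the half-life is the standard literature value, not taken from this paper):

```python
import math

T_HALF_RN222_DAYS = 3.8235                  # standard 222Rn half-life, days
DECAY_CONST = math.log(2) / T_HALF_RN222_DAYS

def ingrowth_fraction(t_days):
    """Fraction of the secular-equilibrium 222Rn activity grown in from 226Ra
    after t_days of sealed buildup: 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-DECAY_CONST * t_days)

f_25d = ingrowth_fraction(25.0)             # ingrowth is essentially complete
```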

  20. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    Science.gov (United States)

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  1. Load assumption for fatigue design of structures and components counting methods, safety aspects, practical application

    CERN Document Server

    Köhler, Michael; Pötter, Kurt; Zenner, Harald

    2017-01-01

    Understanding the fatigue behaviour of structural components under variable load amplitude is an essential prerequisite for safe and reliable light-weight design. For designing and dimensioning, the expected stress (load) is compared with the capacity to withstand loads (fatigue strength). In this process, the safety necessary for each particular application must be ensured. A prerequisite for ensuring the required fatigue strength is a reliable load assumption. The authors describe the transformation of the stress- and load-time functions which have been measured under operational conditions to spectra or matrices with the application of counting methods. The aspects which must be considered for ensuring a reliable load assumption for designing and dimensioning are discussed in detail. Furthermore, the theoretical background for estimating the fatigue life of structural components is explained, and the procedures are discussed for numerous applications in practice. One of the prime intentions of the authors ...

  2. Petrifilm rapid S. aureus Count Plate method for rapid enumeration of Staphylococcus aureus in selected foods: collaborative study.

    Science.gov (United States)

    Silbernagel, K M; Lindberg, K G

    2001-01-01

A rehydratable dry-film plating method for Staphylococcus aureus in foods, the 3M Petrifilm Rapid S. aureus Count Plate method, was compared with AOAC Official Method 975.55 (Staphylococcus aureus in Foods). Nine foods (instant nonfat dried milk, dry seasoned vegetable coating, frozen hash browns, frozen cooked chicken patty, frozen ground raw pork, shredded cheddar cheese, fresh green beans, pasta filled with beef and cheese, and egg custard) were analyzed for S. aureus by 13 collaborating laboratories. For each food tested, the collaborators received 8 blind test samples consisting of a control sample and 3 levels of inoculated test sample, each in duplicate. The mean log counts for the methods were comparable for pasta filled with beef and cheese; frozen hash browns; cooked chicken patty; egg custard; frozen ground raw pork; and instant nonfat dried milk. The repeatability and reproducibility variances of the Petrifilm Rapid S. aureus Count Plate method were similar to those of the standard method.

  3. Subdomain Precise Integration Method for Periodic Structures

    Directory of Open Access Journals (Sweden)

    F. Wu

    2014-01-01

A subdomain precise integration method is developed for the dynamical responses of periodic structures comprising many identical structural cells. The proposed method is based on the precise integration method, the subdomain scheme, and the repeatability of the periodic structures. In the proposed method, each structural cell is seen as a super element that is solved using the precise integration method, considering the repeatability of the structural cells. The computational efforts and the memory size of the proposed method are reduced, while high computational accuracy is achieved. Therefore, the proposed method is particularly suitable to solve the dynamical responses of periodic structures. Two numerical examples are presented to demonstrate the accuracy and efficiency of the proposed method through comparison with the Newmark and Runge-Kutta methods.
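The abstract does not give formulas, but the core of the precise integration method is computing the transfer matrix exp(Hτ) by a 2^N scaling-and-squaring recursion that propagates only the small increment Ta = exp(H·dt) − I. A minimal numpy sketch of that idea (not the paper's subdomain implementation):

```python
import numpy as np

def precise_exp(H, tau, N=20, taylor_terms=4):
    """Approximate exp(H*tau) by precise integration: Taylor-expand over the
    tiny step dt = tau/2^N, then square N times. Propagating only the
    increment Ta = exp(H*dt) - I avoids the roundoff of adding tiny terms
    directly to the identity."""
    n = H.shape[0]
    dt = tau / (2 ** N)
    Ta = np.zeros_like(H, dtype=float)
    term = np.eye(n)
    for k in range(1, taylor_terms + 1):
        term = term @ (H * dt) / k
        Ta = Ta + term
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta        # (I + Ta)^2 = I + (2*Ta + Ta^2)
    return np.eye(n) + Ta

# Check on an undamped oscillator: exp(H*pi) is a rotation by pi, i.e. -I.
H = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = precise_exp(H, np.pi)
```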

  4. Standardization of iodine-129 by the TDCR liquid scintillation method and 4π β-γ coincidence counting

    Science.gov (United States)

    Cassette, P.; Bouchard, J.; Chauvenet, B.

    1994-01-01

    Iodine-129 is a long-lived fission product, with physical and chemical properties that make it a good candidate for evaluating the environmental impact of the nuclear energy fuel cycle. To avoid solid source preparation problems, liquid scintillation has been used to standardize this nuclide for a EUROMET intercomparison. Two methods were used to measure the iodine-129 activity: triple-to-double-coincidence ratio liquid scintillation counting and 4π β-γ coincidence counting; the results are in good agreement.

  5. Comparison of platelet counts by sysmex XE 2100 and LH-750 with the international flow reference method in thrombocytopenic patients

    Directory of Open Access Journals (Sweden)

    Tina Dadu

    2013-01-01

Background: There are several methods for counting platelets, of which the international flow reference method (IRM) is considered to be the gold standard. We compared the platelet count given by this method to the count given by automated analyzers using other methods, such as optical fluorescence and impedance. Aims: To compare the platelet counts obtained by the Sysmex XE 2100 by impedance (Sysmex-I), by optical fluorescence (Sysmex-O) and as reported (Sysmex-R) based on the switching algorithm, and by the LH-750 by impedance, with the IRM in thrombocytopenic blood samples, and to calculate the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of the various technologies at the clinically relevant transfusion thresholds of 10 × 10⁹/l and 20 × 10⁹/l. Materials and Methods: A total of 118 blood samples with platelet counts of <50 × 10⁹/l were selected for the study. Platelet counts of all samples were analyzed by all methods using the Sysmex analyzer, LH-750 and IRM in parallel within 6 h of collection. Statistical Analysis Used: Pearson correlation, Bland-Altman analysis, sensitivity and specificity, PPV and NPV. Results and Conclusions: Sysmex-R had the least bias and 95% limits of agreement (95% LA) range and thus correlated best with IRM values. LH-750 had a higher bias compared to Sysmex-O and Sysmex-R, but a strikingly similar 95% LA ensures similar results in all three methods. In fact, in the oncology subset, it had the narrowest 95% LA, which made it the best performer in this subgroup. Of the three Sysmex results, Sysmex-I had the highest bias, widest 95% LA and highest potential risk of over-transfusion. Hence, Sysmex-R and LH-750 were found to be reliable tools for estimation of platelet counts in thrombocytopenic patients.
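The bias and 95% limits of agreement used to rank the analyzers are standard Bland-Altman quantities; a small sketch with made-up paired counts (×10⁹/l), not the study's data:

```python
import numpy as np

def bland_altman(reference, method):
    """Bland-Altman bias (mean difference) and 95% limits of agreement
    between a candidate method and the reference."""
    diff = np.asarray(method, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired thrombocytopenic counts (stand-ins for IRM vs analyzer):
irm      = [8, 12, 18, 25, 33, 41, 47]
analyzer = [9, 11, 19, 27, 31, 43, 48]
bias, (la_lo, la_hi) = bland_altman(irm, analyzer)
```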

  6. A robust method for estimating motorbike count based on visual information learning

    Science.gov (United States)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. Under such conditions, the problem of separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem is gradually shifted towards drawing statistical inferences about target object density from shape [4], local features [5], etc. Those studies indicate a correlation between local features and the number of target objects, but they are inadequate for constructing an accurate model of vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations and can achieve high accuracy in the case of occlusions. First, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from the local features above. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method achieves better accuracy in comparison to others.
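The pipeline (local features → Bag-of-Words histogram → learned mapping to a count) can be sketched end to end in toy form. Everything below is synthetic: 2-D stand-ins for SURF descriptors, a random "vocabulary" instead of one learned by clustering, and plain least squares instead of the paper's regression model:

```python
import numpy as np

rng = np.random.default_rng(1)

def bow_histogram(local_features, vocabulary):
    """Assign each local descriptor to its nearest visual word; the word-count
    histogram is the image's global feature vector."""
    d2 = ((local_features[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    return np.bincount(d2.argmin(axis=1), minlength=len(vocabulary)).astype(float)

vocab = rng.normal(size=(8, 2))            # toy 8-word vocabulary

# Synthetic training images: more motorbikes -> proportionally more features.
counts = np.arange(1, 21)
X = np.array([bow_histogram(rng.normal(size=(10 * c, 2)), vocab) for c in counts])

# Learn a linear mapping from histogram to motorbike count.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, counts, rcond=None)
pred = A @ w
```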

  7. A numerical method for resonance integral calculations

    International Nuclear Information System (INIS)

    Tanbay, Tayfun; Ozgener, Bilge

    2013-01-01

    A numerical method has been proposed for resonance integral calculations and a cubic fit based on least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in energy domain. The accuracy of the method has been tested by performing computations of resonance integrals for uranium dioxide isolated rods and comparing the results with empirical values. (orig.)
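For orientation, a resonance integral of the form I = ∫ σ(E) dE/E can be evaluated numerically on a lethargy grid. The single-level Breit-Wigner parameters below are illustrative stand-ins (loosely in the style of the 6.67 eV resonance of 238U), not values from the paper:

```python
import numpy as np

def resonance_integral(sigma, e_lo, e_hi, n=200_001):
    """I = integral of sigma(E) dE/E, computed on a logarithmic (lethargy)
    grid where dE/E = du, which matches the 1/E slowing-down flux."""
    u = np.linspace(np.log(e_lo), np.log(e_hi), n)
    y = sigma(np.exp(u))
    return float((0.5 * (y[1:] + y[:-1]) * np.diff(u)).sum())  # trapezoid rule

# Illustrative single-level Breit-Wigner capture resonance (eV, barns):
E0, GAMMA, SIGMA0 = 6.67, 0.0275, 2.2e4
bw = lambda E: SIGMA0 * (GAMMA / 2) ** 2 / ((E - E0) ** 2 + (GAMMA / 2) ** 2)
I = resonance_integral(bw, 1.0, 100.0)  # roughly SIGMA0*pi*GAMMA/(2*E0)
```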

  8. Automated air-void system characterization of hardened concrete: Helping computers to count air-voids like people count air-voids---Methods for flatbed scanner calibration

    Science.gov (United States)

    Peterson, Karl

Since the discovery in the late 1930s that air entrainment can improve the durability of concrete, it has been important for people to know the quantity, spatial distribution, and size distribution of the air-voids in their concrete mixes in order to ensure a durable final product. The task of air-void system characterization has fallen on the microscopist, who, according to a standard test method laid forth by the American Society for Testing and Materials, must meticulously count or measure about a thousand air-voids per sample as exposed on a cut and polished cross-section of concrete. The equipment used to perform this task has traditionally included a stereomicroscope, a mechanical stage, and a tally counter. Over the past 30 years, with the availability of computers and digital imaging, automated methods have been introduced to perform the same task, but using the same basic equipment. The method described here replaces the microscope and mechanical stage with an ordinary flatbed desktop scanner, and replaces the microscopist and tally counter with a personal computer; two pieces of equipment much more readily available than a microscope with a mechanical stage, and certainly easier to find than a person willing to sit for extended periods of time counting air-voids. Most laboratories that perform air-void system characterization typically have cabinets full of prepared samples with corresponding results from manual operators. Proponents of automated methods often take advantage of this fact by analyzing the same samples and comparing the results. A similar iterative approach is described here, where scanned images collected from a significant number of samples are analyzed, the results compared to those of the manual operator, and the settings optimized to best approximate the results of the manual operator. The results of this calibration procedure are compared to an alternative calibration procedure based on the more rigorous digital image accuracy

  9. Counting the number of master integrals for sunrise diagrams via the Mellin-Barnes representation

    International Nuclear Information System (INIS)

    Kalmykov, Mikhail Yu.; Kniehl, Bernd A.

    2017-06-01

    A number of irreducible master integrals for L-loop sunrise and bubble Feynman diagrams with generic values of masses and external momenta are explicitly evaluated via the Mellin-Barnes representation.

  10. A comparison of point counts with a new acoustic sampling method ...

    African Journals Online (AJOL)

    We showed that the estimates of species richness, abundance and community composition based on point counts and post-hoc laboratory listening to acoustic samples are very similar, especially for a distance limited up to 50 m. Species that were frequently missed during both point counts and listening to acoustic samples ...

  11. A method for the measurement of the intrinsic dead time of a counting system

    International Nuclear Information System (INIS)

    Wyllie, H.A.

    1989-01-01

    Equations are derived for (a) the determination of the intrinsic dead time of a counting system in the components preceding the paralysis unit which imposes the set dead time, and (b) a more accurate correction of count rates in a single-channel system, taking into account the extension of the set dead time by the intrinsic dead time. (author)
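The abstract does not reproduce the equations, but the standard non-extending (non-paralyzable) dead-time model illustrates the point: correcting with only the set dead time under-corrects when the preceding electronics add an intrinsic dead time. A sketch with hypothetical values:

```python
def true_rate_nonextending(measured_rate, dead_time):
    """Standard non-extending dead-time correction: n = m / (1 - m*tau)."""
    return measured_rate / (1.0 - measured_rate * dead_time)

def measured_rate_nonextending(true_rate, dead_time):
    """Inverse relation: m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * dead_time)

# Hypothetical numbers: a 10 us set dead time extended by a 0.4 us intrinsic
# contribution from the components preceding the paralysis unit.
tau_set, tau_intrinsic = 10e-6, 0.4e-6          # seconds
m = measured_rate_nonextending(5_000.0, tau_set + tau_intrinsic)
n = true_rate_nonextending(m, tau_set + tau_intrinsic)
```

Correcting `m` with `tau_set` alone would recover less than the true 5 000 counts/s, which is why the intrinsic dead time must be measured and included.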

  12. Efficient orbit integration by manifold correction methods.

    Science.gov (United States)

    Fukushima, Toshio

    2005-12-01

Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The form into which the manifold correction methods finally evolved is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.
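A minimal instance of the general idea, assuming nothing beyond the abstract: after each raw integration step, rescale the velocity so the state is projected back onto the manifold of constant orbital energy (normalized two-body problem; crude Euler steps are chosen deliberately so the correction has something to do):

```python
import numpy as np

MU = 1.0  # gravitational parameter of the normalized two-body problem

def energy(r, v):
    return 0.5 * np.dot(v, v) - MU / np.linalg.norm(r)

def euler_step(r, v, h):
    a = -MU * r / np.linalg.norm(r) ** 3
    return r + h * v, v + h * a

def scale_velocity_to_energy(r, v, E0):
    """Manifold correction by scaling: rescale the velocity so the integrated
    state again satisfies the energy integral E(r, v) = E0."""
    v2_target = 2.0 * (E0 + MU / np.linalg.norm(r))
    return v * np.sqrt(v2_target / np.dot(v, v))

# Circular orbit integrated with and then corrected at every step.
r, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
E0 = energy(r, v)
for _ in range(1000):
    r, v = euler_step(r, v, 1e-3)
    v = scale_velocity_to_energy(r, v, E0)
```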

  13. Impact of advanced and basic carbohydrate counting methods on metabolic control in patients with type 1 diabetes.

    Science.gov (United States)

    Souto, Débora Lopes; Zajdenverg, Lenita; Rodacki, Melanie; Rosado, Eliane Lopes

    2014-03-01

    Diets based on carbohydrate counting remain a key strategy for improving glycemic control in patients with type 1 diabetes. However, these diets may promote weight gain because of the flexibility in food choices. The aim of this study was to compare carbohydrate counting methods regarding anthropometric, biochemical, and dietary variables in individuals with type 1 diabetes, as well as to evaluate their knowledge about nutrition. Participants were allocated in basic or advanced groups. After 3 mo of the nutritional counseling, dietary intake, anthropometric variables, lipemia, and glycemic control were compared between groups. A questionnaire regarding carbohydrate counting, sucrose intake, nutritional knowledge, and diabetes and nutrition taboos also was administered. Ten (30%) participants had already used advanced carbohydrate counting before the nutritional counseling and these individuals had a higher body mass index (BMI) (P 1) and waist circumference (WC) (P = 0.01) than others (n = 23; 69.7%). After 3 mo of follow-up, although participants in the advanced group (n = 17; 51.52%) presented higher BMI (P 1) and WC (P = 0.03), those in the basic group (n = 16; 48.48%) showed a higher fat intake (P 1). The majority of participants reported no difficulty in following carbohydrate counting (62.5% and 88% for basic and advanced groups, respectively) and a greater flexibility in terms of food choices (>90% with both methods). Advanced carbohydrate counting did not affect lipemic and glycemic control in individuals with type 1 diabetes, however, it may increase food intake, and consequently the BMI and WC, when compared to basic carbohydrate counting. Furthermore, carbohydrate counting promoted greater food flexibility. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Standard test method for determining nodularity and nodule count in ductile iron using image analysis

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

1.1 This test method is used to determine the percent nodularity and the nodule count per unit area (that is, number of nodules per mm2) using a light microscopical image of graphite in nodular cast iron. Images generated by other devices, such as a scanning electron microscope, are not specifically addressed, but can be utilized if the system is calibrated in both x and y directions. 1.2 Measurement of secondary or temper carbon in other types of cast iron, for example, malleable cast iron or in graphitic tool steels, is not specifically included in this standard because of the different graphite shapes and sizes inherent to such grades. 1.3 This standard deals only with the recommended test method and nothing in it should be construed as defining or establishing limits of acceptability or fitness for purpose of the material tested. 1.4 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.5 This standard does not purport to address al...

  15. Validation of analytical methods in GMP: the disposable Fast Read 102® device, an alternative practical approach for cell counting

    Directory of Open Access Journals (Sweden)

    Gunetti Monica

    2012-05-01

Background: The quality and safety of advanced therapy products must be maintained throughout their production and quality control cycle to ensure their final use in patients. We validated the cell count method according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) and the European Pharmacopoeia, considering the tests' accuracy, precision, repeatability, linearity and range. Methods: As the cell count is a potency test, we checked accuracy, precision, and linearity, according to ICH Q2. Briefly, our experimental approach was first to evaluate the accuracy of Fast Read 102® compared to the Bürker chamber. Once the accuracy of the alternative method was demonstrated, we checked the precision and linearity tests using only Fast Read 102®. The data were statistically analyzed by average, standard deviation and inter- and intra-operator coefficient of variation percentages. Results: All the tests performed met the established acceptance criterion of a coefficient of variation of less than ten percent. For the cell count, the precision reached by each operator had a coefficient of variation of less than ten percent (total cells) and under five percent (viable cells). The best range of dilution, to obtain a slope very similar to 1, was between 1:8 and 1:128. Conclusions: Our data demonstrate that the Fast Read 102® count method is accurate and precise and ensures the linearity of the results obtained over a range of cell dilutions. Under our standard method procedures, this assay may thus be considered a good quality control method for the cell count as a batch-release quality control test. Moreover, the Fast Read 102® chamber is a plastic, disposable device that allows a number of samples to be counted in the same chamber. Last but not least, it overcomes the problem of chamber washing after use and so allows a cell count in a clean environment such as that in a
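The acceptance criterion above (coefficient of variation below ten percent overall, below five percent for viable cells) is a one-line computation; the replicate counts here are hypothetical:

```python
import statistics

def cv_percent(counts):
    """Coefficient of variation, in percent, of replicate cell counts."""
    return 100.0 * statistics.stdev(counts) / statistics.mean(counts)

def passes_acceptance(counts, limit_percent):
    return cv_percent(counts) < limit_percent

# Hypothetical replicate viable-cell counts (cells/ml) from one operator:
replicates = [1.02e6, 0.98e6, 1.05e6, 1.00e6, 0.99e6]
ok_total  = passes_acceptance(replicates, 10.0)  # overall criterion
ok_viable = passes_acceptance(replicates, 5.0)   # stricter viable-cell criterion
```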

  16. A pulse stacking method of particle counting applied to position sensitive detection

    International Nuclear Information System (INIS)

    Basilier, E.

    1976-03-01

    A position sensitive particle counting system is described. A cyclic readout imaging device serves as an intermediate information buffer. Pulses are allowed to stack in the imager at very high counting rates. Imager noise is completely discriminated to provide very wide dynamic range. The system has been applied to a detector using cascaded microchannel plates. Pulse height spread produced by the plates causes some loss of information. The loss is comparable to the input loss of the plates. The improvement in maximum counting rate is several hundred times over previous systems that do not permit pulse stacking. (Auth.)

  17. Bacteriocidal activity of sanitizers against Enterococcus faecium attached to stainless steel as determined by plate count and impedance methods.

    Science.gov (United States)

    Andrade, N J; Bridgeman, T A; Zottola, E A

    1998-07-01

    Enterococcus faecium attached to stainless steel chips (100 mm²) was treated with the following sanitizers: sodium hypochlorite, peracetic acid (PA), peracetic acid plus an organic acid (PAS), quaternary ammonium, organic acid, and anionic acid. The effectiveness of the sanitizer solutions on planktonic (unattached) cells was evaluated by the Association of Official Analytical Chemists (AOAC) suspension test. The number of attached cells was determined by impedance measurement and by the plate count method after vortexing. The decimal reduction (DR) in numbers of the E. faecium population was determined for the three methods (the AOAC suspension test, the plate count method after vortexing, and impedance measurement) and was analyzed by analysis of variance. The plate count and impedance methods showed a significant difference, and impedance measurement was the best method to measure adherent cells. Impedance measurement required the development of a quadratic regression. The equation developed from 82 samples is as follows: log CFU/chip = 0.2385T² - 0.96T + 9.35, r² = 0.92, with predictions consistent with the plate count method after vortexing. These data suggest that impedance measurement is the method of choice when evaluating the number of bacterial cells adhered to a surface.
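The reported calibration can be applied in the forward direction; a minimal sketch (T is assumed here to be the impedance detection time in hours, and the function names are ours, not the study's) that evaluates the quadratic and back-transforms to CFU per chip:

```python
# Evaluate the reported quadratic calibration between impedance detection
# time T (assumed hours) and attached-cell density:
#     log10(CFU/chip) = 0.2385*T**2 - 0.96*T + 9.35   (r^2 = 0.92)

def log_cfu_per_chip(t_hours: float) -> float:
    """Predicted log10 CFU per chip for an impedance detection time."""
    return 0.2385 * t_hours**2 - 0.96 * t_hours + 9.35

def cfu_per_chip(t_hours: float) -> float:
    """Back-transform the prediction to CFU per chip."""
    return 10 ** log_cfu_per_chip(t_hours)

print(round(log_cfu_per_chip(2.0), 3))  # 8.384
```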

  18. Statistical Methods for Unusual Count Data: Examples From Studies of Microchimerism

    Science.gov (United States)

    Guthrie, Katherine A.; Gammill, Hilary S.; Kamper-Jørgensen, Mads; Tjønneland, Anne; Gadi, Vijayakrishna K.; Nelson, J. Lee; Leisenring, Wendy

    2016-01-01

    Natural acquisition of small amounts of foreign cells or DNA, referred to as microchimerism, occurs primarily through maternal-fetal exchange during pregnancy. Microchimerism can persist long-term and has been associated with both beneficial and adverse human health outcomes. Quantitative microchimerism data present challenges for statistical analysis, including a skewed distribution, excess zero values, and occasional large values. Methods for comparing microchimerism levels across groups while controlling for covariates are not well established. We compared statistical models for quantitative microchimerism values, applied to simulated data sets and 2 observed data sets, to make recommendations for analytic practice. Modeling the level of quantitative microchimerism as a rate via Poisson or negative binomial model with the rate of detection defined as a count of microchimerism genome equivalents per total cell equivalents tested utilizes all available data and facilitates a comparison of rates between groups. We found that both the marginalized zero-inflated Poisson model and the negative binomial model can provide unbiased and consistent estimates of the overall association of exposure or study group with microchimerism detection rates. The negative binomial model remains the more accessible of these 2 approaches; thus, we conclude that the negative binomial model may be most appropriate for analyzing quantitative microchimerism data. PMID:27769989
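As a rough illustration of the modelling recommendation, here is a pure-Python sketch (the counts and exposures are invented, not study data) that computes the detection rate per total cell equivalents tested and checks for the overdispersion that motivates a negative binomial over a Poisson model:

```python
# Hypothetical microchimerism data: genome equivalents detected (counts)
# and total cell equivalents tested (exposure) per subject.
counts    = [0, 0, 1, 0, 3, 0, 0, 12, 0, 1]
exposures = [50_000, 60_000, 55_000, 40_000, 70_000,
             65_000, 52_000, 48_000, 58_000, 62_000]

# Overall detection rate per 100,000 cell equivalents: the "rate" that a
# Poisson or negative binomial model with an exposure term targets.
rate = sum(counts) / sum(exposures) * 1e5

# Method-of-moments check for overdispersion: under a Poisson model the
# variance of the counts should match their mean; variance much larger
# than the mean argues for the negative binomial.
n = len(counts)
mean_c = sum(counts) / n
var_c = sum((c - mean_c) ** 2 for c in counts) / (n - 1)
overdispersed = var_c > mean_c

print(round(rate, 2), round(mean_c, 2), round(var_c, 2), overdispersed)
```

With these skewed, zero-heavy counts the sample variance far exceeds the mean, exactly the situation the abstract describes.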

  19. Validation of analytical methods in GMP: the disposable Fast Read 102® device, an alternative practical approach for cell counting.

    Science.gov (United States)

    Gunetti, Monica; Castiglia, Sara; Rustichelli, Deborah; Mareschi, Katia; Sanavio, Fiorella; Muraro, Michela; Signorino, Elena; Castello, Laura; Ferrero, Ivana; Fagioli, Franca

    2012-05-31

    The quality and safety of advanced therapy products must be maintained throughout their production and quality control cycle to ensure their final use in patients. We validated the cell count method according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use and European Pharmacopoeia, considering the tests' accuracy, precision, repeatability, linearity and range. As the cell count is a potency test, we checked accuracy, precision, and linearity, according to ICH Q2. Briefly, our experimental approach was first to evaluate the accuracy of Fast Read 102® compared to the Bürker chamber. Once the accuracy of the alternative method was demonstrated, we checked the precision and linearity test only using Fast Read 102®. The data were statistically analyzed by average, standard deviation and inter- and intra-operator coefficient of variation percentages. All the tests performed met the established acceptance criteria of a coefficient of variation of less than ten percent. For the cell count, the precision reached by each operator had a coefficient of variation of less than ten percent (total cells) and under five percent (viable cells). The best range of dilution, to obtain a slope line value very similar to 1, was between 1:8 and 1:128. Our data demonstrated that the Fast Read 102® count method is accurate, precise and ensures the linearity of the results obtained in a range of cell dilution. Under our standard method procedures, this assay may thus be considered a good quality control method for the cell count as a batch release quality control test. Moreover, the Fast Read 102® chamber is a plastic, disposable device that allows a number of samples to be counted in the same chamber. Last but not least, it overcomes the problem of chamber washing after use and so allows a cell count in a clean environment such as that in a Cell Factory. In a good manufacturing practice setting the disposable

  20. MOSS-5: A Fast Method of Approximating Counts of 5-Node Graphlets in Large Graphs

    KAUST Repository

    Wang, Pinghui; Zhao, Junzhou; Zhang, Xiangliang; Li, Zhenguo; Cheng, Jiefeng; Lui, John C.S.; Towsley, Don; Tao, Jing; Guan, Xiaohong

    2017-01-01

    Counting 3-, 4-, and 5-node graphlets in graphs is important for graph mining applications such as discovering abnormal/evolution patterns in social and biology networks. In addition, it has recently been widely used for computing similarities between

  1. Optimal staining methods for delineation of cortical areas and neuron counts in human brains.

    Science.gov (United States)

    Uylings, H B; Zilles, K; Rajkowska, G

    1999-04-01

    For cytoarchitectonic delineation of cortical areas in the human brain, the Gallyas staining for somata, with its sharp contrast between cell bodies and neuropil, is preferable to the classical Nissl staining, the more so when an image analysis system is used. This Gallyas staining, however, does not appear to be appropriate for counting neuron numbers in the pertinent brain areas, due to the lack of distinct cytological features distinguishing small neurons from glial cells. For cell counting, Nissl staining is preferable. In an optimal design for cell counting, at least both the Gallyas and the Nissl staining must be applied: the former staining for cytoarchitectural delineation of cortical areas and the latter for counting the number of neurons in the pertinent cortical areas. Copyright 1999 Academic Press.

  2. Bioassay method for Uranium in urine by Delay Neutron counting; Metoda Bioassay Uranium dalam urin dengan pencacahan Netron Kasip

    Energy Technology Data Exchange (ETDEWEB)

    Suratman,; Purwanto,; Sukarman-Aminjoyo, [Yogyakarta Nuclear Research Centre, National Atomic Energy Agency, Yogyakarta (Indonesia)

    1996-04-15

    A bioassay method for uranium in urine by delayed neutron counting has been studied. The aim of this research is to obtain a bioassay method for uranium in urine to be used for the determination of the internal dose of radiation workers. The bioassay was applied to artificially uranium-contaminated urine, with the weight of the contaminant varied. The uranium in the urine was irradiated in the Kartini reactor core through a pneumatic system, and the delayed neutrons were counted by a BF3 neutron counter. Recovery of the bioassay was between 69.8 and 88.8%, the standard deviation was less than 10%, and the minimum detection limit was 0.387 µg.

  3. Neutron radiography imaging with 2-dimensional photon counting method and its problems

    International Nuclear Information System (INIS)

    Ikeda, Y.; Kobayashi, H.; Niwa, T.; Kataoka, T.

    1988-01-01

    An ultra-sensitive neutron imaging system has been devised with a 2-dimensional photon counting camera (ARGUS 100). The imaging system is composed of a 2-dimensional single photon counting tube and a low-background vidicon, followed by an image processing unit and frame memories. Using this imaging system, electronic neutron radiography (NTV) has been possible at neutron fluxes of less than 3 × 10⁴ n/cm²·s. (author)

  4. Making the Climate Count: Climate Policy Integration and Coherence in Finland

    OpenAIRE

    Kivimaa, Paula; Mickwitz, Per

    2009-01-01

    Tackling climate change in Finland and other industrialised countries requires major changes in production processes and consumption patterns. These changes will not take place unless climate change becomes a crucial factor in general and sector-specific policy-making. In this report climate policy integration in Finland is studied at different levels of policy-making: at the national level, regionally in Kymenlakso and the Metropolitan Area, as well as in the city of Helsinki and the town of...

  5. Standardization of I-125 solution by extrapolation of an efficiency curve obtained by the X-(X-γ) coincidence counting method

    International Nuclear Information System (INIS)

    Iwahara, A.

    1989-01-01

    The activity concentration of ¹²⁵I was determined by the X-(X-γ) coincidence counting method with an efficiency extrapolation curve. The measurement system consists of two thin NaI(Tl) scintillation detectors which are horizontally movable on a track. The efficiency curve is obtained by symmetrically changing the distance between the source and the detectors, and the activity is determined by applying a linear extrapolation of the efficiency curve. All sum-coincidence events are included within a 10-100 keV counting window, and the main source of uncertainty comes from poor counting statistics near zero efficiency. The consistency of the results with those of other methods shows that this technique can be applied to photon cascade emitters whose photons are not discriminated by the detectors. The 35.5 keV gamma-ray emission probability of ¹²⁵I was also determined by using a Gamma-X type high-purity germanium detector. (author) [pt

  6. Comparison of McMaster and FECPAKG2 methods for counting nematode eggs in the faeces of alpacas.

    Science.gov (United States)

    Rashid, Mohammed H; Stevenson, Mark A; Waenga, Shea; Mirams, Greg; Campbell, Angus J D; Vaughan, Jane L; Jabbar, Abdul

    2018-05-02

    This study aimed to compare the FECPAK G2 and McMaster techniques for counting gastrointestinal nematode eggs in the faeces of alpacas, using two floatation solutions (saturated sodium chloride and sucrose solutions). Faecal egg counts from the two techniques were compared using Lin's concordance correlation coefficient and Bland-Altman statistics. Results showed moderate to good agreement between the two methods, with better agreement achieved when saturated sugar solution is used as the floatation fluid, particularly when faecal egg counts are less than 1000 eggs per gram of faeces. To the best of our knowledge this is the first study to assess agreement between the McMaster and FECPAK G2 methods for estimating faecal egg counts in South American camelids.
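Lin's concordance correlation coefficient used in the comparison above is straightforward to compute directly; a minimal sketch (the paired egg counts below are invented for illustration, not the study's data):

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two methods.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (1/n) moments as in Lin (1989). CCC penalizes both
    poor correlation and systematic bias between the two methods.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Hypothetical paired faecal egg counts (eggs per gram) from two methods:
mcmaster = [100.0, 250.0, 400.0, 800.0, 1500.0]
fecpak   = [120.0, 230.0, 420.0, 760.0, 1400.0]
print(round(lins_ccc(mcmaster, fecpak), 3))  # 0.995
```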

  7. Integral Methods in Science and Engineering

    CERN Document Server

    Constanda, Christian

    2011-01-01

    An enormous array of problems encountered by scientists and engineers are based on the design of mathematical models using many different types of ordinary differential, partial differential, integral, and integro-differential equations. Accordingly, the solutions of these equations are of great interest to practitioners and to science in general. Presenting a wealth of cutting-edge research by a diverse group of experts in the field, Integral Methods in Science and Engineering: Computational and Analytic Aspects gives a vivid picture of both the development of theoretical integral techniques

  8. Method of manufacturing Josephson junction integrated circuits

    International Nuclear Information System (INIS)

    Jillie, D.W. Jr.; Smith, L.N.

    1985-01-01

    Josephson junction integrated circuits of the current-injection type and magnetically controlled type utilize a superconductive layer that forms both the Josephson junction electrodes for the Josephson junction devices on the integrated circuit and a ground plane for the integrated circuit. Large-area Josephson junctions are utilized for effecting contact to lower superconductive layers, and islands are formed in superconductive layers to provide isolation between the ground-plane function and the Josephson junction electrode function, as well as to effect crossovers. A superconductor-barrier-superconductor trilayer patterned by local anodization is also utilized, with additional layers formed thereover. Methods of manufacturing the embodiments of the invention are disclosed

  9. Variational method for integrating radial gradient field

    Science.gov (United States)

    Legarda-Saenz, Ricardo; Brito-Loeza, Carlos; Rivera, Mariano; Espinosa-Romero, Arturo

    2014-12-01

    We propose a variational method for integrating information obtained from a circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup, and then we formulate the problem of recovering the wavefront using techniques from the calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.

  10. Evaluation of ICT filariasis card test using whole capillary blood: comparison with Knott's concentration and counting chamber methods.

    Science.gov (United States)

    Njenga, S M; Wamae, C N

    2001-10-01

    An immunochromatographic card test (ICT) that uses fingerprick whole blood instead of serum for the diagnosis of bancroftian filariasis has recently been developed. The card test was validated in the field in Kenya by comparing its sensitivity to the combined sensitivity of Knott's concentration and counting chamber methods. A total of 102 (14.6%) and 117 (16.7%) persons were found to be microfilaremic by Knott's concentration and counting chamber methods, respectively. The geometric mean intensities (GMI) were 74.6 microfilariae (mf)/ml and 256.5 mf/ml by Knott's concentration and counting chamber methods, respectively. All infected individuals detected by both Knott's concentration and counting chamber methods were also antigen positive by the ICT filariasis card test (100% sensitivity). Further, of 97 parasitologically amicrofilaremic persons, 24 (24.7%) were antigen positive by the ICT. The overall prevalence of antigenemia was 37.3%. Of 100 nonendemic-area control persons, none was found to be filarial antigen positive (100% specificity). The results show that the new version of the ICT filariasis card test is a simple, sensitive, specific, and rapid test that is convenient in field settings.

  11. Mining method selection by integrated AHP and PROMETHEE method.

    Science.gov (United States)

    Bogdanovic, Dejan; Nikolic, Djordje; Ilic, Ivana

    2012-03-01

    Selecting the best mining method among many alternatives is a multicriteria decision making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The related problem includes five possible mining methods and eleven criteria to evaluate them. Criteria are accurately chosen in order to cover the most important parameters that impact on the mining method selection, such as geological and geotechnical properties, economic parameters and geographical factors. The AHP is used to analyze the structure of the mining method selection problem and to determine weights of the criteria, and PROMETHEE method is used to obtain the final ranking and to make a sensitivity analysis by changing the weights. The results have shown that the proposed integrated method can be successfully used in solving mining engineering problems.
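The AHP/PROMETHEE combination described above can be sketched compactly: AHP supplies the criterion weights, and PROMETHEE II ranks alternatives by their net outranking flow. A minimal sketch using the simple "usual" (0/1 step) preference function; the alternatives, criteria and weights below are illustrative, not the "Coka Marin" study data:

```python
def promethee_ii(scores, weights, maximize):
    """Return the PROMETHEE II net outranking flow of each alternative.

    scores[i][k] is the performance of alternative i on criterion k,
    weights[k] the (e.g. AHP-derived) weight of criterion k, and
    maximize[k] whether larger values are preferred on criterion k.
    """
    n = len(scores)
    flows = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # pi(i, j): sum of weights of criteria where i strictly beats j
            # (usual criterion: preference is 0 or 1, no threshold).
            pref = sum(
                w for k, w in enumerate(weights)
                if scores[i][k] != scores[j][k]
                and (scores[i][k] > scores[j][k]) == maximize[k]
            )
            flows[i] += pref / (n - 1)  # contributes to positive flow of i
            flows[j] -= pref / (n - 1)  # and to negative flow of j
    return flows

# Three candidate mining methods scored on ore recovery (max), production
# rate (max) and cost (min), with weights 0.5 / 0.3 / 0.2:
scores = [[0.8, 0.6, 5.0],
          [0.6, 0.9, 4.0],
          [0.7, 0.7, 6.0]]
flows = promethee_ii(scores, [0.5, 0.3, 0.2], [True, True, False])
print([round(f, 3) for f in flows])  # [0.2, 0.0, -0.2]
```

The sensitivity analysis mentioned in the abstract amounts to re-running this ranking with perturbed weight vectors and checking whether the top alternative changes.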

  12. Integrating chronological uncertainties for annually laminated lake sediments using layer counting, independent chronologies and Bayesian age modelling (Lake Ohau, South Island, New Zealand)

    Science.gov (United States)

    Vandergoes, Marcus J.; Howarth, Jamie D.; Dunbar, Gavin B.; Turnbull, Jocelyn C.; Roop, Heidi A.; Levy, Richard H.; Li, Xun; Prior, Christine; Norris, Margaret; Keller, Liz D.; Baisden, W. Troy; Ditchburn, Robert; Fitzsimons, Sean J.; Bronk Ramsey, Christopher

    2018-05-01

    Annually resolved (varved) lake sequences are important palaeoenvironmental archives as they offer a direct incremental dating technique for high-frequency reconstruction of environmental and climate change. Despite the importance of these records, establishing a robust chronology and quantifying its precision and accuracy (estimations of error) remains an essential but challenging component of their development. We outline an approach for building reliable independent chronologies, testing the accuracy of layer counts and integrating all chronological uncertainties to provide quantitative age and error estimates for varved lake sequences. The approach incorporates (1) layer counts and estimates of counting precision; (2) radiometric and biostratigraphic dating techniques to derive an independent chronology; and (3) the application of Bayesian age modelling to produce an integrated age model. This approach is applied to a case study of an annually resolved sediment record from Lake Ohau, New Zealand. The most robust age model provides an average error of 72 years across the whole depth range. This represents a fractional uncertainty of ∼5%, higher than the <3% quoted for most published varve records. However, the age model and reported uncertainty represent the best fit between layer counts and independent chronology, and the uncertainties account for both layer counting precision and the chronological accuracy of the layer counts. This integrated approach provides a more representative estimate of age uncertainty and therefore represents a statistically more robust chronology.
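The full Bayesian age model above is built with specialist tools; as a toy illustration of why combining layer counts with independent dates shrinks the error, here is a simple inverse-variance weighted combination of two independent age estimates (all values invented, not Lake Ohau data):

```python
def combine_ages(age1, sd1, age2, sd2):
    """Inverse-variance weighted mean of two independent age estimates.

    Each estimate is (age, 1-sigma error); the combined error is always
    smaller than either input error.
    """
    w1, w2 = 1.0 / sd1**2, 1.0 / sd2**2
    age = (w1 * age1 + w2 * age2) / (w1 + w2)
    sd = (w1 + w2) ** -0.5
    return age, sd

# Layer count: 1500 +/- 75 yr; independent radiometric date: 1450 +/- 40 yr.
age, sd = combine_ages(1500.0, 75.0, 1450.0, 40.0)
print(round(age, 1), round(sd, 1))  # 1461.1 35.3
```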

  13. Method and apparstus for determining random coincidence count rate in a scintillation counter utilizing the coincidence technique

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1980-01-01

    A method and apparatus are described for the reliable determination of the random coincidence count attributable to chance coincidences of single-photon events which are each detected in only a single detector of a scintillation counter utilizing two detectors in a coincidence counting technique. A first delay device is employed to delay output pulses from one detector, and the delayed signal is then compared with the undelayed signal from the other detector in a coincidence circuit to obtain an approximate random coincidence count. The output of the coincidence circuit is applied to an anti-coincidence circuit, where it is corrected by elimination of pulses coincident with, and attributable to, conventionally detected real coincidences, and by elimination of pulses coincident with, and attributable to, real coincidences that have been delayed by a second delay device having the same time parameter as the first. 8 claims
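The quantity this apparatus estimates experimentally also has a standard analytic approximation: for two detectors with singles rates r1 and r2 and a coincidence resolving time τ, chance coincidences occur at roughly 2·τ·r1·r2 per second. A small sketch with illustrative numbers:

```python
# Expected chance (accidental) coincidence rate for a two-detector
# coincidence counter. r1, r2 are the singles rates (counts/s) seen by
# each detector and tau (s) is the coincidence resolving time; the
# delayed-channel measurement described above estimates the same
# quantity experimentally. All numbers here are illustrative.

def random_coincidence_rate(r1, r2, tau):
    return 2.0 * tau * r1 * r2

rate = random_coincidence_rate(r1=5000.0, r2=4000.0, tau=20e-9)
print(round(rate, 3))  # 0.8  (counts/s)
```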

  14. Evaluation of Bacteriological Quality of Ready-to-eat Chicken Products by Total Viable Count Method

    OpenAIRE

    Ramiz Raja; Asif Iqbal; Yasir Hafiz; Mehboob Willayet; Shakoor Bhat; Mudasir Rather

    2012-01-01

    The present investigation describes the total viable count of ready-to-eat chicken products (chicken patties and chicken rolls) in Srinagar city during two seasons, autumn and winter. A total of 120 ready-to-eat chicken products comprising 60 chicken patties and 60 chicken rolls were tested. The mean bacterial counts of the 60 chicken patties and 60 chicken rolls were 5.1281 and 4.9395 log10 cfu/g, respectively. Bacillus cereus strains were isolated from 25 of chicken patties and 22 of the chicken rolls r...

  15. Comparison of photon counting versus charge integration micro-CT within the irradiation setup PIXSCAN

    International Nuclear Information System (INIS)

    Ouamara, H.

    2013-01-01

    The pathway followed by the imXgam team at CPPM was to adapt the hybrid pixel technology XPAD to biomedical imaging. It is in this context that the micro-CT PIXSCAN II, based on the new generation of hybrid pixel detectors called XPAD3, has been developed. This thesis describes the process undertaken to assess the contribution of hybrid pixel technology to X-ray computed tomography in terms of contrast and dose, and to explore new opportunities for biomedical imaging at low doses. The performance evaluation, as well as the validation of results obtained with data acquired with the XPAD3 detector, was compared with results obtained with the CCD camera DALSA XR-4, similar to the detectors used in most conventional micro-CT systems. The XPAD3 detector yields reconstructed images of satisfactory quality, close to that of images from the DALSA XR-4 camera, but with a better spatial resolution. At low doses, the images from the XPAD3 detector have a better quality than those from the CCD camera. From an instrumentation point of view, this project demonstrated the proper operation of the PIXSCAN II device for mouse imaging. We were able to reproduce an image quality similar to that obtained with a charge integration detector such as a CCD camera. To improve the performance of the XPAD3 detector, we will have to optimize the stability of the thresholds and, in order to obtain more homogeneous response curves of the pixels as a function of energy, use a denser sensor such as CdTe. (author)

  16. Application of statistical methods to the testing of nuclear counting assemblies

    International Nuclear Information System (INIS)

    Gilbert, J.P.; Friedling, G.

    1965-01-01

    This report describes the application of the hypothesis test theory to the control of the 'statistical purity' and of the stability of the counting batteries used for measurements on activation detectors in research reactors. The principles involved and the experimental results obtained at Cadarache on batteries operating with the reactors PEGGY and AZUR are given. (authors) [fr

  17. Study of alternative methods for the management of liquid scintillation counting wastes

    International Nuclear Information System (INIS)

    Roche-Farmer, L.

    1980-02-01

    The Nuclear Engineering Waste Disposal Site in Richland, Washington, is the only radioactive waste disposal facility that will accept liquid scintillation counting wastes (LSCW) for disposal. That site is scheduled to discontinue receiving LSCW by the end of 1982. This document explores alternatives presently available for management of LSCW: evaporation, distillation, solidification, conversion, and combustion

  18. Visual versus mechanised leucocyte differential counts: costing and evaluation of traditional and Hemalog D methods.

    Science.gov (United States)

    Hudson, M J; Green, A E

    1980-11-01

    Visual differential counts were examined for efficiency, cost effectiveness, and staff acceptability within our laboratory. A comparison with the Hemalog D system was attempted. The advantages and disadvantages of each system are enumerated and discussed in the context of a large general hospital.

  19. Visual versus mechanised leucocyte differential counts: costing and evaluation of traditional and Hemalog D methods.

    OpenAIRE

    Hudson, M J; Green, A E

    1980-01-01

    Visual differential counts were examined for efficiency, cost effectiveness, and staff acceptability within our laboratory. A comparison with the Hemalog D system was attempted. The advantages and disadvantages of each system are enumerated and discussed in the context of a large general hospital.

  20. Water quality - Determination of tritium activity concentration - Liquid scintillation counting method (International Standard Publication ISO 9698:1989)

    International Nuclear Information System (INIS)

    Stefanik, J.

    1999-01-01

    This International Standard specifies a method for the determination of tritiated water ([³H]H₂O) activity concentration in water by liquid scintillation counting. The method is applicable to all types of water, including seawater, with tritium activity concentrations of up to 10⁶ Bq/m³ when using 20 ml counting vials. Below tritium activity concentrations of about 5 × 10⁴ Bq/m³ [8], a prior enrichment step and/or the measurement of larger sample volumes can significantly improve the accuracy of the determination and lower the limit of detection. Tritium activity concentrations higher than 10⁶ Bq/m³ may be determined after appropriate dilution with distilled water of proven low tritium content. An alternative method for the determination of these higher activities involves increasing the tritium activity concentration of the internal standard solution. The method is not applicable to the analysis of organically bound tritium; its determination requires an oxidative digestion
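The standard itself prescribes the full procedure; as a sketch of the underlying arithmetic only, the activity concentration follows from the net count rate, the counting efficiency, and the sample volume (the symbols and numbers below are illustrative, not taken from ISO 9698):

```python
# Activity concentration from a liquid scintillation measurement via the
# usual net-count-rate formula:
#     a = (Rg - R0) / (eps * V)
# Rg:  gross count rate (counts/s)
# R0:  background count rate (counts/s)
# eps: counting efficiency (0..1)
# V:   sample volume (m^3)

def activity_concentration(gross_cps, bkg_cps, efficiency, volume_m3):
    """Activity concentration in Bq/m^3 from LSC counting data."""
    return (gross_cps - bkg_cps) / (efficiency * volume_m3)

# 10 mL of sample counted at 25% tritium counting efficiency:
a = activity_concentration(gross_cps=0.50, bkg_cps=0.20,
                           efficiency=0.25, volume_m3=10e-6)
print(round(a))  # 120000  (Bq/m^3)
```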

  1. Nonlinear structural analysis using integrated force method

    Indian Academy of Sciences (India)

    A new formulation termed the Integrated Force Method (IFM) was proposed by Patnaik ... nated ``Structure (n, m)'' where (n, m) are the force and displacement degrees of ..... Patnaik S N, Yadagiri S 1976 Frequency analysis of structures.

  2. RBC count

    Science.gov (United States)

    ... by kidney disease) RBC destruction (hemolysis) due to transfusion, blood vessel injury, or other cause Leukemia Malnutrition Bone ... slight risk any time the skin is broken) Alternative Names Erythrocyte count; Red blood cell count; Anemia - RBC count

  3. Counting probe

    International Nuclear Information System (INIS)

    Matsumoto, Haruya; Kaya, Nobuyuki; Yuasa, Kazuhiro; Hayashi, Tomoaki

    1976-01-01

    An electron counting method has been devised and tested experimentally for the purpose of measuring electron temperature and density, the most fundamental quantities representing plasma conditions. Electron counting is a method of counting the electrons in a plasma directly by equipping a probe with a secondary electron multiplier. It has three advantages: adjustable sensitivity, the high sensitivity of the secondary electron multiplier, and directionality. Sensitivity adjustment is performed by changing the size of the collecting hole (pin hole) on the incident front of the multiplier. The probe is usable as a direct-reading thermometer of electron temperature because it needs to collect only a very small number of electrons, so it does not disturb the surrounding plasma, and a narrow sweep width of the probe voltage is sufficient. It can therefore measure anisotropy more sensitively than a Langmuir probe, and it can be used for very low density plasma. Although many problems concerning anisotropy remain, computer simulation has been carried out. It is also planned to provide a Helmholtz coil in the vacuum chamber to eliminate the effect of the earth's magnetic field. In practical experiments, measurement with a Langmuir probe and an emission probe mounted on a movable structure, comparison with the results obtained in a reversed magnetic field using the Helmholtz coil, and measurement of ion acoustic waves are scheduled. (Wakatsuki, Y.)

  4. Indirect methods for wake potential integration

    International Nuclear Information System (INIS)

    Zagorodnov, I.

    2006-05-01

    The development of modern accelerator and free-electron laser projects requires consideration of the wake fields of very short bunches in arbitrary three-dimensional structures. To obtain the wake numerically by direct integration is difficult, since it takes a long time for the scattered fields to catch up to the bunch. On the other hand, no general algorithm for indirect wake field integration is available in the literature so far. In this paper we review the known indirect methods for computing wake potentials in rotationally symmetric and cavity-like three-dimensional structures. For arbitrary three-dimensional geometries we introduce several new techniques and test them numerically. (Orig.)

  5. Numerical methods for engine-airframe integration

    International Nuclear Information System (INIS)

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: the scientific computing environment for the 1980s, an overview of the prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel method to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment

  6. Resolving time of scintillation camera-computer system and methods of correction for counting loss, 2

    International Nuclear Information System (INIS)

    Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru

    1975-01-01

    Following the previous work, the counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the ''LIST'' mode, in which image data and timing signals were sequentially stored on magnetic disk or tape via a buffer memory. The second was the ''HISTOGRAM'' mode, in which image data were stored in a core memory as digital images and the images were then transferred to magnetic disk or tape on the frame timing signal. Firstly, the counting-rates stored in the buffer memory were measured as a function of the display event-rates of the scintillation camera for the two modes. For both modes, the stored counting-rates (M) were expressed by the following formula: M = N(1 - Nτ), where N was the display event-rate of the camera and τ was the resolving time, including analog-to-digital conversion time and memory cycle time. The resolving time for each mode may have been different, but it was about 10 μsec for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to the external memory, such as magnetic disk or tape, was considered for the two modes. For the ''LIST'' mode, the maximum value of the stored counting-rate from the camera was expressed in terms of the size of the buffer memory and the access time and data transfer-rate of the external memory. For the ''HISTOGRAM'' mode, the minimum frame time was determined by the size of the buffer memory and the access time and transfer rate of the external memory. In our system, the maximum value of the stored counting-rate was about 17,000 counts/sec with a buffer size of 2,000 words, and the minimum frame time was about 130 msec with a buffer size of 1024 words. These values agree well with the calculated ones. From the present analysis, design of the camera-computer system becomes possible for quantitative dynamic imaging, and future improvements are suggested. (author)
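The counting-loss relation quoted in the abstract, M = N(1 - Nτ), can be inverted to recover the true event rate from an observed stored rate. A short sketch using the roughly 10 µs resolving time reported for this system (the example rates are ours):

```python
import math

# Counting-loss model from the abstract: M = N * (1 - N * tau), with N
# the camera display event rate, M the rate stored in the buffer, and
# tau the resolving time. Solving the quadratic for N (taking the
# physical, smaller root) recovers the true rate from an observed M.

TAU = 10e-6  # resolving time, s (~10 microseconds, as quoted)

def stored_rate(n, tau=TAU):
    """Stored counting-rate M for a display event-rate n."""
    return n * (1.0 - n * tau)

def true_rate(m, tau=TAU):
    """Invert M = N(1 - N*tau); valid while m <= 1/(4*tau)."""
    return (1.0 - math.sqrt(1.0 - 4.0 * tau * m)) / (2.0 * tau)

m = stored_rate(20_000.0)              # 20 kcps displayed ...
print(round(m), round(true_rate(m)))   # 16000 20000  (4 kcps lost, then recovered)
```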

  7. Gender counts: A systematic review of evaluations of gender-integrated health interventions in low- and middle-income countries.

    Science.gov (United States)

    Schriver, Brittany; Mandal, Mahua; Muralidharan, Arundati; Nwosu, Anthony; Dayal, Radhika; Das, Madhumita; Fehringer, Jessica

    2017-11-01

    As a result of new global priorities, there is a growing need for high-quality evaluations of gender-integrated health programmes. This systematic review examined 99 peer-reviewed articles on evaluations of gender-integrated (accommodating and transformative) health programmes with regard to their theory of change (ToC), study design, gender integration in data collection and analysis, and the gender measures used. Half of the evaluations explicitly described a ToC or conceptual framework (n = 50) that guided their intervention strategies. Over half (61%) of the evaluations used quantitative methods exclusively; 11% used qualitative methods exclusively; and 28% used mixed methods. Qualitative methods were not commonly detailed. Evaluations of transformative interventions were less likely than those of accommodating interventions to employ randomised controlled trials. Two-thirds of the reviewed evaluations reported including at least one specific gender-related outcome (n = 18 accommodating, n = 44 transformative). To strengthen evaluations of gender-integrated programmes, we recommend the use of ToCs, explicitly including gender in the ToC, use of gender-sensitive measures, mixed-method designs, in-depth descriptions of qualitative methods, and attention to gender-related factors in data-collection logistics. We also recommend further research to develop valid and reliable gender measures that are globally relevant.

  8. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs.
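As a sketch of the kind of precision estimate such a report covers, the standard Poisson error propagation for a background-subtracted count rate can be written as follows; this is a generic textbook formula, not code from the report:

```python
import math

def net_rate_uncertainty(gross_counts, t_gross, bkg_counts, t_bkg):
    """Standard deviation of a net count rate under Poisson statistics:
    sigma_net = sqrt(G / t_g**2 + B / t_b**2), where G and B are the
    gross and background counts over live times t_g and t_b (seconds)."""
    return math.sqrt(gross_counts / t_gross ** 2 + bkg_counts / t_bkg ** 2)

# 4000 gross counts in 100 s against 900 background counts in 100 s
sigma = net_rate_uncertainty(4000, 100, 900, 100)  # 0.7 cps on a 31 cps net rate
```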

  9. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiments. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. 11 refs., 6 figs., 8 tabs. (Author)

  10. Radiation counting statistics

    International Nuclear Information System (INIS)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H.

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs

  11. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...
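A minimal two-sample permutation test illustrates the monograph's central point of depending only on the data at hand rather than a theoretical distribution; the mean-difference statistic and the shuffle count are illustrative choices:

```python
import random

def permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided two-sample permutation test on the difference of means:
    the p-value is the proportion of random label shuffles whose absolute
    mean difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm
```

For clearly separated samples such as [1, 2, 3, 4] versus [10, 11, 12, 13] the estimated p-value is small; for identical samples it is 1.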

  12. Analysis Method for Integrating Components of Product

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)

    2017-04-15

    This paper presents methods for integrating the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data, and the priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect character. This paper presents a design algorithm for component integration. The algorithm was applied to actual products, and the components inside each product were integrated; it was then used in research to improve brake discs for bicycles. As a result, an improved product conforming to the relation function structure was actually created.

  13. Analysis Method for Integrating Components of Product

    International Nuclear Information System (INIS)

    Choi, Jun Ho; Lee, Kun Sang

    2017-01-01

    This paper presents methods for integrating the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data, and the priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect character. This paper presents a design algorithm for component integration. The algorithm was applied to actual products, and the components inside each product were integrated; it was then used in research to improve brake discs for bicycles. As a result, an improved product conforming to the relation function structure was actually created.

  14. First integral method for an oscillator system

    Directory of Open Access Journals (Sweden)

    Xiaoqian Gong

    2013-04-01

    Full Text Available. In this article, we consider the nonlinear Duffing-van der Pol-type oscillator system by means of the first integral method. This system has physical relevance as a model in certain flow-induced structural vibration problems, and includes the van der Pol oscillator and the damped Duffing oscillator as particular cases. Firstly, we apply the Division Theorem for two variables in the complex domain, which is based on the ring theory of commutative algebra, to explore a quasi-polynomial first integral of an equivalent autonomous system. Then, by solving an algebraic system, we derive the first integral of the Duffing-van der Pol-type oscillator system under a certain parametric condition.
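The oscillator family studied here can also be explored numerically. The RK4 integration below uses a Duffing-van der Pol-type equation with illustrative parameter values, not the specific parametric condition under which the article derives the first integral:

```python
def dvp_step(x, v, mu=0.5, beta=1.0, dt=1e-3):
    """One fourth-order Runge-Kutta step of the Duffing-van der Pol-type
    oscillator x'' - mu*(1 - x**2)*x' + x + beta*x**3 = 0
    (illustrative parameters)."""
    def f(x, v):
        return v, mu * (1.0 - x * x) * v - x - beta * x ** 3
    k1 = f(x, v)
    k2 = f(x + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
    k3 = f(x + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
    k4 = f(x + dt * k3[0], v + dt * k3[1])
    x += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, v

# integrate from (x, v) = (0.1, 0.0); the trajectory stays bounded
x, v = 0.1, 0.0
for _ in range(20000):
    x, v = dvp_step(x, v)
```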

  15. Traditional method of fish treatment, microbial count and palatability studies on spoiled fish

    Directory of Open Access Journals (Sweden)

    Abd Aziz, N. A.

    2013-01-01

    Full Text Available. Aims: To evaluate the microbial count and palatability acceptance of spoiled fish after treatment with traditionally used natural solutions. Methodology and results: To compare the microbial count of spoiled fish before and after treatment with natural solutions practiced by local people in Malaysia, 10 g of spoiled fish was rinsed with 100 mL of 0.1% of each natural solution: Averrhoa bilimbi extract, rice rinse water, rice vinegar, Citrus aurantifolia extract, salt, flour, and Tamarindus indica extract. Flesh of fish rinsed with rice vinegar was found to reduce the microbial count (CFU/mL = 0.37 × 10⁷) more than 4.5 times compared to untreated spoiled fish (CFU/mL = 1.67 × 10⁷). Spoiled fish treated with rice vinegar was prepared into a cutlet and fried. The cutlet was subjected to a palatability acceptance study by a group of residents of Palm Court Condominium, Brickfields, Kuala Lumpur. The palatability study showed Cronbach alpha reliabilities of 0.802 for taste, 0.888 for aroma, 0.772 for colour, 0.840 for texture or mouthfeel, and 0.829 for the physical structure of the cutlet. Conclusion, significance and impact of study: Treatment of spoiled fish using rice vinegar, as traditionally practiced by local people, showed a significant reduction in microbial count, and the vinegar-treated fish could be developed into a product that is safe and acceptable to the consumer.

  16. Liquid scintillation counting standardization of Na129I by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Rodriguez Barquero, L.; Grau Carles, A.; Grau Malonda, A.

    1996-01-01

    We describe a sample preparation procedure for liquid scintillation measurements of a stable solution of Na129I. The counting stability and spectral evolution of this solution are studied in HiSafe II, Ultima-Gold and Insta-Gel. The discrepancies between experimental and computed efficiencies are lower than 0.4%. The solution has been standardized in terms of activity concentration to an overall uncertainty of 0.46% (k = 1)

  17. The cosmological analysis of X-ray cluster surveys - I. A new method for interpreting number counts

    Science.gov (United States)

    Clerc, N.; Pierre, M.; Pacaud, F.; Sadibekova, T.

    2012-07-01

    We present a new method aimed at simplifying the cosmological analysis of X-ray cluster surveys. It is based on purely instrumental observable quantities considered in a two-dimensional X-ray colour-magnitude diagram (hardness ratio versus count rate). The basic principle is that even in rather shallow surveys, substantial information on cluster redshift and temperature is present in the raw X-ray data and can be statistically extracted; in parallel, such diagrams can be readily predicted from ab initio cosmological modelling. We illustrate the methodology for the case of a 100-deg² XMM survey having a sensitivity of ~10⁻¹⁴ erg s⁻¹ cm⁻² and fit, at the same time, the survey selection function, the cluster evolutionary scaling relations and the cosmology; our sole assumption - driven by the limited size of the sample considered in the case study - is that the local cluster scaling relations are known. We devote special attention to the realistic modelling of the count-rate measurement uncertainties and evaluate the potential of the method via a Fisher analysis. In the absence of individual cluster redshifts, the count rate and hardness ratio (CR-HR) method appears to be much more efficient than the traditional approach based on cluster counts (i.e. dn/dz, requiring redshifts). In the case where redshifts are available, our method performs similarly to the traditional mass function (dn/dM/dz) for the purely cosmological parameters, but better constrains the parameters defining the cluster scaling relations and their evolution. A further practical advantage of the CR-HR method is its simplicity: this fully top-down approach entirely bypasses the tedious steps of deriving cluster masses from X-ray temperature measurements.
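The two observables of the CR-HR plane are simple to form from band-resolved count rates. The hardness-ratio convention below is the common X-ray astronomy definition; the actual energy-band split is survey-specific and not given in this abstract:

```python
def hardness_ratio(hard_rate, soft_rate):
    """Conventional X-ray hardness ratio HR = (H - S) / (H + S), built
    from the count rates in a hard and a soft energy band; together with
    the total count rate it forms the CR-HR diagram."""
    return (hard_rate - soft_rate) / (hard_rate + soft_rate)

# a source with 300 hard-band and 100 soft-band counts per unit time
hr = hardness_ratio(300.0, 100.0)  # 0.5
```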

  18. Multistep Methods for Integrating the Solar System

    Science.gov (United States)

    1988-07-01

    Technical Report 1055: Multistep Methods for Integrating the Solar System. Panayotis A. Skordos, MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, supported by the Advanced Research Projects

  19. Liquid scintillation counting standardization of 125I in organic and inorganic samples by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Rodriguez Barquero, L.; Grau Malonda, A.; Los Arcos Merino, J.M.; Grau Carles, A.

    1994-01-01

    The liquid scintillation counting standardization of organic and inorganic samples of 125I by the CIEMAT/NIST method using five different scintillators is described. The discrepancies between experimental and computed efficiencies are lower than 1.4% and 1.7%, for inorganic and organic samples, respectively, in the interval 421-226 of quenching parameter. Both organic and inorganic solutions have been standardized in terms of activity concentration to an overall uncertainty of 0.76%

  20. Liquid scintillation counting standardization of 125I in organic and inorganic samples by the CIEMAT/NIST method

    International Nuclear Information System (INIS)

    Rodriguez Barquero, L.; Grau Malonda, A.; Los Arcos Merino, J. M.; Grau Carles, A.

    1994-01-01

    The liquid scintillation counting standardization of organic and inorganic samples of 125I by the CIEMAT/NIST method using five different scintillators is described. The discrepancies between experimental and computed efficiencies are lower than 1.4% and 1.7%, for inorganic and organic samples, respectively, in the interval 421-226 of quenching parameter. Both organic and inorganic solutions have been standardized in terms of activity concentration to an overall uncertainty of 0.76%. (Author) 14 refs

  1. High-performance integrated pick-up circuit for SPAD arrays in time-correlated single photon counting

    Science.gov (United States)

    Acconcia, Giulia; Cominelli, Alessandro; Peronio, Pietro; Rech, Ivan; Ghioni, Massimo

    2017-05-01

    The analysis of optical signals by means of Single Photon Avalanche Diodes (SPADs) has attracted widespread interest in recent years, and the development of multichannel high-performance Time-Correlated Single Photon Counting (TCSPC) acquisition systems has advanced rapidly. Concerning detector performance, best-in-class results have been obtained with custom technologies, which also lead to a strong dependence of the detector timing jitter on the threshold used to determine the onset of the photogenerated current flow. In this scenario, the avalanche-current pick-up circuit plays a key role in determining the timing performance of the TCSPC acquisition system, especially with a large array of SPAD detectors, because of electrical crosstalk issues. We developed a new current pick-up circuit based on a transimpedance amplifier structure, able to extract the timing information from a 50-μm-diameter custom-technology SPAD with a state-of-the-art timing jitter as low as 32 ps and suitable for use with SPAD arrays. In this paper we discuss the key features of this structure and present a new version of the pick-up circuit that also provides quenching capabilities in order to minimize the number of interconnections required, an aspect that becomes increasingly crucial in densely integrated systems.

  2. Comparison of Drive Counts and Mark-Resight As Methods of Population Size Estimation of Highly Dense Sika Deer (Cervus nippon) Populations.

    Directory of Open Access Journals (Sweden)

    Kazutaka Takeshita

    Full Text Available. Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive count method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk of observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at high deer density remain scarce. We compared the drive count and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km²) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were removed from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts in all years in comparison to the MR method. Increased overestimation in drive count estimates after the winter mass mortality event may be due to double counts arising from increased deer movement and recovery of body condition after the mitigation of density-dependent food limitation. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are too costly to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer deer abundance needs to be reconsidered.
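The mark-resight idea can be sketched with its simplest estimator, the Lincoln-Petersen formula; the study itself used a more elaborate MR model, and the numbers below are hypothetical:

```python
def lincoln_petersen(n_marked, n_sighted, n_resighted):
    """Simplest mark-resight abundance estimate: with M marked animals
    in the population, a resight survey seeing S animals of which R are
    marked gives N = M * S / R."""
    return n_marked * n_sighted / n_resighted

# 50 marked deer; 120 deer sighted, 30 of them marked
estimate = lincoln_petersen(50, 120, 30)  # 200.0
```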

  3. An improved method for 85Kr analysis by liquid scintillation counting and its application to atmospheric 85Kr determination

    International Nuclear Information System (INIS)

    Momoshima, Noriyuki; Inoue, Fumio; Sugihara, Shinji; Shimada, Jun; Taniguchi, Makoto

    2010-01-01

    Atmospheric 85Kr concentration at Fukuoka, Japan was determined by an improved 85Kr analytical method using liquid scintillation counting (LSC). An average value of 1.54 ± 0.05 Bq m⁻³ was observed in 2008, about twice that measured in 1981 at Fukuoka, indicating an average rate of increase of 29 mBq m⁻³ y⁻¹ over these 27 years. The analytical method developed involves collecting Kr from air using activated charcoal at liquid N2 temperature and purifying it using He at dry-ice temperature, followed by Kr separation by gas chromatography. An overall Kr recovery of 76.4 ± 8.1% was achieved when Kr was analyzed in 500-1000 l of air. The Kr isolated by gas chromatography was collected on silica gel in a quartz glass vial cooled to liquid N2 temperature, and the activity of 85Kr was measured with a low-background LS counter. The detection limit of 85Kr activity by the present analytical method is 0.0015 Bq at a 95% confidence level, including all propagation errors, which is equivalent to the 85Kr in 1.3 l of present-day air under the analytical conditions of 72.1% counting efficiency, 0.1597 cps background count rate, and 76.4% Kr recovery.
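The final conversion from count rate to activity can be sketched from the figures quoted in the abstract (72.1% counting efficiency, 0.1597 cps background, 76.4% Kr recovery); the correction chain itself is a generic sketch, not the authors' procedure in detail:

```python
def kr85_activity(gross_cps, bkg_cps=0.1597, counting_eff=0.721,
                  kr_recovery=0.764):
    """Net 85Kr activity (Bq) in the original air sample: subtract the
    background count rate, then divide by the counting efficiency and
    the overall Kr recovery of the separation procedure."""
    net_cps = gross_cps - bkg_cps
    return net_cps / (counting_eff * kr_recovery)
```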

  4. Standard Test Method for Oxygen Content Using a 14-MeV Neutron Activation and Direct-Counting Technique

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 This test method covers the measurement of oxygen concentration in almost any matrix by using a 14-MeV neutron activation and direct-counting technique. Essentially the same system may be used to determine oxygen concentrations ranging from over 50% to about 10 μg/g, or less, depending on the sample size and available 14-MeV neutron fluence rates. Note 1 - The range of analysis may be extended by using higher neutron fluence rates, larger samples, and higher counting-efficiency detectors. 1.2 This test method may be used on either solid or liquid samples, provided that they can be made to conform in size, shape, and macroscopic density during irradiation and counting to a standard sample of known oxygen content. Several variants of this method have been described in the technical literature. A monograph is available which provides a comprehensive description of the principles of activation analysis using a neutron generator (1). 1.3 The values stated in either SI or inch-pound units are to be regarded...
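The comparator principle in 1.2 reduces to a simple ratio once sample and standard are matched in size, shape and density; the numbers below are hypothetical:

```python
def oxygen_concentration(c_standard, counts_sample, counts_standard):
    """Relative (comparator) activation analysis: for a sample and a
    standard irradiated and counted under identical conditions, the
    oxygen concentration scales with the ratio of the direct-counted
    activities."""
    return c_standard * counts_sample / counts_standard

# standard containing 2.5% oxygen; sample yields 80% of the standard's counts
c = oxygen_concentration(2.5, 8000, 10000)  # 2.0 (%)
```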

  5. Method for determination of radon-222 in water by liquid scintillation counting

    International Nuclear Information System (INIS)

    Suomela, J.

    1993-06-01

    The procedure for the determination of radon-222 by liquid scintillation counting is quite specific to this radionuclide. Radon-222 is readily extracted from the water sample by an organic scintillant: the decay products of radon-222 remain in the water phase whilst radon-222 is extracted into the organic phase. Before measurement the sample is stored for three hours until equilibrium is reached between radon-222 and its alpha-emitting decay products. The alpha activity from radon-222 and its decay products is then measured in a liquid scintillation counter
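Because radon-222 decays appreciably between sampling and measurement (half-life 3.8235 d), results are normally decay-corrected back to the sampling time. The correction below is the standard exponential law, not a step spelled out in this record:

```python
import math

RN222_HALF_LIFE_H = 3.8235 * 24.0  # 222Rn half-life in hours

def decay_correct(measured_bq, hours_since_sampling):
    """Correct a measured 222Rn activity back to the sampling time:
    A0 = A * exp(lambda * t), with lambda = ln(2) / T_half."""
    lam = math.log(2.0) / RN222_HALF_LIFE_H
    return measured_bq * math.exp(lam * hours_since_sampling)
```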

  6. Oil Palm Counting and Age Estimation from WorldView-3 Imagery and LiDAR Data Using an Integrated OBIA Height Model and Regression Analysis

    Directory of Open Access Journals (Sweden)

    Hossein Mojaddadi Rizeei

    2018-01-01

    Full Text Available. The current study proposes a new method for oil palm age estimation and counting. A support vector machine (SVM) algorithm within object-based image analysis (OBIA) was implemented for oil palm counting. It was integrated with a height model and multiregression methods to accurately estimate tree age from tree height in five different plantation blocks. Multiregression and multi-kernel-size models were examined over five different oil palm plantation blocks to achieve the most optimized model for age estimation. A sensitivity analysis was conducted on four SVM kernel types (sigmoid (SIG), linear (LN), radial basis function (RBF), and polynomial (PL)) with their associated parameters (threshold values, gamma (γ), and penalty factor (c)) to obtain the optimal OBIA classification approach for each plantation block. Very high-resolution WorldView-3 (WV-3) imagery and light detection and ranging (LiDAR) data were used for oil palm detection and age assessment. The results of oil palm detection had an overall accuracy of 98.27%, 99.48%, 99.28%, 99.49%, and 97.49% for blocks A, B, C, D, and E, respectively. Moreover, the accuracy of the age estimation analysis was 90.1% for 3-year-old, 87.9% for 4-year-old, 88.0% for 6-year-old, 87.6% for 8-year-old, 79.1% for 9-year-old, and 76.8% for 22-year-old trees. Overall, the study revealed that remote sensing techniques can be useful for monitoring and detecting oil palm plantations for sustainable agricultural management.
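The height-to-age regression can be sketched in its simplest, single-predictor form; the data points below are invented for illustration, and the study's actual multiregression models are block-specific:

```python
def fit_line(heights, ages):
    """Ordinary least-squares fit of age = a * height + b, the simplest
    form of a height-based age regression."""
    n = len(heights)
    mx = sum(heights) / n
    my = sum(ages) / n
    sxx = sum((x - mx) ** 2 for x in heights)
    sxy = sum((x - mx) * (y - my) for x, y in zip(heights, ages))
    a = sxy / sxx
    return a, my - a * mx

# hypothetical (height in m, age in years) samples
a, b = fit_line([2.0, 3.5, 5.0, 7.0], [3.0, 4.0, 6.0, 8.0])
```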

  7. Passive non destructive assay of hull waste by gross neutron counting method

    International Nuclear Information System (INIS)

    Andola, Sanjay; Sur, Amit; Rawool, A.M.; Sharma, B.; Kaushik, T.C.; Gupta, S.C.; Basu, Sekhar; Raman Kumar; Agarwal, K.

    2014-01-01

    The special nuclear material accounting (SNMA) is an important and necessary issue in nuclear waste management. The hull waste generated from dissolution of spent fuel contains small amounts of uranium, plutonium and other actinides due to undissolved material trapped inside zircaloy tubes. We report here on the development of a passive hull monitoring system using a gross neutron counting technique and its implementation with semiautomatic instrumentation. The overall sensitivity of the 3He detector banks placed at 75 cm from the centre of the loaded hull cask comes out to 5.2 × 10⁻³ counts per neutron (c/n), while with a standard Pu-Be source placed in the same position it comes out to be 3.1 × 10³ c/n. The difference in efficiency is mainly because of the differences in the geometry and size of the hull cask, as well as the difference in the energy spectra of the hull waste and the Pu-Be source; this is accounted for through Monte Carlo computations. The Pu mass in solid waste comes out as expected and varies with the surface dose rate of the drum in an almost proportional manner. Being simple and less time consuming, this setup has been installed for routine assay of solid hull waste at NRB, Tarapur
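The step from a gross neutron count rate to a plutonium mass can be sketched using the 5.2 × 10⁻³ c/n sensitivity quoted above. The 240Pu spontaneous-fission neutron yield used here is an approximate literature value, and a real assay must also account for (α,n) neutrons and multiplication, so this is only an order-of-magnitude sketch:

```python
PU240_NEUTRONS_PER_G_S = 1020.0  # approx. spontaneous-fission yield of 240Pu

def pu240_effective_mass(net_count_rate, sensitivity=5.2e-3):
    """240Pu-effective mass (g) from a net neutron count rate (counts/s),
    given the detector sensitivity in counts per emitted neutron."""
    emitted_per_s = net_count_rate / sensitivity
    return emitted_per_s / PU240_NEUTRONS_PER_G_S
```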

  8. Determining random counts in liquid scintillation counting

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1979-01-01

    During measurements involving coincidence counting techniques, errors can arise due to the detection of chance or random coincidences in the multiple detectors used. A method and the electronic circuits necessary are here described for eliminating this source of error in liquid scintillation detectors used in coincidence counting. (UK)
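The size of the effect being corrected for is given by the standard chance-coincidence formula; this is the textbook expression, not the circuit described in the record:

```python
def accidental_rate(rate1, rate2, tau):
    """Chance (random) coincidence rate between two detectors with
    singles rates rate1 and rate2 (counts/s) and coincidence resolving
    time tau (s): R_acc = 2 * tau * R1 * R2."""
    return 2.0 * tau * rate1 * rate2

# two photomultipliers at 1e4 cps each with a 20 ns resolving window
r_acc = accidental_rate(1e4, 1e4, 20e-9)  # 4.0 chance counts/sec
```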

  9. Continual integration method in the polaron model

    International Nuclear Information System (INIS)

    Kochetov, E.A.; Kuleshov, S.P.; Smondyrev, M.A.

    1981-01-01

    The article is devoted to the investigation of a polaron system on the basis of a variational approach formulated in the language of continuum integration. A variational method generalizing Feynman's to the case of nonzero total momentum of the system is formulated. The polaron state is investigated at zero temperature. The problem of the bound state of two polarons exchanging quanta of a scalar field, as well as the problem of polaron scattering by an external field in the Born approximation, is considered. The thermodynamics of the polaron system is investigated; namely, high-temperature expansions for the mean energy and effective polaron mass are studied [ru]

  10. Counting carbohydrates

    Science.gov (United States)

    Carb counting; Carbohydrate-controlled diet; Diabetic diet; Diabetes-counting carbohydrates ... Many foods contain carbohydrates (carbs), including: Fruit and fruit juice Cereal, bread, pasta, and rice Milk and milk products, soy milk Beans, legumes, ...

  11. Improvements of the integral transport theory method

    International Nuclear Information System (INIS)

    Kavenoky, A.; Lam-Hime, M.; Stankovski, Z.

    1979-01-01

    The integral transport theory is widely used in practical reactor design calculations; however, it is computationally expensive for two-dimensional calculations of large media. In the first part of this report a new treatment is presented, based on the Galerkin method: inside each region the total flux is expanded over a three-component basis. Numerical comparison shows that this method can considerably reduce the computing time. The second part of this report is devoted to homogenization theory: a straightforward calculation of the fundamental mode for a heterogeneous cell is presented. First a general presentation of the problem is given; then it is simplified to plane geometry and numerical results are presented

  12. Collaborative teaching of an integrated methods course

    Directory of Open Access Journals (Sweden)

    George Zhou

    2011-03-01

    Full Text Available. With an increasing diversity in American schools, teachers need to be able to collaborate in teaching. University courses are widely considered a stage on which to demonstrate or model ways of collaborating. To respond to this call, three authors team-taught an integrated methods course at an urban public university in the city of New York. Following a qualitative research design, this study explored both instructors' and pre-service teachers' experiences with this course. Study findings indicate that collaborative teaching of an integrated methods course is feasible and beneficial to both instructors and pre-service teachers. For instructors, collaborative teaching was a reciprocal learning process in which they were engaged in thinking about teaching in a broader and more innovative way. For pre-service teachers, the collaborative course not only helped them understand how three different subjects could be related to each other, but also provided opportunities to actually see how collaboration could take place in teaching. Their understanding of collaborative teaching was enhanced after the course.

  13. Gamma-spectrometric and total alpha-beta counting methods for radioactivity analysis of deuterium depleted water

    International Nuclear Information System (INIS)

    Ferdes, Ov. S.; Mladin, C.; Vladu, Mihaela; Bulubasa, G.; Bidica, N.

    2008-01-01

    According to national regulations, as well as the EU directive on the quality of drinking water, radionuclide concentrations are among the quality parameters of drinking water. The most important radioactivity content parameters include the total alpha and total beta concentrations (Bq/l), the K-40 content, and the volume activities of gamma-emitting nuclides. The paper presents the measuring methods for low-level total alpha and/or beta counting of volume samples, as well as the high-resolution gamma-ray spectrometric method used to measure the volume activity of nuclides in drinking water. These methods are applied to monitor the radioactivity content and quality of the QLARIVIA brand of deuterium-depleted water (DDW). The performance of these methods, as well as some preliminary results, is discussed. (authors)

  14. Cloud-point measurement for (sulphate salts + polyethylene glycol 15000 + water) systems by the particle counting method

    International Nuclear Information System (INIS)

    Imani, A.; Modarress, H.; Eliassi, A.; Abdous, M.

    2009-01-01

    The phase separation of (water + salt + polyethylene glycol 15000) systems was studied by cloud-point measurements using the particle counting method. The effects of the concentration of three sulphate salts (Na2SO4, K2SO4, (NH4)2SO4), the polyethylene glycol 15000 concentration, and the mass ratio of polymer to salt on the cloud-point temperature of these systems have been investigated. The results obtained indicate that the cloud-point temperatures decrease linearly with increasing polyethylene glycol concentration for the different salts. The cloud points also decrease with an increase in the mass ratio of salt to polymer.

  15. A new method of passive counting of nuclear missile warheads -a white paper for the Defense Threat Reduction Agency

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Christopher [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Durham, J. Matthew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Guardincerri, Elena [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bacon, Jeffrey Darnell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wang, Zhehui [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fellows, Shelby [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Poulson, Daniel Cris [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Plaud-Ramos, Kenie Omar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Daughton, Tess Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Johnson, Olivia Ruth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-07-31

    Cosmic ray muon imaging has been studied for the past several years as a possible technique for nuclear warhead inspection and verification as part of the New Strategic Arms Reduction Treaty between the United States and the Russian Federation. The Los Alamos team has studied two muon imaging methods for this application, using detectors on two sides and on one side of the object of interest. In this report we present results on single-sided imaging of configurations aimed at demonstrating the potential of this technique for counting nuclear warheads in place, with detectors above the closed hatch of a ballistic missile submarine.

  16. A Trial-and-Error Method with Autonomous Vehicle-to-Infrastructure Traffic Counts for Cordon-Based Congestion Pricing

    Directory of Open Access Journals (Sweden)

    Zhiyuan Liu

    2017-01-01

    Full Text Available This study proposes a practical trial-and-error method to solve the optimal toll design problem of cordon-based pricing, where only the traffic counts autonomously collected on the entry links of the pricing cordon are needed. With the fast development and adoption of vehicle-to-infrastructure (V2I facilities, it is very convenient to autonomously collect these data. Two practical properties of the cordon-based pricing are further considered in this article: the toll charge on each entry of one pricing cordon is identical; the total inbound flow to one cordon should be restricted in order to maintain the traffic conditions within the cordon area. Then, the stochastic user equilibrium (SUE with asymmetric link travel time functions is used to assess each feasible toll pattern. Based on a variational inequality (VI model for the optimal toll pattern, this study proposes a theoretically convergent trial-and-error method for the addressed problem, where only traffic counts data are needed. Finally, the proposed method is verified based on a numerical network example.
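    The trial-and-error idea above (adjust the uniform entry toll from observed entry counts alone, without an explicit demand model) can be sketched as an iterative loop. The linear flow response, gain, and target flow below are made-up stand-ins for illustration, not the paper's SUE network model:

```python
def observed_inbound_flow(toll):
    # Placeholder for the V2I entry counts measured at a given toll:
    # inbound demand falls as the (uniform) cordon toll rises.
    return max(0.0, 5000.0 - 400.0 * toll)

def trial_and_error_toll(target_flow, toll=0.0, max_iters=100, tol=1.0):
    """Raise or lower one cordon toll until inbound flow meets the cap."""
    for k in range(1, max_iters + 1):
        gap = observed_inbound_flow(toll) - target_flow
        if abs(gap) < tol:
            break
        # Diminishing (1/k) step, as in method-of-successive-averages
        # schemes, keeps the trial-and-error iteration convergent.
        toll = max(0.0, toll + (1.0 / k) * gap / 1000.0)
    return toll, observed_inbound_flow(toll)

toll, flow = trial_and_error_toll(target_flow=3000.0)
```

    With a diminishing step the toll approaches the level at which the observed inbound flow matches the cap; the paper proves convergence for the SUE setting, which this toy response only mimics.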

  17. Parallel Jacobi EVD Methods on Integrated Circuits

    Directory of Open Access Journals (Sweden)

    Chi-Chia Sun

    2014-01-01

    Full Text Available Design strategies for parallel iterative algorithms are presented. In order to further study different tradeoff strategies in design criteria for integrated circuits, a 10 × 10 Jacobi Brent-Luk-EVD array with the simplified μ-CORDIC processor is used as an example. The experimental results show that the μ-CORDIC processor is beneficial for the design criteria, as it yields a smaller area, faster overall computation time, and lower energy consumption than the regular CORDIC processor. It is worth noting that the proposed parallel EVD method can be applied to real-time and low-power array signal processing algorithms performing beamforming or DOA estimation.
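    For orientation, a plain sequential sketch of the cyclic Jacobi eigenvalue iteration is shown below; a Brent-Luk array applies the same two-sided rotations in parallel, and the μ-CORDIC processor replaces the trigonometry with shift-add iterations, neither of which is modelled here:

```python
import math

def jacobi_evd(A, sweeps=10):
    """Cyclic two-sided Jacobi eigenvalue iteration for a symmetric matrix."""
    n = len(A)
    A = [row[:] for row in A]          # work on a copy
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-12:
                    continue
                # Rotation angle that zeroes the off-diagonal pair (p, q).
                theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):     # rotate rows p and q
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k] = c * apk - s * aqk
                    A[q][k] = s * apk + c * aqk
                for k in range(n):     # rotate columns p and q
                    akp, akq = A[k][p], A[k][q]
                    A[k][p] = c * akp - s * akq
                    A[k][q] = s * akp + c * akq
    return [A[i][i] for i in range(n)]  # eigenvalues on the diagonal
```

    Each sweep drives the off-diagonal mass toward zero; the hardware array exploits the fact that rotations on disjoint (p, q) pairs commute and can run simultaneously.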

  18. Temperature Effect Study on Growth and Survival of Pathogenic Vibrio parahaemolyticus in Jinjiang Oyster (Crassostrea rivularis) with Rapid Count Method

    Directory of Open Access Journals (Sweden)

    Yuan Wang

    2018-01-01

    Full Text Available The growth of Vibrio parahaemolyticus (V. parahaemolyticus) in oysters during postharvest storage increases the possibility of its infection in humans. In this work, to investigate growth and survival profiles in different media, pathogenic V. parahaemolyticus was studied in APW, in Jinjiang oyster (JO, Crassostrea rivularis) slurry, and in live JO under different temperatures. All strain populations were counted with our double-layer agar plate (DLAP) method. In APW, pathogenic V. parahaemolyticus grew continuously at 15, 25, and 35°C, while it declined at 5°C. Similar survival trends of pathogenic V. parahaemolyticus in JO slurry and live JO were observed at 5, 25, and 35°C, apart from growth or decline profiles delayed relative to APW. At 15°C, it displayed a decline profile in JO slurry but a growth profile in live JO. These results indicate the different temperature sensitivity of pathogenic V. parahaemolyticus in these matrices. Furthermore, nonpathogenic V. parahaemolyticus displayed little difference in survival profiles when inoculated in live JO under the corresponding temperatures. The results indicate that inhibition or promotion effects can be regulated through storage temperature for both pathogenic and nonpathogenic strains. In addition, the DLAP method proved notably quick and efficient for bacterial counting.

  19. Numerov iteration method for second order integral-differential equation

    International Nuclear Information System (INIS)

    Zeng Fanan; Zhang Jiaju; Zhao Xuan

    1987-01-01

    In this paper, a Numerov iterative method for second order integral-differential equations and for systems of such equations is constructed. Numerical examples show that this method is better than the direct method (Gauss elimination) in CPU time and memory requirements. It is therefore an efficient method for solving integral-differential equations in nuclear physics.
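    For reference, the underlying Numerov step for an equation of the form y'' = g(x)·y can be sketched as below. This is only the direct three-point recurrence; the paper's iterative treatment of the integral term is not reproduced:

```python
import math

def numerov(g, y0, y1, x0, h, n):
    """March y'' = g(x) * y from two starting values with Numerov's
    O(h^4) three-point recurrence; returns y at x0, x0+h, ..., x0+n*h."""
    ys = [y0, y1]
    for i in range(1, n):
        x_m, x_c, x_p = x0 + (i - 1) * h, x0 + i * h, x0 + (i + 1) * h
        c = h * h / 12.0
        ys.append((2.0 * ys[i] * (1.0 + 5.0 * c * g(x_c))
                   - ys[i - 1] * (1.0 - c * g(x_m))) / (1.0 - c * g(x_p)))
    return ys

# Check against y'' = -y with y(0) = 0: the exact solution is sin(x).
h = math.pi / 200
ys = numerov(lambda x: -1.0, 0.0, math.sin(h), 0.0, h, 100)
# ys[-1] approximates sin(pi/2) = 1
```

    The scheme's fourth-order global accuracy is what makes it attractive for the two-point problems that arise in nuclear physics.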

  20. An Integrated Method for Airfoil Optimization

    Science.gov (United States)

    Okrent, Joshua B.

    Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However this method can prove to be overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed is different from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts since by only focusing on one airfoil family, they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs as well as including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal

  1. A spore counting method and cell culture model for chlorine disinfection studies of Encephalitozoon syn. Septata intestinalis.

    Science.gov (United States)

    Wolk, D M; Johnson, C H; Rice, E W; Marshall, M M; Grahn, K F; Plummer, C B; Sterling, C R

    2000-04-01

    The results suggest that chlorine treatment may be an effective water treatment for E. intestinalis and that spectrophotometric methods may be substituted for labor-intensive hemacytometer methods when spores are counted in laboratory-based chlorine disinfection studies.

  2. Relativistic rise measurement by cluster counting method in time expansion chamber

    International Nuclear Information System (INIS)

    Rehak, P.; Walenta, A.H.

    1979-10-01

    A new approach to the measurement of ionization energy loss for charged-particle identification in the region of the relativistic rise was tested experimentally. The method consists of determining, in a special drift chamber (a time expansion chamber, TEC), the number of clusters of primary ionization. The method yields almost the full relativistic rise and a narrower Landau distribution. The consequences for a practical detector are discussed.

  3. Method validation for simultaneous counting of total α, β in drinking water using a liquid scintillation counter

    International Nuclear Information System (INIS)

    Al-Masri, M. S.; Nashawati, A.

    2014-05-01

    In this work, a method based on pulse shape analysis was validated for determining gross alpha and beta emitters in drinking water using the liquid scintillation counter WinSpectral 1414. Validation parameters included the method detection limit, method quantitation limit, repeatability limit, intermediate precision, trueness (bias), recovery coefficient, linearity and the uncertainty budget of the analysis. The results show that the method detection limit and method quantitation limit were 0.07 and 0.24 Bq/l for alpha emitters, respectively, and 0.42 and 1.4 Bq/l for beta emitters, respectively. The relative standard deviation of the repeatability limit reached 2.81% for alpha emitters and 3.96% for beta emitters. The relative standard deviation of the intermediate precision was 0.54% for alpha emitters and 1.17% for beta emitters. The trueness was -7.7% for alpha emitters and -4.5% for beta emitters. The recovery coefficient ranged between 87-96% and 88-101% for alpha and beta emitters, respectively. Linearity reached 1 for both alpha and beta emitters. The uncertainty budget was 96.65% and 83.14% for alpha and beta emitters, respectively (author).
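    Repeatability figures of the kind quoted above are conventionally expressed as percent relative standard deviation over replicate measurements. A minimal sketch, with made-up replicate activities rather than the laboratory's data:

```python
import statistics

def percent_rsd(replicates):
    """Repeatability as percent relative standard deviation (%RSD)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Five hypothetical replicate gross-alpha results for one sample, in Bq/l:
alpha_replicates = [5.02, 5.10, 4.95, 5.08, 4.99]
rsd = percent_rsd(alpha_replicates)   # roughly 1.2 %
```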

  4. METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Lasher

    2013-09-01

    example, this research proved the sustainability of the proposed integrated optimization of transport system parameters. This approach can be applied not only to MTS, but also to other transport systems. Originality. The basis of the complex optimization of transport presented here is a new system of universal scientific methods and approaches that ensure high accuracy and reliability of calculations when simulating transport systems and transport networks, taking into account the dynamics of their development. Practical value. The development of the theoretical and technological bases of complex transport optimization makes it possible to create a scientific tool that enables automated simulation and calculation of the technical and economic structure and operation of different transport objects, including their infrastructure.

  5. Unified method to integrate and blend several, potentially related, sources of information for genetic evaluation.

    Science.gov (United States)

    Vandenplas, Jérémie; Colinet, Frederic G; Gengler, Nicolas

    2014-09-30

    A condition to predict unbiased estimated breeding values by best linear unbiased prediction is to use simultaneously all available data. However, this condition is not often fully met. For example, in dairy cattle, internal (i.e. local) populations lead to evaluations based only on internal records while widely used foreign sires have been selected using internally unavailable external records. In such cases, internal genetic evaluations may be less accurate and biased. Because external records are unavailable, methods were developed to combine external information that summarizes these records, i.e. external estimated breeding values and associated reliabilities, with internal records to improve accuracy of internal genetic evaluations. Two issues of these methods concern double-counting of contributions due to relationships and due to records. These issues could be worse if external information came from several evaluations, at least partially based on the same records, and combined into a single internal evaluation. Based on a Bayesian approach, the aim of this research was to develop a unified method to integrate and blend simultaneously several sources of information into an internal genetic evaluation by avoiding double-counting of contributions due to relationships and due to records. This research resulted in equations that integrate and blend simultaneously several sources of information and avoid double-counting of contributions due to relationships and due to records. The performance of the developed equations was evaluated using simulated and real datasets. The results showed that the developed equations integrated and blended several sources of information well into a genetic evaluation. The developed equations also avoided double-counting of contributions due to relationships and due to records. Furthermore, because all available external sources of information were correctly propagated, relatives of external animals benefited from the integrated

  6. Application of activation analysis using the method of delayed fission neutron counting for the determination of some fissionable nuclides

    International Nuclear Information System (INIS)

    Armelin, M.J.A.

    1984-01-01

    A system for the detection and counting of delayed neutrons, which allows the analysis of some fissile and fertile nuclides in samples of milligram size, was developed. It was applied to the analysis of natural uranium and thorium, and also to determining the 235U/238U ratio in non-irradiated samples containing uranium with different degrees of enrichment in 235U. The spectrum of activating neutrons was varied in order to discriminate the nuclides, by covering or not covering the sample with a material (cadmium or boron) able to absorb low-energy neutrons. Determination of 235U/238U ratios through the number of delayed neutrons was made by drawing a calibration curve using standards ranging from 0.5% to 93% in 235U; the accuracy of the method was also examined. In a first step, conditions for a simultaneous and non-destructive analysis of uranium and thorium were developed. The interference between these two nuclides was studied using simulated samples. Real samples were provided by Nuclemon and the IAEA. For samples with uranium concentrations in the range of percentages and thorium concentrations of some ppm, uranium interferes in the determination of thorium by the non-destructive analytical method. For this case, a fast and quantitative chemical method was studied which allows the separation of thorium from uranium before determining the thorium concentration by counting the delayed fission neutrons. The results obtained by the destructive and non-destructive methods are very consistent and can be considered statistically equivalent within a confidence level of 95%. (Author) [pt

  7. Analysis of quick-count and exit-poll methods as a part of the public monitoring activities during the elections in Ukraine

    Directory of Open Access Journals (Sweden)

    D. Y. Arabadjyiev

    2015-09-01

    As the practice of quick-count implementation shows, it is not only a quick method of obtaining voting results for later comparison with the official election results. The method also helps citizens monitor the quality of the electoral process independently of official government communications, and it allows an evaluation of the activities of the bodies and institutions responsible for organizing the elections during voting, counting and processing.

  8. A modern mathematical method for filtering noise in low count experiments

    Directory of Open Access Journals (Sweden)

    Medhat Moustafa E.

    2015-01-01

    Full Text Available In the proposed work, a novel application of numerical and functional analysis based on the discrete wavelet transform is discussed. The mathematics of enhancing signals and removing noise is described. The results obtained show that the method, applied to a variety of gamma spectra, is superior to other techniques.

  9. Counting Better? An Examination of the Impact of Quantitative Method Teaching on Statistical Anxiety and Confidence

    Science.gov (United States)

    Chamberlain, John Martyn; Hillier, John; Signoretta, Paola

    2015-01-01

    This article reports the results of research concerned with students' statistical anxiety and confidence to both complete and learn to complete statistical tasks. Data were collected at the beginning and end of a quantitative method statistics module. Students recognised the value of numeracy skills but felt they were not necessarily relevant for…

  10. The Absolute Standardization Method of 18F by Using 4π(PC)-γ Coincidence Counting System

    International Nuclear Information System (INIS)

    Pujadi

    2003-01-01

    The absolute standardization of the radionuclide 18F was carried out using 4π(PC)-γ coincidence counting. As 18F-fluoro-2-deoxyglucose (FDG), this radionuclide is used in nuclear medicine for the diagnosis of oncological disease. The 18F was produced by the 18O(p,n)18F reaction in a cyclotron. Source preparation was done by the gravimetric method after the source was dissolved in distilled water. The samples were measured using 4π(PC)-γ coincidence counting with three different gamma energy windows: set on the 511 keV peak, on the 1022 keV peak, and on the region above 511 keV. The results of measurement on the 1022 keV peak and on the region above 511 keV agree fairly well, with a discrepancy of about 0.15%, and the gamma window above 511 keV shows the best linearity, with R2 = 0.8287. The measurement in the 511 keV gamma window gave a result differing by 2.28% from the other two regions. (author)

  11. A Comparison of Platelet Count and Enrichment Percentages in the Platelet Rich Plasma (PRP) Obtained Following Preparation by Three Different Methods.

    Science.gov (United States)

    Sabarish, Ram; Lavu, Vamsi; Rao, Suresh Ranga

    2015-02-01

    Platelet rich plasma (PRP) represents an easily accessible and rich source of autologous growth factors. Different manual methods for the preparation of PRP have been suggested, but knowledge about the efficacy of these different manual methods is lacking. This study was performed to determine the effect of centrifugation rate (revolutions per minute, RPM) and time on the platelet count and enrichment percentage in the concentrates obtained by three different manual methods of PRP preparation. In vitro experimental study. Platelet concentration was assessed in PRP prepared by three different protocols, as suggested by Marx R (method 1), Okuda K (method 2) and Landesberg R (method 3). A total of 60 peripheral blood samples (n=20 per method) were obtained from healthy volunteers. The baseline platelet count was assessed for all subjects, after which PRP was prepared. The platelet count in the PRP was determined using an automated cell counter (Sysmex XT 2000i). The means of the platelet counts obtained and their enrichment percentages were calculated and an intergroup comparison was done (Tukey's HSD test). The number of platelets and the enrichment percentage in PRP prepared by method 1 were higher than with method 2 and method 3; this difference in platelet concentrates was statistically significant (p < 0.05). The centrifugation rate and time appear to be important parameters influencing the platelet yield. Method 1, which had a lower centrifugation rate and time, yielded a greater platelet count and enrichment percentage.
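    The enrichment percentage compared in the study is commonly computed as the relative increase of the PRP platelet count over the whole-blood baseline. The formula and the example numbers below are illustrative assumptions, since the abstract does not state its exact definition:

```python
def enrichment_percentage(baseline_count, prp_count):
    """Percent increase of the PRP platelet count over baseline."""
    return 100.0 * (prp_count - baseline_count) / baseline_count

# Hypothetical example: whole blood at 250 x 10^3 platelets/uL
# concentrated to 1000 x 10^3 platelets/uL in the PRP fraction:
gain = enrichment_percentage(250.0, 1000.0)   # 300.0 (% enrichment)
```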

  12. Evaluation of the performance of a point-of-care method for total and differential white blood cell count in clozapine users.

    Science.gov (United States)

    Bui, H N; Bogers, J P A M; Cohen, D; Njo, T; Herruer, M H

    2016-12-01

    We evaluated the performance of the HemoCue WBC DIFF, a point-of-care device for total and differential white cell counts, primarily to test its suitability for the mandatory white blood cell monitoring in clozapine use. Leukocyte counts and 5-part differentiation were performed by the point-of-care device and by the routine laboratory method in venous EDTA-blood samples from 20 clozapine users, 20 neutropenic patients, and 20 healthy volunteers. A capillary sample was also drawn from the volunteers. Intra-assay reproducibility and drop-to-drop variation were tested. The correlation between the two methods in venous samples was r > 0.95 for leukocyte, neutrophil, and lymphocyte counts. The correlation between the point-of-care (capillary sample) and routine (venous sample) methods for these cells was 0.772, 0.817 and 0.798, respectively. Only for leukocyte and neutrophil counts was the intra-assay reproducibility sufficient. The point-of-care device can be used to screen for leukocyte and neutrophil counts. Because of the relatively high measurement uncertainty and poor correlation with venous samples, we recommend repeating the measurement with a venous sample if cell counts are in the lower reference range. In the case of clozapine therapy, neutropenia can probably be excluded if high neutrophil counts are found, and patients can continue their therapy. © 2016 John Wiley & Sons Ltd.

  13. Counting cormorants

    DEFF Research Database (Denmark)

    Bregnballe, Thomas; Carss, David N; Lorentsen, Svein-Håkon

    2013-01-01

    This chapter focuses on Cormorant population counts for both summer (i.e. breeding) and winter (i.e. migration, winter roosts) seasons. It also explains differences in the data collected from undertaking ‘day’ versus ‘roost’ counts, gives some definitions of the term ‘numbers’, and presents two...

  14. Configurable memory system and method for providing atomic counting operations in a memory device

    Science.gov (United States)

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.

  15. Standard Test Method for Sizing and Counting Particulate Contaminant In and On Clean Room Garments

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2000-01-01

    1.1 This test method covers the determination of detachable particulate contaminant 5 μm or larger, in and on the fabric of clean room garments. 1.2 This test method does not apply to nonporous fabrics such as Tyvek or Gore-Tex. It only applies to fabrics that are porous, such as cotton or polyester. 1.3 The values stated in SI units are to be regarded as the standard. The inch-pound values given in parentheses are for information only. 1.4 This test method provides not only the traditional optical microscopic analysis but also a size distribution and surface obscuration analysis for particles on a fine-textured membrane filter or in a tape lift sample. It utilizes transmitted illumination to render all particles darker than the background for gray level detection. Particles collected on opaque plates must be transferred to a suitable membrane filter. This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety concerns, if any, associat...

  16. Methods of legitimation: how ethics committees decide which reasons count in public policy decision-making.

    Science.gov (United States)

    Edwards, Kyle T

    2014-07-01

    In recent years, liberal democratic societies have struggled with the question of how best to balance expertise and democratic participation in the regulation of emerging technologies. This study aims to explain how national deliberative ethics committees handle the practical tension between scientific expertise, ethical expertise, expert patient input, and lay public input by explaining two institutions' processes for determining the legitimacy or illegitimacy of reasons in public policy decision-making: that of the United Kingdom's Human Fertilisation and Embryology Authority (HFEA) and the United States' American Society for Reproductive Medicine (ASRM). The articulation of these 'methods of legitimation' draws on 13 in-depth interviews with HFEA and ASRM members and staff conducted in January and February 2012 in London and over Skype, as well as observation of an HFEA deliberation. This study finds that these two institutions employ different methods in rendering certain arguments legitimate and others illegitimate: while the HFEA attempts to 'balance' competing reasons but ultimately legitimizes arguments based on health and welfare concerns, the ASRM seeks to 'filter' out arguments that challenge reproductive autonomy. The notably different structures and missions of each institution may explain these divergent approaches, as may what Sheila Jasanoff (2005) terms the distinctive 'civic epistemologies' of the US and the UK. Significantly for policy makers designing such deliberative committees, each method differs substantially from that explicitly or implicitly endorsed by the institution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Boundary integral methods for unsaturated flow

    International Nuclear Information System (INIS)

    Martinez, M.J.; McTigue, D.F.

    1990-01-01

    Many large simulations may be required to assess the performance of Yucca Mountain as a possible site for the nation's first high-level nuclear waste repository. A boundary integral equation method (BIEM) is described for numerical analysis of quasilinear steady unsaturated flow in homogeneous material. The applicability of the exponential model for the dependence of hydraulic conductivity on pressure head is discussed briefly. This constitutive assumption is at the heart of the quasilinear transformation. Materials which display a wide distribution in pore size are described reasonably well by the exponential. For materials with a narrow range in pore size, the exponential is suitable over more limited ranges in pressure head. The numerical implementation of the BIEM is used to investigate infiltration from a strip source to a water table. The net infiltration of moisture into a finite-depth layer is well described by results for a semi-infinite layer if αD > 4, where α is the sorptive number and D is the depth to the water table. The distribution of moisture exhibits a similar dependence on αD. 11 refs., 4 figs.
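    The quasilinear transformation rests on the exponential conductivity model K(ψ) = K_s·exp(αψ), whose Kirchhoff potential Φ = ∫K dψ = K/α turns the steady unsaturated flow equation into a linear one amenable to a boundary integral treatment. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
import math

K_SAT = 1.0e-5   # saturated conductivity, m/s (illustrative)
ALPHA = 2.0      # sorptive number, 1/m (illustrative)

def conductivity(psi):
    """Exponential model K(psi) = K_s * exp(alpha * psi), psi <= 0."""
    return K_SAT * math.exp(ALPHA * psi)

def kirchhoff_potential(psi):
    """Phi(psi) = integral of K from -infinity to psi = K(psi) / alpha."""
    return conductivity(psi) / ALPHA
```

    The αD > 4 criterion quoted above then compares the decay length 1/α of this exponential with the depth D to the water table.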

  18. Do different standard plate counting (IDF/ISSO or AOAC) methods interfere in the conversion of individual bacteria counts to colony forming units in raw milk?

    Science.gov (United States)

    Cassoli, L D; Lima, W J F; Esguerra, J C; Da Silva, J; Machado, P F; Mourão, G B

    2016-10-01

    This study aimed to establish the correlation between the individual bacterial count (IBC) obtained by flow cytometry and the number of colony forming units (CFU) determined by standard plate count (SPC) in raw milk, using two different reference methodologies: that of the International Dairy Federation (IDF) - International Organization for Standardization (ISO) 4833, incubation for 72 h at 30°C, and that of the Association of Official Analytical Chemists (AOAC), incubation for 48 h at 35°C. For this, 100 bovine milk samples (80 ml) from different farms were collected in sterile bottles, maintained refrigerated at 4°C and delivered to the laboratory, where each sample was divided into two vials of 40 ml each. Half of the vials were forwarded for SPC analysis, and the other half were analysed with the BactoScan FC equipment. The flow cytometry and SPC analyses were performed at the same time (maximum deviation of +/- 1 h). To transform the data from IBC ml(-1) to CFU ml(-1) (IDF or AOAC methodology), a standard linear regression equation was used, as recommended by IDF/ISO-196. The difference between the reference methodologies affects the equation that transforms IBC into CFU and therefore the accuracy of the results. The results estimated by the equation based on the ISO 4833 methodology were on average 0·18 log units higher than those estimated by the equation based on the AOAC methodology. It was concluded that the reference methodology has an impact on the conversion of results from IBC to CFU; depending on the methodology adopted by each laboratory or country, the results may not be equivalent. Hence, laboratories specialized in milk quality analysis that have changed their methodology, passing from the MAPA (AOAC) methodology to the IDF standard, need to develop new conversion equations to make their
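    A conversion equation of the kind described is an ordinary least-squares line fitted between log10 IBC and log10 CFU, then applied to new cytometry readings. A hedged sketch with made-up calibration pairs (the real equations are laboratory- and methodology-specific):

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def ibc_to_cfu(ibc_per_ml, a, b):
    """Convert a flow-cytometry reading with the fitted log-log line."""
    return 10.0 ** (a + b * math.log10(ibc_per_ml))

# Hypothetical paired calibration data on the log10 scale; here the
# plate count sits a constant 0.3 log units below the cytometry count.
log_ibc = [4.0, 4.5, 5.0, 5.5, 6.0]
log_cfu = [3.7, 4.2, 4.7, 5.2, 5.7]
a, b = fit_line(log_ibc, log_cfu)
```

    Two reference methodologies that differ by a roughly constant log offset, as reported above, would simply shift the intercept a of this line.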

  19. Box-Counting Method of 2D Neuronal Image: Method Modification and Quantitative Analysis Demonstrated on Images from the Monkey and Human Brain

    Directory of Open Access Journals (Sweden)

    Nemanja Rajković

    2017-01-01

    Full Text Available This study calls attention to the difference between the traditional box-counting method and its modification. The appropriate scaling factor, the influence of image size and resolution and of image rotation, as well as different image presentations, are shown on a sample of asymmetrical neurons from the monkey dentate nucleus. The standard BC method and its modification were evaluated on a sample of 2D neuronal images from the human neostriatum. In addition, three box dimensions (which estimate the space-filling property, the shape, complexity, and the irregularity of the dendritic tree) were used to evaluate differences in the morphology of type III aspiny neurons between two parts of the neostriatum.

  20. Box-Counting Method of 2D Neuronal Image: Method Modification and Quantitative Analysis Demonstrated on Images from the Monkey and Human Brain.

    Science.gov (United States)

    Rajković, Nemanja; Krstonošić, Bojana; Milošević, Nebojša

    2017-01-01

    This study calls attention to the difference between the traditional box-counting method and its modification. The appropriate scaling factor, the influence of image size and resolution and of image rotation, as well as different image presentations, are shown on a sample of asymmetrical neurons from the monkey dentate nucleus. The standard BC method and its modification were evaluated on a sample of 2D neuronal images from the human neostriatum. In addition, three box dimensions (which estimate the space-filling property, the shape, complexity, and the irregularity of the dendritic tree) were used to evaluate differences in the morphology of type III aspiny neurons between two parts of the neostriatum.
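    For orientation, the traditional box-counting estimate that both versions of this paper take as their starting point can be sketched as follows. This is a minimal 2D variant on a pixel set; the papers' modification, scaling-factor choice, and rotation handling are not reproduced:

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the box dimension of a 2D point set (object pixels):
    least-squares slope of log N(s) vs log(1/s), where N(s) is the
    number of s-by-s boxes hit by the object."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# A filled 128 x 128 square should come out at dimension ~2:
square = {(x, y) for x in range(128) for y in range(128)}
```

    On binarized neuron images the same slope, computed over a suitable range of box sizes, estimates the space-filling box dimension discussed above.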

  1. [Study on modeling method of total viable count of fresh pork meat based on hyperspectral imaging system].

    Science.gov (United States)

    Wang, Wei; Peng, Yan-Kun; Zhang, Xiao-Li

    2010-02-01

    Once the total viable count (TVC) of bacteria in fresh pork meat exceeds a certain level, the meat becomes a vehicle for pathogenic bacteria. The present paper explores the feasibility of hyperspectral imaging technology combined with a suitable modeling method for the prediction of TVC in fresh pork meat. For this kind of problem, which has markedly nonlinear characteristics and few samples, as well as a large amount of data expressing the spectral and spatial information, it is crucial to choose a sound modeling method in order to achieve good prediction results. Based on a comparison of partial least-squares regression (PLSR), artificial neural networks (ANNs) and least-squares support vector machines (LS-SVM), the authors found that the PLSR method was unsuited to nonlinear regression problems and that the ANNs method could not achieve satisfactory predictions for small-sample problems, whereas prediction models based on LS-SVM can balance a small training error against good generalization ability. Therefore LS-SVM was adopted as the modeling method to predict the TVC of pork meat. The TVC prediction model was then constructed using all 512 wavelength data points acquired by the hyperspectral imaging system. The determination coefficient between the TVC obtained with the standard plate-count method for bacterial colonies and the LS-SVM prediction was 0.987 2 and 0.942 6 for the samples of the calibration set and prediction set, respectively; the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP) were 0.207 1 and 0.217 6, respectively. These results were considerably better than those of the MLR, PLSR and ANNs methods.
    This research demonstrates that using the hyperspectral imaging system coupled with the LS-SVM modeling method is a valid means for quick and nondestructive determination of the TVC of pork meat.
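The LS-SVM regression at the core of this abstract reduces to a single linear solve in the dual variables. A minimal sketch with an RBF kernel on toy data (hyperparameters, kernel choice, and data are illustrative assumptions, not details from the paper):

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM regression: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma           # ridge term from the LS loss
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

# toy 1-D nonlinear regression standing in for "spectra -> log TVC"
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(40)
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
print(np.corrcoef(y, pred)[0, 1] ** 2)          # fit quality, close to 1
```

The single dense solve is what makes LS-SVM attractive for small-sample problems like the one described.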

  2. Integral Equation Methods for Electromagnetic and Elastic Waves

    CERN Document Server

    Chew, Weng; Hu, Bin

    2008-01-01

    Integral Equation Methods for Electromagnetic and Elastic Waves is an outgrowth of several years of work. There have been no recent books on integral equation methods. There are books written on integral equations, but either they have been around for a while, or they were written by mathematicians. Much of the knowledge in integral equation methods still resides in journal papers. With this book, important relevant knowledge for integral equations is consolidated in one place, and researchers need only read the pertinent chapters in this book to gain the knowledge needed for integral eq

  3. Analytic methods to generate integrable mappings

    Indian Academy of Sciences (India)

    essential integrability features of an integrable differential equation is a .... With this in mind we first write x3(t) as a cubic polynomial in (xn−1,xn,xn+1) and then ..... coefficients, the quadratic equation in xn+N has real and distinct roots which in ...

  4. Sampling methods for rumen microbial counts by Real-Time PCR techniques

    Directory of Open Access Journals (Sweden)

    S. Puppo

    2010-02-01

    Full Text Available Fresh rumen samples were withdrawn from 4 cannulated buffalo females fed a fibrous diet in order to quantify bacterial concentrations in the rumen by Real-Time PCR techniques. To obtain DNA of good quality from whole rumen fluid, eight different pre-filtration methods (M1-M8; cheese cloths, glass-fibre and nylon filters) in combination with various centrifugation speeds (1000, 5000 and 14,000 rpm) were tested. Genomic DNA extraction was performed either on fresh or frozen samples (-20°C). The quantitative bacterial analysis followed the Real-Time PCR procedure for Butyrivibrio fibrisolvens reported in the literature. M5 proved the best sampling procedure, yielding genomic DNA of suitable quality. No differences were revealed between fresh and frozen samples.

  5. Beauty-jet tagging using the track counting method in pp collisions with ALICE at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Feldkamp, Linus [Westfaelische Wilhelms-Universitaet Muenster (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    Charm and beauty quarks, produced in the early stage of heavy-ion collisions, are ideal probes to study the characteristics of the hot and dense deconfined medium (Quark-Gluon Plasma) formed in these collisions. The radiative energy loss of high-energy partons interacting with the medium is expected to be larger for gluons than for quarks, and to depend on the quark mass, with beauty quarks losing less energy than charm quarks, light quarks and gluons. Therefore, comparing the modification of the momentum distribution, or possibly of the jet shape, of beauty-jets with that of light-flavour or c-jets in Pb-Pb collisions relative to pp collisions allows the mass dependence of the energy loss to be investigated. It also allows study of the redistribution of the lost energy and of possible modifications to b-quark fragmentation in the medium. The track counting method exploits the large rφ-impact parameters, d{sub 0}, of B-meson decay products to identify beauty-jets. The signed rφ-impact parameter, d{sub 0} = sign(vector d{sub 0} . vector p{sub jet}) |d{sub 0}|, is calculated for each track in the jet cone, where vector d{sub 0} points away from the primary vertex. The distribution of the n-th largest d{sub 0} in a jet is sensitive to the flavor of the hadronizing parton and allows jets coming from beauty to be selected on a statistical basis. In this contribution, we give an overview of the beauty-jet measurement using the track counting method with ALICE in pp collisions at √(s) = 7 TeV, which will serve as a baseline reference for future energy-loss studies.
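The track-counting selection described above can be sketched in a 2-D simplification (the threshold value, the track list, and the choice n = 3 are illustrative assumptions, not ALICE's actual cuts):

```python
import numpy as np

def signed_d0(d0_vec, p_jet):
    """Signed r-phi impact parameter: |d0| carrying the sign of d0 . p_jet,
    with d0_vec pointing away from the primary vertex."""
    return np.sign(np.dot(d0_vec, p_jet)) * np.linalg.norm(d0_vec)

def tag_b_jet(track_d0_vecs, p_jet, n=3, threshold=0.01):
    """Tag the jet as beauty-like if the n-th largest signed d0 among its
    tracks exceeds `threshold` (cm here; purely illustrative numbers)."""
    signed = sorted((signed_d0(v, p_jet) for v in track_d0_vecs), reverse=True)
    return len(signed) >= n and signed[n - 1] > threshold

# toy jet: three clearly displaced tracks plus one prompt track
p_jet = np.array([1.0, 0.0])
tracks = [np.array([0.03, 0.001]), np.array([0.02, -0.002]),
          np.array([0.015, 0.0]), np.array([-0.001, 0.0005])]
print(tag_b_jet(tracks, p_jet))  # this displaced topology passes the cut
```

Requiring the n-th largest value (rather than the largest) suppresses jets in which a single mismeasured track fakes a displaced vertex.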

  6. Size-controlled fluorescent nanodiamonds: A facile method of fabrication and color-center counting

    KAUST Repository

    Mahfouz, Remi

    2013-01-01

    We present a facile method for the production of fluorescent diamond nanocrystals (DNCs) of different sizes and efficiently quantify the concentration of emitting defect color centers (DCCs) of each DNC size. We prepared the DNCs by ball-milling commercially available micrometer-sized synthetic (high pressure, high temperature (HPHT)) diamonds and then separated the as-produced DNCs by density gradient ultracentrifugation (DGU) into size-controlled fractions. A protocol to enhance the uniformity of the nitrogen-vacancy (NV) centers in the diamonds was devised by depositing the DNCs as a dense monolayer on amino-silanized silicon substrates and then subjecting the monolayer to He+ beam irradiation. Using a standard confocal setup, we analyzed the average number of NV centers per crystal, and obtained a quantitative relationship between the DNC particle size and the NV number per crystal. This relationship was in good agreement with results from previous studies that used more elaborate setups. Our findings suggest that nanocrystal size separation by DGU may be used to control the number of defects per nanocrystal. The efficient approaches described herein to control and quantify DCCs are valuable to researchers as they explore applications for color centers and new strategies to create them. © 2013 The Royal Society of Chemistry.

  7. Comparing and counting logs in direct and effective methods of QCD resummation

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Leandro G. [Laboratoire de Physique Théorique, Université Paris-Sud 11 and CNRS,91405 Orsay Cedex (France); Institut de Biologie de l’École Normale Supérieure (IBENS),Inserm 1024-CNRS 8197, 46 rue d’Ulm, 75005 Paris (France); Ellis, Stephen D. [Department of Physics, University of Washington,Seattle, WA 98195 (United States); Lee, Christopher [Theoretical Division, MS B283, Los Alamos National Laboratory,Los Alamos, NM 87544 (United States); Sterman, George [C.N. Yang Institute for Theoretical Physics, Stony Brook University,Stony Brook, NY 11794 (United States); Sung, Ilmo [Department of Applied Physics, New York University,Brooklyn, NY 11201 (United States); Queens College, City University of New York,Flushing, NY 11367 (United States); Walsh, Jonathan R. [Lawrence Berkeley National Laboratory, University of California,Berkeley, CA 94720 (United States); Berkeley Center for Theoretical Physics, University of California,Berkeley, CA 94720 (United States)

    2014-04-29

    We compare methods to resum logarithms in event shape distributions as they have been used in perturbative QCD directly and in effective field theory. We demonstrate that they are equivalent. In showing this equivalence, we are able to put standard soft-collinear effective theory (SCET) formulae for cross sections in momentum space into a novel form more directly comparable with standard QCD formulae, and endow the QCD formulae with dependence on separated hard, jet, and soft scales, providing potential ways to improve estimates of theoretical uncertainty. We show how to compute cross sections in momentum space to keep them as accurate as the corresponding expressions in Laplace space. In particular, we point out that care is required in truncating differential distributions at N{sup k}LL accuracy to ensure they match the accuracy of the corresponding cumulant or Laplace transform. We explain how to avoid such mismatches at N{sup k}LL accuracy, and observe why they can also be avoided by working to N{sup k}LL{sup ′} accuracy.

  8. Evaluation of left ventricular function in patients with atrial fibrillation by ECG gated blood pool scintigraphy. Using frame count normalization method

    Energy Technology Data Exchange (ETDEWEB)

    Akanabe, Hiroshi; Oshima, Motoo; Sakuma, Sadayuki

    1988-07-01

    The assumptions necessary to perform ECG gated blood pool scintigraphy (EGBPS) are seemingly not valid for patients with atrial fibrillation (af), since such patients have wide variability in cardiac cycle length. The data were acquired in frame mode within limits around the mean heart rate to fix the first diastolic volume, and were processed by the frame count normalization (FCN) method to correct the total counts in each frame. EGBPS was performed in twelve patients with af who were operated on for valvular disease. The data acquired within the mean heart rate +-10 % in frame mode were divided into 32 frames, and total frame counts were calculated. With the FCN method, the total counts of the 22nd to 32nd frames were multiplied so as to equal the average of the total frame counts. The FCN method could correct the total counts in the latter frames, and there was good correlation between the left ventricular ejection fraction calculated from scintigraphy and that from contrast cineangiography. Thus EGBPS with the FCN method may allow estimation of cardiac function even in subjects with af.
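The frame count normalization step, as described, rescales the late frames so their totals match the average frame total. A sketch on synthetic gated frames (the array shapes, the drop-off model, and all numbers are illustrative assumptions):

```python
import numpy as np

def frame_count_normalization(frames, start=21):
    """FCN as sketched in the abstract: scale the late frames (22nd to
    32nd, i.e. 0-based indices `start`..31) so that each frame's total
    counts equal the average total over all 32 frames.
    `frames` is a (32, H, W) array of gated blood-pool images."""
    frames = frames.astype(float)
    totals = frames.reshape(len(frames), -1).sum(axis=1)
    target = totals.mean()
    out = frames.copy()
    for i in range(start, len(frames)):
        if totals[i] > 0:
            out[i] *= target / totals[i]
    return out

# toy study: late frames lose counts because R-R variability in af means
# fewer beats contribute to late diastolic frames
rng = np.random.default_rng(1)
frames = rng.poisson(100.0, size=(32, 8, 8)).astype(float)
frames[21:] *= np.linspace(0.9, 0.4, 11)[:, None, None]   # count drop-off
corrected = frame_count_normalization(frames)
print(corrected.reshape(32, -1).sum(axis=1).std())         # flatter totals
```

After correction, the late-frame totals are identical by construction, so a ventricular time-activity curve no longer shows the artificial late-diastolic dip.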

  9. An Evaluation of the Accuracy of the Subtraction Method Used for Determining Platelet Counts in Advanced Platelet-Rich Fibrin and Concentrated Growth Factor Preparations

    Directory of Open Access Journals (Sweden)

    Taisuke Watanabe

    2017-01-01

    Full Text Available Platelet concentrates should be quality-assured for purity and identity prior to clinical use. Unlike the liquid form of platelet-rich plasma, platelet counts cannot be directly determined in solid fibrin clots; they are instead calculated by subtracting the counts in other liquid or semi-clotted fractions from those in whole-blood samples. Having long doubted the validity of this method, we herein examined the possible loss of platelets in the preparation process. Blood samples collected from healthy male donors were immediately centrifuged for advanced platelet-rich fibrin (A-PRF) and concentrated growth factors (CGF) according to recommended centrifugal protocols. Blood cells in liquid and semi-clotted fractions were counted directly. Platelets aggregated on clot surfaces were observed by scanning electron microscopy. A higher centrifugal force increased the numbers of platelets and platelet aggregates in the liquid red blood cell fraction and the semi-clotted red thrombus in the presence and absence of the anticoagulant, respectively. Nevertheless, the calculated platelet counts in A-PRF/CGF preparations were much higher than expected, rendering the currently accepted subtraction method inaccurate for determining platelet counts in fibrin clots. To ensure the quality of solid types of platelet concentrates chairside in a timely manner, a simple and accurate platelet-counting method should be developed immediately.
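The subtraction arithmetic at issue, and the bias the authors describe when platelets are lost to uncounted compartments, can be sketched numerically (all counts below are invented for illustration, not data from the paper):

```python
def platelets_in_clot_by_subtraction(whole_blood, other_fractions):
    """Currently accepted subtraction method: platelets attributed to the
    fibrin clot = platelets in whole blood minus platelets recovered in
    all measurable liquid/semi-clotted fractions."""
    return whole_blood - sum(other_fractions)

# if platelets are lost to aggregates on clot surfaces or to fractions
# that are not counted, the subtraction method over-credits the clot
whole = 1_000_000
measured_elsewhere = [300_000, 150_000]   # liquid + red-cell fractions
unseen_losses = 200_000                   # surface aggregates, etc.
true_in_clot = whole - sum(measured_elsewhere) - unseen_losses
estimated = platelets_in_clot_by_subtraction(whole, measured_elsewhere)
print(estimated, true_in_clot)  # 550000 vs 350000: an overestimate
```

Every platelet lost anywhere outside the counted fractions inflates the clot estimate one-for-one, which is exactly the failure mode the abstract reports.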

  10. Validation of Analysis Method of pesticides in fresh tomatoes by Gas Chromatography associated to a liquid scintillation counting

    International Nuclear Information System (INIS)

    Dhib, Ahlem

    2011-01-01

    Pesticides are nowadays considered toxic to human health, and maximum residue levels (MRL) in foodstuffs are increasingly strict. Selective analytical techniques are therefore necessary for their identification and quantification. The aim of this study is to set up a multi-residue method for the determination of pesticides in tomatoes by gas chromatography with a μECD detector (GC/μECD) associated with liquid scintillation counting. A global analytical protocol was set up, consisting of a QuEChERS version of the extraction step followed by purification of the resulting extract on a polymeric sorbent. The {sup 14}C-chloropyrifos used as an internal standard proved excellent for controlling the different steps of the sample preparation. The optimized method is specific and selective, with recoveries averaging more than 70 per cent, and is repeatable and reproducible. Although some other criteria remain to be checked before the method can be validated for routine analysis, its potential has been demonstrated.

  11. Electronic alarm device for radioactivity detector associated with a direct current amplifier or with a integration-based counting assembly

    International Nuclear Information System (INIS)

    Desmaretz, Marc; Ferlicot, Rene

    1964-04-01

    The authors report the study of a device aimed at triggering sound and light alarms when a radiation detector associated with a direct current amplifier or with a counting assembly detects a radiation intensity greater than one or two previously defined thresholds. This device can be used at any time for a detection assembly which is not continuously monitored. It has been designed to be adapted to the CEA standard electronics currently used in installations and on which the alarm function had not been initially foreseen. The assembly comprises an additional safety device for the control of any untimely shutdown of the detection chain [fr
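The two-level alarm logic described can be sketched as follows (the threshold values and state names are illustrative assumptions, not taken from the report):

```python
def alarm_state(count_rate, threshold_1, threshold_2):
    """Two-threshold alarm as described: trigger when the detected
    radiation intensity exceeds one or both preset thresholds."""
    if count_rate >= threshold_2:
        return "ALARM_HIGH"   # sound + light alarm
    if count_rate >= threshold_1:
        return "ALARM_LOW"
    return "OK"

print(alarm_state(50, 100, 500))    # OK
print(alarm_state(150, 100, 500))   # ALARM_LOW
print(alarm_state(900, 100, 500))   # ALARM_HIGH
```

A real implementation of the 1964 device would of course be analog electronics; the sketch only captures the comparison logic, plus the point that the alarm path must itself be supervised so an untimely shutdown of the detection chain is noticed.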

  12. Integrated circuit and method of arbitration in a network on an integrated circuit.

    NARCIS (Netherlands)

    2011-01-01

    The invention relates to an integrated circuit and to a method of arbitration in a network on an integrated circuit. According to the invention, a method of arbitration in a network on an integrated circuit is provided, the network comprising a router unit, the router unit comprising a first input

  13. Integrals of Frullani type and the method of brackets

    Directory of Open Access Journals (Sweden)

    Bravo Sergio

    2017-01-01

    Full Text Available The method of brackets is a collection of heuristic rules, some of which have been made rigorous, that provide a flexible, direct method for the evaluation of definite integrals. The present work uses this method to establish classical formulas due to Frullani which provide the values of a specific family of integrals. Some generalizations are established.
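Frullani's classical formula, int_0^inf (f(ax) - f(bx))/x dx = (f(0) - f(inf)) ln(b/a), can be checked numerically; a sketch for f(x) = e^(-x) (the grid limits and point count are arbitrary choices):

```python
import numpy as np

def frullani_lhs(f, a, b, t_lo=-30.0, t_hi=10.0, n=4000):
    """Evaluate int_0^inf (f(ax) - f(bx))/x dx numerically via the
    substitution x = e^t, which absorbs the 1/x weight into dt."""
    t = np.linspace(t_lo, t_hi, n)
    g = f(a * np.exp(t)) - f(b * np.exp(t))
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))  # trapezoid

f = lambda u: np.exp(-u)           # f(0) = 1, f(inf) = 0
a, b = 1.0, 3.0
lhs = frullani_lhs(f, a, b)
rhs = (1.0 - 0.0) * np.log(b / a)  # Frullani: (f(0) - f(inf)) ln(b/a)
print(lhs, rhs)                    # both ~1.0986
```

The exponential substitution is what makes the 1/x kernel harmless near both endpoints of the half-line.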

  14. Primary standardization of {sup 152}Eu by 4πβ(LS) - γ (NaI) coincidence counting and CIEMAT-NIST method

    Energy Technology Data Exchange (ETDEWEB)

    Ruzzarin, A., E-mail: aruzzarin@nuclear.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (LIN/PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Lab. de Instrumentação Nuclear; Cruz, P.A.L. da; Ferreira Filho, A.L.; Iwahara, A. [Instituto de Radioproteção e Dosimetria (LNMRI/IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. Nacional de Metrologia das Radiações Ionizantes

    2017-07-01

    The 4πβ-γ coincidence counting and CIEMAT/NIST liquid scintillation methods were used in the standardization of a solution of {sup 152}Eu. In the CIEMAT/NIST method, measurements were performed in a Wallac 1414 liquid scintillation counter. In the 4πβ-γ coincidence counting, the solution was standardized using a coincidence method with beta-efficiency extrapolation. A simple 4πβ-γ coincidence system was used, with an acrylic scintillation cell coupled to two coincident photomultipliers at 180° to each other and a NaI(Tl) detector. The activity concentrations obtained were 156.934 ± 0.722 and 157.403 ± 0.113 kBq/g, respectively, for the CIEMAT/NIST and 4πβ-γ coincidence counting measurement methods. (author)
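The beta-efficiency extrapolation underlying the coincidence result can be sketched with synthetic data: the quantity N_beta * N_gamma / N_c is fitted linearly against (1 - eps)/eps, with eps = N_c / N_gamma, and the intercept at full beta efficiency estimates the activity. All numbers below are invented for illustration:

```python
import numpy as np

def extrapolated_activity(n_beta, n_gamma, n_coinc):
    """Beta-efficiency extrapolation for 4pi(beta)-gamma coincidence
    counting: fit N_beta*N_gamma/N_c against (1 - eps)/eps, where
    eps = N_c/N_gamma, and read off the intercept as eps -> 1."""
    eps = n_coinc / n_gamma
    x = (1.0 - eps) / eps
    y = n_beta * n_gamma / n_coinc
    slope, intercept = np.polyfit(x, y, 1)
    return intercept            # activity estimate at 100% beta efficiency

# synthetic series of efficiency points with a linear dependence
true_n0, k = 1000.0, 50.0       # assumed "true" activity and slope
eps = np.array([0.95, 0.90, 0.85, 0.80, 0.75])
x = (1 - eps) / eps
n_gamma = np.full_like(eps, 400.0)
n_coinc = eps * n_gamma
n_beta = (true_n0 + k * x) * n_coinc / n_gamma   # built to be exactly linear
print(extrapolated_activity(n_beta, n_gamma, n_coinc))  # recovers ~1000.0
```

In practice the efficiency is varied experimentally (e.g. by discrimination thresholds) and the linearity of y(x) is itself checked before extrapolating.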

  15. Method validation and verification in liquid scintillation counting using the long-term uncertainty method (LTUM) on two decades of proficiency test data

    International Nuclear Information System (INIS)

    Verrezen, F.; Vasile, M.; Loots, H.; Bruggeman, M.

    2017-01-01

    Results from proficiency tests gathered over the past two decades by the laboratory for low-level radioactivity measurements by liquid scintillation counting of {sup 3}H (184 results) and {sup 14}C (74 results) are used to verify the laboratory's validated measurement methods, in particular the estimated uncertainty budget of each method and its reproducibility and stability. A linear regression approach, described in the literature as the long-term uncertainty method (LTUM), is used for the analysis of the results. The present study clearly indicates the advantages of using proficiency-test results to identify possible constant or proportional bias effects, as well as the possibility of comparing the laboratory's performance with that of peer laboratories. (author)
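A minimal sketch of the regression step described (the data are a synthetic proficiency-test history; the LTUM paper's exact formulation may differ):

```python
import numpy as np

def ltum_bias(reference, reported):
    """Regress the laboratory's reported values against the proficiency
    reference values. A non-zero intercept flags constant bias, a slope
    different from 1 flags proportional bias, and the residual scatter
    reflects the long-term measurement uncertainty."""
    slope, intercept = np.polyfit(reference, reported, 1)
    resid = reported - (slope * reference + intercept)
    return slope, intercept, resid.std(ddof=2)

# synthetic two-decade history (Bq/L, made up): 2% proportional bias
rng = np.random.default_rng(7)
ref = rng.uniform(10, 500, 80)
rep = 1.02 * ref + rng.normal(0, 5, 80)
slope, intercept, u = ltum_bias(ref, rep)
print(round(slope, 3), round(intercept, 2), round(u, 2))
```

Comparing the recovered slope and intercept with their fit uncertainties is what turns a pile of old proficiency results into a validation statement.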

  16. Accurate Electromagnetic Modeling Methods for Integrated Circuits

    NARCIS (Netherlands)

    Sheng, Z.

    2010-01-01

    The present development of modern integrated circuits (IC’s) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on

  17. A powerful nonparametric method for detecting differentially co-expressed genes: distance correlation screening and edge-count test.

    Science.gov (United States)

    Zhang, Qingyang

    2018-05-16

    Differential co-expression analysis, as a complement of differential expression analysis, offers significant insights into the changes in molecular mechanism of different phenotypes. A prevailing approach to detecting differentially co-expressed genes is to compare Pearson's correlation coefficients in two phenotypes. However, due to the limitations of Pearson's correlation measure, this approach lacks the power to detect nonlinear changes in gene co-expression which is common in gene regulatory networks. In this work, a new nonparametric procedure is proposed to search differentially co-expressed gene pairs in different phenotypes from large-scale data. Our computational pipeline consisted of two main steps, a screening step and a testing step. The screening step is to reduce the search space by filtering out all the independent gene pairs using distance correlation measure. In the testing step, we compare the gene co-expression patterns in different phenotypes by a recently developed edge-count test. Both steps are distribution-free and targeting nonlinear relations. We illustrate the promise of the new approach by analyzing the Cancer Genome Atlas data and the METABRIC data for breast cancer subtypes. Compared with some existing methods, the new method is more powerful in detecting nonlinear type of differential co-expressions. The distance correlation screening can greatly improve computational efficiency, facilitating its application to large data sets.
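The screening statistic can be implemented in a few lines; a sketch of the sample distance correlation (following Szekely et al.'s V-statistic definition), which is zero in the population exactly when the two variables are independent and therefore detects nonlinear dependence that Pearson's correlation misses:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation via double-centered pairwise
    distance matrices: dCor = sqrt(dCov^2 / sqrt(dVar_x * dVar_y))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(3)
x = rng.standard_normal(300)
z = rng.standard_normal(300)
dc_dep = distance_correlation(x, x ** 2)   # nonlinear dependence: large
dc_ind = distance_correlation(x, z)        # independent: near zero
print(round(dc_dep, 3), round(dc_ind, 3))
```

Pearson's correlation between x and x^2 is near zero for symmetric x, so this is precisely the kind of pair the screening step is designed to keep.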

  18. Social network extraction based on Web: 3. the integrated superficial method

    Science.gov (United States)

    Nasution, M. K. M.; Sitompul, O. S.; Noah, S. A.

    2018-03-01

    The Web as a source of information has become part of social behavior information. Although it involves only the limited information disclosed by search engines in the form of hit counts, snippets, and URL addresses of web pages, the integrated extraction method produces a social network that is not only trusted but enriched. Unintegrated extraction methods may produce social networks without explanation, resulting in poor supplemental information, or produce surmise-laden and consequently unrepresentative social structures. The integrated superficial method, in addition to generating the core social network, also generates an expanded network so as to reach the scope of relation clues, with a number of edges computationally close to n(n - 1)/2 for n social actors.

  19. Adaptive integral equation methods in transport theory

    International Nuclear Information System (INIS)

    Kelley, C.T.

    1992-01-01

    In this paper, an adaptive multilevel algorithm for integral equations is described that has been developed with the Chandrasekhar H equation and its generalizations in mind. The algorithm maintains good performance when the Frechet derivative of the nonlinear map is singular at the solution, as happens in radiative transfer with conservative scattering and in critical neutron transport. Numerical examples that demonstrate the algorithm's effectiveness are presented
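The Chandrasekhar H equation mentioned above can also be solved by a plain fixed-point iteration. This sketch is not the paper's adaptive multilevel algorithm; it uses a midpoint discretization and the standard reciprocal form of the H equation for isotropic scattering with single-scattering albedo omega:

```python
import numpy as np

def chandrasekhar_H(omega=0.9, n=200, iters=200):
    """Fixed-point iteration on the stable reciprocal form
    1/H(mu) = sqrt(1 - omega) + int_0^1 nu*(omega/2)*H(nu)/(mu + nu) dnu,
    discretized with the midpoint rule on (0, 1)."""
    mu = (np.arange(n) + 0.5) / n
    H = np.ones(n)
    for _ in range(iters):
        kern = (omega / 2) * mu[None, :] * H[None, :] / (mu[:, None] + mu[None, :])
        H = 1.0 / (np.sqrt(1.0 - omega) + kern.sum(axis=1) / n)
    return mu, H

omega = 0.9
mu, H = chandrasekhar_H(omega)
# exact moment identity: (omega/2) * int_0^1 H(mu) dmu = 1 - sqrt(1 - omega)
print((omega / 2) * H.mean())   # ~0.6838 for omega = 0.9
```

Close to the conservative case omega -> 1 the convergence of such simple iterations degrades, which is one motivation for the multilevel treatment the abstract describes.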

  20. A symplectic integration method for elastic filaments

    Science.gov (United States)

    Ladd, Tony; Misra, Gaurav

    2009-03-01

    Elastic rods are a ubiquitous coarse-grained model of semi-flexible biopolymers such as DNA, actin, and microtubules. The Worm-Like Chain (WLC) is the standard numerical model for semi-flexible polymers, but it is only a linearized approximation to the dynamics of an elastic rod, valid for small deflections; typically the torsional motion is neglected as well. In the standard finite-difference and finite-element formulations of an elastic rod, the continuum equations of motion are discretized in space and time, but it is then difficult to ensure that the Hamiltonian structure of the exact equations is preserved. Here we discretize the Hamiltonian itself, expressed as a line integral over the contour of the filament. This discrete representation of the continuum filament can then be integrated by one of the explicit symplectic integrators frequently used in molecular dynamics. The model systematically approximates the continuum partial differential equations, but has the same level of computational complexity as molecular dynamics and is constraint free. Numerical tests show that the algorithm is much more stable than a finite-difference formulation and can be used for high aspect ratio filaments, such as actin. We present numerical results for the deterministic and stochastic motion of single filaments.

  1. H I, galaxy counts, and reddening: Variation in the gas-to-dust ratio, the extinction at high galactic latitudes, and a new method for determining galactic reddening

    International Nuclear Information System (INIS)

    Burstein, D.; Heiles, C.

    1978-01-01

    We reanalyze the interrelationships among Shane-Wirtanen galaxy counts, H I column densities, and reddenings, and resolve many of the problems raised by Heiles. These problems were caused by two factors: subtle biases in the reddening data and a variable gas-to-dust ratio in the galaxy. We present a compilation of reddenings for RR Lyrae stars and globular clusters which are on the same system and which we believe to be relatively free of biases. The extinction at the galactic poles, as determined by galaxy counts, is reexamined by using a new method to analyze galaxy counts. This new method partially accounts for the nonrandom clustering of galaxies and permits a reasonable estimate of the error in log N/sub gal/ as a function of latitude. The analysis shows that the galaxy counts (or galaxy cluster counts) are too noisy to allow direct determination of the extinction, or variation in extinction, near the galactic poles. From all available data, we conclude that the reddening at the poles is small [< or =0.02 mag in E (B--V) over much of the north galactic pole] and irregularly distributed. We find that there are zero offsets in the relations between E (B--V) and H I, and between galaxy counts and H I, which are at least partly the result of an instrumental effect in the radio data. We also show that the gas-to-dust ratio can vary by a factor of 2 from the average, and we present two methods for correcting for this variability in predicting the reddening of objects which are located outside of the galactic absorbing layer. We present a prescription for predicting these reddenings; in the area of sky covered by the Shane-Wirtanen galaxy counts, the error in these predictions is, on average, less than 0.03 mag in E (B--V).

  2. Integral Method of Boundary Characteristics: Neumann Condition

    Science.gov (United States)

    Kot, V. A.

    2018-05-01

    A new algorithm, based on systems of identical equalities with integral and differential boundary characteristics, is proposed for solving boundary-value problems of heat conduction in bodies of canonical shape under a Neumann boundary condition. Results of a numerical analysis of the accuracy of solving heat-conduction problems with variable boundary conditions by this algorithm are presented. The solutions obtained with it can be considered exact, because their errors amount to hundredths and ten-thousandths of a percent over a wide range of the problem parameters.

  3. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Lars; Andersen, Søren; Damkilde, Lars

    2009-01-01

    The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.

  4. Numerical method of singular problems on singular integrals

    International Nuclear Information System (INIS)

    Zhao Huaiguo; Mou Zongze

    1992-02-01

    As the first part of numerical research on singular problems, a numerical method is proposed for singular integrals. It is shown that the procedure is quite powerful for physics calculations involving singularities, such as the plasma dispersion function. Useful quadrature formulas for some classes of singular integrals are derived. In general, integrals with more complex singularities can be dealt with easily by this method
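A standard way to compute such singular (principal-value) integrals numerically is to subtract the singularity, which is the spirit of quadrature formulas for Cauchy kernels like the one inside the plasma dispersion function. A sketch (not the paper's actual quadrature):

```python
import numpy as np

def principal_value(f, x0, a, b, n=200001):
    """Cauchy principal value of int_a^b f(t)/(t - x0) dt with a < x0 < b:
    the regularized integrand (f(t) - f(x0))/(t - x0) is smooth, and the
    subtracted part integrates exactly to f(x0) * ln((b - x0)/(x0 - a))."""
    t = np.linspace(a, b, n)
    dt = t - x0
    g = np.zeros_like(t)
    mask = np.abs(dt) > 1e-12            # skip the (removable) singular node
    g[mask] = (f(t[mask]) - f(x0)) / dt[mask]
    smooth = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))   # trapezoid rule
    return smooth + f(x0) * np.log((b - x0) / (x0 - a))

# check against a case solvable by hand:
# PV int_{-1}^{1} t/(t - 1/2) dt = 2 + (1/2) ln(1/3)
pv = principal_value(lambda t: t, 0.5, -1.0, 1.0)
print(pv, 2 + 0.5 * np.log(1.0 / 3.0))
```

The subtraction turns an improper integral into an ordinary one, so any smooth-function quadrature rule applies to the remainder.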

  5. A Field Evaluation of the Time-of-Detection Method to Estimate Population Size and Density for Aural Avian Point Counts

    Directory of Open Access Journals (Sweden)

    Mathew W. Alldredge

    2007-12-01

    Full Text Available The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording, for each bird, the subintervals in which it sings. The method can be viewed as generating data equivalent to closed capture-recapture information. It differs from the distance and multiple-observer methods in that it does not require all the birds to sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated as a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at high and low homogeneous rates per interval with those singing at high and low heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused by both the very low detection probabilities of all distant
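The capture-recapture analogy can be made concrete with the simplest closed-population estimator (model M0, constant per-interval detection probability). This is an illustrative sketch, not the estimator fitted in the paper:

```python
import numpy as np

def m0_abundance(histories, iters=100):
    """Closed capture-recapture (model M0) on time-of-detection data.
    `histories` is an (n_detected, K) 0/1 array: bird i sang in interval j.
    Iterate N = n / (1 - (1 - p)^K) with p = d / (K * N), where n is the
    number of birds detected at least once and d the total detections."""
    n, K = histories.shape
    d = histories.sum()
    N = float(n)
    for _ in range(iters):
        p = d / (K * N)
        N = n / (1.0 - (1.0 - p) ** K)
    return N, p

# simulate 100 birds singing homogeneously with p = 0.4 per 2-min interval
rng = np.random.default_rng(5)
full = rng.random((100, 4)) < 0.4
detected = full[full.any(axis=1)]      # only birds heard at least once
N_hat, p_hat = m0_abundance(detected)
print(round(N_hat))                    # close to the true 100
```

The abstract's finding follows directly from this structure: when singing probabilities are heterogeneous, the quiet birds are both detected less and weighted less, so a homogeneous-p estimator is biased low.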

  6. Momentum integral network method for thermal-hydraulic transient analysis

    International Nuclear Information System (INIS)

    Van Tuyle, G.J.

    1983-01-01

    A new momentum integral network method has been developed, and tested in the MINET computer code. The method was developed in order to facilitate the transient analysis of complex fluid flow and heat transfer networks, such as those found in the balance of plant of power generating facilities. The method employed in the MINET code is a major extension of a momentum integral method reported by Meyer. Meyer integrated the momentum equation over several linked nodes, called a segment, and used a segment average pressure, evaluated from the pressures at both ends. Nodal mass and energy conservation determined nodal flows and enthalpies, accounting for fluid compression and thermal expansion

  7. Counting statistics in radioactivity measurements

    International Nuclear Information System (INIS)

    Martin, J.

    1975-01-01

    The application of statistical methods to radioactivity measurement problems is analyzed in several chapters devoted successively to: the statistical nature of radioactivity counts; the application to radioactive counting of two theoretical probability distributions, Poisson's distribution law and the Laplace-Gauss law; true counting laws; corrections related to the nature of the apparatus; statistical techniques in gamma spectrometry [fr
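The Poisson character of radioactive counts, the chapter's starting point, gives the familiar sqrt(N) uncertainty and the quadrature rule for net count rates; a minimal numerical illustration (all numbers invented):

```python
import numpy as np

# counts in a fixed interval are Poisson distributed, so a measurement of
# N counts carries a statistical uncertainty of about sqrt(N)
rng = np.random.default_rng(0)
counts = rng.poisson(lam=400.0, size=100_000)
print(counts.mean(), counts.std())       # mean ~400, std ~sqrt(400) = 20

# net rate = gross - background; independent Poisson errors add in quadrature
gross, background = 2500.0, 900.0        # counts, illustrative
net = gross - background
sigma_net = np.sqrt(gross + background)
print(net, sigma_net)                    # 1600.0 +/- ~58.3
```

Note that the uncertainty of the net count depends on the sum of gross and background counts, which is why long background measurements matter for low-activity samples.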

  8. Evaluation of accuracy and precision of a smartphone based automated parasite egg counting system in comparison to the McMaster and Mini-FLOTAC methods.

    Science.gov (United States)

    Scare, J A; Slusarewicz, P; Noel, M L; Wielgus, K M; Nielsen, M K

    2017-11-30

    Fecal egg counts are emphasized for guiding equine helminth parasite control regimens due to the rise of anthelmintic resistance. This, however, poses further challenges, since egg-counting results are prone to issues such as operator dependency, method variability, equipment requirements, and time commitment. The use of image-analysis software for performing fecal egg counts has been promoted in recent studies to reduce the operator dependency associated with manual counts. In an attempt to remove the operator dependency associated with current methods, we developed a diagnostic system that utilizes a smartphone and employs image analysis to generate automated egg counts. The aims of this study were (1) to determine the precision of the first smartphone prototype, the modified McMaster and ImageJ; and (2) to determine the precision, accuracy, sensitivity, and specificity of the second smartphone prototype, the modified McMaster, and Mini-FLOTAC techniques. Repeated counts on fecal samples naturally infected with equine strongyle eggs were performed using each technique to evaluate precision. Triplicate counts on 36 egg-count-negative samples and 36 samples spiked with strongyle eggs at 5, 50, 500, and 1000 eggs per gram were performed using the second smartphone system prototype, Mini-FLOTAC, and McMaster to determine technique accuracy. Precision across the techniques was evaluated using the coefficient of variation. Regarding the first aim of the study, the McMaster technique performed with significantly less variance than the first smartphone prototype and ImageJ, while the smartphone and ImageJ performed with equal variance. Regarding the second aim, the second smartphone system prototype had significantly better precision than the McMaster; values reported for the smartphone system were 64.51%, 21.67%, and 32.53%.
    The Mini-FLOTAC was significantly more accurate than the McMaster and the smartphone system, while the smartphone and McMaster counts did not have statistically different accuracies

  9. The Kruskal Count

    OpenAIRE

    Lagarias, Jeffrey C.; Rains, Eric; Vanderbei, Robert J.

    2001-01-01

    The Kruskal Count is a card trick invented by Martin J. Kruskal in which a magician "guesses" a card selected by a subject according to a certain counting procedure. With high probability the magician can correctly "guess" the card. The success of the trick is based on a mathematical principle related to coupling methods for Markov chains. This paper analyzes in detail two simplified variants of the trick and estimates the probability of success. The model predictions are compared with simula...
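The coupling behind the trick can be checked by simulation; a simplified variant with card values uniform on 1..10 rather than a real deck (in the actual trick, court cards count as 5):

```python
import random

def kruskal_final(values, start):
    """Follow Kruskal's counting procedure: from position `start`,
    repeatedly jump ahead by the value of the current card; return
    the index of the last card reached."""
    i = start
    while i + values[i] < len(values):
        i += values[i]
    return i

# two chains started at different cards usually land on the same final
# card: once they ever touch the same position, they agree forever,
# which is exactly the Markov-chain coupling the paper analyzes
random.seed(2)
deck = [random.randint(1, 10) for _ in range(200)]
print(kruskal_final(deck, 0) == kruskal_final(deck, 5))  # usually True
```

The magician exploits this by running a chain from a start of their own choosing: with high probability it merges with the subject's secret chain, so both end on the same card.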

  10. A simple method for regional cerebral blood flow measurement by one-point arterial blood sampling and 123I-IMP microsphere model (part 2). A study of time correction of one-point blood sample count

    International Nuclear Information System (INIS)

    Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi

    1999-01-01

    In our previous paper regarding determination of the regional cerebral blood flow (rCBF) using the 123 I-IMP microsphere model, we reported that the accuracy of determination of the integrated value of the input function from one-point arterial blood sampling can be increased by performing correction using the 5 min: 29 min ratio for the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after 123 I-IMP injection causes errors with this method, and there is thus a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time during the interval between 5 minutes and 20 minutes after 123 I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input functions. This method was then applied to 174 experimental subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actual measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF could be estimated within an error rate of 13%, while estimation was possible in 95% of the cases within an error rate of 25%. This improved method is a simple technique for determination of the rCBF by the 123 I-IMP microsphere model and one-point arterial blood sampling which no longer has a time limitation and does not require an octanol extraction step. (author)

  11. Achieving Integration in Mixed Methods Designs—Principles and Practices

    OpenAIRE

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participato...

  12. Integrated management of thesis using clustering method

    Science.gov (United States)

    Astuti, Indah Fitri; Cahyadi, Dedy

    2017-02-01

    The thesis is one of the major requirements for students pursuing their bachelor degree. In fact, finishing the thesis involves a long process including consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and the appraisal process by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time to sit together in a seminar room in order to examine a thesis. Therefore, the seminar scheduling process should be the top priority to be solved. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all the stakeholders can interact with each other and manage the thesis process without conflicting timetables. A branch of computer science named Management Information Systems (MIS) could be a breakthrough in dealing with thesis management. This research applies a clustering method to distinguish categories using mathematical formulas. A system was then developed along with the method to create a well-managed tool providing key facilities such as seminar scheduling, the consultation and review process, thesis approval, the assessment process, and also a reliable database of theses. The database plays an important role for present and future purposes.
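
    The abstract does not specify which clustering formulas are used, so the sketch below illustrates the general idea with a plain k-means on hypothetical thesis features (completed milestones, weeks elapsed); the features and data are assumptions, not the paper's.

```python
# Plain Lloyd's k-means grouping theses into k clusters by feature vectors.
import math
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial centers: k data points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign each point to nearest
            j = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[j].append(p)
        centers = [                          # recompute cluster means
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

theses = [(1, 20), (2, 22), (1, 25),         # early-stage theses
          (8, 40), (9, 38), (10, 44)]        # near-completion theses
centers, clusters = kmeans(theses, k=2)
print(sorted(len(cl) for cl in clusters))
```

    With two well-separated groups like these, the algorithm recovers the early-stage and near-completion categories regardless of the random initialization.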

  13. IMP: Integrated method for power analysis

    Energy Technology Data Exchange (ETDEWEB)

    1989-03-01

    An integrated, easy to use, economical package of microcomputer programs has been developed which can be used by small hydro developers to evaluate potential sites for small scale hydroelectric plants in British Columbia. The programs enable evaluation of sites located far from the nearest stream gauging station, for which streamflow data are not available. For each of the province's 6 hydrologic regions, a streamflow record for one small watershed is provided in the data base. The program can then be used to generate synthetic streamflow records and to compare results obtained by the modelling procedure with the actual data. The program can also be used to explore the significance of modelling parameters and to develop a detailed appreciation for the accuracy which can be obtained under various circumstances. The components of the program are an atmospheric model of precipitation; a watershed model that will generate a continuous series of streamflow data, based on information from the atmospheric model; a flood frequency analysis system that uses site-specific topographic data plus information from the atmospheric model to generate a flood frequency curve; a hydroelectric power simulation program which determines daily energy output for a run-of-river or reservoir storage site based on selected generation facilities and the time series generated in the watershed model; and a graphic analysis package that provides direct visualization of data and modelling results. This report contains a description of the programs, a user guide, the theory behind the model, the modelling methodology, and results from a workshop that reviewed the program package. 32 refs., 16 figs., 18 tabs.

  14. Deterministic methods to solve the integral transport equation in neutronic

    International Nuclear Information System (INIS)

    Warin, X.

    1993-11-01

    We present a synthesis of the methods used to solve the integral transport equation in neutronics. This formulation is mainly used to compute 2D solutions in heterogeneous assemblies. Three kinds of methods are described: - the collision probability method; - the interface current method; - the current coupling collision probability method. These methods do not seem to be the most effective in 3D. (author). 9 figs

  15. Neutron generation time of the reactor 'crocus' by an interval distribution method for counts collected by two detectors

    International Nuclear Information System (INIS)

    Haldy, P.-A.; Chikouche, M.

    1975-01-01

    The distribution of time intervals between a count in one neutron detector and the consequent event registered in a second one is considered. A 'four interval' probability generating function was derived, by means of which the expression for the distribution of the time intervals, lasting from a triggering detection in the first detector to the subsequent count in the second one, could be obtained. The experimental work was conducted in the zero thermal power reactor Crocus, using a neutron source provided by spontaneous fission, a BF 3 counter as the first detector and an He 3 detector as the second instrument. (U.K.)

  16. Detection of irradiated food by using direct epifluorescent filter technique/aerobic plate count method (DEFT/APC)

    International Nuclear Information System (INIS)

    Zhao Yongfu; Li Lili; Wang Changbao; Ji Ping; Wang Chao; Wang Zhidong

    2010-01-01

    The Direct Epifluorescent Filter Technique/Aerobic Plate Count (DEFT/APC) method can be used to identify irradiated food by comparing the DEFT and APC counts of samples before and after irradiation. This technique was tested using spice and dried marine fish as testing materials in this study. The results show that the index log(DEFT/APC) > 4.0 indicates that the samples have been irradiated at a dose level higher than 1.0 kGy. It also indicates that the detection sensitivity was affected by the initial value of APC and the D 10 value of the samples. (authors)
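
    The screening index from this record reduces to a one-line computation: irradiation kills viable cells (lowering APC) while the total cell count (DEFT) is largely unchanged, so a large log ratio flags irradiation. The counts below are hypothetical, not data from the study.

```python
# DEFT/APC irradiation-screening index: flag a sample as irradiated when
# log10(DEFT count / APC count) exceeds the 4.0 threshold from the study.
import math

def deft_apc_index(deft_count, apc_count):
    return math.log10(deft_count / apc_count)

def likely_irradiated(deft_count, apc_count, threshold=4.0):
    return deft_apc_index(deft_count, apc_count) > threshold

# Hypothetical counts per gram:
print(likely_irradiated(deft_count=1e7, apc_count=1e2))  # index 5.0 -> flagged
print(likely_irradiated(deft_count=1e6, apc_count=1e4))  # index 2.0 -> not
```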

  17. Quadratic algebras in the noncommutative integration method of wave equation

    International Nuclear Information System (INIS)

    Varaksin, O.L.

    1995-01-01

    The paper investigates applications of the method of noncommutative integration of linear partial differential equations. A nontrivial example is given: the integration of the three-dimensional wave equation using non-Abelian quadratic algebras.

  18. New method for calculation of integral characteristics of thermal plumes

    DEFF Research Database (Denmark)

    Zukowska, Daria; Popiolek, Zbigniew; Melikov, Arsen Krikor

    2008-01-01

    A method for calculation of integral characteristics of thermal plumes is proposed. The method allows for determination of the integral parameters of plumes based on speed measurements performed with omnidirectional low velocity thermoanemometers. The method includes a procedure for calculation of the directional velocity (upward component of the mean velocity). The method is applied for determination of the characteristics of an asymmetric thermal plume generated by a sitting person. The method was validated in full-scale experiments in a climatic chamber with a thermal manikin as a simulator of a sitting person.

  19. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
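
    The MAP-plus-gradient-descent step can be sketched on a toy problem. The model below is an assumption for illustration, not the authors' formulation: two sensors each observe complementary pixels of one image x, and the fused image minimizes the sum of data-fidelity terms plus a small quadratic prior.

```python
# Toy MAP fusion: minimise  sum_k ||x - y_k||^2 (over each sensor's observed
# pixels) + lam * ||x||^2  by plain gradient descent.
def map_fuse(observations, masks, n, lam=1e-3, step=0.05, iters=3000):
    x = [0.0] * n
    for _ in range(iters):
        grad = [2.0 * lam * xi for xi in x]        # gradient of the prior
        for y, mask in zip(observations, masks):   # data-fidelity gradients
            for i in range(n):
                if mask[i]:
                    grad[i] += 2.0 * (x[i] - y[i])
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x

x_true = [0.3, 0.8, 0.5, 0.1, 0.9, 0.4]
mask1 = [i % 2 == 0 for i in range(6)]     # sensor 1 sees even pixels
mask2 = [i % 2 == 1 for i in range(6)]     # sensor 2 sees odd pixels
y1 = [v if m else 0.0 for v, m in zip(x_true, mask1)]
y2 = [v if m else 0.0 for v, m in zip(x_true, mask2)]
x_hat = map_fuse([y1, y2], [mask1, mask2], n=6)
print([round(v, 3) for v in x_hat])
```

    Because every pixel is seen by exactly one sensor, the descent converges to (approximately) the true image, illustrating how complementary observations are fused in one framework.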

  20. Alternative containment integrity test methods, an overview of possible techniques

    International Nuclear Information System (INIS)

    Spletzer, B.L.

    1986-01-01

    A study is being conducted to develop and analyze alternative methods for testing of containment integrity. The study is focused on techniques for continuously monitoring containment integrity to provide rapid detection of existing leaks, thus providing greater certainty of the integrity of the containment at any time. The study is also intended to develop techniques applicable to the currently required Type A integrated leakage rate tests. A brief discussion of the range of alternative methods currently being considered is presented. The methods include applicability to all major containment types, operating and shutdown plant conditions, and quantitative and qualitative leakage measurements. The techniques are analyzed in accordance with the current state of knowledge of each method. The bulk of the techniques discussed are in the conceptual stage, have not been tested in actual plant conditions, and are presented here as a possible future direction for evaluating containment integrity. Of the methods considered, no single method provides optimum performance for all containment types. Several methods are limited in the types of containment for which they are applicable. The results of the study to date indicate that techniques for continuous monitoring of containment integrity exist for many plants and may be implemented at modest cost.

  1. Red Blood Cell Count Automation Using Microscopic Hyperspectral Imaging Technology.

    Science.gov (United States)

    Li, Qingli; Zhou, Mei; Liu, Hongying; Wang, Yiting; Guo, Fangmin

    2015-12-01

    Red blood cell counts have been proven to be one of the most frequently performed blood tests and are valuable for early diagnosis of some diseases. This paper describes an automated red blood cell counting method based on microscopic hyperspectral imaging technology. Unlike light microscopy-based red blood cell counting methods, a combined spatial and spectral algorithm is proposed to identify red blood cells by integrating active contour models and automated two-dimensional k-means with a spectral angle mapper algorithm. Experimental results show that the proposed algorithm performs better than spatial-based algorithms because the new algorithm can jointly use the spatial and spectral information of blood cells.
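
    The spectral angle mapper (SAM) step mentioned above compares each pixel's spectrum to a reference spectrum by the angle between them, which makes the match insensitive to overall brightness. The spectra below are hypothetical, not measurements from the paper.

```python
# Spectral angle mapper: angle between a pixel spectrum and a reference
# spectrum; small angles indicate the same material regardless of intensity.
import math

def spectral_angle(spectrum, reference):
    dot = sum(s * r for s, r in zip(spectrum, reference))
    ns = math.sqrt(sum(s * s for s in spectrum))
    nr = math.sqrt(sum(r * r for r in reference))
    # Clip to [-1, 1] to guard against floating-point overshoot in acos.
    return math.acos(max(-1.0, min(1.0, dot / (ns * nr))))

rbc_ref = [0.2, 0.5, 0.9, 0.4]   # hypothetical red-blood-cell spectrum
pixel_a = [0.4, 1.0, 1.8, 0.8]   # same spectral shape, twice as bright
pixel_b = [0.9, 0.4, 0.2, 0.7]   # different spectral shape

print(round(spectral_angle(pixel_a, rbc_ref), 3))
print(round(spectral_angle(pixel_b, rbc_ref), 3))
```

    In the paper's pipeline, pixels whose angle to the reference falls below a threshold are kept as red-blood-cell candidates before the spatial (active contour, k-means) steps.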

  2. Two pricing methods for solving an integrated commercial fishery ...

    African Journals Online (AJOL)

    a model (Hasan and Raffensperger, 2006) to solve this problem: the integrated ... planning and labour allocation for that processing firm, but did not consider any fleet- .... the DBONP method actually finds such price information, and uses it.

  3. Critical Analysis of Methods for Integrating Economic and Environmental Indicators

    NARCIS (Netherlands)

    Huguet Ferran, Pau; Heijungs, Reinout; Vogtländer, Joost G.

    2018-01-01

    The application of environmental strategies requires scoring and evaluation methods that provide an integrated vision of the economic and environmental performance of systems. The vector optimisation, ratio and weighted addition of indicators are the three most prevalent techniques for addressing

  4. A simple flow-concentration modelling method for integrating water ...

    African Journals Online (AJOL)

    A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...

  5. APPLICATION OF BOUNDARY INTEGRAL EQUATION METHOD FOR THERMOELASTICITY PROBLEMS

    Directory of Open Access Journals (Sweden)

    Vorona Yu.V.

    2015-12-01

    Full Text Available The Boundary Integral Equation Method is used to solve analytically the problems of coupled thermoelastic spherical wave propagation. The resulting mathematical expressions coincide with the solutions obtained in a conventional manner.

  6. New Approaches to Aluminum Integral Foam Production with Casting Methods

    Directory of Open Access Journals (Sweden)

    Ahmet Güner

    2015-08-01

    Full Text Available Integral foam has been used in the production of polymer materials for a long time. Metal integral foam casting systems are obtained by transferring and adapting polymer injection technology. Metal integral foam produced by casting has a solid skin at the surface and a foam core. Producing near-net shape reduces production expenses. Insurance companies nowadays want the automotive industry to use metallic foam parts because of their higher impact energy absorption properties. In this paper, manufacturing processes of aluminum integral foam with casting methods will be discussed.

  7. Tau method approximation of the Hubbell rectangular source integral

    International Nuclear Information System (INIS)

    Kalla, S.L.; Khajah, H.G.

    2000-01-01

    The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral: I(a,b) = ∫₀ᵇ (1/√(1+x²)) arctan(a/√(1+x²)) dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows.
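
    To make the integral concrete, here is a plain numerical evaluation by composite Simpson's rule (the paper itself uses a Chebyshev/Tau expansion; quadrature is shown only as a sketch of what is being approximated).

```python
# Numerical evaluation of the Hubbell rectangular source integral
#   I(a, b) = integral from 0 to b of arctan(a/sqrt(1+x^2)) / sqrt(1+x^2) dx
import math

def hubbell(a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals on [0, b]."""
    def f(x):
        s = math.sqrt(1.0 + x * x)
        return math.atan(a / s) / s
    h = b / n
    total = f(0.0) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3.0

# Sanity check: as a -> infinity, arctan(a/sqrt(1+x^2)) -> pi/2, so
# I(a, b) -> (pi/2) * asinh(b).
print(round(hubbell(1e9, 2.0), 6))
print(round(math.pi / 2 * math.asinh(2.0), 6))
```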

  8. Assessing Backwards Integration as a Method of KBO Family Finding

    Science.gov (United States)

    Benfell, Nathan; Ragozzine, Darin

    2018-04-01

    The age of young asteroid collisional families can sometimes be determined by using backwards n-body integrations of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual specific asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. 2011 suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. But various challenges present themselves when running precise and accurate 4+ Gyr integrations of Kuiper Belt objects. We have created simulated families of Kuiper Belt Objects with identical starting locations and velocity distributions, based on the Haumea Family. We then ran several long-term test integrations to observe the effect of various simulation parameters on integration results. These integrations were then used to investigate which parameters are of enough significance to require inclusion in the integration. Thereby we determined how to construct long-term integrations that both yield significant results and require manageable processing power. Additionally, we have tested the use of backwards integration as a method of discovery of potential young families in the Kuiper Belt.

  9. Counting Possibilia

    Directory of Open Access Journals (Sweden)

    Alfredo Tomasetta

    2010-06-01

    Full Text Available Timothy Williamson supports the thesis that every possible entity necessarily exists and so he needs to explain how a possible son of Wittgenstein’s, for example, exists in our world: he exists as a merely possible object (MPO), a pure locus of potential. Williamson presents a short argument for the existence of MPOs: how many knives can be made by fitting together two blades and two handles? Four: at the most two are concrete objects, the others being merely possible knives and merely possible objects. This paper defends the idea that one can avoid reference and ontological commitment to MPOs. My proposal is that MPOs can be dispensed with by using the notion of rules of knife-making. I first present a solution according to which we count lists of instructions - selected by the rules - describing physical combinations between components. This account, however, has its own difficulties and I eventually suggest that one can find a way out by admitting possible worlds, entities which are more commonly accepted - at least by philosophers - than MPOs. I maintain that, in answering Williamson’s questions, we count classes of physically possible worlds in which the same instance of a general rule is applied.

  10. Explicit integration of extremely stiff reaction networks: partial equilibrium methods

    International Nuclear Information System (INIS)

    Guidry, M W; Hix, W R; Billings, J J

    2013-01-01

    In two preceding papers (Guidry et al 2013 Comput. Sci. Disc. 6 015001 and Guidry and Harris 2013 Comput. Sci. Disc. 6 015002), we have shown that when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper, we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that such explicit methods may offer alternatives to implicit integration of even extremely stiff systems and that these methods may permit integration of much larger networks than have been possible before in a number of fields. (paper)
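
    The explicit asymptotic idea discussed here can be shown on a single stiff equation. This is a minimal sketch, not the networks of the paper: for dy/dt = F⁺ − k·y (production minus depletion), the asymptotic update divides the explicit step by (1 + k·Δt), which keeps it stable at timesteps where plain explicit Euler diverges.

```python
# Explicit asymptotic update vs. plain explicit Euler for dy/dt = f_plus - k*y.
def asymptotic_step(y, dt, f_plus, k):
    # y_{n+1} = y_n + dt*(f_plus - k*y_n) / (1 + k*dt): algebraically
    # stabilised explicit step for stiff depletion terms.
    return y + dt * (f_plus - k * y) / (1.0 + k * dt)

def euler_step(y, dt, f_plus, k):
    return y + dt * (f_plus - k * y)

f_plus, k, dt = 1.0, 1000.0, 0.01   # k*dt = 10: far beyond Euler's limit
y_asym = y_euler = 0.0
for _ in range(100):
    y_asym = asymptotic_step(y_asym, dt, f_plus, k)
    y_euler = euler_step(y_euler, dt, f_plus, k)

print(y_asym)        # approaches the equilibrium f_plus / k = 0.001
print(abs(y_euler))  # explicit Euler has blown up at this timestep
```

    This stability near equilibrium is exactly the regime the paper addresses: the asymptotic step relaxes smoothly onto the equilibrium F⁺/k instead of oscillating and diverging.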

  11. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods; Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH) have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
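
    A sketch of the sampling idea: substituting t = x/s turns the well function E1(x) = ∫ₓ^∞ e⁻ᵗ/t dt into the finite-interval integral ∫₀¹ (1/s)·e^(−x/s) ds, which a sampling scheme can average directly. In one dimension, Latin Hypercube Sampling reduces to stratified sampling (one uniform draw per stratum), used below; this illustrates the approach rather than reproducing the paper's exact designs.

```python
# LHS / stratified-sampling estimate of the exponential integral E1(x).
import math
import random

def exp_integral_lhs(x, n=20000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        s = (i + rng.random()) / n   # one point in stratum [i/n, (i+1)/n)
        total += math.exp(-x / s) / s
    return total / n

est = exp_integral_lhs(1.0)
print(round(est, 4))   # E1(1) is approximately 0.2194
```

    The integrand vanishes rapidly as s → 0 (the exponential underflows to zero before the 1/s factor matters), so the stratified average is well behaved over the whole unit interval.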

  12. An integrated lean-methods approach to hospital facilities redesign.

    Science.gov (United States)

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  13. Iterative algorithm for the volume integral method for magnetostatics problems

    International Nuclear Information System (INIS)

    Pasciak, J.E.

    1980-11-01

    Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D show that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double layer dipole magnet are given. Error estimates for the linearized problem are also derived.

  14. Biomass burning impact on PM2.5 over the southeastern US during 2007: integrating chemically speciated FRM filter measurements, MODIS fire counts and PMF analysis

    Directory of Open Access Journals (Sweden)

    R. J. Weber

    2010-07-01

    Full Text Available Archived Federal Reference Method (FRM) Teflon filters used by state regulatory agencies for measuring PM2.5 mass were acquired from 15 sites throughout the southeastern US and analyzed for water-soluble organic carbon (WSOC), water-soluble ions and carbohydrates to investigate biomass burning contributions to fine aerosol mass. Based on over 900 filters that spanned all of 2007, levoglucosan and K+ were studied in conjunction with MODIS Aqua fire count data to compare their performances as biomass burning tracers. Levoglucosan concentrations exhibited a distinct seasonal variation with large enhancement in winter and spring and a minimum in summer, and were well correlated with fire counts, except in winter when residential wood burning contributions were significant. In contrast, K+ concentrations had no apparent seasonal trend and poor correlation with fire counts. Levoglucosan and K+ only correlated well in winter (r² = 0.59) when biomass burning emissions were highest, whereas in other seasons they were not correlated due to the presence of other K+ sources. Levoglucosan also exhibited larger spatial variability than K+. Both species were higher in urban than rural sites (mean 44% higher for levoglucosan and 86% for K+). Positive Matrix Factorization (PMF) was applied to analyze PM2.5 sources and four factors were resolved: biomass burning, refractory material, secondary light absorbing WSOC and secondary sulfate/WSOC. The biomass burning source contributed 13% to PM2.5 mass annually, 27% in winter, and less than 2% in summer, consistent with other source apportionment studies based on levoglucosan, but lower in summer compared to studies based on K+.

  15. Review of Statistical Learning Methods in Integrated Omics Studies (An Integrated Information Science).

    Science.gov (United States)

    Zeng, Irene Sui Lan; Lumley, Thomas

    2018-01-01

    Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science which has collated and integrated different types of information for inferences and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of the extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than the number of features, and using a Bayesian approach when there is prior knowledge to be integrated, are also included in the commentary. For the completeness of the review, a table of currently available software and packages from 23 publications for omics is summarized in the appendix.

  16. A dynamic integrated fault diagnosis method for power transformers.

    Science.gov (United States)

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is gradually acquired and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.

  17. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    Science.gov (United States)

    Gao, Wensheng; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is gradually acquired and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841

  18. Achieving integration in mixed methods designs-principles and practices.

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-12-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.

  19. Achieving Integration in Mixed Methods Designs—Principles and Practices

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835

  20. Assessing the composition of fragmented agglutinated foraminiferal assemblages in ancient sediments: comparison of counting and area-based methods in Famennian samples (Late Devonian)

    Science.gov (United States)

    Girard, Catherine; Dufour, Anne-Béatrice; Charruault, Anne-Lise; Renaud, Sabrina

    2018-01-01

    Benthic foraminifera have been used as proxies for various paleoenvironmental variables such as food availability, carbon flux from surface waters, microhabitats, and indirectly water depth. Estimating assemblage composition based on morphotypes, as opposed to genus- or species-level identification, potentially loses important ecological information but opens the way to the study of ancient time periods. However, the ability to accurately constrain benthic foraminiferal assemblages has been questioned when the most abundant foraminifera are fragile agglutinated forms, particularly prone to fragmentation. Here we test an alternate method for accurately estimating the composition of fragmented assemblages. The cumulated area per morphotype method is assessed, i.e., the sum of the area of all tests or fragments of a given morphotype in a sample. The percentage of each morphotype is calculated as a portion of the total cumulated area. Percentages of different morphotypes based on counting and cumulated area methods are compared one by one and analyzed using principal component analyses, a co-inertia analysis, and Shannon diversity indices. Morphotype percentages are further compared to an estimate of water depth based on microfacies description. Percentages of the morphotypes are not related to water depth. In all cases, counting and cumulated area methods deliver highly similar results, suggesting that the less time-consuming traditional counting method may provide robust estimates of assemblages. The size of each morphotype may deliver paleobiological information, for instance regarding biomass, but should be considered carefully due to the pervasive issue of fragmentation.
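The count-based and cumulated-area assemblage measures compared in this record reduce to a few lines of arithmetic. A minimal sketch follows; the morphotype names and fragment areas are invented for illustration, not taken from the study.

```python
import math

# Hypothetical fragment data: (morphotype, area in mm^2) per test or fragment.
fragments = [
    ("tubular", 0.8), ("tubular", 0.5), ("tubular", 0.3),
    ("globular", 1.2), ("globular", 1.0),
    ("flattened", 0.4),
]

def percentages_by_count(frags):
    # Traditional counting method: each test/fragment counts once.
    counts = {}
    for morpho, _ in frags:
        counts[morpho] = counts.get(morpho, 0) + 1
    total = sum(counts.values())
    return {m: 100.0 * c / total for m, c in counts.items()}

def percentages_by_area(frags):
    # Cumulated-area method: sum the area of all fragments per morphotype.
    areas = {}
    for morpho, a in frags:
        areas[morpho] = areas.get(morpho, 0.0) + a
    total = sum(areas.values())
    return {m: 100.0 * a / total for m, a in areas.items()}

def shannon(percent):
    # Shannon diversity H' = -sum p_i ln p_i over the proportions.
    return -sum((p / 100) * math.log(p / 100) for p in percent.values() if p > 0)

by_count = percentages_by_count(fragments)
by_area = percentages_by_area(fragments)
print({m: round(p, 1) for m, p in by_count.items()})
print({m: round(p, 1) for m, p in by_area.items()})
print(round(shannon(by_count), 3))  # → 1.011
```

Comparing the two dictionaries morphotype by morphotype is exactly the one-by-one comparison the record describes, before any ordination or diversity analysis.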

  1. Method of and system for determining a spectrum of radiation characteristics with full counting-loss compensation

    International Nuclear Information System (INIS)

    Westphal, G.P.

    1984-01-01

    Real-time correction of counting losses in the operation of a pulse-height analyzer, connected to the output of a radiation detector, is accomplished by establishing a gating interval at a time when the analyzer is available after processing the last detector pulse, this interval beginning at an instant delayed beyond the trailing edge of that last pulse by at least a predetermined rise time and ending with the leading edge of the next detector pulse. Test pulses generated during this gating interval are counted and their number is used to determine a probability ratio whose reciprocal constitutes a weighting factor; the digitized amplitude of each detector pulse addresses a corresponding memory cell whose contents are thereupon increased by the current weighting factor
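The weighting idea is simple arithmetic: if test pulses show the analyzer to be free a fraction p of the time, each accepted detector pulse stands for 1/p true events. A minimal sketch of that bookkeeping (function names are illustrative, not from the patent):

```python
def weighting_factor(test_pulses_generated, test_pulses_counted):
    """Probability that the analyzer is free = counted/generated;
    each accepted detector pulse is weighted by the reciprocal."""
    p_free = test_pulses_counted / test_pulses_generated
    return 1.0 / p_free

def accumulate(spectrum, channel, w):
    # Instead of incrementing the addressed memory cell by 1,
    # add the current weighting factor, compensating counting losses.
    spectrum[channel] = spectrum.get(channel, 0.0) + w

spectrum = {}
w = weighting_factor(1000, 800)   # analyzer free 80% of the time -> weight 1.25
for ch in [42, 42, 97]:           # digitized amplitudes addressing memory cells
    accumulate(spectrum, ch, w)
print(spectrum)  # → {42: 2.5, 97: 1.25}
```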

  2. A Method Validation for Determination of Gross Alpha and Gross Beta in Water Sample Using Low Background Gross Alpha/ Beta Counting System

    International Nuclear Information System (INIS)

    Zal Uyun Wan Mahmood; Norfaizal Mohamed; Nita Salina Abu Bakar

    2016-01-01

    Method validation (MV) for the measurement of gross alpha and gross beta activity in water (drinking, mineral and environmental) samples using the Low Background Gross Alpha/Beta Counting System was performed to characterize precision, accuracy and reliability. The main objective was to ensure that both the instrument and the method perform well and deliver accurate, reliable results. In general, the estimated RSD, z-score and U-score values were reliable, recorded as ≤30 %, less than 2, and less than 1.5, respectively. The Minimum Detectable Activity (MDA) was estimated for a counting time of 100 minutes and the present background counting rates of 0.01 - 0.35 cpm for gross alpha and 0.50 - 2.18 cpm for gross beta. The estimated Detection Limit (DL) was 0.1 Bq/L for gross alpha and 0.2 Bq/L for gross beta, and the expanded uncertainty was relatively small: 9.77 % for gross alpha and 10.57 % for gross beta. The sample volume was set at a minimum of 500 mL and a maximum of 2000 mL. These results show that the accuracy and precision obtained with the developed method are satisfactory, and the method is recommended for use. It can therefore be concluded that the MV raised no doubt about the capability of the developed method. The test results showed the method is suitable for all types of water samples containing several radionuclides and elements, as well as any impurities that interfere with the measurement of gross alpha and gross beta. (author)
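The record does not state the formula behind its MDA estimate. As a hedged illustration only, Currie's widely used expression can be evaluated with numbers of the same order as those quoted; the 30 % counting efficiency and 1 L sample volume below are assumptions, not values from the paper.

```python
import math

def currie_mda(bg_cpm, t_min, efficiency, volume_l):
    """Minimum detectable activity (Bq/L) via Currie's expression
    L_D = 2.71 + 4.65*sqrt(B), with B the background counts in time t."""
    b = bg_cpm * t_min                       # background counts accumulated
    ld_counts = 2.71 + 4.65 * math.sqrt(b)   # detection limit, in counts
    cps = ld_counts / (t_min * 60.0)         # equivalent net count rate, s^-1
    return cps / (efficiency * volume_l)     # activity concentration, Bq/L

# 100 min count, 0.35 cpm background (upper end of the quoted alpha range),
# assumed 30% efficiency and 1 L sample:
print(round(currie_mda(0.35, 100, 0.30, 1.0), 4))  # → 0.0168 Bq/L
```

The result is of the same order as the 0.1 Bq/L detection limit quoted, which also folds in chemistry and geometry factors not modeled here.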

  3. Computation of rectangular source integral by rational parameter polynomial method

    International Nuclear Information System (INIS)

    Prabha, Hem

    2001-01-01

    Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) have obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), is solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method the integral I(a,b) is expressed in the form of a polynomial of a rational parameter: whereas a function f(x) is generally expressed in terms of x, in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x as compared to the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with the results obtained by numerically integrating the integral. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results of H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively
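The substitution at the heart of the method can be demonstrated generically: fitting a polynomial in t = x/(1+x) rather than in x handles a saturating function far better over a wide range. The function arctan(x) below is a stand-in for illustration, not the paper's integrand.

```python
import numpy as np

x = np.linspace(0.0, 20.0, 400)
f = np.arctan(x)                  # stand-in for a slowly saturating function

t = x / (1.0 + x)                 # rational parameter, ranges over [0, 1)
p_x = np.polyfit(x, f, 6)         # degree-6 least-squares fit in x
p_t = np.polyfit(t, f, 6)         # degree-6 least-squares fit in t

err_x = float(np.max(np.abs(np.polyval(p_x, x) - f)))
err_t = float(np.max(np.abs(np.polyval(p_t, t) - f)))
print(err_t < err_x)  # the rational-parameter expansion is far more accurate
```

The reason is that t compresses the long flat tail of the function into a bounded interval, so a low-degree polynomial in t no longer has to chase both the steep rise near x = 0 and the plateau at large x.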

  4. An integration weighting method to evaluate extremum coordinates

    International Nuclear Information System (INIS)

    Ilyushchenko, V.I.

    1990-01-01

    The numerical version of the Laplace asymptotics has been used to evaluate the coordinates of extrema of multivariate continuous and discontinuous test functions. The computer experiments performed demonstrate the high efficiency of the proposed integration method. The saturating dependence of the extremum coordinates on parameters such as the number of integration subregions and the exponent K, which theoretically goes to infinity, has been studied in detail, the limitand being a ratio of two Laplace integrals with the integrand raised to the power K. The given method is an integral equivalent of the method of weighted means. As opposed to standard optimization methods of zeroth, first and second order, the proposed method can also be successfully applied to optimize discontinuous objective functions. The integration method can be applied in cases where conventional techniques fail due to poor analytical properties of the objective functions near extremal points. The proposed method is efficient in searching for both local and global extrema of multimodal objective functions. 12 refs.; 4 tabs
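The weighted-mean idea can be sketched in one dimension: the estimate x_K = ∫ x f(x)^K dx / ∫ f(x)^K dx converges to the location of the maximum of f as K grows. For the test function f(x) = x e^{-x} (maximum at x = 1) the ratio is analytically (K+1)/K, which makes the convergence easy to check; this toy function is chosen here for illustration and is not from the paper.

```python
import numpy as np

def f(x):
    # Unimodal test function with its maximum at x = 1.
    return x * np.exp(-x)

x = np.linspace(0.0, 30.0, 300001)
estimates = []
for K in (1, 10, 100):
    w = f(x) ** K                 # exponentiated integrand, Laplace-style
    estimates.append(float((x * w).sum() / w.sum()))   # discrete weighted mean
print([round(e, 3) for e in estimates])  # → [2.0, 1.1, 1.01]
```

Note that no derivative of f is used anywhere, which is why the same recipe still works when f is discontinuous or otherwise analytically ill-behaved near the extremum.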

  5. Japanese Society for Laboratory Hematology flow cytometric reference method of determining the differential leukocyte count: external quality assurance using fresh blood samples.

    Science.gov (United States)

    Kawai, Y; Nagai, Y; Ogawa, E; Kondo, H

    2017-04-01

    To provide target values for the manufacturers' survey of the Japanese Society for Laboratory Hematology (JSLH), accurate standard data from healthy volunteers were needed for the five-part differential leukocyte count. To obtain such data, JSLH required an antibody panel that achieved high specificity (particularly for mononuclear cells) using simple gating procedures. We developed a flow cytometric method for determining the differential leukocyte count (JSLH-Diff) and validated it by comparison with the flow cytometric differential leukocyte count of the International Council for Standardization in Haematology (ICSH-Diff) and the manual differential count obtained by microscopy (Manual-Diff). First, the reference laboratory performed an imprecision study of JSLH-Diff and ICSH-Diff, as well as a comparison among JSLH-Diff, Manual-Diff and ICSH-Diff. Then two reference laboratories and seven participating laboratories performed imprecision and accuracy studies of JSLH-Diff, Manual-Diff and ICSH-Diff. Simultaneously, six manufacturers' laboratories provided their own representative values using automated hematology analyzers. The precision of both the JSLH-Diff and ICSH-Diff methods was adequate. Comparison by the reference laboratory showed that all correlation coefficients, slopes and intercepts obtained by the JSLH-Diff, ICSH-Diff and Manual-Diff methods conformed to the criteria. When the imprecision and accuracy of JSLH-Diff were assessed at seven laboratories, the CV% for lymphocytes, neutrophils, monocytes, eosinophils and basophils was 0.5~0.9%, 0.3~0.7%, 1.7~2.6%, 3.0~7.9% and 3.8~10.4%, respectively. More than 99% of CD45-positive leukocytes were identified as normal leukocytes by JSLH-Diff. When the JSLH-Diff method was validated by comparison with Manual-Diff and ICSH-Diff, it showed good performance as a reference method. © 2016 John Wiley & Sons Ltd.

  6. Principles of correlation counting

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1975-01-01

    A review is given of the various applications which have been made of correlation techniques in the field of nuclear physics, in particular for absolute counting. Whereas in most cases the usual coincidence method will be preferable for its simplicity, correlation counting may be the only possible approach in cases where the two radiations of the cascade cannot be well separated or where there is a long-lived intermediate state. The measurement of half-lives and of count rates of spurious pulses is also briefly discussed. The various experimental situations lead to different ways in which the correlation method is best applied (covariance technique with one or with two detectors, application of correlation functions, etc.). Formulae are given for some simple model cases, neglecting dead-time corrections

  7. Higher-Order Integral Equation Methods in Computational Electromagnetics

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Meincke, Peter

    Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...

  8. Two pricing methods for solving an integrated commercial fishery ...

    African Journals Online (AJOL)

    In this paper, we develop two novel pricing methods for solving an integer program. We demonstrate the methods by solving an integrated commercial fishery planning model (IFPM). In this problem, a fishery manager must schedule fishing trawlers (determine when and where the trawlers should go fishing, and when the ...

  9. Method for integrating a train of fast, nanosecond wide pulses

    International Nuclear Information System (INIS)

    Rose, C.R.

    1987-01-01

    This paper describes a method used to integrate a train of fast, nanosecond-wide pulses. The pulses come from current transformers in an RF LINAC beamline. Because they are ac signals and have no dc component, true mathematical integration would yield zero over the pulse train period, or an equally erroneous value because of a dc baseline shift. The circuit used to integrate the pulse train first stretches the pulses to 35 ns FWHM. The signals are then fed into a high-speed, precision rectifier which restores a true dc baseline for the following stage - a fast, gated integrator. The rectifier is linear over 55 dB in excess of 25 MHz, and the gated integrator is linear over a 60 dB range with input pulse widths as short as 16 ns. The assembled system is linear over 30 dB with a 6 MHz input signal

  10. Calculation of total counting efficiency of a NaI(Tl) detector by hybrid Monte-Carlo method for point and disk sources

    Energy Technology Data Exchange (ETDEWEB)

    Yalcin, S. [Education Faculty, Kastamonu University, 37200 Kastamonu (Turkey)], E-mail: yalcin@gazi.edu.tr; Gurler, O.; Kaynak, G. [Department of Physics, Faculty of Arts and Sciences, Uludag University, Gorukle Campus, 16059 Bursa (Turkey); Gundogdu, O. [Department of Physics, School of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)

    2007-10-15

    This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector from point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques and the photon path lengths in the detector were determined by analytic equations depending on photon directions. This is called the hybrid Monte-Carlo method where analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, detector linear attenuation coefficient) into the algorithm developed, thus making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disc sources were compared with the previous work reported in the literature.
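A toy version of the hybrid approach for an on-axis point source and a cylindrical detector: the photon direction is sampled by Monte Carlo, while the chord through the crystal is obtained analytically (for an on-axis source every entering ray crosses the flat front face, so the geometry stays simple). The dimensions and attenuation coefficient below are illustrative, not values from the paper.

```python
import math, random

def total_efficiency(h, R, T, mu, n=200_000, seed=1):
    """Hybrid MC sketch: point source on the axis, a distance h from the
    flat face of a cylindrical detector (radius R, thickness T, linear
    attenuation coefficient mu). Returns the total counting efficiency."""
    rng = random.Random(seed)
    hit = 0.0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)        # isotropic emission direction
        if cos_t <= 0.0:
            continue                           # emitted away from the detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        if h * sin_t >= R * cos_t:             # ray misses the front face
            continue
        # depth at which the ray would leave through the curved side
        z_side = R * cos_t / sin_t - h if sin_t > 0.0 else float("inf")
        d = min(T, z_side) / cos_t             # analytic chord length
        hit += 1.0 - math.exp(-mu * d)         # interaction probability
    return hit / n

# Illustrative numbers: 7.62 cm x 7.62 cm NaI(Tl) crystal, mu ~ 0.2 cm^-1,
# source 5 cm from the face.
print(round(total_efficiency(5.0, 3.81, 7.62, 0.2), 3))  # a few per cent here
```

The inner loop contains no ray tracing at all, which is exactly the speed advantage the record describes: only the direction is random, the path length is closed-form.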

  11. Calculation of total counting efficiency of a NaI(Tl) detector by hybrid Monte-Carlo method for point and disk sources

    International Nuclear Information System (INIS)

    Yalcin, S.; Gurler, O.; Kaynak, G.; Gundogdu, O.

    2007-01-01

    This paper presents results on the total gamma counting efficiency of a NaI(Tl) detector from point and disk sources. The directions of photons emitted from the source were determined by Monte-Carlo techniques and the photon path lengths in the detector were determined by analytic equations depending on photon directions. This is called the hybrid Monte-Carlo method where analytical expressions are incorporated into the Monte-Carlo simulations. A major advantage of this technique is the short computation time compared to other techniques on similar computational platforms. Another advantage is the flexibility for inputting detector-related parameters (such as source-detector distance, detector radius, source radius, detector linear attenuation coefficient) into the algorithm developed, thus making it an easy and flexible method to apply to other detector systems and configurations. The results of the total counting efficiency model put forward for point and disc sources were compared with the previous work reported in the literature

  12. A study of compositional verification based IMA integration method

    Science.gov (United States)

    Huang, Hui; Zhang, Guoquan; Xu, Wanmeng

    2018-03-01

    The rapid development of avionics systems is driving the application of integrated modular avionics (IMA) systems. While IMA improves avionics system integration, it also increases the complexity of system test, so the method of IMA system testing needs to be simplified. An IMA system provides a module platform that runs multiple applications and shares processing resources. Compared with a federated avionics system, failures in an IMA system are difficult to isolate. IMA system verification therefore faces a critical problem: how to test resources shared by multiple applications. For a simple avionics system, traditional test methods can readily cover the whole system, but for a complex one it is hard to completely test a huge, integrated avionics system. This paper therefore proposes applying compositional-verification theory to IMA system test, reducing the number of test processes, improving efficiency, and consequently economizing on the costs of IMA system integration.

  13. INTEGRATED SENSOR EVALUATION CIRCUIT AND METHOD FOR OPERATING SAID CIRCUIT

    OpenAIRE

    Krüger, Jens; Gausa, Dominik

    2015-01-01

    WO15090426A1 Sensor evaluation device and method for operating said device Integrated sensor evaluation circuit for evaluating a sensor signal (14) received from a sensor (12), having a first connection (28a) for connection to the sensor and a second connection (28b) for connection to the sensor. The integrated sensor evaluation circuit comprises a configuration data memory (16) for storing configuration data which describe signal properties of a plurality of sensor control signals (26a-c). T...

  14. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  15. User's guide to Monte Carlo methods for evaluating path integrals

    Science.gov (United States)

    Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan

    2018-04-01

    We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
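A compact Metropolis sketch of the kind of lattice computation the article introduces, for the Euclidean harmonic oscillator (units m = ω = ħ = 1). Thermalization and thinning are handled crudely here, the parameters are illustrative, and the over-relaxation and error-analysis machinery discussed in the article is omitted.

```python
import math, random

def metropolis(n_sites=64, a=0.5, sweeps=4000, step=1.0, seed=2):
    """Metropolis sampling of harmonic-oscillator paths on a periodic
    lattice with spacing a; returns the Monte Carlo estimate of <x^2>."""
    rng = random.Random(seed)
    path = [0.0] * n_sites
    x2_samples = []

    def d_action(i, new):
        left, right = path[i - 1], path[(i + 1) % n_sites]
        def s(x):   # part of the lattice action that depends on site i
            return ((x - left) ** 2 + (right - x) ** 2) / (2 * a) + a * x * x / 2
        return s(new) - s(path[i])

    for sweep in range(sweeps):
        for i in range(n_sites):
            new = path[i] + rng.uniform(-step, step)
            dS = d_action(i, new)
            if dS <= 0 or rng.random() < math.exp(-dS):
                path[i] = new                    # Metropolis accept
        if sweep > 500 and sweep % 10 == 0:      # crude burn-in and thinning
            x2_samples.append(sum(x * x for x in path) / n_sites)
    return sum(x2_samples) / len(x2_samples)

print(round(metropolis(), 2))  # near the continuum value <x^2> = 0.5
```

The residual deviation from 0.5 is the discretization effect of the finite lattice spacing; the autocorrelation between successive sweeps is exactly the issue the over-relaxation technique in the article is meant to address.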

  16. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    Full Text Available A Wireless Integrated Information Network (WMN) consists of integrated information nodes that can gather data, such as images and voice, from their surroundings. Transmitting this information requires large resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.

  17. A channel-by-channel method of reducing the errors associated with peak area integration

    International Nuclear Information System (INIS)

    Luedeke, T.P.; Tripard, G.E.

    1996-01-01

    A new method of reducing the errors associated with peak area integration has been developed. This method utilizes the signal content of each channel as an estimate of the overall peak area. These individual estimates can then be weighted according to the precision with which each estimate is known, producing an overall area estimate. Experimental measurements were performed on a small peak sitting on a large background, and the results compared to those obtained from a commercial software program. Results showed a marked decrease in the spread of results around the true value (obtained by counting for a long period of time), and a reduction in the statistical uncertainty associated with the peak area. (orig.)
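The idea of combining per-channel estimates can be sketched with inverse-variance weighting: each channel i yields its own estimate of the full peak area, A_i = net_i / f_i, where f_i is the expected fraction of the peak falling in that channel, and the estimates are averaged with weights 1/var(A_i). This is an illustration of the principle, not the paper's exact estimator.

```python
def weighted_area(counts, background, fractions):
    """Combine per-channel area estimates A_i = (c_i - b_i)/f_i using
    inverse-variance weights (Poisson errors, independent background)."""
    num = den = 0.0
    for c, b, f in zip(counts, background, fractions):
        if f <= 0:
            continue
        a_i = (c - b) / f                 # this channel's estimate of the area
        var_i = (c + b) / (f * f)         # its Poisson variance
        num += a_i / var_i
        den += 1.0 / var_i
    return num / den, den ** -0.5         # weighted area and its 1-sigma error

# Synthetic 5-channel peak: true area 1000 on a flat background of 50/channel.
fractions = [0.05, 0.25, 0.40, 0.25, 0.05]
background = [50] * 5
counts = [b + 1000 * f for b, f in zip(background, fractions)]
area, sigma = weighted_area(counts, background, fractions)
print(round(area, 1), round(sigma, 1))  # → 1000.0 37.5
```

Channels where the peak fraction is tiny contribute almost nothing to the weighted sum, which is what suppresses the background-dominated scatter the paper reports.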

  18. Quantifying the sources of variability in equine faecal egg counts: implications for improving the utility of the method.

    Science.gov (United States)

    Denwood, M J; Love, S; Innocent, G T; Matthews, L; McKendrick, I J; Hillary, N; Smith, A; Reid, S W J

    2012-08-13

    The faecal egg count (FEC) is the most widely used means of quantifying the nematode burden of horses, and is frequently used in clinical practice to inform treatment and prevention. The statistical process underlying the FEC is complex, comprising a Poisson counting error process for each sample, compounded with an underlying continuous distribution of means between samples. Being able to quantify the sources of variability contributing to this distribution of means is a necessary step towards providing estimates of statistical power for future FEC and FECRT studies, and may help to improve the usefulness of the FEC technique by identifying and minimising unwanted sources of variability. Obtaining such estimates requires a hierarchical statistical model coupled with repeated FEC observations from a single animal over a short period of time. Here, we use this approach to provide the first comparative estimate of multiple sources of within-horse FEC variability. The results demonstrate that a substantial proportion of the observed variation in FEC between horses occurs as a result of variation in FEC within an animal, with the major sources being aggregation of eggs within faeces and variation in egg concentration between faecal piles. The McMaster procedure itself is associated with a comparatively small coefficient of variation, and is therefore highly repeatable when a sufficiently large number of eggs are observed to reduce the error associated with the counting process. We conclude that the variation between samples taken from the same animal is substantial, but can be reduced through the use of larger homogenised faecal samples. Estimates are provided for the coefficient of variation (cv) associated with each within-animal source of variability in observed FEC, allowing the usefulness of individual FEC to be quantified, and providing a basis for future FEC and FECRT studies. Copyright © 2012 Elsevier B.V. All rights reserved.
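The compound error structure described above is easy to simulate: a gamma distribution of true means between samples, plus Poisson counting error within each count. All parameters below are invented; the simulation simply shows how the two variance components add (observed cv² ≈ cv_between² + epg_per_egg/mean).

```python
import math, random, statistics

def poisson(rng, lam):
    # Knuth's method; adequate for the small means used here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_fec(n=20000, mean_epg=500.0, cv_between=0.5,
                 epg_per_egg=50.0, seed=3):
    """Hierarchical sketch (invented parameters): gamma-distributed true
    means between samples, Poisson counting within each; each egg seen on
    the slide represents `epg_per_egg` eggs per gram."""
    rng = random.Random(seed)
    shape = 1.0 / cv_between ** 2            # gamma cv = 1/sqrt(shape)
    scale = mean_epg / shape
    obs = []
    for _ in range(n):
        mu = rng.gammavariate(shape, scale)
        obs.append(poisson(rng, mu / epg_per_egg) * epg_per_egg)
    m = statistics.fmean(obs)
    return m, statistics.pstdev(obs) / m

m, cv_obs = simulate_fec()
cv_pred = (0.5 ** 2 + 50.0 / 500.0) ** 0.5   # between-sample + counting terms
print(round(cv_obs, 2), round(cv_pred, 2))   # the two should agree closely
```

The counting term 50/500 shows directly why larger homogenised samples help: increasing the number of eggs actually counted shrinks that component of the observed variability.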

  19. An improved method for {sup 85}Kr analysis by liquid scintillation counting and its application to atmospheric {sup 85}Kr determination

    Energy Technology Data Exchange (ETDEWEB)

    Momoshima, Noriyuki, E-mail: momoshima.noriyuki.551@m.kyushu-u.ac.j [Radioisotope Center, Kyushu University, 6-10-1 Hakozaki, Higashi-ku, Fukuoka 812-8581 (Japan); Inoue, Fumio [Graduate School of Science, Kyushu University, 6-10-1Hakozaki, Higashi-ku, Fukuoka 812-8581 (Japan); Sugihara, Shinji [Radioisotope Center, Kyushu University, 6-10-1 Hakozaki, Higashi-ku, Fukuoka 812-8581 (Japan); Shimada, Jun [Graduate School of Science and Technology, Kumamoto University, 2-39-1 Kurokami, Kumamoto 860-8555 (Japan); Taniguchi, Makoto [Research Institute for Humanity and Nature, 457-4 Motoyama Kamigamo, Kita-ku, Kyoto 603-8047 (Japan)

    2010-08-15

    Atmospheric {sup 85}Kr concentration at Fukuoka, Japan was determined by an improved {sup 85}Kr analytical method using liquid scintillation counting (LSC). An average value of 1.54 {+-} 0.05 Bq m{sup -3} was observed in 2008, which is about two times that measured in 1981 at Fukuoka, indicating a 29 mBq y{sup -1} rate of increase as an average over these 27 years. The analytical method developed involves collecting Kr from air using activated charcoal at liquid N{sub 2} temperature and purifying it using He at dry ice temperature, followed by Kr separation by gas chromatography. An overall Kr recovery of 76.4 {+-} 8.1% was achieved when Kr was analyzed in 500-1000 l of air. The Kr isolated by gas chromatography was collected on silica gel in a quartz glass vial cooled to liquid N{sub 2} temperature and the activity of {sup 85}Kr was measured with a low-background LS counter. The detection limit of {sup 85}Kr activity by the present analytical method is 0.0015 Bq at a 95% confidence level, including all propagation errors, which is equivalent to the {sup 85}Kr in 1.3 l of present-day air under the analytical conditions of 72.1% counting efficiency, 0.1597 cps background count rate, and 76.4% Kr recovery.
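Using the performance figures quoted in the record (72.1 % counting efficiency, 0.1597 cps background, 76.4 % Kr recovery), converting a gross count rate to an atmospheric concentration is simple arithmetic. The sample numbers in the usage line are made up for illustration.

```python
def kr85_concentration(gross_cps, air_volume_m3,
                       eff=0.721, bg_cps=0.1597, recovery=0.764):
    """85Kr concentration in air (Bq/m^3) from a gross LSC count rate:
    net decays per second in the vial, corrected for the fraction of
    the air sample's Kr that actually reached the vial."""
    net_bq = (gross_cps - bg_cps) / eff       # activity in the counting vial
    return net_bq / (recovery * air_volume_m3)

# E.g. a sample drawn from 0.75 m^3 of air, counted at 0.95 cps gross:
print(round(kr85_concentration(0.95, 0.75), 2))  # → 1.91
```

The result lands in the same range as the ~1.5 Bq m⁻³ values reported, which is a useful sanity check on the arithmetic.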

  20. Application of low background liquid scintillation counting method to pharmacy. Variation of endogenous 14C in human urine

    International Nuclear Information System (INIS)

    Horie, Masanobu; Yanagi, Mashiho; Baba, Shigeo; Kato, Yuka; Yoshimura, Tomoyuki

    2010-01-01

    The intra-day, inter-day and individual variations in the endogenous 14 C radioactivity of human urine were studied using 5 mL urine samples. The endogenous 14 C radioactivity of human urine is relatively constant (approximately 1.5 dpm/mL urine). In order to eliminate the effect of endogenous 40 K, it is of the greatest importance to count the 14 C signal with the optimal window. Since these variations are relatively small, the net 14 C activity can be estimated correctly from the BG (background) value of the same time zone of the day before dosing. (author)

  1. Categorical counting.

    Science.gov (United States)

    Fetterman, J Gregor; Killeen, P Richard

    2010-09-01

    Pigeons pecked on three keys, responses to one of which could be reinforced after a few pecks, to a second key after a somewhat larger number of pecks, and to a third key after the maximum pecking requirement. The values of the pecking requirements and the proportion of trials ending with reinforcement were varied. Transits among the keys were an orderly function of peck number, and showed approximately proportional changes with changes in the pecking requirements, consistent with Weber's law. Standard deviations of the switch points between successive keys increased more slowly within a condition than across conditions. Changes in reinforcement probability produced changes in the location of the psychometric functions that were consistent with models of timing. Analyses of the number of pecks emitted and the duration of the pecking sequences demonstrated that peck number was the primary determinant of choice, but that passage of time also played some role. We capture the basic results with a standard model of counting, which we qualify to account for the secondary experiments. Copyright 2010 Elsevier B.V. All rights reserved.

  2. An integral nodal variational method for multigroup criticality calculations

    International Nuclear Information System (INIS)

    Lewis, E.E.; Tsoulfanidis, N.

    2003-01-01

    An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)

  3. Integrative methods for analyzing big data in precision medicine.

    Science.gov (United States)

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Cultural adaptation and translation of measures: an integrated method.

    Science.gov (United States)

    Sidani, Souraya; Guruge, Sepali; Miranda, Joyal; Ford-Gilboe, Marilyn; Varcoe, Colleen

    2010-04-01

    Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.

  5. Detection and counting systems

    International Nuclear Information System (INIS)

    Abreu, M.A.N. de

    1976-01-01

    Detection devices based on gaseous ionization are analysed, such as electroscopes, ionization chambers, proportional counters and Geiger-Mueller counters. Scintillation methods are also commented on. A revision of the basic concepts in electronics is given and the main counting equipment is detailed. In the study of gamma spectrometry, scintillation and semiconductor detectors are analysed [pt

  6. Radiation intensity counting system

    International Nuclear Information System (INIS)

    Peterson, R.J.

    1982-01-01

    A method is described for excluding the natural dead time of the radiation detector (e.g. a Geiger-Mueller counter) in a ratemeter counting circuit, thus eliminating the need for dead-time corrections. Using a pulse generator, an artificial dead time is introduced which is longer than the natural dead time of the detector. (U.K.)
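The arithmetic behind the scheme: once a known, non-paralyzable dead time τ is imposed (longer than the detector's own), the relation between measured and true rates is exact and detector-independent, so the correction needs no detector characterization.

```python
def true_rate(measured_cps, tau_s):
    """Non-paralyzable dead-time relation: with an imposed dead time tau,
    the true count rate n follows from the measured rate m as
    n = m / (1 - m * tau)."""
    return measured_cps / (1.0 - measured_cps * tau_s)

# 9000 counts/s measured with an imposed 10 microsecond dead time:
print(round(true_rate(9000.0, 10e-6)))  # → 9890
```

With the artificial dead time known by construction, this single formula replaces the detector-specific corrections the record says are eliminated.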

  7. Effect of irradiation and post-irradiation storage of black pepper (Piper nigrum L.) on counts of hygienic indicator microorganisms using conventional analysis methods and PETRIFILM TM plates

    International Nuclear Information System (INIS)

    Jaimes, Marcial Ibo Silva

    1988-01-01

    Fifteen samples of ground black pepper (Piper nigrum L.) purchased in Sao Paulo local stores were submitted to irradiation at doses of 3, 6 and 10 kGy. All irradiated samples, including non-irradiated controls, were submitted to counts of yeasts and molds, aerobes (APC), coliforms and mesophilic aerobic spore formers (MASC), using conventional plate count methods and PETRIFILM TM plates. For the yeast and mold count, acidified potato dextrose agar (PDA) and PETRIFILM TM PFYM plates were used. For aerobes, plate count agar (PCA) and PETRIFILM TM PFAC plates were used. Violet red bile agar (VRBA) and PETRIFILM TM PFEC plates were employed for enumeration of coliforms. Counts of these groups of microorganisms obtained through the traditional plating procedures did not differ significantly from those using the corresponding PETRIFILM TM plates. In samples submitted to irradiation, a dose of 10 kGy decreased the yeast and mold count from 10^4-10^5 to less than 10 cfu/g. The same dose decreased the aerobic counts from 10^7-10^8 to 10^2-10^3 cfu/g, coliforms from 10^4-10^5 to less than 10 cfu/g, and MASC from 10^6-10^7 cfu/g to 10-10^2 cfu/g. The introduction of an injury repair step in the counting procedure resulted in a 32 to 89% increase in the number of coliforms; however, this additional step did not significantly improve the counts of MASC. After 270 days of storage of samples irradiated with 3 kGy, a decrease in the yeast and mold population from 10^3 to 20 cfu/g was observed. The APC population in these samples was reduced from 5.0x10^6 to 2.4x10^4 cfu/g; in those irradiated with 6 kGy the reduction was from 4.0x10^4 to 5.0x10^3 cfu/g, and in those irradiated with 10 kGy the counts were reduced from 30 to less than 10 cfu/g. After the same time of storage, the coliform population in non-irradiated samples decreased from 2.8x10^5 to 1.5x10^4 cfu/g, and from 9.1x10^3 to 20 cfu/g in those irradiated with 3 kGy. Similarly, the MASC

  8. Computerized radioautographic grain counting

    International Nuclear Information System (INIS)

    McKanna, J.A.; Casagrande, V.A.

    1985-01-01

    In recent years, radiolabeling techniques have become fundamental assays in physiology and biochemistry experiments. They have also assumed increasingly important roles in morphologic studies. Characteristically, radioautographic analysis of structure has been qualitative rather than quantitative; microcomputers, however, have opened the door to several methods for quantifying grain counts and densities. The overall goal of this chapter is to describe grain counting using the Bioquant, an image analysis package based originally on the Apple II+ and now available for several popular microcomputers. The authors discuss their image analysis procedures by applying them to a study of development in the central nervous system.

  9. Rainflow counting revisited

    Energy Technology Data Exchange (ETDEWEB)

    Soeker, H [Deutsches Windenergie-Institut (Germany)

    1996-09-01

    As a state-of-the-art method, the rainflow counting technique is at present applied everywhere in fatigue analysis. However, the author feels that the potential of the technique is not fully recognized in the wind energy industry, as it is used most of the time as a mere data reduction technique, disregarding some of the inherent information in the rainflow counting results. The ideas described in the following aim at exploiting this information and making it available for use in the design and verification process. (au)
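The counting step the abstract treats as data reduction can be made concrete. Below is a minimal three-point rainflow counter in the spirit of ASTM E1049-85; the function names and the test load history are illustrative, and residue reversals are closed out as half cycles:

```python
def rainflow(series):
    """Three-point rainflow cycle counting (ASTM E1049-85 style).

    Returns a list of (range, count) pairs, where count is 1.0 for a
    full cycle and 0.5 for a half cycle.
    """
    # 1. Reduce the load history to its reversal points (peaks and valleys).
    rev = []
    for v in series:
        if rev and v == rev[-1]:
            continue                      # drop repeated values
        if len(rev) >= 2 and (rev[-1] - rev[-2]) * (v - rev[-1]) > 0:
            rev[-1] = v                   # same slope direction: extend the excursion
        else:
            rev.append(v)

    # 2. Stack-based three-point counting.
    stack, cycles = [], []
    for r in rev:
        stack.append(r)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])    # most recent range
            y = abs(stack[-2] - stack[-3])    # previous range
            if x < y:
                break
            if len(stack) == 3:
                cycles.append((y, 0.5))       # range contains the start point
                stack.pop(0)
            else:
                cycles.append((y, 1.0))       # closed hysteresis loop
                del stack[-3:-1]
    # 3. Whatever remains on the stack counts as half cycles.
    cycles.extend((abs(b - a), 0.5) for a, b in zip(stack, stack[1:]))
    return cycles
```

On the classic ASTM example history [-2, 1, -3, 5, -1, 3, -4, 4, -2] this yields one full cycle of range 4 plus half cycles of ranges 3, 4, 6, 8, 8 and 9, which is the tabulated result.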

  10. The 3D Lagrangian Integral Method

    DEFF Research Database (Denmark)

    Rasmussen, Henrik Koblitz

    2003-01-01

    ... These are processes such as thermo-forming, gas-assisted injection moulding and all kinds of simultaneous multi-component polymer processing operations. Moreover, in all polymer processing operations free surfaces (or interfaces) are present, and the dynamics of these surfaces are of interest. In the "3D Lagrangian Integral Method" to simulate viscoelastic flow, the governing equations are solved for the particle positions (Lagrangian kinematics). Therefore, the transient motion of surfaces can be followed in a particularly simple fashion, even in 3D viscoelastic flow. The "3D Lagrangian Integral Method" is described...

  11. Alpha scintillation radon counting

    International Nuclear Information System (INIS)

    Lucas, H.F. Jr.

    1977-01-01

    Radon counting chambers which utilize the alpha-scintillation properties of silver activated zinc sulfide are simple to construct, have a high efficiency, and, with proper design, may be relatively insensitive to variations in the pressure or purity of the counter filling. Chambers which were constructed from glass, metal, or plastic in a wide variety of shapes and sizes were evaluated for the accuracy and the precision of the radon counting. The principles affecting the alpha-scintillation radon counting chamber design and an analytic system suitable for a large scale study of the 222Rn and 226Ra content of either air or other environmental samples are described. Particular note is taken of those factors which affect the accuracy and the precision of the method for monitoring radioactivity around uranium mines.

  12. [Corrected count].

    Science.gov (United States)

    1991-11-27

    The data of the 1991 census indicated that the population count of Brazil fell short of a former estimate by 3 million people. The population reached 150 million people with an annual increase of 2%, while projections in the previous decade expected an increase of 2.48% to 153 million people. This reduction indicates more widespread use of family planning (FP) and control of fertility among families of lower social status as more information is being provided to them. However, the Ministry of Health ordered an investigation of foreign family planning organizations because it was suspected that women were forced to undergo tubal ligation during vaccination campaigns. A strange alliance of left wing politicians and the Roman Catholic Church alleges a conspiracy of international FP organizations receiving foreign funds. The FP strategies of Bemfam and Pro-Pater offer women who have little alternative the opportunity to undergo tubal ligation or to receive oral contraceptives to control fertility. The ongoing government program of distributing booklets on FP is feeble and is not backed up by an education campaign. Charges of foreign interference are leveled while the government hypocritically ignores the grave problem of 4 million abortions a year. The population is expected to continue to grow until the year 2040 and then to stabilize at a low growth rate of 0.4%. In 1980, the number of children per woman was 4.4, whereas the 1991 census figures indicate this has dropped to 3.5. The excess population is associated with poverty and a forsaken caste in the interior. The population actually has decreased in the interior and in cities with 15,000 people. The phenomenon of the drop in fertility associated with rural exodus is contrasted with cities and villages where the population is 20% less than expected.

  13. Effect of the double-counting functional on the electronic and magnetic properties of half-metallic magnets using the GGA+U method

    International Nuclear Information System (INIS)

    Tsirogiannis, Christos; Galanakis, Iosif

    2015-01-01

    Methods based on the combination of the usual density functional theory (DFT) codes with Hubbard models are widely used to investigate the properties of strongly correlated materials. Using first-principles calculations, we study the electronic and magnetic properties of 20 half-metallic magnets, performing self-consistent GGA+U calculations with both the atomic-limit (AL) and around-mean-field (AMF) functionals for the double-counting term, which is used to subtract the correlation part from the DFT total energy, and compare these results to the usual generalized-gradient-approximation (GGA) calculations. Overall, the use of AMF produces results similar to the GGA calculations. The effect of AL, on the other hand, varies with the studied material. In general, the AL functional produces a stronger tendency towards magnetism, leading in some cases to unphysical electronic and magnetic properties. Thus the choice of an adequate double-counting functional is crucial for the results obtained using the GGA+U method. - Highlights: • Ab initio study of half-metallic magnets. • Role of electronic correlations. • Double-counting term. • Atomic-limit vs around-mean-field functionals

  14. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for the yield optimization of integrated circuits. The method integrates the joint probability density function directly over the acceptability region. To achieve this goal, the simulated performance data of unknown distribution are converted to follow a multivariate normal distribution by a Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameter estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function, are used to compare our method with Monte Carlo based methods, including Latin hypercube sampling and importance sampling, under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to the other methods with respect to accuracy and efficiency in all of the given cases. Therefore, our method is well suited for parametric yield optimization. (semiconductor integrated circuits)
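For orientation, the Latin-hypercube baseline that the authors compare against can be sketched in a few lines. This is the plain sampling estimator, not the paper's OA-MLHS/BCT method, and the two-parameter "performance" function and spec limit below are invented for illustration:

```python
import random

def latin_hypercube(n, dims, rng):
    """n samples in [0,1)^dims with exactly one point per stratum [i/n, (i+1)/n)
    in each dimension (a basic, unoptimized Latin hypercube)."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)                       # random stratum order per dimension
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

def estimate_yield(performance, accept, n, dims, rng):
    """Yield = fraction of sampled parameter points whose performance
    lands inside the acceptability region."""
    pts = latin_hypercube(n, dims, rng)
    return sum(accept(performance(p)) for p in pts) / n

# Illustrative "circuit": accept when the quadratic response is below 0.5.
rng = random.Random(0)
y = estimate_yield(lambda p: p[0] ** 2 + p[1] ** 2, lambda f: f < 0.5, 2000, 2, rng)
```

For this toy acceptability region the true yield is the quarter-disk area pi/8 ~ 0.393, so the estimate should land close to that value.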
  16. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimation of forces. It is now being further extended to the probabilistic domain. To assess the effect of uncertainty in system optimization and identification, the probabilistic sensitivity analysis of IFM was investigated in this study. A set of stochastic sensitivity analysis formulations of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite element and stochastic design sensitivity analysis are almost identical.

  17. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

    This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model, and allows numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results.
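The idea behind a convolution-based substructure is that a linear numerical model's response can be precomputed as an impulse response and then evaluated by discrete convolution, rather than by stepping an integrator during the test. A minimal sketch, using an invented single-degree-of-freedom substructure (the real method works with full impulse-response matrices and runs in hard real time):

```python
import math

def sdof_impulse_response(m, k, zeta, dt, n):
    """Unit-impulse response h(t) of a damped SDOF oscillator,
    sampled at n points with step dt (illustrative numerical substructure)."""
    wn = math.sqrt(k / m)                     # natural frequency
    wd = wn * math.sqrt(1.0 - zeta ** 2)      # damped frequency
    return [math.exp(-zeta * wn * i * dt) * math.sin(wd * i * dt) / (m * wd)
            for i in range(n)]

def convolve_response(h, f, dt):
    """Duhamel integral as a discrete convolution: y[i] = sum_k h[k] f[i-k] dt."""
    return [dt * sum(h[k] * f[i - k] for k in range(min(i + 1, len(h))))
            for i in range(len(f))]
```

The naive sum above costs O(n^2); practical real-time implementations restructure the convolution (e.g. recursively or blockwise) to keep the per-step cost bounded.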

  18. Integral methods in science and engineering theoretical and practical aspects

    CERN Document Server

    Constanda, C; Rollins, D

    2006-01-01

    Presents a series of analytic and numerical methods of solution constructed for important problems arising in science and engineering, based on the powerful operation of integration. This volume is meant for researchers and practitioners in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students.

  19. An approximation method for nonlinear integral equations of Hammerstein type

    International Nuclear Information System (INIS)

    Chidume, C.E.; Moore, C.

    1989-05-01

    The solution of a nonlinear integral equation of Hammerstein type in Hilbert spaces is approximated by means of a fixed point iteration method. Explicit error estimates are given and, in some cases, convergence is shown to be at least as fast as a geometric progression. (author). 25 refs
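The fixed-point scheme in the abstract can be illustrated on a discretized Hammerstein equation u(x) = f(x) + int_0^1 k(x,t) g(t, u(t)) dt. The kernel and nonlinearity below are invented and chosen small enough that the integral operator is a contraction, in which case the iteration converges geometrically, matching the abstract's claim:

```python
import math

def solve_hammerstein(f, k, g, n=101, tol=1e-12, max_iter=200):
    """Picard (fixed-point) iteration for u(x) = f(x) + int_0^1 k(x,t) g(t,u(t)) dt,
    discretized on n trapezoid nodes over [0, 1]."""
    xs = [i / (n - 1) for i in range(n)]
    w = [1.0 / (n - 1)] * n
    w[0] = w[-1] = 0.5 / (n - 1)              # trapezoid-rule weights
    u = [f(x) for x in xs]                    # initial guess u0 = f
    for it in range(max_iter):
        new = [f(x) + sum(wj * k(x, t) * g(t, uj)
                          for wj, t, uj in zip(w, xs, u))
               for x in xs]
        err = max(abs(a - b) for a, b in zip(new, u))
        u = new
        if err < tol:
            return xs, u, it + 1
    raise RuntimeError("fixed-point iteration did not converge")

# Example: u(x) = x + 0.2 * int_0^1 x*t*sin(u(t)) dt  (contraction factor < 0.2)
xs, u, iters = solve_hammerstein(lambda x: x,
                                 lambda x, t: 0.2 * x * t,
                                 lambda t, v: math.sin(v))
```

Because the error shrinks by a fixed factor per sweep, only a dozen or so iterations are needed to reach machine precision here.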

  20. The philosophy and method of integrative humanism and religious ...

    African Journals Online (AJOL)

    This paper, titled “Philosophy and Method of Integrative Humanism and Religious Crises in Nigeria: Picking the Essentials”, acknowledges the damaging effects of religious bigotry, fanaticism and creed differences on the social, political and economic development of the country. The need for the cessation of religious ...

  1. An Integrated Approach to Research Methods and Capstone

    Science.gov (United States)

    Postic, Robert; McCandless, Ray; Stewart, Beth

    2014-01-01

    In 1991, the AACU issued a report on improving undergraduate education suggesting, in part, that a curriculum should be both comprehensive and cohesive. Since 2008, we have systematically integrated our research methods course with our capstone course in an attempt to accomplish the twin goals of comprehensiveness and cohesion. By taking this…

  2. Confluent education: an integrative method for nursing (continuing) education.

    NARCIS (Netherlands)

    Francke, A.L.; Erkens, T.

    1994-01-01

    Confluent education is presented as a method to bridge the gap between cognitive and affective learning. Attention is focused on three main characteristics of confluent education: (a) the integration of four overlapping domains in a learning process (readiness, the cognitive domain, the affective

  3. On the solution of high order stable time integration methods

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Blaheta, Radim; Sysala, Stanislav; Ahmad, B.

    2013-01-01

    Roč. 108, č. 1 (2013), s. 1-22 ISSN 1687-2770 Institutional support: RVO:68145535 Keywords : evolution equations * preconditioners for quadratic matrix polynomials * a stiffly stable time integration method Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2013 http://www.boundaryvalueproblems.com/content/2013/1/108

  4. Educational integrating projects as a method of interactive learning

    Directory of Open Access Journals (Sweden)

    Иван Николаевич Куринин

    2013-12-01

    The article describes a method of interactive learning based on educational integrating projects. Some examples of the content of such projects for disciplines related to the study of information and Internet technologies and their application in management are presented.

  5. Integrating Expressive Methods in a Relational-Psychotherapy

    Directory of Open Access Journals (Sweden)

    Richard G. Erskine

    2011-06-01

    Therapeutic Involvement is an integral part of all effective psychotherapy. This article is written to illustrate the concept of Therapeutic Involvement in working within a therapeutic relationship (within the transference) and with active expressive and experiential methods to resolve traumatic experiences, relational disturbances and life-shaping decisions.

  6. The integral equation method applied to eddy currents

    International Nuclear Information System (INIS)

    Biddlecombe, C.S.; Collie, C.J.; Simkin, J.; Trowbridge, C.W.

    1976-04-01

    An algorithm for the numerical solution of eddy current problems is described, based on the direct solution of the integral equation for the potentials. In this method only the conducting and iron regions need to be divided into elements, and there are no boundary conditions. Results from two computer programs using this method for iron-free problems in various two-dimensional geometries are presented and compared with analytic solutions. (author)

  7. Differential white cell count by centrifugal microfluidics.

    Energy Technology Data Exchange (ETDEWEB)

    Sommer, Gregory Jon; Tentori, Augusto M.; Schaff, Ulrich Y.

    2010-07-01

    We present a method for counting white blood cells that is uniquely compatible with centrifugation-based microfluidics. Blood is deposited on top of one or more layers of density media within a microfluidic disk. Spinning the disk causes the cell populations within whole blood to settle through the media, reaching an equilibrium based on the density of each cell type. Separation and fluorescence measurement of cell types stained with a DNA dye is demonstrated using this technique. The integrated signal from bands of fluorescent microspheres is shown to be proportional to their initial concentration in suspension. Among the current generation of medical diagnostics are devices based on the principle of centrifuging a CD-sized disk functionalized with microfluidics. These portable 'lab on a disk' devices are capable of conducting multiple assays directly from a blood sample, embodied by platforms developed by Gyros, Samsung, and Abaxis. [1,2] However, no centrifugal platform to date includes a differential white blood cell count, an important metric complementary to diagnostic assays. Measuring the differential white blood cell count (the relative fractions of granulocytes, lymphocytes, and monocytes) is a standard medical diagnostic technique useful for identifying sepsis, leukemia, AIDS, radiation exposure, and a host of other conditions that affect the immune system. Several methods exist for measuring the relative white blood cell count, including flow cytometry, electrical impedance, and visual identification from a stained drop of blood under a microscope. However, none of these methods is easily incorporated into a centrifugal microfluidic diagnostic platform.

  8. Mixed methods in psychotherapy research: A review of method(ology) integration in psychotherapy science.

    Science.gov (United States)

    Bartholomew, Theodore T; Lockard, Allison J

    2018-06-13

    Mixed methods can foster depth and breadth in psychological research. However, their use remains in development in psychotherapy research. Our purpose was to review the use of mixed methods in psychotherapy research. Thirty-one studies were identified via the PRISMA systematic review method. Using Creswell & Plano Clark's typologies to identify design characteristics, we assessed each study for rigor and for how it used mixed methods. Key features of mixed methods designs were identified, along with these common patterns: (a) integration of clients' perceptions via mixing; (b) understanding group psychotherapy; (c) integrating methods with cases and small samples; (d) analyzing clinical data as qualitative data; and (e) exploring cultural identities in psychotherapy through mixed methods. The review is discussed with respect to the value of integrating multiple data in single studies to enhance psychotherapy research. © 2018 Wiley Periodicals, Inc.

  9. Dhage Iteration Method for Generalized Quadratic Functional Integral Equations

    Directory of Open Access Journals (Sweden)

    Bapurao C. Dhage

    2015-01-01

    In this paper we prove the existence as well as approximations of the solutions for a certain nonlinear generalized quadratic functional integral equation. An algorithm for the solutions is developed, and it is shown that the sequence of successive approximations starting at a lower or upper solution converges monotonically to the solutions of the related quadratic functional integral equation under some suitable mixed hybrid conditions. Our main result relies on the Dhage iteration method embodied in a recent hybrid fixed point theorem of Dhage (2014) in partially ordered normed linear spaces. An example is also provided to illustrate the abstract theory developed in the paper.

  10. Entropic sampling in the path integral Monte Carlo method

    International Nuclear Information System (INIS)

    Vorontsov-Velyaminov, P N; Lyubartsev, A P

    2003-01-01

    We have extended the entropic sampling Monte Carlo method to the case of the path-integral representation of a quantum system. A two-dimensional density of states is introduced into the path-integral form of the quantum canonical partition function. The entropic sampling technique within the algorithm suggested recently by Wang and Landau (Wang F and Landau D P 2001 Phys. Rev. Lett. 86 2050) is then applied to calculate the corresponding entropy distribution. A three-dimensional quantum oscillator is considered as an example. Canonical distributions for a wide range of temperatures are obtained in a single simulation run, and exact data for the energy are reproduced.
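The Wang-Landau entropic sampling step itself is easy to demonstrate on a toy discrete system. The sketch below estimates the density of states g(E) for the sum of two dice (not the paper's path-integral application); batch sizes, the flatness threshold and the stopping value of the modification factor f are illustrative choices:

```python
import math
import random

def wang_landau_dice(f_final=1e-4, flat=0.8, seed=0):
    """Wang-Landau estimate of ln g(E) for E = d1 + d2, two fair dice.

    The random walk is accepted with min(1, g(E_old)/g(E_new)); after each
    visit, ln g(E) is incremented by f.  When the visit histogram is flat,
    f is halved, until f < f_final."""
    rng = random.Random(seed)
    energies = range(2, 13)
    ln_g = {e: 0.0 for e in energies}
    hist = {e: 0 for e in energies}
    state = [1, 1]
    f, batches = 1.0, 0
    while f > f_final and batches < 500:
        batches += 1
        for _ in range(10000):
            i = rng.randrange(2)
            old = state[i]
            state[i] = rng.randint(1, 6)          # symmetric proposal
            e_new = sum(state)
            e_old = e_new - state[i] + old
            if math.log(rng.random() + 1e-300) > ln_g[e_old] - ln_g[e_new]:
                state[i] = old                    # reject the move
            e = sum(state)
            ln_g[e] += f                          # entropic update
            hist[e] += 1
        if min(hist.values()) > flat * (sum(hist.values()) / len(hist)):
            hist = {e: 0 for e in hist}           # flat enough: refine f
            f /= 2.0
    return ln_g
```

Since ln g is only determined up to an additive constant, results are read off as ratios; here g(7)/g(2) should come out close to the exact value 6.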

  11. Adobe photoshop quantification (PSQ) rather than point-counting: A rapid and precise method for quantifying rock textural data and porosities

    Science.gov (United States)

    Zhang, Xuefeng; Liu, Bo; Wang, Jieqiong; Zhang, Zhe; Shi, Kaibo; Wu, Shuanglin

    2014-08-01

    Commonly used petrological quantification methods are visual estimation, point-counting, and image analysis. However, in this article, an Adobe Photoshop-based analysis method (PSQ) is recommended for quantifying rock textural data and porosities. The Adobe Photoshop system provides versatile abilities for selecting an area of interest, and the pixel count of a selection can be read and used to calculate its area percentage. Therefore, Adobe Photoshop can be used to rapidly quantify textural components, such as the content of grains, cements, and porosities, including total porosities and porosities of different genetic types. This method was named Adobe Photoshop Quantification (PSQ). The workflow of the PSQ method is introduced using oolitic dolomite samples from the Triassic Feixianguan Formation, Northeastern Sichuan Basin, China, as an example. The method was tested by comparison with Folk's and Shvetsov's "standard" diagrams. In both cases, there is close agreement between the "standard" percentages and those determined by the PSQ method, with small counting and operator errors, small standard deviations and high confidence levels. The porosities quantified by PSQ were evaluated against those determined by the whole-rock helium gas expansion method to test the specimen errors. Results show that the porosities quantified by PSQ correlate well with the porosities determined by the conventional helium gas expansion method. The generally small discrepancies (mostly ranging from -3% to 3%) are caused by microporosities, which lead to a systematic underestimation of about 2%, and/or by macroporosities, causing underestimation or overestimation in different cases. Adobe Photoshop can be used to quantify rock textural components and porosities. The method has been tested to be precise and accurate, and it is time-saving compared with conventional methods.
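The pixel bookkeeping behind PSQ is simply an area ratio: selected pixels divided by total pixels. A schematic version on a binary mask (a stand-in for a Photoshop selection; the 4x5 "thin section" is invented):

```python
def area_percentage(mask):
    """Percentage of selected pixels (True) in a 2-D binary mask,
    mimicking reading a selection's pixel count from the histogram."""
    selected = sum(row.count(True) for row in mask)
    total = sum(len(row) for row in mask)
    return 100.0 * selected / total

# A tiny 'thin section' in which 6 of 20 pixels are pore space.
mask = [
    [True,  True,  False, False, False],
    [True,  True,  False, False, False],
    [False, False, False, True,  False],
    [False, False, False, True,  False],
]
porosity = area_percentage(mask)   # 6/20 of the pixels -> 30.0 %
```

The same ratio applies per textural component: one selection (mask) per grain type, cement, or pore-generation class.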

  12. High sensitive quench detection method using an integrated test wire

    International Nuclear Information System (INIS)

    Fevrier, A.; Tavergnier, J.P.; Nithart, H.; Kiblaire, M.; Duchateau, J.L.

    1981-01-01

    A highly sensitive quench detection method which works even in the presence of an external perturbing magnetic field is reported. The quench signal is obtained from the difference between the voltages at the superconducting winding terminals and at the terminals of a secondary winding strongly coupled to the primary. The secondary winding can consist of a "zero-current strand" of the superconducting cable not connected to one of the winding terminals, or of an integrated normal test wire inside the superconducting cable. Experimental results on quench detection obtained by this method are described. It is shown that the integrated test wire method leads to efficient and sensitive quench detection, especially in the presence of an external perturbing magnetic field.

  13. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  14. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
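The image-domain decomposition step solves, per voxel, a small linear system mapping basis-material concentrations to the measured attenuation in each energy bin. The paper uses a maximum a posteriori estimator; the sketch below shows only the simpler unregularized least-squares limit, and the 5-bin, 3-material basis matrix is invented for illustration:

```python
def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]                    # pivot row swap
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def decompose(basis, mu):
    """Least squares via normal equations: minimize ||basis @ x - mu||."""
    m, n = len(basis), len(basis[0])
    AtA = [[sum(basis[k][i] * basis[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(basis[k][i] * mu[k] for k in range(m)) for i in range(n)]
    return solve_linear(AtA, Atb)

# Invented per-bin attenuation (rows: 5 energy bins; columns: Gd, Ca, water).
basis = [[5.0, 1.2, 0.30],
         [3.1, 0.9, 0.25],
         [6.2, 0.7, 0.22],   # bin just above the Gd k-edge
         [4.0, 0.5, 0.20],
         [2.8, 0.4, 0.18]]
true_x = [0.06, 0.30, 0.90]                        # Gd, Ca, water fractions
mu = [sum(a * x for a, x in zip(row, true_x)) for row in basis]
x = decompose(basis, mu)                           # recovers true_x
```

The calibration question the paper studies enters through the basis matrix: the per-bin attenuation columns are fitted from phantom scans, so errors there propagate directly into x.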

  15. Standard test method for nondestructive assay of nuclear material in scrap and waste by passive-Active neutron counting using 252Cf shuffler

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers the nondestructive assay of scrap and waste items for U, Pu, or both, using a 252Cf shuffler. Shuffler measurements have been applied to a variety of matrix materials in containers of up to several 100 L. Corrections are made for the effects of matrix material. Applications of this test method include measurements for safeguards, accountability, TRU, and U waste segregation, disposal, and process control purposes (1, 2, 3). 1.1.1 This test method uses passive neutron coincidence counting (4) to measure the 240Pu-effective mass. It has been used to assay items with total Pu contents between 0.03 g and 1000 g. It could be used to measure other spontaneously fissioning isotopes such as Cm and Cf. It specifically describes the approach used with shift register electronics; however, it can be adapted to other electronics. 1.1.2 This test method uses neutron irradiation with a moveable Cf source and counting of the delayed neutrons from the induced fissions to measure the 235U equiva...
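The 240Pu-effective mass used by this test method has a standard definition in passive neutron assay: the even Pu isotope masses are weighted by their spontaneous-fission neutron yields relative to 240Pu. A small helper using the commonly tabulated weighting coefficients (2.52 and 1.68, taken from the NDA literature; treat the example masses as assumptions):

```python
def pu240_effective(m238, m240, m242):
    """240Pu-effective mass in grams: even-isotope masses weighted by their
    spontaneous-fission neutron emission relative to 240Pu.

    m240_eff = 2.52 * m(238Pu) + m(240Pu) + 1.68 * m(242Pu)
    """
    return 2.52 * m238 + m240 + 1.68 * m242

# Example: 100 g Pu at 1% 238Pu, 6% 240Pu, 0.1% 242Pu by mass.
m_eff = pu240_effective(1.0, 6.0, 0.1)   # 2.52 + 6.0 + 0.168 g
```

Coincidence counting measures this single effective quantity; converting it to total Pu mass then requires the isotopic composition from a separate measurement.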

  16. Identification of Lactobacillus delbrueckii and Streptococcus thermophilus Strains Present in Artisanal Raw Cow Milk Cheese Using Real-time PCR and Classic Plate Count Methods.

    Science.gov (United States)

    Stachelska, Milena A

    2017-12-04

    The aim of this paper was to detect Lactobacillus delbrueckii and Streptococcus thermophilus using a real-time quantitative PCR assay in 7-day-ripened cheese produced from unpasteurised milk. Real-time quantitative PCR assays were designed to identify and enumerate the chosen species of lactic acid bacteria (LAB) in ripened cheese. The results of molecular quantification and classic bacterial enumeration showed a high level of similarity, proving that DNA extraction was carried out properly and that the genomic DNA solutions were free of PCR inhibitors. These methods revealed the presence of L. delbrueckii and S. thermophilus. The real-time PCR enabled quantification with a detection limit of 10^1-10^3 CFU/g of product. qPCR standard curves were linear over seven log units, down to 10^1 copies per reaction; efficiencies ranged from 77.9% to 93.6%. Cheese samples were analysed with the plate count method and qPCR in parallel. Compared with the classic plate count method, the newly developed qPCR method provided faster, species-specific identification of the two dairy LAB and yielded comparable quantitative results.
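The efficiencies quoted above follow from the slope of the log-linear standard curve. The derivation can be sketched as follows (the slope and intercept values are illustrative, not taken from the paper):

```python
import math

def amplification_efficiency(slope):
    """PCR efficiency from the slope of a Cq vs log10(copies) standard curve:
    E = 10**(-1/slope) - 1.  Perfect doubling per cycle gives a slope of
    about -3.32 and E = 1 (i.e. 100%)."""
    return 10.0 ** (-1.0 / slope) - 1.0

def copies_from_cq(cq, slope, intercept):
    """Invert the standard curve Cq = slope * log10(copies) + intercept."""
    return 10.0 ** ((cq - intercept) / slope)
```

For example, a fitted slope of about -3.49 corresponds to the 93.6% efficiency reported in the abstract, while shallower slopes yield the lower values.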

  17. Nuclear methods - an integral part of the NBS certification program

    International Nuclear Information System (INIS)

    Gills, T.E.

    1984-01-01

    Within the past twenty years, new techniques and methods have emerged in response to new technologies that depend on the performance of high-purity, well-characterized materials. The National Bureau of Standards, through its Standard Reference Materials (SRM) Program, provides standards in the form of many of these materials to ensure the accuracy and compatibility of measurements throughout the US and the world. These standards, defined by the National Bureau of Standards as Standard Reference Materials (SRMs), are developed using state-of-the-art methods and procedures for both preparation and analysis. Nuclear methods, in particular activation analysis, constitute an integral part of that analysis process.

  18. Study of the radioactivity of drinking water in the city of Antsirabe using the method of liquid scintillation counting

    International Nuclear Information System (INIS)

    RAKOTOMANGA, H.

    2003-01-01

    The objective of this work is to determine the radioactivity by liquid scintillation counting (LSC) and to study the distribution of radionuclides in drinking water from the Antsirabe region. Optiphase HiSafe 3™ is used to measure gross alpha-beta radioactivity. Radium, radon and the excess of short half-life radionuclides are obtained with a Maxilight™ cocktail. Measurements of the drinking water samples show that gross alpha-beta activities are between (120±27) Bq·l⁻¹ and (426±47) Bq·l⁻¹, radium activities range from (14±2) Bq·l⁻¹ to (78±5) Bq·l⁻¹, radon activities range from (0.6±0.2) Bq·l⁻¹ to (60±6) Bq·l⁻¹, and short half-life radionuclide activities are between (25±8) Bq·l⁻¹ and (270±27) Bq·l⁻¹. The annual dose from ingestion of radium-226 is between (2.09±0.32) mSv and (11.03±0.73) mSv. These values are close to the Malagasy standard (5 mSv). The results show that drinking waters from the Antsirabe region contain natural radionuclides. Dose exposure increases if the water is ingested directly after collection.

  19. Direct integration multiple collision integral transport analysis method for high energy fusion neutronics

    International Nuclear Information System (INIS)

    Koch, K.R.

    1985-01-01

    A new analysis method specially suited for the inherent difficulties of fusion neutronics was developed to provide detailed studies of the fusion neutron transport physics. These studies should provide a better understanding of the limitations and accuracies of typical fusion neutronics calculations. The new analysis method is based on the direct integration of the integral form of the neutron transport equation and employs a continuous energy formulation with the exact treatment of the energy angle kinematics of the scattering process. In addition, the overall solution is analyzed in terms of uncollided, once-collided, and multi-collided solution components based on a multiple collision treatment. Furthermore, the numerical evaluations of integrals use quadrature schemes that are based on the actual dependencies exhibited in the integrands. The new DITRAN computer code was developed on the Cyber 205 vector supercomputer to implement this direct integration multiple-collision fusion neutronics analysis. Three representative fusion reactor models were devised and the solutions to these problems were studied to provide suitable choices for the numerical quadrature orders as well as the discretized solution grid and to understand the limitations of the new analysis method. As further verification and as a first step in assessing the accuracy of existing fusion-neutronics calculations, solutions obtained using the new analysis method were compared to typical multigroup discrete ordinates calculations

  20. Precise material identification method based on a photon counting technique with correction of the beam hardening effect in X-ray spectra

    International Nuclear Information System (INIS)

    Kimoto, Natsumi; Hayashi, Hiroaki; Asahara, Takashi; Mihara, Yoshiki; Kanazawa, Yuki; Yamakawa, Tsutomu; Yamamoto, Shuichiro; Yamasaki, Masashi; Okada, Masahiro

    2017-01-01

    The aim of our study is to develop a novel material identification method based on a photon counting technique, in which the incident and penetrating X-ray spectra are analyzed. Dividing a 40 kV X-ray spectrum into two energy regions, the corresponding linear attenuation coefficients are derived. We can identify the materials precisely using the relationship between atomic number and linear attenuation coefficient through correction of the beam hardening effect of the X-ray spectra. - Highlights: • We propose a precise material identification method for use in a photon counting system. • Beam hardening correction is important, even when the analysis is applied to narrow energy regions of the X-ray spectrum. • Experiments using a single probe-type CdTe detector were performed, and a Monte Carlo simulation was also carried out. • We describe the applicability of our method to clinical diagnostic X-ray imaging in the near future.
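
The per-window attenuation analysis rests on the Beer-Lambert law. A minimal sketch with hypothetical photon counts and a 1 cm object thickness (none of these numbers come from the record):

```python
import math

def linear_attenuation(i0_counts, i_counts, thickness_cm):
    """Beer-Lambert: I = I0 * exp(-mu * t)  =>  mu = ln(I0/I) / t  [1/cm]."""
    return math.log(i0_counts / i_counts) / thickness_cm

# Hypothetical counts in a low- and a high-energy window of a 40 kV spectrum.
mu_low = linear_attenuation(1.0e5, 2.0e4, 1.0)   # low-energy window, strongly attenuated
mu_high = linear_attenuation(1.0e5, 5.0e4, 1.0)  # high-energy window, less attenuated
```

The pair (mu_low, mu_high) is what a two-window analysis would relate to atomic number; correcting the windows for beam hardening, as the abstract stresses, changes the counts that enter this formula.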

  1. Serum bactericidal assay for the evaluation of typhoid vaccine using a semi-automated colony-counting method.

    Science.gov (United States)

    Jang, Mi Seon; Sahastrabuddhe, Sushant; Yun, Cheol-Heui; Han, Seung Hyun; Yang, Jae Seung

    2016-08-01

    Typhoid fever, mainly caused by Salmonella enterica serovar Typhi (S. Typhi), is a life-threatening disease, mostly in developing countries. Enzyme-linked immunosorbent assay (ELISA) is widely used to quantify antibodies against S. Typhi in serum but does not provide information about functional antibody titers. Although the serum bactericidal assay (SBA) using an agar plate is often used to measure functional antibody titers against various bacterial pathogens in clinical specimens, it has rarely been used for typhoid vaccines because it is time-consuming and labor-intensive. In the present study, we established an improved SBA against S. Typhi using a semi-automated colony-counting system with a square agar plate harboring 24 samples. The semi-automated SBA efficiently measured bactericidal titers of sera from individuals immunized with S. Typhi Vi polysaccharide vaccines. The assay responded specifically to S. Typhi Ty2 but not to other irrelevant enteric bacteria, including Vibrio cholerae and Shigella flexneri. Baby rabbit complement was a more appropriate source for the SBA against S. Typhi than complements from adult rabbit, guinea pig, and human. We also examined the correlation between SBA and ELISA for measuring antibody responses against S. Typhi using pre- and post-vaccination sera from 18 human volunteers. The SBA titer showed a good correlation with anti-Vi IgG quantity in the serum (Spearman correlation coefficient 0.737), suggesting that the semi-automated SBA can measure functional antibody titers against S. Typhi in sera from human subjects immunized with typhoid vaccines.

  2. An Integrated Method of Supply Chains Vulnerability Assessment

    Directory of Open Access Journals (Sweden)

    Jiaguo Liu

    2016-01-01

    Supply chain vulnerability identification and evaluation are extremely important to mitigate the supply chain risk. We present an integrated method to assess the supply chain vulnerability. The potential failure mode of the supply chain vulnerability is analyzed through the SCOR model. Combining the fuzzy theory and the gray theory, the correlation degree of each vulnerability indicator can be calculated and the target improvements can be carried out. In order to verify the effectiveness of the proposed method, we use Kendall’s tau coefficient to measure the effect of different methods. The result shows that the presented method has the highest consistency in the assessment compared with the other two methods.
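
Kendall's tau, used above to compare the consistency of assessment methods, can be computed directly from two score lists. A minimal sketch (no tie handling; the supplier rankings are hypothetical):

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation between two equally long score lists."""
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(a) * (len(a) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical vulnerability rankings of five suppliers by two assessment methods.
tau_same = kendall_tau([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])       # identical rankings
tau_reversed = kendall_tau([1, 2, 3, 4, 5], [5, 4, 3, 2, 1])   # opposite rankings
```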

  3. Field Method for Integrating the First Order Differential Equation

    Institute of Scientific and Technical Information of China (English)

    JIA Li-qun; ZHENG Shi-wang; ZHANG Yao-yu

    2007-01-01

    An important modern method in analytical mechanics for finding integrals, called the field method, is used to study the solution of a first-order differential equation. First, by introducing an intermediate variable, a more complicated first-order differential equation can be expressed by two simple first-order differential equations; the field method of analytical mechanics is then introduced to solve the two first-order equations. The conclusion shows that the field method of analytical mechanics can be fully used to find the solutions of a first-order differential equation, thus providing a new method for solving first-order differential equations.

  4. Computing thermal Wigner densities with the phase integration method

    International Nuclear Information System (INIS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-01-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems

  5. Computing thermal Wigner densities with the phase integration method.

    Science.gov (United States)

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  6. Methods for Developing Emissions Scenarios for Integrated Assessment Models

    Energy Technology Data Exchange (ETDEWEB)

    Prinn, Ronald [MIT; Webster, Mort [MIT

    2007-08-20

    The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.

  7. Methods in Entrepreneurship Education Research: A Review and Integrative Framework

    DEFF Research Database (Denmark)

    Blenker, Per; Trolle Elmholdt, Stine; Frederiksen, Signe Hedeboe

    2014-01-01

    is fragmented both conceptually and methodologically. Findings suggest that the methods applied in entrepreneurship education research cluster in two groups: 1. quantitative studies of the extent and effect of entrepreneurship education, and 2. qualitative single case studies of different courses and programmes....... It integrates qualitative and quantitative techniques, the use of research teams consisting of insiders (teachers studying their own teaching) and outsiders (research collaborators studying the education) as well as multiple types of data. To gain both in-depth and analytically generalizable studies...... a variety of helpful methods, explore the potential relation between insiders and outsiders in the research process, and discuss how different types of data can be combined. The integrated framework urges researchers to extend investments in methodological efforts and to enhance the in-depth understanding...

  8. System integrational and migrational concepts and methods within healthcare

    DEFF Research Database (Denmark)

    Endsleff, F; Loubjerg, P

    1997-01-01

    In this paper an overview and comparison of the basic concepts and methods behind different system integrational implementations is given, including the DHE, which is based on the coming Healthcare Information Systems Architecture pre-standard HISA, developed by CEN TC251. This standard and the DHE...... (Distributed Healthcare Environment) not only provides highly relevant standards, but also provides an efficient and well structured platform for Healthcare IT Systems....

  9. A geometrical method towards first integrals for dynamical systems

    International Nuclear Information System (INIS)

    Labrunie, S.; Conte, R.

    1996-01-01

    We develop a method, based on Darboux's and Liouville's works, to find first integrals and/or invariant manifolds for a physically relevant class of dynamical systems, without making any assumption on these elements' forms. We apply it to three dynamical systems: Lotka-Volterra, Lorenz and Rikitake. © 1996 American Institute of Physics
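
The notion of a first integral can be illustrated on the Lotka-Volterra system named above: the classic integral V(x, y) = d·x - g·ln x + b·y - a·ln y is constant along trajectories, which a short numerical check confirms (the parameter values below are arbitrary, chosen only for illustration):

```python
import math

# Classic Lotka-Volterra: x' = a*x - b*x*y,  y' = d*x*y - g*y.
a, b, d, g = 1.0, 0.5, 0.2, 0.8

def rhs(x, y):
    return a * x - b * x * y, d * x * y - g * y

def V(x, y):
    """Known first integral of the system; dV/dt = 0 along solutions."""
    return d * x - g * math.log(x) + b * y - a * math.log(y)

def rk4_step(x, y, h):
    k1 = rhs(x, y)
    k2 = rhs(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = rhs(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

x, y = 2.0, 1.0
v0 = V(x, y)
for _ in range(2000):          # integrate to t = 10 with RK4
    x, y = rk4_step(x, y, 0.005)
drift = abs(V(x, y) - v0)      # should be tiny: V is conserved
```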

  10. Towards risk-based structural integrity methods for PWRs

    International Nuclear Information System (INIS)

    Chapman, O.J.V.; Lloyd, R.B.

    1992-01-01

    This paper describes the development of risk-based structural integrity assurance methods and their application to Pressurized Water Reactor (PWR) plant. In-service inspection is introduced as a way of reducing the failure probability of high risk sites and the latter are identified using reliability analysis; the extent and interval of inspection can also be optimized. The methodology is illustrated by reference to the aspect of reliability of weldments in PWR systems. (author)

  11. INTEGRATED APPLICATION OF OPTICAL DIAGNOSTIC METHODS IN ULCERATIVE COLITIS

    Directory of Open Access Journals (Sweden)

    E. V. Velikanov

    2013-01-01

    Our results suggest that the combined use of optical coherence tomography (OCT) and fluorescence diagnosis helps to refine the nature and boundaries of the pathological process in the colonic tissue in ulcerative colitis. Studies have shown that integrated optical diagnostics allows lesions to be differentiated in agreement with histology and supports the decision on whether a biopsy is needed and where it should be taken. This method is most appropriate in cases that are difficult to diagnose.

  12. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual load-bearing situation. The direct integration method, which starts from the definition in reliability theory, is easy to understand, but there are still mathematical difficulties in the calculation of the multiple integrals. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. The dual neural network consists of two neural networks: network A is used to learn the integrand function, and network B is used to simulate the original (antiderivative) function. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
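
The direct-integration starting point can be illustrated on the simplest case, a linear performance function g = R - S with independent normal resistance R and load S, where the failure probability has a closed form to compare against (all numbers below are hypothetical, and this is the textbook special case, not the paper's neural-network scheme):

```python
import math

def phi(z):   # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# R ~ N(mu_r, sd_r) resistance, S ~ N(mu_s, sd_s) load, independent.
mu_r, sd_r, mu_s, sd_s = 10.0, 1.0, 7.0, 1.5

# Direct integration of the definition: Pf = P(R < S) = ∫ f_S(s) F_R(s) ds,
# evaluated with the trapezoidal rule on a wide grid.
n, lo, hi = 20000, 0.0, 20.0
h = (hi - lo) / n
pf_direct = 0.0
for k in range(n + 1):
    s = lo + k * h
    w = 0.5 if k in (0, n) else 1.0
    pf_direct += w * (phi((s - mu_s) / sd_s) / sd_s) * Phi((s - mu_r) / sd_r)
pf_direct *= h

# Closed form for the linear-normal case: Pf = Phi(-beta).
beta = (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)
pf_exact = Phi(-beta)
```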

  13. Evaluation of the filtered leapfrog-trapezoidal time integration method

    International Nuclear Information System (INIS)

    Roache, P.J.; Dietrich, D.E.

    1988-01-01

    An analysis and evaluation are presented for a new method of time integration for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed.
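
As a rough illustration of the ingredients (not Dietrich's exact FLT/FLTW formulation), the sketch below advances the 1D constant-coefficient advection equation with leapfrog steps plus an occasional explicit trapezoidal (Heun) step, which damps the leapfrog computational mode and re-anchors the two time levels:

```python
import math

# u_t + c u_x = 0 on a periodic domain, central differences in space.
N, c, L = 200, 1.0, 1.0
dx = L / N
dt = 0.5 * dx / c                       # CFL number 0.5
steps = round(L / (c * dt))             # one full advection period

x = [i * dx for i in range(N)]
u_prev = [math.sin(2 * math.pi * xi) for xi in x]             # u at t = 0
u_curr = [math.sin(2 * math.pi * (xi - c * dt)) for xi in x]  # exact u at t = dt

def ddx(u):
    return [(u[(j + 1) % N] - u[(j - 1) % N]) / (2 * dx) for j in range(N)]

for n in range(1, steps):
    if n % 20 == 0:
        # Occasional explicit trapezoidal (Heun) step as a filter.
        d1 = ddx(u_curr)
        u_star = [u_curr[j] - c * dt * d1[j] for j in range(N)]
        d2 = ddx(u_star)
        u_next = [u_curr[j] - 0.5 * c * dt * (d1[j] + d2[j]) for j in range(N)]
    else:
        # Plain leapfrog step.
        dcu = ddx(u_curr)
        u_next = [u_prev[j] - 2 * c * dt * dcu[j] for j in range(N)]
    u_prev, u_curr = u_curr, u_next

# After one period the wave should be back where it started, up to phase error.
err = max(abs(u_curr[j] - math.sin(2 * math.pi * x[j])) for j in range(N))
```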

  14. Investigation of Optimal Integrated Circuit Raster Image Vectorization Method

    Directory of Open Access Journals (Sweden)

    Leonas Jasevičius

    2011-03-01

    Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various line-extraction algorithms for raster images and their properties are discussed. An optimal raster image vectorization method was developed which allows common vectorization algorithms to achieve the best possible match between the extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the initial raster image skeleton filter selection was assessed. (Article in Lithuanian)

  15. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    Science.gov (United States)

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose geometric mean of pairwise ratios, a simple but effective normalization method, for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
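
A simplified sketch of the pairwise-ratio idea follows. This is an illustration only, not the reference GMPR implementation: here the geometric mean runs over all sample pairs (the self-pair contributing a neutral ratio of 1), and the published method includes additional safeguards such as minimum shared-taxon requirements:

```python
import math
from statistics import median

def gmpr_size_factors(counts):
    """counts: one list of taxon counts per sample (equal lengths).
    Size factor of sample i = geometric mean over samples j of the median
    count ratio, the median taken only over taxa observed in both samples,
    so zeros never enter a ratio."""
    n = len(counts)
    factors = []
    for i in range(n):
        logs = []
        for j in range(n):
            shared = [counts[i][k] / counts[j][k]
                      for k in range(len(counts[i]))
                      if counts[i][k] > 0 and counts[j][k] > 0]
            if shared:
                logs.append(math.log(median(shared)))
        factors.append(math.exp(sum(logs) / len(logs)))
    return factors

# Toy data: sample b is sample a sequenced at twice the depth; zeros differ.
a = [10, 0, 4, 8, 0, 6]
b = [20, 2, 8, 16, 4, 12]
fa, fb = gmpr_size_factors([a, b])
# Dividing each sample by its factor puts the shared taxa on a common scale.
```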

  16. Early-Season Stand Count Determination in Corn via Integration of Imagery from Unmanned Aerial Systems (UAS) and Supervised Learning Techniques

    Directory of Open Access Journals (Sweden)

    Sebastian Varela

    2018-02-01

    Corn (Zea mays L.) is one of the most sensitive crops to planting pattern and early-season uniformity. The most common method to determine the number of plants is visual inspection on the ground, but this field activity is time-consuming, labor-intensive, biased, and may lead to less profitable decisions by farmers. The objective of this study was to develop a reliable, timely, and unbiased method for counting corn plants based on ultra-high-resolution imagery acquired from unmanned aerial systems (UAS) to automatically scout fields under real field conditions. A ground sampling distance of 2.4 mm was targeted to extract information on a per-plant basis. First, an excess greenness (ExG) index was used to separate green pixels from the background, then rows and inter-row contours were identified and extracted. A scalable training procedure was implemented using geometric descriptors as inputs of the classifier. Second, a decision tree was implemented and tested using two training modes at each site to expose the workflow to different ground conditions at the time of the aerial data acquisition. Differences in performance were due to training modes and spatial resolutions at the two sites. For the object classification task, an overall accuracy of 0.96, based on the proportion of correct assessments of corn and non-corn objects, was obtained for local (per-site) classification, and an accuracy of 0.93 was obtained for the combined training modes. For successful model implementation, plants should have between two and three leaves when images are collected (avoiding overlap between plants). The best workflow performance was reached at 2.4 mm resolution, corresponding to a flight altitude of 10 m (the lowest altitude tested); higher altitudes were gradually penalized. The latter was coincident with the larger number of detected green objects in the images and the effectiveness of geometry as a descriptor for corn plant detection.
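
The ExG segmentation step is straightforward to sketch: on chromaticity-normalized channels ExG = 2g - r - b reduces to 3g - 1, so a fixed threshold separates green vegetation pixels from soil (the pixel values and the 0.1 threshold below are illustrative, not the study's):

```python
def excess_greenness(r, g, b):
    """ExG = 2g - r - b on chromaticity-normalized RGB channels."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

# A green (vegetation-like) pixel vs a brown (soil-like) pixel, 8-bit RGB.
veg = excess_greenness(60, 180, 50)
soil = excess_greenness(120, 100, 80)
is_veg = veg > 0.1   # simple fixed threshold; real workflows often use Otsu's method
```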

  17. Determination of trace elements in scallop and fish otolith by instrumental neutron activation analysis using anti-coincidence and coincidence counting methods

    International Nuclear Information System (INIS)

    Suzuki, Shogo; Okada, Yukiko; Hirai, Shoji

    2005-01-01

    Trace element concentrations in a scallop reference material and a fish otolith certified reference material prepared at the National Institute for Environmental Studies (NIES) of Japan were determined by instrumental neutron activation analysis (INAA). Nine aliquots of the scallop sample (ca. 252∼507 mg), five aliquots of the fish otolith sample (ca. 502∼988 mg) and comparative standards were irradiated for a short time (10 s) at a thermal neutron flux of 1.5 × 10¹² n cm⁻² s⁻¹ (pneumatic transfer) and for a long time (6 h) at a thermal neutron flux of 3.7 × 10¹² n cm⁻² s⁻¹ (central thimble) in the Rikkyo University Research Reactor (100 kW). The irradiated samples were measured by conventional γ-ray spectrometry using a coaxial Ge detector, and by anti-coincidence and coincidence γ-ray spectrometry with a coaxial Ge detector and a well-type NaI(Tl) detector, to determine as many trace elements as possible with high sensitivity. The concentrations of 34 elements in the NIES No.15 scallop reference material and 16 elements in the NIES No.22 fish otolith CRM were determined. Using the coincidence counting method to determine Se, Ba and Hf, the lower limit of determination was improved by a factor of 2 compared with the conventional counting method. (author)

  18. Platelet counting using the Coulter electronic counter.

    Science.gov (United States)

    Eggleton, M J; Sharp, A A

    1963-03-01

    A method for counting platelets in dilutions of platelet-rich plasma using the Coulter electronic counter is described. The results obtained show that such platelet counts are at least as accurate as the best methods of visual counting. The various technical difficulties encountered are discussed.

  19. Numerical method for solving integral equations of neutron transport. II

    International Nuclear Information System (INIS)

    Loyalka, S.K.; Tsai, R.W.

    1975-01-01

    In a recent paper it was pointed out that the weakly singular integral equations of neutron transport can be quite conveniently solved by a method based on subtraction of the singularity. That paper was devoted entirely to simple one-dimensional, isotropic-scattering, one-group problems. The present paper constitutes an interesting extension of the previous work: in addition to a typical two-group anisotropic-scattering albedo problem in slab geometry, the method is also applied to an isotropic-scattering problem in x-y geometry. These results are compared with discrete-ordinates Sₙ (ANISN or TWOTRAN-II) results, and for the problems considered here, the proposed method is found to be quite effective. Thus, the method appears to hold considerable potential for future applications. (auth)
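
The subtraction-of-singularity idea can be shown on a single weakly singular integral: split off the singular part, integrate it analytically, and apply an ordinary quadrature rule to the remaining smooth part. The kernel 1/sqrt|x-s| and the test functions below are illustrative choices, not the paper's transport kernels:

```python
import math

def weakly_singular_integral(g, x, n=4000):
    """∫_0^1 g(s)/sqrt(|x-s|) ds by subtraction of singularity:
    the bounded part (g(s)-g(x))/sqrt(|x-s|) is handled by the midpoint
    rule, while g(x)∫ ds/sqrt(|x-s|) = g(x)*2*(sqrt(x)+sqrt(1-x)) is exact."""
    h = 1.0 / n
    gx = g(x)
    smooth = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        smooth += (g(s) - gx) / math.sqrt(abs(x - s))
    smooth *= h
    return smooth + gx * 2.0 * (math.sqrt(x) + math.sqrt(1.0 - x))

# Check against cases with known answers.
val_const = weakly_singular_integral(lambda s: 1.0, 0.3)     # smooth part vanishes
exact_const = 2.0 * (math.sqrt(0.3) + math.sqrt(0.7))

val_linear = weakly_singular_integral(lambda s: s, 0.3)
exact_linear = (4.0 / 3.0) * 0.3 ** 1.5 + 2 * 0.3 * math.sqrt(0.7) \
    + (2.0 / 3.0) * 0.7 ** 1.5
```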

  20. Comparison of Three Different Methods for Pile Integrity Testing on a Cylindrical Homogeneous Polyamide Specimen

    Science.gov (United States)

    Lugovtsova, Y. D.; Soldatov, A. I.

    2016-01-01

    Three different methods for pile integrity testing are proposed to compare on a cylindrical homogeneous polyamide specimen. The methods are low strain pile integrity testing, multichannel pile integrity testing and testing with a shaker system. Since the low strain pile integrity testing is well-established and standardized method, the results from it are used as a reference for other two methods.

  1. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data

    Directory of Open Access Journals (Sweden)

    Li Chen

    2018-04-01

    Normalization is the first critical step in microbiome sequencing data analysis used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose geometric mean of pairwise ratios, a simple but effective normalization method, for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.

  2. Amylase and blood cell-count hematological radiation-injury biomarkers in a rhesus monkey radiation model: use of multiparameter and integrated biological dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Blakely, W.F. [Uniformed Services University, Armed Forces Radiobiology Research Institute, 8901 Wisconsin Avenue, Bethesda, MD 20889-5603 (United States)], E-mail: blakely@afrri.usuhs.mil; Ossetrova, N.I.; Manglapus, G.L.; Salter, C.A.; Levine, I.H.; Jackson, W.E.; Grace, M.B.; Prasanna, P.G.S.; Sandgren, D.J.; Ledney, G.D. [Uniformed Services University, Armed Forces Radiobiology Research Institute, 8901 Wisconsin Avenue, Bethesda, MD 20889-5603 (United States)

    2007-07-15

    Effective medical management of suspected radiation exposure incidents requires the recording of dynamic medical data (clinical signs and symptoms), biological assessments of radiation exposure, and physical dosimetry in order to provide diagnostic information to the treating physician and dose assessment for personnel radiation protection records. The need to rapidly assess radiation dose in mass-casualty and population-monitoring scenarios prompted an evaluation of suitable biomarkers that can provide early diagnostic information after exposure. We investigated the utility of serum amylase and hematological blood-cell count biomarkers to provide early assessment of severe radiation exposures in a non-human primate model (i.e., rhesus macaques; n=8) exposed to whole-body radiation of ⁶⁰Co gamma rays (6.5 Gy, 40 cGy min⁻¹). Serum amylase activity was significantly elevated (12.3±3.27- and 2.6±0.058-fold of day-zero samples) at 1 and 2 days, respectively, after radiation. Lymphocyte cell counts decreased (≤15% of day-zero samples) 1 and 2 days after radiation exposure. Neutrophil cell counts increased at day one by 1.9(±0.38)-fold compared with levels before irradiation. The ratios of neutrophil to lymphocyte cell counts increased by 13(±2.66)- and 4.23(±0.95)-fold at 1 and 2 days, respectively, after irradiation. These results demonstrate that increases in serum amylase activity, along with decreases in lymphocyte counts, increases in neutrophil cell counts, and increases in the ratio of neutrophil to lymphocyte counts 1 day after irradiation, can provide enhanced early triage discrimination of individuals with severe radiation exposure and injury. Use of the biodosimetry assessment tool (BAT) application is encouraged to permit dynamic recording of medical data in the management of a suspected radiological casualty.

  3. Evaluation of Normalization Methods on GeLC-MS/MS Label-Free Spectral Counting Data to Correct for Variation during Proteomic Workflows

    Science.gov (United States)

    Gokce, Emine; Shuford, Christopher M.; Franck, William L.; Dean, Ralph A.; Muddiman, David C.

    2011-12-01

    Normalization of spectral counts (SpCs) in label-free shotgun proteomic approaches is important to achieve reliable relative quantification. Three different SpC normalization methods, total spectral count (TSpC) normalization, normalized spectral abundance factor (NSAF) normalization, and normalization to selected proteins (NSP) were evaluated based on their ability to correct for day-to-day variation between gel-based sample preparation and chromatographic performance. Three spectral counting data sets obtained from the same biological conidia sample of the rice blast fungus Magnaporthe oryzae were analyzed by 1D gel and liquid chromatography-tandem mass spectrometry (GeLC-MS/MS). Equine myoglobin and chicken ovalbumin were spiked into the protein extracts prior to 1D-SDS- PAGE as internal protein standards for NSP. The correlation between SpCs of the same proteins across the different data sets was investigated. We report that TSpC normalization and NSAF normalization yielded almost ideal slopes of unity for normalized SpC versus average normalized SpC plots, while NSP did not afford effective corrections of the unnormalized data. Furthermore, when utilizing TSpC normalization prior to relative protein quantification, t-testing and fold-change revealed the cutoff limits for determining real biological change to be a function of the absolute number of SpCs. For instance, we observed the variance decreased as the number of SpCs increased, which resulted in a higher propensity for detecting statistically significant, yet artificial, change for highly abundant proteins. Thus, we suggest applying higher confidence level and lower fold-change cutoffs for proteins with higher SpCs, rather than using a single criterion for the entire data set. By choosing appropriate cutoff values to maintain a constant false positive rate across different protein levels (i.e., SpC levels), it is expected this will reduce the overall false negative rate, particularly for proteins with
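
The two normalizations that performed well above can be sketched in a few lines (the counts and protein lengths below are toy values, not data from the study):

```python
def tspc_normalize(spc_by_sample):
    """Total spectral count (TSpC) normalization: scale each sample so its
    total SpC equals the mean total across samples."""
    totals = [sum(s) for s in spc_by_sample]
    target = sum(totals) / len(totals)
    return [[c * target / t for c in s] for s, t in zip(spc_by_sample, totals)]

def nsaf(spc, lengths):
    """Normalized spectral abundance factor: (SpC/L) / sum_over_proteins(SpC/L)."""
    saf = [c / l for c, l in zip(spc, lengths)]
    total = sum(saf)
    return [v / total for v in saf]

# Toy run-to-run data: run2 is run1 sampled at twice the overall depth.
run1 = [10, 40, 50]
run2 = [20, 80, 100]
norm1, norm2 = tspc_normalize([run1, run2])   # identical after TSpC normalization

# NSAF for run1 given hypothetical protein lengths (residues).
nsaf1 = nsaf(run1, [100, 400, 250])
```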

  4. A simple reliability block diagram method for safety integrity verification

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2007-01-01

    IEC 61508 requires safety integrity verification for safety-related systems as a necessary procedure in the safety life cycle. PFDavg (average probability of failure on demand) must be calculated to verify the safety integrity level (SIL). Since IEC 61508-6 does not give detailed explanations of the definitions and PFDavg calculations for its examples, it is difficult for reliability or safety engineers to follow when they use the standard as guidance in practice. A method using reliability block diagrams is investigated in this study in order to provide a clear and feasible way of calculating PFDavg and to help those who take IEC 61508-6 as their guidance. The method first finds the mean down times (MDTs) of both the channel and the voted group, and then PFDavg. The calculated results for various voted groups are compared with those in IEC 61508 part 6 and Ref. [Zhang T, Long W, Sato Y. Availability of systems with self-diagnostic components-applying Markov model to IEC 61508-6. Reliab Eng Syst Saf 2003;80(2):133-41]. An interesting outcome emerges from the comparison. Furthermore, although differences in the MDTs of voted groups exist between IEC 61508-6 and this paper, the PFDavg values of the voted groups are comparatively close. With its detailed description, the RBD method presented can be applied to quantitative SIL verification, showing a similarity to the method in IEC 61508-6.
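
For orientation, the simplified low-demand formulas often quoted alongside IEC 61508-6 look like this. Common-cause failures, diagnostics, and repair time are ignored here, so this is a sketch rather than a standard-compliant calculation, and the failure rate and proof-test interval are hypothetical:

```python
# Simplified average probability of failure on demand, low-demand mode,
# with MTTR = 0 and no common-cause (beta) term.
def pfd_avg_1oo1(lambda_du, proof_test_interval_h):
    return lambda_du * proof_test_interval_h / 2.0

def pfd_avg_1oo2(lambda_du, proof_test_interval_h):
    return (lambda_du * proof_test_interval_h) ** 2 / 3.0

lam = 2.0e-6        # dangerous undetected failure rate, per hour (hypothetical)
t1 = 8760.0         # one-year proof-test interval, hours

pfd_1oo1 = pfd_avg_1oo1(lam, t1)   # ~8.8e-3, in the SIL 2 band (1e-3 .. 1e-2)
pfd_1oo2 = pfd_avg_1oo2(lam, t1)   # ~1.0e-4, redundancy gains nearly two decades
```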

  5. Numerical Simulation of Antennas with Improved Integral Equation Method

    International Nuclear Information System (INIS)

    Ma Ji; Fang Guang-You; Lu Wei

    2015-01-01

    Simulating antennas around a conducting object is a challenging task in computational electromagnetics, which is concerned with the behaviour of electromagnetic fields. To analyze this model efficiently, an improved integral equation-fast Fourier transform (IE-FFT) algorithm is presented in this paper. The proposed scheme employs two Cartesian grids with different sizes and locations to enclose the antenna and the other object, respectively. On the one hand, the IE-FFT technique is used to store the matrix in a sparse form and accelerate the matrix-vector multiplication for each sub-domain independently. On the other hand, the mutual interaction between sub-domains is taken as an additional exciting voltage in each matrix equation. By updating the integral equations several times, the whole electromagnetic system can reach a stable status. Finally, the validity of the presented method is verified through the analysis of typical antennas in the presence of a conducting object. (paper)

  6. A reliable method for the counting and control of single ions for single-dopant controlled devices

    International Nuclear Information System (INIS)

    Shinada, T; Kurosawa, T; Nakayama, H; Zhu, Y; Hori, M; Ohdomari, I

    2008-01-01

    By 2016, transistor device size will be just 10 nm. However, a transistor that is doped at a typical concentration of 10^18 atoms cm^-3 has only one dopant atom in the active channel region. Therefore, it can be predicted that conventional doping methods such as ion implantation and thermal diffusion will not be available ten years from now. We have been developing a single-ion implantation (SII) method that enables us to implant dopant ions one-by-one into semiconductors until the desired number is reached. Here we report a simple but reliable method to control the number of single-dopant atoms by detecting the change in drain current induced by single-ion implantation. The drain current decreases in a stepwise fashion as a result of the clusters of displaced Si atoms created by every single-ion incidence. This result indicates that the single-ion detection method we have developed is capable of detecting single-ion incidence with 100% efficiency. Our method potentially could pave the way to future single-atom devices, including a solid-state quantum computer

  7. Comparative analysis of dose rates in bricks determined by neutron activation analysis, alpha counting and X-ray fluorescence analysis for the thermoluminescence fine grain dating method

    Science.gov (United States)

    Bártová, H.; Kučera, J.; Musílek, L.; Trojek, T.

    2014-11-01

    In order to evaluate the age from the equivalent dose and to obtain an optimized and efficient procedure for thermoluminescence (TL) dating, it is necessary to obtain the values of both the internal and the external dose rates from dated samples and from their environment. The measurements described and compared in this paper refer to bricks from historic buildings and a fine-grain dating method. The external doses are therefore negligible, if the samples are taken from a sufficient depth in the wall. However, both the alpha dose rate and the beta and gamma dose rates must be taken into account in the internal dose. The internal dose rate to fine-grain samples is caused by the concentrations of natural radionuclides 238U, 235U, 232Th and members of their decay chains, and by 40K concentrations. Various methods can be used for determining trace concentrations of these natural radionuclides and their contributions to the dose rate. The dose rate fraction from 238U and 232Th can be calculated, e.g., from the alpha count rate, or from the concentrations of 238U and 232Th, measured by neutron activation analysis (NAA). The dose rate fraction from 40K can be calculated from the concentration of potassium measured, e.g., by X-ray fluorescence analysis (XRF) or by NAA. Alpha counting and XRF are relatively simple and are accessible for an ordinary laboratory. NAA can be considered as a more accurate method, but it is more demanding regarding time and costs, since it needs a nuclear reactor as a neutron source. A comparison of these methods allows us to decide whether the time- and cost-saving simpler techniques introduce uncertainty that is still acceptable.
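    The way the measured concentrations enter the age equation can be sketched as follows. This is a generic, hedged illustration (the k_alpha effectiveness factor and all numbers are ours, not the paper's): the alpha component of the dose rate is down-weighted because alpha particles induce less TL per unit absorbed dose, and the age is the equivalent dose divided by the total annual dose rate.

```python
def total_dose_rate(alpha_component, beta_component, gamma_component,
                    k_alpha=0.1):
    """Total annual dose rate to a fine-grain sample, with the alpha
    contribution scaled by an effectiveness (k) factor (illustrative
    default; in practice it is determined per sample)."""
    return k_alpha * alpha_component + beta_component + gamma_component

def tl_age_years(equivalent_dose_gy, annual_dose_rate_gy):
    """TL age = equivalent (palaeo)dose / annual dose rate."""
    return equivalent_dose_gy / annual_dose_rate_gy
```

    For instance, an equivalent dose of 8 Gy at a total dose rate of 4 mGy/year gives an age of 2000 years; this is why the uncertainty introduced by the simpler concentration measurements propagates directly into the age.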

  8. A comparison of non-integrating reprogramming methods

    Science.gov (United States)

    Schlaeger, Thorsten M; Daheron, Laurence; Brickler, Thomas R; Entwisle, Samuel; Chan, Karrie; Cianci, Amelia; DeVine, Alexander; Ettenger, Andrew; Fitzgerald, Kelly; Godfrey, Michelle; Gupta, Dipti; McPherson, Jade; Malwadkar, Prerana; Gupta, Manav; Bell, Blair; Doi, Akiko; Jung, Namyoung; Li, Xin; Lynes, Maureen S; Brookes, Emily; Cherry, Anne B C; Demirbas, Didem; Tsankov, Alexander M; Zon, Leonard I; Rubin, Lee L; Feinberg, Andrew P; Meissner, Alexander; Cowan, Chad A; Daley, George Q

    2015-01-01

    Human induced pluripotent stem cells (hiPSCs) are useful in disease modeling and drug discovery, and they promise to provide a new generation of cell-based therapeutics. To date there has been no systematic evaluation of the most widely used techniques for generating integration-free hiPSCs. Here we compare Sendai-viral (SeV), episomal (Epi) and mRNA transfection methods using a number of criteria. All methods generated high-quality hiPSCs, but significant differences existed in aneuploidy rates, reprogramming efficiency, reliability and workload. We discuss the advantages and shortcomings of each approach, and present and review the results of a survey of a large number of human reprogramming laboratories on their independent experiences and preferences. Our analysis provides a valuable resource to inform the use of specific reprogramming methods for different laboratories and different applications, including clinical translation. PMID:25437882

  9. Improved parallel solution techniques for the integral transport matrix method

    Energy Technology Data Exchange (ETDEWEB)

    Zerr, R. Joseph, E-mail: rjz116@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA (United States); Azmy, Yousry Y., E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, Raleigh, NC (United States)

    2011-07-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)

  10. Improved parallel solution techniques for the integral transport matrix method

    International Nuclear Information System (INIS)

    Zerr, R. Joseph; Azmy, Yousry Y.

    2011-01-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
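    The red-black colouring idea is easiest to see on a small model problem. The sketch below is our own minimal example (not the ITMM implementation): red-black Gauss-Seidel applied to a 1D Poisson system, where points of one colour have neighbours only of the other colour, so each colour sweep can be performed in parallel.

```python
import numpy as np

def red_black_gauss_seidel(b, n_iter=200):
    """Red-black Gauss-Seidel for the 1D Poisson system
    2*x[i] - x[i-1] - x[i+1] = b[i] with zero Dirichlet boundaries.
    Even ('red') and odd ('black') points are updated in alternating
    sweeps; points of the same colour are independent of each other."""
    n = len(b)
    x = np.zeros(n)
    padded = np.zeros(n + 2)  # zero boundary values at both ends
    for _ in range(n_iter):
        for colour in (0, 1):
            padded[1:-1] = x
            idx = np.arange(colour, n, 2)
            # left neighbour is padded[idx], right is padded[idx + 2]
            x[idx] = (b[idx] + padded[idx] + padded[idx + 2]) / 2.0
    return x
```

    Because same-colour points never touch, each colour sweep is a data-parallel operation; this is the property the PGS algorithm exploits at the sub-domain level.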

  11. Improved method for the determination of the cortisol production rate using high-performance liquid chromatography and liquid scintillation counting

    NARCIS (Netherlands)

    van Ingen, H. E.; Endert, E.

    1988-01-01

    Two new methods for the determination of the cortisol production rate using reversed-phase high-performance liquid chromatography are described. One uses ultraviolet detection at 205 nm, the other on-line post-column derivatization with benzamidine, followed by fluorimetric detection. The specific

  12. Counting to 20: Online Implementation of a Face-to-Face, Elementary Mathematics Methods Problem-Solving Activity

    Science.gov (United States)

    Schwartz, Catherine Stein

    2012-01-01

    This study describes implementation of the same problem-solving activity in both online and face-to-face environments. The activity, done in the first class period or first module of a K-2 mathematics methods course, was initially used in a face-to-face class and then adapted later for use in an online class. While the task was originally designed…

  13. Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis

    Science.gov (United States)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.

    1998-01-01

    Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.

  14. Method for deposition of a conductor in integrated circuits

    Science.gov (United States)

    Creighton, J. Randall; Dominguez, Frank; Johnson, A. Wayne; Omstead, Thomas R.

    1997-01-01

    A method is described for fabricating integrated semiconductor circuits and, more particularly, for the selective deposition of a conductor onto a substrate employing a chemical vapor deposition process. By way of example, tungsten can be selectively deposited onto a silicon substrate. At the onset of loss of selectivity of deposition of tungsten onto the silicon substrate, the deposition process is interrupted and unwanted tungsten which has deposited on a mask layer with the silicon substrate can be removed employing a halogen etchant. Thereafter, a plurality of deposition/etch back cycles can be carried out to achieve a predetermined thickness of tungsten.

  15. [Development method of healthcare information system integration based on business collaboration model].

    Science.gov (United States)

    Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2015-02-01

    Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, people participating in an integration project usually communicate via free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing business process model and notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration was proposed in this paper. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations. In addition, an integration case in a radiology scenario was used to verify the method.

  16. Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Phillips, Jason J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms (ATF), Huntsville, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-03-25

    The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test and the reasons for these changes are documented in this report. The most significant modifications in standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center, (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.

  17. Box-counting dimension revisited: presenting an efficient method of minimising quantisation error and an assessment of the self-similarity of structural root systems

    Directory of Open Access Journals (Sweden)

    Martin eBouda

    2016-02-01

    Full Text Available Fractal dimension (FD), estimated by box-counting, is a metric used to characterise plant anatomical complexity or space-filling characteristics for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantisation error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterise the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitised in 3D and subjected to box-counts. A pattern search algorithm was used to minimise QE by optimising grid placement, and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE due to both grid position and orientation was a significant source of error in FD estimates, but pattern search provided an efficient means of minimising it. Pattern search had a higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitisations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development.
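    A minimal box-counting FD estimator, written by us for illustration and without the pattern-search grid optimisation the study proposes (so it remains subject to quantisation error from the fixed grid placement), might look like this:

```python
import numpy as np

def box_count_dimension(points, scales):
    """Estimate fractal dimension by box counting: count the occupied
    boxes N(s) for each box size s, then fit log N(s) versus log s;
    the FD estimate is the negated slope."""
    points = np.asarray(points, dtype=float)
    counts = []
    for s in scales:
        boxes = np.floor(points / s).astype(int)   # box index per point
        counts.append(len({tuple(box) for box in boxes}))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope
```

    For a densely sampled line segment the estimate approaches 1, as expected for a one-dimensional object; checking that the local slope is stable across scales is precisely the self-similarity test the study argues should not be skipped.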

  18. Life cycle integrated thermoeconomic assessment method for energy conversion systems

    International Nuclear Information System (INIS)

    Kanbur, Baris Burak; Xiang, Liming; Dubey, Swapnil; Choo, Fook Hoong; Duan, Fei

    2017-01-01

    Highlights: • A new LCA integrated thermoeconomic approach is presented. • The new unit fuel cost is found 4.8 times higher than the classic method. • The new defined parameter increased the sustainability index by 67.1%. • The case studies are performed for countries with different CO2 prices. - Abstract: Life cycle assessment (LCA) based thermoeconomic modelling has been applied for the evaluation of energy conversion systems since it provides more comprehensive and applicable assessment criteria. This study proposes an improved thermoeconomic method, named life cycle integrated thermoeconomic assessment (LCiTA), which combines the LCA based enviroeconomic parameters in the production steps of the system components and fuel with the conventional thermoeconomic method for energy conversion systems. A micro-cogeneration system is investigated and analyzed with the LCiTA method; the comparative studies show that the unit cost of fuel obtained with the LCiTA method is 3.8 times higher than with the conventional thermoeconomic model. It is also realized that the enviroeconomic parameters during the operation of the system components do not have significant impacts on the system streams since the exergetic parameters are dominant in the thermoeconomic calculations. Moreover, the improved sustainability index is found roughly 67.2% higher than the previously defined sustainability index, suggesting that the enviroeconomic and thermoeconomic parameters decrease the impact of the exergy destruction in the sustainability index definition. To find feasible operation conditions for the micro-cogeneration system, different assessment strategies are presented. Furthermore, a case study for Singapore is conducted to assess the impact of the forecasted carbon dioxide prices on the thermoeconomic performance of the micro-cogeneration system.

  19. Methods for using argon-39 to age-date groundwater using ultra-low-background proportional counting

    Energy Technology Data Exchange (ETDEWEB)

    Mace, Emily; Aalseth, Craig; Brandenberger, Jill; Day, Anthony; Hoppe, Eric; Humble, Paul; Keillor, Martin; Kulongoski, Justin; Overman, Cory; Panisko, Mark; Seifert, Allen; White, Signe; Wilcox Freeburg, Eric; Williams, Richard

    2017-08-01

    Argon-39 can be used as a tracer for age-dating glaciers, oceans, and more recently, groundwater. With a half-life of 269 years, 39Ar fills an intermediate age range gap (50-1,000 years) not currently covered by other common groundwater tracers. Therefore, adding this tracer to the data suite for groundwater studies provides an important tool for improving our understanding of groundwater systems. We present the methods employed for arriving at an age-date for a given sample of argon degassed from groundwater.
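    The age calculation itself follows directly from the decay law. A hedged sketch (assuming the measured 39Ar activity is expressed as a fraction of the modern atmospheric-equilibrium value):

```python
import math

AR39_HALF_LIFE_YEARS = 269.0

def ar39_age_years(fraction_modern):
    """Groundwater age from A/A0 = exp(-lambda * t):
    t = -ln(A/A0) / lambda, with lambda = ln(2) / t_half."""
    decay_constant = math.log(2.0) / AR39_HALF_LIFE_YEARS
    return -math.log(fraction_modern) / decay_constant
```

    A sample at 50% of modern activity dates to one half-life (269 years); at 25%, two half-lives (538 years), conveniently inside the 50-1,000 year window mentioned above.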

  20. Survey of anthelmintic resistance on Danish horse farms, using 5 different methods of calculating faecal egg count reduction

    DEFF Research Database (Denmark)

    Craven, J.; Bjørn, H.; Henriksen, S.A.

    1998-01-01

    of resistance in sheep was the most sensitive procedure for detecting resistance. Using this method benzimidazole resistance was detected on 33 of 42 farms (79%) examined. Pyrantel was tested on 15 farms and FECR tests indicate resistance on 3 (30%) farms. On 2 farms on which resistance to pyrantel was detected...... resistance to benzimidazoles was also detected. On one of 16 farms examined ivermectin resistance was indicated at Day 14 but not at Day 19. On the 15 remaining farms ivermectin was effective. Due to the high prevalence of anthelmintic resistance in Danish horse herds it is recommended that tests...
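    One common way of calculating the reduction is from group mean egg counts (a simplified, WAAVP-style formula; the survey compares five calculation variants, and this is only one of them):

```python
def fecr_percent(control_mean_epg, treated_mean_epg):
    """Faecal egg count reduction from group mean eggs-per-gram:
    FECR% = 100 * (1 - T / C)."""
    return 100.0 * (1.0 - treated_mean_epg / control_mean_epg)
```

    For example, a control group mean of 500 EPG against a treated group mean of 50 EPG gives a 90% reduction; resistance is typically suspected when FECR falls below a threshold such as 90-95%, depending on the drug and the calculation method used.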

  1. Bonding of Si wafers by surface activation method for the development of high efficiency high counting rate radiation detectors

    International Nuclear Information System (INIS)

    Kanno, Ikuo; Yamashita, Makoto; Onabe, Hideaki

    2006-01-01

    Si wafers with two different resistivities ranging over two orders of magnitude were bonded by the surface activation method. The resistivities of the bonded Si wafers were measured as a function of annealing temperature. Using calculations based on a model, the interface resistivities of the bonded Si wafers were estimated as a function of the measured resistivities. With thermal treatment from 500 °C to 900 °C, all interfaces showed high resistivity, with behavior close to that of an insulator. Annealing at 1000 °C decreased the interface resistivity, and close to ideal bonding was observed after thermal treatment at 1100 °C. (author)

  2. The reduced basis method for the electric field integral equation

    International Nuclear Information System (INIS)

    Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.

    2011-01-01

    We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.

  3. Integrating Multiple Teaching Methods into a General Chemistry Classroom

    Science.gov (United States)

    Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella

    1998-02-01

    In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as their perceptions of the utility of each method were used to assess the effectiveness of the integration of the teaching strategies as received by the students. Results suggest that each strategy serves a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received by the students and that all teaching strategies are necessary for students to get the most out of the course.

  4. Integral equation methods for vesicle electrohydrodynamics in three dimensions

    Science.gov (United States)

    Veerapaneni, Shravan

    2016-12-01

    In this paper, we develop a new boundary integral equation formulation that describes the coupled electro- and hydro-dynamics of a vesicle suspended in a viscous fluid and subjected to external flow and electric fields. The dynamics of the vesicle are characterized by a competition between the elastic, electric and viscous forces on its membrane. The classical Taylor-Melcher leaky-dielectric model is employed for the electric response of the vesicle and the Helfrich energy model combined with local inextensibility is employed for its elastic response. The coupled governing equations for the vesicle position and its transmembrane electric potential are solved using a numerical method that is spectrally accurate in space and first-order in time. The method uses a semi-implicit time-stepping scheme to overcome the numerical stiffness associated with the governing equations.

  5. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space to represent phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
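    A PCA-based subspace projection of the kind underlying the IPS feature can be sketched as follows (a generic PCA illustration of ours, not the paper's phoneme-subspace integration step):

```python
import numpy as np

def pca_basis(features, k):
    """Top-k principal directions of a feature matrix
    (n_samples x n_dims), via the eigendecomposition of the
    sample covariance matrix."""
    centered = features - features.mean(axis=0)
    cov = centered.T @ centered / (len(features) - 1)
    _, vectors = np.linalg.eigh(cov)      # eigenvalues ascending
    return vectors[:, ::-1][:, :k]        # reorder to top-k

def project(features, basis):
    """Project mean-removed feature vectors onto the subspace."""
    return (features - features.mean(axis=0)) @ basis
```

    In the IPS scheme the bases are built per phoneme class and then combined into one integrated subspace; here a single global basis stands in for that step.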

  6. Bacterial and Fungal Counts of Dried and Semi-Dried Foods Collected from Dhaka, Bangladesh, and Their Reduction Methods.

    Science.gov (United States)

    Feroz, Farahnaaz; Shimizu, Hiromi; Nishioka, Terumi; Mori, Miho; Sakagami, Yoshikazu

    2016-01-01

    Food is a basic necessity for human survival, but it is still a vehicle for the transmission of food-borne disease. Various studies have examined the roles of spices, herbs, nuts, and semi-dried fruits, making the need for safe and convenient methods of decontamination a necessity. The current study determined the bacterial and fungal loads of 26 spices and herbs, 5 nuts, 10 semi-dried fruits and 5 other foods. Spices, herbs and semi-dried foods demonstrated the highest bacterial and fungal loads, with the majority showing over 10^4 CFU/mL. Nuts and other foods showed growths ranging from 10^2 to 10^6 CFU/mL. The current study also attempted to determine the effects of heat and plasma treatment. The log reduction of bacterial growth after heat treatment (maximum: 120 min at 60 °C) was between 0.08 and 4.47, and the log reduction after plasma treatment (maximum: 40 min) ranged from 2.37 to 5.75. Spices showed the lowest rates of reduction, whereas the semi-dried and other foods showed moderate to high levels of decrease after heat treatment. The log reduction of fungal growth after heat treatment ranged from 0.27 to 4.40, and the log reduction after plasma treatment ranged from 2.15 to 5.91. Furthermore, we validated the sterilization effect of plasma treatment against Bacillus spp. and Staphylococcus spp. by using scanning electron microscopy. Both treatment methods could prove advantageous in agriculture-related fields, enhancing the quality of foods.
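    The log reductions quoted above are the base-10 logarithm of the ratio of viable counts before and after treatment; a one-line sketch:

```python
import math

def log_reduction(cfu_before, cfu_after):
    """Log10 reduction in viable count after a treatment:
    e.g. 10^6 -> 10^2 CFU/mL is a 4-log reduction."""
    return math.log10(cfu_before / cfu_after)
```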

  7. Accelerometer method and apparatus for integral display and control functions

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-06-01

    Vibration analysis has been used for years to provide a determination of the proper functioning of different types of machinery, including rotating machinery and rocket engines. A determination of a malfunction, if detected at a relatively early stage in its development, will allow changes in operating mode or a sequenced shutdown of the machinery prior to a total failure. Such preventative measures result in less extensive and/or less expensive repairs, and can also prevent a sometimes catastrophic failure of equipment. Standard vibration analyzers are generally rather complex, expensive, and of limited portability. They also usually result in displays and controls being located remotely from the machinery being monitored. Consequently, a need exists for improvements in accelerometer electronic display and control functions which are more suitable for operation directly on machines and which are not so expensive and complex. The invention includes methods and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. The apparatus includes an accelerometer package having integral display and control functions. The accelerometer package is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine condition over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase over the selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated. The benefits of a vibration recording and monitoring system with controls and displays readily
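    The acceleration-to-velocity step described above can be illustrated with a simple cumulative trapezoidal integration of evenly sampled data (a generic digital sketch of ours; the invention performs the integration on the analog accelerometer signal in circuitry):

```python
def integrate_trapezoid(samples, dt):
    """Cumulative trapezoidal integration of evenly sampled data,
    e.g. acceleration samples -> velocity estimates."""
    out = [0.0]
    for a0, a1 in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a0 + a1) * dt)
    return out
```

    With a constant acceleration of 2 m/s^2 sampled every 0.5 s, the velocity estimate grows by 1 m/s per sample, as expected.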

  8. Acoustic 3D modeling by the method of integral equations

    Science.gov (United States)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. Tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an efficient matrix-vector multiplication based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system of equations and to parallelize across multiple sources. Practical examples and efficiency tests are also presented.
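    The FFT-accelerated matrix-vector product at the core of such IE solvers can be illustrated in one dimension: for a convolution-type kernel on a uniform grid the system matrix is Toeplitz, and a circulant embedding reduces A @ x from O(n^2) to O(n log n). The kernel samples below are invented for illustration; the record itself gives no code.

```python
import numpy as np

# Toeplitz-times-vector via circulant embedding and the FFT.
def toeplitz_matvec_fft(first_col, first_row, x):
    n = len(x)
    # First column of the 2n x 2n circulant embedding.
    c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])
    xp = np.concatenate([x, np.zeros(n)])
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp)).real[:n]

n = 64
kernel = 1.0 / (1.0 + np.arange(n))   # decaying 1/r-like kernel samples
A = np.array([[kernel[abs(i - j)] for j in range(n)] for i in range(n)])
x = np.random.default_rng(0).standard_normal(n)
err = np.max(np.abs(A @ x - toeplitz_matvec_fft(kernel, kernel, x)))
print(err < 1e-10)  # matches the dense O(n^2) product to rounding error
```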

  9. Hierarchical Matrices Method and Its Application in Electromagnetic Integral Equations

    Directory of Open Access Journals (Sweden)

    Han Guo

    2012-01-01

    Full Text Available The hierarchical (H-) matrices method is a general mathematical framework that provides a highly compact representation and efficient numerical arithmetic. When applied in integral-equation (IE) based computational electromagnetics, H-matrices can be regarded as a fast algorithm; both the CPU time and the memory requirement are reduced significantly. Its kernel-independent feature also makes it suitable for any kind of integral equation. To solve an H-matrices system, Krylov iteration methods can be employed with appropriate preconditioners, and direct solvers based on the hierarchical structure of H-matrices are also available with high efficiency and accuracy, which is a unique advantage compared to other fast algorithms. In this paper, a novel sparse approximate inverse (SAI) preconditioner in multilevel fashion is proposed to accelerate the convergence rate of Krylov iterations for solving H-matrices systems in electromagnetic applications, and a group of parallel fast direct solvers is developed for dealing with multiple right-hand-side cases. Finally, numerical experiments are given to demonstrate the advantages of the proposed multilevel preconditioner compared to conventional “single level” preconditioners and the practicability of the fast direct solvers for arbitrarily complex structures.
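    The compression that H-matrices exploit can be demonstrated on a single admissible block: for well-separated point sets, a smooth kernel block is numerically low-rank and can be stored in factored form. A toy sketch, with geometry and kernel chosen purely for illustration:

```python
import numpy as np

# An admissible (well-separated) block of a smooth kernel is numerically
# low-rank, which is what H-matrices exploit.
src = np.linspace(0.0, 1.0, 80)    # source points
obs = np.linspace(5.0, 6.0, 80)    # observation points, far from sources
block = 1.0 / np.abs(obs[:, None] - src[None, :])

U, s, Vt = np.linalg.svd(block, full_matrices=False)
k = int(np.sum(s > 1e-10 * s[0]))          # numerical rank at tol 1e-10
approx = (U[:, :k] * s[:k]) @ Vt[:k]       # rank-k factored storage
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
print(k, rel_err < 1e-8)   # k is far below the full dimension of 80
```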

  10. A method for establishing integrity in software-based systems

    International Nuclear Information System (INIS)

    Staple, B.D.; Berg, R.S.; Dalton, L.J.

    1997-01-01

    In this paper, the authors present a digital system requirements specification method that has demonstrated a potential for improving the completeness of requirements while reducing ambiguity. It assists with making proper digital system design decisions, including the defense against specific digital system failure modes. It also helps define the technical rationale for all of the component and interface requirements. The approach is a procedural method that abstracts key features, which are expanded in a partitioning that identifies and characterizes hazards and safety system function requirements. The key system features are subjected to a hierarchy that progressively defines their detailed characteristics and components. This process produces a set of requirements specifications for the system and all of its components. Based on application to nuclear power plants, the approach described here uses two ordered domains: plant safety followed by safety system integrity. Plant safety refers to those systems defined to meet the safety goals for the protection of the public. Safety system integrity refers to systems defined to ensure that the system can meet the safety goals. Within each domain, a systematic process is used to identify hazards and define the corresponding means of defense and mitigation. In both domains, the approach and structure are focused on the completeness of information and on eliminating ambiguities in the generation of safety system requirements that will achieve the plant safety goals.

  11. Comparison of triple to double coincidence ratio and Quench Parameter External methods for the determination of 3H efficiency by liquid scintillation counting

    International Nuclear Information System (INIS)

    Nisti, M.B.; Saueia, C.H.R.; Mazzilli, B.P.

    2013-01-01

    The aim of this study is to determine tritium counting efficiency by liquid scintillation counting using two methodologies, Quench Parameter External (QPE) and Triple to Double Coincidence Ratio (TDCR), and to compare the results. The equipment used was the HIDEX 300-SL Liquid Scintillation Counter, which has three photomultipliers operated in coincidence, an adjustable discrimination level, and MikroWin 2000 software. The efficiency varied from 0.028 to 0.706 cps/dps for QPE and from 0.061 to 0.703 cps/dps for TDCR. The two methods gave different efficiencies over the quench-parameter range from 459 to 572; above this range the efficiencies were similar. The efficiencies were verified by participation in the National Intercomparison Program (PNI). (author)
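    As a rough numeric illustration of the two quantities compared above: counting efficiency is the count rate divided by the known activity of the standard, and the TDCR parameter is the ratio of triple- to double-coincidence rates, which the TDCR model uses as its free parameter. All values below are invented.

```python
# Counting efficiency (cps/dps) and the TDCR parameter from measured
# rates; all numbers are invented for illustration.
activity_dps = 1000.0      # known activity of the 3H standard (dps)
double_cps = 650.0         # double-coincidence counting rate (cps)
triple_cps = 420.0         # triple-coincidence counting rate (cps)

efficiency = double_cps / activity_dps  # efficiency in cps/dps
tdcr = triple_cps / double_cps          # free parameter of the TDCR model
print(efficiency, round(tdcr, 3))  # 0.65 0.646
```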

  12. Efficacy of ivermectin against gastrointestinal nematodes of cattle in Denmark evaluated by different methods for analysis of faecal egg count reduction.

    Science.gov (United States)

    Peña-Espinoza, Miguel; Thamsborg, Stig M; Denwood, Matthew J; Drag, Markus; Hansen, Tina V; Jensen, Vibeke F; Enemark, Heidi L

    2016-12-01

    The efficacy of ivermectin (IVM) against gastrointestinal nematodes in Danish cattle was assessed by the faecal egg count reduction test (FECRT). Six cattle farms with a history of clinical parasitism and avermectin use were included. On the day of treatment (Day 0), 20 naturally infected calves per farm (total n = 120) were stratified by initial faecal egg counts (FEC) and randomly allocated to a treatment group dosed with 0.2 mg IVM per kg body weight s.c. (IVM; n = 10) or an untreated control group (CTL; n = 10). Individual FEC were obtained at Day 0 and Day 14 post-treatment, and pooled faeces by group were cultured to isolate L3 for detection of Ostertagia ostertagi and Cooperia oncophora by qPCR. Treatment efficacies were analysed using the recommended WAAVP method and two open-source statistical procedures based on Bayesian modelling: 'eggCounts' and 'Bayescount'. A simulation study evaluated the ability of the different procedures to correctly identify FEC reduction percentages in simulated bovine FEC data representing the observed real data. In the FECRT, reduced IVM efficacy was detected in three farms by all procedures using data from treated animals only, and in one farm according to the procedures including data from treated and untreated cattle. Post-treatment, O. ostertagi and C. oncophora L3 were detected by qPCR in faeces of treated animals from one and three herds with declared reduced IVM efficacy, respectively. The simulation study showed that all methods performed worse as FEC aggregation increased post-treatment, and suggested that a treatment group of 10 animals is insufficient for the FECRT in cattle. This is the first report of reduced anthelmintic efficacy in Danish cattle and warrants the implementation of larger surveys. Advantages and caveats of Bayesian modelling and the relevance of including untreated cattle in the FECRT are discussed.
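    The WAAVP-style reduction from treated-animal data alone can be sketched as follows. The egg counts are invented, the 95% efficacy threshold in the comment is a common convention assumed here rather than stated in the record, and the Bayesian procedures named above are considerably more involved.

```python
# Group-mean FECR from treated animals' pre- and post-treatment egg
# counts (eggs per gram; values invented).
pre = [350, 200, 425, 150, 275, 300, 175, 250, 400, 225]
post = [45, 30, 60, 10, 35, 50, 20, 40, 55, 25]

mean_pre = sum(pre) / len(pre)      # 275.0
mean_post = sum(post) / len(post)   # 37.0
fecr = 100.0 * (1.0 - mean_post / mean_pre)
print(round(fecr, 1))  # 86.5 -- below a 95% threshold, efficacy is suspect
```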

  13. The Final Count Down: A Review of Three Decades of Flight Controller Training Methods for Space Shuttle Mission Operations

    Science.gov (United States)

    Dittermore, Gary; Bertels, Christie

    2011-01-01

    Operations of human spaceflight systems are extremely complex; therefore, the training and certification of operations personnel is a critical piece of ensuring mission success. Mission Control Center (MCC-H), at the Lyndon B. Johnson Space Center in Houston, Texas, manages mission operations for the Space Shuttle Program, including the training and certification of the astronauts and flight control teams. An overview of a flight control team's makeup and responsibilities during a flight, and details on how those teams are trained and certified, reveals that while the training methodology for developing flight controllers has evolved significantly over the last thirty years, the core goals and competencies have remained the same. In addition, the facilities and tools used in the control center have evolved. Changes in methodology and tools have been driven by many factors, including lessons learned, technology, shuttle accidents, shifts in risk posture, and generational differences. Flight controllers share their experiences in training and operating the space shuttle. The primary training method throughout the program has been mission simulations of the orbit, ascent, and entry phases, to truly "train like you fly". A review of lessons learned from flight controller training suggests how they could be applied to future human spaceflight endeavors, including missions to the moon or to Mars. The lessons learned from operating the space shuttle for over thirty years will help the space industry build the next human transport space vehicle.

  14. Multiple-Event, Single-Photon Counting Imaging Sensor

    Science.gov (United States)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is the single-event method of photon-count registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency substantially reduces the benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting in environments ranging from ultra-low light to high flux. Moreover, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.
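    The dynamic-range argument can be illustrated with a small simulation: under a Poisson flux of several photons per frame, a single-event pixel clips every frame at one count, while a multiple-event pixel keeps the full count. Flux and frame count below are invented.

```python
import numpy as np

# Poisson photon flux at 5 photons per frame over 1000 frames; the
# single-event pixel registers at most one count per frame, the
# multiple-event pixel registers them all (parameters invented).
rng = np.random.default_rng(0)
counts = rng.poisson(5.0, 1000)

multi_total = int(counts.sum())                  # multiple-event architecture
single_total = int(np.minimum(counts, 1).sum())  # single-event: clipped
print(single_total, multi_total)  # single-event loses most of the signal
```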

  15. An integrated approach for facilities planning by ELECTRE method

    Science.gov (United States)

    Elbishari, E. M. Y.; Hazza, M. H. F. Al; Adesta, E. Y. T.; Rahman, Nur Salihah Binti Abdul

    2018-01-01

    Facility planning is concerned with the design, layout, and accommodation of people, machines and activities of a system. Most researchers investigate the production area layout and the related facilities; however, few investigate the relationship between the production space and the service departments. The aim of this research is to integrate different approaches in order to evaluate, analyse and select the best facilities planning method able to explain the relationship between the production area and other supporting departments and its effect on human effort. To achieve this objective, two different approaches have been integrated: Apple's layout procedure, one of the effective tools in planning factories, and the ELECTRE method, one of the Multi-Criteria Decision Making (MCDM) methods, to minimize the risk of poor facilities planning. Dalia Industries was selected as a case study; the factory was divided into two main areas: the whole facility (layout A) and the manufacturing area (layout B). This article is concerned with the manufacturing area layout (layout B). After analysing the gathered data, the manufacturing area was divided into 10 activities. Five factors were used to compare the alternatives: inter-department satisfaction level, total distance travelled by workers, total distance travelled by the product, total travel time for workers, and total travel time for the product. Three layout alternatives were developed in addition to the original layout. Apple's layout procedure was used to study and evaluate the alternative layouts by calculating scores for each of the factors. After obtaining the scores, the ELECTRE method was used to compare the proposed alternatives with each other and with
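    The concordance step of an ELECTRE-style comparison can be sketched as follows. The weights and factor scores are invented, not the case-study data; a full ELECTRE analysis would also compute discordance and apply outranking thresholds.

```python
import numpy as np

# Concordance matrix for three hypothetical layout alternatives scored on
# five factors (weights and scores invented; higher score = better).
weights = np.array([0.3, 0.2, 0.2, 0.15, 0.15])
scores = np.array([
    [7.0, 5.0, 6.0, 4.0, 5.0],   # alternative 1
    [6.0, 6.0, 5.0, 5.0, 6.0],   # alternative 2
    [8.0, 4.0, 7.0, 3.0, 4.0],   # alternative 3
])

n = len(scores)
concordance = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if a != b:
            # Sum the weights of criteria where a is at least as good as b.
            concordance[a, b] = weights[scores[a] >= scores[b]].sum()
print(concordance)
```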

  16. Sequence analysis of annually normalized citation counts: an empirical analysis based on the characteristic scores and scales (CSS) method.

    Science.gov (United States)

    Bornmann, Lutz; Ye, Adam Y; Ye, Fred Y

    2017-01-01

    In bibliometrics, only a few publications have focused on the citation histories of publications, where the citations for each citing year are assessed. In this study, therefore, annual categories of field- and time-normalized citation scores (based on the characteristic scores and scales method: 0 = poorly cited, 1 = fairly cited, 2 = remarkably cited, and 3 = outstandingly cited) are used to study the citation histories of papers. As our dataset, we used all articles published in 2000 and their annual citation scores until 2015. We generated annual sequences of citation scores (e.g., [Formula: see text]) and compared the sequences of annual citation scores of six broader fields (natural sciences, engineering and technology, medical and health sciences, agricultural sciences, social sciences, and humanities). In agreement with previous studies, our results demonstrate that sequences with poorly cited (0) and fairly cited (1) elements dominate the publication set; sequences with remarkably cited (2) and outstandingly cited (3) periods are rare. The highest percentages of constantly poorly cited papers can be found in the social sciences; the lowest percentages are in the agricultural sciences and humanities. The largest group of papers with remarkably cited (2) and/or outstandingly cited (3) periods shows an increasing impact over the citing years with the following order of sequences: [Formula: see text] (6.01%), followed by [Formula: see text] (1.62%). Only 0.11% of the papers (n = 909) are constantly at the outstandingly cited level.
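    One common formulation of the characteristic scores and scales (CSS) thresholds uses iterated means: papers below the mean of all counts form class 0, papers below the mean of the remaining counts form class 1, and so on. A minimal sketch with invented citation counts (the study above additionally field- and time-normalizes the scores):

```python
# Iterated-mean thresholds for CSS classes; citation counts invented.
def css_classes(counts, levels=3):
    thresholds, pool = [], list(counts)
    for _ in range(levels):
        m = sum(pool) / len(pool)
        thresholds.append(m)
        pool = [c for c in pool if c >= m]
        if not pool:
            break
    # A paper's class is the number of thresholds it reaches.
    return [sum(c >= t for t in thresholds) for c in counts]

counts = [0, 1, 2, 3, 5, 8, 13, 40, 120]
print(css_classes(counts))  # [0, 0, 0, 0, 0, 0, 0, 1, 3]
```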

  17. Defining a realistic control for the chloroform fumigation-incubation method using microscopic counting and 14C-substrates

    International Nuclear Information System (INIS)

    Horwath, W.R.; Paul, E.A.; Harris, D.; Norton, J.; Jagger, L.; National Science Foundation, Logan, UT; Horton, K.A.

    1996-01-01

    Chloroform fumigation-incubation (CFI) has made possible the extensive characterization of soil microbial biomass carbon (C) (MBC). Defining the non-microbial C mineralized in soils following fumigation remains the major limitation of CFI. The mineralization of non-microbial C during CFI was examined by adding 14C-maize to soil before incubation. The decomposition of the 14C-maize during a 10-d incubation after fumigation was 22.5% of that in non-fumigated control soils. Re-inoculation of the fumigated soil raised 14C-maize decomposition to 77% of that in the unfumigated control. A method was developed which varies the proportion of mineralized C from the unfumigated soil (UFc) that is subtracted in calculating CFI biomass C. The proportion subtracted (P) varies according to a linear function of the ratio of C mineralized in the fumigated (Fc) and unfumigated (UFc) samples, with two parameters K1 and K2: P = K1(Fc/UFc) + K2. These parameters were estimated by regression of CFI biomass C, calculated according to the equation MBC = (Fc - P·UFc)/0.41, against that derived by direct microscopy in a series of California soils. The parameter values which gave the best estimate of microscopic biomass from the fumigation data were K1 = 0.29 and K2 = 0.23 (R2 = 0.87). Substituting these parameter values, the equation simplifies to MBC = 1.73Fc - 0.56UFc. The equation was applied to other CFI data to determine its effect on the measurement of MBC. The use of this approach corrected data that were previously difficult to interpret and helped to reveal temporal trends and changes in MBC associated with soil depth. (author). 40 refs., 4 tabs., 3 figs
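    The parameter substitution reported in this record can be checked numerically; the Fc and UFc values below are invented.

```python
# MBC from fumigated (Fc) and unfumigated (UFc) mineralized C, using the
# parameters reported above; Fc and UFc values are invented.
K1, K2 = 0.29, 0.23

def mbc(Fc, UFc):
    P = K1 * (Fc / UFc) + K2        # proportion of control to subtract
    return (Fc - P * UFc) / 0.41    # 0.41 is the CFI conversion factor

Fc, UFc = 120.0, 40.0
# The substitution collapses to roughly MBC = 1.73*Fc - 0.56*UFc:
print(round(mbc(Fc, UFc), 1), round(1.73 * Fc - 0.56 * UFc, 1))  # 185.4 185.2
```

The two numbers agree to within the rounding of the published 1.73 and 0.56 coefficients (exactly 0.71/0.41 and 0.23/0.41).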

  18. Liquid scintillation counting of chlorophyll

    International Nuclear Information System (INIS)

    Fric, F.; Horickova, B.; Haspel-Horvatovic, E.

    1975-01-01

    A precise and reproducible liquid scintillation counting method was worked out for measuring the radioactivity of 14C-labelled chlorophyll a and chlorophyll b solutions without previous bleaching. The spurious count rate caused by luminescence of the scintillant-chlorophyll system is eliminated by using a suitable scintillant and by measuring the radioactivity at 4 to 8 °C after an appropriate time of dark adaptation. Bleaching of the chlorophyll solutions is necessary only when measuring very low radioactivity. (author)

  19. Radiostrontium accumulation in animal bones: development of a radiochemical method by ultra low-level liquid scintillation counting for its quantification.

    Science.gov (United States)

    Iammarino, Marco; Dell'Oro, Daniela; Bortone, Nicola; Mangiacotti, Michele; Chiaravalle, Antonio Eugenio

    2018-03-31

    Strontium-90 (90Sr) is a fission product resulting from the use of uranium and plutonium in nuclear reactors and weapons. Consequently, it may be found in the environment as a consequence of nuclear fallout, nuclear weapons testing, and improper waste management. When present in the environment, strontium-90 may be taken into the animal body through drinking water, eating food, or breathing air. The primary health effects are bone tumors and tumors of the blood-cell-forming organs, due to the beta particles emitted by both 90Sr and yttrium-90 (90Y). Another health concern is the inhibition of calcification and bone deformities in animals. At present, radiometric methods for the determination of 90Sr in animal bones are lacking. This article describes a radiochemical method for the determination of 90Sr in animal bones by ultra low-level liquid scintillation counting. The method's precision and trueness have been demonstrated through validation tests (CV% = 12.4%; mean recovery = 98.4%). A detection limit and decision threshold of 8 and 3 mBq kg-1, respectively, are another strong point of this analytical procedure. This new radiochemical method permits the selective extraction of 90Sr without interferences; it is suitable for radiocontamination surveillance programs and is also an improvement with respect to food safety controls.

  20. Liquid Scintillation Counting Standardization of 22NaCl by the CIEMAT/NIST method; Calibracion por Centelleo Liquido del 22NaCl, mediante el metodo CIEMAT/NIST

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Barquero, L.; Grau Carles, A.; Grau Malonda, A.

    1995-07-01

    We describe a procedure for preparing a stable solution of 22NaCl for liquid scintillation counting, and its counting stability and spectral evolution in Insta-Gel(R) are studied. The solution has been standardised in terms of activity concentration by the CIEMAT/NIST method, with discrepancies between experimental and computed efficiencies lower than 0.4% and an overall uncertainty of 0.35%. (Author) 4 refs.

  1. Integrated project delivery methods for energy renovation of social housing

    Directory of Open Access Journals (Sweden)

    Tadeo Baldiri Salcedo Rahola

    2015-11-01

    renting them. As such, SHOs are used to dealing with renovations on a professional basis. The limited financial capacity of SHOs to realise energy renovations magnifies the importance of improving process performance in order to get the best possible outcomes. In the last 30 years numerous authors have addressed the need to improve the performance of traditional construction processes via alternative project delivery methods. However, very little is known about the specifics of renovation processes for social housing, the feasibility of applying innovative construction management methods, and the consequences for the process, for the role of all the actors involved and for the results of the projects. The aim of this study is to provide an insight into the project delivery methods available to SHOs when they are undertaking energy renovation projects and to evaluate how these methods could facilitate the achievement of higher process performance. The main research question is: How can Social Housing Organisations improve the performance of energy renovation processes using more integrated project delivery methods? The idea of a PhD thesis about social housing renovation processes originated from the participation of TU Delft as a research partner in the Intelligent Energy Europe project SHELTER1, which was carried out between 2010 and 2013. The aim of the SHELTER project was to promote and facilitate the use of new models of cooperation, inspired by integrated design, for the energy renovation of social housing. The SHELTER project was a joint effort between six social housing organisations (Arte Genova, Italy; Black Country Housing Group, United Kingdom; Bulgarian Housing Association, Bulgaria; Dynacité, France; Logirep, France and Société Wallonne du Logement, Belgium), three European professional federations based in Brussels (Architects Council of Europe, Cecodhas Housing Europe and European Builders Confederation) and one research partner (Delft University of

  2. Method and apparatus to debug an integrated circuit chip via synchronous clock stop and scan

    Science.gov (United States)

    Bellofatto, Ralph E [Ridgefield, CT; Ellavsky, Matthew R [Rochester, MN; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Gooding, Thomas M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Hehenberger, Lance G [Leander, TX; Ohmacht, Martin [Yorktown Heights, NY

    2012-03-20

    An apparatus and method for evaluating a state of an electronic or integrated circuit (IC), each IC including one or more processor elements for controlling operations of IC sub-units, and each IC supporting multiple frequency clock domains. The method comprises: generating a synchronized set of enable signals in correspondence with one or more IC sub-units for starting operation of those sub-units according to a determined timing configuration; counting, in response to one signal of the synchronized set of enable signals, a number of main processor IC clock cycles; upon attaining a desired clock cycle number, generating a stop signal for each unique frequency clock domain to synchronously stop the functional clock of each respective domain; and, upon synchronously stopping all on-chip functional clocks on all frequency clock domains in a deterministic fashion, scanning out data values at the desired IC chip state. The apparatus and methodology enable construction of a cycle-by-cycle view of any part of the state of a running IC chip, using a combination of on-chip circuitry and software.

  3. Do your syringes count?

    International Nuclear Information System (INIS)

    Brewster, K.

    2002-01-01

    Full text: This study was designed to investigate anecdotal evidence that residual Sestamibi (MIBI) activity varied in certain situations. For rest studies, different brands of syringes were tested to see if the residuals varied. The period of time MIBI doses remained in the syringe between dispensing and injection was also considered as a possible source of increased residual counts. Stress MIBI syringe residual activities were measured to assess whether the method of stress test affected residual activity. MIBI was reconstituted using 13 GBq of technetium in 3 mL of normal saline and then boiled for 10 minutes. Doses were dispensed according to department protocol and injected via cannula. Residual syringes were collected for three syringe types; in each case the barrel and plunger were measured separately. As the syringe is flushed during the exercise stress test but not the pharmacological stress test, the chosen method was recorded. No relationship was demonstrated between the time MIBI remained in a syringe prior to injection and residual activity. Residual activity was not affected by the method of stress test used. Actual injected activity can be calculated if the amount of activity remaining in the syringe post injection is known. Imaging time can be adjusted for residual activity to optimise count statistics. Preliminary results in this study indicate there is no difference in residual activity between syringe brands. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc

  4. Standardization of 241Am by digital coincidence counting, liquid scintillation counting and defined solid angle counting

    International Nuclear Information System (INIS)

    Balpardo, C.; Capoulat, M.E.; Rodrigues, D.; Arenillas, P.

    2010-01-01

    The nuclide 241Am decays by alpha emission to 237Np. Most of the decays (84.6%) populate the excited level of 237Np at 59.54 keV. Digital coincidence counting was applied to standardize a solution of 241Am by alpha-gamma coincidence counting with efficiency extrapolation. Electronic discrimination was implemented with a pressurized proportional counter, and the results were compared with two other independent techniques: liquid scintillation counting using the logical sum of double coincidences in a TDCR array, and defined solid angle counting taking into account activity inhomogeneity in the active deposit. The results show consistency between the three methods within 0.3%. An ampoule of this solution will be sent to the International Reference System (SIR) during 2009. Uncertainties were analysed and compared in detail for the three applied methods.

  5. Phase-integral method allowing nearlying transition points

    CERN Document Server

    Fröman, Nanny

    1996-01-01

    The efficiency of the phase-integral method developed by the present authors has been shown both analytically and numerically in many publications. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader into the background of this book, we start by describing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...

  6. Integrating financial theory and methods in electricity resource planning

    Energy Technology Data Exchange (ETDEWEB)

    Felder, F.A. [Economics Resource Group, Cambridge, MA (United States)

    1996-02-01

    Decision makers throughout the world are introducing risk and market forces in the electric power industry to lower costs and improve services. Incentive-based regulation (IBR), which replaces cost-of-service ratemaking with an approach that divorces costs from revenues, exposes the utility to the risk of profits or losses depending on its performance. Regulators also are allowing for competition within the industry, most notably in the wholesale market and possibly in the retail market. Two financial approaches that incorporate risk into resource planning are evaluated: risk-adjusted discount rates (RADR) and options theory (OT). These two complementary approaches are an improvement over the standard present value revenue requirement (PVRR). However, each method has some important limitations. By correctly using RADR and OT and understanding their limitations, decision makers can improve their ability to value risk properly in power plant projects and integrated resource plans. (Author)
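    The basic RADR idea, discounting risky cash flows at a higher rate than riskless ones, can be shown in a few lines; all figures below are invented.

```python
# Present value of a level cash-flow stream at a riskless rate and at a
# risk-adjusted discount rate (RADR); all figures are invented.
def pv(cash_flows, rate):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, 1))

cash_flows = [120.0] * 10            # annual net revenues, e.g. $M
riskless = pv(cash_flows, 0.05)      # ~926.6
risk_adjusted = pv(cash_flows, 0.11) # ~706.7: the risk premium lowers value
print(round(riskless, 1), round(risk_adjusted, 1))
```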

  7. Apparatus and method for defect testing of integrated circuits

    Science.gov (United States)

    Cole, Jr., Edward I.; Soden, Jerry M.

    2000-01-01

    An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V_DD, to an IC under test and measures a transient voltage component signal, V_DDT, that is produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V_DDT signal can be used to distinguish between defective and defect-free (i.e., known good) ICs. The V_DDT signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.

  8. Integrated airfoil and blade design method for large wind turbines

    DEFF Research Database (Denmark)

    Zhu, Wei Jun; Shen, Wen Zhong

    2013-01-01

    This paper presents an integrated method for designing airfoil families of large wind turbine blades. For a given rotor diameter and tip speed ratio, the optimal airfoils are designed based on the local speed ratios. To achieve high power performance at low cost, the airfoils are designed...... with an objective of high Cp and small chord length. When the airfoils are obtained, the optimum flow angle and rotor solidity are calculated which forms the basic input to the blade design. The new airfoils are designed based on the previous in-house airfoil family which was optimized at a Reynolds number of 3...... million. A novel shape perturbation function is introduced to optimize the geometry on the existing airfoils and thus simplify the design procedure. The viscous/inviscid code XFOIL is used as the aerodynamic tool for airfoil optimization where the Reynolds number is set at 16 million with a free...

  9. Integrated airfoil and blade design method for large wind turbines

    DEFF Research Database (Denmark)

    Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær

    2014-01-01

    This paper presents an integrated method for designing airfoil families of large wind turbine blades. For a given rotor diameter and a tip speed ratio, optimal airfoils are designed based on the local speed ratios. To achieve a high power performance at low cost, the airfoils are designed...... with the objectives of high Cp and small chord length. When the airfoils are obtained, the optimum flow angle and rotor solidity are calculated which forms the basic input to the blade design. The new airfoils are designed based on a previous in-house designed airfoil family which was optimized at a Reynolds number...... of 3 million. A novel shape perturbation function is introduced to optimize the geometry based on the existing airfoils which simplifies the design procedure. The viscous/inviscid interactive code XFOIL is used as the aerodynamic tool for airfoil optimization at a Reynolds number of 16 million...
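
    The optimum flow angle and chord that the abstract describes as the basic input to blade design follow, in textbook form, from blade-element-momentum (BEM) theory. The sketch below uses the standard Betz-optimum relations with illustrative inputs (rotor radius, tip speed ratio, blade count, design lift coefficient); it is not the authors' in-house design code.

```python
import math

def optimum_blade_geometry(r, rotor_radius, tip_speed_ratio, n_blades, cl_design):
    """Textbook BEM optimum for one radial station r: returns the optimum
    inflow (flow) angle phi [rad] and the corresponding chord [m]."""
    lam_r = tip_speed_ratio * r / rotor_radius      # local speed ratio
    phi = (2.0 / 3.0) * math.atan(1.0 / lam_r)      # optimum flow angle
    # Betz-optimal chord for the assumed design lift coefficient
    chord = 8.0 * math.pi * r * (1.0 - math.cos(phi)) / (n_blades * cl_design)
    return phi, chord

# Illustrative 80 m rotor, tip speed ratio 8, three blades, Cl = 1.1
phi_in, chord_in = optimum_blade_geometry(20.0, 80.0, 8.0, 3, 1.1)
phi_out, chord_out = optimum_blade_geometry(60.0, 80.0, 8.0, 3, 1.1)
```

    The flow angle decreases and the optimal chord tapers toward the tip, which is the qualitative behaviour the integrated design method builds on.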

  10. ARE METHODS USED TO INTEGRATE STANDARDIZED MANAGEMENT SYSTEMS A CONDITIONING FACTOR OF THE LEVEL OF INTEGRATION? AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Merce Bernardo

    2011-09-01

    Full Text Available Organizations are increasingly implementing multiple Management System Standards (MSSs) and considering managing the related Management Systems (MSs) as a single system. The aim of this paper is to analyze whether the methods used to integrate standardized MSs condition the level of integration of those MSs. A descriptive methodology has been applied to 343 Spanish organizations registered to, at least, ISO 9001 and ISO 14001. Seven groups of these organizations using different combinations of methods have been analyzed. Results show that these organizations have a high level of integration of their MSs. The most common method used was the process map. Organizations using a combination of different methods achieve higher levels of integration than those using a single method. However, no evidence has been found to confirm the relationship between the method used and the integration level achieved.

  11. Methods of assessing total doses integrated across pathways

    International Nuclear Information System (INIS)

    Grzechnik, M.; Camplin, W.; Clyne, F.; Allott, R.; Webbe-Wood, D.

    2006-01-01

    Calculated doses for comparison with limits resulting from discharges into the environment should be summed across all relevant pathways and food groups to ensure adequate protection. Current methodology for assessments used in the Radioactivity in Food and the Environment (RIFE) reports separates doses from pathways related to liquid discharges of radioactivity to the environment from those due to gaseous releases. Surveys of local inhabitant food consumption and occupancy rates are conducted in the vicinity of nuclear sites. Information has been recorded in an integrated way, such that the data for each individual are recorded for all pathways of interest. These can include consumption of foods such as fish, crustaceans, molluscs, fruit and vegetables, milk and meats. Occupancy times over beach sediments and time spent in close proximity to the site are also recorded for inclusion of external and inhalation radiation dose pathways. The integrated habits survey data may be combined with monitored environmental radionuclide concentrations to calculate total dose. The criteria for successful adoption of a method for this calculation were: reproducibility (can others easily use the approach and reassess doses?); rigour and realism (how good is the match with reality?); transparency (a measure of the ease with which others can understand how the calculations are performed and what they mean); and homogeneity (is the group receiving the dose relatively homogeneous with respect to age, diet and those aspects that affect the dose received?). Five methods of total dose calculation were compared and ranked according to their suitability. Each method was labelled (A to E) and given a short, relevant name for identification. The methods are described below: (A) Individual: doses to individuals are calculated and critical group selection is dependent on dose received. (B) Individual Plus: as in A, but consumption and occupancy rates for high dose is used to derive rates for application in
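
    In essence, the total dose for one surveyed individual is a sum of pathway products: intake rate times activity concentration times dose coefficient for ingestion pathways, plus occupancy time times dose rate for external pathways (method A, per-individual doses). A minimal sketch (all numbers below are invented for illustration, not RIFE survey data):

```python
# Per-pathway intake rates for one surveyed individual (kg/y or h/y); invented.
habits = {"fish": 40.0, "molluscs": 5.0, "beach_occupancy_h": 300.0}

# Monitored activity concentrations (Bq/kg) and external dose rate (Sv/h); invented.
concentrations = {"fish": 12.0, "molluscs": 30.0}
external_dose_rate = 2.0e-8  # Sv/h over beach sediment (assumed)

# Ingestion dose coefficient (Sv/Bq); illustrative value only.
dose_coefficient = 1.3e-8

def total_dose(habits, concentrations, dose_coefficient, external_dose_rate):
    """Sum ingestion and external pathway doses for one individual (Sv/y)."""
    ingestion = sum(habits[p] * concentrations[p] * dose_coefficient
                    for p in concentrations)
    external = habits["beach_occupancy_h"] * external_dose_rate
    return ingestion + external

dose = total_dose(habits, concentrations, dose_coefficient, external_dose_rate)
```

    Summing per individual in this way, rather than per pathway, is what allows a critical group to be selected on total dose received.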

  12. Clean Hands Count

    Medline Plus


  13. The Big Pumpkin Count.

    Science.gov (United States)

    Coplestone-Loomis, Lenny

    1981-01-01

    Pumpkin seeds are counted after students convert pumpkins to jack-o-lanterns. Among the activities involved, pupils learn to count by 10s, make estimates, and to construct a visual representation of 1,000. (MP)

  14. Integrated method for the measurement of trace nitrogenous atmospheric bases

    Directory of Open Access Journals (Sweden)

    D. Key

    2011-12-01

    Full Text Available Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  15. Stress estimation in reservoirs using an integrated inverse method

    Science.gov (United States)

    Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre

    2018-05-01

    Estimating the stress in reservoirs and their surroundings prior to the production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such initial stress state. The 3D stress state is constructed with the displacement-based finite element method assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. The disregard of the geological history and the simplified rheological assumptions mean that only the stress field, statically admissible and matching the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
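
    As a greatly simplified illustration of the "stress as a piecewise linear function of depth" parameterization (the actual method optimizes boundary tractions with CMA-ES against a 3D finite-element model), the sketch below fits a single linear stress-depth trend to hypothetical leak-off test values by ordinary least squares:

```python
# Schematic stand-in for the inversion: fit sigma(z) = a + b*z to wellbore
# stress estimates (e.g. from leak-off tests). The depths and stresses are
# invented; the real method fits piecewise-linear Neumann boundary conditions.

def fit_linear_stress(depths, stresses):
    """Ordinary least-squares fit of stress vs. depth; returns (a, b)."""
    n = len(depths)
    mz = sum(depths) / n
    ms = sum(stresses) / n
    b = sum((z - mz) * (s - ms) for z, s in zip(depths, stresses)) / \
        sum((z - mz) ** 2 for z in depths)
    a = ms - b * mz
    return a, b

# Hypothetical leak-off data following a 20 MPa/km gradient.
depths = [1000.0, 1500.0, 2000.0, 2500.0]   # m
stresses = [20.0, 30.0, 40.0, 50.0]         # MPa
a, b = fit_linear_stress(depths, stresses)
```

    The fitted gradient b (MPa/m) plays the role of one linear segment of the depth-dependent boundary condition; the CMA-ES step generalizes this to several segments constrained by the full 3D elastic model.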

  16. Standard test method for non-destructive assay of nuclear material in waste by passive and active neutron counting using a differential Die-away system

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers a system that performs nondestructive assay (NDA) of uranium or plutonium, or both, using the active, differential die-away technique (DDT), and passive neutron coincidence counting. Results from the active and passive measurements are combined to determine the total amount of fissile and spontaneously-fissioning material in drums of scrap or waste. Corrections are made to the measurements for the effects of neutron moderation and absorption, assuming that the effects are averaged over the volume of the drum and that no significant lumps of nuclear material are present. These systems are most widely used to assay low-level and transuranic waste, but may also be used for the measurement of scrap materials. The examples given within this test method are specific to the second-generation Los Alamos National Laboratory (LANL) passive-active neutron assay system. 1.1.1 In the active mode, the system measures fissile isotopes such as 235U and 239Pu. The neutrons from a pulsed, 14-MeV ne...

  17. Beta emitter radionuclides (90Sr contamination in animal feed: validation and application of a radiochemical method by ultra low level liquid scintillation counting

    Directory of Open Access Journals (Sweden)

    Marco Iammarino

    2015-02-01

    Full Text Available 90Sr is considered a dangerous contaminant of agri-food supply chains due to its chemical affinity with calcium, which facilitates its absorption in bone. 90Sr accumulation in raw materials and then in final products is particularly significant in relation to its ability to transfer into animal source products. The transfer of radionuclides (137Cs and 90Sr) from the environment to forages and then to products of animal origin (milk, cow and pork meats) has been studied and evaluated in different studies carried out in contaminated areas, from the Chernobyl disaster until today. In the present work, the development and validation of a radiochemical method for the detection of 90Sr in different types of animal feed, and the application of this technique for routine control activities, are presented. Liquid scintillation counting was the employed analytical technique, since it is able to determine very low activity concentrations of 90Sr (<0.01 Bq kg–1). All samples analysed showed a 90Sr contamination much higher than the method detection limit (0.008 Bq kg–1). In particular, the highest mean activity concentration was registered in hay samples (2.93 Bq kg–1), followed by silage samples (2.07 Bq kg–1) and animal feeds (0.77 Bq kg–1). Nevertheless, all samples were characterized by 90Sr activity concentrations much lower than reference limits. This notwithstanding, the necessity to monitor these levels was confirmed, especially considering that 90Sr is a possible carcinogen for humans.

  18. Digital coincidence counting

    International Nuclear Information System (INIS)

    Buckman, S.M.; Ius, D.

    1996-01-01

    This paper reports on the development of a digital coincidence-counting system which comprises a custom-built data acquisition card and associated PC software. The system has been designed to digitise the pulse-trains from two radiation detectors at a rate of 20 MSamples/s with 12-bit resolution. Through hardware compression of the data, the system can continuously record both individual pulse-shapes and the time intervals between pulses. Software-based circuits are used to process the stored pulse trains. These circuits are constructed simply by linking together icons representing various components such as coincidence mixers, time delays, single-channel analysers, deadtimes and scalers. This system enables a pair of pulse trains to be processed repeatedly using any number of different methods. Some preliminary results are presented in order to demonstrate the versatility and efficiency of this new method. (orig.)
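
    A software coincidence mixer of the kind described can be sketched as a two-pointer sweep over two recorded time-stamp trains. The resolving time and pulse trains below are invented for illustration:

```python
def coincidences(train_a, train_b, resolving_time):
    """Count coincident pulse pairs between two time-stamp trains (seconds,
    sorted ascending), pairing each pulse at most once via a two-pointer
    sweep -- a software analogue of a hardware coincidence mixer."""
    i = j = n = 0
    while i < len(train_a) and j < len(train_b):
        dt = train_b[j] - train_a[i]
        if abs(dt) <= resolving_time:
            n += 1
            i += 1
            j += 1
        elif dt < 0:
            j += 1          # b train lags: advance it
        else:
            i += 1          # a train lags: advance it
    return n

# Invented pulse trains with a 100 microsecond resolving time
n_c = coincidences([1.0, 2.0, 5.0, 9.0], [1.00004, 3.0, 5.00009, 8.0], 1e-4)
```

    Because the raw trains are stored, the same data can be reprocessed with different resolving times or delays, which is the key advantage of the digital approach over fixed hardware circuits.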

  19. Digital coincidence counting

    Science.gov (United States)

    Buckman, S. M.; Ius, D.

    1996-02-01

    This paper reports on the development of a digital coincidence-counting system which comprises a custom-built data acquisition card and associated PC software. The system has been designed to digitise the pulse-trains from two radiation detectors at a rate of 20 MSamples/s with 12-bit resolution. Through hardware compression of the data, the system can continuously record both individual pulse-shapes and the time intervals between pulses. Software-based circuits are used to process the stored pulse trains. These circuits are constructed simply by linking together icons representing various components such as coincidence mixers, time delays, single-channel analysers, deadtimes and scalers. This system enables a pair of pulse trains to be processed repeatedly using any number of different methods. Some preliminary results are presented in order to demonstrate the versatility and efficiency of this new method.

  20. Efficacy of ivermectin against gastrointestinal nematodes of cattle in Denmark evaluated by different methods for analysis of faecal egg count reduction

    Directory of Open Access Journals (Sweden)

    Miguel Peña-Espinoza

    2016-12-01

    Full Text Available The efficacy of ivermectin (IVM) against gastrointestinal nematodes in Danish cattle was assessed by the faecal egg count reduction test (FECRT). Six cattle farms with a history of clinical parasitism and avermectin use were included. On the day of treatment (Day 0), 20 naturally infected calves per farm (total n = 120) were stratified by initial faecal egg counts (FEC) and randomly allocated to a treatment group dosed with 0.2 mg IVM kg−1 body weight s.c. (IVM; n = 10) or an untreated control group (CTL; n = 10). Individual FEC were obtained at Day 0 and Day 14 post-treatment, and pooled faeces by group were cultured to isolate L3 for detection of Ostertagia ostertagi and Cooperia oncophora by qPCR. Treatment efficacies were analysed using the recommended WAAVP method and two open-source statistical procedures based on Bayesian modelling: ‘eggCounts’ and ‘Bayescount’. A simulation study evaluated the performance of the different procedures in correctly identifying FEC reduction percentages of simulated bovine FEC data representing the observed real data. In the FECRT, reduced IVM efficacy was detected in three farms by all procedures using data from treated animals only, and in one farm according to the procedures including data from treated and untreated cattle. Post-treatment, O. ostertagi and C. oncophora L3 were detected by qPCR in faeces of treated animals from one and three herds with declared reduced IVM efficacy, respectively. Based on the simulation study, all methods showed reduced performance when FEC aggregation increased post-treatment, which suggested that a treatment group of 10 animals is insufficient for the FECRT in cattle. This is the first report of reduced anthelmintic efficacy in Danish cattle and warrants the implementation of larger surveys. Advantages and caveats regarding the use of Bayesian modelling, and the relevance of including untreated cattle in the FECRT, are discussed.
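
    The underlying FECRT arithmetic, with and without the correction for untreated controls that the abstract discusses, reduces to ratios of arithmetic-mean egg counts (the Bayesian ‘eggCounts’/‘Bayescount’ procedures are not reproduced here). A sketch with made-up counts:

```python
def fecr_treated_only(pre, post):
    """WAAVP-style egg count reduction from the treated group only:
    100 * (1 - mean(post) / mean(pre)), using arithmetic means."""
    return 100.0 * (1.0 - (sum(post) / len(post)) / (sum(pre) / len(pre)))

def fecr_with_controls(t_pre, t_post, c_pre, c_post):
    """Reduction corrected by the change in the untreated control group, so
    natural rises or falls in egg output are not mistaken for drug effect."""
    ratio_t = (sum(t_post) / len(t_post)) / (sum(t_pre) / len(t_pre))
    ratio_c = (sum(c_post) / len(c_post)) / (sum(c_pre) / len(c_pre))
    return 100.0 * (1.0 - ratio_t / ratio_c)

# Made-up eggs-per-gram counts for illustration
t_pre, t_post = [200, 400, 600], [10, 20, 30]
c_pre, c_post = [300, 500], [240, 400]
fecr_t = fecr_treated_only(t_pre, t_post)                    # 95.0 %
fecr_tc = fecr_with_controls(t_pre, t_post, c_pre, c_post)   # 93.75 %
```

    Here the control group's counts also fell between sampling days, so the corrected estimate is lower than the treated-only one, illustrating why including untreated cattle can change the verdict on efficacy.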

  1. The Integral Method, a new approach to quantify bactericidal activity.

    Science.gov (United States)

    Gottardi, Waldemar; Pfleiderer, Jörg; Nagl, Markus

    2015-08-01

    The bactericidal activity (BA) of antimicrobial agents is generally derived from the results of killing assays. A reliable quantitative characterization and particularly a comparison of these substances, however, are impossible with this information. We here propose a new method that takes into account the course of the complete killing curve for assaying BA and that allows a clear-cut quantitative comparison of antimicrobial agents with only one number. The new Integral Method, based on the reciprocal area below the killing curve, reliably calculates an average BA [log10 CFU/min] and, by implementation of the agent's concentration C, the average specific bactericidal activity SBA=BA/C [log10 CFU/min/mM]. Based on experimental killing data, the pertaining BA and SBA values of exemplary active halogen compounds were established, allowing quantitative assertions. N-chlorotaurine (NCT), chloramine T (CAT), monochloramine (NH2Cl), and iodine (I2) showed extremely diverging SBA values of 0.0020±0.0005, 1.11±0.15, 3.49±0.22, and 291±137 log10 CFU/min/mM, respectively, against Staphylococcus aureus. This immediately demonstrates an approximately 550-fold stronger activity of CAT, 1730-fold of NH2Cl, and 150,000-fold of I2 compared to NCT. The inferred quantitative assertions and conclusions prove the new method suitable for characterizing bactericidal activity. Its application comprises the effect of defined agents on various bacteria, the consequence of temperature shifts, the influence of varying drug structure, dose-effect relationships, ranking of isosteric agents, comparison of competing commercial antimicrobial formulations, and the effect of additives. Copyright © 2015 Elsevier B.V. All rights reserved.
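
    One plausible numerical reading of the "reciprocal area" idea is sketched below: the area under the killing curve (log10 CFU versus time) is computed by the trapezoidal rule, and the normalization is chosen so that a log-linear killing curve returns its slope. The exact normalization in the published method may differ; treat this as an assumption.

```python
def avg_bactericidal_activity(times, log_counts):
    """Average BA [log10 CFU/min] from a killing curve, normalized so that a
    log-linear curve returns its slope: BA = y0^2 / (2 * area), where y0 is
    the initial log10 count and the area under the curve is trapezoidal."""
    area = sum(0.5 * (log_counts[i] + log_counts[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))
    return log_counts[0] ** 2 / (2.0 * area)

# Log-linear killing: 6 log10 CFU killed at 0.5 log10/min over 12 min.
times = [0, 2, 4, 6, 8, 10, 12]
logs = [6, 5, 4, 3, 2, 1, 0]
ba = avg_bactericidal_activity(times, logs)
sba = ba / 1.0   # specific BA at an assumed 1 mM agent concentration
```

    A curve with a long shoulder or tail accumulates more area for the same total kill, and is correctly scored as less active than a straight log-linear kill, which is the point of integrating the whole curve rather than using a single endpoint.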

  2. Evaluation of the Petrifilm™ EB and TEMPO® EB systems with ISO 21528-2:2004 method for the count of Enterobacteriaceae in milk

    Directory of Open Access Journals (Sweden)

    Andréia Cirolini

    2013-09-01

    Full Text Available The development of alternative microbiological techniques is driven by the need to deliver rapid results in the manufacturing process of foods, but it is important that these methods be evaluated for each application. The objective of the present study was to assess the Petrifilm™ EB and TEMPO® EB systems against ISO 21528-2:2004 for the count of Enterobacteriaceae in pasteurized and UHT milk samples. We analyzed the microflora of 141 pasteurized milk samples, 15 samples of artificially contaminated pasteurized milk and 15 samples of artificially contaminated UHT milk. Regression analysis between the Petrifilm™ EB method and ISO 21528-2 showed a high correlation in the samples: r = 0.90 for the microflora of pasteurized milk, r = 0.98 for artificially contaminated pasteurized milk and r = 0.99 for artificially contaminated UHT milk. In evaluating the TEMPO® EB system against ISO 21528-2, the correlation was also significant: r = 0.86 for the microflora of pasteurized milk, r = 0.96 for artificially contaminated pasteurized milk and r = 0.99 for artificially contaminated UHT milk. No statistically significant differences were observed between the three methods in the analysis of artificially contaminated pasteurized and UHT milk at three inoculum levels. In conclusion, the Petrifilm™ EB and TEMPO® EB systems may be an alternative to ISO 21528-2:2004 for the Enterobacteriaceae assay in milk because of their ease of operation and the reduction in time achieved for conducting the microbiological assay.
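
    The reported r values are ordinary Pearson correlation coefficients between paired counts from a candidate system and the ISO reference. A self-contained sketch (the data are invented):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired counts (e.g. log10 CFU
    from Petrifilm EB vs. the ISO 21528-2 reference on the same samples)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Perfectly linear invented data gives r = 1
r_pos = pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

    High r alone shows agreement in ranking, not in absolute counts, which is why the study also tests for statistically significant differences between methods at fixed inoculum levels.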

  3. Standardization of Ga-68 by coincidence measurements, liquid scintillation counting and 4πγ counting.

    Science.gov (United States)

    Roteta, Miguel; Peyres, Virginia; Rodríguez Barquero, Leonor; García-Toraño, Eduardo; Arenillas, Pablo; Balpardo, Christian; Rodrígues, Darío; Llovera, Roberto

    2012-09-01

    The radionuclide 68Ga is one of the few positron emitters that can be prepared in-house without the use of a cyclotron. It disintegrates to the ground state of 68Zn partially by positron emission (89.1%) with a maximum energy of 1899.1 keV, and partially by electron capture (10.9%). This nuclide has been standardized within the framework of a cooperation project between the radionuclide metrology laboratories of CIEMAT (Spain) and CNEA (Argentina). Measurements involved several techniques: 4πβ-γ coincidences, integral gamma counting, and liquid scintillation counting using the triple-to-double coincidence ratio and CIEMAT/NIST methods. Given the short half-life of the radionuclide assayed, a direct comparison between results from both laboratories was excluded, and a comparison of experimental efficiencies of similar NaI detectors was used instead. Copyright © 2012 Elsevier Ltd. All rights reserved.
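
    The 4πβ-γ coincidence technique rests on a classic identity: with unknown detection efficiencies, the product of the beta and gamma counting rates divided by the coincidence rate gives the activity directly. A minimal sketch of the ideal case (no background, dead-time, or decay-scheme corrections; rates are invented):

```python
def coincidence_activity(rate_beta, rate_gamma, rate_coinc):
    """Ideal 4pi(beta)-gamma coincidence counting: with efficiencies e_b, e_g,
    N_beta = A*e_b, N_gamma = A*e_g and N_c = A*e_b*e_g, so the activity is
    A = N_beta * N_gamma / N_c, independent of the (unknown) efficiencies."""
    return rate_beta * rate_gamma / rate_coinc

# Illustrative rates consistent with A = 1000 Bq, e_b = 0.6, e_g = 0.3
activity = coincidence_activity(600.0, 300.0, 180.0)
```

    Real standardizations add corrections for background, dead time, resolving time, and the decay scheme (here the 10.9% electron-capture branch), which is where the efficiency-extrapolation and CIEMAT/NIST machinery comes in.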

  4. Comparison of Kato-Katz thick-smear and McMaster egg counting method for the assessment of drug efficacy against soil-transmitted helminthiasis in school children in Jimma Town, Ethiopia.

    Science.gov (United States)

    Bekana, Teshome; Mekonnen, Zeleke; Zeynudin, Ahmed; Ayana, Mio; Getachew, Mestawet; Vercruysse, Jozef; Levecke, Bruno

    2015-10-01

    There is a paucity of studies that compare efficacy of drugs obtained by different diagnostic methods. We compared the efficacy of a single oral dose albendazole (400 mg), measured as egg reduction rate, against soil-transmitted helminth infections in 210 school children (Jimma Town, Ethiopia) using both Kato-Katz thick smear and McMaster egg counting method. Our results indicate that differences in sensitivity and faecal egg counts did not imply a significant difference in egg reduction rate estimates. The choice of a diagnostic method to assess drug efficacy should not be based on sensitivity and faecal egg counts only. © The Author 2015. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Statistical approaches to the analysis of point count data: A little extra information can go a long way

    Science.gov (United States)

    Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating the probability that a bird is detected during a count into two components: (1) the probability that the bird vocalizes during the count and (2) the probability that this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
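
    The proposed decomposition of detection probability, together with the area correction, can be written down directly. The probabilities, count, and radius below are illustrative, not values from the study:

```python
import math

def detection_probability(p_vocalize, p_heard_given_vocal):
    """Per-bird detection probability for an auditory count, decomposed as
    proposed in the text: p = Pr(vocalizes) * Pr(detected | vocalizes)."""
    return p_vocalize * p_heard_given_vocal

def density_per_ha(count, p, radius_m):
    """Birds per hectare from a fixed-radius count, correcting the raw count
    for detection probability and for the area actually sampled."""
    area_ha = math.pi * radius_m ** 2 / 10000.0
    return count / (p * area_ha)

p = detection_probability(0.8, 0.5)   # illustrative component probabilities
d = density_per_ha(6, p, 100.0)       # 6 birds detected within a 100 m radius
```

    The same raw count of 6 birds implies very different densities depending on p and the radius, which is why the text insists that both detection probability and sampled area be estimated rather than assumed.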

  6. Developing integrated methods to address complex resource and environmental issues

    Science.gov (United States)

    Smith, Kathleen S.; Phillips, Jeffrey D.; McCafferty, Anne E.; Clark, Roger N.

    2016-02-08

    Introduction: This circular provides an overview of selected activities that were conducted within the U.S. Geological Survey (USGS) Integrated Methods Development Project, an interdisciplinary project designed to develop new tools and conduct innovative research requiring integration of geologic, geophysical, geochemical, and remote-sensing expertise. The project was supported by the USGS Mineral Resources Program, and its products and acquired capabilities have broad applications to missions throughout the USGS and beyond. In addressing challenges associated with understanding the location, quantity, and quality of mineral resources, and in investigating the potential environmental consequences of resource development, a number of field and laboratory capabilities and interpretative methodologies evolved from the project that have applications to traditional resource studies as well as to studies related to ecosystem health, human health, disaster and hazard assessment, and planetary science. New or improved tools and research findings developed within the project have been applied to other projects and activities. Specifically, geophysical equipment and techniques have been applied to a variety of traditional and nontraditional mineral- and energy-resource studies, military applications, environmental investigations, and applied research activities that involve climate change, mapping techniques, and monitoring capabilities. Diverse applied geochemistry activities provide a process-level understanding of the mobility, chemical speciation, and bioavailability of elements, particularly metals and metalloids, in a variety of environmental settings. Imaging spectroscopy capabilities maintained and developed within the project have been applied to traditional resource studies as well as to studies related to ecosystem health, human health, disaster assessment, and planetary science. Brief descriptions of capabilities and laboratory facilities and summaries of some

  7. Radon counting statistics - a Monte Carlo investigation

    International Nuclear Information System (INIS)

    Scott, A.G.

    1996-01-01

    Radioactive decay is a Poisson process, and so the coefficient of variation (COV) of n counts of a single nuclide is usually estimated as 1/√n. This is only true if the count duration is much shorter than the half-life of the nuclide. At longer count durations, the COV is smaller than the Poisson estimate. Most radon measurement methods count the alpha decays of 222Rn plus those of the progeny 218Po and 214Po, and estimate the 222Rn activity from the sum of the counts. At long count durations, the chain decay of these nuclides means that every 222Rn decay must be followed by two other alpha decays. The total number of decays is 3N, where N is the number of radon decays, and the true COV of the radon concentration estimate is 1/√N, a factor √3 larger than the Poisson total-count estimate of 1/√(3N). Most count periods are comparable to the half-lives of the progeny, so the relationship between COV and count time is complex. A Monte Carlo estimate of the ratio of the true COV to the Poisson estimate was carried out for a range of count periods from 1 min to 16 h and for three common radon measurement methods: liquid scintillation, scintillation cell, and electrostatic precipitation of progeny. The Poisson approximation underestimates the COV by less than 20% for count durations of less than 60 min.
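
    The √3 factor in the long-count limit is easy to verify with a small Monte Carlo: draw the number of radon decays N from a Poisson distribution, score 3N total alpha counts, and compare the empirical COV with the naive Poisson estimate from the total count:

```python
import math
import random

# Long-count-duration limit: each 222Rn decay is eventually followed by the
# 218Po and 214Po alpha decays, so the total alpha count is 3N with
# N ~ Poisson(mu). True COV of the radon estimate: 1/sqrt(mu); the naive
# Poisson estimate from the total count, 1/sqrt(3*mu), is sqrt(3) too small.

random.seed(1)

def poisson(mu):
    """Knuth's multiplicative Poisson sampler (fine for moderate mu)."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

mu, trials = 400.0, 3000
totals = [3 * poisson(mu) for _ in range(trials)]
mean = sum(totals) / trials
var = sum((t - mean) ** 2 for t in totals) / (trials - 1)
cov_mc = math.sqrt(var) / mean          # empirical COV, about 1/sqrt(mu)
cov_poisson = 1.0 / math.sqrt(mean)     # naive estimate, about 1/sqrt(3*mu)
ratio = cov_mc / cov_poisson            # approaches sqrt(3) in this limit
```

    For realistic count periods the progeny are only partially in equilibrium, so the ratio lies between 1 and √3, which is what the full simulation in the text maps out against count duration.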

  8. Metriplectic Gyrokinetics and Discretization Methods for the Landau Collision Integral

    Science.gov (United States)

    Hirvijoki, Eero; Burby, Joshua W.; Kraus, Michael

    2017-10-01

    We present two important results for the kinetic theory and numerical simulation of warm plasmas: 1) We provide a metriplectic formulation of collisional electrostatic gyrokinetics that is fully consistent with the First and Second Laws of Thermodynamics. 2) We provide a metriplectic temporal and velocity-space discretization for the particle phase-space Landau collision integral that satisfies the conservation of energy, momentum, and particle densities to machine precision, as well as guarantees the existence of a numerical H-theorem. The properties are demonstrated algebraically. These two results have important implications: 1) Numerical methods addressing the Vlasov-Maxwell-Landau system of equations, or its reduced gyrokinetic versions, should start from a metriplectic formulation to preserve the fundamental physical principles also at the discrete level. 2) The plasma physics community should search for a metriplectic reduction theory that would serve a similar purpose as the existing Lagrangian and Hamiltonian reduction theories do in gyrokinetics. The discovery of a metriplectic formulation of collisional electrostatic gyrokinetics is strong evidence in favor of such a theory and, if uncovered, the theory would be invaluable in constructing reduced plasma models. Supported by U.S. DOE Contract Nos. DE-AC02-09-CH11466 (EH) and DE-AC05-06OR23100 (JWB) and by European Union's Horizon 2020 research and innovation Grant No. 708124 (MK).

  9. Boundary integral method for torsion of composite shafts

    International Nuclear Information System (INIS)

    Chou, S.I.; Mohr, J.A.

    1987-01-01

    The Saint-Venant torsion problem for homogeneous shafts with simply or multiply-connected regions has received a great deal of attention in the past. However, because of the mathematical difficulties inherent in the problem, very few problems of torsion of shafts with composite cross sections have been solved analytically. Muskhelishvili (1963) studied the torsion problem for shafts with cross sections having several solid inclusions surrounded by an elastic material. The problems of a circular shaft reinforced by a non-concentric round inclusion and of a rectangular shaft composed of two rectangular parts made of different materials were solved. In this paper, a boundary integral equation method, which can be used to solve problems more complex than those considered by Katsikadelis et al., is developed. A square shaft with two dissimilar rectangular parts and a square shaft with a square inclusion are solved and the results compared with those given in the reference cited above. Finally, a square shaft composed of two rectangular parts with a circular inclusion is solved. (orig./GL)

  10. Integration of rock typing methods for carbonate reservoir characterization

    International Nuclear Information System (INIS)

    Aliakbardoust, E; Rahimpour-Bonab, H

    2013-01-01

    Reservoir rock typing is the most important part of all reservoir modelling. For integrated reservoir rock typing, static and dynamic properties need to be combined, but sometimes these two are incompatible. The failure is due to a misunderstanding of the crucial parameters that control the dynamic behaviour of the reservoir rock, and thus the selection of inappropriate methods for defining static rock types. In this study, rock types were defined by combining the SCAL data with the rock properties, particularly rock fabric and pore types. First, air-displacing-water capillary pressure curves were classified, because they are representative of fluid saturation and behaviour under capillary forces. Next, the most important rock properties which control the fluid flow and saturation behaviour (rock fabric and pore types) were combined with the defined classes. Corresponding petrophysical properties were also attributed to reservoir rock types, and eventually the defined rock types were compared with relative permeability curves. This study focused on representing the importance of the pore system, specifically pore types, in fluid saturation and entrapment in the reservoir rock. The most common tests in static rock typing, such as electrofacies analysis and porosity–permeability correlation, were carried out, and the results indicate that these are not appropriate approaches for reservoir rock typing in carbonate reservoirs with a complicated pore system. (paper)

  11. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  12. High temperature spectral emissivity measurement using integral blackbody method

    Science.gov (United States)

    Pan, Yijie; Dong, Wei; Lin, Hong; Yuan, Zundong; Bloembergen, Pieter

    2016-10-01

    Spectral emissivity is a critical thermophysical property of a material for thermal design and radiation thermometry. A prototype instrument based upon an integral blackbody method was developed to measure a material's spectral emissivity above 1000 °C. The system was implemented with an optimized commercial variable-high-temperature blackbody, a high-speed linear actuator, a linear pyrometer, and an in-house designed synchronization circuit. A sample was placed in a crucible at the bottom of the blackbody furnace, by which the sample and the tube formed a simulated blackbody with an effective total emissivity greater than 0.985. During the measurement, the sample was pushed to the end opening of the tube by a graphite rod actuated through a pneumatic cylinder. A linear pyrometer was used to monitor the brightness temperature of the sample surface throughout the measurement, and the corresponding opto-converted voltage signal was fed to and recorded by a digital multimeter. A physical model was proposed to numerically evaluate the temperature drop along the process: the tube was discretized into several isothermal cylindrical rings, the temperature profile of the tube was measured, and the view factors between the sample and the rings were calculated and updated along the whole pushing process. The actual surface temperature of the sample at the end opening was thereby obtained. Taking advantage of the measured voltage profile and the calculated true temperature, the spectral emissivity at this temperature was calculated.
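    The relation between brightness temperature and spectral emissivity that underlies such a measurement can be sketched with Wien's approximation. This is a simplification of the full model described above (which also accounts for view factors and the temperature drop); the 650 nm wavelength and the numerical values are illustrative assumptions, not data from the paper.

```python
import math

C2 = 1.4388e-2  # second radiation constant c2, in m*K


def spectral_emissivity(t_true_k, t_brightness_k, wavelength_m):
    """Wien-approximation estimate: eps = exp((c2/lambda) * (1/T_true - 1/T_b))."""
    return math.exp((C2 / wavelength_m) * (1.0 / t_true_k - 1.0 / t_brightness_k))


def brightness_temperature(t_true_k, eps, wavelength_m):
    """Inverse relation: brightness temperature of a grey surface of emissivity eps."""
    inv_tb = 1.0 / t_true_k - (wavelength_m / C2) * math.log(eps)
    return 1.0 / inv_tb


# Round trip at an assumed 1300 K true temperature and 650 nm working wavelength
tb = brightness_temperature(1300.0, 0.85, 650e-9)
print(round(spectral_emissivity(1300.0, tb, 650e-9), 3))  # -> 0.85
```

    Because the pyrometer reads a brightness temperature below the true surface temperature, the ratio of the two (through Wien's law) recovers the emissivity once the true temperature is known.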

  13. Methods for assessing NPP containment pressure boundary integrity

    International Nuclear Information System (INIS)

    Naus, D.J.; Ellingwood, B.R.; Graves, H.L.

    2004-01-01

    Research is being conducted to address aging of the containment pressure boundary in light-water reactor plants. Objectives of this research are to (1) understand the significant factors relating to corrosion occurrence, efficacy of inspection, and structural capacity reduction of steel containments and of liners of concrete containments; (2) provide the U.S. Nuclear Regulatory Commission (USNRC) reviewers a means of establishing current structural capacity margins or estimating future residual structural capacity margins for steel containments and concrete containments as limited by liner integrity; and (3) provide recommendations, as appropriate, on information to be requested of licensees for guidance that could be utilized by USNRC reviewers in assessing the seriousness of reported incidences of containment degradation. Activities include development of a degradation assessment methodology; reviews of techniques and methods for inspection and repair of containment metallic pressure boundaries; evaluation of candidate techniques for inspection of inaccessible regions of containment metallic pressure boundaries; establishment of a methodology for reliability-based condition assessments of steel containments and liners; and fragility assessments of steel containments with localized corrosion

  14. Survey on the presence of 90Sr in milk samples by a validated ultra low level liquid scintillation counting (LSC) method

    Directory of Open Access Journals (Sweden)

    dell’Oro D.

    2013-04-01

    Full Text Available 90Sr is one of the most biologically hazardous radionuclides produced in nuclear fission processes; it decays by emitting high-energy beta particles, turning into 90Y. If introduced into the environment, 90Sr is transferred from soil to plants, to cow’s milk, and then to humans. Radiostrontium is chemically similar to calcium, entering the human body through several food chains and depositing in bone and blood-forming tissue (bone marrow). Among the main foodstuffs in the human diet, milk is considered of special interest for radiostrontium determination, especially in emergency situations, because the consumption of contaminated milk is the main source of internal radiation exposure, particularly for infants. In this work an analytical method for the determination of radiostrontium in milk was developed and validated in order to determine low activity levels by liquid scintillation counting (LSC) after achieving the 90Y secular equilibrium condition. The analytical procedure was applied both in surveillance and routine programmes to detect radiocontamination in cow’s, goat’s, and sheep’s milk samples.
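    The 90Y ingrowth behind the secular-equilibrium condition can be sketched as follows. The ~64 h half-life of 90Y and the 1% equilibrium criterion are assumptions for illustration, not values taken from the paper.

```python
import math

Y90_HALF_LIFE_H = 64.0  # assumed 90Y half-life in hours (approximate literature value)


def ingrowth_fraction(t_hours, half_life_h=Y90_HALF_LIFE_H):
    """Fraction of the secular-equilibrium 90Y activity present t hours
    after a Sr/Y separation: f(t) = 1 - exp(-lambda * t)."""
    lam = math.log(2.0) / half_life_h
    return 1.0 - math.exp(-lam * t_hours)


# Time needed before 90Y activity is within 1% of equilibrium (f = 0.99)
t99_h = math.log(100.0) * Y90_HALF_LIFE_H / math.log(2.0)
print(round(t99_h / 24.0, 1))  # about 17.7 days
```

    One half-life restores 50% of the equilibrium activity, which is why such procedures typically wait two to three weeks before counting.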

  15. Leveraging multiple datasets for deep leaf counting

    OpenAIRE

    Dobrescu, Andrei; Giuffrida, Mario Valerio; Tsaftaris, Sotirios A

    2017-01-01

    The number of leaves a plant has is one of the key traits (phenotypes) describing its development and growth. Here, we propose an automated, deep learning based approach for counting leaves in model rosette plants. While state-of-the-art results on leaf counting with deep learning methods have recently been reported, they obtain the count as a result of leaf segmentation and thus require per-leaf (instance) segmentation to train the models (a rather strong annotation). Instead, our method tre...

  16. A Multi-Objective Optimization Method to integrate Heat Pumps in Industrial Processes

    OpenAIRE

    Becker, Helen; Spinato, Giulia; Maréchal, François

    2011-01-01

    The aim of process integration methods is to increase the efficiency of industrial processes by using pinch analysis combined with process design methods. In this context, appropriate integrated utilities offer promising opportunities to reduce energy consumption, operating costs and pollutant emissions. Energy integration methods are able to integrate any type of predefined utility, but so far there is no systematic approach to generate potential utility models based on their technology limit...

  17. Photon-counting multifactor optical encryption and authentication

    International Nuclear Information System (INIS)

    Pérez-Cabré, E; Millán, M S; Mohammed, E A; Saadon, H L

    2015-01-01

    The multifactor optical encryption authentication method [Opt. Lett., 31 721-3 (2006)] reinforces optical security by allowing the simultaneous authentication of up to four factors. In this work, the photon-counting imaging technique is applied to the multifactor encrypted function so that a sparse phase-only distribution is generated for the encrypted data. The integration of both techniques permits an increased capacity for signal hiding with simultaneous data reduction, better fulfilling the general requirements of protection, storage and transmission. Cryptanalysis of the proposed method is carried out in terms of chosen-plaintext and chosen-ciphertext attacks. Although the multifactor authentication process is not substantially altered by those attacks, its integration with the photon-counting imaging technique prevents possible partial disclosure of any encrypted factor, thus increasing the security level of the overall process. Numerical experiments and results are provided and discussed. (paper)

  18. Study of the quantitative assessment method for high-cycle thermal fatigue of a T-pipe under turbulent fluid mixing based on the coupled CFD-FEM method and the rainflow counting method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y.; Lu, T., E-mail: likesurge@sina.com

    2016-12-01

    Highlights: • Two characteristic parameters of the temperature fluctuations are used for qualitative analysis. • A quantitative assessment method for high-cycle thermal fatigue of a T-pipe is proposed. • The time-dependent curves for the temperature and thermal stress are not always “in-phase”. • A large magnitude of thermal stress does not necessarily mean a large number of fatigue cycles. • The normalized fatigue damage rate and normalized RMS temperature are positively related. - Abstract: With the development of nuclear power and nuclear power safety, high-cycle thermal fatigue of pipe structures induced by the flow and heat transfer of the fluid in pipes has attracted more and more attention. Turbulent mixing of hot and cold flows in a T-pipe is a well-recognized source of thermal fatigue in piping systems, and thermal fatigue is a significant long-term degradation mechanism. Evaluating the thermal fatigue of a T-pipe under turbulent flow mixing is not easy because the thermal loads acting at the fluid–structure interface of the pipe are complex and changeful. In this paper, a one-way Computational Fluid Dynamics-Finite Element Method (CFD-FEM) coupling based on the ANSYS Workbench 15.0 software has been developed to calculate transient thermal stresses from the temperature fields of turbulent flow mixing, and thermal fatigue assessment has been carried out on the obtained fluctuating thermal stresses by programming in Matlab based on the rainflow counting method. In the thermal analysis, the normalized mean temperatures and the normalized root mean square (RMS) temperatures are obtained and compared with the experiment of the test case from the Vattenfall benchmark facility to verify the accuracy of the CFD calculation and to determine the position at which thermal fatigue is most likely to occur in the T-junction. Besides, more insights have been obtained in the coupled CFD-FEM analysis and the thermal fatigue
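    The rainflow counting step can be sketched with a compact stack-based implementation in the spirit of ASTM E1049. This is an illustrative sketch, not the authors' Matlab code; the input history below is the standard ASTM worked example.

```python
def turning_points(series):
    """Keep only the reversal points (local extrema) of a load/stress history."""
    tps = [series[0]]
    for x in series[1:]:
        if x == tps[-1]:
            continue
        if len(tps) >= 2 and (tps[-1] - tps[-2]) * (x - tps[-1]) > 0:
            tps[-1] = x          # same direction: extend the current excursion
        else:
            tps.append(x)
    return tps


def rainflow(series):
    """ASTM E1049-style rainflow counting.

    Returns (stress_range, count) pairs, with count 1.0 for a full cycle
    and 0.5 for a half cycle."""
    cycles = []
    stack = []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])   # range of the most recent excursion
            y = abs(stack[-2] - stack[-3])   # range of the previous excursion
            if x < y:
                break
            if len(stack) == 3:
                cycles.append((y, 0.5))      # range contains the start: half cycle
                stack.pop(0)
            else:
                cycles.append((y, 1.0))      # full cycle
                del stack[-3:-1]             # drop the two inner points
    for a, b in zip(stack, stack[1:]):       # leftover ranges count as half cycles
        cycles.append((abs(a - b), 0.5))
    return cycles


print(rainflow([-2, 1, -3, 5, -1, 3, -4, 4, -2]))
```

    On the ASTM E1049 example history this yields one full cycle of range 4 and half cycles of ranges 3, 4, 8, 9, 8, and 6; the resulting range/count pairs are what a damage rule (e.g. Miner's rule with an S-N curve) would consume.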

  19. Effect of the integrated approach of yoga therapy on platelet count and uric acid in pregnancy: A multicenter stratified randomized single-blind study

    Directory of Open Access Journals (Sweden)

    R Jayashree

    2013-01-01

    Conclusion: Antenatal integrated yoga from the twelfth week is safe and effective in promoting a healthy progression of platelets and uric acid in women with high-risk pregnancy, pointing to healthy hemodilution and better physiological adaptation.

  20. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  1. Count rate balance method of measuring sediment transport of sand beds by radioactive tracers; Methode du bilan des taux de comptage d'indicateurs radioactifs pour la determination du debit de charriage des lits sableux

    Energy Technology Data Exchange (ETDEWEB)

    Sauzay, G [Commissariat a l' Energie Atomique, 91 - Saclay (France). Centre d' Etudes Nucleaires

    1967-11-01

    Radioactive tracers are applied to the direct measurement of the sediment transport rate of sand beds. The theoretical measurement formula is derived: the count rate balance varies inversely with the transport thickness. In parallel, the representativeness of the tracer is critically studied, and the minimum quantity of tracer that must be injected is determined so that the count rates given by the low number of grains 'seen' by the detector have a correct statistical definition. A field experiment made it possible to study the technological conditions for applying this method: only the treatment of the results is new; the experiment itself is carried out with conventional techniques applied with great care. (author)

  2. Methods for external event screening quantification: Risk Methods Integration and Evaluation Program (RMIEP) methods development

    International Nuclear Information System (INIS)

    Ravindra, M.K.; Banon, H.

    1992-07-01

    In this report, the scoping quantification procedures for external events in probabilistic risk assessments of nuclear power plants are described. External event analysis in a PRA has three important goals: (1) the analysis should be complete in that all events are considered; (2) by following some selected screening criteria, the more significant events are identified for detailed analysis; (3) the selected events are analyzed in depth by taking into account the unique features of the events: hazard, fragility of structures and equipment, external-event initiated accident sequences, etc. Based on the above goals, external event analysis may be considered as a three-stage process: Stage I: Identification and Initial Screening of External Events; Stage II: Bounding Analysis; Stage III: Detailed Risk Analysis. In the present report, first, a review of published PRAs is given to focus on the significance and treatment of external events in full-scope PRAs. Except for seismic, flooding, fire, and extreme wind events, the contributions of other external events to plant risk have been found to be negligible. Second, scoping methods for external events not covered in detail in the NRC's PRA Procedures Guide are provided. For this purpose, bounding analyses for transportation accidents, extreme winds and tornadoes, aircraft impacts, turbine missiles, and chemical releases are described.

  3. Diagrammatical methods within the path integral representation for quantum systems

    International Nuclear Information System (INIS)

    Alastuey, A

    2014-01-01

    The path integral representation has been successfully applied to the study of equilibrium properties of quantum systems for a long time. In particular, such a representation allowed Ginibre to prove the convergence of the low-fugacity expansions for systems with short-range interactions. First, I will show that the crucial trick underlying Ginibre's proof is the introduction of an equivalent classical system made with loops. Within the Feynman-Kac formula for the density matrix, such loops naturally emerge by collecting together the paths followed by particles exchanged in a given cyclic permutation. Two loops interact via an average of two-body genuine interactions between particles belonging to different loops, while the interactions between particles inside a given loop are accounted for in a loop fugacity. It turns out that the grand-partition function of the genuine quantum system exactly reduces to its classical counterpart for the gas of loops. The corresponding so-called magic formula can be combined with standard Mayer diagrammatics for the classical gas of loops. This provides low-density representations for the quantum correlations or thermodynamical functions, which are quite useful when collective effects must be taken into account properly. Indeed, resummations and/or reorganizations of Mayer graphs can be performed by exploiting their remarkable topological and combinatorial properties, while statistical weights and bonds are purely c-numbers. The interest of that method will be illustrated through a brief description of its application to two long-standing problems, namely recombination in Coulomb systems and condensation in the interacting Bose gas.

  4. Integration of gas chromatography mass spectrometry methods for differentiating ricin preparation methods.

    Science.gov (United States)

    Wunschel, David S; Melville, Angela M; Ehrhardt, Christopher J; Colburn, Heather A; Victry, Kristin D; Antolick, Kathryn C; Wahl, Jon H; Wahl, Karen L

    2012-05-07

    The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of Ricinus communis, commonly known as the castor plant. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatography-mass spectrometry (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid, as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods, starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid, or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method, independent of the seed source. In particular, the abundance of mannose, arabinose, fucose, ricinoleic acid, and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation than would be possible using a single analytical method.

  5. Let's Make Data Count

    Science.gov (United States)

    Budden, A. E.; Abrams, S.; Chodacki, J.; Cruse, P.; Fenner, M.; Jones, M. B.; Lowenberg, D.; Rueda, L.; Vieglais, D.

    2017-12-01

    The impact of research has traditionally been measured by citations to journal publications and used extensively for evaluation and assessment in academia, but this process misses the impact and reach of data and software as first-class scientific products. For traditional publications, Article-Level Metrics (ALM) capture the multitude of ways in which research is disseminated and used, such as references and citations within social media and other journal articles. Here we report on the extension of usage and citation metrics collection to other artifacts of research, namely datasets. The Make Data Count (MDC) project will enable measuring the impact of research data in a manner similar to what is currently done with publications. Data-level metrics (DLM) are a multidimensional suite of indicators measuring the broad reach and use of data as legitimate research outputs. By making data metrics openly available for reuse in a number of different ways, the MDC project represents an important first step on the path towards the full integration of data metrics into the research data management ecosystem. By assuring researchers that their contributions to scholarly progress represented by data corpora are acknowledged, data level metrics provide a foundation for streamlining the advancement of knowledge by actively promoting desirable best practices regarding research data management, publication, and sharing.

  6. A discontinous Galerkin finite element method with an efficient time integration scheme for accurate simulations

    KAUST Repository

    Liu, Meilin; Bagci, Hakan

    2011-01-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results

  7. Integrating Evidence Within and Across Evidence Streams Using Qualitative Methods

    Science.gov (United States)

    There is high demand in environmental health for adoption of a structured process that evaluates and integrates evidence while making decisions transparent. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework holds promise to address this deman...

  8. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    Science.gov (United States)

    Kot, V. A.

    2017-11-01

    The modern state of approximate integral methods used in applications, where the processes of heat conduction and heat and mass transfer are of first importance, is considered. Integral methods have found a wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal determination of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Goodman heat-balance integral method, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than the accuracy of numerical solutions.

  9. Investigation of experimental methods for EMP transient-state disturbance effects on integrated circuits (ICs)

    International Nuclear Information System (INIS)

    Li Xiaowei

    2004-01-01

    The study of the transient-state disturbance characteristics of integrated circuits (ICs) must start from their coupling paths. Through cable (antenna) coupling, an EMP is converted into a pulsed current and voltage that impact the I/O ports of the integrated circuit via the cable. Considering the construction features of armament systems, the EMP effect on the integrated circuits inside such a system is analyzed. The current-injection method for IC EMP effect experiments is investigated, and several experimental methods are given. (authors)

  10. Method of mechanical quadratures for solving singular integral equations of various types

    Science.gov (United States)

    Sahakyan, A. V.; Amirjanyan, H. A.

    2018-04-01

    The method of mechanical quadratures is proposed as a common approach intended for solving the integral equations defined on finite intervals and containing Cauchy-type singular integrals. This method can be used to solve singular integral equations of the first and second kind, equations with generalized kernel, weakly singular equations, and integro-differential equations. The quadrature rules for several different integrals represented through the same coefficients are presented. This allows one to reduce the integral equations containing integrals of different types to a system of linear algebraic equations.
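    For the Cauchy-type singular integrals discussed above, a minimal sketch of a Gauss-Chebyshev quadrature rule of the kind used in such methods is given below. It assumes the weight 1/sqrt(1-t^2) and collocation at the Chebyshev points (the classical choice; not necessarily the authors' exact rule), under which the rule is exact for low-degree polynomial densities.

```python
import math


def cauchy_pv_chebyshev(phi, x, n=32):
    """Gauss-Chebyshev quadrature for the Cauchy principal-value integral

        PV int_{-1}^{1} phi(t) / (sqrt(1 - t^2) * (t - x)) dt
          ~ (pi/n) * sum_k phi(t_k) / (t_k - x),

    with nodes t_k = cos((2k-1)pi/(2n)). For the rule to be exact on
    polynomials, x must be one of the collocation points cos(j*pi/n), 0 < j < n."""
    total = 0.0
    for k in range(1, n + 1):
        t_k = math.cos((2 * k - 1) * math.pi / (2 * n))
        total += phi(t_k) / (t_k - x)
    return math.pi / n * total


n = 32
x = math.cos(5 * math.pi / n)                    # a collocation point
print(cauchy_pv_chebyshev(lambda t: 1.0, x, n))  # exact value is 0
print(cauchy_pv_chebyshev(lambda t: t, x, n))    # exact value is pi
```

    Because the singular kernel and the smooth density share the same nodes, a singular integral equation discretized this way collapses directly into a linear algebraic system, which is the point made in the abstract.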

  11. Usefulness of rate of increase in SPECT counts in one-day method of N-isopropyl-4-iodoamphetamine [123I] SPECT studies at rest and after acetazolamide challenge using a method for estimating time-dependent distribution at rest

    International Nuclear Information System (INIS)

    Kawamura, Yoshifumi; Ashizaki, Michio; Saida, Shoko; Sugimoto, Hideharu

    2008-01-01

    When N-isopropyl-4-iodoamphetamine ( 123 I-IMP) single-photon emission computed tomography (SPECT) studies at rest and after acetazolamide (ACZ) challenge are conducted in a day, the time-dependent change in IMP in the brain at rest should be estimated accurately. We devised the method and investigated whether our one-day method for measuring the rate of increase in SPECT counts allowed reduction in the acquisition time. Sequential, 5-min SPECT scans were performed. We estimated the time-dependent change in the brain using the change in slopes of two linear equations derived from the first three SPECT counts. For the one-day method, ACZ was administered 15 min or 20 min after IMP administration. The second IMP was administered 10 min after ACZ administration. Time-dependent changes in the brain were classified into 13 patterns when estimation was started at 5 min after IMP administration and 6 patterns when estimation was started at 10 min, and fitting coefficients were determined. The correlation between actual measurements at 37.5 min and estimates was high with a correlation coefficient of 0.99 or greater. Rates of increase obtained from 20-min data were highly correlated with those obtained from 15-min or 10-min data (r=0.97 or greater). In patients with unilateral cerebrovascular disease, the rate of increase on the unaffected side was 44.4±10.9% when ACZ was administered 15 min later and 48.0±16.0% when ACZ was administered 20 min later, and the rates of increase with different timings of administration were not significantly different. The examination time may be reduced from 50 min to 45 min or 40 min as needed. The rate of increase was not influenced by the time frame for determination or the timing of ACZ administration. These findings suggest that our estimation method is accurate and versatile. (author)

  12. Two reports: (i) Correlation properties of delayed neutrons from fast neutron induced fission. (ii) Method and set-up for measurements of trace level content of heavy fissionable elements based on delayed neutron counting

    International Nuclear Information System (INIS)

    Piksaikin, V.M.; Isaev, S.G.; Goverdovski, A.A.; Pshakin, G.M.

    1998-10-01

    The document includes the following two reports: 'Correlation properties of delayed neutrons from fast neutron induced fission' and 'Method and set-up for measurements of trace level content of heavy fissionable elements based on delayed neutron counting'. A separate abstract was prepared for each report.

  13. Characterization methods of integrated optics for mid-infrared interferometry

    Science.gov (United States)

    Labadie, Lucas; Kern, Pierre Y.; Schanen-Duport, Isabelle; Broquin, Jean-Emmanuel

    2004-10-01

    This article deals with one of the important instrumentation challenges of the stellar interferometry mission IRSI-Darwin of the European Space Agency: the necessity of having a reliable, high-performance system for beam combination has highlighted the advantages of an integrated optics solution, which is already in use for ground-based interferometry in the near infrared. Integrated optics also provides interesting features in terms of filtering, which is a main issue for the deep null to be reached by Darwin. However, Darwin will operate in the mid-infrared range from 4 microns to 20 microns, where no integrated optics functions are available off-the-shelf. This requires extending the integrated optics concept and the underlying technology to this spectral range. This work has started with the IODA project (Integrated Optics for Darwin) under ESA contract and aims to provide a first component for interferometry. This paper presents the guidelines of the characterization work implemented to test and validate the performance of a component at each step of the development phase. We also present an example of a characterization experiment used within the framework of this work, its theoretical approach, and some results.

  14. On the use of liquid scintillation counting of 51Cr and 14C in the twin tracer method of measuring assimilation efficiency

    International Nuclear Information System (INIS)

    Cammen, L.M.

    1977-01-01

    Calow and Fletcher (1972) calculated assimilation efficiency from the ratio of an assimilated radiotracer ( 14 C) to a non-assimilated tracer ( 51 Cr) in food and feces. Wightman (1975) improved the efficiency of their technique by using liquid scintillation to count both isotopes simultaneously, but stated incorrectly that it was not necessary to convert counts per minute (CPM) to disintegrations per minute (DPM). Unless the CPM data are corrected for quenching and converted to DPM prior to the calculation of assimilation efficiency, a significant error may be introduced. (orig.)
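    The twin-tracer calculation can be sketched as follows, assuming the usual ratio formula in which the non-assimilated 51Cr acts as an inert marker. The DPM values are hypothetical; the note's point is that each measured CPM must first be quench-corrected to DPM before entering this formula.

```python
def assimilation_efficiency(c14_food_dpm, cr51_food_dpm, c14_feces_dpm, cr51_feces_dpm):
    """Twin-tracer estimate in the spirit of Calow & Fletcher (1972):
    51Cr passes through unassimilated, so the drop in the 14C/51Cr ratio
    between food and feces gives the assimilated fraction of 14C.
    All inputs must be quench-corrected DPM, not raw CPM."""
    r_food = c14_food_dpm / cr51_food_dpm
    r_feces = c14_feces_dpm / cr51_feces_dpm
    return 1.0 - r_feces / r_food


# Hypothetical DPM values in which 80% of the 14C is assimilated
ae = assimilation_efficiency(10000.0, 2000.0, 1000.0, 1000.0)
print(ae)  # -> 0.8
```

    Because the two isotopes quench differently, using uncorrected CPM changes the two ratios by different factors and biases the result, which is the error the note identifies.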

  15. Aerial population estimates of wild horses (Equus caballus) in the adobe town and salt wells creek herd management areas using an integrated simultaneous double-count and sightability bias correction technique

    Science.gov (United States)

    Lubow, Bruce C.; Ransom, Jason I.

    2007-01-01

    An aerial survey technique combining simultaneous double-count and sightability bias correction methodologies was used to estimate the population of wild horses inhabiting Adobe Town and Salt Wells Creek Herd Management Areas, Wyoming. Based on 5 surveys over 4 years, we conclude that the technique produced estimates consistent with the known number of horses removed between surveys and an annual population growth rate of 16.2 percent per year. Therefore, evidence from this series of surveys supports the validity of this survey method. Our results also indicate that the ability of aerial observers to see horse groups is very strongly dependent on skill of the individual observer, size of the horse group, and vegetation cover. It is also more modestly dependent on the ruggedness of the terrain and the position of the sun relative to the observer. We further conclude that censuses, or uncorrected raw counts, are inadequate estimates of population size for this herd. Such uncorrected counts were all undercounts in our trials, and varied in magnitude from year to year and observer to observer. As of April 2007, we estimate that the population of the Adobe Town /Salt Wells Creek complex is 906 horses with a 95 percent confidence interval ranging from 857 to 981 horses.
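    A minimal sketch of the double-count (two-observer mark-recapture) part of such an estimate is given below, using the Chapman-modified Lincoln-Petersen estimator. The sightability-bias regression on observer skill, group size, and vegetation cover described above is not included, and the counts are hypothetical.

```python
def chapman_estimate(n1, n2, m):
    """Chapman-modified Lincoln-Petersen estimator for a simultaneous
    double-count: n1 and n2 are the groups seen by each observer,
    m is the number seen by both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1


def detection_probs(n1, n2, m):
    """Per-observer detection probabilities implied by the overlap:
    observer 1's probability is estimated from observer 2's sightings,
    and vice versa."""
    return m / n2, m / n1


# Hypothetical survey: 60 groups seen by observer 1, 50 by observer 2, 40 by both
est = chapman_estimate(60, 50, 40)
print(round(est, 1))  # -> 74.9
```

    The gap between the estimate (~75 groups) and either raw count (50 or 60) illustrates the abstract's conclusion that uncorrected censuses undercount.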

  16. Platelet Count and Plateletcrit

    African Journals Online (AJOL)

    …demonstrated that neonates with late-onset sepsis (bacteremia after 3 days of age) had a dramatic increase in MPV and PDW [18]. We hypothesize that as MPV and PDW increase and platelet count and PCT decrease in sick children, intuitively, the ratios of MPV to PCT, MPV to platelet count, PDW to PCT, and PDW to platelet ...

  17. EcoCount

    Directory of Open Access Journals (Sweden)

    Phillip P. Allen

    2014-05-01

    Full Text Available Techniques that analyze biological remains from sediment sequences for environmental reconstructions are well established and widely used. Yet, identifying, counting, and recording biological evidence such as pollen grains remain a highly skilled, demanding, and time-consuming task. Standard procedure requires the classification and recording of between 300 and 500 pollen grains from each representative sample. Recording the data from a pollen count requires significant effort and focused resources from the palynologist. However, when an adaptation to the recording procedure is utilized, efficiency and time economy improve. We describe EcoCount, which represents a development in environmental data recording procedure. EcoCount is a voice activated fully customizable digital count sheet that allows the investigator to continuously interact with a field of view during the data recording. Continuous viewing allows the palynologist the opportunity to remain engaged with the essential task, identification, for longer, making pollen counting more efficient and economical. EcoCount is a versatile software package that can be used to record a variety of environmental evidence and can be installed onto different computer platforms, making the adoption by users and laboratories simple and inexpensive. The user-friendly format of EcoCount allows any novice to be competent and functional in a very short time.

  18. Clean Hands Count

    Medline Plus

  19. Counting It Twice.

    Science.gov (United States)

    Schattschneider, Doris

    1991-01-01

    Provided are examples from many domains of mathematics that illustrate the Fubini Principle in its discrete version: the value of a summation over a rectangular array is independent of the order of summation. Included are: counting using partitions as in proof by pictures, combinatorial arguments, indirect counting as in the inclusion-exclusion…
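    The discrete Fubini Principle described above is easy to verify computationally; a minimal sketch:

    ```python
    # Summing a rectangular array row-by-row or column-by-column gives the
    # same total -- the discrete Fubini Principle ("counting it twice").
    grid = [[r * c for c in range(1, 5)] for r in range(1, 4)]

    by_rows = sum(sum(row) for row in grid)
    by_columns = sum(sum(row[j] for row in grid) for j in range(4))

    print(by_rows, by_columns)  # 60 60
    ```

    Counting the same set two ways, as in the row/column sums here, is the engine behind many of the combinatorial identities the article surveys.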

  20. Integrative Mixed Methods Data Analytic Strategies in Research on School Success in Challenging Circumstances

    Science.gov (United States)

    Jang, Eunice E.; McDougall, Douglas E.; Pollon, Dawn; Herbert, Monique; Russell, Pia

    2008-01-01

    There are both conceptual and practical challenges in dealing with data from mixed methods research studies. There is a need for discussion about various integrative strategies for mixed methods data analyses. This article illustrates integrative analytic strategies for a mixed methods study focusing on improving urban schools facing challenging…

  1. Improving the counting efficiency in time-correlated single photon counting experiments by dead-time optimization

    Energy Technology Data Exchange (ETDEWEB)

    Peronio, P.; Acconcia, G.; Rech, I.; Ghioni, M. [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)

    2015-11-15

    Time-Correlated Single Photon Counting (TCSPC) has long been recognized as the most sensitive method for fluorescence lifetime measurements, but it often requires “long” data acquisition times. This drawback is related to the limited counting capability of the TCSPC technique, due to pile-up and counting loss effects. In recent years, multi-module TCSPC systems have been introduced to overcome this issue. Splitting the light into several detectors connected to independent TCSPC modules proportionally increases the counting capability. Of course, multi-module operation also increases the system cost and can cause space and power supply problems. In this paper, we propose an alternative approach based on a new detector and processing electronics designed to reduce the overall system dead time, thus enabling efficient photon collection at high excitation rates. We present a fast active quenching circuit for single-photon avalanche diodes which features a minimum dead time of 12.4 ns. We also introduce a new Time-to-Amplitude Converter (TAC) able to attain an extra-short dead time thanks to the combination of a scalable array of monolithically integrated TACs and a sequential router. The fast TAC (F-TAC) makes it possible to operate the system towards the upper limit of detector count rate capability (∼80 Mcps) with reduced pile-up losses, addressing one of the historic criticisms of TCSPC. Preliminary measurements on the F-TAC are presented and discussed.
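    The counting losses mentioned above are commonly modeled with a non-paralyzable dead-time correction; the sketch below uses that standard model (not necessarily the authors' own analysis). The 12.4 ns dead time is taken from the abstract; the measured rate is invented:

    ```python
    def true_rate(measured_cps, dead_time_s):
        """Non-paralyzable dead-time model: n = m / (1 - m * tau)."""
        loss_fraction = measured_cps * dead_time_s
        if loss_fraction >= 1.0:
            raise ValueError("measured rate saturates this dead-time model")
        return measured_cps / (1.0 - loss_fraction)

    tau = 12.4e-9    # 12.4 ns dead time, as quoted for the quenching circuit
    measured = 20e6  # 20 Mcps measured count rate (hypothetical)
    print(round(true_rate(measured, tau) / 1e6, 1))  # 26.6 (Mcps)
    ```

    At these rates the detector is blind for a substantial fraction of the time, which is why shrinking the dead time directly increases the usable count rate.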

  2. Piloting a method to evaluate the implementation of integrated water ...

    African Journals Online (AJOL)

    Vol 41, No 5 (2015). ... A methodology with a set of principles, change areas and measures was developed as a performance assessment tool. ... Keywords: Integrated water resource management, Inkomati River Basin, South Africa, Swaziland ...

  3. A joint classification method to integrate scientific and social networks

    NARCIS (Netherlands)

    Neshati, Mahmood; Asgari, Ehsaneddin; Hiemstra, Djoerd; Beigy, Hamid

    In this paper, we address the problem of scientific-social network integration to find a matching relationship between members of these networks. Utilizing several name similarity patterns and contextual properties of these networks, we design a focused crawler to find highly probable matching pairs, ...

  4. Fringe integral equation method for a truncated grounded dielectric slab

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Maci, S.; Toccafondi, A.

    2001-01-01

    The problem of scattering by a semi-infinite grounded dielectric slab illuminated by an arbitrary incident TMz polarized electric field is studied by solving a new set of “fringe” integral equations (F-IEs), whose functional unknowns are physically associated to the wave diffraction processes...

  5. Fourier path-integral Monte Carlo methods: Partial averaging

    International Nuclear Information System (INIS)

    Doll, J.D.; Coalson, R.D.; Freeman, D.L.

    1985-01-01

    Monte Carlo Fourier path-integral techniques are explored. It is shown that fluctuation renormalization techniques provide an effective means for treating the effects of high-order Fourier contributions. The resulting formalism is rapidly convergent, is computationally convenient, and has potentially useful variational aspects

  6. Writing Integrative Reviews of the Literature: Methods and Purposes

    Science.gov (United States)

    Torraco, Richard J.

    2016-01-01

    This article discusses the integrative review of the literature as a distinctive form of research that uses existing literature to create new knowledge. As an expansion and update of a previously published article on this topic, it acknowledges the growth and appeal of this form of research to scholars, it identifies the main components of the…

  7. Integral reactor system and method for fuel cells

    Science.gov (United States)

    Fernandes, Neil Edward; Brown, Michael S; Cheekatamarla, Praveen; Deng, Thomas; Dimitrakopoulos, James; Litka, Anthony F

    2013-11-19

    A reactor system is integrated internally within an anode-side cavity of a fuel cell. The reactor system is configured to convert hydrocarbons to smaller species while mitigating the production of solid carbon. The reactor system may incorporate one or more of a pre-reforming section, an anode exhaust gas recirculation device, and a reforming section.

  8. A comparison of point counts with a new acoustic sampling method: a case study of a bird community from the montane forests of Mount Cameroon

    Czech Academy of Sciences Publication Activity Database

    Sedláček, O.; Vokurková, J.; Ferenc, M.; Djomo Nana, E.; Albrecht, Tomáš; Hořák, D.

    2015-01-01

    Roč. 86, č. 3 (2015), s. 213-220 ISSN 0030-6525 R&D Projects: GA ČR(CZ) GAP505/11/1617 Institutional support: RVO:68081766 Keywords : abundance * automatic recording units * montane forest * point count * species richness * species turnover Subject RIV: EG - Zoology Impact factor: 0.418, year: 2015

  9. Digital Integration Method (DIM): A new method for the precise correlation of OCT and fluorescein angiography

    International Nuclear Information System (INIS)

    Hassenstein, A.; Richard, G.; Inhoffen, W.; Scholz, F.

    2007-01-01

    The new Digital Integration Method (DIM) provides for the first time the anatomically precise integration of the OCT scan position into the angiogram (fluorescein angiography, FLA), using reference markers at corresponding vessel crossings. An exact correlation of angiographic and morphological pathological findings is therefore possible and leads to a better understanding of OCT and FLA. Patients with occult findings in FLA profited most: occult leakages could gain additional information using DIM, such as serous detachment of the retinal pigment epithelium (RPE) in a topography. So far it was unclear whether the same localization in the lesion was examined by FLA and OCT, especially when different staff were performing and interpreting the examinations. Using DIM, this problem could be solved with objective markers. This technique is the requirement for follow-up examinations by OCT. Using DIM for an objective, reliable and precise correlation of OCT and FLA findings, it is now possible to provide the identical scan position in follow-up. For follow-up in clinical studies it is therefore mandatory to use DIM to improve the evidence-based statement of OCT and the quality of the study. (author) [de

  10. The effect of volume and quenching on estimation of counting efficiencies in liquid scintillation counting

    International Nuclear Information System (INIS)

    Knoche, H.W.; Parkhurst, A.M.; Tam, S.W.

    1979-01-01

    The effect of volume on the liquid scintillation counting performance of 14C samples has been investigated. A decrease in counting efficiency was observed for samples with volumes below about 6 ml and those above about 18 ml when unquenched samples were assayed. Two quench-correction methods, sample channels ratio and external standard channels ratio, and three different liquid scintillation counters were used in an investigation to determine the magnitude of the error in predicting counting efficiencies when small volume samples (2 ml) with different levels of quenching were assayed. The 2 ml samples exhibited slightly greater standard deviations of the difference between predicted and determined counting efficiencies than did 15 ml samples. Nevertheless, the magnitude of the errors indicates that if the sample channels ratio method of quench correction is employed, 2 ml samples may be counted in conventional counting vials with little loss in counting precision. (author)
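    The sample-channels-ratio method works by calibrating counting efficiency against the channels ratio of a set of quenched standards, then reading an unknown's efficiency off that curve. A minimal sketch with invented calibration points (real curves come from certified quenched standards):

    ```python
    # (channels ratio, counting efficiency) pairs from hypothetical
    # quenched 14C standards; the numbers are illustrative only.
    CALIBRATION = [(0.30, 0.55), (0.45, 0.70), (0.60, 0.82), (0.75, 0.90)]

    def efficiency_from_scr(scr):
        """Linear interpolation on the quench-correction curve."""
        pts = sorted(CALIBRATION)
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= scr <= x1:
                return y0 + (y1 - y0) * (scr - x0) / (x1 - x0)
        raise ValueError("channels ratio outside calibration range")

    cpm = 3700.0                     # observed count rate (hypothetical)
    eff = efficiency_from_scr(0.50)  # sample's channels ratio -> efficiency
    dpm = cpm / eff                  # activity in disintegrations per minute
    print(round(eff, 2), round(dpm)) # 0.74 5000
    ```

    Quenching shifts the pulse-height spectrum toward lower energies, which changes the channels ratio; the curve converts that shift back into an efficiency so the true activity can be recovered.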

  11. ADVANCING THE STUDY OF VIOLENCE AGAINST WOMEN USING MIXED METHODS: INTEGRATING QUALITATIVE METHODS INTO A QUANTITATIVE RESEARCH PROGRAM

    Science.gov (United States)

    Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol

    2011-01-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032

  12. An Adaptive Smoother for Counting Measurements

    International Nuclear Information System (INIS)

    Kondrasovs Vladimir; Coulon Romain; Normand Stephane

    2013-06-01

    Counting measurements with nuclear instruments are tricky to carry out due to the stochastic nature of radioactivity. Counted events have to be processed and filtered in order to display a stable count rate value and to allow monitoring of variations in the measured activity. Smoothers (such as the moving average) are adjusted by a time constant defined as a compromise between stability and response time. A new approach has been developed that improves the response time while maintaining count rate stability. It uses the combination of a smoother together with a detection filter. A memory of counting data is processed to calculate several count rate estimates using several integration times. These estimates are then sorted into the memory from short to long integration times. A measurement position, in terms of integration time, is then chosen from this memory after a detection test. An inhomogeneity in the Poisson counting process is detected by comparing the current position estimate with the other estimates contained in the memory, with respect to the associated statistical variance calculated under a homogeneity assumption. The measurement position (historical time) and the ability to forget obsolete data or to keep useful data in memory are managed using the detection test result. The proposed smoother is thus an adaptive, learning algorithm that optimizes the response time while maintaining counting stability and converges efficiently to the best count rate estimate after an effective change in activity. The algorithm also has the advantage of low recursion, and is thus easily embedded in DSP electronics based on FPGAs or micro-controllers meeting 'real life' time requirements. (authors)
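    A toy rendition of the idea described above (the window management and detection threshold here are assumptions, not the authors' exact algorithm): the integration window is extended as long as the longer-window estimate stays statistically consistent, under Poisson statistics, with the most recent bin, and is cut short when a change in activity is detected:

    ```python
    from collections import deque

    class AdaptiveCountSmoother:
        """Toy adaptive smoother: grow the integration window while the
        longer-window estimate passes a Poisson consistency test against
        the newest bin, so output is stable yet reacts to real changes."""

        def __init__(self, max_bins=64, n_sigma=3.0):
            self.history = deque(maxlen=max_bins)  # memory of counting data
            self.n_sigma = n_sigma                 # detection threshold

        def update(self, counts_in_bin):
            self.history.append(counts_in_bin)
            best = float(self.history[-1])         # shortest-window estimate
            total = 0
            for bins, c in enumerate(reversed(self.history), start=1):
                total += c
                estimate = total / bins
                sigma = total ** 0.5 / bins        # Poisson std. dev. of mean
                if abs(best - estimate) > self.n_sigma * max(sigma, 1e-12):
                    break                          # change detected: stop
                best = estimate
            return best

    s = AdaptiveCountSmoother()
    for _ in range(20):
        steady = s.update(100)   # steady activity: long window, stable 100.0
    stepped = s.update(400)      # step change: short window, follows to 400.0
    print(steady, stepped)       # 100.0 400.0
    ```

    With a steady source the window grows and the variance of the displayed rate shrinks; after a genuine activity change the detection test rejects the stale history, giving the fast response the abstract describes.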

  13. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2014-01-01

    The aim in this paper is to develop a new local defect correction approach to gridding for problems with localised regions of high activity in the boundary element method. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods.

  14. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2013-01-01

    This paper presents a new approach to gridding for problems with localised regions of high activity. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods. In this paper we develop the technique for the boundary element method.

  15. Accuracy and precision in activation analysis: counting

    International Nuclear Information System (INIS)

    Becker, D.A.

    1974-01-01

    Accuracy and precision in activation analysis were investigated with regard to the counting of induced radioactivity. The various parameters discussed include configuration, positioning, density, homogeneity, intensity, radioisotopic purity, peak integration, and nuclear constants. Experimental results are presented for many of these parameters. The results obtained indicate that counting errors often contribute significantly to the inaccuracy and imprecision of analyses. The magnitude of these errors ranges from less than 1 percent to 10 percent or more in many cases.
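    The error magnitudes quoted above set a floor given by Poisson counting statistics, where the relative 1-sigma uncertainty of N accumulated counts is 1/sqrt(N); a minimal sketch:

    ```python
    import math

    def relative_uncertainty(counts):
        """1-sigma relative uncertainty of a Poisson count: 1/sqrt(N)."""
        return 1.0 / math.sqrt(counts)

    def counts_for_precision(rel_sigma):
        """Counts needed to reach a target relative uncertainty."""
        return math.ceil(1.0 / rel_sigma ** 2)

    print(counts_for_precision(0.01))           # 10000 counts for 1 % precision
    print(round(relative_uncertainty(100), 2))  # 0.1 -> 10 % at only 100 counts
    ```

    Configuration, positioning, and purity errors then add on top of this irreducible statistical term, which is why they can dominate in the worst cases reported.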

  16. Remote system for counting of nuclear pulses

    International Nuclear Information System (INIS)

    Nieves V, J.A.; Garcia H, J.M.; Aguilar B, M.A.

    1999-01-01

    This work gives a technical description of the remote system for counting of nuclear pulses, an integral part of the radiological monitoring project in a petroleum distillation tower. The system acquires the count of nuclear particles incident on a nuclear detector, processes this information, and sends it in serial form over RS-485 to a remote receiver, which can be a personal computer or any other device capable of interpreting the communication protocol. (Author)

  17. Developments of integrated laser crystals by a direct bonding method

    International Nuclear Information System (INIS)

    Sugiyama, Akira; Fukuyama, Hiroyasu; Katsumata, Masaki; Tanaka, Mitsuhiro; Okada, Yukikatu

    2003-01-01

    Laser crystal integration using a neodymium-doped yttrium vanadate (or orthovanadate) laser crystal and non-doped yttrium vanadate crystals that function as cold fingers has been demonstrated. A newly developed dry etching process was adopted in preparing the mechanically polished surfaces for contact. In the heat treatment process, temperature optimization was essential to avoid precipitation of vanadic acid caused by the thermo-chemical reaction in a vacuum furnace. The bonded crystal was studied through its optical characteristics, magnified inspection, and laser output performance when pumped by a CW laser diode. From these experiments, it was clear that the integrated Nd:YVO4 laser crystal, with its well-improved thermal conductivity, can deliver nearly twice the laser output power of the conventional single crystal, which cracked under high-power laser pumping of 10 W due to its intrinsically poor thermal conductivity. (author)

  18. Symplectic integrators for large scale molecular dynamics simulations: A comparison of several explicit methods

    International Nuclear Information System (INIS)

    Gray, S.K.; Noid, D.W.; Sumpter, B.G.

    1994-01-01

    We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long time trajectories of a 1000 unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are deduced. We also discuss certain variations on the basic symplectic integration technique
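    As a minimal illustration of this class of methods (velocity Verlet is the simplest explicit symplectic integrator; the abstract does not state which specific schemes were compared), note how the energy of a harmonic oscillator stays bounded over a long trajectory instead of drifting:

    ```python
    def velocity_verlet(x, v, force, dt, steps):
        """Explicit symplectic integration of dx/dt = v, dv/dt = force(x)."""
        f = force(x)
        for _ in range(steps):
            v_half = v + 0.5 * dt * f   # half-kick
            x = x + dt * v_half         # drift
            f = force(x)
            v = v_half + 0.5 * dt * f   # half-kick
        return x, v

    # Unit harmonic oscillator: force = -x; exact energy is 0.5 for x0=1, v0=0.
    x, v = velocity_verlet(1.0, 0.0, lambda q: -q, 0.01, 100_000)
    energy = 0.5 * v * v + 0.5 * x * x
    print(abs(energy - 0.5) < 1e-4)  # True: no secular energy drift
    ```

    The bounded energy error over 100,000 steps is the hallmark of symplectic schemes, and the low memory footprint (just positions, velocities, and one force array) is what makes them attractive for large polymer simulations like the 1000-unit chain studied here.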

  19. Pesticides and public health: integrated methods of mosquito management.

    OpenAIRE

    Rose, R. I.

    2001-01-01

    Pesticides have a role in public health as part of sustainable integrated mosquito management. Other components of such management include surveillance, source reduction or prevention, biological control, repellents, traps, and pesticide-resistance management. We assess the future use of mosquito control pesticides in view of niche markets, incentives for new product development, Environmental Protection Agency registration, the Food Quality Protection Act, and improved pest management strate...

  20. Application of dematel method in integrated framework of corporate governance

    OpenAIRE

    Klozíková, Jana; Dočkalíková, Iveta

    2015-01-01

    Corporate governance emerged in recent decades and can be considered a new field of science. Even the most famous companies have failed from one day to the next, and their failures and scandals have had a significant impact on local and international communities. Finding a new, effective framework for assessing the level of corporate governance can help ensure that similar negative events are never repeated. The new approach in corporate governance, an integrated framework created for corporate governance, is one ...