WorldWideScience

Sample records for rapid estimation method

  1. A simple and rapid method to estimate radiocesium in man

    International Nuclear Information System (INIS)

    Kindl, P.; Steger, F.

    1990-09-01

    A simple and rapid method for monitoring internal contamination with radiocesium in man was developed. The method is based on measuring the γ-rays emitted from the muscular parts between the thighs with a simple NaI(Tl) system. The experimental procedure, the calibration, the estimation of the body activity, and the results are explained and discussed. (Authors)

  2. A rapid radiobioassay method for strontium estimation in nuclear/radiological emergencies

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Sawant, Pramilla D.; Rao, D.D.; Pradeepkumar, K.S.

    2014-01-01

    During a nuclear/radiological emergency, workers as well as members of the public (MOP) may become internally contaminated with radionuclides such as Sr and Cs. In such situations, a truly rapid radiobioassay method is required to screen a large number of people, both to assess internal contamination and to decide on subsequent medical intervention. The precipitation method currently used at the Bioassay Lab., Trombay is lengthy and laborious. Efforts are being made to optimize bioassay methods at Bhabha Atomic Research Centre using the Solid Extraction Chromatography (SEC) technique for emergency response. The present work reports the standardization of the SEC technique for rapid estimation of Sr in urine samples. The method standardized using Sr spec resin is simpler and faster, and yields higher recoveries and reproducible results. It is well suited for quick dose assessment of 90Sr in bioassay samples in case of emergency.

  3. Are rapid population estimates accurate? A field trial of two different assessment methods.

    Science.gov (United States)

    Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent

    2006-09-01

    Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field-tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although the results are directly applicable only to similar sites, several general conclusions can be drawn for emergency planning.
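The Quadrat extrapolation described above reduces to one line of arithmetic: the mean population of the sampled blocks scaled by the ratio of site area to block area. A minimal sketch, with hypothetical counts and areas:

```python
def quadrat_population_estimate(block_counts, block_area_m2, site_area_m2):
    """Extrapolate the mean population of sampled blocks of known area
    to the total site surface (the Quadrat method's core arithmetic)."""
    mean_per_block = sum(block_counts) / len(block_counts)
    return mean_per_block * (site_area_m2 / block_area_m2)

# Hypothetical survey: ten 25 m x 25 m blocks sampled in a 20-hectare site
counts = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]
est = quadrat_population_estimate(counts, block_area_m2=625, site_area_m2=200_000)
```

The T-Square estimator additionally requires the measured distances between random points and housing units, so it is not reproduced here.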

  4. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    Science.gov (United States)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observations in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem, to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term the Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to basic LQG control, whose computational cost scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm, with small and controllable losses in the accuracy of the state and parameter estimation.
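The LQG baseline mentioned above couples a Kalman filter with linear-quadratic feedback; its control half is the finite-horizon Riccati recursion. A small NumPy sketch on a hypothetical two-state system (this is the textbook baseline the RFC is compared against, not the RFC itself):

```python
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR via the backward Riccati recursion.
    Returns feedback gains K_t such that u_t = -K_t @ x_t."""
    P, gains = Q.copy(), []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]          # gains in forward-time order

# Hypothetical discretized two-state surrogate model
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K0 = lqr_gains(A, B, Q=np.eye(2), R=np.array([[1.0]]), horizon=50)[0]
```

The quadratic scaling the paper highlights comes from propagating the covariance of every unknown in full LQG; the Riccati step above acts only on the small control state.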

  5. A rapid method for estimation of Pu-isotopes in urine samples using high volume centrifuge.

    Science.gov (United States)

    Kumar, Ranjeet; Rao, D D; Dubla, Rupali; Yadav, J R

    2017-07-01

    The conventional radio-analytical technique used for estimation of Pu-isotopes in urine samples involves anion exchange/TEVA column separation followed by alpha spectrometry. This sequence of analysis takes nearly 3-4 days to complete. Excreta analysis results are often required urgently, particularly in repeat and incidental/emergency situations. Therefore, there is a need to reduce the analysis time for the estimation of Pu-isotopes in bioassay samples. This paper gives the details of the standardization of a rapid method for estimation of Pu-isotopes in urine samples using a multi-purpose centrifuge and TEVA resin followed by alpha spectrometry. The rapid method involves oxidation of urine samples, co-precipitation of plutonium along with calcium phosphate, sample preparation using a high-volume centrifuge, and separation of Pu using TEVA resin. The Pu fraction was electrodeposited and its activity estimated by alpha spectrometry using 236Pu tracer recovery. Ten routine urine samples of radiation workers were analyzed, and consistent radiochemical tracer recovery was obtained in the range 47-88%, with a mean and standard deviation of 64.4% and 11.3%, respectively. With this newly standardized technique, the whole analytical procedure is completed within 9 h (one working day).

  6. Benchmarking electrical methods for rapid estimation of root biomass.

    Science.gov (United States)

    Postic, François; Doussan, Claude

    2016-01-01

    To face climate change and subsequent rainfall instabilities, crop breeding strategies now include root trait phenotyping. Rapid estimation of root traits in controlled conditions can be achieved by using parallel electrical capacitance and its linear correlation with root dry mass. The aim of the present study was to improve the robustness and efficiency of methods based on capacitance and other electrical variables, such as serial/parallel resistance, conductance, impedance or reactance. Using different electrode configurations and stem contact electrodes, we measured the electrical impedance spectra of wheat plants grown in pots filled with three types of soil. For each configuration, parallel capacitance and other linearly independent electrical variables were computed, and their quality as a root dry mass estimator was evaluated by a 'sensitivity score' derived from Pearson's correlation coefficient r and the linear regression parameters. The highest sensitivity score was obtained by parallel capacitance at an alternating current frequency of 116 Hz in the three-terminal configuration. Using a clamp, instead of a needle, as the stem electrode did not significantly affect the capacitance measurements. Finally, in conditions equivalent to a handheld LCR meter, capacitance had the highest sensitivity score and determination coefficient (r² = 0.52) at 10 kHz. Our benchmarking of the linear correlations between different electrical variables and root dry mass makes it possible to define more coherent practices for sensitive and robust root dry mass estimation, including in handheld LCR meter conditions. This would enhance the value of electrical capacitance as a tool for screening crops in relation to root systems in breeding programs.
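The paper's 'sensitivity score' is derived from Pearson's r and the regression parameters; the exact formula is not given in the abstract, but the underlying correlation between an electrical variable and root dry mass can be computed as follows (all measurement values hypothetical):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson's correlation coefficient between two variables."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical data: parallel capacitance (nF) vs. root dry mass (g)
capacitance = [1.1, 1.9, 3.2, 4.1, 4.8, 6.0]
root_mass   = [0.20, 0.35, 0.55, 0.70, 0.85, 1.05]
r = pearson_r(capacitance, root_mass)
r_squared = r ** 2          # the determination coefficient reported above
```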

  7. A rapid reliability estimation method for directed acyclic lifeline networks with statistically dependent components

    International Nuclear Information System (INIS)

    Kang, Won-Hee; Kliese, Alyce

    2014-01-01

    Lifeline networks, such as transportation, water supply, sewers, telecommunications, and electrical and gas networks, are essential elements for the economic and societal functions of urban areas, but their components are highly susceptible to natural or man-made hazards. In this context, it is essential to provide effective pre-disaster hazard mitigation strategies and prompt post-disaster risk management efforts based on rapid system reliability assessment. This paper proposes a rapid reliability estimation method for node-pair connectivity analysis of lifeline networks, especially when the network components are statistically correlated. Recursive procedures are proposed to compound all network nodes until they become a single super node representing the connectivity between the origin and destination nodes. The proposed method is applied to numerical network examples and to benchmark interconnected power and water networks in Memphis, Shelby County. The connectivity analysis results show the proposed method's reasonable accuracy and remarkable efficiency compared to Monte Carlo simulations.
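The Monte Carlo baseline the method is benchmarked against can be sketched as below. For brevity the sketch assumes independent component failures, whereas the paper's contribution is precisely handling statistically correlated components; the network and probabilities are hypothetical:

```python
import random

def mc_connectivity(edges, p_fail, origin, dest, trials=20_000, seed=1):
    """Monte Carlo estimate of origin-destination connectivity for a
    directed network whose edges fail independently with given probability."""
    rng, hits = random.Random(seed), 0
    for _ in range(trials):
        alive = [e for e in edges if rng.random() > p_fail[e]]
        frontier, seen = [origin], {origin}
        while frontier:                      # BFS over surviving edges
            u = frontier.pop()
            for a, b in alive:
                if a == u and b not in seen:
                    seen.add(b)
                    frontier.append(b)
        hits += dest in seen
    return hits / trials

# Two parallel two-edge paths from O to D, each edge failing with p = 0.1;
# the exact connectivity is 1 - (1 - 0.9**2)**2 = 0.9639
edges = [("O", "1"), ("1", "D"), ("O", "2"), ("2", "D")]
rel = mc_connectivity(edges, {e: 0.1 for e in edges}, "O", "D")
```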

  8. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gilani, Syed Irtiza Ali

    2008-09-15

    Recent technological developments in the field of magnetic resonance imaging have resulted in advanced techniques that can reduce the total time to acquire images. For applications such as relaxation time mapping, which enables improved visualisation of in vivo structures, rapid imaging techniques are highly desirable. TAPIR is a Look-Locker-based sequence for high-resolution, multislice T1 relaxation time mapping. Despite the high accuracy and precision of TAPIR, an improvement in the k-space sampling trajectory is desired to acquire data in clinically acceptable times. In this thesis, a new trajectory, termed line-sharing, is introduced for TAPIR that can potentially reduce the acquisition time by 40%. Additionally, the line-sharing method was compared with the GRAPPA parallel imaging method. These methods were employed to reconstruct time-point images from data acquired on a 4T high-field MR research scanner. Multislice, multipoint in vivo results obtained using these methods are presented. Despite the improvement in acquisition speed through line-sharing, motion remains a problem and artefact-free data cannot always be obtained. Therefore, a rapid technique to estimate in-plane motion is also introduced in this thesis. The presented technique is based on calculating the in-plane motion parameters, i.e., translation and rotation, by registering the low-resolution MR images. The rotation estimation method is based on the pseudo-polar FFT, where the Fourier domain is composed of frequencies that reside in an oversampled set of non-angularly equispaced points. The essence of the method is that, unlike other Fourier-based registration schemes, the employed approach does not require any interpolation to calculate the pseudo-polar FFT grid coordinates. Translation parameters are estimated by the phase correlation method; however, instead of a two-dimensional analysis of the phase correlation matrix, a low-complexity subspace identification of the phase is employed.
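The phase-correlation step for translation estimation is standard and compact; a minimal NumPy sketch for integer shifts (the thesis' low-complexity subspace refinement is not reproduced here) is:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Return (dy, dx) such that a ~= np.roll(b, (dy, dx), axis=(0, 1)),
    found from the peak of the normalized cross-power spectrum."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12               # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                     # wrap to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.random((64, 64))
moved = np.roll(img, shift=(5, -3), axis=(0, 1))
dy, dx = phase_correlation_shift(moved, img)     # recovers (5, -3)
```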

  9. Development of rapid methods for relaxation time mapping and motion estimation using magnetic resonance imaging

    International Nuclear Information System (INIS)

    Gilani, Syed Irtiza Ali

    2008-09-01

    Recent technological developments in the field of magnetic resonance imaging have resulted in advanced techniques that can reduce the total time to acquire images. For applications such as relaxation time mapping, which enables improved visualisation of in vivo structures, rapid imaging techniques are highly desirable. TAPIR is a Look-Locker-based sequence for high-resolution, multislice T1 relaxation time mapping. Despite the high accuracy and precision of TAPIR, an improvement in the k-space sampling trajectory is desired to acquire data in clinically acceptable times. In this thesis, a new trajectory, termed line-sharing, is introduced for TAPIR that can potentially reduce the acquisition time by 40%. Additionally, the line-sharing method was compared with the GRAPPA parallel imaging method. These methods were employed to reconstruct time-point images from data acquired on a 4T high-field MR research scanner. Multislice, multipoint in vivo results obtained using these methods are presented. Despite the improvement in acquisition speed through line-sharing, motion remains a problem and artefact-free data cannot always be obtained. Therefore, a rapid technique to estimate in-plane motion is also introduced in this thesis. The presented technique is based on calculating the in-plane motion parameters, i.e., translation and rotation, by registering the low-resolution MR images. The rotation estimation method is based on the pseudo-polar FFT, where the Fourier domain is composed of frequencies that reside in an oversampled set of non-angularly equispaced points. The essence of the method is that, unlike other Fourier-based registration schemes, the employed approach does not require any interpolation to calculate the pseudo-polar FFT grid coordinates. Translation parameters are estimated by the phase correlation method; however, instead of a two-dimensional analysis of the phase correlation matrix, a low-complexity subspace identification of the phase is employed.

  10. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations.

    Science.gov (United States)

    Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris

    2013-01-23

    Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced), ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site, and most of these reported people per household rather than per structure. The precision of the imagery-based method relative to the reference population figures depended on site layout. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current applications. However, it may have insurmountable limitations in settings featuring connected buildings or shelters, complex roof patterns, or multi-level buildings. Based on these results, we discuss possible ways forward for the method's development.
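The core arithmetic of this method — structure count times assumed occupancy — is simple enough to show directly (all numbers hypothetical):

```python
def population_from_imagery(structure_count, occ_low, occ_high):
    """Point estimate and crude range for a displaced population from a
    satellite-image structure count and an assumed occupancy interval
    (mean people per residential structure)."""
    midpoint = structure_count * (occ_low + occ_high) / 2
    return midpoint, structure_count * occ_low, structure_count * occ_high

# Hypothetical camp: 1,800 counted structures, 4.2-5.6 people per structure
est, low, high = population_from_imagery(1_800, 4.2, 5.6)
```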

  11. Rapid bioassay method for estimation of 90Sr in urine samples by liquid scintillation counting

    International Nuclear Information System (INIS)

    Wankhede, Sonal; Chaudhary, Seema; Sawant, Pramilla D.

    2018-01-01

    Radiostrontium (Sr) is a by-product of the nuclear fission of uranium and plutonium in nuclear reactors and is an important radionuclide in spent nuclear fuel and radioactive waste. Rapid bioassay methods are required for estimating Sr in urine following internal contamination; decisions regarding medical intervention, if any, can be based upon the results of urinalysis. The method currently used at the Bioassay Laboratory, Trombay is based on the Solid Extraction Chromatography (SEC) technique: the Sr separated from the urine sample is precipitated as SrCO3 and analyzed gravimetrically. However, the gravimetric procedure is time-consuming; therefore, in the present study, the feasibility of liquid scintillation counting for direct detection of radiostrontium in the effluent was explored. The results obtained in the present study were compared with those from the gravimetric method.

  12. A new rapid method for rockfall energies and distances estimation

    Science.gov (United States)

    Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric

    2016-04-01

    The proposed method relates rockfall energies and distances at the base of a cliff to block and slope features. The validation of the proposed approach was conducted by comparing predictions to experimental data collected in the field and gathered from the scientific literature. The method can be used for both natural and constructed slopes and easily extended to more complicated and articulated slope geometries. The study shows its great potential for a quick qualitative hazard assessment, providing an indication of the impact energy and horizontal distance of the first impact at the base of a rock cliff. Nevertheless, its application cannot substitute for the more detailed quantitative analysis required for site-specific design of mitigation measures. Acknowledgements: The authors gratefully acknowledge the financial support of the Australian Coal Association Research Program (ACARP). References: Dorren, L.K.A. (2003) A review of rockfall mechanics and modelling approaches, Progress in Physical Geography 27(1), 69-87. Agliardi, F., Crosta, G.B., Frattini, P. (2009) Integrating rockfall risk assessment and countermeasure design by 3D modelling techniques. Natural Hazards and Earth System Sciences 9(4), 1059-1073. Ferrari, F., Thoeni, K., Giacomini, A., Lambert, C. (2016) A rapid approach to estimate the rockfall energies and distances at the base of rock cliffs. Georisk, DOI: 10.1080/17499518.2016.1139729.
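As rough orientation only: for a block leaving a vertical cliff edge horizontally, textbook ballistics (no drag, no rebound) already links first-impact distance and energy to release speed and cliff height. This is not the calibrated approach of Ferrari et al. (2016), just the underlying physics, with hypothetical values:

```python
import math

def first_impact(v0, height, mass, g=9.81):
    """Horizontal distance (m) and total kinetic energy (J) at the first
    impact of a block launched horizontally from a cliff of given height."""
    t_fall = math.sqrt(2 * height / g)                 # free-fall time
    distance = v0 * t_fall
    energy = 0.5 * mass * v0**2 + mass * g * height    # energy conservation
    return distance, energy

# Hypothetical 500 kg block leaving a 20 m cliff at 5 m/s
x, e = first_impact(v0=5.0, height=20.0, mass=500.0)
```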

  13. Lithium-Ion Battery Online Rapid State-of-Power Estimation under Multiple Constraints

    Directory of Open Access Journals (Sweden)

    Shun Xiang

    2018-01-01

    The paper aims to realize rapid online estimation of the state-of-power (SOP) of a lithium-ion battery under multiple constraints. Firstly, based on an improved first-order resistance-capacitance (RC) model with one-state hysteresis, a linear state-space battery model is built; then, using the dual extended Kalman filtering (DEKF) method, the battery parameters and states, including the open-circuit voltage (OCV), are estimated. Secondly, by employing the estimated OCV as the observed value for a second pair of dual Kalman filters, the battery SOC is estimated. Thirdly, a novel rapid peak-power/SOP calculation method with multiple constraints is proposed, in which the battery's peak state is determined according to a bisection judgment method, and one or two instantaneous peak powers are then used to determine the peak power over T seconds. In addition, the actual constraint governing the battery during operation is analyzed specifically. Finally, three simplified versions of the Federal Urban Driving Schedule (SFUDS) with inserted pulses are used in experiments to verify the effectiveness and accuracy of the proposed online SOP estimation method.
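For intuition, the instantaneous peak-power idea can be sketched on a bare internal-resistance (Rint) model, far simpler than the paper's RC-with-hysteresis model and DEKF machinery; whichever of the voltage or current limit binds first governs the peak (all cell parameters hypothetical):

```python
def peak_discharge_power(ocv, r0, v_min, i_max):
    """Instantaneous peak discharge power of an Rint cell model under a
    terminal-voltage floor v_min and a current ceiling i_max."""
    i_at_vmin = (ocv - v_min) / r0       # current that pulls terminal to v_min
    if i_at_vmin <= i_max:               # voltage constraint binds
        return v_min * i_at_vmin
    return i_max * (ocv - i_max * r0)    # current constraint binds

# Hypothetical cell: OCV 3.7 V, 10 mOhm, 3.0 V cutoff, 50 A limit
p = peak_discharge_power(ocv=3.7, r0=0.01, v_min=3.0, i_max=50.0)
```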

  14. Rapid Estimation of Gustatory Sensitivity Thresholds with SIAM and QUEST

    Directory of Open Access Journals (Sweden)

    Richard Höchenberger

    2017-06-01

    Adaptive methods provide quick and reliable estimates of sensory sensitivity. Yet these procedures are typically developed for and applied to the non-chemical senses only, i.e., vision, audition, and somatosensation. The relatively long inter-stimulus intervals in gustatory studies, which are required to minimize adaptation and habituation, call for time-efficient threshold estimation. We therefore tested the suitability of two adaptive yes-no methods based on SIAM and QUEST for rapid estimation of taste sensitivity by comparing test-retest reliability for sucrose, citric acid, sodium chloride, and quinine hydrochloride thresholds. We show that taste thresholds can be obtained in a time-efficient manner with both methods (within only 6.5 min on average using QUEST and ~9.5 min using SIAM). QUEST yielded higher test-retest correlations than SIAM for three of the four tastants. Either method allows taste threshold estimation with low strain on participants, rendering them particularly advantageous for use in subjects with limited attentional or mnemonic capacities, and for time-constrained applications during cohort studies or in the testing of patients and children.
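SIAM's single-interval adjustment matrix and QUEST's Bayesian posterior update are more elaborate than can be shown here, but the adaptive yes-no principle they share — lower the level after a detection, raise it after a miss — reduces to a staircase. A toy sketch with a deterministic observer and hypothetical units:

```python
def staircase_threshold(respond, start, step, n_trials=40):
    """Basic 1-up/1-down staircase for a yes-no detection task; the tested
    level oscillates around the observer's threshold."""
    level, levels = start, []
    for _ in range(n_trials):
        level = level - step if respond(level) else level + step
        levels.append(level)
    return sum(levels[-10:]) / 10      # average over the final oscillation

# Toy observer that deterministically detects concentrations >= 2.0
thr = staircase_threshold(lambda c: c >= 2.0, start=5.0, step=0.5)
```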

  15. A rapid and highly selective method for the estimation of pyro-, tri- and orthophosphates.

    Science.gov (United States)

    Kamat, D R; Savant, V V; Sathyanarayana, D N

    1995-03-01

    A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1, and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)2HP2O7]·4H2O and [Co(en)2H2P3O10]·2H2O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving a 99.5% gravimetric yield in both cases. There are no interferences from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining the pyrophosphate and triphosphate contents of various matrices. In the case of orthophosphate, the proposed method differs from the available methods, such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.

  16. Rapid, Simple, and Sensitive Spectrofluorimetric Method for the Estimation of Ganciclovir in Bulk and Pharmaceutical Formulations

    Directory of Open Access Journals (Sweden)

    Garima Balwani

    2013-01-01

    A new, simple, rapid, sensitive, accurate, and affordable spectrofluorimetric method was developed and validated for the estimation of ganciclovir in bulk as well as in marketed formulations. The method is based on measuring the native fluorescence of ganciclovir in 0.2 M hydrochloric acid buffer of pH 1.2 at 374 nm after excitation at 257 nm. The calibration graph was found to be rectilinear over the concentration range 0.25–2.00 μg mL⁻¹. The limit of quantification and limit of detection were found to be 0.029 μg mL⁻¹ and 0.010 μg mL⁻¹, respectively. The method was fully validated for various parameters according to ICH guidelines. The results demonstrated that the procedure is accurate, precise, and reproducible (relative standard deviation <2%), and can be successfully applied for the determination of ganciclovir in its commercial capsules, with an average percentage recovery of 101.31 ± 0.90.
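Detection and quantification limits like those reported above are commonly obtained via the ICH recipe — LOD = 3.3σ/S and LOQ = 10σ/S, with σ the residual standard deviation of the calibration line and S its slope. A sketch with hypothetical calibration data (not the paper's):

```python
import numpy as np

def lod_loq(conc, signal):
    """ICH-style limits of detection and quantification from a linear
    calibration: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    slope, intercept = np.polyfit(conc, signal, 1)
    fit = slope * np.asarray(conc, float) + intercept
    sigma = (np.asarray(signal, float) - fit).std(ddof=2)   # residual std
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical fluorescence calibration (conc in ug/mL, signal in a.u.)
conc = [0.25, 0.5, 1.0, 1.5, 2.0]
signal = [105, 208, 418, 622, 830]
lod, loq = lod_loq(conc, signal)
```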

  17. Development of a new, rapid and sensitive HPTLC method for estimation of Milnacipran in bulk, formulation and compatibility study

    Directory of Open Access Journals (Sweden)

    Gautam Singhvi

    2017-05-01

    A simple, sensitive and rapid high-performance thin-layer chromatographic (HPTLC) method has been developed and validated for the quantitative determination of Milnacipran Hydrochloride (MIL) in bulk and formulations. The chromatographic development was carried out on HPTLC plates precoated with silica gel 60 F254, using a mixture of acetonitrile, water and ammonia (6:0.6:1.6, v/v/v) as the mobile phase. Detection was carried out densitometrically at 220 nm. The Rf value of the drug was found to be 0.63 ± 0.02. The method was validated as per ICH guidelines with respect to linearity, accuracy, precision, robustness, etc. The calibration curve was found to be linear over the range 100–1000 ng μL⁻¹, with a regression coefficient of 0.999. The accuracy was found to be very high (99.12–100.87%), and %RSD values for intra-day and inter-day variation were not more than 1.43. The method demonstrated high sensitivity and specificity and was also applied to compatibility studies. Being simple and economical, it is suitable for routine, rapid and low-cost estimation of MIL in bulk, preformulation studies and pharmaceutical formulations, by industry as well as researchers.

  18. Rapid estimation of the economic consequences of global earthquakes

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.

    2011-01-01

    There is a need to reduce this time gap in order to mobilize response more rapidly and effectively. We present here a procedure to rapidly and approximately ascertain the economic impact immediately following a large earthquake anywhere in the world. In principle, the approach presented is similar to the empirical fatality estimation methodology proposed and implemented by Jaiswal and others (2009). In order to estimate economic losses, we need an assessment of the economic exposure at various levels of shaking intensity. The economic value of all the physical assets exposed at different locations in a given area is generally not known and extremely difficult to compile at a global scale. In the absence of such a dataset, we first estimate the total Gross Domestic Product (GDP) exposed at each shaking intensity by multiplying the per-capita GDP of the country by the total population exposed at that shaking intensity level. We then scale the total GDP estimated at each intensity by an exposure correction factor, a multiplier that accounts for the disparity between wealth and/or economic assets and the annual GDP. The economic exposure obtained using this procedure is thus a proxy estimate for the economic value of the actual inventory that is exposed to the earthquake. The economic loss ratio, defined in terms of a country-specific lognormal cumulative distribution function of shaking intensity, is derived and calibrated against the losses from past earthquakes. This report describes the development of a country- or region-specific economic loss ratio model using economic loss data available for global earthquakes from 1980 to 2007. The proposed model is a potential candidate for directly estimating economic losses within the currently operating PAGER system. PAGER's other loss models use indirect methods that require substantially more data (such as building/asset inventories, vulnerabilities, and the asset values exposed at the time of the earthquake) to implement on a global basis.
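The loss chain described above — population per intensity bin × per-capita GDP × an exposure correction factor × a lognormal loss ratio — can be sketched as follows; all country parameters (theta, beta, alpha) are hypothetical placeholders, not PAGER's calibrated values:

```python
import math

def loss_ratio(mmi, theta, beta):
    """Lognormal CDF of shaking intensity: the fraction of exposed
    economic value lost at intensity mmi."""
    return 0.5 * (1.0 + math.erf(math.log(mmi / theta) / (beta * math.sqrt(2.0))))

def economic_loss(pop_by_mmi, gdp_per_capita, alpha, theta, beta):
    """Sum exposed GDP (population x per-capita GDP, scaled by the exposure
    correction factor alpha) times the loss ratio, over intensity bins."""
    return sum(pop * gdp_per_capita * alpha * loss_ratio(mmi, theta, beta)
               for mmi, pop in pop_by_mmi.items())

# Hypothetical event: population exposed per shaking-intensity level
pop_by_mmi = {6: 500_000, 7: 200_000, 8: 50_000}
loss = economic_loss(pop_by_mmi, gdp_per_capita=4_000, alpha=3.0,
                     theta=9.0, beta=0.3)
```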

  19. An evaluation of rapid methods for monitoring vegetation characteristics of wetland bird habitat

    Science.gov (United States)

    Tavernia, Brian G.; Lyons, James E.; Loges, Brian W.; Wilson, Andrew; Collazo, Jaime A.; Runge, Michael C.

    2016-01-01

    Wetland managers benefit from monitoring data of sufficient precision and accuracy to assess wildlife habitat conditions and to evaluate and learn from past management decisions. For large-scale monitoring programs focused on waterbirds (waterfowl, wading birds, secretive marsh birds, and shorebirds), precision and accuracy of habitat measurements must be balanced with fiscal and logistic constraints. We evaluated a set of protocols for rapid, visual estimates of key waterbird habitat characteristics made from the wetland perimeter against estimates from (1) plots sampled within wetlands, and (2) cover maps made from aerial photographs. Estimated percent cover of annuals and perennials using a perimeter-based protocol fell within 10% of plot-based estimates, and percent cover estimates for seven vegetation height classes were within 20% of plot-based estimates. Perimeter-based estimates of total emergent vegetation cover did not differ significantly from cover map estimates. Post-hoc analyses revealed evidence for observer effects in estimates of annual and perennial cover and vegetation height. Median time required to complete perimeter-based methods was less than 7% of the time needed for intensive plot-based methods. Our results show that rapid, perimeter-based assessments, which increase sample size and efficiency, provide vegetation estimates comparable to more intensive methods.

  20. A rapid method to estimate uranium using ionic liquid as extracting agent from basic aqueous media

    International Nuclear Information System (INIS)

    Prabhath Ravi, K.; Sathyapriya, R.S.; Rao, D.D.; Ghosh, S.K.

    2016-01-01

    Room-temperature ionic liquids, as their name suggests, are salts with melting points typically below 100 °C that exist as liquids at room temperature. The common cationic parts of ionic liquids are imidazolium, pyridinium, pyrrolidinium, quaternary ammonium, or phosphonium ions, and common anionic parts are chloride, bromide, tetrafluoroborate, hexafluorophosphate, triflimide, etc. The physical properties of ionic liquids can be tuned by choosing appropriate cations with differing alkyl chain lengths and anions. Applications of ionic liquids in organic synthesis, liquid-liquid extraction, electrochemistry, catalysis, speciation studies and nuclear reprocessing have been studied extensively in recent years. In this paper, a rapid method to estimate the uranium content of aqueous media by extraction with the room-temperature ionic liquid tricaprylammonium thiosalicylate ((A-336)(TS)), followed by liquid scintillation analysis, is described. Re-extraction of uranium from the ionic liquid phase to the aqueous phase was also studied.

  21. A rapid method for the separation and estimation of uranium in geological materials using ion chromatography

    International Nuclear Information System (INIS)

    Prakash, Satya; Bangroo, P.N.

    2013-01-01

    Ion chromatography is an elegant analytical technique that was primarily developed for the analysis of anionic species; over the years it has been used successfully to analyse various elements in different matrices. In this work, the potential of ion chromatography has been exploited for the rapid separation and estimation of uranium in hydrogeochemical and other geological materials.

  2. CTER—Rapid estimation of CTF parameters with error assessment

    Energy Technology Data Exchange (ETDEWEB)

    Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, the bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.
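
    The bootstrap error-assessment idea described above can be sketched generically: re-estimate a parameter on resampled data many times and report the spread of the re-estimates. The sketch below is a minimal Python illustration, not the CTER implementation; the defocus values and the use of the mean as the estimator are assumptions for demonstration only.

```python
import random
import statistics

def bootstrap_std(samples, n_resamples=1000, estimator=statistics.mean, seed=1):
    """Standard deviation of an estimator under bootstrap resampling:
    draw len(samples) values with replacement, re-apply the estimator,
    and report the spread of the re-estimates."""
    rng = random.Random(seed)
    re_estimates = [
        estimator([rng.choice(samples) for _ in samples])
        for _ in range(n_resamples)
    ]
    return statistics.stdev(re_estimates)

# Hypothetical per-region defocus estimates (micrometres), not CTER output
defocus = [2.31, 2.28, 2.35, 2.30, 2.33, 2.29, 2.36, 2.32]
print(round(bootstrap_std(defocus), 4))
```

    A micrograph whose bootstrap standard deviation exceeds a chosen threshold can then be flagged automatically, which is the screening use the abstract describes.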

  3. Rapid prototyping: a very promising method

    NARCIS (Netherlands)

    Haverman, T.M.; Karagozoglu, K.H.; Prins, H.; Schulten, E.A.J.M.; Forouzanfar, T.

    2013-01-01

    Rapid prototyping is a method which makes it possible to produce a three-dimensional model based on two-dimensional imaging. Various rapid prototyping methods are available for modelling, such as stereolithography, selective laser sintering, direct laser metal sintering, two-photon polymerization, laminated object manufacturing, three-dimensional printing, three-dimensional plotting, polyjet inkjet technology, fused deposition modelling, vacuum casting and milling.

  4. Research on parafoil stability using a rapid estimate model

    Directory of Open Access Journals (Sweden)

    Hua YANG

    2017-10-01

    Taking into consideration the rotation between the canopy and payload of a parafoil system, a four-degree-of-freedom (4-DOF) longitudinal static model was used to solve the parafoil state variables in straight steady flight. The aerodynamic solution of the parafoil system was a combination of the vortex lattice method (VLM) and an engineering estimation method. Based on the small disturbance assumption, a 6-DOF linear model that considers canopy additional mass was established, with the benchmark state calculated by the 4-DOF static model. Modal analysis of the dynamic model was used to calculate the stability parameters. This method, based on a small disturbance linear model and modal analysis, is highly efficient for the study of parafoil stability and is well suited for rapid stability analysis in the preliminary stage of parafoil design. Using this method, this paper shows that longitudinal and lateral stability both decrease when the steady climb angle increases, which explains the wavy track of the parafoil observed during climbing.
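
    The modal-analysis step above reduces to an eigenvalue check on the linearized small-disturbance model: the system is asymptotically stable when every eigenvalue of the state matrix has a negative real part. A minimal sketch of that check follows; the 2x2 matrix is a hypothetical damped oscillator, not the parafoil 6-DOF model.

```python
import numpy as np

def is_stable(A):
    """A linearized system x' = A x is asymptotically stable iff every
    eigenvalue of A has a negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Hypothetical 2x2 linearized dynamics (a damped oscillator), for illustration
A = np.array([[0.0, 1.0],
              [-4.0, -0.8]])
print(is_stable(A))  # → True
```

    For the parafoil case the same test would be applied to the 6-DOF state matrix at each benchmark flight condition, e.g. across a sweep of climb angles.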

  5. A rapid method to estimate Westergren sedimentation rates.

    Science.gov (United States)

    Alexy, Tamas; Pais, Eszter; Meiselman, Herbert J

    2009-09-01

    The erythrocyte sedimentation rate (ESR) is a nonspecific but simple and inexpensive test that was introduced into medical practice in 1897. Although it is commonly utilized in the diagnosis and follow-up of various clinical conditions, ESR has several limitations, including the required 60 min settling time for the test. Herein we introduce a novel use for a commercially available computerized tube viscometer that allows the accurate prediction of human Westergren ESR values in as little as 4 min. Owing to an initial pressure gradient, blood moves between two vertical tubes through a horizontal small-bore tube, and the top of the red blood cell (RBC) column in each vertical tube is monitored continuously with an accuracy of 0.083 mm. Using data from the final minute of a blood viscosity measurement, a sedimentation index (SI) was calculated and correlated with results from the conventional Westergren ESR test. To date, samples from 119 human subjects have been studied and our results indicate a strong correlation between SI and ESR values (R² = 0.92). In addition, we found a close association between SI and RBC aggregation indices as determined by an automated RBC aggregometer (R² = 0.71). Determining SI on human blood is rapid, requires no special training and has minimal biohazard risk, thus allowing physicians to rapidly screen for individuals with elevated ESR and to monitor therapeutic responses.

  6. Rapid estimation of the moment magnitude of large earthquake from static strain changes

    Science.gov (United States)

    Itaba, S.

    2014-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred on March 11, 2011. The magnitude in the prompt report that the Japan Meteorological Agency announced just after earthquake occurrence, based on seismic waves, was 7.9, considerably smaller than the actual value. On the other hand, using nine borehole strainmeters of the Geological Survey of Japan, AIST, we estimated a fault model with Mw 8.7 for the earthquake on the boundary between the Pacific and North American plates. This model can be estimated about seven minutes after the origin time, and five minutes after wave arrival. In order to grasp the magnitude of a great earthquake earlier, several methods have been suggested to reduce earthquake disasters, including tsunami (e.g., Ohta et al., 2012). Our simple method using strain steps is a strong candidate for rapid estimation of the magnitude of great earthquakes.

  7. CTER-rapid estimation of CTF parameters with error assessment.

    Science.gov (United States)

    Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, the bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Solvent extraction method for rapid separation of strontium-90 in milk and food samples

    International Nuclear Information System (INIS)

    Hingorani, S.B.; Sathe, A.P.

    1991-01-01

    A solvent extraction method, using tributyl phosphate, for the rapid separation of strontium-90 in milk and other food samples is presented in this report, in view of the large number of samples received after the Chernobyl accident for checking radioactive contamination. The earlier nitration method in use for the determination of 90Sr through its daughter 90Y takes over two weeks for the analysis of a sample, while this extraction method takes only 4 to 5 hours per sample. Complete estimation, including initial counting, can be done in a single day. The chemical recovery varies between 80-90%, compared to 65-80% for the nitration method. The purity of the method has been established by following the decay of the separated yttrium-90. Some of the results obtained by adopting this chemical method for food analysis are included. The method is thus found to be rapid and convenient for accurate estimation of strontium-90 in milk and food samples. (author). 2 tabs., 1 fig

  9. Rapid and accurate species tree estimation for phylogeographic investigations using replicated subsampling.

    Science.gov (United States)

    Hird, Sarah; Kubatko, Laura; Carstens, Bryan

    2010-11-01

    We describe a method for estimating species trees that relies on replicated subsampling of large data matrices. One application of this method is phylogeographic research, which has long depended on large datasets that sample intensively from the geographic range of the focal species; these datasets allow systematicists to identify cryptic diversity and understand how contemporary and historical landscape forces influence genetic diversity. However, analyzing any large dataset can be computationally difficult, particularly when newly developed methods for species tree estimation are used. Here we explore the use of replicated subsampling, a potential solution to the problem posed by large datasets, with both a simulation study and an empirical analysis. In the simulations, we sample different numbers of alleles and loci, estimate species trees using STEM, and compare the estimated trees to the actual species tree. Our results indicate that subsampling three alleles per species for eight loci nearly always results in an accurate species tree topology, even in cases where the species tree was characterized by extremely rapid divergence. Even more modest subsampling effort, for example one allele per species and two loci, was more likely than not (>50%) to identify the correct species tree topology, indicating that in nearly all cases, computing the majority-rule consensus tree from replicated subsampling provides a good estimate of topology. These results were supported by estimating the correct species tree topology and reasonable branch lengths for an empirical 10-locus great ape dataset. Copyright © 2010 Elsevier Inc. All rights reserved.
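
    The majority-rule step above can be sketched simply: count the topologies estimated across subsampling replicates and report the modal topology with its support fraction. The sketch assumes each replicate's tree has already been reduced to a canonical Newick string; the replicate data below are made up for illustration.

```python
from collections import Counter

def majority_topology(replicate_topologies):
    """Return the most frequent topology (as a canonical Newick string)
    across subsampling replicates, with its support fraction."""
    counts = Counter(replicate_topologies)
    topology, n = counts.most_common(1)[0]
    return topology, n / len(replicate_topologies)

# Hypothetical topologies estimated from 5 subsampled replicates
reps = ["((A,B),C);", "((A,B),C);", "((A,C),B);", "((A,B),C);", "((A,B),C);"]
topo, support = majority_topology(reps)
print(topo, support)  # → ((A,B),C); 0.8
```

    In practice the canonicalization (sorting children, dropping branch lengths) is the delicate part; a tree library would be used for that rather than raw string comparison.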

  10. Rapid validated HPTLC method for estimation of piperine and piperlongumine in root of Piper longum extract and its commercial formulation

    Directory of Open Access Journals (Sweden)

    Anagha A. Rajopadhye

    2012-12-01

    Piperine and piperlongumine, alkaloids with diverse biological activities, commonly occur in the roots of Piper longum L., Piperaceae, which have high commercial, economic and medicinal value. In the present study, a rapid, validated HPTLC method has been established for the determination of piperine and piperlongumine in a methanolic root extract and its commercial formulation 'Mahasudarshan churna®', following ICH guidelines. The use of Accelerated Solvent Extraction (ASE) as an alternative to conventional techniques has been explored. The methanol extracts of the root, its formulation and both standard solutions were applied on silica gel F254 HPTLC plates. The plates were developed in a twin chamber using the mobile phase toluene:ethyl acetate (6:4, v/v) and scanned at 342 and 325 nm (λmax of piperine and piperlongumine, respectively) using a Camag TLC scanner 3 with CATS 4 software. A linear relationship was obtained between response (peak area) and amount of piperine and piperlongumine in the ranges of 20-100 and 30-150 ng/spot, respectively; the correlation coefficients were 0.9957 and 0.9941, respectively. Sharp, symmetrical and well resolved peaks of the piperine and piperlongumine spots appeared at Rf 0.51 and 0.74, respectively, separated from other components of the sample extracts. The HPTLC method showed good linearity, recovery and high precision for both markers. Extraction of the plant using ASE and the rapid HPTLC method provide a new and powerful approach to estimating piperine and piperlongumine as phytomarkers in the extract as well as in its commercial formulations for routine quality control.
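
    The quantitation behind such a calibration is ordinary least squares on the standards, then inversion of the fitted line for an unknown spot. A minimal sketch follows; the peak areas are hypothetical, not data from the study.

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical peak areas for piperine standards over the 20-100 ng/spot range
amounts = [20, 40, 60, 80, 100]   # ng/spot
areas = [410, 815, 1230, 1620, 2050]
a, b = fit_line(amounts, areas)

# Invert the calibration line for an unknown spot's peak area
unknown_area = 1100.0
print(round((unknown_area - b) / a, 1))  # → 53.9 ng/spot
```

    Validation figures like the reported correlation coefficients (0.9957, 0.9941) come from the same regression.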

  11. Rapid methods for detection of bacteria

    DEFF Research Database (Denmark)

    Corfitzen, Charlotte B.; Andersen, B.Ø.; Miller, M.

    2006-01-01

    Traditional methods for detection of bacteria in drinking water, e.g. Heterotrophic Plate Counts (HPC) or Most Probable Number (MPN), take 48-72 hours to give a result. New rapid methods for detection of bacteria are needed to protect consumers against contamination. Two rapid methods...

  12. [Rapid prototyping: a very promising method].

    Science.gov (United States)

    Haverman, T M; Karagozoglu, K H; Prins, H-J; Schulten, E A J M; Forouzanfar, T

    2013-03-01

    Rapid prototyping is a method which makes it possible to produce a three-dimensional model based on two-dimensional imaging. Various rapid prototyping methods are available for modelling, such as stereolithography, selective laser sintering, direct laser metal sintering, two-photon polymerization, laminated object manufacturing, three-dimensional printing, three-dimensional plotting, polyjet inkjet technology, fused deposition modelling, vacuum casting and milling. The various methods currently being used in the biomedical sector differ in production, materials and properties of the three-dimensional model which is produced. Rapid prototyping is mainly used for preoperative planning, simulation, education, and research into and development of bioengineering possibilities.

  13. Development and Validation of a Rapid RP-UPLC Method for the Simultaneous Estimation of Bambuterol Hydrochloride and Montelukast Sodium from Tablets.

    Science.gov (United States)

    Yanamandra, R; Vadla, C S; Puppala, U M; Patro, B; Murthy, Y L N; Parimi, A R

    2012-03-01

    A rapid, simple, sensitive and selective analytical method was developed using a reverse phase ultra performance liquid chromatographic technique for the simultaneous estimation of bambuterol hydrochloride and montelukast sodium in a combined tablet dosage form. The developed method is superior to conventional high performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Elution time for the separation was 6 min and ultraviolet detection was carried out at 210 nm. Efficient separation was achieved on a BEH C18 sub-2-μm Acquity UPLC column using 0.025% (v/v) trifluoroacetic acid in water and acetonitrile as the organic solvent in a linear gradient program. Resolution between bambuterol hydrochloride and montelukast sodium was found to be more than 31. The active pharmaceutical ingredients were extracted from the tablet dosage form using a mixture of methanol, acetonitrile and water as diluent. The calibration graphs were linear for bambuterol hydrochloride and montelukast sodium in the range of 6.25-37.5 μg/ml. The percentage recoveries for bambuterol hydrochloride and montelukast sodium were found to be in the range of 99.1-100.0% and 98.0-101.6%, respectively. The test solution was found to be stable for 7 days when stored in the refrigerator at 2-8°. The developed UPLC method was validated as per International Conference on Harmonization specifications for method validation. This method can be successfully employed for simultaneous estimation of bambuterol hydrochloride and montelukast sodium in bulk drugs and formulations.

  14. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    International Nuclear Information System (INIS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-01-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or some combination of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles to the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by a further 8 times. -- Highlights: • A rapid interpolation method of system matrices (H) is proposed, named DW-GIMGPE. • Reduce H acquisition time by 15.2× with simplified grid scan and 2× interpolation. • Reconstructions of a hot-rod phantom with measured and DW-GIMGPE H were similar. • The imaging study of normal

  15. A new ore reserve estimation method, Yang Chizhong filtering and inferential measurement method, and its application

    International Nuclear Information System (INIS)

    Wu Jingqin.

    1989-01-01

    The Yang Chizhong filtering and inferential measurement method is a new method for the variable statistics of ore deposits. In order to apply this theory to estimating uranium ore reserves under the circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used the method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in formulation, convenient in application, rapid in calculation, accurate in results, and less expensive, and it offers high economic benefits. The procedure and experience in the application of this method and a preliminary evaluation of its results are described.

  16. Evaluation of three paediatric weight estimation methods in Singapore.

    Science.gov (United States)

    Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong

    2013-04-01

    Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: the Broselow-Luten (BL) tape, and the Advanced Paediatric Life Support (APLS) (estimated weight (kg) = 2 (age + 4)) and Luscombe (estimated weight (kg) = 3 (age) + 7) formulae. We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while the patient's rounded-down age (in years) was used to derive estimated weights using the APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using the Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise. © 2013 The Authors. Journal of Paediatrics and Child Health © 2013 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
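
    The two age-based formulas and the Bland-Altman mean percentage difference reported above are straightforward to compute. A minimal sketch; the sign convention (positive MPD for underestimation, i.e. (true − estimate)/true) is assumed to match the abstract, and the example weights are hypothetical.

```python
def apls_weight(age_years):
    """APLS estimate: weight (kg) = 2 * (age + 4)."""
    return 2 * (age_years + 4)

def luscombe_weight(age_years):
    """Luscombe estimate: weight (kg) = 3 * age + 7."""
    return 3 * age_years + 7

def mean_percentage_difference(true_weights, estimated_weights):
    """Bland-Altman style MPD; positive values indicate underestimation
    under the assumed (true - estimate) / true convention."""
    diffs = [100 * (t - e) / t for t, e in zip(true_weights, estimated_weights)]
    return sum(diffs) / len(diffs)

print(apls_weight(5), luscombe_weight(5))  # → 18 22
```

    For a 5-year-old the two formulae already differ by 4 kg, which is why the abstract reports opposite biases for them.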

  17. A scoping review of rapid review methods.

    Science.gov (United States)

    Tricco, Andrea C; Antony, Jesmin; Zarin, Wasifa; Strifler, Lisa; Ghassemi, Marco; Ivory, John; Perrier, Laure; Hutton, Brian; Moher, David; Straus, Sharon E

    2015-09-16

    Rapid reviews are a form of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce information in a timely manner. Although numerous centers are conducting rapid reviews internationally, few studies have examined the methodological characteristics of rapid reviews. Through a scoping review, we aimed to examine articles, books, and reports that evaluated, compared, used or described rapid reviews or rapid review methods. MEDLINE, EMBASE, the Cochrane Library, internet websites of rapid review producers, and reference lists were searched to identify articles for inclusion. Two reviewers independently screened literature search results and abstracted data from included studies. Descriptive analysis was conducted. We included 100 articles plus one companion report, published between 1997 and 2013. The studies were categorized as 84 application papers, seven development papers, six impact papers, and four comparison papers (one was included in two categories). The rapid reviews were conducted in between 1 and 12 months, predominantly in Europe (58%) and North America (20%). The included studies failed to report 6% to 73% of the specific systematic review steps examined. Fifty unique rapid review methods were identified; 16 methods occurred more than once. Streamlined methods used in the 82 rapid reviews included limiting the literature search to published literature (24%) or one database (2%), limiting inclusion criteria by date (68%) or language (49%), having one person screen and another verify or screen excluded studies (6%), having one person abstract data and another verify (23%), not conducting risk of bias/quality appraisal (7%) or having only one reviewer conduct the quality appraisal (7%), and presenting results as a narrative summary (78%). Four case studies were identified that compared the results of rapid reviews to systematic reviews. Three studies found that the conclusions between

  18. Method to Locate Contaminant Source and Estimate Emission Strength

    Directory of Open Access Journals (Sweden)

    Qu Hongquan

    2013-01-01

    People are greatly concerned about the air quality in confined spaces such as spacecraft, aircraft, and submarines. As residence time in such confined spaces increases, contaminant pollution becomes a main factor endangering life, so it is urgent to identify a contaminant source rapidly so that prompt remedial action can be taken. A source identification procedure should be able to locate the position and estimate the emission strength of the contaminant source. In this paper, an identification method was developed to achieve these two aims. The method was developed based on a discrete concentration stochastic model. With this model, a sensitivity analysis algorithm was derived to locate the source position, and a Kalman filter was used to further estimate the contaminant emission strength. The method can track and predict the source strength dynamically, as well as predict the distribution of contaminant concentration. Simulation results have shown the virtues of the method.
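
    The emission-strength step above can be illustrated with a scalar Kalman filter: assuming a linear measurement model c = h·q + noise linking a concentration reading c to a near-constant source strength q, the filter recursively refines the strength estimate from each new reading. This is a generic sketch, not the paper's discrete stochastic model; the sensitivity h, noise variance r, and readings are made up.

```python
def kalman_estimate_strength(measurements, h, r, q0=0.0, p0=1e6, q_process=1e-6):
    """Scalar Kalman filter for a (near-)constant emission strength q.
    Measurement model: c_k = h * q + v_k, v_k ~ N(0, r)."""
    q_hat, p = q0, p0
    for c in measurements:
        p += q_process                   # predict (random-walk strength model)
        k = p * h / (h * h * p + r)      # Kalman gain
        q_hat += k * (c - h * q_hat)     # update with the innovation
        p *= (1 - k * h)                 # update estimate variance
    return q_hat

# Hypothetical: sensor sensitivity h = 0.5 (ppm per mg/s), noise variance r = 0.01
readings = [2.49, 2.52, 2.51, 2.48, 2.50]  # consistent with a ~5.0 mg/s source
print(round(kalman_estimate_strength(readings, h=0.5, r=0.01), 2))  # → 5.0
```

    In the paper's multi-zone setting h would come from the concentration model linking the candidate source location (found by the sensitivity analysis) to each sensor.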

  19. Rapid Estimation Method for State of Charge of Lithium-Ion Battery Based on Fractional Continual Variable Order Model

    Directory of Open Access Journals (Sweden)

    Xin Lu

    2018-03-01

    In recent years, fractional order models have been employed for state of charge (SOC) estimation, with the non-integer differentiation order expressed as a function of recursive factors defining the fractality of charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution, so the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of the charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC after the battery's internal electrochemical reaction reaches equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.
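
    Because the identified order varies monotonically with SOC at rest, a calibrated order(SOC) curve can be inverted numerically, for example by bisection. This is a generic sketch of that inversion, not the paper's iterative method; the linear calibration curve is hypothetical.

```python
def soc_from_order(order, order_of_soc, lo=0.0, hi=1.0, tol=1e-6):
    """Invert a monotonic order(SOC) calibration curve by bisection:
    find the SOC in [lo, hi] whose predicted order matches `order`."""
    increasing = order_of_soc(hi) > order_of_soc(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (order_of_soc(mid) < order) == increasing:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def calib(soc):
    """Hypothetical calibration: order rises linearly from 0.6 to 0.9 with SOC."""
    return 0.6 + 0.3 * soc

print(round(soc_from_order(0.75, calib), 3))  # → 0.5
```

    In practice `calib` would be replaced by the experimentally identified order-SOC relationship for the cell at rest.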

  20. Diagnostic Performance of a Rapid Magnetic Resonance Imaging Method of Measuring Hepatic Steatosis

    Science.gov (United States)

    House, Michael J.; Gan, Eng K.; Adams, Leon A.; Ayonrinde, Oyekoya T.; Bangma, Sander J.; Bhathal, Prithi S.; Olynyk, John K.; St. Pierre, Tim G.

    2013-01-01

    Objectives Hepatic steatosis is associated with an increased risk of developing serious liver disease and other clinical sequelae of the metabolic syndrome. However, visual estimates of steatosis from histological sections of biopsy samples are subjective and reliant on an invasive procedure with associated risks. The aim of this study was to test the ability of a rapid, routinely available, magnetic resonance imaging (MRI) method to diagnose clinically relevant grades of hepatic steatosis in a cohort of patients with diverse liver diseases. Materials and Methods Fifty-nine patients with a range of liver diseases underwent liver biopsy and MRI. Hepatic steatosis was quantified firstly using an opposed-phase, in-phase gradient echo, single breath-hold MRI methodology and secondly, using liver biopsy with visual estimation by a histopathologist and by computer-assisted morphometric image analysis. The area under the receiver operating characteristic (ROC) curve was used to assess the diagnostic performance of the MRI method against the biopsy observations. Results The MRI approach had high sensitivity and specificity at all hepatic steatosis thresholds. Areas under ROC curves were 0.962, 0.993, and 0.972 at thresholds of 5%, 33%, and 66% liver fat, respectively. MRI measurements were strongly associated with visual (r2 = 0.83) and computer-assisted morphometric (r2 = 0.84) estimates of hepatic steatosis from histological specimens. Conclusions This MRI approach, using a conventional, rapid, gradient echo method, has high sensitivity and specificity for diagnosing liver fat at all grades of steatosis in a cohort with a range of liver diseases. PMID:23555650
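
    With opposed-phase/in-phase gradient echo acquisitions, a standard simplification (not necessarily the exact model used in the study) computes the fat signal fraction from the two signal intensities as FF = (S_in − S_opp) / (2·S_in):

```python
def fat_signal_fraction(s_in, s_opp):
    """Dual-echo fat signal fraction: (S_in - S_opp) / (2 * S_in).
    Standard in-phase/opposed-phase simplification that ignores
    T2* decay and other confounders; example intensities are made up."""
    return (s_in - s_opp) / (2.0 * s_in)

print(fat_signal_fraction(100.0, 60.0))  # → 0.2, i.e. 20% fat fraction
```

    A voxel-wise map of this quantity can then be thresholded at the study's 5%, 33%, and 66% steatosis grades.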

  1. Rapid assessment methods in eye care: An overview

    Directory of Open Access Journals (Sweden)

    Srinivas Marmamula

    2012-01-01

    Reliable information is required for the planning and management of eye care services. While classical research methods provide reliable estimates, they are prohibitively expensive and resource intensive. Rapid assessment (RA) methods are indispensable tools in situations where data are needed quickly and where time- or cost-related factors prohibit the use of classical epidemiological surveys. These methods have been developed and field tested, and can be applied across almost the entire gamut of health care. The 1990s witnessed the emergence of RA methods in eye care for cataract, onchocerciasis, and trachoma and, more recently, the main causes of avoidable blindness and visual impairment. The important features of RA methods include the use of local resources, simplified sampling methodology, and a simple examination protocol/data collection method that can be performed by locally available personnel. The analysis is quick and easy to interpret. The entire process is inexpensive, so the survey may be repeated once every 5-10 years to assess the changing trends in disease burden. RA survey methods are typically linked with an intervention. This article provides an overview of the RA methods commonly used in eye care, and emphasizes the selection of appropriate methods based on the local need and context.

  2. A new method for rapid Canine retraction

    Directory of Open Access Journals (Sweden)

    Khavari A

    2001-06-01

    The distraction osteogenesis (DO) method in bone lengthening and rapid midpalatal expansion have shown the great ability of osteogenic tissues for rapid bone formation under distraction force and a special protocol with an optimum rate of one millimeter per day. The periodontal membrane (PDM) of teeth is the extension of the periosteum into the alveolar socket. Orthodontic force distracts PDM fibers on the tension side, and bone formation then begins. Objectives: Rapid retraction of the canine tooth into the extraction space of the first premolar by the DO protocol, in order to show the ability of the PDM for rapid bone formation. The other objective was to reduce the total orthodontic treatment time of extraction cases. Patients and Methods: Twelve maxillary canines in six patients were retracted rapidly in three weeks by a custom-made tooth-borne appliance. Radiographic records were taken to evaluate the effects of the heavy applied force on the canine and anchorage teeth. Results: Average retraction was 7.05 mm in three weeks (2.35 mm/week). Canines rotated distal-in by a mean of 3.5 degrees. Anchorage loss was from 0 to 0.8 mm, with an average of 0.3 mm. Root resorption of the canines was negligible and not clinically significant. The periodontium was normal after rapid retraction. No hazard to pulp vitality was observed. Discussion: The PDM responded well to heavy distraction force under the DO protocol. Rapid canine retraction seems to be a safe method and can considerably reduce orthodontic treatment time.

  3. Estimation of body fluids with bioimpedance spectroscopy: state of the art methods and proposal of novel methods

    International Nuclear Information System (INIS)

    Buendia, R; Seoane, F; Lindecrantz, K; Bosaeus, I; Gil-Pita, R; Johannsson, G; Ellegård, L; Ward, L C

    2015-01-01

    Determination of body fluids is a common and useful practice in the study of disease mechanisms and treatments. Bioimpedance spectroscopy (BIS) methods are non-invasive, inexpensive and rapid alternatives to reference methods such as tracer dilution. However, they are indirect and their robustness and validity are unclear. In this article, state-of-the-art methods are reviewed, their drawbacks identified and new methods proposed. All methods were tested on a clinical database of patients receiving growth hormone replacement therapy. Results indicated that most BIS methods are similarly accurate (e.g. <0.5 ± 3.0% mean percentage difference for total body water) for estimation of body fluids. A new model for calculation is proposed that performs equally well for all fluid compartments (total body water, extra- and intracellular water). It is suggested that the main source of error in extracellular water estimation is anisotropy, in total body water estimation the uncertainty associated with intracellular resistivity, and in determination of intracellular water a combination of both. (paper)

  4. Using a Regression Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

    Science.gov (United States)

    2017-12-01

    The simulation process was repeated 250 times per combination of hit rate (HR) and false alarm rate (FAR). Simulations show that this regression method results in an unbiased and accurate estimate of target detection performance.

  5. A Method for Rapid Measurement of Contrast Sensitivity on Mobile Touch-Screens

    Science.gov (United States)

    Mulligan, Jeffrey B.

    2016-01-01

    Touch-screen displays in cell phones and tablet computers are now pervasive, making them an attractive option for vision testing outside of the laboratory or clinic. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating which is swept in both contrast and frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatio-temporal modulations are easily measured using the same method. A prototype has been developed for Apple Computer's iPad/iPod/iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing, http://hsi.arc.nasa.gov/groups/scanpath/research.php). Preliminary data show good agreement with estimates obtained from traditional psychophysical methods as well as newer rapid estimation techniques. Issues relating to device calibration are also discussed.

  6. Estimating evolutionary rates using time-structured data: a general comparison of phylogenetic methods.

    Science.gov (United States)

    Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W

    2016-11-15

    In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. Contact: sduchene@unimelb.edu.au or garzonsebastian@hotmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
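
Of the three methods compared in the abstract, root-to-tip regression is simple enough to sketch directly: regress root-to-tip genetic distance on sampling date, and read the substitution rate off the slope. The dates and distances below are hypothetical illustrative values, not data from the 81 virus data sets.

```python
def root_to_tip_rate(dates, distances):
    """Least-squares slope of root-to-tip distance vs. sampling date.

    The slope estimates the substitution rate (subs/site/year);
    the x-intercept estimates the date of the root (tMRCA).
    """
    n = len(dates)
    mean_t = sum(dates) / n
    mean_d = sum(distances) / n
    cov = sum((t - mean_t) * (d - mean_d) for t, d in zip(dates, distances))
    var = sum((t - mean_t) ** 2 for t in dates)
    slope = cov / var                    # substitution rate
    intercept = mean_d - slope * mean_t
    tmrca = -intercept / slope           # distance extrapolates to zero here
    return slope, tmrca

# Hypothetical virus samples: (sampling year, root-to-tip distance)
dates = [2000, 2004, 2008, 2012, 2016]
dists = [0.010, 0.018, 0.026, 0.034, 0.042]
rate, tmrca = root_to_tip_rate(dates, dists)
print(rate)   # 0.002 subs/site/year
print(tmrca)  # 1995.0
```

Unlike the Bayesian and least-squares dating approaches, this regression ignores phylogenetic uncertainty and shared ancestry among tips, which is one source of the discrepancies the study examines.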

  7. Standardization of HPTLC method for the estimation of oxytocin in edibles.

    Science.gov (United States)

    Rani, Roopa; Medhe, Sharad; Raj, Kumar Rohit; Srivastava, Manmohan

    2013-12-01

    Adulteration of food stuffs has been regarded as a major social evil and is a mind-boggling problem in society. In this study, a rapid, reliable and cost-effective high-performance thin-layer chromatography (HPTLC) method has been established for the estimation of oxytocin (an adulterant) in vegetables, fruits and milk samples. Oxytocin is one of the most frequently used adulterants, added to vegetables and fruits to increase the growth rate and to enhance milk production from lactating animals. The standardization of the method was based on simulation parameters of mobile phase, stationary phase and saturation time. The mobile phase used was MeOH:ammonia (pH 6.8), the optimized stationary phase was silica gel, and the saturation time was 5 min. The method was validated by testing its linearity, accuracy, precision, repeatability and limits of detection and quantification. Thus, the proposed method is simple, rapid and specific, and was successfully employed for quality and quantity monitoring of oxytocin content in edible products.

  8. Rapid Radiochemical Methods for Asphalt Paving Material ...

    Science.gov (United States)

    Technical Brief Validated rapid radiochemical methods for alpha and beta emitters in solid matrices that are commonly encountered in urban environments were previously unavailable for public use by responding laboratories. A lack of tested rapid methods would delay the quick determination of contamination levels and the assessment of acceptable site-specific exposure levels. Of special concern are matrices with rough and porous surfaces, which allow the movement of radioactive material deep into the building material, making it difficult to detect. This research focuses on methods that address preparation, radiochemical separation, and analysis of asphalt paving materials and asphalt roofing shingles. These matrices, common to outdoor environments, challenge the capability and capacity of very experienced radiochemistry laboratories. Generally, routine sample preparation and dissolution techniques produce liquid samples (representative of the original sample material) that can be processed using available radiochemical methods. The asphalt materials are especially difficult because they do not readily lend themselves to these routine sample preparation and dissolution techniques. The HSRP and ORIA coordinate radiological reference laboratory priorities and activities in conjunction with HSRP's Partner Process. As part of the collaboration, the HSRP worked with ORIA to publish rapid radioanalytical methods for selected radionuclides in building material matrices.

  9. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  10. Rapid estimation of the moment magnitude of the 2011 off the Pacific coast of Tohoku earthquake from coseismic strain steps

    Science.gov (United States)

    Itaba, S.; Matsumoto, N.; Kitagawa, Y.; Koizumi, N.

    2012-12-01

    The 2011 off the Pacific coast of Tohoku earthquake, of moment magnitude (Mw) 9.0, occurred at 14:46 Japan Standard Time (JST) on March 11, 2011. The coseismic strain steps caused by the fault slip of this earthquake were observed in Tokai, the Kii Peninsula and Shikoku by borehole strainmeters carefully installed by the Geological Survey of Japan, AIST. Using these strain steps, we estimated a fault model for the earthquake on the boundary between the Pacific and North American plates. Our model, estimated from only several minutes of strain data, is largely consistent with the final fault models estimated from GPS and seismic-wave data. The moment magnitude can be estimated about 6 minutes after the origin time, and 4 minutes after wave arrival. According to the fault model, the moment magnitude of the earthquake is 8.7. By contrast, the preliminary magnitude based on seismic waves that the Japan Meteorological Agency announced just after the earthquake occurred was 7.9. Coseismic strain steps are generally considered less reliable than seismic waves and GPS data. However, our results show that coseismic strain steps observed by borehole strainmeters that are carefully installed and monitored are reliable enough to determine the earthquake magnitude precisely and rapidly. Several methods are now being proposed to grasp the magnitude of a great earthquake earlier, in order to reduce earthquake disasters, including tsunami. Our simple method using strain steps is one strong candidate for rapid estimation of the magnitude of great earthquakes.
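
The final conversion step such fault models rely on, from estimated seismic moment to moment magnitude, is the standard IASPEI relation Mw = (2/3)(log10 M0 − 9.1) for M0 in newton-metres. The moment value below is an illustrative round figure for a Mw ≈ 9 event, not the paper's inversion result.

```python
import math

def moment_magnitude(m0_newton_metres):
    """Moment magnitude from seismic moment (IASPEI standard, M0 in N·m)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# An earthquake with a seismic moment of roughly 3.5e22 N·m (illustrative):
print(round(moment_magnitude(3.5e22), 1))  # 9.0
```

Because the relation is logarithmic, the gap between the strain-step estimate (8.7) and the early seismic-wave report (7.9) corresponds to a difference of well over an order of magnitude in seismic moment.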

  11. Rapid cable tension estimation using dynamic and mechanical properties

    Science.gov (United States)

    Martínez-Castro, Rosana E.; Jang, Shinae; Christenson, Richard E.

    2016-04-01

    Main tension elements are critical to the overall stability of cable-supported bridges. A dependable and rapid determination of cable tension is desired to assess the state of a cable-supported bridge and evaluate its operability. A portable smart sensor setup is presented to reduce post-processing time and deployment complexity while reliably determining cable tension using dynamic characteristics extracted from spectral analysis. A self-recording accelerometer is coupled with a single-board microcomputer that communicates wirelessly with a remote host computer. The portable smart sensing device is designed such that additional algorithms, sensors and controlling devices for various monitoring applications can be installed and operated for additional structural assessment. The tension-estimating algorithms are based on taut string theory and extend to consider bending stiffness. The successful combination of cable properties allows the use of a cable's dynamic behavior to determine tension force. The tension-estimating algorithms are experimentally validated on a through-arch steel bridge subject to ambient vibration induced by passing traffic. The tension estimates are in good agreement with tension values previously determined for the structure.
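
The taut-string relation underlying such tension-estimating algorithms can be sketched as follows. For a taut string, f_n = (n/2L)·sqrt(T/m), so the tension follows from the measured fundamental frequency as T = 4·m·L²·f₁². The cable length, linear mass and frequency below are hypothetical values; the bending-stiffness correction mentioned in the abstract is neglected here.

```python
def cable_tension(f1_hz, length_m, mass_per_metre):
    """Taut-string tension estimate from the fundamental frequency f1.

    T = 4 * m * L**2 * f1**2  (bending stiffness neglected).
    """
    return 4.0 * mass_per_metre * length_m ** 2 * f1_hz ** 2

# Hypothetical cable: 10 m long, 50 kg/m, fundamental at about 7.071 Hz
print(cable_tension(7.071, 10.0, 50.0))  # close to 1.0e6 N
```

In practice the fundamental frequency is picked from the peak of an acceleration spectrum, which is why the setup above couples an accelerometer with on-board spectral analysis.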

  12. Influence function method for fast estimation of BWR core performance

    International Nuclear Information System (INIS)

    Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.

    1993-01-01

    The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)

  13. Survey of methods for rapid spin reversal

    International Nuclear Information System (INIS)

    McKibben, J.L.

    1980-01-01

    The need for rapid spin-reversal techniques in polarization experiments is discussed. The ground-state atomic-beam source equipped with two rf transitions for hydrogen can be reversed rapidly, and is now in use on several accelerators. It is the optimum choice provided the accelerator can accept H+ ions. At present all rapid-reversal experiments using H- ions are done with Lamb-shift sources; however, this is not a unique choice. Three methods for the reversal of the spin of the atomic beam within the Lamb-shift source are discussed in order of development. Coherent intensity, and perhaps focus, modulation seem to be the biggest problems in both types of sources. Methods for reducing these modulations in the Lamb-shift source are discussed. The same Lamb-shift apparatus is easily modified to provide information on the atomic physics of quenching of the 2S1/2 states versus spin orientation, and this is also discussed. 2 figures

  14. Testing survey-based methods for rapid monitoring of child mortality, with implications for summary birth history data.

    Science.gov (United States)

    Brady, Eoghan; Hill, Kenneth

    2017-01-01

    Under-five mortality estimates are increasingly used in low- and middle-income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-5 mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with the data available. This analysis tests the methods using data appropriate to each method from 5 countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%). Mean absolute relative error is 17.7%. 1 of 7 countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated Summary Birth Histories at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD Surveys 2004-2011 and validated against IGME estimates. 2 of 10 estimates are within 10% of validation estimates. Mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates based on the data available. We conclude this is due to the poor quality of most SBH data included in the study. This has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.

  15. Rapid flow imaging method

    International Nuclear Information System (INIS)

    Pelc, N.J.; Spritzer, C.E.; Lee, J.N.

    1988-01-01

    A rapid, phase-contrast MR method for imaging flow has been implemented. The method, called VIGRE (velocity imaging with gradient recalled echoes), consists of two interleaved, narrow-flip-angle, gradient-recalled acquisitions. One is flow compensated, while the second has a specified flow encoding (both peak velocity and direction) that causes signals to contain additional phase in proportion to velocity in the specified direction. Complex image data from the first acquisition are used as a phase reference for the second, yielding immunity from phase accumulation due to causes other than motion. Images are produced with pixel values equal to MΔΘ, where M is the magnitude of the flow-compensated image and ΔΘ is the phase difference at the pixel. The magnitude weighting provides additional vessel contrast, suppresses background noise, maintains the flow direction information, and still allows quantitative data to be retrieved. The method has been validated with phantoms and is undergoing initial clinical evaluation. Early results are extremely encouraging.

  16. Rapid spectrographic method for determining microcomponents in solutions

    International Nuclear Information System (INIS)

    Karpenko, L.I.; Fadeeva, L.A.; Gordeeva, A.N.; Ermakova, N.V.

    1984-01-01

    A rapid spectrographic method for determining microcomponents (Cd, V, Mo, Ni, rare earths and other elements) in industrial and natural solutions has been developed. The analyses were conducted in an argon medium and in air. Calibration charts for determining individual rare earths in solutions are presented. The accuracy of analysis was characterized by the relative standard deviation (Sr); the detection limit was 10⁻³-10⁻⁴ mg/ml, and that for rare earths 1×10⁻² mg/ml. The developed method makes it possible to rapidly analyze solutions (sewage and industrial waters, wine products) for 20 elements, including 6 rare earths, using standard equipment

  17. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    unrecorded alcohol; methods of estimation In this paper we focus on methods of estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimation of the level of unrecorded alcohol consumption. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.

  18. The use of maturity method in estimating concrete strength

    International Nuclear Information System (INIS)

    Salama, A.E.; Abd El-Baky, S.M.; Ali, E.E.; Ghanem, G.M.

    2005-01-01

    Prediction of the early-age strength of concrete is essential for modern concrete construction as well as for the manufacturing of structural parts. Safe and economic scheduling of such critical operations as form removal and reshoring, application of post-tensioning or other mechanical treatment, and in-process transportation and rapid delivery of products should all be based upon a good grasp of the strength development of the concrete in use. For many years, it has been proposed that the strength of concrete can be related to a simple mathematical function of time and temperature, so that strength could be assessed by calculation without mechanical testing. Such functions are used to compute what is called the 'maturity' of concrete, and the computed value is believed to correlate with the strength of concrete. With its simplicity and low cost, the application of the maturity concept as an in situ testing method has received wide attention and found use in engineering practice. This research work investigates the use of the maturity method in estimating concrete strength. An experimental program was designed to estimate concrete strength by using the maturity method, with different concrete mixes made from available local materials. Ordinary Portland cement, crushed stone, silica fume, fly ash and admixtures with different contents were used. All the specimens were exposed to different curing temperatures (10, 25 and 40 degree C) in order to obtain a simplified expression of maturity that accounts for the influence of temperature. Mix designs and charts obtained from this research can be used as guide information for estimating concrete strength by the maturity method
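
The time-temperature maturity functions the abstract refers to can be illustrated with the classic Nurse-Saul index, M(t) = Σ (Ta − T0)·Δt, summed over curing intervals. The datum temperature T0 = −10 °C is a common convention assumed here, and the curing record is hypothetical; neither is taken from the paper.

```python
def nurse_saul_maturity(temps_c, dt_hours, datum_c=-10.0):
    """Nurse-Saul maturity index in degree-Celsius-hours.

    temps_c: average concrete temperature over each interval (°C).
    dt_hours: length of each interval (h).
    datum_c: datum temperature below which hydration is assumed to stop
             (an assumed convention here, commonly -10 °C or 0 °C).
    """
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# A hypothetical curing record: average temperatures over six 4-hour intervals
temps = [20.0, 22.0, 25.0, 25.0, 23.0, 21.0]
print(nurse_saul_maturity(temps, 4.0))  # 784.0 °C·h
```

Strength is then read off a calibration chart of strength versus maturity for the given mix, which is exactly the kind of chart the experimental program above produces.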

  19. A rapid assessment method to estimate the distribution of juvenile Chinook Salmon in tributary habitats using eDNA and occupancy estimation

    Science.gov (United States)

    Matter, A.; Falke, Jeffrey A.; López, J. Andres; Savereide, James W.

    2018-01-01

    Identification and protection of water bodies used by anadromous species are critical in light of increasing threats to fish populations, yet often challenging given budgetary and logistical limitations. Noninvasive, rapid-assessment sampling techniques may reduce costs and effort while increasing species detection efficiencies. We used an intrinsic potential (IP) habitat model to identify high-quality rearing habitats for Chinook Salmon Oncorhynchus tshawytscha and select sites to sample throughout the Chena River basin, Alaska, for juvenile occupancy using an environmental DNA (eDNA) approach. Water samples were collected from 75 tributary sites in 2014 and 2015. The presence of Chinook Salmon DNA in water samples was assessed using a species-specific quantitative PCR (qPCR) assay. The IP model predicted over 900 stream kilometers in the basin to support high-quality (IP ≥ 0.75) rearing habitat. Occupancy estimation based on eDNA samples indicated that 80% and 56% of previously unsampled sites classified as high or low IP, respectively, were occupied. The probability of detecting Chinook Salmon DNA from three replicate water samples was high (p = 0.76) but varied with drainage area (km2). A power analysis indicated high power to detect proportional changes in occupancy based on parameter values estimated from the eDNA occupancy models, although power curves were not symmetrical around zero, indicating greater power to detect positive than negative proportional changes in occupancy. Overall, the combination of IP habitat modeling and occupancy estimation provided a useful rapid-assessment method to predict and subsequently quantify the distribution of juvenile salmon in previously unsampled tributary habitats. Additionally, these methods are flexible and can be modified for application to other species and in other locations, which may contribute towards improved population monitoring and management.

  20. Rapid estimation of compost enzymatic activity by spectral analysis method combined with machine learning.

    Science.gov (United States)

    Chakraborty, Somsubhra; Das, Bhabani S; Ali, Md Nasim; Li, Bin; Sarathjith, M C; Majumdar, K; Ray, D P

    2014-03-01

    The aim of this study was to investigate the feasibility of using visible near-infrared (VisNIR) diffuse reflectance spectroscopy (DRS) as an easy, inexpensive, and rapid method to predict compost enzymatic activity, which is traditionally measured by the fluorescein diacetate hydrolysis (FDA-HR) assay. Compost samples representative of five different compost facilities were scanned by DRS, and the raw reflectance spectra were preprocessed using seven spectral transformations for predicting compost FDA-HR with six multivariate algorithms. Although principal component analysis for all spectral pretreatments satisfactorily identified the clusters by compost type, it could not separate different FDA contents. Furthermore, the artificial neural network multilayer perceptron (residual prediction deviation = 3.2, validation r(2) = 0.91 and RMSE = 13.38 μg g(-1) h(-1)) outperformed the other multivariate models in capturing the highly non-linear relationships between compost enzymatic activity and VisNIR reflectance spectra after Savitzky-Golay first-derivative pretreatment. This work demonstrates the efficiency of VisNIR DRS for predicting compost enzymatic as well as microbial activity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Adjustment of a rapid method for quantification of Fusarium spp. spore suspensions in plant pathology.

    Science.gov (United States)

    Caligiore-Gei, Pablo F; Valdez, Jorge G

    2015-01-01

    The use of a Neubauer chamber is a broadly employed method when cell suspensions need to be quantified. However, this technique may take a long time and requires trained personnel. Spectrophotometry has proved to be a rapid, simple and accurate method to estimate the concentration of spore suspensions of isolates of the genus Fusarium. In this work we present a linear formula relating absorbance measurements at 530 nm to the number of microconidia/ml in a suspension. Copyright © 2014 Asociación Argentina de Microbiología. Published by Elsevier España, S.L.U. All rights reserved.
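
A linear absorbance-to-concentration formula of the kind described is obtained by ordinary least-squares calibration against counted standards. The calibration points and the resulting coefficients below are hypothetical illustrations; the paper's actual formula is not reproduced here.

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = m*x + c; returns (m, c)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical standards: absorbance at 530 nm vs. microconidia/ml (x1e6),
# counted in a Neubauer chamber
absorbance = [0.1, 0.2, 0.3, 0.4, 0.5]
spores_e6 = [0.9, 2.1, 2.9, 4.1, 5.0]
m, c = fit_line(absorbance, spores_e6)

# Predict the concentration of an unknown suspension with A530 = 0.25:
print(round(m * 0.25 + c, 2))  # 2.49 (x1e6 microconidia/ml)
```

Once the coefficients are fixed for a given species and spectrophotometer, each new suspension needs only a single absorbance reading instead of a chamber count.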

  2. A structured sparse regression method for estimating isoform expression level from multi-sample RNA-seq data.

    Science.gov (United States)

    Zhang, L; Liu, X J

    2016-06-03

    With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually process each RNA-seq sample separately and ignore the fact that read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.

  3. Estimation and Validation of RapidEye-Based Time-Series of Leaf Area Index for Winter Wheat in the Rur Catchment (Germany

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2015-03-01

    Full Text Available Leaf Area Index (LAI) is an important variable for numerous processes in various disciplines of bio- and geosciences. In situ measurements are the most accurate source of LAI among the LAI measuring methods, but they have the limitation of being labor intensive and site specific. For spatially explicit applications (from regional to continental scales), satellite remote sensing is a promising source for obtaining LAI at different spatial resolutions. However, satellite-derived LAI measurements using empirical models require calibration and validation with in situ measurements. In this study, we attempted to validate a direct LAI retrieval method from remotely sensed images (RapidEye) against in situ LAI (LAIdestr). Remote sensing LAI (LAIrapideye) was derived using different vegetation indices, namely SAVI (Soil Adjusted Vegetation Index) and NDVI (Normalized Difference Vegetation Index). Additionally, the applicability of the newly available red-edge band (RE) was analyzed through the Normalized Difference Red-Edge index (NDRE) and the Soil Adjusted Red-Edge index (SARE). The LAIrapideye obtained from vegetation indices with the red-edge band showed better correlation with LAIdestr (r = 0.88 and Root Mean Square Deviation, RMSD = 1.01 and 0.92). This study also investigated the need to apply radiometric/atmospheric correction methods to the time series of RapidEye Level 3A data prior to LAI estimation. Analysis of the RapidEye Level 3A data set showed that application of the radiometric/atmospheric correction did not improve the correlation of the estimated LAI with in situ LAI.
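
The band-ratio indices named above follow standard formulas: NDVI = (NIR − Red)/(NIR + Red), SAVI adds a soil-adjustment term L, and the red-edge variants substitute the red-edge band for red. The reflectance values below are hypothetical, not RapidEye measurements from the study.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, soil=0.5):
    """Soil Adjusted Vegetation Index with soil factor L (commonly 0.5)."""
    return (1.0 + soil) * (nir - red) / (nir + red + soil)

def ndre(nir, red_edge):
    """Normalized Difference Red-Edge index."""
    return (nir - red_edge) / (nir + red_edge)

# Hypothetical band reflectances for a vegetated pixel
nir, red, red_edge = 0.45, 0.05, 0.25
print(round(ndvi(nir, red), 2))        # 0.8
print(round(savi(nir, red), 2))        # 0.6
print(round(ndre(nir, red_edge), 2))   # 0.29
```

An empirical LAI model of the kind validated in the study then regresses in situ LAI against one of these index values per pixel.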

  4. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model-order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model-order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).

  5. Automatic performance estimation of conceptual temperature control system design for rapid development of real system

    International Nuclear Information System (INIS)

    Jang, Yu Jin

    2013-01-01

    This paper presents an automatic performance estimation scheme of conceptual temperature control system with multi-heater configuration prior to constructing the physical system for achieving rapid validation of the conceptual design. An appropriate low-order discrete-time model, which will be used in the controller design, is constructed after determining several basic factors including the geometric shape of controlled object and heaters, material properties, heater arrangement, etc. The proposed temperature controller, which adopts the multivariable GPC (generalized predictive control) scheme with scale factors, is then constructed automatically based on the above model. The performance of the conceptual temperature control system is evaluated by using a FEM (finite element method) simulation combined with the controller.

  6. Automatic performance estimation of conceptual temperature control system design for rapid development of real system

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Yu Jin [Dongguk University, GyeongJu (Korea, Republic of)

    2013-07-15

    This paper presents an automatic performance estimation scheme of conceptual temperature control system with multi-heater configuration prior to constructing the physical system for achieving rapid validation of the conceptual design. An appropriate low-order discrete-time model, which will be used in the controller design, is constructed after determining several basic factors including the geometric shape of controlled object and heaters, material properties, heater arrangement, etc. The proposed temperature controller, which adopts the multivariable GPC (generalized predictive control) scheme with scale factors, is then constructed automatically based on the above model. The performance of the conceptual temperature control system is evaluated by using a FEM (finite element method) simulation combined with the controller.

  7. Experimental study on rapid embankment construction methods

    International Nuclear Information System (INIS)

    Hirano, Hideaki; Egawa, Kikuji; Hyodo, Kazuya; Kannoto, Yasuo; Sekimoto, Tsuyoshi; Kobayashi, Kokichi.

    1982-01-01

    In the construction of a thermal or nuclear power plant in a coastal area, shorter embankment construction periods have recently come to be called for. This tendency is most marked where the construction period is limited by meteorological or sea conditions. To meet this requirement, the authors have been conducting basic experimental studies on two methods for the rapid execution of embankment construction, namely the Steel Plate Cellular Bulkhead Embedding Method and the Ship Hull Caisson Method. This paper presents an outline of the results of the experimental study of these two methods. (author)

  8. Methods for Rapid Screening in Woody Plant Herbicide Development

    Directory of Open Access Journals (Sweden)

    William Stanley

    2014-07-01

    Full Text Available Methods for woody plant herbicide screening were assayed with the goal of reducing the resources and time required to conduct preliminary screenings for new products. Rapid screening methods tested included greenhouse seedling screening, germinal screening, and seed screening. Triclopyr and eight experimental herbicides from Dow AgroSciences (DAS 313, 402, 534, 548, 602, 729, 779, and 896) were tested on black locust, loblolly pine, red maple, sweetgum, and water oak. Screening results detected differences among herbicides and species in all experiments in much less time (days to weeks) than traditional field screenings and consumed significantly fewer resources (<500 mg acid equivalent per herbicide per screening). Using regression analysis, the various rapid screening methods were linked into a system capable of rapidly and inexpensively assessing herbicide efficacy and spectrum of activity. Implementation of such a system could streamline early-stage herbicide development leading to field trials, potentially freeing resources for use in developing beneficial new herbicide products.

  9. Properties of the particles emitted at mid-rapidity

    International Nuclear Information System (INIS)

    Lefort, T.; Cussol, D.; Peter, J.; Bocage, F.; Bougault, R.; Brou, R.; Colin, J.; Durand, D.; Genouin-Duhamel, E.; Gulminelli, F.; Lecolley, J.F.; Le Neindre, N.; Lopez, O.; Louvel, M.; Nguyen, A.D.; Steckmeyer, J.C.; Tamain, B.; Vient, E.

    1997-01-01

    Mid-rapidity emission studies give access to the very first instants of the collision between two nuclei. Studying them as a function of incident projectile energy makes it possible to follow the evolution of entrance-channel phenomena, from the lowest energies, where these phenomena are essentially collective, up to high energies, where they are essentially governed by nucleon-nucleon collisions. The first method, called 'method E', consists in first evaluating the contribution of the quasi-projectile to the rapidity distribution and then subtracting it from the total spectrum to obtain the contribution of the mid-rapidity particles. For light particles, the mid-rapidity emissions have a spectrum extending up to rapidities close to that of the quasi-projectile, so this method under-estimates the mid-rapidity contribution. The second method, called 'method M', consists in determining the contribution of the mid-rapidity particles directly, by supposing that their rapidity spectrum is homothetic to the triton spectrum. This method over-estimates the contribution of mid-rapidity particles insofar as the rapidity spectrum of the evaporated particles may extend up to Y nn , the rapidity of the nucleon-nucleon frame. The relative proportion of particles coming from mid-rapidity emission as a function of the experimental impact parameter and incident energy is shown for the Ar+Ni system. Results concerning the energy spectra of light particles emitted at mid-rapidity as a function of the excitation energy of the quasi-projectile are also shown for the same system. Conclusions concerning mid-rapidity emission are the following: the amount of particles depends essentially on the geometric overlap between the projectile and the target; these particles are issued from a zone richer in neutrons than the total system; the energy per nucleon stored in this zone is independent of the violence of the collision, which indicates a production process essentially
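
The subtraction idea behind 'method E' can be sketched with synthetic data. The following is an illustration, not the paper's analysis: it assumes the quasi-projectile (QP) evaporation component is symmetric about the QP rapidity, mirrors the forward half of the spectrum (assumed free of mid-rapidity emission), and subtracts it to estimate the mid-rapidity yield. All numbers are invented.

```python
import numpy as np

# Illustrative sketch (not the paper's exact 'method E'): estimate the
# quasi-projectile evaporation component of a rapidity spectrum by
# assuming it is symmetric about the QP rapidity y_qp, mirroring the
# forward half (y > y_qp), and subtracting it from the total spectrum.
rng = np.random.default_rng(0)
y_qp = 1.0

# Synthetic spectrum: QP Gaussian at y_qp plus a mid-rapidity component
qp = rng.normal(y_qp, 0.15, 8000)
mid = rng.normal(0.5 * y_qp, 0.3, 4000)
y = np.concatenate([qp, mid])

bins = np.linspace(-0.5, 2.0, 101)
counts, edges = np.histogram(y, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])

# Mirror the forward half of the spectrum about y_qp to model the QP source
qp_model = np.zeros_like(counts, dtype=float)
forward = centers > y_qp
qp_model[forward] = counts[forward]
mirrored = np.interp(2 * y_qp - centers, centers, counts.astype(float))
qp_model[~forward] = mirrored[~forward]

# Residual backward of y_qp is attributed to mid-rapidity emission
mid_yield = np.clip(counts - qp_model, 0, None).sum()
```

With these synthetic inputs the recovered yield is somewhat below the 4000 mid-rapidity particles generated, illustrating the under-estimation the abstract attributes to method E when mid-rapidity emission leaks past the QP rapidity.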

  10. Comparing rapid methods for detecting Listeria in seafood and environmental samples using the most probable number (MPN) technique.

    Science.gov (United States)

    Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C

    2012-02-15

    The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products no differences were observed among the rapid kits and efficiency was similar to the BAM method. On the environmental surface the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10² CFU/10 cm²). Two kits (VIP™ and Petrifilm™) failed to detect 10⁴ CFU/10 cm². The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result. Copyright © 2011 Elsevier B.V. All rights reserved.
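
For readers unfamiliar with the MPN calculation, the maximum-likelihood estimate that underlies MPN tables (the study itself used the MPN-BAM spreadsheet) can be computed directly. This sketch assumes a standard dilution series; the solver and the example tube pattern are illustrative.

```python
import math

# Sketch of the maximum-likelihood calculation behind MPN tables.
# For a dilution series with n_i tubes of inoculum volume v_i and p_i
# positive tubes, the MPN density lam (organisms per unit volume) solves
#   sum_i [ p_i*v_i*exp(-lam*v_i)/(1 - exp(-lam*v_i)) - (n_i - p_i)*v_i ] = 0
# The score is monotone decreasing in lam, so bisection suffices.

def mpn(volumes, tubes, positives):
    def score(lam):
        s = 0.0
        for v, n, p in zip(volumes, tubes, positives):
            e = math.exp(-lam * v)
            if p > 0:
                s += p * v * e / (1.0 - e)
            s -= (n - p) * v
        return s

    lo, hi = 1e-9, 1e3
    for _ in range(200):
        m = math.sqrt(lo * hi)        # bisect in log space
        if score(m) > 0:
            lo = m
        else:
            hi = m
    return math.sqrt(lo * hi)

# Classic 3-tube series (10, 1, 0.1 mL) with positive pattern 3-1-0:
# standard tables give an MPN of about 43 per 100 mL (~0.43 per mL).
density = mpn([10.0, 1.0, 0.1], [3, 3, 3], [3, 1, 0])
```

The estimate agrees with the tabulated value to within rounding, which is the sense in which MPN tables are just precomputed solutions of this likelihood equation.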

  11. Rapid prototyping of soil moisture estimates using the NASA Land Information System

    Science.gov (United States)

    Anantharaj, V.; Mostovoy, G.; Li, B.; Peters-Lidard, C.; Houser, P.; Moorhead, R.; Kumar, S.

    2007-12-01

    The Land Information System (LIS), developed at the NASA Goddard Space Flight Center, is a functional Land Data Assimilation System (LDAS) that incorporates a suite of land models in an interoperable computational framework. LIS has been integrated into a computational Rapid Prototyping Capabilities (RPC) infrastructure. LIS consists of a core, a number of community land models, data servers, and visualization systems, integrated in a high-performance computing environment. The land surface models (LSM) in LIS incorporate surface and atmospheric parameters of temperature, snow/water, vegetation, albedo, soil conditions, topography, and radiation. Many of these parameters are available from in-situ observations, numerical model analysis, and from NASA, NOAA, and other remote sensing satellite platforms at various spatial and temporal resolutions. The computational resources available to LIS via the RPC infrastructure support e-Science experiments involving global modeling of land-atmosphere studies at 1 km spatial resolution as well as regional studies at finer resolutions. The Noah Land Surface Model, available within LIS, is being used to rapidly prototype soil moisture estimates in order to evaluate the viability of other science applications for decision-making purposes. For example, LIS has been used to further extend the utility of the USDA Soil Climate Analysis Network of in-situ soil moisture observations. In addition, LIS also supports data assimilation capabilities that are used to assimilate remotely sensed soil moisture retrievals from the AMSR-E instrument onboard the Aqua satellite. The rapid prototyping of soil moisture estimates using LIS and their applications will be illustrated during the presentation.

  12. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    International Nuclear Information System (INIS)

    Norris, Edward T.; Liu, Xin; Hsieh, Jiang

    2015-01-01

    Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.

  13. A novel method for rapid in vitro radiobioassay

    Science.gov (United States)

    Crawford, Evan Bogert

    Rapid and accurate analysis of internal human exposure to radionuclides is essential to the effective triage and treatment of citizens who have possibly been exposed to radioactive materials in the environment. The two most likely scenarios in which a large number of citizens would be exposed are the detonation of a radiation dispersal device (RDD, "dirty bomb") or the accidental release of an isotope from an industrial source such as a radioisotopic thermal generator (RTG). In the event of the release and dispersion of radioactive materials into the environment in a large city, the entire population of the city -- including all commuting workers and tourists -- would have to be rapidly tested, both to satisfy the psychological needs of the citizens who were exposed to the mental trauma of a possible radiation dose, and to satisfy the immediate medical needs of those who received the highest doses and greatest levels of internal contamination -- those who would best benefit from rapid, intensive medical care. In this research a prototype rapid screening method to screen urine samples for the presence of up to five isotopes, both individually and in a mixture, has been developed. The isotopes used to develop this method are Co-60, Sr-90, Cs-137, Pu-238, and Am-241. This method avoids time-intensive chemical separations via the preparation and counting of a single sample on multiple detectors, and analyzing the spectra for isotope-specific markers. A rapid liquid-liquid separation using an organic extractive scintillator can be used to help quantify the activity of the alpha-emitting isotopes. 
The method provides quantifiable results in less than five minutes for the activity of beta/gamma-emitting isotopes when present in the sample at the intervention level as defined by the Centers for Disease Control and Prevention (CDC), and quantifiable results for the activity levels of alpha-emitting isotopes present at their respective intervention levels in approximately 30

  14. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of neutron multiplication factor in a fissile system, 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of neutron multiplication factor by correlating measured values with the corresponding calculated ones. This method was applied to the source multiplication and to the pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of measured neutron count rate distributions from the calculated ones estimates the accuracy of calculated k eff . In the pulse neutron method, the calculation errors of prompt neutron decay constants give the accuracy of the calculated k eff . (author)

  15. Evaluation of rapid radiometric method for drug susceptibility testing of Mycobacterium tuberculosis

    International Nuclear Information System (INIS)

    Siddiqi, S.H.; Libonati, J.P.; Middlebrook, G.

    1981-01-01

    A total of 106 isolates of Mycobacterium tuberculosis were tested for drug susceptibility by the conventional 7H11 plate method and by a new rapid radiometric method using special 7H12 liquid medium with 14 C-labeled substrate. Results obtained by the two methods were compared for rapidity, sensitivity, and specificity of the new test method. There was 98% overall agreement between the results obtained by the two methods. Of a total of 424 drug tests, only 8 drug results did not agree, mostly in the case of streptomycin. This new procedure was found to be rapid, with 87% of the test results reportable within 4 days and 98% reportable within 5 days, as compared to the usual 3 weeks required with the conventional indirect susceptibility test method. The results of this preliminary study indicate that the rapid radiometric method has the potential for routine laboratory use and merits further investigation

  16. Estimating Aquifer Transmissivity Using the Recession-Curve-Displacement Method in Tanzania’s Kilombero Valley

    Directory of Open Access Journals (Sweden)

    William Senkondo

    2017-12-01

    Full Text Available Information on aquifer processes and characteristics across scales has long been a cornerstone for understanding water resources. However, point measurements are often limited in extent and representativeness. Techniques that increase the support scale (footprint of measurements or leverage existing observations in novel ways can thus be useful. In this study, we used a recession-curve-displacement method to estimate regional-scale aquifer transmissivity (T from streamflow records across the Kilombero Valley of Tanzania. We compare these estimates to local-scale estimates made from pumping tests across the Kilombero Valley. The median T from the pumping tests was 0.18 m2/min. This was quite similar to the median T estimated from the recession-curve-displacement method applied during the wet season for the entire basin (0.14 m2/min and for one of the two sub-basins tested (0.16 m2/min. On the basis of our findings, there appears to be reasonable potential to inform water resource management and hydrologic model development through streamflow-derived transmissivity estimates, which is promising for data-limited environments facing rapid development, such as the Kilombero Valley.

  17. Joko Tingkir program for estimating tsunami potential rapidly

    Energy Technology Data Exchange (ETDEWEB)

    Madlazim, E-mail: m-lazim@physics.its.ac.id; Hariyono, E., E-mail: m-lazim@physics.its.ac.id [Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Negeri Surabaya (UNESA) , Jl. Ketintang, Surabaya 60231 (Indonesia)

    2014-09-25

    The purpose of the study was to estimate P-wave rupture durations (T{sub dur}), dominant periods (T{sub d}) and exceedance durations (T{sub 50Ex}) simultaneously for local events: shallow earthquakes which occurred off the coast of Indonesia. Although all the earthquakes had magnitudes greater than 6.3 and depths less than 70 km, some of them generated a tsunami while other events (Mw=7.8) did not. Analysis of the above-stated parameters using Joko Tingkir helped in understanding the tsunami generation of these earthquakes. Measurements from vertical-component broadband P-wave velocity records and determination of the above-stated parameters can provide a direct procedure for rapidly assessing the potential for tsunami generation. The results of the present study and the analysis of the seismic parameters helped explain why some events generated a tsunami while the others did not.

  18. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems

  19. Rapid earthquake magnitude determination for Vrancea early warning system

    International Nuclear Information System (INIS)

    Marmureanu, Alexandru

    2009-01-01

    Due to the huge amount of recorded data, an automatic procedure was developed and used to test different methods of rapidly evaluating earthquake magnitude from the first seconds of the P wave. In order to test all the algorithms involved in detection and rapid earthquake magnitude estimation, several tests were performed in order to avoid false alarms. A special detection algorithm was developed, based on the classical STA/LTA algorithm and tuned for early warning purposes. A method is proposed to rapidly estimate magnitude within 4 seconds of detection of the P wave at the epicenter. The method was tested on all recorded data, and the magnitude estimation error is acceptable taking into account that it is computed from only 3 stations in a very short time interval. (author)
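
The classical STA/LTA detector mentioned in the abstract can be sketched as follows. The window lengths and threshold here are illustrative choices, not the values tuned for the Vrancea early warning system.

```python
import numpy as np

# Minimal STA/LTA sketch: trigger when the short-term average (STA) of
# signal energy just after sample i exceeds a multiple of the long-term
# average (LTA) just before it. Window lengths and threshold are invented.

def sta_lta_trigger(x, n_sta=20, n_lta=200, threshold=4.0):
    """Return the first sample index where STA/LTA exceeds threshold, or -1."""
    e = x ** 2                                     # signal energy
    csum = np.concatenate(([0.0], np.cumsum(e)))   # prefix sums for fast windows
    for i in range(n_lta, len(x) - n_sta):
        lta = (csum[i] - csum[i - n_lta]) / n_lta
        sta = (csum[i + n_sta] - csum[i]) / n_sta
        if lta > 0 and sta / lta > threshold:
            return i
    return -1

# Synthetic record: background noise, then a strong arrival at sample 600
rng = np.random.default_rng(1)
signal = rng.normal(0, 1.0, 1000)
signal[600:] += rng.normal(0, 8.0, 400)
onset = sta_lta_trigger(signal)
```

In an early-warning context the same ratio would be computed sample-by-sample on streaming data, with the windows and threshold tuned to the local noise conditions to suppress false alarms.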

  20. A Robust WLS Power System State Estimation Method Integrating a Wide-Area Measurement System and SCADA Technology

    Directory of Open Access Journals (Sweden)

    Tao Jin

    2015-04-01

    Full Text Available With the development of modern society, the scale of power systems has increased rapidly, and their framework and mode of operation are trending towards greater complexity. It is nowadays much more important for dispatchers to know exactly the state parameters of the power network through state estimation. This paper proposes a robust power system WLS state estimation method integrating a wide-area measurement system (WAMS) and SCADA technology, incorporating phasor measurements and the results of the traditional state estimator in a post-processing estimator, which greatly reduces the scale of the non-linear estimation problem as well as the number of iterations and the processing time per iteration. This paper firstly analyzes the wide-area state estimation model in detail; then, addressing the issue that least squares does not account for bad data and outliers, it proposes a robust weighted least squares (WLS) method that combines a robust estimation principle with least squares through equivalent weights. The performance assessment is discussed through setting up mathematical models of the distribution network. The proposed method was shown to be accurate and reliable by simulations and experiments.
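
The robust-WLS idea, ordinary weighted least squares followed by equivalent weights that downweight suspect measurements, can be sketched on a toy linear model. This is not the paper's WAMS/SCADA estimator: the measurement matrix and the Huber-style reweighting rule below are illustrative.

```python
import numpy as np

# Toy robust WLS via iterative reweighting: solve the weighted normal
# equations, then shrink the weight of any measurement whose standardized
# residual exceeds k, so one gross error cannot bias the state estimate.

def robust_wls(H, z, sigma, n_iter=10, k=2.0):
    w = 1.0 / sigma ** 2
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        r = (z - H @ x) / sigma                    # standardized residuals
        # Huber-style equivalent weights: downweight |r| > k
        w = (1.0 / sigma ** 2) * np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
    return x

# Linear measurement model z = H x + error, with one gross error (bad data)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
x_true = np.array([1.0, 2.0])
sigma = np.full(5, 0.01)
z = H @ x_true
z[2] += 5.0                                        # corrupt one measurement
x_hat = robust_wls(H, z, sigma)
```

A plain WLS solve on the same data is pulled far off by the corrupted measurement; the reweighted estimate recovers the true state to within a few hundredths, which is the behavior the equivalent-weight principle is meant to deliver.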

  1. Rapid Methods to Estimate Potential Exposure to Semivolatile Organic Compounds in the Indoor Environment

    DEFF Research Database (Denmark)

    Little, John C.; Weschler, Charles J.; Nazaroff, William W

    2012-01-01

    A systematic and efficient strategy is needed to assess and manage potential risks to human health that arise from the manufacture and use of thousands of chemicals. Among available tools for rapid assessment of large numbers of chemicals, significant gaps are associated with the capability...

  2. Study on tube rupture strength evaluation method for rapid overheating

    International Nuclear Information System (INIS)

    Komine, Ryuji; Wada, Yusaku

    1998-08-01

    A sodium-water reaction resulting from a single tube break in a steam generator may rapidly overheat neighboring tubes under internal pressure loading. If the temperature of the tube wall becomes too high, it has to be verified that the stress in the tube does not exceed the material strength limit, to prevent propagation of the tube rupture. In the present study this phenomenon was treated as the fracture of a cylindrical tube undergoing large deformation due to overheating, and the evaluation method was investigated through both experimental and analytical approaches. The results obtained are as follows. (1) For the nominal stress estimation, it was clarified through the experimental data and detailed FEM elasto-plastic large-deformation analysis that the formula used in conventional designs can be applied. (2) Within the overheating temperature limits of the tubes, the creep effect is dominant even though the loading time is very short. The strain rate based on the JIS elevated-temperature tensile test method for steels and heat-resisting alloys is therefore too slow, and almost all of the total strain consists of creep strain; consequently, the time-dependent effect cannot be evaluated under the JIS strain rate condition. (3) Creep tests with durations shorter than a few minutes and tensile tests at strain rates higher than the JIS rate of 10%/min were carried out on 2 1/4Cr-1Mo(NT) steel, and standard values for tube rupture strength evaluation were formulated. (4) The above evaluation method, based on both the stress estimation and the application of the strength standard values, was validated using tube burst test data under internal pressure. (5) The strength standard values for Type 321 stainless steel were formulated in accordance with the procedure applied to 2 1/4Cr-1Mo(NT) steel. (author)

  3. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of software defects of the on-demand type that directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed
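
As a concrete instance of an SRGM, the Goel-Okumoto NHPP model is commonly used. The paper's modeling schemes and Bayesian inference are richer than this least-squares sketch, which is purely illustrative.

```python
import numpy as np

# Illustrative Goel-Okumoto NHPP sketch: the expected cumulative number of
# defects found by time t is
#     m(t) = a * (1 - exp(-b * t)),
# so 'a' estimates total defect content and a - m(t) the defects remaining.
# Here a and b are fit to cumulative failure counts by a grid search over b,
# with the conditionally optimal 'a' available in closed form.

def fit_goel_okumoto(t, m_obs):
    best = (np.inf, None, None)
    for b in np.linspace(1e-3, 1.0, 2000):
        g = 1.0 - np.exp(-b * t)
        a = (m_obs @ g) / (g @ g)        # least-squares optimal a for this b
        sse = np.sum((m_obs - a * g) ** 2)
        if sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]

# Synthetic cumulative counts generated from known parameters a=120, b=0.05
t = np.arange(1.0, 41.0)
m_obs = 120.0 * (1.0 - np.exp(-0.05 * t))
a_hat, b_hat = fit_goel_okumoto(t, m_obs)
remaining = a_hat - m_obs[-1]            # expected defects still latent
```

On noise-free synthetic data the grid search recovers the generating parameters nearly exactly; with real failure data one would instead maximize the NHPP likelihood, or, as in the paper, place priors on a and b and infer them by Bayesian methods.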

  4. A rapid method for counting nucleated erythrocytes on stained blood smears by digital image analysis

    Science.gov (United States)

    Gering, E.; Atkinson, C.T.

    2004-01-01

    Measures of parasitemia by intraerythrocytic hematozoan parasites are normally expressed as the number of infected erythrocytes per n erythrocytes and are notoriously tedious and time-consuming to obtain. We describe a protocol for generating rapid counts of nucleated erythrocytes from digital micrographs of thin blood smears that can be used to estimate the intensity of hematozoan infections in nonmammalian vertebrate hosts. This method takes advantage of the bold contrast and relatively uniform size and morphology of erythrocyte nuclei on Giemsa-stained blood smears and uses ImageJ, a Java-based image analysis program developed at the U.S. National Institutes of Health and available on the internet, to recognize and count these nuclei. This technique makes feasible rapid and accurate counts of total erythrocytes in large numbers of microscope fields, which can be used in the calculation of peripheral parasitemias in low-intensity infections.

  5. Rapid, nondestructive estimation of surface polymer layer thickness using attenuated total reflection fourier transform infrared (ATR FT-IR) spectroscopy and synthetic spectra derived from optical principles.

    Science.gov (United States)

    Weinstock, B André; Guiney, Linda M; Loose, Christopher

    2012-11-01

    We have developed a rapid, nondestructive analytical method that estimates the thickness of a surface polymer layer with high precision but unknown accuracy using a single attenuated total reflection Fourier transform infrared (ATR FT-IR) measurement. Because the method is rapid, nondestructive, and requires no sample preparation, it is ideal as a process analytical technique. Prior to implementation, the ATR FT-IR spectrum of the substrate layer pure component and the ATR FT-IR and real refractive index spectra of the surface layer pure component must be known. From these three input spectra a synthetic mid-infrared spectral matrix of surface layers 0 nm to 10,000 nm thick on substrate is created de novo. A minimum statistical distance match between a process sample's ATR FT-IR spectrum and the synthetic spectral matrix provides the thickness of that sample. We show that this method can be used to successfully estimate the thickness of polysulfobetaine surface modification, a hydrated polymeric surface layer covalently bonded onto a polyetherurethane substrate. A database of 1850 sample spectra was examined. Spectrochemical matrix-effect unknowns, such as the nonuniform and molecularly novel polysulfobetaine-polyetherurethane interface, were found to be minimal. A partial least squares regression analysis of the database spectra versus their thicknesses as calculated by the method described yielded an estimate of precision of ±52 nm.
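
The minimum-distance matching step can be sketched in isolation. This toy example does not reproduce the optical synthesis of the spectral matrix from the three input spectra; it merely assumes some matrix of candidate spectra (one row per thickness) and assigns a sample the thickness of the closest row.

```python
import numpy as np

# Toy sketch of the matching step only: given a synthetic matrix of spectra
# indexed by candidate layer thickness, report the thickness of the row at
# minimum Euclidean distance from the sample spectrum. The stand-in matrix
# below is invented; the real one is derived from optical principles.
rng = np.random.default_rng(2)
n_points = 400                                    # spectral points per spectrum
thicknesses = np.arange(0, 10001, 50)             # candidate thicknesses, nm

# Stand-in synthetic matrix: spectra vary smoothly with thickness
basis = rng.normal(0, 1, (2, n_points))
frac = thicknesses[:, None] / 10000.0
matrix = (1 - frac) * basis[0] + frac * basis[1]

def estimate_thickness(sample, matrix, thicknesses):
    d = np.linalg.norm(matrix - sample, axis=1)   # distance to each row
    return thicknesses[np.argmin(d)]

# A noisy sample whose true thickness is 3200 nm
true_frac = 0.32
sample = (1 - true_frac) * basis[0] + true_frac * basis[1] \
         + rng.normal(0, 0.05, n_points)
t_est = estimate_thickness(sample, matrix, thicknesses)
```

Because the noise is largely orthogonal to the direction along which the spectra vary with thickness, the nearest row lands on or immediately beside the true thickness, which is why a single ATR FT-IR measurement can resolve thickness far more finely than the raw noise level might suggest.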

  6. Method-related estimates of sperm vitality.

    Science.gov (United States)

    Cooper, Trevor G; Hellenkemper, Barbara

    2009-01-01

    Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.

  7. [A new method of fabricating photoelastic model by rapid prototyping].

    Science.gov (United States)

    Fan, Li; Huang, Qing-feng; Zhang, Fu-qiang; Xia, Yin-pei

    2011-10-01

    To explore a novel method of fabricating the photoelastic model using the rapid prototyping technique. A mandible model was made by rapid prototyping with computerized three-dimensional reconstruction, and the photoelastic model with teeth was then fabricated by traditional impression duplicating and mould casting. The photoelastic model of the mandible with teeth, fabricated indirectly by rapid prototyping, was very similar to the prototype in geometry and physical parameters. The model was of high optical sensitivity and met the experimental requirements. A photoelastic model of the mandible with teeth indirectly fabricated by rapid prototyping meets the photoelastic experimental requirements well.

  8. Rapid surface-water volume estimations in beaver ponds

    Science.gov (United States)

    Karran, Daniel J.; Westbrook, Cherie J.; Wheaton, Joseph M.; Johnston, Carol A.; Bedard-Haughn, Angela

    2017-02-01

    Beaver ponds are surface-water features that are transient through space and time. Such qualities complicate the inclusion of beaver ponds in local and regional water balances, and in hydrological models, as reliable estimates of surface-water storage are difficult to acquire without time- and labour-intensive topographic surveys. A simpler approach to overcome this challenge is needed, given the abundance of beaver ponds in North America, Eurasia, and southern South America. We investigated whether simple morphometric characteristics derived from readily available aerial imagery, or quickly measured field attributes of beaver ponds, can be used to approximate surface-water storage among the range of environmental settings in which beaver ponds are found. A total of 40 beaver ponds from four different sites in North and South America were studied. The simplified volume-area-depth (V-A-h) approach, originally developed for prairie potholes, was tested. With only two measurements of pond depth and corresponding surface area, this method estimated surface-water storage in beaver ponds within 5% on average. Beaver pond morphometry was characterized by a median basin coefficient of 0.91, and dam length and pond surface area were strongly correlated with beaver pond storage capacity, regardless of geographic setting. These attributes provide a means for coarsely estimating surface-water storage capacity in beaver ponds. Overall, this research demonstrates that reliable estimates of surface-water storage in beaver ponds require only simple measurements derived from aerial imagery and/or brief visits to the field. Future research efforts should be directed at incorporating these simple methods into both broader beaver-related tools and catchment-scale hydrological models.
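
The simplified V-A-h relation referred to above (after Hayashi and van der Kamp's prairie-pothole work) can be sketched as follows. The power-law form and the pond numbers are illustrative assumptions, not values from the study.

```python
import numpy as np

# Sketch of a simplified V-A-h relation: assume the pond basin follows
#     A(h) = s * (h / h0) ** (2 / p)
# with unit depth h0, so two (depth, area) measurements fix the shape
# exponent and scale, and integrating A over depth gives stored volume.

h0 = 1.0                                  # reference unit depth (m)

def fit_v_a_h(h1, A1, h2, A2):
    two_over_p = np.log(A1 / A2) / np.log(h1 / h2)
    s = A1 / (h1 / h0) ** two_over_p
    return s, two_over_p

def volume(h, s, two_over_p):
    # V(h) = integral of A from 0 to h
    return s * h0 * (h / h0) ** (1 + two_over_p) / (1 + two_over_p)

# Pond with "true" A(h) = 100*h (i.e. p = 2): measurements at 0.5 m and 1.0 m
s, q = fit_v_a_h(0.5, 50.0, 1.0, 100.0)
V_full = volume(1.0, s, q)                # analytic volume here is 50 m^3
```

This is the sense in which "only two measurements of pond depth and corresponding surface area" suffice: they pin down the two free parameters of the assumed basin shape, after which volume at any stage follows by integration.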

  9. Considerations for Task Analysis Methods and Rapid E-Learning Development Techniques

    Directory of Open Access Journals (Sweden)

    Dr. Ismail Ipek

    2014-02-01

    Full Text Available The purpose of this paper is to provide basic dimensions for rapid training development in e-learning courses in education and business. Principally, it starts by defining task analysis, how to select tasks for analysis, and task analysis methods for instructional design. To do this, first, learning and instructional technologies were discussed as visions of the future. Second, the importance of task analysis methods in rapid e-learning was considered, along with learning technologies for asynchronous and synchronous e-learning development. Finally, rapid instructional design concepts and e-learning design strategies were defined and clarified with examples; that is, all steps for effective task analysis and rapid training development techniques based on learning and instructional design approaches were discussed, including m-learning and other delivery systems. As a result, the concept of task analysis, rapid e-learning development strategies and the essentials of online course design were discussed, alongside learner interface design features for learners and designers.

  10. Verification of rapid method for estimation of added food colorant type in boiled sausages based on measurement of cross section color

    Science.gov (United States)

    Jovanović, J.; Petronijević, R. B.; Lukić, M.; Karan, D.; Parunović, N.; Branković-Lazić, I.

    2017-09-01

    During previous development of a chemometric method for estimating the amount of added colorant in meat products, it was noticed that the natural colorant most commonly added to boiled sausages, E 120, has different CIE-LAB behavior compared to the artificial colors used for the same purpose. This opened the possibility of transforming the developed method into a method for identifying the addition of natural or synthetic colorants in boiled sausages based on measurement of the color of the cross-section. After recalibration of the CIE-LAB method using linear discriminant analysis, verification was performed on 76 boiled sausages of either the frankfurter or Parisian sausage type. The accuracy and reliability of the classification were confirmed by comparison with the standard HPLC method. Results showed that the LDA + CIE-LAB method can be applied with high accuracy, 93.42%, to estimate food color type in boiled sausages. Natural orange colors can give false positive results. Pigments from spice mixtures had no significant effect on CIE-LAB results.

  11. Rapid Enzymatic Method for Pectin Methyl Esters Determination

    Directory of Open Access Journals (Sweden)

    Lucyna Łękawska-Andrinopoulou

    2013-01-01

    Full Text Available Pectin is a natural polysaccharide used in the food and pharmaceutical industries. The pectin degree of methylation is an important parameter having significant influence on pectin applications. A rapid, fully automated, kinetic flow method for determination of pectin methyl esters has been developed. The method is based on a lab-made analyzer using the reverse flow-injection/stopped-flow principle. Methanol is released from pectin by pectin methylesterase in the first mixing coil. Enzyme working solution is injected further downstream and mixed with the pectin/pectin methylesterase stream in the second mixing coil. Methanol is oxidized by alcohol oxidase, releasing formaldehyde and hydrogen peroxide. This reaction is coupled to a horseradish peroxidase catalyzed reaction, which gives the colored product 4-N-(p-benzoquinoneimine)-antipyrine. The reaction rate is proportional to the methanol concentration and is followed using an Ocean Optics USB 2000+ spectrophotometer. The analyzer is fully regulated by a lab-written LabVIEW program. The detection limit was 1.47 mM with an analysis rate of 7 samples h⁻¹. A paired t-test with results from the manual method showed that the automated method results are equivalent to the manual method at the 95% confidence interval. The developed method is rapid and sustainable, and it is the first application of flow analysis in pectin analysis.

  12. Rapid Estimates of Rupture Extent for Large Earthquakes Using Aftershocks

    Science.gov (United States)

    Polet, J.; Thio, H. K.; Kremer, M.

    2009-12-01

    The spatial distribution of aftershocks is closely linked to the rupture extent of the mainshock that preceded them, and a rapid analysis of aftershock patterns therefore has potential for use in near real-time estimates of earthquake impact. The correlation between aftershocks and slip distribution has frequently been used to estimate the fault dimensions of large historic earthquakes for which no, or insufficient, waveform data are available. With the advent of earthquake inversions that use seismic waveforms and geodetic data to constrain the slip distribution, the study of aftershocks has recently been focused largely on enhancing our understanding of the underlying mechanisms in a broader earthquake mechanics/dynamics framework. However, in a near real-time earthquake monitoring environment, in which aftershocks of large earthquakes are routinely detected and located, these data may also be effective in determining a fast estimate of the mainshock rupture area, which would aid in the rapid assessment of the impact of the earthquake. We have analyzed a considerable number of large recent earthquakes and their aftershock sequences and have developed an effective algorithm that determines the rupture extent of a mainshock from its aftershock distribution, in a fully automatic manner. The algorithm automatically removes outliers by spatial binning, and subsequently determines the best-fitting “strike” of the rupture and its length by projecting the aftershock epicenters onto a set of lines that cross the mainshock epicenter with incremental azimuths. For strike-slip or large dip-slip events, for which the surface projection of the rupture is rectilinear, the calculated strike correlates well with the strike of the fault, and the corresponding length, determined from the distribution of aftershocks projected onto the line, agrees well with the rupture length. In the case of a smaller dip-slip rupture with an aspect ratio closer to 1, the procedure gives a measure
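    The projection idea in the algorithm above can be sketched as follows. The binning-based outlier removal is omitted, and the scoring rule (minimize perpendicular scatter) and the 5th–95th percentile length measure are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def rupture_from_aftershocks(epi, aftershocks, azimuth_step=1.0):
    """Estimate mainshock strike (deg from north) and rupture length (km)
    by projecting aftershock epicenters onto lines through the epicenter
    at incremental azimuths and picking the azimuth with least scatter."""
    rel = np.asarray(aftershocks) - np.asarray(epi)
    best = None
    for az in np.arange(0.0, 180.0, azimuth_step):
        theta = np.radians(az)
        u = np.array([np.sin(theta), np.cos(theta)])        # along-strike (x=E, y=N)
        along = rel @ u
        perp = rel @ np.array([np.cos(theta), -np.sin(theta)])
        score = np.std(perp)                                # scatter off the line
        if best is None or score < best[0]:
            # length from the spread of along-strike projections
            length = np.percentile(along, 95) - np.percentile(along, 5)
            best = (score, az, length)
    return best[1], best[2]

# Synthetic strike-slip aftershock cloud: 60 km rupture striking 30 degrees
rng = np.random.default_rng(1)
along = rng.uniform(-30, 30, 200)
perp = rng.normal(0, 2, 200)
th = np.radians(30.0)
cloud = np.column_stack([along * np.sin(th) + perp * np.cos(th),
                         along * np.cos(th) - perp * np.sin(th)])
strike, length = rupture_from_aftershocks((0.0, 0.0), cloud)
```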

  13. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The process of obtaining the marginal spectrum in the HHT method is given and the linear property of the marginal spectrum is demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum are further analyzed. The Hilbert spectrum estimation algorithm is then discussed in detail, and simulation results are given. Theory and simulation show that for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
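    A minimal illustration of a marginal spectrum: the sketch below treats the whole signal as a single intrinsic mode function (the empirical mode decomposition step of a full HHT is omitted, which is a significant simplification), computes the analytic signal by FFT, and accumulates instantaneous amplitude into instantaneous-frequency bins.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via a one-sided FFT (same idea as scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def marginal_spectrum(x, fs, nbins=64):
    """Marginal spectrum of a single mode: instantaneous amplitude
    accumulated over time in instantaneous-frequency bins."""
    z = analytic_signal(x)
    amp = np.abs(z)[:-1]
    phase = np.unwrap(np.angle(z))
    finst = np.diff(phase) * fs / (2 * np.pi)        # instantaneous frequency (Hz)
    edges = np.linspace(0, fs / 2, nbins + 1)
    spec, _ = np.histogram(finst, bins=edges, weights=amp)
    return 0.5 * (edges[:-1] + edges[1:]), spec

fs = 200.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 25 * t)                       # short test tone
freqs, spec = marginal_spectrum(x, fs)
peak = freqs[np.argmax(spec)]
```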

  14. Multivariate regression methods for estimating velocity of ictal discharges from human microelectrode recordings

    Science.gov (United States)

    Liou, Jyun-you; Smith, Elliot H.; Bateman, Lisa M.; McKhann, Guy M., II; Goodman, Robert R.; Greger, Bradley; Davis, Tyler S.; Kellis, Spencer S.; House, Paul A.; Schevon, Catherine A.

    2017-08-01

    Objective. Epileptiform discharges, an electrophysiological hallmark of seizures, can propagate across cortical tissue in a manner similar to traveling waves. Recent work has focused attention on the origination and propagation patterns of these discharges, yielding important clues to their source location and mechanism of travel. However, systematic studies of methods for measuring propagation are lacking. Approach. We analyzed epileptiform discharges in microelectrode array recordings of human seizures. The array records multiunit activity and local field potentials at 400 micron spatial resolution, from a small cortical site free of obstructions. We evaluated several computationally efficient statistical methods for calculating traveling wave velocity, benchmarking them to analyses of associated neuronal burst firing. Main results. Over 90% of discharges met statistical criteria for propagation across the sampled cortical territory. Detection rate, direction and speed estimates derived from a multiunit estimator were compared to four field potential-based estimators: negative peak, maximum descent, high gamma power, and cross-correlation. Interestingly, the methods that were computationally simplest and most efficient (negative peak and maximal descent) offer non-inferior results in predicting neuronal traveling wave velocities compared to the other two, more complex methods. Moreover, the negative peak and maximal descent methods proved to be more robust against reduced spatial sampling challenges. Using least absolute deviation in place of least squares error minimized the impact of outliers, and reduced the discrepancies between local field potential-based and multiunit estimators. Significance. Our findings suggest that ictal epileptiform discharges typically take the form of exceptionally strong, rapidly traveling waves, with propagation detectable across millimeter distances. The sequential activation of neurons in space can be inferred from clinically
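    The simplest of the field-potential estimators above reduces to fitting a plane to discharge timing across the array: if the negative-peak time at electrode (x, y) is modeled as t = ax + by + c, the fitted slowness vector (a, b) gives the propagation direction, and speed is its reciprocal magnitude. The grid geometry and noise level below are invented for illustration; a least-absolute-deviation fit, as the paper recommends for outliers, would replace the least-squares step.

```python
import numpy as np

def wave_velocity(xy, t):
    """Least-squares plane fit t = a*x + b*y + c; returns speed and direction."""
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(t))])
    (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
    slowness = np.hypot(a, b)                  # ms per mm
    speed = 1.0 / slowness                     # mm per ms
    direction = np.degrees(np.arctan2(b, a))   # 0 deg = +x axis
    return speed, direction

# Synthetic discharge on a 10x10 grid with 0.4 mm pitch, traveling at
# 0.5 mm/ms along +x; timing jitter stands in for measurement noise.
rng = np.random.default_rng(2)
gx, gy = np.meshgrid(np.arange(10) * 0.4, np.arange(10) * 0.4)
xy = np.column_stack([gx.ravel(), gy.ravel()])
t = xy[:, 0] / 0.5 + rng.normal(0, 0.05, len(xy))   # peak times in ms
speed, direction = wave_velocity(xy, t)
```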

  15. Rapid Estimation of Tocopherol Content in Linseed and Sunflower Oils-Reactivity and Assay

    Directory of Open Access Journals (Sweden)

    Tjaša Prevc

    2015-08-01

    Full Text Available The reactivity of tocopherols with 2,2-diphenyl-1-picrylhydrazyl (DPPH) was studied in model systems in order to establish a method for quantifying vitamin E in plant oils. The method was optimized with respect to solvent composition of the assay medium, which has a large influence on the course of reaction of tocopherols with DPPH. The rate of reaction of α-tocopherol with DPPH is higher than that of γ-tocopherol in both protic and aprotic solvents. In ethyl acetate, routinely applied for the analysis of antioxidant potential (AOP) of plant oils, reactions of tocopherols with DPPH are slower and concentration of tocopherols in the assay has a large influence on their molar reactivity. In 2-propanol, however, two electrons are exchanged for both α- and γ-tocopherols, independent of their concentration. 2-propanol is not toxic and is fully compatible with polypropylene labware. The chromatographically determined content of tocopherols and their molar reactivity in the DPPH assay reveal that only tocopherols contribute to the AOP of sunflower oil, whereas the contribution of tocopherols to the AOP of linseed oil is 75%. The DPPH assay in 2-propanol can be applied for rapid and cheap estimation of vitamin E content in plant oils where tocopherols are major antioxidants.

  16. Rapid Estimation of Tocopherol Content in Linseed and Sunflower Oils-Reactivity and Assay.

    Science.gov (United States)

    Prevc, Tjaša; Levart, Alenka; Cigić, Irena Kralj; Salobir, Janez; Ulrih, Nataša Poklar; Cigić, Blaž

    2015-08-13

    The reactivity of tocopherols with 2,2-diphenyl-1-picrylhydrazyl (DPPH) was studied in model systems in order to establish a method for quantifying vitamin E in plant oils. The method was optimized with respect to solvent composition of the assay medium, which has a large influence on the course of reaction of tocopherols with DPPH. The rate of reaction of α-tocopherol with DPPH is higher than that of γ-tocopherol in both protic and aprotic solvents. In ethyl acetate, routinely applied for the analysis of antioxidant potential (AOP) of plant oils, reactions of tocopherols with DPPH are slower and concentration of tocopherols in the assay has a large influence on their molar reactivity. In 2-propanol, however, two electrons are exchanged for both α- and γ-tocopherols, independent of their concentration. 2-propanol is not toxic and is fully compatible with polypropylene labware. The chromatographically determined content of tocopherols and their molar reactivity in the DPPH assay reveal that only tocopherols contribute to the AOP of sunflower oil, whereas the contribution of tocopherols to the AOP of linseed oil is 75%. The DPPH assay in 2-propanol can be applied for rapid and cheap estimation of vitamin E content in plant oils where tocopherols are major antioxidants.
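    The two-electron stoichiometry reported above implies a simple back-calculation of tocopherol from the DPPH absorbance drop via Beer-Lambert. The molar absorptivity and absorbance readings below are assumed illustrative values, not data from the paper.

```python
# Tocopherol from DPPH consumption in 2-propanol (2 electrons exchanged).
eps_dpph = 10900.0       # M^-1 cm^-1 for DPPH near 517 nm (assumed literature value)
path_cm = 1.0            # cuvette / well path length
a0, a_end = 0.95, 0.52   # DPPH absorbance before/after reaction with the extract

d_dpph = (a0 - a_end) / (eps_dpph * path_cm)   # mol/L of DPPH radical reduced
tocopherol = d_dpph / 2.0                      # 2 radicals quenched per molecule
```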

  17. Use of ethyl-α-isonitrosoacetoacetate in the rapid estimation and radiochemical separation of gold

    International Nuclear Information System (INIS)

    Sawant, A.D.; Haldar, B.C.

    1978-01-01

    The use of ethyl-α-isonitrosoacetoacetate in the rapid estimation and radiochemical separation of gold is reported. As low as 5.00 mg of Au can be estimated with an accuracy better than 1%. Decontamination values against platinum metals and other metals usually associated with Au are greater than 10 5 . Isotopes and results are tabulated. The time required for radiochemical separation is around 20 min and the recovery of Au is better than 80%. γ-activities were measured with a single channel analyser and NaI(Tl) detector. β-activities were counted on a thin end-window type GM counter. (T.I.)

  18. The Use of Rapid Review Methods for the U.S. Preventive Services Task Force.

    Science.gov (United States)

    Patnode, Carrie D; Eder, Michelle L; Walsh, Emily S; Viswanathan, Meera; Lin, Jennifer S

    2018-01-01

    Rapid review products are intended to synthesize available evidence in a timely fashion while still meeting the needs of healthcare decision makers. Various methods and products have been applied for rapid evidence syntheses, but no single approach has been uniformly adopted. Methods to gain efficiency and compress the review time period include focusing on a narrow clinical topic and key questions; limiting the literature search; performing single (versus dual) screening of abstracts and full-text articles for relevance; and limiting the analysis and synthesis. In order to maintain the scientific integrity, including transparency, of rapid evidence syntheses, it is imperative that procedures used to streamline standard systematic review methods are prespecified, based on sound review principles and empiric evidence when possible, and provide the end user with an accurate and comprehensive synthesis. The collection of clinical preventive service recommendations maintained by the U.S. Preventive Services Task Force, along with its commitment to rigorous methods development, provide a unique opportunity to refine, implement, and evaluate rapid evidence synthesis methods and add to an emerging evidence base on rapid review methods. This paper summarizes the U.S. Preventive Services Task Force's use of rapid review methodology, its criteria for selecting topics for rapid evidence syntheses, and proposed methods to streamline the review process. Copyright © 2018 American Journal of Preventive Medicine. All rights reserved.

  19. A Rapid Method for Measuring Strontium-90 Activity in Crops in China

    Science.gov (United States)

    Pan, Lingjing; Yu, Guobing; Wen, Deyun; Chen, Zhi; Sheng, Liusi; Liu, Chung-King; Xu, X. George

    2017-09-01

    A rapid method for measuring Sr-90 activity in crop ashes is presented. Liquid scintillation counting, combined with extraction columns of the crown ether 4,4′(5″)-di-tert-butylcyclohexano-18-crown-6, is used to determine the activity of Sr-90 in crops. The chemical yields are quantified by gravimetric analysis. The conventional method, which uses ion-exchange resin with HDEHP, cannot completely remove bismuth when comparatively large amounts of lead and bismuth are present in the samples; the rapid method overcomes this. The chemical yield of the method is about 60% and the MDA for Sr-90 is 2.32 Bq/kg. The whole procedure, including spectrum analysis to determine the activity, takes only about one day, a large improvement over the conventional method. A modified conventional method is also described to verify the results of the rapid one. The two methods meet the different needs of routine monitoring and emergency response.

  20. Population size estimation of men who have sex with men through the network scale-up method in Japan.

    Directory of Open Access Journals (Sweden)

    Satoshi Ezoe

    Full Text Available BACKGROUND: Men who have sex with men (MSM) are one of the groups most at risk for HIV infection in Japan. However, size estimates of MSM populations have not been conducted with sufficient frequency and rigor because of the difficulty, high cost and stigma associated with reaching such populations. This study examined an innovative and simple method for estimating the size of the MSM population in Japan. We combined an internet survey with the network scale-up method, a social network method for estimating the size of hard-to-reach populations, for the first time in Japan. METHODS AND FINDINGS: An internet survey was conducted among 1,500 internet users who registered with a nationwide internet-research agency. The survey participants were asked how many members of particular groups with known population sizes (firepersons, police officers, and military personnel) they knew as acquaintances. The participants were also asked to identify the number of their acquaintances whom they understood to be MSM. Using these survey results with the network scale-up method, the personal network size and MSM population size were estimated. The personal network size was estimated to be 363.5 regardless of the sex of the acquaintances and 174.0 for only male acquaintances. The estimated MSM prevalence among the total male population in Japan was 0.0402% without adjustment, and 2.87% after adjusting for the transmission error of MSM. CONCLUSIONS: The estimated personal network size and MSM prevalence seen in this study were comparable to those from previous survey results based on the direct-estimation method. Estimating population sizes through combining an internet survey with the network scale-up method appeared to be an effective method from the perspectives of rapidity, simplicity, and low cost as compared with more-conventional methods.
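    The scale-up arithmetic works in two steps: the personal network size c is first estimated from groups of known size, and the hidden population is then scaled from the mean number of acquaintances reported in it. All numbers below are invented for illustration and are not the study's data.

```python
# Network scale-up sketch with hypothetical figures.
# Step 1: personal network size c from reference groups of known size.
T = 128_000_000                              # reference population (assumed total)
known_sizes = [157_000, 296_000, 240_000]    # hypothetical group sizes
reported_counts = [2.1, 3.5, 2.8]            # mean acquaintances reported per group

c = T * sum(reported_counts) / sum(known_sizes)   # scale-up network size estimate

# Step 2: hidden population from acquaintances reported in it.
mean_hidden_known = 0.5                      # mean MSM acquaintances per respondent
hidden_size = T * mean_hidden_known / c
prevalence = hidden_size / T                 # equivalently mean_hidden_known / c
```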

  1. The scope of application of incremental rapid prototyping methods in foundry engineering

    Directory of Open Access Journals (Sweden)

    M. Stankiewicz

    2010-01-01

    Full Text Available The article presents the scope of application of selected incremental Rapid Prototyping methods in the process of manufacturing casting models, casting moulds and casts. The Rapid Prototyping methods (SL, SLA, FDM, 3DP, JS) are predominantly used for the production of models and model sets for casting moulds. The Rapid Tooling methods, such as: ZCast-3DP, ProMetalRCT and VoxelJet, enable the fabrication of casting moulds in the incremental process. The application of the RP methods in cast production makes it possible to speed up the prototype preparation process. This is particularly vital to elements of complex shapes. The time required for the manufacture of the model, the mould and the cast proper may vary from a few to several dozen hours.

  2. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien; Claudel, Christian G.

    2015-01-01

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.

  3. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien

    2015-12-30

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
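    One way to realize the cycle-estimation step described above is a grid search for the cycle length that best aligns the observed green-onset transition times, scored by the circular resultant of the times modulo each candidate cycle. This scoring rule is an assumption for illustration, not necessarily the patent's scoring function.

```python
import math

def estimate_cycle(transition_times, c_min=30.0, c_max=120.0, step=0.1):
    """Grid-search the cycle length C that best aligns transition times:
    score = circular resultant length of the phases 2*pi*(t mod C)/C."""
    best_c, best_score = None, -1.0
    c = c_min
    while c <= c_max:
        phases = [2 * math.pi * (t % c) / c for t in transition_times]
        re = sum(math.cos(p) for p in phases)
        im = sum(math.sin(p) for p in phases)
        score = math.hypot(re, im) / len(transition_times)
        if score > best_score:
            best_c, best_score = c, score
        c += step
    return best_c, best_score

# Green-onset transitions generated from an assumed 90 s cycle with jitter
true_c = 90.0
times = [k * true_c + 12.0 + 0.5 * ((k * 7) % 3 - 1) for k in range(20)]
cycle, score = estimate_cycle(times)
```

    Note that exact divisors of the true cycle also score highly; the jitter penalizes shorter candidates slightly more, which is why the search favors the full cycle here.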

  4. Rapid estimation of the vertebral body volume: a combination of the Cavalieri principle and computed tomography images

    International Nuclear Information System (INIS)

    Odaci, Ersan; Sahin, Buenyamin; Sonmez, Osman Fikret; Kaplan, Sueleyman; Bas, Orhan; Bilgic, Sait; Bek, Yueksel; Erguer, Hayati

    2003-01-01

    Objective: The exact volume of the vertebral body is necessary for the evaluation, treatment and surgical application of the related vertebral body, and allows volume changes to be monitored in conditions such as infectious diseases of the vertebra and traumatic or non-traumatic fractures and deformities of the spine. Several studies have assessed vertebral body size based on different criteria of the spine using different techniques. However, we have not found any detailed study in the literature describing the combination of the Cavalieri principle and vertebral body volume estimation. Materials and methods: In the present study we describe a rapid, simple, accurate and practical technique for estimating the volume of the vertebral body. Two specimens comprising ten lumbar vertebrae were taken from cadavers and scanned in axial, sagittal and coronal section planes by a computed tomography (CT) machine. Consecutive sections of 5 and 3 mm thickness were used to estimate the total volume of the vertebral bodies by means of the Cavalieri principle. Furthermore, to evaluate inter-observer differences, the volume estimations were carried out by three performers. Results: There were no significant differences between the performers' estimates and the real volumes of the vertebral bodies (P>0.05), nor between the performers' volume estimates (P>0.05). The section thickness and the section planes did not affect the accuracy of the estimates (P>0.05). A high correlation was seen between the performers' estimates and the real volumes of the vertebral bodies (r=0.881). Conclusion: We conclude that the combination of CT scanning with the Cavalieri principle is a direct and accurate technique that can be safely applied to estimate the volume of the vertebral body, with a mean workload of 5 min 11 s per vertebra
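    The Cavalieri estimator itself is one line: volume is the section spacing times the summed cross-sectional areas. The areas below are hypothetical.

```python
def cavalieri_volume(areas_cm2, thickness_cm):
    """Cavalieri principle: V = d * sum of consecutive section areas."""
    return thickness_cm * sum(areas_cm2)

# Hypothetical cross-section areas (cm^2) of one vertebral body, traced on
# consecutive 0.5 cm CT slices
areas = [3.1, 9.2, 11.0, 10.4, 4.9]
volume = cavalieri_volume(areas, 0.5)   # cm^3
```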

  5. Rapid Methods for the Detection of Foodborne Bacterial Pathogens: Principles, Applications, Advantages and Limitations

    Directory of Open Access Journals (Sweden)

    Law eJodi Woan-Fei

    2015-01-01

    Full Text Available The incidence of foodborne diseases has increased over the years, resulting in a major public health problem globally. Foodborne pathogens can be found in various foods, and it is important to detect them to provide a safe food supply and to prevent foodborne diseases. The conventional methods used to detect foodborne pathogens are time consuming and laborious. Hence, a variety of methods have been developed for rapid detection of foodborne pathogens, as required in many food analyses. Rapid detection methods can be categorized into nucleic acid-based, biosensor-based and immunological-based methods. This review emphasizes the principles and application of recent rapid methods for the detection of foodborne bacterial pathogens. Detection methods included are simple polymerase chain reaction (PCR), multiplex PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), loop-mediated isothermal amplification (LAMP) and oligonucleotide DNA microarrays, classified as nucleic acid-based methods; optical, electrochemical and mass-based biosensors, classified as biosensor-based methods; and enzyme-linked immunosorbent assay (ELISA) and lateral flow immunoassay, classified as immunological-based methods. In general, rapid detection methods are time-efficient, sensitive, specific and labor-saving. The development of rapid detection methods is vital to the prevention and treatment of foodborne diseases.

  6. A method for fast energy estimation and visualization of protein-ligand interaction

    Science.gov (United States)

    Tomioka, Nobuo; Itai, Akiko; Iitaka, Yoichi

    1987-10-01

    A new computational and graphical method for facilitating ligand-protein docking studies is developed on a three-dimensional computer graphics display. Various physical and chemical properties inside the ligand binding pocket of a receptor protein, whose structure is elucidated by X-ray crystal analysis, are calculated on three-dimensional grid points and are stored in advance. By utilizing those tabulated data, it is possible to estimate the non-bonded and electrostatic interaction energy and the number of possible hydrogen bonds between protein and ligand molecules in real time during an interactive docking operation. The method also provides a comprehensive visualization of the local environment inside the binding pocket. With this method, it becomes easier to find a roughly stable geometry of ligand molecules, and one can therefore make a rapid survey of the binding capability of many drug candidates. The method will be useful for drug design as well as for the examination of protein-ligand interactions.
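    Real-time energy evaluation of the kind described above typically interpolates the precomputed grid at each ligand atom position. Below is a sketch of trilinear interpolation on such a grid; the test grid stores a linear toy potential, so interpolation reproduces it exactly. The grid layout and units are assumptions for illustration.

```python
import numpy as np

def trilinear(grid, origin, spacing, p):
    """Trilinear interpolation of a precomputed 3-D property grid at point p."""
    f = (np.asarray(p) - origin) / spacing        # fractional grid coordinates
    i = np.floor(f).astype(int)                   # lower corner indices
    t = f - i                                     # offsets within the cell
    v = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - t[0]) if dx == 0 else t[0]) * \
                    ((1 - t[1]) if dy == 0 else t[1]) * \
                    ((1 - t[2]) if dz == 0 else t[2])
                v += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return v

# Toy "potential" grid: value = x + 2y + 3z at the nodes, 0.5 units apart
origin = np.zeros(3)
spacing = 0.5
xs = np.arange(8) * spacing
grid = xs[:, None, None] + 2 * xs[None, :, None] + 3 * xs[None, None, :]
val = trilinear(grid, origin, spacing, (1.1, 0.7, 2.0))
```

    Summing such lookups over all ligand atoms gives an interaction-energy estimate in constant time per atom, which is what makes interactive docking feasible.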

  7. Rapid, convenient method for screening imidazole-containing compounds for heme oxygenase inhibition.

    Science.gov (United States)

    Vlahakis, Jason Z; Rahman, Mona N; Roman, Gheorghe; Jia, Zongchao; Nakatsu, Kanji; Szarek, Walter A

    2011-01-01

    Sensitive assays for measuring heme oxygenase activity have been based on the gas-chromatographic detection of carbon monoxide using elaborate, expensive equipment. The present study describes a rapid and convenient method for screening imidazole-containing candidates for inhibitory activity against heme oxygenase using a plate reader, based on the spectroscopic evaluation of heme degradation. A PowerWave XS plate reader was used to monitor the absorbance (as a function of time) of heme bound to purified truncated human heme oxygenase-1 (hHO-1) in the individual wells of a standard 96-well plate (with or without the addition of a test compound). The degradation of heme by heme oxygenase-1 was initiated using L-ascorbic acid, and the collected absorbance data were analyzed by three different methods to calculate the percent control activity occurring in wells containing test compounds relative to that occurring in control wells with no test compound present. In the case of wells containing inhibitory compounds, significant shifts in λmax from 404 to near 412 nm were observed, as well as a decrease in the rate of heme degradation relative to that of the control. Each of the three methods of data processing (overall percent drop in absorbance over 1.5 h, initial rate of reaction determined over the first 5 min, and estimated pseudo-first-order reaction rate constant determined over 1.5 h) gave similar and reproducible results for percent control activity. The fastest and easiest method of data analysis was determined to be that using initial rates, involving data acquisition for only 5 min once reactions have been initiated using L-ascorbic acid. The results of the study demonstrate that this simple assay based on the spectroscopic detection of heme represents a rapid, convenient method to determine the relative inhibitory activity of candidate compounds, and is useful in quickly screening a series or library of compounds for heme oxygenase inhibition.
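    The initial-rate analysis favored above amounts to a linear fit of absorbance over the first 5 min in each well, with activity expressed relative to the control well. The decay curves below are simulated, not assay data.

```python
import numpy as np

def initial_rate(t_min, absorbance, window=5.0):
    """Slope of absorbance vs. time over the first `window` minutes."""
    m = t_min <= window
    slope, _intercept = np.polyfit(t_min[m], absorbance[m], 1)
    return slope

def percent_control(rate_test, rate_control):
    """Activity in a test well relative to the uninhibited control well."""
    return 100.0 * rate_test / rate_control

# Simulated heme absorbance decay at 404 nm: control vs. inhibited well
t = np.arange(0, 90.5, 0.5)                 # minutes
control = np.exp(-0.03 * t)                 # fast degradation, no inhibitor
inhibited = np.exp(-0.006 * t)              # slowed by a hypothetical inhibitor
activity = percent_control(initial_rate(t, inhibited),
                           initial_rate(t, control))
```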

  8. Reverse survival method of fertility estimation: An evaluation

    Directory of Open Access Journals (Sweden)

    Thomas Spoorenberg

    2014-07-01

    Full Text Available Background: For the most part, demographers have relied on the ever-growing body of sample surveys collecting full birth histories to derive total fertility estimates in less statistically developed countries. Yet alternative methods of fertility estimation can return very consistent total fertility estimates using only basic demographic information. Objective: This paper evaluates the consistency and sensitivity of the reverse survival method, a fertility estimation method based on population data by age and sex collected in one census or a single-round survey. Methods: A simulated population was first projected over 15 years using a set of fertility and mortality age and sex patterns. The projected population was then reverse survived using the Excel template FE_reverse_4.xlsx, provided with Timæus and Moultrie (2012). Reverse survival fertility estimates were then compared for consistency to the total fertility rates used to project the population. The sensitivity was assessed by introducing a series of distortions in the projection of the population and comparing the difference implied in the resulting fertility estimates. Results: The reverse survival method produces total fertility estimates that are very consistent and hardly affected by erroneous assumptions on the age distribution of fertility or by the use of incorrect mortality levels, trends, and age patterns. The quality of the age and sex population data that are 'reverse survived' determines the consistency of the estimates. The contribution of the method to the estimation of past and present trends in total fertility is illustrated through its application to the population data of five countries characterized by distinct fertility levels and data quality issues. Conclusions: Notwithstanding its simplicity, the reverse survival method of fertility estimation has seldom been applied. The method can be applied to a large body of existing and easily available population data.
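    The core of reverse survival is recovering the births of x years ago by dividing today's population aged x by the probability of surviving from birth to age x; the full method then relates those births to reverse-survived women of childbearing age to obtain fertility rates. The counts and survival probabilities below are hypothetical, constructed so that every cohort implies 100,000 births.

```python
def reverse_survive(pop_by_age, survival_to_age):
    """Children aged x alive today are the survivors of births x years ago,
    so births(x years ago) = N(x) / l(x)."""
    return [n / l for n, l in zip(pop_by_age, survival_to_age)]

# Hypothetical census counts aged 0-4 and survival probabilities from birth
pop = [96_500, 95_200, 94_800, 94_100, 93_600]
surv = [0.965, 0.952, 0.948, 0.941, 0.936]
births = reverse_survive(pop, surv)   # births 0, 1, ..., 4 years before the census
```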

  9. Motion Analysis Based on Invertible Rapid Transform

    Directory of Open Access Journals (Sweden)

    J. Turan

    1999-06-01

    Full Text Available This paper presents the results of a study on the use of the invertible rapid transform (IRT) for motion estimation in a sequence of images. Motion estimation algorithms based on the analysis of the matrix of states (produced in the IRT calculation) are described. The new method was used experimentally to estimate crowd and traffic motion from image data sequences captured at railway stations and on highways in large cities. The motion vectors may be used to devise a polar plot (showing velocity magnitude and direction) for moving objects, in which the dominant motion tendency can be seen. Experimental results comparing the new motion estimation methods with other well-known block matching methods (full search, 2D-log, and methods based on the conventional cross-correlation (CC) function or the phase correlation (PC) function) for crowd motion estimation are also presented.

  10. Statistically Efficient Methods for Pitch and DOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods.

  11. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    Science.gov (United States)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  12. Petrifilm rapid S. aureus Count Plate method for rapid enumeration of Staphylococcus aureus in selected foods: collaborative study.

    Science.gov (United States)

    Silbernagel, K M; Lindberg, K G

    2001-01-01

    A rehydratable dry-film plating method for Staphylococcus aureus in foods, the 3M Petrifilm Rapid S. aureus Count Plate method, was compared with AOAC Official Method 975.55 (Staphylococcus aureus in Foods). Nine foods (instant nonfat dried milk, dry seasoned vegetable coating, frozen hash browns, frozen cooked chicken patty, frozen ground raw pork, shredded cheddar cheese, fresh green beans, pasta filled with beef and cheese, and egg custard) were analyzed for S. aureus by 13 collaborating laboratories. For each food tested, the collaborators received 8 blind test samples consisting of a control sample and 3 levels of inoculated test sample, each in duplicate. The mean log counts for the methods were comparable for pasta filled with beef and cheese, frozen hash browns, cooked chicken patty, egg custard, frozen ground raw pork, and instant nonfat dried milk. The repeatability and reproducibility variances of the Petrifilm Rapid S. aureus Count Plate method were similar to those of the standard method.

  13. A new method for rapid determination of carbohydrate and total carbon concentrations using UV spectrophotometry.

    Science.gov (United States)

    Albalasmeh, Ammar A; Berhe, Asmeret Asefaw; Ghezzehei, Teamrat A

    2013-09-12

    A new UV spectrophotometry-based method for determining the concentration and carbon content of carbohydrate solutions was developed. The method depends on the inherent UV absorption of the hydrolysis byproducts of carbohydrates formed by reaction with concentrated sulfuric acid (furfural derivatives). The proposed method is a major improvement over the widely used Phenol-Sulfuric Acid method developed by DuBois, Gilles, Hamilton, Rebers, and Smith (1956). In the old method, furfural is allowed to develop color by reaction with phenol and its concentration is detected by visible light absorption. Here we present a method that eliminates the coloration step and avoids the health and environmental hazards associated with phenol use. In addition, avoiding this step was shown to improve measurement accuracy while significantly reducing waiting time prior to the absorbance reading. The carbohydrates whose concentrations and carbon content can be reliably estimated with this new rapid Sulfuric Acid-UV technique include monosaccharides, disaccharides and polysaccharides of very high molecular weight. Copyright © 2013 Elsevier Ltd. All rights reserved.
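    In practice, either variant of the assay ends with a linear standard curve: fit absorbance against known concentrations, then invert the line for unknowns. The standards below are invented illustrative values, not data from the paper.

```python
import numpy as np

def fit_standard_curve(conc, absorbance):
    """Least-squares calibration line A = m*c + b."""
    m, b = np.polyfit(conc, absorbance, 1)
    return m, b

def concentration(a_sample, m, b):
    """Invert the calibration line for an unknown sample."""
    return (a_sample - b) / m

# Hypothetical glucose standards (mg/L) and UV absorbances of the furfural
# derivatives formed after the sulfuric acid step
stds = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
absb = np.array([0.02, 0.21, 0.40, 0.61, 0.79])
m, b = fit_standard_curve(stds, absb)
c_unknown = concentration(0.50, m, b)   # mg/L
```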

  14. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available In a previous publication we suggested a method for estimating the Bit Error Rate (BER) of a digital communications system as an alternative to the classical Monte Carlo (MC) simulation. That method was based on estimating the probability density function (pdf) of the soft observed samples, using the kernel method for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model instead. The Expectation-Maximisation algorithm is used to estimate the parameters of the mixture, and the optimal number of Gaussians is chosen using mutual information theory. An analytical expression for the BER then follows directly from the estimated parameters of the Gaussian Mixture. Simulation results are presented comparing the three methods: Monte Carlo, kernel, and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that it achieves attractive performance compared with conventional MC or kernel-aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
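    As a rough illustration of the idea (a single-Gaussian special case, not the authors' full EM-fitted mixture), the BER can be read off analytically from a pdf fitted to the soft samples:

```python
import math
import random

def gaussian_tail_ber(samples):
    # Fit a single Gaussian to soft observations of the +1 symbol
    # and return the analytic tail probability P(sample < 0),
    # i.e. the probability of a sign error.
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    sigma = math.sqrt(var)
    return 0.5 * math.erfc(mu / (sigma * math.sqrt(2)))

random.seed(0)
# Simulated soft outputs for the +1 symbol at an SNR where mu/sigma = 2;
# the true tail probability is Q(2), about 0.0228
obs = [1.0 + random.gauss(0.0, 0.5) for _ in range(20000)]
ber = gaussian_tail_ber(obs)
```

    A mixture of several Gaussians, fitted by EM, generalises the same closed-form tail computation to multimodal soft-output distributions.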

  15. Coalescent methods for estimating phylogenetic trees.

    Science.gov (United States)

    Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V

    2009-10-01

    We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, and supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and the computational time. In addition, further elaboration of even the simplest coalescent models will be required to incorporate commonly encountered issues such as deviation from the molecular clock, gene flow and other genetic forces.

  16. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    The applicability of the bootstrap method is investigated for estimating the statistical error of the Feynman-α method, one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be estimated simply from multiple measurements of reactor noise; however, this requires additional measurement time. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of results obtained by the Feynman-α method can be estimated as the statistical error from only a single measurement of reactor noise. In order to validate the proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in the nuclear fuel, at the Kyoto University Criticality Assembly. Through this measurement, it is confirmed that the bootstrap method is applicable for approximately estimating the statistical error of measurement results obtained by the Feynman-α method. (author)
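    The core resampling step can be sketched as follows (a generic bootstrap of a statistic over one measured series; the gate-count statistics of the actual Feynman-α analysis are not reproduced here):

```python
import random
import statistics

def bootstrap_error(data, stat, n_boot=1000, seed=1):
    # Resample the single measured series with replacement and take
    # the spread of the statistic over resamples as its statistical
    # error, avoiding repeated physical measurements.
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return statistics.stdev(reps)

random.seed(2)
# Stand-in for counts from one reactor-noise measurement
counts = [random.gauss(100.0, 10.0) for _ in range(200)]
se_mean = bootstrap_error(counts, lambda xs: sum(xs) / len(xs))
# approximates the usual sigma/sqrt(n) without repeating the run
```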

  17. Evaluation of rapid methods for in-situ characterization of organic contaminant load and biodegradation rates in winery wastewater.

    Science.gov (United States)

    Carvallo, M J; Vargas, I; Vega, A; Pizarro, G; Pizarr, G; Pastén, P

    2007-01-01

    Rapid methods for the in-situ evaluation of the organic load have recently been developed and successfully implemented in municipal wastewater treatment systems. Their direct application to winery wastewater treatment is questionable due to substantial differences between municipal and winery wastewater. We critically evaluate the use of UV-VIS spectrometry, buffer capacity testing (BCT), and respirometry as rapid methods to determine organic load and biodegradation rates of winery wastewater. We tested three types of samples: actual and treated winery wastewater, synthetic winery wastewater, and samples from a biological batch reactor. Not surprisingly, respirometry gave a good estimation of biodegradation rates for substrates of different complexities, whereas UV-VIS and BCT did not provide a quantitative measure of the easily degradable sugars and ethanol, typically the main components of the COD in the influent. However, our results strongly suggest that UV-VIS and BCT can be used to identify and estimate the concentration of complex substrates in the influent and of soluble microbial products (SMP) in biological reactors and their effluent. Furthermore, the integration of UV-VIS spectrometry, BCT, and mathematical modeling was able to differentiate between the two components of SMPs: substrate utilization associated products (UAP) and biomass associated products (BAP). Since the effluent COD in biologically treated wastewaters is composed primarily of SMPs, the quantitative information given by these techniques may be used for plant control and optimization.

  18. SIMPLE METHOD FOR ESTIMATING POLYCHLORINATED BIPHENYL CONCENTRATIONS ON SOILS AND SEDIMENTS USING SUBCRITICAL WATER EXTRACTION COUPLED WITH SOLID-PHASE MICROEXTRACTION. (R825368)

    Science.gov (United States)

    A rapid method for estimating polychlorinated biphenyl (PCB) concentrations in contaminated soils and sediments has been developed by coupling static subcritical water extraction with solid-phase microextraction (SPME). Soil, water, and internal standards are placed in a seale...

  19. Examination of an indicative tool for rapidly estimating viable organism abundance in ballast water

    Science.gov (United States)

    Vanden Byllaardt, Julie; Adams, Jennifer K.; Casas-Monroy, Oscar; Bailey, Sarah A.

    2018-03-01

    Regulatory discharge standards stipulating a maximum allowable number of viable organisms in ballast water have led to a need for rapid, easy and accurate compliance assessment tools and protocols. Some potential tools presume that organisms present in ballast water samples display the same characteristics of life as the native community (e.g. rates of fluorescence). This presumption may not prove true, particularly when ships' ballast tanks present a harsh environment and long transit times, negatively impacting organism health. Here, we test the accuracy of a handheld pulse amplitude modulated (PAM) fluorometer, the Hach BW680, for detecting photosynthetic protists at concentrations above or below the discharge standard (< 10 cells·ml⁻¹) in comparison to microscopic counts using fluorescein diacetate (FDA) as a viability probe. Testing was conducted on serial dilutions of freshwater harbour samples in the lab and on in situ untreated ballast water samples originating from marine, freshwater and brackish sources, utilizing three preprocessing techniques to target organisms in the size range of ≥ 10 and < 50 μm. The BW680 numeric estimates were in agreement with microscopic counts when analyzing freshly collected harbour water at all but the lowest concentrations (< 38 cells·ml⁻¹). Chi-square tests determined that error is not independent of preprocessing method: using the filtrate method or unfiltered water, in addition to refining the conversion factor from raw fluorescence to cell size, can decrease the grey area where exceedance of the discharge standard cannot be measured with certainty (at least for the studied populations). When examining in situ ballast water, the BW680 detected significantly fewer viable organisms than microscopy, possibly due to factors such as organism size or ballast water age. Assuming both the BW680 and microscopy with FDA stain were measuring fluorescence and enzymatic activity/membrane integrity correctly, the observed discrepancy

  20. Rapid separation method for {sup 237}Np and Pu isotopes in large soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, Sherrod L., E-mail: sherrod.maxwell@srs.go [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States); Culligan, Brian K.; Noyes, Gary W. [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States)

    2011-07-15

    A new rapid method for the determination of {sup 237}Np and Pu isotopes in soil and sediment samples has been developed at the Savannah River Site Environmental Lab (Aiken, SC, USA) that can be used for large soil samples. The new soil method utilizes an acid leaching method, iron/titanium hydroxide precipitation, a lanthanum fluoride soil matrix removal step, and a rapid column separation process with TEVA Resin. The large soil matrix is removed easily and rapidly using these two simple precipitations with high chemical recoveries and effective removal of interferences. Vacuum box technology and rapid flow rates are used to reduce analytical time.

  1. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
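    For orientation, the classical Hill estimator that the bootstrap version builds on can be sketched as follows (illustrative, using a synthetic Pareto sample rather than the correlation sums of the paper):

```python
import math
import random

def hill_estimator(data, k):
    # Classical Hill estimator of the tail index, computed from the
    # k largest order statistics: the reciprocal of the mean
    # log-excess over the k-th largest value.
    xs = sorted(data, reverse=True)
    mean_log_excess = sum(math.log(xs[i] / xs[k]) for i in range(k)) / k
    return 1.0 / mean_log_excess

random.seed(3)
# Pareto(alpha = 2) sample via inverse-CDF sampling: X = U^(-1/alpha)
sample = [random.random() ** -0.5 for _ in range(5000)]
alpha_hat = hill_estimator(sample, k=500)  # should be close to 2
```

    The paper's estimator applies a bootstrap resampling layer on top of this idea to estimate the correlation exponent of a stationary ergodic sequence.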

  2. Priors engaged in long-latency responses to mechanical perturbations suggest a rapid update in state estimation.

    Directory of Open Access Journals (Sweden)

    Frédéric Crevecoeur

    Full Text Available In every motor task, our brain must handle external forces acting on the body. For example, riding a bike on cobblestones or skating on an irregular surface requires us to respond appropriately to external perturbations. In these situations, motor predictions cannot help anticipate the motion of the body induced by external factors, and direct use of delayed sensory feedback will tend to generate instability. Here, we show that to solve this problem the motor system uses a rapid sensory prediction to correct the estimated state of the limb. We used a postural task with mechanical perturbations to address whether sensory predictions were engaged in upper-limb corrective movements. Subjects altered their initial motor response in ∼60 ms, depending on the expected perturbation profile, suggesting the use of an internal model, or prior, in this corrective process. Further, we found trial-to-trial changes in corrective responses indicating a rapid update of these perturbation priors. We used a computational model based on Kalman filtering to show that the response modulation was compatible with a rapid correction of the estimated state engaged in the feedback response. Such a process may allow us to handle external disturbances encountered in virtually every physical activity, which is likely an important feature of skilled motor behaviour.
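    The state-correction step of such a Kalman-filter model can be sketched in scalar form (a minimal illustration with hypothetical numbers, not the paper's fitted model):

```python
def kalman_update(x_pred, p_pred, z, r):
    # One scalar Kalman step: blend the internally predicted limb
    # state with (delayed) sensory feedback, weighted by the gain.
    k = p_pred / (p_pred + r)           # gain: trust in the feedback
    x_new = x_pred + k * (z - x_pred)   # corrected state estimate
    p_new = (1.0 - k) * p_pred          # reduced uncertainty
    return x_new, p_new

# The prior predicts the limb at 0.0; low-noise feedback reads 0.1,
# so the estimate moves most of the way toward the feedback
x, p = kalman_update(0.0, 0.04, 0.1, 0.01)
```

    Updating the perturbation prior from trial to trial corresponds to adjusting the predicted state and its variance before each such correction.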

  3. Evaluating methods for estimating home ranges using GPS collars: A comparison using proboscis monkeys (Nasalis larvatus).

    Science.gov (United States)

    Stark, Danica J; Vaughan, Ian P; Ramirez Saldivar, Diana A; Nathan, Senthilvel K S S; Goossens, Benoit

    2017-01-01

    The development of GPS tags for tracking wildlife has revolutionised the study of home ranges, habitat use and behaviour. Concomitantly, there have been rapid developments in methods for estimating habitat use from GPS data. In combination, these changes can cause challenges in choosing the best methods for estimating home ranges. In primatology, this issue has received little attention, as there have been few GPS collar-based studies to date. However, as advancing technology is making collaring studies more feasible, there is a need for the analysis to advance alongside the technology. Here, using a high-quality GPS collaring data set from 10 proboscis monkeys (Nasalis larvatus), we aimed to: 1) compare home range estimates from the most commonly used method in primatology, the grid-cell method, with three recent methods designed for large and/or temporally correlated GPS data sets; 2) evaluate how well these methods identify known physical barriers (e.g. rivers); and 3) test the robustness of the different methods to data containing either less frequent or random losses of GPS fixes. Biased random bridges had the best overall performance, combining a high level of agreement between the raw data and estimated utilisation distribution with a relatively low sensitivity to reduced fix frequency or loss of data. It estimated the home range of proboscis monkeys to be 24-165 ha (mean 80.89 ha). The grid-cell method and approaches based on local convex hulls had some advantages, including simplicity and excellent barrier identification, respectively, but lower overall performance. With the most suitable model, or combination of models, it is possible to understand more fully the patterns, causes, and potential consequences that disturbances could have on an animal, and accordingly be used to assist in the management and restoration of degraded landscapes.
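    For reference, the grid-cell method mentioned above reduces to counting occupied cells; a minimal sketch (cell size and coordinates are arbitrary examples):

```python
def grid_cell_home_range(fixes, cell=50.0):
    # Home range as the total area of all grid cells that contain
    # at least one GPS fix; coordinates are assumed planar (e.g. UTM).
    occupied = {(int(x // cell), int(y // cell)) for x, y in fixes}
    return len(occupied) * cell * cell  # area in the fixes' units squared

# Four fixes in metres; the first two fall in the same 50 m cell,
# so three cells are occupied -> 3 * 2500 = 7500 m^2
fixes = [(10.0, 10.0), (12.0, 18.0), (60.0, 10.0), (140.0, 90.0)]
area = grid_cell_home_range(fixes)
```

    The sensitivity of this count to cell size and fix frequency is one reason the study favours estimators such as biased random bridges.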

  4. Evaluating methods for estimating home ranges using GPS collars: A comparison using proboscis monkeys (Nasalis larvatus).

    Directory of Open Access Journals (Sweden)

    Danica J Stark

    Full Text Available The development of GPS tags for tracking wildlife has revolutionised the study of home ranges, habitat use and behaviour. Concomitantly, there have been rapid developments in methods for estimating habitat use from GPS data. In combination, these changes can cause challenges in choosing the best methods for estimating home ranges. In primatology, this issue has received little attention, as there have been few GPS collar-based studies to date. However, as advancing technology is making collaring studies more feasible, there is a need for the analysis to advance alongside the technology. Here, using a high-quality GPS collaring data set from 10 proboscis monkeys (Nasalis larvatus), we aimed to: 1) compare home range estimates from the most commonly used method in primatology, the grid-cell method, with three recent methods designed for large and/or temporally correlated GPS data sets; 2) evaluate how well these methods identify known physical barriers (e.g. rivers); and 3) test the robustness of the different methods to data containing either less frequent or random losses of GPS fixes. Biased random bridges had the best overall performance, combining a high level of agreement between the raw data and estimated utilisation distribution with a relatively low sensitivity to reduced fix frequency or loss of data. It estimated the home range of proboscis monkeys to be 24-165 ha (mean 80.89 ha). The grid-cell method and approaches based on local convex hulls had some advantages, including simplicity and excellent barrier identification, respectively, but lower overall performance. With the most suitable model, or combination of models, it is possible to understand more fully the patterns, causes, and potential consequences that disturbances could have on an animal, and accordingly be used to assist in the management and restoration of degraded landscapes.

  5. Novel rapid method for the characterisation of polymeric sugars from macroalgae.

    Science.gov (United States)

    Spicer, S E; Adams, J M M; Thomas, D S; Gallagher, J A; Winters, Ana L

    2017-01-01

    Laminarins are storage polysaccharides found only in brown seaweeds, specifically the Laminariales and Fucales. Laminarin has been shown to have anti-apoptotic and anti-tumoural activities and is considered a nutraceutical component that can positively influence human health. The structure is species dependent, generally composed of linear β(1-3) glucans with intrachain β(1-6) branching, and varies according to harvest season and environmental factors. Current methods for analysis of molar mass and degree of polymerisation (DP) are technically demanding and not widely available. Here, we present a simple, inexpensive method which enables rapid analysis of laminarins from macroalgal biomass using high-performance anion exchange chromatography with pulsed amperometric detection (HPAEC-PAD) without the need for hydrolysis or further processing. This is based on the linear relationship observed between log10(DP) and retention time following separation of laminarins on a CarboPac PA-100 column (Dionex), using standard 1,3-β-d-gluco-oligosaccharides ranging in DP from 2 to 8. This method was applied to analyse laminarin oligomers in extracts from different species harvested within the intertidal zone on Welsh rocky shores, containing laminarin polymers with different ranges of DP. The degree of polymerisation and extrapolated molar mass agreed well with values estimated by LC-ESI/MSⁿ analysis and those reported in the literature.
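    The calibration idea, a linear fit of log10(DP) against retention time followed by inversion for an unknown peak, can be sketched as follows (the retention times below are hypothetical, not the paper's data):

```python
import math

def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical retention times (min) for the DP 2-8 standards
dps = [2, 3, 4, 5, 6, 7, 8]
rts = [5.1, 7.0, 8.4, 9.5, 10.3, 11.1, 11.7]
slope, intercept = fit_line(rts, [math.log10(d) for d in dps])
# Read the DP of an unknown peak at 9.0 min off the calibration line
dp_unknown = 10 ** (slope * 9.0 + intercept)
```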

  6. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  7. A rapid technique for estimating the depth and width of a two-dimensional plate from self-potential data

    International Nuclear Information System (INIS)

    Mehanee, Salah; Smith, Paul D; Essa, Khalid S

    2011-01-01

    Rapid techniques for self-potential (SP) data interpretation are of prime importance in engineering and exploration geophysics. Estimating the parameters (e.g. depth, width) of ore bodies has also been of paramount concern in mineral prospecting. In many cases, it is useful to assume that the SP anomaly is due to an ore body of simple geometric shape and to use the data to determine its parameters. In light of this, we describe a rapid approach to determine the depth and horizontal width of a two-dimensional plate from the SP anomaly. The rationale behind the scheme proposed in this paper is that, unlike the two-dimensional (2D) and three-dimensional (3D) SP rigorous source current inversions, it demands neither a priori information about the subsurface resistivity distribution nor high computational resources. We apply the second-order moving average operator to the SP anomaly to remove the unwanted (regional) effect, represented by up to a third-order polynomial, using filters of successive window lengths. By defining a function F at a fixed window length (s) in terms of the filtered anomaly computed at two points symmetrically distributed about the origin point of the causative body, the depth (z) corresponding to each half-width (w) is estimated by solving a nonlinear equation of the form ξ(s, w, z) = 0. The estimated depths are then plotted against their corresponding half-widths on a graph, giving a continuous curve for this window length. This procedure is then repeated for each available window length. The depth and half-width solution of the buried structure is read at the common intersection of these curves. The improvement of this method over the published first-order moving average technique for SP data is demonstrated on a synthetic data set.
It is then verified on noisy synthetic data, complicated structures and successfully applied to three field examples for mineral exploration and we have found that the estimated depth is in good agreement with
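    A simplified stand-in for the moving-average filtering step can be sketched as a second-difference residual, which annihilates a linear regional trend (the paper's operator handles regional polynomials up to third order):

```python
def second_diff_residual(v, s):
    # Second-difference residual with window half-length s: any
    # linear regional trend cancels exactly, leaving only the
    # local anomaly contribution.
    return [v[i - s] - 2.0 * v[i] + v[i + s]
            for i in range(s, len(v) - s)]

# A linear regional trend plus a localized SP low over the source
signal = [0.5 * i for i in range(21)]
signal[10] += -8.0
residual = second_diff_residual(signal, 2)
# the trend vanishes; only points near the anomaly remain nonzero
```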

  8. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labour Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The survey selects, according to pre-established sampling criteria, a certain number of dwellings across the nation and records the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to produce the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to

  9. Recent applications for rapid estimation of earthquake shaking and losses with ELER Software

    International Nuclear Information System (INIS)

    Demircioglu, M.B.; Erdik, M.; Kamer, Y.; Sesetyan, K.; Tuzun, C.

    2012-01-01

    A methodology and software package entitled Earthquake Loss Estimation Routine (ELER) was developed for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region. The work was carried out under the Joint Research Activity-3 (JRA3) of the EC FP6 project entitled Network of Research Infrastructures for European Seismology (NERIES). The ELER methodology involves: 1) finding the most likely location of the earthquake source using a regional seismo-tectonic database; 2) estimating the spatial distribution of selected ground motion parameters at engineering bedrock through region-specific ground motion prediction models, bias-correcting the estimates with strong ground motion data where available; 3) estimating the spatial distribution of site-corrected ground motion parameters using a regional geology database and appropriate amplification models; and 4) estimating the losses and their uncertainties at various orders of sophistication (buildings, casualties). The multi-level methodology developed for real-time loss estimation is capable of incorporating regional variability and the sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships coded into ELER. The present paper briefly describes the ELER methodology and provides an example application for the recent major earthquake that hit the Van province in the east of Turkey on 23 October 2011 with a moment magnitude (Mw) of 7.2. For this earthquake, the Kandilli Observatory and Earthquake Research Institute (KOERI) provided almost real-time estimates of building damage and casualty distribution using ELER. (author)

  10. On the Methods for Estimating the Corneoscleral Limbus.

    Science.gov (United States)

    Jesus, Danilo A; Iskander, D Robert

    2017-08-01

    The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography, and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and against manual image annotation. The estimates of corneoscleral limbus radius were characterized by high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric analysis of variance (ANOVA) test, p < 0.05). Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only-based techniques. The experimental findings show that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.

  11. Comparison of methods for estimating carbon in harvested wood products

    International Nuclear Information System (INIS)

    Claudia Dias, Ana; Louro, Margarida; Arroja, Luis; Capela, Isabel

    2009-01-01

    There is a great diversity of methods for estimating carbon storage in harvested wood products (HWP) and, therefore, it is extremely important to agree internationally on the methods to be used in national greenhouse gas inventories. This study compares three methods for estimating carbon accumulation in HWP: the method suggested by Winjum et al. (Winjum method), the tier 2 method proposed by the IPCC Good Practice Guidance for Land Use, Land-Use Change and Forestry (GPG LULUCF) (GPG tier 2 method), and a method consistent with GPG LULUCF tier 3 methods (GPG tier 3 method). Carbon accumulation in HWP was estimated for Portugal under three accounting approaches: stock-change, production and atmospheric-flow. The uncertainty in the estimates was also evaluated using Monte Carlo simulation. The estimates of carbon accumulation in HWP obtained with the Winjum method differed substantially from those obtained with the other methods, because the Winjum method tends to overestimate carbon accumulation under the stock-change and production approaches and to underestimate it under the atmospheric-flow approach. The estimates of carbon accumulation provided by the two GPG methods were similar, but the GPG tier 3 method reported the lowest uncertainties. For the GPG methods, the atmospheric-flow approach produced the largest estimates of carbon accumulation, followed by the production approach and the stock-change approach, in that order. A sensitivity analysis showed that using the "best" available data on production and trade of HWP produces larger estimates of carbon accumulation than using data from the Food and Agriculture Organization. (author)
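    The Monte Carlo uncertainty evaluation follows the usual pattern of drawing each uncertain input from its distribution and propagating it through the accounting model; a minimal sketch with a hypothetical one-term stock model and made-up parameters (not the study's inventory data):

```python
import random

def mc_carbon_uncertainty(prod, frac_remaining, n=20000, seed=4):
    # Draw each uncertain input from a normal distribution
    # (mean, standard deviation) and propagate through the
    # illustrative model: stock = production * fraction_remaining.
    rng = random.Random(seed)
    draws = [rng.gauss(*prod) * rng.gauss(*frac_remaining)
             for _ in range(n)]
    mean = sum(draws) / n
    sd = (sum((d - mean) ** 2 for d in draws) / (n - 1)) ** 0.5
    return mean, sd

# Hypothetical inputs: production 100 ± 10 Gg C, fraction 0.70 ± 0.05
mean, sd = mc_carbon_uncertainty((100.0, 10.0), (0.70, 0.05))
# the output spread reflects both input uncertainties combined
```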

  12. A novel method of rapidly modeling optical properties of actual photonic crystal fibres

    International Nuclear Information System (INIS)

    Li-Wen, Wang; Shu-Qin, Lou; Wei-Guo, Chen; Hong-Lei, Li

    2010-01-01

    The flexible structure of photonic crystal fibre not only offers novel optical properties but also brings difficulties in maintaining the fibre structure during the fabrication process, which inevitably causes the optical properties of the resulting fibre to deviate from the designed ones. Therefore, a method of evaluating the optical properties of the actual fibre is necessary for practical applications. Up to now, the methods employed to measure the properties of an actual photonic crystal fibre have often required long fibre samples or complex, expensive equipment. To our knowledge, there are few studies on modelling an actual photonic crystal fibre and evaluating its properties rapidly. In this paper, a novel method, based on a combination of digital image processing and the finite element method, is proposed to rapidly model the optical properties of the actual photonic crystal fibre. Two kinds of photonic crystal fibres made by Crystal Fiber A/S are modelled. The numerical results confirm that the proposed method is simple, rapid and accurate for evaluating the optical properties of the actual photonic crystal fibre without requiring complex equipment. (rapid communication)

  13. Methods for estimating residential building energy consumption by application of artificial intelligence; Methode d'estimation energetique des batiments d'habitation basee sur l'application de l'intelligence artificielle

    Energy Technology Data Exchange (ETDEWEB)

    Kajl, S.; Roberge, M-A. [Quebec Univ., Ecole de technologie superieure, Montreal, PQ (Canada)

    1999-02-01

    A method for estimating the energy requirements of residential buildings five to twenty-five stories in height using artificial intelligence techniques is proposed. In developing this technique, the specified prerequisites were rapid execution, the ability to generate a wide range of results (including total energy consumption, power demand, and heating and cooling consumption), and accuracy comparable to that of a detailed building energy simulation program. The proposed method encompasses (1) the creation of various databases, including classification of the parameters used in the energy simulation, modelling with the Department of Energy (DOE)-2 software and validation of the DOE-2 models; (2) application of neural networks, including training the network and validating its learning; (3) design of an energy estimate assessment (EEA) system for residential buildings; and (4) validation of the EEA system. The system has been developed in the MATLAB software environment, specifically for the climate of the Ottawa region. For use under different climatic conditions, appropriate adjustments need to be made to the heating and cooling consumption. 12 refs., tabs., figs., 2 appendices.

  14. Reevaluation of nasal swab method for dose estimation at nuclear emergency accident

    International Nuclear Information System (INIS)

    Yamada, Yuji; Fukutsu, Kumiko; Kurihara, Osamu; Akashi, Makoto

    2008-01-01

    The ICRP Publication 66 human respiratory tract model has been used extensively in exposure dose assessment. It is well known that the respiratory deposition efficiency of inhaled aerosol and its deposition region strongly depend on the particle size. In most exposure accidents, however, the size of the inhaled aerosol is unknown. Thus two default aerosol sizes, 5 μm AMAD for workers and 1 μm AMAD for the public, are given as representative in the ICRP model, but neither size is linked directly to the maximum dose. In this study, the most hazardous size for health effects and how to estimate the intake activity were discussed from the viewpoint of emergency medicine. In exposure accidents involving alpha emitters such as Pu-239, lung monitoring and bioassay measurements are not the best methods for rapid estimation with high sensitivity, so the applicability of the nasal swab method has been investigated. A computer software package, LUDEP, was used to calculate respiratory deposition. It showed that the effective dose per unit intake activity strongly depends on the inhaled aerosol size. For Pu-239 dioxide aerosols, it was confirmed that the maximum of the dose conversion factor occurs around 0.01 μm, meaning that 0.01 μm is the most hazardous size in an exposure accident involving Pu-239. From analysis of the relationship between AI and ET1 deposition, it was found that the dose conversion factor from the activity deposited in the ET1 region is also affected by the aerosol size. Use of the ICRP default size in the nasal swab method might therefore cause obvious underestimation of the intake activity. Dose estimation based on the nasal swab method is possible, on the safe side, at a nuclear emergency, and its quantitative availability should be reevaluated for emergency medicine, considering the administration of chelating agents. (author)
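    The back-calculation underlying the nasal swab method is a division by the size-dependent ET1 deposition fraction, which is exactly why an assumed default size can mislead; a minimal sketch (the deposition fractions below are purely illustrative, not ICRP values):

```python
def intake_from_swab(swab_activity_bq, et1_deposition_fraction):
    # Back-calculate inhaled activity from a nasal swab result,
    # assuming the swab recovers the activity deposited in ET1.
    return swab_activity_bq / et1_deposition_fraction

# The same swab reading implies very different intakes depending on
# the particle size assumed for the ET1 fraction (values illustrative)
intake_coarse = intake_from_swab(10.0, 0.34)     # e.g. a coarse aerosol
intake_ultrafine = intake_from_swab(10.0, 0.05)  # e.g. an ultrafine aerosol
```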

  15. Erratum: Hansen, Lund, Sangill, and Jespersen. Experimentally and Computationally Fast Method for Estimation of a Mean Kurtosis. Magnetic Resonance in Medicine 69:1754–1760 (2013)

    DEFF Research Database (Denmark)

    Hansen, Brian; Lund, Torben Ellegaard; Sangill, Ryan

    2014-01-01

    PURPOSE: Results from several recent studies suggest the magnetic resonance diffusion-derived metric mean kurtosis (MK) to be a sensitive marker for tissue pathology; however, lengthy acquisition and postprocessing time hamper further exploration. The purpose of this study is to introduce...... and evaluate a new MK metric and a rapid protocol for its estimation. METHODS: The protocol requires acquisition of 13 standard diffusion-weighted images, followed by linear combination of log diffusion signals, thus avoiding nonlinear optimization. The method was evaluated on an ex vivo rat brain...... for full human brain coverage, with a postprocessing time of a few seconds. Scan-rescan reproducibility was comparable with MK. CONCLUSION: The framework offers a robust and rapid method for estimating MK, with a protocol easily adapted on commercial scanners, as it requires only minimal modification...

  16. Evaluation of non cyanide methods for hemoglobin estimation

    Directory of Open Access Journals (Sweden)

    Vinaya B Shah

    2011-01-01

    Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantage is the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the disposal of large volumes of waste reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: the alkaline hematin detergent (AHD-575) method using Triton X-100 as lyser, and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a light-emitting-diode (LED) based colorimeter. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. The statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ±0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation between these methods was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous, did not affect the reliability of the results, and cost less than those of the HiCN method. Conclusions: Non-cyanide methods of Hb estimation offer safe and high-quality Hb estimation, should prove useful for routine laboratory use, and are easily incorporated in hemoglobinometers.

  17. Development of iodimetric redox method for routine estimation of ascorbic acid from fresh fruit and vegetables

    International Nuclear Information System (INIS)

    Munir, M.; Baloch, A. K.; Khan, W. A.; Ahmad, F.; Jamil, M.

    2013-01-01

    The iodimetric method (Im) was developed for rapid estimation of ascorbic acid in fresh fruit and vegetables. Its efficiency was compared with the standard dye method (Dm) using a variety of model solutions and aqueous extracts from fresh fruit and vegetables of different colors. The Im gave consistently accurate and precise results for both colorless and colored model solutions and for fruit/vegetable extracts, with standard deviations (Stdev) in the ranges ±0.013 to ±0.405 and ±0.019 to ±0.428, respectively, and no significant difference between replicates. The Dm also worked satisfactorily for colorless model solutions and extracts (Stdev range ±0.235 to ±0.309) but produced unsatisfactory results (±0.464 to ±3.281) for their colored counterparts. Severe discrepancies/overestimates piled up (52% to 197%) when estimating the nutrient from high (3.0 mg/10 mL) down to low (0.5 mg/10 mL) concentration levels, respectively. On the basis of its precision and reliability, the Im technique is suggested for adoption in general laboratories for routine estimation of ascorbic acid in fruit and vegetables of any shade. (author)

  18. Development of a novel and simple method to evaluate disintegration of rapidly disintegrating tablets.

    Science.gov (United States)

    Hoashi, Yohei; Tozuka, Yuichi; Takeuchi, Hirofumi

    2013-01-01

    The purpose of this study was to develop and test a novel and simple method for evaluating the disintegration time of rapidly disintegrating tablets (RDTs) in vitro, since the conventional disintegration test described in the pharmacopoeia produces poor results owing to the difference between its environmental conditions and those of an actual oral cavity. Six RDTs prepared in our laboratory and 5 types of commercial RDTs were used as model formulations. Using our original apparatus, a good correlation was observed between in vivo and in vitro disintegration times when the height from which the solution was dropped was adjusted to 8 cm and the weight of the load to 10 or 20 g. Properties of RDTs, such as the pattern of their disintegration process, can be assessed by varying the load. These findings confirmed that the proposed in vitro disintegration test apparatus is an excellent method for estimating the disintegration time and disintegration profile of RDTs.

  19. Estimating bacterial diversity for ecological studies: methods, metrics, and assumptions.

    Directory of Open Access Journals (Sweden)

    Julia Birtel

    Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA gene to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice, and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity, and rank-abundance distributions. We used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between the 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence-similarity thresholds during sequence clustering, and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and by ARISA. We conclude that the selection of the 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions, and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques.

  20. A novel ultra-performance liquid chromatography hyphenated with quadrupole time of flight mass spectrometry method for rapid estimation of total toxic retronecine-type of pyrrolizidine alkaloids in herbs without requiring corresponding standards.

    Science.gov (United States)

    Zhu, Lin; Ruan, Jian-Qing; Li, Na; Fu, Peter P; Ye, Yang; Lin, Ge

    2016-03-01

    Nearly 50% of naturally occurring pyrrolizidine alkaloids (PAs) are hepatotoxic, and the majority of hepatotoxic PAs are retronecine-type PAs (RET-PAs). However, quantitative measurement of PAs in herbs/foodstuffs is often difficult because most reference PAs are unavailable. In this study, a rapid, selective, and sensitive UHPLC-QTOF-MS method was developed for the estimation of RET-PAs in herbs without requiring corresponding standards. This method is based on our previously established characteristic and diagnostic mass fragmentation patterns and the use of retrorsine for calibration. The use of a single RET-PA (i.e., retrorsine) for calibration was justified by the high similarity, with no significant differences, of the calibration curves constructed from the peak areas of extracted ion chromatograms of the fragment ions at m/z 120.0813 or 138.0919 versus the concentrations of five representative RET-PAs. The developed method was successfully applied to measure the total content of structurally diverse toxic RET-PAs in fifteen potential PA-containing herbs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

    In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have compared different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions
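
As an illustration of the quantity being modelled, the classical (Matheron) empirical semivariogram that such least-squares fits start from can be computed as below. This is a generic sketch, not the paper's code, and the lag/tolerance binning is a simple illustrative choice.

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """Matheron's classical estimator:
    gamma(h) = (1 / (2*|N(h)|)) * sum over pairs in N(h) of (z_i - z_j)^2,
    where N(h) is the set of point pairs separated by roughly h (within +/- tol)."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    # pairwise separation distances and squared value differences
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sqdiff = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)  # count each pair once
    dist, sqdiff = dist[iu], sqdiff[iu]
    gam = []
    for h in lags:
        in_bin = np.abs(dist - h) <= tol
        gam.append(0.5 * sqdiff[in_bin].mean() if in_bin.any() else np.nan)
    return np.array(gam)

# Four points on a line with a unit trend: gamma(1) = 0.5, gamma(2) = 2.0
gamma = empirical_semivariogram([[0.0], [1.0], [2.0], [3.0]],
                                [0.0, 1.0, 2.0, 3.0],
                                lags=[1.0, 2.0], tol=0.1)
```

A parametric model (spherical, exponential, etc.) would then be fitted to these binned values by least squares, or the parameters estimated directly by maximum likelihood as the paper compares.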

  2. Application of Rapid Prototyping Methods to High-Speed Wind Tunnel Testing

    Science.gov (United States)

    Springer, A. M.

    1998-01-01

    This study was undertaken in MSFC's 14-Inch Trisonic Wind Tunnel to determine whether rapid prototyping methods could be used in the design and manufacture of high-speed wind tunnel models for direct testing applications, and whether these methods would reduce model design/fabrication time and cost while providing models of high enough fidelity to yield adequate aerodynamic data and of sufficient strength to survive the test environment. The rapid prototyping methods used to construct wind tunnel models in a wing-body-tail configuration were: fused deposition modeling (FDM) using both ABS plastic and PEEK as build materials, stereolithography (SLA) using the photopolymer SL-5170, selective laser sintering (SLS) using glass-reinforced nylon, and laminated object manufacturing using plastic reinforced with glass and 'paper'. The study revealed good agreement among the SLA model, the metal model with an FDM-ABS nose, the metal model with an SLA nose, and the all-metal model for most operating conditions, while the FDM-ABS data diverged at higher loading conditions. Data from the initial SLS model showed poor agreement due to problems in post-processing, which resulted in a different configuration. A second SLS model was tested and showed relatively good agreement. It can be concluded that rapid prototyping models show promise in preliminary aerodynamic development studies at subsonic, transonic, and supersonic speeds.

  3. Incorporating indel information into phylogeny estimation for rapidly emerging pathogens

    Directory of Open Access Journals (Sweden)

    Suchard Marc A

    2007-03-01

    Background: Phylogenies of rapidly evolving pathogens can be difficult to resolve because of the small number of substitutions that accumulate in the short times since divergence. To improve resolution of such phylogenies we propose using insertion and deletion (indel) information in addition to substitution information. We accomplish this through joint estimation of alignment and phylogeny in a Bayesian framework, drawing inference using Markov chain Monte Carlo. Joint estimation of alignment and phylogeny sidesteps biases that stem from conditioning on a single alignment by taking into account the ensemble of near-optimal alignments. Results: We introduce a novel Markov chain transition kernel that improves computational efficiency by proposing non-local topology rearrangements and by block-sampling alignment and topology parameters. In addition, we extend our previous indel model to increase biological realism by placing indels preferentially on longer branches. We demonstrate the ability of indel information to increase phylogenetic resolution in examples drawn from within-host viral sequence samples. We also demonstrate the importance of taking alignment uncertainty into account when using such information. Finally, we show that codon-based substitution models can significantly affect alignment quality and phylogenetic inference by unrealistically forcing indels to begin and end between codons. Conclusion: These results indicate that indel information can improve phylogenetic resolution of recently diverged pathogens and that alignment uncertainty should be considered in such analyses.

  4. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...

  5. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry – BREALD-30

    Science.gov (United States)

    Junkes, Monica C.; Fraiz, Fabian C.; Sardenberg, Fernanda; Lee, Jessica Y.; Paiva, Saul M.; Ferreira, Fernanda M.

    2015-01-01

    Objective: The aim of the present study was to translate the Rapid Estimate of Adult Literacy in Dentistry into Brazilian Portuguese, perform its cross-cultural adaptation, and test the reliability and validity of this version. Methods: After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services, and three dental outcomes. Results: The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983; Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health, and the respondent's perception of his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception of his/her child's oral health remained significant in the multivariate analysis. Conclusion: The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil. PMID:26158724

  6. Gamma-H2AX biodosimetry for use in large scale radiation incidents: comparison of a rapid ‘96 well lyse/fix’ protocol with a routine method

    Directory of Open Access Journals (Sweden)

    Jayne Moquet

    2014-03-01

    Following a radiation incident, preliminary dose estimates made by γ-H2AX foci analysis can supplement the early triage of casualties based on clinical symptoms. Sample processing time is important when many individuals need to be assessed rapidly. A protocol was therefore developed for high sample throughput that requires less than 0.1 ml of blood, thus potentially enabling finger-prick sampling. The technique combines red blood cell lysis and leukocyte fixation in one step on a 96-well plate, in contrast to the routine protocol, where lymphocytes in larger blood volumes are typically separated by Ficoll density gradient centrifugation with subsequent washing and fixation steps. The rapid '96 well lyse/fix' method reduced the estimated processing time for 96 samples to about 4 h, compared with 15 h using the routine protocol. However, scoring 20 cells in 96 samples prepared by the rapid protocol took longer than for the routine method (3.1 versus 1.5 h at zero dose; 7.0 versus 6.1 h for irradiated samples). Similar foci yields were scored for both protocols, and consistent dose estimates were obtained for samples exposed to 0, 0.2, 0.6, 1.1, 1.2, 2.1 and 4.3 Gy of 250 kVp X-rays at 0.5 Gy/min and incubated for 2 h. Linear regression coefficients for estimated versus actual doses were 0.87 ± 0.06 (R² = 97.6%) for the routine method and 0.85 ± 0.05 (R² = 98.3%) for the lyse/fix method. The lyse/fix protocol can therefore facilitate high-throughput processing for γ-H2AX biodosimetry in large-scale radiation incidents, at the cost of somewhat longer foci scoring times.
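
The dose estimation step behind those regression coefficients is the inversion of a linear foci-versus-dose calibration. A minimal sketch, with hypothetical calibration numbers rather than the paper's:

```python
def dose_from_foci(mean_foci_per_cell, background_foci, foci_per_cell_per_gy):
    """Invert a linear calibration  y = background + slope * D  to get dose D (Gy)."""
    return (mean_foci_per_cell - background_foci) / foci_per_cell_per_gy

# Hypothetical calibration: 0.3 foci/cell background and a slope of
# 8 foci/cell/Gy at the 2 h sampling time used in the study.
d = dose_from_foci(8.3, 0.3, 8.0)   # 1.0 Gy
```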

  7. The Most Probable Limit of Detection (MPL) for rapid microbiological methods

    NARCIS (Netherlands)

    Verdonk, G.P.H.T.; Willemse, M.J.; Hoefs, S.G.G.; Cremers, G.; Heuvel, E.R. van den

    Classical microbiological methods nowadays have unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied in the clinical and food industries, but their implementation in the pharmaceutical industry is hampered by, for instance, stringent regulations on

  8. The most probable limit of detection (MPL) for rapid microbiological methods

    NARCIS (Netherlands)

    Verdonk, G.P.H.T.; Willemse, M.J.; Hoefs, S.G.G.; Cremers, G.; Heuvel, van den E.R.

    2010-01-01

    Classical microbiological methods nowadays have unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied in the clinical and food industries, but their implementation in the pharmaceutical industry is hampered by, for instance, stringent regulations on

  9. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  10. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    There have been many attempts to estimate a reliability function. At the ESReDA 20th seminar, a new nonparametric method was proposed, the main point of which is how to use censored data efficiently. Generally there are three approaches to estimating a reliability function nonparametrically, i.e., the Reduced Sample Method, the Actuarial Method, and the Product-Limit (PL) Method. These three methods have some limits, so we suggest an advanced method that reflects censoring information more efficiently. In many instances there is a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by differentiation. It is well known that the three methods generally used to estimate a reliability function nonparametrically have uniquely existing maximum likelihood estimators. The MLE of the new method is therefore derived in this study. The procedure to calculate the MLE is similar to that of the PL estimator; the difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not.
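
For reference, the Product-Limit (Kaplan-Meier) estimator that the new method modifies can be sketched as below. This implements the standard PL estimator with right-censored data, not the paper's reweighting scheme:

```python
def product_limit(times, events):
    """Kaplan-Meier (Product-Limit) reliability estimate.

    times  -- observed times (failure or censoring)
    events -- 1 if a failure was observed at that time, 0 if right-censored
    Returns a list of (t, S(t)) at each distinct failure time, using
    S(t) = product over failure times t_i <= t of (1 - d_i / n_i),
    where d_i failures occur at t_i and n_i units are still at risk.
    """
    data = sorted(zip(times, events))
    s, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)          # failures at t
        at_risk = sum(1 for tt, _ in data if tt >= t)    # still at risk at t
        if d > 0:
            s *= 1.0 - d / at_risk
            out.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)         # skip ties
    return out

# 5 units: failures at t = 1, 3, 4; censored observations at t = 2, 5
est = product_limit([1, 2, 3, 4, 5], [1, 0, 1, 1, 0])
# S(1) = 4/5; S(3) = (4/5)(2/3); S(4) = (4/5)(2/3)(1/2)
```

Note how the censored unit at t = 2 still counts in the risk set at t = 1 but drops out afterwards; this is the "efficient use of censored data" that the competing methods handle differently.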

  11. Methods to estimate the genetic risk

    International Nuclear Information System (INIS)

    Ehling, U.H.

    1989-01-01

    The estimation of the radiation-induced genetic risk to human populations is based on the extrapolation of results from animal experiments. Radiation-induced mutations are stochastic events: the probability of the event depends on the dose; the degree of the damage does not. There are two main approaches to making genetic risk estimates. One of these, termed the direct method, expresses risk in terms of the expected frequencies of genetic changes induced per unit dose. The other, referred to as the doubling dose method or the indirect method, expresses risk in relation to the observed incidence of genetic disorders now present in man. The advantage of the indirect method is that not only Mendelian mutations but also other types of genetic disorders can be quantified. The disadvantages of the method are the uncertainties in determining the current incidence of genetic disorders in humans and, in addition, in estimating the genetic component of congenital anomalies, anomalies expressed later in life, and constitutional and degenerative diseases. Using the direct method we estimated that 20-50 dominant radiation-induced mutations would be expected among 19,000 offspring born to parents exposed in Hiroshima and Nagasaki, but only a small proportion of these mutants would have been detected with the techniques used for the population study. These methods were used to predict the genetic damage from the fallout of the Chernobyl reactor accident in the vicinity of Southern Germany. The lack of knowledge of the interaction of chemicals with ionizing radiation, and the discrepancy between the high safety standards for radiation protection and the low level of knowledge for the toxicological evaluation of chemical mutagens, are emphasized. (author)

  12. A method of estimating log weights.

    Science.gov (United States)

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
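
The procedure described, a weighed-truckload density index applied to scaled log volume, can be sketched as follows. The numbers and the use of Smalian's end-area formula for volume are illustrative assumptions, not taken from the paper:

```python
def density_index(truckload_weight_lb, truckload_volume_ft3):
    """Local density index (lb/ft^3) from a weighed and scaled truckload."""
    return truckload_weight_lb / truckload_volume_ft3

def smalian_volume_ft3(length_ft, small_end_area_ft2, large_end_area_ft2):
    """Smalian's end-area formula for log volume (an assumed scaling rule)."""
    return length_ft * (small_end_area_ft2 + large_end_area_ft2) / 2.0

def estimated_log_weight_lb(density_lb_per_ft3, volume_ft3):
    """Estimated weight = species density index x measured log volume."""
    return density_lb_per_ft3 * volume_ft3

# Hypothetical: a 40,000 lb truckload scaling 800 ft^3 gives a 50 lb/ft^3
# index; a 32 ft log with 4 and 6 ft^2 end areas then weighs about 8,000 lb.
rho = density_index(40000.0, 800.0)
w = estimated_log_weight_lb(rho, smalian_volume_ft3(32.0, 4.0, 6.0))
```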

  13. Settling characteristics of nursery pig manure and nutrient estimation by the hydrometer method.

    Science.gov (United States)

    Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian

    2003-05-01

    The hydrometer method, which measures manure specific gravity and relates it to manure nutrient contents, was examined in this study. It was found that the estimation accuracy of this method might be improved if only manure from a single growth stage of pigs is used (e.g., the nursery pig manure used here). The total solids (TS) content of the test manure was well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.9944 and 0.9873, respectively. Good linear correlations were also observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.9836 and 0.9843, respectively). These correlations were much better than those reported by past researchers, in which lumped data for pigs at different growing stages were used. It may therefore be inferred that developing different linear equations for pigs at different ages should improve the accuracy of manure nutrient estimation using a hydrometer. The error of using the hydrometer method to estimate manure TN and TP was found to increase from ±10% to ±50% as the TN (from 700 ppm to 100 ppm) and TP (from 130 ppm to 30 ppm) concentrations in the manure decreased. The estimation errors for TN and TP may be larger than 50% if the total solids content is below 0.5%. In addition, rapid settling of solids has long been considered characteristic of swine manure; in this study, however, the settling behavior of nursery pig manure was quite poor, in that no conspicuous settling occurred after the manure was left undisturbed for 5 hours. This information has not been reported elsewhere in the literature and may need further research to verify.

  14. A Fast LMMSE Channel Estimation Method for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Zhou Wen

    2009-01-01

    A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require prior statistical knowledge of the channel and avoids the inversion of a large-dimension matrix by using the fast Fourier transform (FFT), so the computational complexity is reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and of conventional LMMSE estimation are derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).

  15. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum, which is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate its high performance and computational efficiency.

  16. Earthquake magnitude estimation using the τ c and P d method for earthquake early warning systems

    Science.gov (United States)

    Jin, Xing; Zhang, Hongcai; Li, Jun; Wei, Yongxiang; Ma, Qiang

    2013-10-01

    Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disasters. Earthquake magnitude estimation is one of the most important, and also the most difficult, parts of an EEW system. In this paper, based on 142 earthquake events and 253 seismic records recorded by the KiK-net in Japan, together with aftershocks of the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τc and Pd methods. The standard deviations of the magnitude estimates from these two formulas are ±0.65 and ±0.56, respectively. The Pd value can also be used to estimate the peak ground velocity, so that warning information can be released to the public rapidly according to the estimation results. To ensure the stability and reliability of the magnitude estimates, we propose a compatibility test based on the nature of these two parameters. The reliability of the early warning information is significantly improved through this test.
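
As a sketch of the τc side of such a method: the characteristic period is commonly computed from the integrals of squared displacement and velocity over the first seconds of the P wave, and magnitude is then read off a fitted linear relation in log10(τc). This is a generic illustration, and the regression coefficients a and b are placeholders that would have to come from a fit like the paper's:

```python
import numpy as np

def tau_c(displacement, velocity, dt):
    """Characteristic period of the early P wave:
    tau_c = 2*pi * sqrt( integral(u^2 dt) / integral(v^2 dt) )."""
    u = np.asarray(displacement, dtype=float)
    v = np.asarray(velocity, dtype=float)
    u2 = np.sum(u ** 2) * dt   # ~ integral of squared displacement
    v2 = np.sum(v ** 2) * dt   # ~ integral of squared velocity
    return 2.0 * np.pi * np.sqrt(u2 / v2)

def magnitude_from_tau_c(tc, a, b):
    """Linear regression form M = a*log10(tau_c) + b; a and b (and the
    +/-0.65 scatter quoted in the abstract) must come from a regional fit."""
    return a * np.log10(tc) + b

# Sanity check with a synthetic 1 Hz signal: u = sin(w*t), v = w*cos(w*t),
# integrated over whole cycles, recovers tau_c = 2*pi/w = 1 s.
w = 2.0 * np.pi
t = np.linspace(0.0, 3.0, 3001)
tc = tau_c(np.sin(w * t), w * np.cos(w * t), t[1] - t[0])  # ~1.0 s
```

The compatibility test the authors propose would then compare the τc- and Pd-based estimates and flag events where the two disagree beyond their expected scatter.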

  17. A Comparative Study of Distribution System Parameter Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  18. Method for producing rapid pH changes

    Science.gov (United States)

    Clark, J.H.; Campillo, A.J.; Shapiro, S.L.; Winn, K.R.

    A method of initiating a rapid pH change in a solution comprises irradiating the solution with an intense flux of electromagnetic radiation at a frequency that produces a substantial pK change in a compound in solution. To optimize the resulting pH change, the compound being irradiated should have an excited-state lifetime substantially longer than the time required to establish an excited-state acid-base equilibrium in the solution. Desired pH changes can be accomplished in nanoseconds or less by means of picosecond pulses of laser radiation.

  19. Satellite-derived land covers for runoff estimation using SCS-CN method in Chen-You-Lan Watershed, Taiwan

    Science.gov (United States)

    Zhang, Wen-Yan; Lin, Chao-Yuan

    2017-04-01

    The Soil Conservation Service Curve Number (SCS-CN) method, originally developed by the USDA Natural Resources Conservation Service, is widely used to estimate direct runoff volume from rainfall. The runoff Curve Number (CN) parameter is based on the hydrologic soil group and land use factors. In Taiwan, the national land use maps were interpreted from aerial photos in 1995 and 2008. Rapid updating of post-disaster land use maps is limited by the high cost of production, so classification of satellite images is an alternative way to obtain a land use map. In this study, the Normalized Difference Vegetation Index (NDVI) in the Chen-You-Lan Watershed was derived from dry- and wet-season Landsat imagery during 2003-2008. Land covers were interpreted from the mean value and standard deviation of NDVI and were categorized into 4 groups, i.e., forest, grassland, agriculture, and bare land. The runoff volumes of typhoon events during 2005-2009 were then estimated using the SCS-CN method and verified against measured runoff data. The result showed that the model efficiency coefficient is 90.77%. Therefore, estimating runoff using a land cover map classified from satellite images is practicable.
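
The runoff-depth computation at the core of the method is the standard SCS-CN relation. A minimal sketch; the CN value and storm depth in the example are illustrative, not the watershed's:

```python
def scs_cn_runoff_mm(p_mm, cn, ia_ratio=0.2):
    """SCS-CN direct runoff depth (mm) for a storm of depth P (mm).

    S  = 25400/CN - 254            (potential maximum retention, mm)
    Ia = ia_ratio * S              (initial abstraction; 0.2*S is the classic choice)
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 100 mm storm on land cover mapped to CN = 80
# -> S = 63.5 mm, Ia = 12.7 mm, Q ~ 50.5 mm of direct runoff.
q = scs_cn_runoff_mm(100.0, 80.0)
```

In the study's workflow, the NDVI-derived land cover classes would each be assigned a CN (together with the hydrologic soil group), and Q summed over the watershed gives the direct runoff volume compared against gauged typhoon events.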

  20. Joint Pitch and DOA Estimation Using the ESPRIT method

    DEFF Research Database (Denmark)

    Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom

    2015-01-01

    In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and the ESPRIT method, based on subspace techniques that exploit the shift-invariance property in the time domain, is first used to estimate the pitch frequencies of the multiple harmonic signals. Using the estimated pitch frequencies, DOA estimates based on the ESPRIT method are then obtained from the shift-invariance structure in the spatial domain. Compared to the existing state-of-the-art algorithms, the proposed method requires no 2-D search and is therefore computationally more efficient while performing similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed…
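
    The spatial-domain shift-invariance step can be sketched for a single narrowband source as follows; the array size, source angle, and noise level are illustrative assumptions, not values from the paper, and the multi-pitch temporal stage is omitted.

```python
import numpy as np

def esprit_doa(X, n_sources, d_over_lambda=0.5):
    """Estimate DOAs (degrees) from ULA snapshots X (sensors x snapshots)
    using the ESPRIT shift-invariance principle."""
    R = X @ X.conj().T / X.shape[1]            # sample spatial covariance
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    Us = eigvecs[:, -n_sources:]               # signal subspace
    # Shift invariance: Us[:-1] @ Phi ~= Us[1:], solved by least squares
    Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda)))

# Simulate one narrowband source at 20 degrees on an 8-element
# half-wavelength ULA (hypothetical scenario).
rng = np.random.default_rng(0)
m, n, theta = 8, 200, np.radians(20.0)
a = np.exp(1j * 2 * np.pi * 0.5 * np.arange(m) * np.sin(theta))  # steering vector
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)         # source signal
noise = 0.01 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
X = np.outer(a, s) + noise
print(np.round(esprit_doa(X, 1), 1))
```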

  1. A new rapid method for isolating nucleoli.

    Science.gov (United States)

    Li, Zhou Fang; Lam, Yun Wah

    2015-01-01

    The nucleolus was one of the first subcellular organelles to be isolated from the cell. The advent of modern proteomic techniques has resulted in the identification of thousands of proteins in this organelle, and live cell imaging technology has allowed the study of the dynamics of these proteins. However, the limitations of current nucleolar isolation methods hinder the further exploration of this structure. In particular, these methods require the use of a large number of cells and tedious procedures. In this chapter we describe a new and improved nucleolar isolation method for cultured adherent cells. In this method cells are snap-frozen before direct sonication and centrifugation onto a sucrose cushion. The nucleoli can be obtained within a time as short as 20 min, and the high yield allows the use of less starting material. As a result, this method can capture rapid biochemical changes in nucleoli by freezing the cells at a precise time, hence faithfully reflecting the protein composition of nucleoli at the specified time point. This protocol will be useful for proteomic studies of dynamic events in the nucleolus and for better understanding of the biology of mammalian cells.

  2. [Rapid methods for the genus Salmonella bacteria detection in food and raw materials].

    Science.gov (United States)

    Sokolov, D M; Sokolov, M S

    2013-01-01

    The article considers the sanitary-epidemiological aspects and impact of Salmonella food poisoning in Russia and abroad. The main characteristics of the agent (Salmonella enterica subsp. Enteritidis) are summarized. The main sources of human Salmonella infection are poultry and livestock products (poultry meat, eggs, dairy products, meat products, etc.). Standard methods for identifying the causative agent are described, together with rapid (alternative) methods of Salmonella analysis using differential diagnostic media (MSRV, Salmosyst, XLT4 agar, Rambach agar, etc.), the Singlepath-Salmonella rapid test, and real-time PCR (foodproof Salmonella). The rapid tests substantially reduce (by 24-48 h) the time required to identify Salmonella.

  3. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    In this paper, we propose a novel direction-of-arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods to each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and noise subspaces of the output sub-channels to estimate the DOAs. The proposed channelization-based method isolates signals in different bandwidths and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments, and its parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio-frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR demonstrate the performance, and the results verify the effectiveness of the proposed method.

  4. Liquid chromatography/tandem mass spectrometry method for quantitative estimation of solutol HS15 and its applications

    OpenAIRE

    Bhaskar, V. Vijaya; Middha, Anil; Srivastava, Pratima; Rajagopal, Sriram

    2015-01-01

    A rapid, sensitive and selective pseudo-MRM (pMRM)-based method for the determination of solutol HS15 (SHS15) in rat plasma was developed using liquid chromatography/tandem mass spectrometry (LC-MS/MS). The most abundant ions, corresponding to SHS15 free polyethylene glycol (PEG) oligomers at m/z 481, 525, 569, 613, 657, 701, 745, 789, 833, 877, 921 and 965, were selected for pMRM in electrospray ionization mode. The purity of the lipophilic and hydrophilic components of SHS15 was estimated using ...

  5. A Method for Estimating Meteorite Fall Mass from Weather Radar Data

    Science.gov (United States)

    Laird, C.; Fries, M.; Matson, R.

    2017-01-01

    Techniques such as weather RADAR, seismometers, and all-sky cameras allow new insights concerning the physics of meteorite fall dynamics and fragmentation during "dark flight", the period of time between the end of the meteor's luminous flight and the concluding impact on the Earth's surface. Understanding dark flight dynamics enables us to rapidly analyze the characteristics of new meteorite falls. This analysis will provide essential information to meteorite hunters to optimize recovery, increasing the frequency and total mass of scientifically important freshly-fallen meteorites available to the scientific community. We have developed a mathematical method to estimate meteorite fall mass using reflectivity data as recorded by National Oceanic and Atmospheric Administration (NOAA) Next Generation RADAR (NEXRAD) stations. This study analyzed eleven official and one unofficial meteorite falls in the United States and Canada to achieve this purpose.

  6. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
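
    The model-based idea described above can be sketched as follows: a pump QH curve fitted at nominal speed is rescaled by the affinity laws using the speed estimate from the frequency converter, and the operating point is found at its intersection with a system curve, with no external flow or pressure measurement. All curve coefficients below are hypothetical, not taken from the paper.

```python
def pump_head(q, n_rel, h0=40.0, h1=-0.02, h2=-0.001):
    """Pump QH curve rescaled by the affinity laws.

    At relative speed n_rel = n / n_nominal:
      H(Q) = h0 * n_rel**2 + h1 * n_rel * Q + h2 * Q**2
    (h0..h2 are hypothetical curve-fit coefficients; Q in m^3/h, H in m).
    """
    return h0 * n_rel ** 2 + h1 * n_rel * q + h2 * q ** 2

def operating_point(n_rel, h_static=5.0, k_sys=0.0005):
    """Estimate the flow rate at the intersection of the pump curve and a
    system curve H_sys = h_static + k_sys * Q**2 (bisection on the residual,
    which decreases monotonically with Q)."""
    lo, hi = 0.0, 500.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pump_head(mid, n_rel) > h_static + k_sys * mid ** 2:
            lo = mid            # pump still delivers more head than required
        else:
            hi = mid
    return 0.5 * (lo + hi)

q_full = operating_point(1.0)   # estimated flow at nominal speed
q_slow = operating_point(0.8)   # estimated flow at 80 % speed
print(round(q_full, 1), round(q_slow, 1))
```

    In an actual drive, `n_rel` and the shaft torque would come from the frequency converter's internal motor-state estimates rather than being assumed.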

  7. Population Estimation with Mark and Recapture Method Program

    International Nuclear Information System (INIS)

    Limohpasmanee, W.; Kaewchoung, W.

    1998-01-01

    Population estimation provides important information required for insect control planning, especially for control with SIT, and it can also be used to evaluate the efficiency of a control method. Because of the complexity of the calculations, mark-and-recapture population estimation methods have not been widely used. This program was therefore developed in QBasic to make the estimation more accurate and easier. The program implements six methods: those of Seber, Jolly-Seber, Jackson, Ito, Hamada, and Yamamura. The results were compared with those of the original methods and found to be accurate and easier to apply.
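
    As an illustration of the kind of calculation such a program automates, here is the basic two-sample Lincoln-Petersen estimate with Chapman's bias correction; this simple estimator is a textbook starting point and not necessarily one of the six methods implemented in the program.

```python
def chapman_estimate(marked, captured, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen population estimate.

    marked:     number marked and released in the first sample (M)
    captured:   size of the second sample (C)
    recaptured: marked animals found in the second sample (R)
    Returns (N_hat, approximate variance of N_hat).
    """
    n_hat = (marked + 1) * (captured + 1) / (recaptured + 1) - 1
    # Approximate variance of the Chapman estimator (Seber's formula)
    var = ((marked + 1) * (captured + 1) * (marked - recaptured)
           * (captured - recaptured)) / ((recaptured + 1) ** 2 * (recaptured + 2))
    return n_hat, var

# Hypothetical example: 150 insects marked, 200 captured later, 30 of them marked
n_hat, var = chapman_estimate(150, 200, 30)
print(round(n_hat))   # ~ 978 individuals
```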

  8. Rapid identification of salmonella serotypes with stereo and hyperspectral microscope imaging Methods

    Science.gov (United States)

    The hyperspectral microscope imaging (HMI) method can reduce detection time to within 8 hours, including the incubation process. The early and rapid detection achieved with this method, together with its high-throughput capability, makes HMI a prime candidate for implementation in the food industry. Th...

  9. Rapid Moment Magnitude Estimation Using Strong Motion Derived Static Displacements

    OpenAIRE

    Muzli, Muzli; Asch, Guenter; Saul, Joachim; Murjaya, Jaya

    2015-01-01

    The static surface deformation can be recovered from strong-motion records. Compared to satellite-based measurements such as GPS or InSAR, strong-motion records have the advantage that they can potentially provide real-time coseismic static displacements. The use of these valuable data was optimized for moment magnitude estimation. A centroid grid-search method was introduced to calculate the moment magnitude using a 1-D model. The method was applied to data sets of the 2011...
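
    The abstract does not give the conversion it uses, but the standard (IASPEI) relation between a grid-searched seismic moment M0 (in N·m) and moment magnitude is the usual final step of such a workflow:

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude from seismic moment via the standard relation
    Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# A seismic moment of 1.0e20 N*m corresponds to roughly Mw 7.3
print(round(moment_magnitude(1.0e20), 2))
```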

  10. Mobile Image Ratiometry: A New Method for Instantaneous Analysis of Rapid Test Strips

    OpenAIRE

    Donald C. Cooper; Bryan Callahan; Phil Callahan; Lee Burnett

    2012-01-01

    Here we describe Mobile Image Ratiometry (MIR), a new method for the automated quantification of standardized rapid immunoassay strips using consumer-based mobile smartphone and tablet cameras. To demonstrate MIR we developed a standardized method using rapid immunotest strips directed against cocaine (COC) and its major metabolite, benzoylecgonine (BE). We performed image analysis of three brands of commercially available dye-conjugated anti-COC/BE antibody test strips in response to three d...

  11. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses a method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present paper, a modified Wald test statistic due to Engle [6] is proposed for testing nonlinear hypotheses with the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses, using an iterative NLLS estimator based on nonlinear studentized residuals, is also proposed, and an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses; this paper uses the asymptotic properties of the nonlinear least squares estimator given by Jennrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustrations. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
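
    The two ingredients the paper combines, an iterative NLLS (Gauss-Newton) estimator and a Wald statistic for a parameter restriction, can be sketched as follows. The exponential model, data, and tested restriction are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def gauss_newton(f, jac, theta0, x, y, iters=50):
    """Iterative NLLS (Gauss-Newton) estimator for y = f(x, theta) + error."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = y - f(x, theta)                     # current residuals
        J = jac(x, theta)                       # Jacobian at current theta
        theta = theta + np.linalg.lstsq(J, r, rcond=None)[0]  # GN step
    return theta

def wald_statistic(theta, x, y, f, jac, R, r0):
    """Wald statistic for the restriction R @ theta = r0, using the
    asymptotic NLLS covariance s^2 * inv(J'J)."""
    n, p = len(y), len(theta)
    J = jac(x, theta)
    s2 = np.sum((y - f(x, theta)) ** 2) / (n - p)
    cov = s2 * np.linalg.inv(J.T @ J)
    d = R @ theta - r0
    return float(d @ np.linalg.inv(R @ cov @ R.T) @ d)  # ~ chi2(rank R) under H0

# Hypothetical exponential-growth model y = a * exp(b * x)
f = lambda x, th: th[0] * np.exp(th[1] * x)
jac = lambda x, th: np.column_stack([np.exp(th[1] * x),
                                     th[0] * x * np.exp(th[1] * x)])
rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 60)
y = 2.0 * np.exp(0.5 * x) + 0.05 * rng.standard_normal(60)
theta = gauss_newton(f, jac, [1.0, 0.1], x, y)
# Test H0: b = 0.5 (true here, so W should usually be small)
W = wald_statistic(theta, x, y, f, jac, np.array([[0.0, 1.0]]), np.array([0.5]))
print(np.round(theta, 3), round(W, 3))
```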

  12. Developing the RIAM method (rapid impact assessment matrix) in the context of impact significance assessment

    International Nuclear Information System (INIS)

    Ijaes, Asko; Kuitunen, Markku T.; Jalava, Kimmo

    2010-01-01

    In this paper the applicability of the RIAM method (rapid impact assessment matrix) is evaluated in the context of impact significance assessment. The methodological issues considered in the study are: 1) to test the possibility of enlarging the scoring system used in the method, and 2) to compare the significance classifications of RIAM and unaided decision-making to estimate the consistency between the two approaches. The data consisted of projects for which funding had been applied for via the European Union's Regional Development Trust in the area of Central Finland. Cases were evaluated with respect to their environmental, social and economic impacts using an assessment panel. The results showed that the scoring framework used in RIAM can be modified according to the problem situation at hand, which enhances its application potential. However, the changes made in the B criteria did not significantly affect the final ratings, which indicates the high importance of criteria A1 (importance) and A2 (magnitude) to the overall results. The significance classes obtained by the two methods diverged notably; in general, the ratings given by RIAM tended to be lower than intuitive judgement, implying that the RIAM method may be somewhat conservative in character.
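
    For reference, the standard RIAM environmental score multiplies the group A criteria and adds the group B criteria, which is why A1 and A2 dominate the result as the study observes. The example values below are hypothetical; the banding of scores into significance classes is omitted.

```python
def riam_environmental_score(a1, a2, b1, b2, b3):
    """Standard RIAM environmental score: ES = (A1 * A2) * (B1 + B2 + B3).

    a1: importance of condition (0..4), a2: magnitude of change (-3..+3)
    b1: permanence, b2: reversibility, b3: cumulativity (each 1..3)
    """
    return (a1 * a2) * (b1 + b2 + b3)

# Hypothetical case: locally important (a1=1), moderate negative impact (a2=-2),
# temporary (b1=2), reversible (b2=2), non-cumulative (b3=1)
es = riam_environmental_score(1, -2, 2, 2, 1)
print(es)  # -10
```

    Because group B enters only additively (summing to between 3 and 9), even large changes there rescale the score far less than a one-step change in A1 or A2, consistent with the paper's finding.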

  13. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    Science.gov (United States)

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc

  14. [Accuracy of three methods for the rapid diagnosis of oral candidiasis].

    Science.gov (United States)

    Lyu, X; Zhao, C; Yan, Z M; Hua, H

    2016-10-09

    Objective: To explore a simple, rapid and efficient method for the diagnosis of oral candidiasis in clinical practice. Methods: A total of 124 consecutive patients with suspected oral candidiasis were enrolled from the Department of Oral Medicine, Peking University School and Hospital of Stomatology, Beijing, China. Exfoliated cells of the oral mucosa and saliva (or concentrated oral rinse) obtained from all participants were tested by three rapid smear methods (10% KOH smear, Gram-stained smear, Congo red stained smear). The diagnostic efficacy (sensitivity, specificity, Youden's index, likelihood ratios, consistency, predictive values and area under the curve (AUC)) of each of the three methods was assessed by comparing the results with the gold standard (a combination of clinical diagnosis, laboratory diagnosis and expert opinion). Results: The Gram-stained smear of saliva (or concentrated oral rinse) demonstrated the highest sensitivity (82.3%). The 10% KOH smear of exfoliated cells showed the highest specificity (93.5%). The Congo red stained smear of saliva (or concentrated oral rinse) displayed the highest overall diagnostic efficacy (79.0% sensitivity, 80.6% specificity, 0.60 Youden's index, 4.08 positive likelihood ratio, 0.26 negative likelihood ratio, 80% consistency, 80.3% positive predictive value, 79.4% negative predictive value and 0.80 AUC). Conclusions: The Congo red stained smear of saliva (or concentrated oral rinse) could be used as a point-of-care tool for the rapid diagnosis of oral candidiasis in clinical practice. Trial registration: Chinese Clinical Trial Registry, ChiCTR-DDD-16008118.
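
    The derived indices reported above follow directly from sensitivity and specificity; this sketch reproduces the Congo red figures up to rounding (the abstract's 4.08 LR+ presumably comes from unrounded intermediate values).

```python
def diagnostic_indices(sensitivity, specificity):
    """Derive Youden's index and likelihood ratios from sensitivity and specificity."""
    return {
        "youden": sensitivity + specificity - 1.0,          # Youden's J
        "lr_pos": sensitivity / (1.0 - specificity),        # positive likelihood ratio
        "lr_neg": (1.0 - sensitivity) / specificity,        # negative likelihood ratio
    }

# Congo red stained smear figures from the abstract: 79.0 % sensitivity, 80.6 % specificity
ix = diagnostic_indices(0.790, 0.806)
print(round(ix["youden"], 2), round(ix["lr_pos"], 2), round(ix["lr_neg"], 2))
```

    Note that the predictive values also reported in the abstract additionally depend on the prevalence of candidiasis in the study sample, so they cannot be recovered from sensitivity and specificity alone.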

  15. A Rapid Aeroelasticity Optimization Method Based on the Stiffness characteristics

    OpenAIRE

    Yuan, Zhe; Huo, Shihui; Ren, Jianting

    2018-01-01

    A rapid aeroelasticity optimization method based on stiffness characteristics is proposed in the present study. It avoids the large time expense of static aeroelasticity analysis based on the traditional time-domain aeroelasticity method. The elastic axis location and torsional stiffness are discussed first. Both the torsional stiffness and the distance between the stiffness center and the aerodynamic center have a direct impact on the divergence velocity. The divergence velocity can be adjusted by changing the cor...

  16. Rapid determination of tannins in tanning baths by adaptation of BSA method.

    Science.gov (United States)

    Molinari, R; Buonomenna, M G; Cassano, A; Drioli, E

    2001-01-01

    A rapid and reproducible method for the determination of tannins in vegetable tanning baths is proposed as a modification of the BSA method for grain tannins reported in the literature. The protein BSA is used instead of the leather powder employed in the Filter Method adopted in Italy and various other countries of Central Europe, and the tannin content is determined by a spectrophotometric reading rather than by the gravimetric analysis of the Filter Method. The BSA method, which belongs to the mixed methods (those using both precipitation and complexation of tannins), consists of selective precipitation of tannin by BSA from a solution also containing non-tannins, dissolution of the precipitate, and quantification of the freed tannin by complexation with Fe(III) in hydrochloric acid solution. The absorbance values, read at 522 nm, are expressed in terms of tannic acid concentration using a calibration curve prepared with standard solutions of tannic acid; the results have been correlated with those obtained using the Filter Method.

  17. A rapid method for monitoring the hydrodeoxygenation of coal-derived naphtha

    Energy Technology Data Exchange (ETDEWEB)

    Farnand, B.A.; Coulombe, S.; Smiley, G.T.; Fairbridge, C.

    1988-01-01

    A bonded polar poly(ethylene glycol) capillary column has been used for the identification and quantification of the phenolic components in synthetic crude naphthas. This provides a rapid and routine method for the determination of phenolic oxygen content with results comparable to combustion and neutron activation methods. The method is most useful in monitoring the removal of phenolic oxygen by hydroprocessing. 11 refs., 1 fig. 1 tab.

  18. MASS SPECTROMETRY PROTEOMICS METHOD AS A RAPID SCREENING TOOL FOR BACTERIAL CONTAMINATION OF FOOD

    Science.gov (United States)

    2017-06-01

    Mass Spectrometry Proteomics Method as a Rapid Screening Tool for Bacterial Contamination of Food (ECBC-TR...). In this blinded pilot study, the method was evaluated for its ability to correctly classify whether or not food samples were contaminated with Salmonella enterica serotype Newport.

  19. Rapid estimation of aquifer salinity structure from oil and gas geophysical logs

    Science.gov (United States)

    Shimabukuro, D.; Stephens, M.; Ducart, A.; Skinner, S. M.

    2016-12-01

    We describe a workflow for creating aquifer salinity maps using Archie's equation for areas that have geophysical data from oil and gas wells. We apply this method in California, where geophysical logs are available in raster format from the Division of Oil, Gas, and Geothermal Resources (DOGGR) online archive, but it should be applicable to any region where geophysical logs are readily available. Much of the work is controlled by computer code, allowing salinity estimates for new areas to be generated rapidly. For a region of interest, the DOGGR online database is scraped for wells that were logged with multi-tool suites, such as the Platform Express or Triple Combination logging tools, and well-construction metadata, such as measured depth, spud date, and well orientation, are attached. The resulting local database allows a weighted-criteria selection of the wells most likely to have the shallow resistivity, deep resistivity, and density porosity measurements necessary to calculate salinity over the longest depth interval. The algorithm can be adjusted for geophysical log availability in older well fields and for sampling density. Once priority wells are identified, a student researcher team digitizes the raster geophysical logs using Neuralog software. Total dissolved solids (TDS) concentration is then calculated in clean, wet sand intervals using the resistivity-porosity method, a modified form of Archie's equation. These sand intervals are selected automatically using a combination of spontaneous potential and the difference between shallow and deep resistivity measurements; gamma-ray logs are not used because arkosic sands common in California make it difficult to distinguish sand from shale. Computer calculation allows easy adjustment of Archie's parameters. The result is a semi-continuous TDS profile for each well of interest. These profiles are combined and contoured using standard 3-D visualization software to yield preliminary salinity
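
    The core of the resistivity-porosity method can be sketched as follows: in a clean, fully water-saturated sand, Archie's equation yields the formation-water resistivity Rw from the deep resistivity and porosity, and salinity then follows from an empirical Rw-to-TDS conversion. The Archie constants shown are common defaults and the conversion factor is hypothetical; real workflows apply temperature corrections and calibrated charts.

```python
def formation_water_resistivity(rt_deep, porosity, a=1.0, m=2.0):
    """Archie's equation in a clean, fully water-saturated sand (Sw = 1):
       Rt = a * Rw / phi**m   ->   Rw = Rt * phi**m / a
    a and m are the tortuosity factor and cementation exponent (common defaults)."""
    return rt_deep * porosity ** m / a

def tds_from_rw(rw_ohmm, conversion=6000.0):
    """Very rough empirical salinity estimate: TDS (mg/L) ~ conversion / Rw.
    The conversion constant is hypothetical, for illustration only."""
    return conversion / rw_ohmm

# Hypothetical log readings for one sand interval: 5 ohm-m deep resistivity, 30 % porosity
rw = formation_water_resistivity(rt_deep=5.0, porosity=0.30)
print(round(rw, 3), round(tds_from_rw(rw)))
```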

  20. Fuji apple storage time rapid determination method using Vis/NIR spectroscopy

    Science.gov (United States)

    Liu, Fuqi; Tang, Xuxiang

    2015-01-01

    Fuji apple storage time rapid determination method using visible/near-infrared (Vis/NIR) spectroscopy was studied in this paper. Vis/NIR diffuse reflection spectroscopy responses of samples were measured for 6 days. The spectroscopy data were processed by stochastic resonance (SR). Principal component analysis (PCA) was used to analyze the original spectroscopy data and the SNR eigenvalues. The results demonstrated that PCA could not fully discriminate Fuji apples using the original spectroscopy data, whereas the signal-to-noise ratio (SNR) spectrum clearly classified all apple samples, and PCA using the SNR spectrum successfully discriminated the samples. Therefore, Vis/NIR spectroscopy was effective for rapid discrimination of Fuji apple storage time. The proposed method is also promising for condition safety control and management in food and environmental laboratories. PMID:25874818

  1. Rapid assessment of rice seed availability for wildlife in harvested fields

    Science.gov (United States)

    Halstead, B.J.; Miller, M.R.; Casazza, Michael L.; Coates, P.S.; Farinha, M.A.; Benjamin, Gustafson K.; Yee, J.L.; Fleskes, J.P.

    2011-01-01

    Rice seed remaining in commercial fields after harvest (waste rice) is a critical food resource for wintering waterfowl in rice-growing regions of North America. Accurate and precise estimates of the seed mass density of waste rice are essential for planning waterfowl wintering habitat extents and management. In the Sacramento Valley of California, USA, the existing method for obtaining estimates of availability of waste rice in harvested fields produces relatively precise estimates, but the labor-, time-, and machinery-intensive process is not practical for routine assessments needed to examine long-term trends in waste rice availability. We tested several experimental methods designed to rapidly derive estimates that would not be burdened with disadvantages of the existing method. We first conducted a simulation study of the efficiency of each method and then conducted field tests. For each approach, methods did not vary in root mean squared error, although some methods did exhibit bias for both simulations and field tests. Methods also varied substantially in the time to conduct each sample and in the number of samples required to detect a standard trend. Overall, modified line-intercept methods performed well for estimating the density of rice seeds. Waste rice in the straw, although not measured directly, can be accounted for by a positive relationship with density of rice on the ground. Rapid assessment of food availability is a useful tool to help waterfowl managers establish and implement wetland restoration and agricultural habitat-enhancement goals for wintering waterfowl. © 2011 The Wildlife Society.

  2. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which method is appropriate. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed-effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance

  3. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
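
    The classical baseline that such accelerated and MAP-modified EM algorithms build on is the multiplicative MLEM update for Poisson data. A minimal sketch on a toy system follows; the 3-bin, 2-pixel response matrix is hypothetical and unrelated to any real Compton camera geometry.

```python
import numpy as np

def mlem(A, y, iters=200):
    """Maximum-likelihood EM (MLEM) reconstruction for y ~ Poisson(A @ lam).

    Multiplicative update: lam <- (lam / A^T 1) * A^T (y / (A lam)),
    which preserves non-negativity at every iteration.
    """
    lam = np.ones(A.shape[1])          # flat non-negative initial image
    sens = A.sum(axis=0)               # sensitivity term A^T 1
    for _ in range(iters):
        ratio = y / np.clip(A @ lam, 1e-12, None)   # data / forward projection
        lam *= (A.T @ ratio) / sens
    return lam

# Tiny toy system: 2 image pixels observed through 3 measurement bins
A = np.array([[0.7, 0.1],
              [0.2, 0.3],
              [0.1, 0.6]])
true_lam = np.array([100.0, 50.0])
y = A @ true_lam                        # noise-free "measured" counts
print(np.round(mlem(A, y), 1))
```

    With noise-free, consistent data the iteration converges to the true intensities; with Poisson noise it converges to the maximum-likelihood estimate instead.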

  4. Methods for the estimation of uranium ore reserves

    International Nuclear Information System (INIS)

    1985-01-01

    The Manual is designed mainly to provide assistance in uranium ore reserve estimation methods to mining engineers and geologists with limited experience in estimating reserves, especially to those working in developing countries. This Manual deals with the general principles of evaluation of metalliferous deposits but also takes into account the radioactivity of uranium ores. The methods presented have been generally accepted in the international uranium industry

  5. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

    A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects from very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters, incorporating software test cases as a covariate in the model. It was found that these models can reasonably estimate the remaining number of software defects, which directly affects the reactor trip functions. The software reliability can be estimated from these modeling equations, and one approach to obtaining a software reliability value is proposed in this paper.
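
    A widely used non-homogeneous-Poisson-process SRGM of the kind described is the Goel-Okumoto model, whose mean value function gives the expected cumulative failures, the expected remaining defects, and a reliability over a future interval. This is an illustrative sketch, not the paper's specific model, and the parameter values are hypothetical.

```python
import math

def mean_failures(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b t)),
    where a is the expected total number of defects and b the detection rate."""
    return a * (1.0 - math.exp(-b * t))

def remaining_defects(t, a, b):
    """Expected number of undetected defects after testing time t."""
    return a - mean_failures(t, a, b)

def reliability(x, t, a, b):
    """Probability of no failure in (t, t + x]: R = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(mean_failures(t + x, a, b) - mean_failures(t, a, b)))

# Hypothetical fit: a = 25 total defects, b = 0.05 per test-hour, after t = 40 h
a, b, t = 25.0, 0.05, 40.0
print(round(remaining_defects(t, a, b), 2), round(reliability(10.0, t, a, b), 3))
```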

  6. A simple and rapid method for measurement of 10B-para-boronophenylalanine in the blood for boron neutron capture therapy using fluorescence spectrophotometry

    International Nuclear Information System (INIS)

    Kashino, Genro; Fukutani, Satoshi; Suzuki, Minoru

    2009-01-01

    10B from 10B-para-boronophenylalanine (BPA) and 10B-borocaptate sodium (BSH) has been detected in blood samples of patients undergoing boron neutron capture therapy (BNCT) using a prompt gamma-ray spectrometer or the inductively coupled plasma (ICP) method, respectively. However, the concentration of each compound cannot be ascertained, because the boron atoms of both molecules are targeted in these assays. Here, we propose a simple and rapid method to measure only BPA by detecting fluorescence based on the characteristics of phenylalanine. The 10B concentrations of blood samples from humans or mice were estimated from the fluorescence intensity at 275 nm of BPA excited by light of wavelength 257 nm using a fluorescence spectrophotometer. Fluorescence intensity showed a positive linear correlation with BPA concentration. Moreover, we established suitable conditions for BPA measurement in blood samples containing BPA, and the 10B concentrations estimated for blood samples from BPA-treated mice were similar between our method and the ICP method. This new assay will be useful for estimating the BPA concentration in blood samples obtained from patients undergoing BNCT, especially when BSH and BPA are used in combination. (author)
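
    The linear-calibration step underlying such an assay can be sketched as an ordinary least-squares line through fluorescence standards, inverted to read off an unknown concentration. The standard concentrations and intensities below are made-up toy values, not data from the study.

```python
# Hypothetical calibration standards: BPA-derived 10B concentration (ppm)
# versus fluorescence intensity at 275 nm (arbitrary units).
concs = [0.0, 5.0, 10.0, 20.0, 40.0]
intensities = [2.0, 12.0, 22.0, 42.0, 82.0]   # perfectly linear toy data

# Ordinary least-squares fit of intensity = slope * conc + intercept
n = len(concs)
mean_x = sum(concs) / n
mean_y = sum(intensities) / n
slope = (sum(x * y for x, y in zip(concs, intensities)) - n * mean_x * mean_y) \
        / (sum(x * x for x in concs) - n * mean_x ** 2)
intercept = mean_y - slope * mean_x

def concentration_from_intensity(i):
    """Invert the calibration line to estimate the 10B concentration of a sample."""
    return (i - intercept) / slope

print(round(concentration_from_intensity(30.0), 2))  # → 14.0 ppm
```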

  7. Technical note: Rapid image-based field methods improve the quantification of termite mound structures and greenhouse-gas fluxes

    Directory of Open Access Journals (Sweden)

    P. A. Nauer

    2018-06-01

    Full Text Available Termite mounds (TMs) mediate biogeochemical processes with global relevance, such as turnover of the important greenhouse gas methane (CH4). However, the complex internal and external morphology of TMs impedes an accurate quantitative description. Here we present two novel field methods, photogrammetry (PG) and cross-sectional image analysis, to quantify the external and internal mound structure of 29 TMs of three termite species. Photogrammetry was used to measure epigeal volume (VE), surface area (AE) and mound basal area (AB) by reconstructing 3-D models from digital photographs, and was compared against a water-displacement method and the conventional approach of approximating TMs by simple geometric shapes. To describe TM internal structure, we introduce TM macro- and micro-porosity (θM and θμ), the volume fractions of macroscopic chambers and of microscopic pores in the wall material, respectively. Macro-porosity was estimated using image analysis of single TM cross sections, and compared against full X-ray computed tomography (CT) scans of 17 TMs. For these TMs we present complete pore fractions to assess species-specific differences in internal structure. The PG method yielded VE nearly identical to the water-displacement method, while approximation of TMs by simple geometric shapes led to errors of 4–200 %. Likewise, using PG substantially improved the accuracy of CH4 emission estimates by 10–50 %. Comprehensive CT scanning revealed that the investigated TMs have species-specific ranges of θM and θμ, but similar total porosity. Image analysis of single TM cross sections produced good estimates of θM for species with thick walls and evenly distributed chambers. The new image-based methods allow rapid and accurate quantitative characterisation of TMs to answer ecological, physiological and biogeochemical questions. The PG method should be applied when measuring greenhouse-gas emissions from TMs to avoid large errors from inadequate shape approximations.

  8. Measurement of 90Sr radioactivity in a rapid method of strontium estimation by solvent extraction with dicarbollides

    International Nuclear Information System (INIS)

    Svoboda, K.; Kyrs, M.

    1994-01-01

    The application of liquid scintillation counting to the measurement of 90Sr radioactivity was studied, using a previously published rapid method of strontium separation based on solvent extraction with a solution of cobalt dicarbollide and Slovafol 909 in a nitrobenzene-carbon tetrachloride mixture and subsequent stripping of strontium with a 0.15 M Chelaton IV (CDTA) solution at pH 10.2. With liquid scintillation counting, the effect of 90Y β-activity on 90Sr counting can be eliminated more efficiently than when measuring the evaporated aliquot with a solid scintillator. The adverse effect of traces of dicarbollide, nitrobenzene, and CCl4 passed over into the aqueous 90Sr solution prepared for counting is caused by a poorly reproducible shift of the 90Sr + 90Y β-radiation spectral curve towards lower energies, the so-called quenching. The shift is independent of the aqueous phase concentration of the organic compounds mentioned. They can be removed by shaking the aqueous re-extract with an equal volume of octanol or amyl acetate, so that the undesirable spectral shift does not occur. No loss of strontium was found in this washing procedure. (author) 2 tabs., 6 figs., 5 refs

  9. Evaluation and reliability of bone histological age estimation methods

    African Journals Online (AJOL)

    Human age estimation at death plays a vital role in forensic anthropology and bioarchaeology. Researchers used morphological and histological methods to estimate human age from their skeletal remains. This paper discussed different histological methods that used human long bones and ribs to determine age ...

  10. Study on Top-Down Estimation Method of Software Project Planning

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun-guang; L(U) Ting-jie; ZHAO Yu-mei

    2006-01-01

    This paper studies a new software project planning method using actual project data, in order to make software project plans more effective. From the perspective of system theory, the new method regards a software project plan as an associative unit for study. During top-down estimation of a software project, the Program Evaluation and Review Technique (PERT) and the analogy method are combined to estimate project size; effort estimates and specific schedules are then obtained according to the distribution of effort across phases. This allows a set of practical and feasible planning methods to be constructed. Actual data indicate that this set of methods leads to effective software project planning.
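The PERT component of the top-down estimate rests on the classic beta-distribution three-point formula. A minimal sketch of that standard formula (not the paper's specific combined procedure), with made-up module estimates in KLOC:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT three-point expected value and standard deviation."""
    expected = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
    std_dev = (pessimistic - optimistic) / 6.0
    return expected, std_dev

# Hypothetical size estimate for one module, in KLOC.
e, sd = pert_estimate(4.0, 6.0, 11.0)   # e = 6.5, sd = 7/6
```

Summing module-level expected values, and combining the standard deviations in quadrature, gives a project-level size estimate with an uncertainty band.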

  11. Application of two electrical methods for the rapid assessment of freezing resistance in Salix eriocephala

    Energy Technology Data Exchange (ETDEWEB)

    Tsarouhas, V.; Kenney, W.A.; Zsuffa, L. [University of Toronto, Ontario (Canada). Faculty of Forestry

    2000-09-01

    The importance of early selection of frost-resistant Salix clones makes it desirable to have a rapid and accurate screening method for assessing freezing resistance among several genotypes. Two electrical methods, stem electrical impedance to 1 and 10 kHz alternating current and electrolyte leakage of leaf tissue, were evaluated for detecting freezing resistance in three North American Salix eriocephala Michx. clones after subjecting them to five different freezing temperatures (-1, -2, -3, -4, and -5 deg C). Differences in the electrical impedance at 1 and 10 kHz, and in the ratio of the impedance at the two frequencies (low/high), before and after the freezing treatment (DZ{sub low}, DZ{sub high}, and DZ{sub ratio}, respectively) were estimated. Electrolyte leakage was expressed as relative conductivity (RC{sub t}) and index of injury (IDX{sub t}). Results from the two methods, obtained two days after the freezing stress, showed that both electrical methods were able to detect freezing injury in S. eriocephala. However, the electrolyte leakage method detected injury at more levels of freezing stress (-3, -4, and -5 deg C) than the impedance method (-4 and -5 deg C); it also detected clonal differences in S. eriocephala freezing resistance, and it correlated best with visually assessed freezing injury. No significant impedance or leakage changes were found after the -1 and -2 deg C freezing temperatures. (author)

  12. Computerized method for rapid optimization of immunoassays

    International Nuclear Information System (INIS)

    Rousseau, F.; Forest, J.C.

    1990-01-01

    The authors have developed a one-step quantitative method for radioimmunoassay optimization. The method is rapid and requires only a series of saturation curves at different antiserum titres. After calculating the saturation point at several antiserum titres using the Scatchard plot, the authors produced a table that predicts the main characteristics of the standard curve (Bo/T, Bo and T) for any combination of antiserum titre and percentage of site saturation. The authors developed a microcomputer program able to interpolate all the data needed to produce such a table from the results of the saturation curves. This program can also predict the sensitivity of the assay under any experimental conditions, provided the antibody does not discriminate between the labeled and the unlabeled antigen. The authors tested the accuracy of this optimization table with two in-house RIA systems: 17-β-estradiol and hLH. The results obtained experimentally, including sensitivity determinations, were concordant with those predicted from the optimization table. This method greatly accelerates and improves the optimization of radioimmunoassays [fr

  13. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali; Pätzold, Matthias

    2012-01-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design and optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.

  14. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali

    2012-04-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function that reduces the CT effect while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design and optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
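The frequency-averaging FCF estimator that both records take as the baseline can be sketched directly: build a synthetic two-path transfer function and correlate H(f) with H(f + Δf) across the band. This is the plain average, without the proposed CT-suppressing kernel; the path gains and delays below are made up.

```python
import cmath

def transfer_function(freqs, gains, delays):
    """H(f) = sum_k a_k * exp(-j*2*pi*f*tau_k) over the path components."""
    return [sum(a * cmath.exp(-2j * cmath.pi * f * tau)
                for a, tau in zip(gains, delays))
            for f in freqs]

def fcf_frequency_averaging(H, lag):
    """FCF estimate at a lag of `lag` bins: average H(f)*conj(H(f+lag))."""
    n = len(H) - lag
    return sum(H[i] * H[i + lag].conjugate() for i in range(n)) / n

df = 1e5                                   # 100 kHz bin spacing
freqs = [i * df for i in range(256)]
# Made-up two-path channel: unit gain at zero delay, 0.6 gain at 1 us.
H = transfer_function(freqs, gains=[1.0, 0.6], delays=[0.0, 1e-6])

r0 = fcf_frequency_averaging(H, 0)   # total power, about 1 + 0.36
r5 = fcf_frequency_averaging(H, 5)   # decorrelation at a 0.5 MHz offset
```

Sweeping the lag traces out |FCF| and hence the coherence bandwidth; the paper's kernel would weight the summands to damp the oscillating CT contributions that this plain average only partially cancels.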

  15. Rapid dual-injection single-scan 13N-ammonia PET for quantification of rest and stress myocardial blood flows

    International Nuclear Information System (INIS)

    Rust, T C; DiBella, E V R; McGann, C J; Christian, P E; Hoffman, J M; Kadrmas, D J

    2006-01-01

    Quantification of myocardial blood flows at rest and stress using 13N-ammonia PET is an established method; however, current techniques require a waiting period of about 1 h between scans. The objective of this study was to test a rapid dual-injection single-scan approach, where 13N-ammonia injections are administered 10 min apart during rest and adenosine stress. Dynamic PET data were acquired in six human subjects using imaging protocols that provided separate single-injection scans as gold standards. Rest and stress data were combined to emulate rapid dual-injection data, so that the underlying activity from each injection was known exactly. Regional blood flow estimates were computed from the dual-injection data using two methods: background subtraction and combined modelling. The rapid dual-injection approach provided blood flow estimates very similar to the conventional single-injection standards. Rest blood flow estimates were affected very little by the dual-injection approach, and stress estimates correlated strongly with separate single-injection values (r = 0.998, mean absolute difference = 0.06 ml min-1 g-1). An actual rapid dual-injection scan was successfully acquired in one subject and further demonstrates the feasibility of the method. This study with a limited dataset demonstrates that blood flow quantification can be obtained in only 20 min with the rapid dual-injection approach, with accuracy similar to that of conventional separate rest and stress scans. The rapid dual-injection approach merits further development and additional evaluation for potential clinical use

  16. Validity of rapid estimation of erythrocyte volume in the diagnosis of polycythemia vera

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, S.; Roedbro, P.

    1989-01-01

    In the diagnosis of polycythemia vera, estimation of erythrocyte volume (EV) from plasma volume (PV) and venous hematocrit (Hct_v) is usually thought inadvisable, because the ratio of whole-body hematocrit to venous hematocrit (f ratio) is higher in patients with splenomegaly than in normal subjects, and varies considerably between individuals. We determined the mean f ratio in 232 consecutive patients suspected of polycythemia vera (mean f = 0.967; SD 0.048) and used it with each patient's PV and Hct_v to calculate an estimated normalised EV_n. With measured EV as a reference value, EV_n was investigated as a diagnostic test. By means of two cut-off levels, the EV_n values could be divided into EV_n elevated, EV_n not elevated (both with high predictive values), and an EV_n borderline group. The size of the borderline EV_n group ranged from 5% to 46%, depending on the position of the cut-off levels, i.e. on the efficiency demanded from the diagnostic test. EV can safely and rapidly be estimated from PV and Hct_v if the mean f is determined for the relevant population, and if results in an easily definable borderline range of EV_n values are supplemented by direct EV determination.
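The record does not give the exact formula, but such an estimate follows from standard blood-volume relations if one assumes whole-body hematocrit Hct_b = f_ratio × Hct_v, blood volume BV = PV / (1 − Hct_b), and EV = BV − PV. A sketch under those assumptions, with a hypothetical patient:

```python
def estimated_erythrocyte_volume(pv_ml, hct_venous, f_ratio=0.967):
    """EV from PV and venous hematocrit, assuming whole-body hematocrit
    Hct_b = f_ratio * Hct_v, BV = PV / (1 - Hct_b), and EV = BV - PV."""
    hct_body = f_ratio * hct_venous
    blood_volume = pv_ml / (1.0 - hct_body)
    return blood_volume - pv_ml

# Hypothetical patient: PV = 3000 mL, venous hematocrit 0.52.
ev_n = estimated_erythrocyte_volume(3000.0, 0.52)   # about 3030 mL
```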

  17. A comparison of analysis methods to estimate contingency strength.

    Science.gov (United States)

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
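An event-based contingency strength estimate of the kind compared in this record is commonly operationalized as p(reinforcer | response) − p(reinforcer | no response). The sketch below implements that generic index with made-up observation windows; the paper's exhaustive and nonexhaustive variants differ in how the windows are counted, which this sketch does not model.

```python
def contingency_strength(windows):
    """windows: (response_occurred, reinforcer_followed) pairs.
    Returns p(Sr | R) - p(Sr | no R), a generic contingency index."""
    with_r = [sr for resp, sr in windows if resp]
    without_r = [sr for resp, sr in windows if not resp]
    p_r = sum(with_r) / len(with_r) if with_r else 0.0
    p_nr = sum(without_r) / len(without_r) if without_r else 0.0
    return p_r - p_nr

# Made-up observation windows: (response?, reinforcer?).
obs = [(True, True), (True, True), (True, False), (True, True),
       (False, False), (False, True), (False, False), (False, False)]
strength = contingency_strength(obs)   # 3/4 - 1/4 = 0.5
```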

  18. Plant-available soil water capacity: estimation methods and implications

    Directory of Open Access Journals (Sweden)

    Bruno Montoani Silva

    2014-04-01

    Full Text Available The plant-available water capacity of the soil is defined as the water content between field capacity and wilting point, and has wide practical application in land-use planning. In a representative profile of a Cerrado Oxisol, methods for estimating the wilting point were studied and compared, using a WP4-T psychrometer and a Richards chamber for undisturbed and disturbed samples. In addition, the field capacity was estimated from the water content at 6, 10 and 33 kPa and from the inflection point of the water retention curve, calculated with the van Genuchten and cubic polynomial models. We found that the field capacity moisture determined at the inflection point was higher than that given by the other methods, and that even at the inflection point the estimates differed according to the model used. The water content found for the permanent wilting point by the WP4-T psychrometer was significantly lower. We conclude that the estimate of the available water capacity is markedly influenced by the estimation methods, which has to be taken into consideration because of the practical importance of this parameter.
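For the van Genuchten model mentioned above, the water content at the inflection point of the retention curve has the closed form θ_inf = θr + (θs − θr)(1 + 1/m)^(−m) with m = 1 − 1/n. The sketch below uses illustrative parameter values, not the profile's fitted ones, and shows the pattern the record reports: the inflection-point field capacity exceeds the 33 kPa estimate.

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction h (h in cm, alpha in 1/cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def theta_at_inflection(theta_r, theta_s, n):
    """Closed-form water content at the retention curve's inflection."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) * (1.0 + 1.0 / m) ** (-m)

# Illustrative Oxisol-like parameters (not the paper's fitted values).
theta_r, theta_s, alpha, n = 0.15, 0.55, 0.05, 1.6
fc_inflection = theta_at_inflection(theta_r, theta_s, n)
fc_33kpa = van_genuchten_theta(330.0, theta_r, theta_s, alpha, n)  # ~33 kPa
```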

  19. Nonparametric methods for volatility density estimation

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2009-01-01

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on

  20. Fusion rule estimation using vector space methods

    International Nuclear Information System (INIS)

    Rao, N.S.V.

    1997-01-01

    In a system of N sensors, sensor S_j, j = 1, 2, ..., N, outputs Y^(j) ∈ R according to an unknown probability distribution P(Y^(j)|X), corresponding to input X ∈ [0, 1]. A training n-sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), ..., Y_i^(N)) and Y_i^(j) is the output of S_j in response to input X_i. The problem is to estimate a fusion rule f: R^N → [0, 1], based on the sample, such that the expected square error is minimized over a family of functions F that constitute a vector space. The function f* that minimizes the expected error cannot be computed since the underlying densities are unknown, and only an approximation to f* is feasible. We estimate the sample size sufficient to ensure that the estimate provides a close approximation to f* with high probability. The advantages of vector space methods are two-fold: (a) the sample size estimate is a simple function of the dimensionality of F, and (b) the estimate can be easily computed by well-known least squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks.
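The least-squares computation in (b) can be made concrete by choosing a basis for the vector space F; here an affine basis f(y) = w0 + Σ w_j·y_j (an illustrative choice) and a made-up two-sensor sample. The normal equations are solved with plain Gaussian elimination:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_fusion_rule(Y, X):
    """Least-squares fusion rule f(y) = w0 + sum_j w_j * y_j.
    Y: list of N-sensor output vectors; X: the true inputs."""
    Phi = [[1.0] + list(y) for y in Y]           # affine basis functions
    d = len(Phi[0])
    AtA = [[sum(p[i] * p[j] for p in Phi) for j in range(d)] for i in range(d)]
    Atb = [sum(p[i] * x for p, x in zip(Phi, X)) for i in range(d)]
    return solve(AtA, Atb)

# Hypothetical 2-sensor sample: noisy readings of the same input.
X = [0.1, 0.3, 0.5, 0.7, 0.9, 0.2, 0.8, 0.4]
Y = [(x + dx, x + dy) for x, dx, dy in
     zip(X, [0.02, -0.01, 0.03, -0.02, 0.01, 0.0, -0.03, 0.02],
            [-0.01, 0.02, -0.02, 0.01, 0.0, 0.03, 0.01, -0.02])]
w = fit_fusion_rule(Y, X)
fused = lambda y: w[0] + w[1] * y[0] + w[2] * y[1]
```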

  1. A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2001-01-01

    There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.

  2. Thermodynamic properties of organic compounds estimation methods, principles and practice

    CERN Document Server

    Janz, George J

    1967-01-01

    Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the

  3. N-nitrosodimethylamine in drinking water using a rapid, solid-phase extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins, S W.D. [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Koester, C J [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Taguchi, V Y [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Wang, D T [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Palmentier, J P.F.P. [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch; Hong, K P [Ministry of Environment and Energy, Etobicoke, ON (Canada). Lab. Services Branch

    1995-12-01

    A simple, rapid method for the extraction of N-nitrosodimethylamine (NDMA) from drinking and surface waters was developed using Ambersorb 572. Development of an alternative to classical liquid-liquid extraction techniques was necessary to handle the workload presented by implementation of a provincial guideline of 9 ppt for drinking water and a regulatory level of 200 ppt for effluents. A granular adsorbent, Ambersorb 572, was used to extract the NDMA from the water in the sample bottle. The NDMA was then extracted from the Ambersorb 572 with dichloromethane in the autosampler vial. Method characteristics include a precision of 4% for replicate analyses, an accuracy of 6% at 10 ppt, and a detection limit of 1.0 ppt NDMA in water. Comparative data between the Ambersorb 572 method and liquid-liquid extraction showed excellent agreement (average difference of 12%). With the Ambersorb 572 method, dichloromethane use has been reduced by a factor of 1,000 and productivity has been increased by a factor of 3-4. Monitoring of a drinking water supply showed rapidly changing concentrations of NDMA from day to day. (orig.)

  4. A Group Contribution Method for Estimating Cetane and Octane Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group

    2016-07-28

    Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in existing fuel supplies, and often the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in the types of functional groups covered and in their range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
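The neural-network method builds on the classic linear group contribution idea: decompose a molecule into functional-group counts and sum fitted per-group contributions. The sketch below shows that linear baseline with made-up coefficients, chosen only so that n-hexadecane (cetane number 100 by definition) is reproduced; the paper's fitted values and group set are not given in this record.

```python
# Made-up group contributions (illustrative only; a real method fits these
# to measured data, and the paper uses a neural network instead).
CONTRIB = {"CH3": 4.5, "CH2": 6.5, "OH": -15.0}

def cetane_estimate(groups):
    """Baseline linear group contribution: sum(count * contribution)."""
    return sum(CONTRIB[g] * cnt for g, cnt in groups.items())

# n-hexadecane decomposes into 2 CH3 + 14 CH2 groups; the made-up
# coefficients were chosen to reproduce its defined cetane number of 100.
cn_hexadecane = cetane_estimate({"CH3": 2, "CH2": 14})
```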

  5. Rapid surface enhanced Raman scattering detection method for chloramphenicol residues

    Science.gov (United States)

    Ji, Wei; Yao, Weirong

    2015-06-01

    Chloramphenicol (CAP) is a widely used amide alcohol antibiotic that has been banned from use in food-producing animals in many countries. In this study, surface enhanced Raman scattering (SERS) coupled with gold colloidal nanoparticles was used for the rapid analysis of CAP. Density functional theory (DFT) calculations were conducted with Gaussian 03 at the B3LYP level using the 3-21G(d) and 6-31G(d) basis sets to assign the vibrations. The theoretical Raman spectrum of CAP agreed closely with the experimental spectrum; both exhibited three strong peaks characteristic of CAP at 1104 cm-1, 1344 cm-1 and 1596 cm-1, which were used for rapid qualitative analysis of CAP residues in food samples. The use of SERS for the measurement of CAP was explored by comparing different solvents, gold colloid concentrations and adsorption times. Under optimal conditions, the detection limit of the method was 0.1 μg/mL. The Raman peak at 1344 cm-1 was used as the index for quantitative analysis of CAP in food samples, with a linear correlation of R2 = 0.9802. Quantitative analysis of CAP residues in foods revealed that the SERS technique with gold colloidal nanoparticles is sensitive, stable and linear, and suited to rapid analysis of CAP residues in a variety of food samples.

  6. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties in the design variables; common results are estimates of a response density, which also implies estimates of its parameters. Common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time compared to a single deterministic analysis, which results in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been
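Latin hypercube sampling, the module added to NESSUS in this work, stratifies each input's range into N equal-probability bins and places exactly one sample in each bin per dimension. A minimal generic sketch on the unit hypercube (not NESSUS code):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """One sample per equal-width stratum in every dimension of [0,1)."""
    rng = random.Random(seed)
    pts = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)            # random stratum-to-sample pairing
        for i, s in enumerate(strata):
            pts[i][d] = (s + rng.random()) / n_samples
    return pts

pts = latin_hypercube(10, 2)
```

Mapping each coordinate through an inverse CDF turns these uniform strata into equal-probability strata of an arbitrary input distribution, which is why LHS needs far fewer samples than plain MC for comparable density-parameter estimates.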

  7. Early‐Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques

    Science.gov (United States)

    Couturier, Jean‐Luc; Kokossis, Antonis; Dubois, Jean‐Luc

    2016-01-01

    Abstract Biorefineries offer a promising alternative to fossil‐based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital‐intensive projects that involve state‐of‐the‐art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well‐documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early‐stage capital cost estimation tool suitable for biorefinery processes. PMID:27484398
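One of the simplest heuristic techniques of the kind this study compares is capacity-ratio scaling, the "six-tenths rule": scale a known plant cost by the capacity ratio raised to an exponent near 0.6. The numbers below are illustrative, not from the study:

```python
def scaled_cost(known_cost, known_capacity, new_capacity, exponent=0.6):
    """Capacity-ratio cost scaling: C2 = C1 * (S2 / S1) ** exponent."""
    return known_cost * (new_capacity / known_capacity) ** exponent

# Illustrative: a 50 kt/yr plant cost $100 M; estimate a 100 kt/yr plant.
estimate = scaled_cost(100.0, 50.0, 100.0)   # 100 * 2**0.6, about 151.6
```

Doubling capacity raises the estimate by only about 52 %, which captures the economy of scale these heuristics encode; the exponent itself is process-dependent and a large source of the discrepancies the study reports.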

  8. A Microfluidic Channel Method for Rapid Drug-Susceptibility Testing of Pseudomonas aeruginosa.

    Directory of Open Access Journals (Sweden)

    Yoshimi Matsumoto

    Full Text Available The recent global increase in the prevalence of antibiotic-resistant bacteria and the lack of development of new therapeutic agents emphasize the importance of selecting appropriate antimicrobials for the treatment of infections. However, a fully accelerated drug susceptibility testing method has yet to be developed, despite the availability of rapid identification methods. We propose an innovative rapid method for drug susceptibility testing of Pseudomonas aeruginosa that provides results within 3 h. The drug susceptibility testing microfluidic (DSTM) device was prepared using soft lithography. It consists of five sets of four microfluidic channels sharing one inlet slot; the four channels of each set are gathered in a small area, permitting simultaneous microscopic observation. Antimicrobials were pre-introduced into each channel and dried before use. Bacterial suspensions in cation-adjusted Mueller-Hinton broth were introduced through the inlet slot and incubated for 3 h. Susceptibilities were evaluated microscopically on the basis of differences in cell numbers and shapes between drug-treated and control cells, using dedicated software. Results for 101 clinically isolated strains of P. aeruginosa obtained using the DSTM method correlated strongly with results obtained using the ordinary microbroth dilution method. Ciprofloxacin, meropenem, ceftazidime, and piperacillin caused elongation in susceptible cells, while meropenem also induced spheroplast and bulge formation. Morphological observation could alternatively be used to determine the susceptibility of P. aeruginosa to these drugs, although amikacin had little effect on cell shape. The rapid determination of bacterial drug susceptibility using the DSTM method could also be applicable to other pathogenic species, and it could easily be introduced into clinical laboratories without the need for expensive instrumentation.

  9. Rapid analysis method for the determination of 14C specific activity in irradiated graphite.

    Science.gov (United States)

    Remeikis, Vidmantas; Lagzdina, Elena; Garbaras, Andrius; Gudelis, Arūnas; Garankin, Jevgenij; Plukienė, Rita; Juodis, Laurynas; Duškesas, Grigorijus; Lingis, Danielius; Abdulajev, Vladimir; Plukis, Artūras

    2018-01-01

    14C is one of the limiting radionuclides used in the categorization of radioactive graphite waste; this categorization is crucial in selecting the appropriate graphite treatment/disposal method. We propose a rapid analysis method for 14C specific activity determination in small graphite samples in the 1-100 μg range. The method applies an oxidation procedure to the sample, which extracts 14C from the different carbonaceous matrices in a controlled manner. Because this method enables fast online measurement and 14C specific activity evaluation, it can be especially useful for characterizing 14C in irradiated graphite when dismantling graphite moderator and reflector parts, or when sorting radioactive graphite waste from decommissioned nuclear power plants. The proposed rapid method is based on graphite combustion and the subsequent measurement of both CO2 and 14C, using a commercial elemental analyser and the semiconductor detector, respectively. The method was verified using the liquid scintillation counting (LSC) technique. The uncertainty of this rapid method is within the acceptable range for radioactive waste characterization purposes. The 14C specific activity determination procedure proposed in this study takes approximately ten minutes, comparing favorably to the more complicated and time consuming LSC method. This method can be potentially used to radiologically characterize radioactive waste or used in biomedical applications when dealing with the specific activity determination of 14C in the sample.
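The specific-activity determination described here ultimately reduces to dividing an efficiency-corrected count rate by the combusted carbon mass. A sketch of that arithmetic with illustrative parameter values (not the instrument's actual calibration):

```python
def specific_activity_bq_per_g(net_counts, live_time_s, efficiency, mass_g):
    """Specific activity = (count rate / detector efficiency) / mass."""
    count_rate = net_counts / live_time_s       # counts per second
    activity_bq = count_rate / efficiency       # decays per second (Bq)
    return activity_bq / mass_g

# Illustrative: 1200 net counts in 600 s, 40 % efficiency, 50 ug sample.
sa = specific_activity_bq_per_g(1200, 600.0, 0.40, 50e-6)   # 1e5 Bq/g
```

In the proposed setup the sample mass would come from the elemental analyser's CO2 measurement rather than direct weighing, which is what makes the method workable at the microgram scale.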

  10. Rapid analysis method for the determination of 14C specific activity in irradiated graphite.

    Directory of Open Access Journals (Sweden)

    Vidmantas Remeikis

    Full Text Available 14C is one of the limiting radionuclides used in the categorization of radioactive graphite waste; this categorization is crucial in selecting the appropriate graphite treatment/disposal method. We propose a rapid analysis method for 14C specific activity determination in small graphite samples in the 1-100 μg range. The method applies an oxidation procedure to the sample, which extracts 14C from the different carbonaceous matrices in a controlled manner. Because this method enables fast online measurement and 14C specific activity evaluation, it can be especially useful for characterizing 14C in irradiated graphite when dismantling graphite moderator and reflector parts, or when sorting radioactive graphite waste from decommissioned nuclear power plants. The proposed rapid method is based on graphite combustion and the subsequent measurement of both CO2 and 14C, using a commercial elemental analyser and the semiconductor detector, respectively. The method was verified using the liquid scintillation counting (LSC) technique. The uncertainty of this rapid method is within the acceptable range for radioactive waste characterization purposes. The 14C specific activity determination procedure proposed in this study takes approximately ten minutes, comparing favorably to the more complicated and time consuming LSC method. This method can be potentially used to radiologically characterize radioactive waste or used in biomedical applications when dealing with the specific activity determination of 14C in the sample.

  11. Motion estimation using point cluster method and Kalman filter.

    Science.gov (United States)

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
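The smoothing step above can be illustrated with a generic constant-velocity Kalman filter applied to a noisy angle signal. This is a sketch in the spirit of the abstract, not the authors' implementation; the time step, noise parameters, and test signal are all assumptions:

```python
import numpy as np

def kalman_smooth(z, dt=0.01, q=1.0, r=0.09):
    """Constant-velocity Kalman filter over a noisy angle sequence z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                # process noise covariance
    H = np.array([[1.0, 0.0]])                         # we measure the angle only
    x = np.array([z[0], 0.0])                          # state: [angle, angular rate]
    P = np.eye(2)
    out = np.empty(len(z))
    for i, zi in enumerate(z):
        x = F @ x                                      # predict
        P = F @ P @ F.T + Q
        s = (H @ P @ H.T).item() + r                   # innovation variance
        k = (P @ H.T)[:, 0] / s                        # Kalman gain
        x = x + k * (zi - (H @ x).item())              # update with measurement
        P = (np.eye(2) - np.outer(k, H[0])) @ P
        out[i] = x[0]
    return out

rng = np.random.default_rng(0)
t = np.arange(0, 20, 0.01)
truth = np.sin(2 * np.pi * 0.1 * t)                    # slow pendulum-like angle
meas = truth + rng.normal(0.0, 0.3, t.size)            # soft-tissue-like marker noise
est = kalman_smooth(meas)
```

As in the abstract, the filtered angle is visibly smoother than the raw marker signal while still tracking the underlying motion.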

  12. Simultaneous Estimation of Sitagliptin and Metformin in ...

    African Journals Online (AJOL)

    A rapid, simple, specific and precise high-performance thin-layer chromatography (HPTLC) method was developed for the simultaneous estimation of sitagliptin (STG) and metformin (MET) content in a fixed dose pharmaceutical formulation and also in bulk drug. In the developed method, aluminium backed silica gel 60 ...

  13. Rapid in vivo screening method for the evaluation of new anti ...

    African Journals Online (AJOL)

    Rapid in vivo screening method for the evaluation of new anti helicobacter ... Six to eight week-old mice pre-treated (7 days) with Amoxicillin/Metronidazole (25 ... These findings were used as a mouse model of Helicobacter pylori infection to ...

  14. A rapid estimation of near field tsunami run-up

    Science.gov (United States)

    Riquelme, Sebastián; Fuentes, Mauricio; Hayes, Gavin; Campos, Jaime

    2015-01-01

    Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify the knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well defined geometry, e.g., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for: the 1992 Mw 7.7 Nicaragua Earthquake, the 2001 Mw 8.4 Perú Earthquake, the 2003 Mw 8.3 Hokkaido Earthquake, the 2007 Mw 8.1 Perú Earthquake, the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake and the recent 2014 Mw 8.2 Iquique Earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with a peak of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Thus, such calculations will provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources.

  15. An Estimation Method for number of carrier frequency

    Directory of Open Access Journals (Sweden)

    Xiong Peng

    2015-01-01

    Full Text Available This paper proposes a method that uses AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, the pulse signal forms of radar are complex and changeable, among which the single pulse with multiple carrier frequencies is the most typical one, such as the frequency shift keying (FSK) signal, the frequency shift keying with linear frequency modulation (FSK-LFM) hybrid modulation signal and the frequency shift keying with binary phase shift keying (FSK-BPSK) hybrid modulation signal. For this kind of single pulse with multiple carrier frequencies, the paper fits an AR model to the complex signal and then estimates the power spectrum with the Burg algorithm. Experimental results show that the estimation method can still determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.

  16. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed: we combined the estimated spatial forest age maps and the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  17. Consumptive use of upland rice as estimated by different methods

    International Nuclear Information System (INIS)

    Chhabda, P.R.; Varade, S.B.

    1985-01-01

    The consumptive use of upland rice (Oryza sativa Linn.) grown during the wet season (kharif), as estimated by the modified Penman, radiation, pan-evaporation and Hargreaves methods, showed variation from the consumptive use estimated by the gravimetric method. The variability increased with an increase in the irrigation interval, and decreased with an increase in the level of N applied. The average variability was lowest for the pan-evaporation method, which could reliably be used for estimating the water requirement of upland rice if percolation losses are considered

  18. Estimation of Total Glomerular Number Using an Integrated Disector Method in Embryonic and Postnatal Kidneys

    Directory of Open Access Journals (Sweden)

    Michel G Arsenault

    2014-06-01

    Full Text Available Congenital Anomalies of the Kidney and Urinary Tract (CAKUT) are a polymorphic group of clinical disorders comprising the major cause of renal failure in children. Included within CAKUT is a wide spectrum of developmental malformations ranging from renal agenesis to renal hypoplasia and renal dysplasia (maldifferentiation of renal tissue), each characterized by varying deficits in nephron number. First presented in the Brenner Hypothesis, low congenital nephron endowment is becoming recognized as an antecedent cause of adult-onset hypertension, a leading cause of coronary heart disease, stroke, and renal failure in North America. Genetic mouse models of impaired nephrogenesis and nephron endowment provide a critical framework for understanding the origins of human kidney disease. Current methods to quantitate nephron number include (i) acid maceration, (ii) estimation of nephron number from a small number of tissue sections, (iii) imaging modalities such as MRI, and (iv) the gold standard physical disector/fractionator method. Despite its accuracy, the physical disector/fractionator method is rarely employed because it is labour-intensive, time-consuming and costly to perform. Consequently, less rigorous methods of nephron estimation are routinely employed by many laboratories. Here we present an updated, digitized version of the physical disector/fractionator method using free open source Fiji software, which we have termed the integrated disector method. This updated version of the gold standard modality accurately, rapidly and cost-effectively quantitates nephron number in embryonic and post-natal mouse kidneys, and can be easily adapted for stereological measurements in other organ systems.
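The gold-standard estimate named above rests on the fractionator principle: the disector count is scaled by the inverse of each sampling fraction. A minimal sketch; the sampling fractions and count below are invented for illustration:

```python
# Fractionator estimate underlying the disector method: scale the counted
# objects by the inverse sampling fractions. All numbers here are made up.
def fractionator_estimate(q_minus, section_fraction, area_fraction,
                          thickness_fraction=1.0):
    """Estimate total particle (e.g., glomerulus) number from a disector count."""
    return q_minus / (section_fraction * area_fraction * thickness_fraction)

# 150 glomeruli counted in every 10th section, sampling 1/4 of each section's area
n_total = fractionator_estimate(q_minus=150, section_fraction=1/10, area_fraction=1/4)
# -> about 6000 glomeruli for the whole kidney
```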

  19. [Experimental rationale for the parameters of a rapid method for oxidase activity determination].

    Science.gov (United States)

    Butorina, N N

    2010-01-01

    Experimental rationale is provided for the parameters of a rapid (1-2-min) test to concurrently determine the oxidase activity of all bacteria grown on the membrane filter after water filtration. Oxidase reagents, aqueous solutions of tetramethyl-p-phenylenediamine dihydrochloride and dimethyl-p-phenylenediamine dihydrochloride, were first ascertained to exert no effect on the viability and enzymatic activity of bacteria after one hour of contact. The algorithm of the rapid oxidase activity test was refined, specifying the allowable time for bacteria to contact the oxidase reagents and procedures for minimizing the effect of the contact on bacterial biochemical activity. An accelerated method based on lactose medium with tergitol 7 and Endo agar, applying the rapid oxidase test, has been devised to determine coliform bacteria; the time to a final response is 18-24 hours. The method has been included in GOST 52426-2005.

  20. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast custom-made character demand of 3D virtual humans and the need for rapid modeling in the field of 3D virtual reality, a new virtual human top-down rapid modeling method is put forward in this paper based on a systematic analysis of the current situation and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved level by level downwards. While the relationships of connectors and mapping restraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support the practical operation of virtual human top-down rapid modeling. Finally, a modeling application taking a Chinese captain character as an example is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides one new concept for 3D virtual human geometric modeling and texture modeling.

  1. Methods for estimating low-flow statistics for Massachusetts streams

    Science.gov (United States)

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. 
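The drainage-area ratio method described above can be sketched in a few lines; the 0.3-1.5 applicability range follows the report's description, while the flows and areas are invented illustration values:

```python
# Drainage-area ratio method: flow at an ungaged site scaled from an index gage
# by the ratio of drainage areas. Numeric values are invented illustrations.
def drainage_area_ratio_flow(q_index, area_index, area_ungaged):
    ratio = area_ungaged / area_index
    if not 0.3 <= ratio <= 1.5:
        # Outside this range the report recommends regression equations instead.
        raise ValueError("area ratio outside the recommended 0.3-1.5 range")
    return q_index * ratio

# Index gage: 2.4 ft3/s low flow, 50 mi2; ungaged site drains 35 mi2
q = drainage_area_ratio_flow(q_index=2.4, area_index=50.0, area_ungaged=35.0)
```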

  2. Comparing Methods for Estimating Direct Costs of Adverse Drug Events.

    Science.gov (United States)

    Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas

    2017-12-01

    To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  3. Development and Validation of Spectrophotometric Methods for Simultaneous Estimation of Valsartan and Hydrochlorothiazide in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    Monika L. Jadhav

    2014-01-01

    Full Text Available Two UV-spectrophotometric methods have been developed and validated for simultaneous estimation of valsartan and hydrochlorothiazide in a tablet dosage form. The first method employed solving of simultaneous equations based on the measurement of absorbance at two wavelengths, 249.4 nm and 272.6 nm, the λmax of valsartan and hydrochlorothiazide, respectively. The second method was the absorbance ratio method, which involves formation of a Q-absorbance equation at 258.4 nm (the isoabsorptive point) and also at 272.6 nm (the λmax of hydrochlorothiazide). The methods were found to be linear in the ranges of 5–30 µg/mL for valsartan and 4–24 µg/mL for hydrochlorothiazide using 0.1 N NaOH as solvent. The mean percentage recovery was found to be 100.20% and 100.19% for the simultaneous equation method and 98.56% and 97.96% for the absorbance ratio method, for valsartan and hydrochlorothiazide, respectively, at three different levels of standard additions. The precision (intraday, interday) of the methods was found to be within limits (RSD < 2%). It could be concluded from the results obtained in the present investigation that the two methods for simultaneous estimation of valsartan and hydrochlorothiazide in tablet dosage form are simple, rapid, accurate, precise and economical and can be used successfully in the quality control of pharmaceutical formulations and other routine laboratory analysis.
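The simultaneous-equation step reduces to solving a 2x2 linear system built from the absorptivity of each drug at the two wavelengths. A sketch with invented absorptivity values; real values come from calibration of each drug at each wavelength:

```python
import numpy as np

# Simultaneous-equation (Vierordt) method: two absorbance readings at the two
# λmax give two linear equations in the two concentrations (1 cm path assumed).
# The absorptivity matrix below is invented for illustration.
E = np.array([[0.040, 0.012],    # absorptivities at 249.4 nm (VAL, HCTZ)
              [0.015, 0.055]])   # absorptivities at 272.6 nm (VAL, HCTZ)

def concentrations(a_249, a_272):
    """Solve E @ c = A for the two concentrations (µg/mL)."""
    return np.linalg.solve(E, np.array([a_249, a_272]))

# Forward-check with a mixture of 20 µg/mL valsartan and 10 µg/mL hydrochlorothiazide
a_249 = 0.040 * 20 + 0.012 * 10   # simulated absorbance at 249.4 nm
a_272 = 0.015 * 20 + 0.055 * 10   # simulated absorbance at 272.6 nm
c_val, c_hctz = concentrations(a_249, a_272)
```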

  4. Phase difference estimation method based on data extension and Hilbert transform

    International Nuclear Information System (INIS)

    Shen, Yan-lin; Tu, Ya-qing; Chen, Lin-jun; Shen, Ting-ao

    2015-01-01

    To improve the precision and anti-interference performance of phase difference estimation for non-integer periods of sampled signals, a phase difference estimation method based on data extension and Hilbert transform is proposed. The estimated phase difference is obtained by means of data extension, Hilbert transform, cross-correlation, auto-correlation, and weighted phase averaging. Theoretical analysis shows that the proposed method effectively suppresses the end effects of the Hilbert transform. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of phase difference estimation and performs better than the correlation, Hilbert transform, and data extension-based correlation methods, which contributes to improving the measurement precision of the Coriolis mass flowmeter. (paper)
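The core Hilbert-transform step can be sketched with a numpy-only analytic signal; the end-trimming below stands in loosely for the paper's data-extension remedy for end effects, and the signal parameters are assumptions:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (one-sided spectrum doubling), numpy only."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0          # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0        # keep Nyquist bin for even n
    return np.fft.ifft(X * h)

def phase_difference(x, y, trim=0.1):
    """Phase of x relative to y from the averaged analytic cross-product."""
    zx, zy = analytic_signal(x), analytic_signal(y)
    n = len(x)
    lo, hi = int(n * trim), int(n * (1 - trim))   # discard edge samples
    return float(np.angle(np.mean(zx[lo:hi] * np.conj(zy[lo:hi]))))

fs = 1000.0
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 50 * t)
y = np.sin(2 * np.pi * 50 * t - 0.5)              # y lags x by 0.5 rad
dphi = phase_difference(x, y)
```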

  5. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    Full Text Available During environment testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimation indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to data from a single test flight of a certain aircraft. At last, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The result shows that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.
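The full GBM couples a grey GM(1,1) model with resampling; as a minimal sketch of the bootstrap half only, here is an estimated interval for the mean of a small amplitude sample. The data and confidence level are invented:

```python
import numpy as np

# Plain bootstrap interval for a small sample; this sketches only the bootstrap
# ingredient of GBM, not the grey-model part. Data are invented illustrations.
def bootstrap_interval(sample, n_boot=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                      for _ in range(n_boot)])      # resampled means
    return (float(np.quantile(means, alpha / 2)),
            float(np.quantile(means, 1 - alpha / 2)))

amplitudes = [1.02, 0.95, 1.10, 0.98, 1.05, 0.99, 1.07]   # small vibration sample
lo, hi = bootstrap_interval(amplitudes)                    # 95% estimated interval
```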

  6. Effectiveness of Rapid Cooling as a Method of Euthanasia for Young Zebrafish (Danio rerio).

    Science.gov (United States)

    Wallace, Chelsea K; Bright, Lauren A; Marx, James O; Andersen, Robert P; Mullins, Mary C; Carty, Anthony J

    2018-01-01

    Despite increased use of zebrafish (Danio rerio) in biomedical research, consistent information regarding appropriate euthanasia methods, particularly for embryos, is sparse. Current literature indicates that rapid cooling is an effective method of euthanasia for adult zebrafish, yet consistent guidelines regarding zebrafish younger than 6 mo are unavailable. This study was performed to determine the ages at which rapid cooling is an effective method of euthanasia for zebrafish and the exposure times necessary to reliably euthanize zebrafish using this method. Zebrafish at 3, 4, 7, 14, 16, 19, 21, 28, 60, and 90 d postfertilization (dpf) were placed into an ice water bath for 5, 10, 30, 45, or 60 min (n = 12 to 40 per group). In addition, zebrafish were placed in ice water for 12 h (age ≤14 dpf) or 30 s (age ≥14 dpf). After rapid cooling, fish were transferred to a recovery tank and the number of fish alive at 1, 4, and 12-24 h after removal from ice water was documented. Euthanasia was defined as a failure when evidence of recovery was observed at any point after removal from ice water. Results showed that younger fish required prolonged exposure to rapid cooling for effective euthanasia, with the required exposure time decreasing as fish age. Although younger fish required long exposure times, animals became immobilized immediately upon exposure to the cold water, and behavioral indicators of pain or distress rarely occurred. We conclude that zebrafish 14 dpf and younger require as long as 12 h, those 16 to 28 dpf of age require 5 min, and those older than 28 dpf require 30 s minimal exposure to rapid cooling for reliable euthanasia.

  7. Simple method for the estimation of glomerular filtration rate

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden); Tengstroem, B [District General Hospital, Skoevde (Sweden)

    1977-02-01

    A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of [125I]sodium iothalamate (10 μCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of the glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve-fitting techniques, with regard to predictive force and clinical applicability.

  8. Rapid estimation of 4DCT motion-artifact severity based on 1D breathing-surrogate periodicity

    Energy Technology Data Exchange (ETDEWEB)

    Li, Guang, E-mail: lig2@mskcc.org; Caraveo, Marshall [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Wei, Jie [Department of Computer Science, City College of New York, New York, New York 10031 (United States); Rimner, Andreas; Wu, Abraham J.; Goodman, Karyn A. [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Yorke, Ellen [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States)

    2014-11-01

    Purpose: Motion artifacts are common in patient four-dimensional computed tomography (4DCT) images, leading to an ill-defined tumor volume with large variations for radiotherapy treatment and a poor foundation with low imaging fidelity for studying respiratory motion. The authors developed a method to estimate 4DCT image quality by establishing a correlation between the severity of motion artifacts in 4DCT images and the periodicity of the corresponding 1D respiratory waveform (1DRW) used for phase binning in 4DCT reconstruction. Methods: Discrete Fourier transformation (DFT) was applied to analyze 1DRW periodicity. The breathing periodicity index (BPI) was defined as the sum of the largest five Fourier coefficients, ranging from 0 to 1. Distortional motion artifacts (excluding blurring) of cine-scan 4DCT at the junctions of adjacent couch positions around the diaphragm were classified in three categories: incomplete, overlapping, and duplicate anatomies. To quantify these artifacts, discontinuity of the diaphragm at the junctions was measured in distance and averaged along six directions in three orthogonal views. Artifacts per junction (APJ) across the entire diaphragm were calculated in each breathing phase, and the phase-averaged APJ, defined as motion-artifact severity (MAS), was obtained for each patient. To make MAS independent of patient-specific motion amplitude, two new MAS quantities were defined: MAS_D is normalized to the maximum diaphragmatic displacement and MAS_V is normalized to the mean diaphragmatic velocity (the breathing period was obtained from DFT analysis of 1DRW). Twenty-six patients’ free-breathing 4DCT images and corresponding 1DRW data were studied. Results: Higher APJ values were found around midventilation and full inhalation while the lowest APJ values were around full exhalation. The distribution of MAS is close to Poisson distribution with a mean of 2.2 mm. The BPI among the 26 patients was calculated with a value

  9. Rapid estimation of 4DCT motion-artifact severity based on 1D breathing-surrogate periodicity

    International Nuclear Information System (INIS)

    Li, Guang; Caraveo, Marshall; Wei, Jie; Rimner, Andreas; Wu, Abraham J.; Goodman, Karyn A.; Yorke, Ellen

    2014-01-01

    Purpose: Motion artifacts are common in patient four-dimensional computed tomography (4DCT) images, leading to an ill-defined tumor volume with large variations for radiotherapy treatment and a poor foundation with low imaging fidelity for studying respiratory motion. The authors developed a method to estimate 4DCT image quality by establishing a correlation between the severity of motion artifacts in 4DCT images and the periodicity of the corresponding 1D respiratory waveform (1DRW) used for phase binning in 4DCT reconstruction. Methods: Discrete Fourier transformation (DFT) was applied to analyze 1DRW periodicity. The breathing periodicity index (BPI) was defined as the sum of the largest five Fourier coefficients, ranging from 0 to 1. Distortional motion artifacts (excluding blurring) of cine-scan 4DCT at the junctions of adjacent couch positions around the diaphragm were classified in three categories: incomplete, overlapping, and duplicate anatomies. To quantify these artifacts, discontinuity of the diaphragm at the junctions was measured in distance and averaged along six directions in three orthogonal views. Artifacts per junction (APJ) across the entire diaphragm were calculated in each breathing phase, and the phase-averaged APJ, defined as motion-artifact severity (MAS), was obtained for each patient. To make MAS independent of patient-specific motion amplitude, two new MAS quantities were defined: MAS_D is normalized to the maximum diaphragmatic displacement and MAS_V is normalized to the mean diaphragmatic velocity (the breathing period was obtained from DFT analysis of 1DRW). Twenty-six patients’ free-breathing 4DCT images and corresponding 1DRW data were studied. Results: Higher APJ values were found around midventilation and full inhalation while the lowest APJ values were around full exhalation. The distribution of MAS is close to Poisson distribution with a mean of 2.2 mm. The BPI among the 26 patients was calculated with a value ranging from 0
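The BPI computation described above can be sketched directly. The paper defines BPI as the sum of the largest five Fourier coefficients ranging from 0 to 1; normalizing the magnitude spectrum so it sums to one, as done here, is an assumption made to keep the index in that range:

```python
import numpy as np

# Sketch of a breathing periodicity index: normalize the magnitude spectrum of
# the 1D respiratory waveform and sum the five largest coefficients.
def breathing_periodicity_index(waveform):
    w = np.asarray(waveform, dtype=float)
    mags = np.abs(np.fft.rfft(w - w.mean()))[1:]   # magnitude spectrum, DC dropped
    p = mags / mags.sum()                          # normalize so coefficients sum to 1
    return float(np.sort(p)[-5:].sum())            # sum of the five largest

t = np.arange(1000) / 10.0                          # 100 s sampled at 10 Hz
regular = np.sin(2 * np.pi * 0.25 * t)              # steady 4 s breathing period
rng = np.random.default_rng(2)
irregular = regular + 0.8 * rng.standard_normal(t.size)   # erratic breathing
bpi_regular = breathing_periodicity_index(regular)
bpi_irregular = breathing_periodicity_index(irregular)
```

A highly periodic waveform concentrates its spectrum in a few coefficients (BPI near 1), while irregular breathing spreads it out (low BPI), matching the paper's use of BPI as a 4DCT quality surrogate.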

  10. Numerical Estimation Method for the Non-Stationary Thrust of a Pulsejet Ejector Nozzle

    Directory of Open Access Journals (Sweden)

    A. Yu. Mikushkin

    2016-01-01

    Full Text Available The article considers a calculation method for the non-stationary thrust of a pulsejet ejector nozzle based on detonation combustion of gaseous fuel. To determine the initial distributions of the thermodynamic parameters inside the detonation tube, a rapid analysis based on x-t diagrams of the motion of the glowing combustion products was carried out. For this purpose, a section with transparent walls was connected to the outlet of the tube to register the movement of the combustion products. Based on the obtained images and the gas-dynamic and thermodynamic equations, the velocity distribution of the combustion products and their density, pressure and temperature, required for the numerical analysis, were calculated. The world literature presents data on the distribution of these parameters; however, they are given only for direct initiation of detonation at the closed end and for a chemically "frozen" gas composition. The article presents interpolation methods for parameters measured at temperatures of 2500-2800 K. Estimation of the thermodynamic parameters is based on the Chapman-Jouguet condition that the speed of the combustion products directly behind the detonation wave front, relative to the front, is equal to the speed of sound of these products at a given point. The method of minimizing the enthalpy of the final thermodynamic state was used to calculate the equilibrium parameters, employing the software package «IVTANTHERMO», a database of the thermodynamic properties of many individual substances over a wide temperature range. The integral thrust was computed numerically over the ejector nozzle surface by solving the Navier-Stokes equations with the second-order finite-difference Roe scheme. The combustion products were considered both as an inert mixture with "frozen" composition and as a mixture in chemical equilibrium with changing temperature. A comparison with experimental results was made. The above method can be used for rapid

  11. Comparison of methods for estimating premorbid intelligence

    OpenAIRE

    Bright, Peter; van der Linde, Ian

    2018-01-01

    To evaluate impact of neurological injury on cognitive performance it is typically necessary to derive a baseline (or ‘premorbid’) estimate of a patient’s general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded ‘hold/no hold’ tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed ...

  12. Visual and colorimetric methods for rapid determination of total tannins in vegetable raw materials

    Directory of Open Access Journals (Sweden)

    S. P. Kalinkina

    2016-01-01

    Full Text Available The article describes the development of a rapid colorimetric method for determining total tannins in aqueous extracts of vegetable raw materials. The sorption-colorimetric test is based on the sorption of tannins by polyurethane foam impregnated with FeCl3, producing black-green reaction products on the foam surface, which are then determined in the sorbent matrix. Selectivity is achieved through the specific interaction of polyphenols with iron(III) ions. The conditions of the sorption-colorimetric method were established: the concentration of iron(III) chloride impregnated in the polyurethane foam, the sorbent mass in the analytical cartridge, its degree of loading with the reagent, and the contact time of the phases. Color scales were developed for the visual determination of total tannins in terms of gallic acid, and digitized images of these scales were processed with the "Sorbfil TLC" software, eliminating subjective assessment of the color intensity of the test scale. Total tannins in aqueous extracts of vegetable raw materials were determined by the rapid method using tablets and analytical cartridges, and the results of test determinations with visual and densitometric registration of the analytical signal were compared with known methods. A metrological evaluation of the results of the sorption rapid colorimetric determinations was performed. The time for visual and densitometric rapid determination of tannins, including sample preparation, is 25-30 minutes, and the relative error does not exceed 28 %. The developed test methods for quantifying tannins dispense with sophisticated analytical equipment, can be carried out under non-laboratory conditions, and do not require highly skilled personnel.

  13. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  14. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi' an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
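The two building blocks named in this record, the Box-Cox transformation and Latin hypercube sampling, can be sketched in a few lines (a plain Latin hypercube rather than the paper's orthogonal-array-based MLHS; function names are illustrative):

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox power transform; lam = 0 gives the log transform."""
    x = np.asarray(x, dtype=float)
    if lam == 0.0:
        return np.log(x)
    return (x**lam - 1.0) / lam

def latin_hypercube(n_samples, n_dims, rng):
    """Plain Latin hypercube sample on the unit cube: each dimension is
    stratified into n_samples equal bins with exactly one point per bin,
    and the bins are paired randomly across dimensions."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])  # shuffle each column independently
    return u
```

The stratification is what reduces the variance of the fitted model parameters relative to plain Monte Carlo sampling.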

  15. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context (temporal or spatial information) in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals to process data obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for the correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented for both case studies, and the degree of improvement in classification accuracy obtained by the proposed method is assessed statistically using Kappa analysis.
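A minimal sketch of a proximity-based sliding-window correction of nominal labels, assuming a known proximity matrix (names are illustrative; with an identity matrix the rule reduces to majority voting within the window, while the paper's learned matrices encode richer class relationships):

```python
import numpy as np

def proximity_filter(labels, proximity, window=3):
    """Sliding-window relabeling for nominal classes: each position is
    replaced by the class with the greatest total proximity to the classes
    seen in its window (a generalized median defined by the proximity
    matrix; an identity matrix reduces this to majority voting)."""
    labels = np.asarray(labels)
    half = window // 2
    out = labels.copy()
    for i in range(len(labels)):
        win = labels[max(0, i - half): i + half + 1]
        scores = proximity[:, win].sum(axis=1)  # total proximity of each class to the window
        out[i] = int(np.argmax(scores))
    return out
```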

  16. The determination of Sr-90 in environmental material using an improved rapid method

    International Nuclear Information System (INIS)

    Ghods, A.; Veselsky, J.C.; Zhu, S.; Mirna, A.; Schelenz, R.

    1989-01-01

    A short report on strontium 90, its occurrence in the biosphere and its rapid determination methods is given. Classification of determination methods suitable for various environmental and biological materials is established. Interference due to Y-91 and a method to eliminate the activity of Y-90 and Y-91 is discussed. Tabs

  17. Stock price estimation using ensemble Kalman Filter square root method

    Science.gov (United States)

    Karya, D. F.; Katias, P.; Herlambang, T.

    2018-04-01

    Shares are securities evidencing the ownership of an individual or corporation in an enterprise, especially in public companies whose activity is stock trading. Investment in stocks is a likely choice for investors, as stock trading offers attractive profits. In choosing a safe stock investment, investors need a way of assessing the prices of the stocks they intend to buy so as to optimize their profits. An effective analysis method that reduces the risk investors may bear is predicting, or estimating, the stock price. Estimation is used because such problems can often be solved with previous information or data relevant to the problem. The contribution of this paper is that estimates of stock prices in the high, low, and close categories can serve as input for investors' decision making in investment. In this paper, stock prices were estimated using the Ensemble Kalman Filter Square Root method (EnKF-SR) and the Ensemble Kalman Filter method (EnKF). The simulation results showed that the estimates produced by the EnKF method were more accurate than those of the EnKF-SR, with an estimation error of about 0.2 % for EnKF and 2.6 % for EnKF-SR.
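The EnKF analysis step underlying both estimators in this record can be sketched for a directly observed scalar state (a toy illustration with perturbed observations; function and variable names are ours, and the square-root variant is not shown):

```python
import numpy as np

def enkf_step(ensemble, obs, obs_noise_std, rng):
    """One EnKF analysis step for a scalar state observed directly (H = 1):
    each member is nudged toward its own perturbed copy of the observation
    by a Kalman gain computed from the ensemble variance."""
    x = np.asarray(ensemble, dtype=float)
    var_x = x.var(ddof=1)
    gain = var_x / (var_x + obs_noise_std**2)
    perturbed = obs + obs_noise_std * rng.standard_normal(x.shape)
    return x + gain * (perturbed - x)
```

After the update the ensemble mean moves toward the observation and the ensemble spread contracts, which is the behavior a forecast-analysis cycle repeats at each new price observation.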

  18. Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE

    Science.gov (United States)

    Itai, Akitoshi; Yasukawa, Hiroshi

    This paper proposes a background noise estimation method based on tensor product expansion with a median and a Monte Carlo simulation. We have previously shown that tensor product expansion with an absolute error criterion is effective for estimating background noise; however, the conventional method does not always estimate the background noise properly. This paper shows that the estimation accuracy can be improved by the proposed methods.

  19. Manure sampling procedures and nutrient estimation by the hydrometer method for gestation pigs.

    Science.gov (United States)

    Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian

    2004-05-01

    Three manure agitation procedures were examined in this study (vertical mixing, horizontal mixing, and no mixing) to determine the efficacy of producing a representative manure sample. The total solids content for manure from gestation pigs was found to be well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.988 and 0.994, respectively. Linear correlations were observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.991 and 0.987, respectively). Therefore, it may be inferred that the nutrients in pig manure can be estimated with reasonable accuracy by measuring the liquid manure specific gravity. A rapid testing method for manure nutrient contents (TN and TP) using a soil hydrometer was also evaluated. The results showed that the estimating error increased from +/-10% to +/-30% with the decrease in TN (from 1000 to 100 ppm) and TP (from 700 to 50 ppm) concentrations in the manure. Data also showed that the hydrometer readings had to be taken within 10 s after mixing to avoid reading drift in specific gravity due to the settling of manure solids.

  20. A rapid method for screening arrayed plasmid cDNA library by PCR

    International Nuclear Information System (INIS)

    Hu Yingchun; Zhang Kaitai; Wu Dechang; Li Gang; Xiang Xiaoqiong

    1999-01-01

    Objective: To develop a PCR-based method for rapid and effective screening of an arrayed plasmid cDNA library. Methods: The plasmid cDNA library was arrayed and screened by PCR with a particular set of primers. Results: Four positive clones were obtained in about one week. Conclusion: This method can be applied to screening not only normal cDNA clones but also clones containing small-size fragments, and offers significant advantages over the traditional screening method in terms of sensitivity, specificity and efficiency

  1. Methods for risk estimation in nuclear energy

    Energy Technology Data Exchange (ETDEWEB)

    Gauvenet, A [CEA, 75 - Paris (France)

    1979-01-01

    The author presents methods for estimating the various risks related to nuclear energy: immediate or delayed risks, individual or collective risks, risks of accidents, and long-term risks. These methods have reached a high level of maturity, and their application to other industrial or human problems is currently under way, especially in English-speaking countries.

  2. Rapid high temperature field test method for evaluation of geothermal calcite scale inhibitors

    Energy Technology Data Exchange (ETDEWEB)

    Asperger, R.G.

    1982-08-01

    A test method is described which allows rapid field testing of calcite scale inhibitors in high-temperature geothermal brines. Five commercial formulations, chosen on the basis of laboratory screening tests, were tested in brines with low total dissolved solids at ca 500 F. Four were found to be effective; of these, two were capable of removing recently deposited scale. One chemical was tested in the full-flow brine line for 6 weeks. It was shown to stop a severe surface scaling problem at the well's control valve, proving the viability of the rapid test method. (12 refs.)

  3. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    Science.gov (United States)

    Morimoto, Emi; Namerikawa, Susumu

    The most notable recent trend in bidding and pricing behavior is the increasing number of bidders just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bid price and the execution price; in Japanese public works bidding, therefore, it is effectively the difference between the low-price investigation threshold and the execution price. In practice, bidders' strategies and behavior have been governed by public engineers' budgets, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit price-type estimation method began in 2004, while the accumulated estimation method remains a standard approach, so two types of standard estimation methods now coexist in Japan. In this study, we statistically analyzed bid information on civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis shows that bidding and pricing behavior is related to the estimation method used in Japanese public works bids: the two standard estimation methods produce different numbers of bidders (the bid/no-bid decision) and different distributions of bid prices (the markup decision). The comparison of bid price distributions showed that, for large public works estimated by the unit price-type method, bids concentrated near the low-price investigation threshold more than under the accumulated method. In addition, the number of bidders for works estimated by the unit-price method tends to increase significantly; unit-price estimation appears to be one of the factors construction companies weigh when deciding whether to participate in a bid.

  4. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the steepest descent/steepest ascent algorithm, the method of scoring, and the method of quadratic hill-climbing), based on numerical analysis, for estimating the parameters of a nonlinear regression model in a different way. Principles of matrix calculus are used to discuss the gradient algorithm methods. Yonathan Bard [1] compared gradient methods for the solution of nonlinear parameter estimation problems; this article instead takes an analytical approach to the gradient algorithm methods. The paper describes a new iterative technique, a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager-Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
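A generic sketch of the Gauss-Newton iteration discussed above, applied to a one-parameter exponential fit (illustrative only; it does not reproduce the paper's specific variant):

```python
import numpy as np

def gauss_newton(residual, jacobian, beta0, n_iter=20):
    """Gauss-Newton for nonlinear least squares: repeatedly linearize the
    residual r(beta) and solve the normal equations J'J delta = -J'r."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = residual(beta)
        J = jacobian(beta)
        beta = beta + np.linalg.solve(J.T @ J, -J.T @ r)
    return beta

# Fit y = exp(b * t) to synthetic data generated with b = 0.7.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
beta = gauss_newton(lambda b: np.exp(b[0] * t) - y,
                    lambda b: (t * np.exp(b[0] * t)).reshape(-1, 1),
                    beta0=[0.0])
```

With exact data the iteration recovers b = 0.7 in a handful of steps; the gradient methods listed in the record differ mainly in how they approximate or replace the J'J term.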

  5. Ore reserve estimation: a summary of principles and methods

    International Nuclear Information System (INIS)

    Marques, J.P.M.

    1985-01-01

    The mining industry has seen substantial improvements with the increasing use of computerized and electronic devices over the last few years, and the main ore reserve estimation methods have undergone recent advances aimed at improving their overall efficiency. This paper presents the three main groups of ore reserve estimation methods presently used worldwide: conventional, statistical and geostatistical, with a detailed description and comparative analysis of each. The conventional methods are the oldest, least complex and most widely employed. The geostatistical methods are the most recent, most precise and most complex. The statistical methods are intermediate between the others in complexity, diffusion and chronological order. (D.J.M.) [pt

  6. Radiometric method for the rapid detection of Leptospira organisms

    International Nuclear Information System (INIS)

    Manca, N.; Verardi, R.; Colombrita, D.; Ravizzola, G.; Savoldi, E.; Turano, A.

    1986-01-01

    A rapid and sensitive radiometric method for detection of Leptospira interrogans serovar pomona and Leptospira interrogans serovar copenhageni is described. Stuart's medium and Middlebrook TB (12A) medium supplemented with bovine serum albumin, catalase, and casein hydrolysate and labeled with 14C-fatty acids were used. The radioactivity was measured in a BACTEC 460. With this system, Leptospira organisms were detected in human blood in 2 to 5 days, a notably shorter time period than that required for the majority of detection techniques

  7. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
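The method-of-moments empirical prior and the conjugate gamma-Poisson update that the record's Bayes linear Bayes model builds on can be sketched as follows (a deliberate simplification: correlation across rates and the homogenization factors are omitted, and all names are illustrative):

```python
import numpy as np

def gamma_prior_mom(counts, exposure):
    """Method-of-moments fit of a gamma(a, b) prior for Poisson rates:
    match the mean and variance of the observed rates (counts/exposure)
    to the gamma mean a/b and variance a/b**2."""
    rates = np.asarray(counts, dtype=float) / exposure
    m, v = rates.mean(), rates.var(ddof=1)
    b = m / v
    return m * b, b  # (a, b)

def posterior_rate(count, exposure, a, b):
    """Conjugate gamma-Poisson update: posterior mean of the event rate."""
    return (a + count) / (b + exposure)
```

The posterior mean shrinks each raw rate toward the prior mean a/b, which is the pooling effect that ignoring correlation (as the record's example shows) can bias.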

  8. Dual ant colony operational modal analysis parameter estimation method

    Science.gov (United States)

    Sitarz, Piotr; Powałka, Bartosz

    2018-01-01

    Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in object ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in time while others in frequency domain. The former use correlation functions, the latter - spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. Dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the interval of estimated parameters, thus reducing the problem to optimisation task which is conducted with dedicated software based on ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

  9. Estimation of water percolation by different methods using TDR

    Directory of Open Access Journals (Sweden)

    Alisson Jadavi Pereira da Silva

    2014-02-01

    Full Text Available Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate percolation estimated by time-domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods, and the change in water storage in the soil profile at 16 moisture measurement points over different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was conducted in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated from continuous TDR monitoring, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter, and it has advantages over the other evaluated methods, the most relevant being the possibility of estimating percolation over short time intervals without predetermining soil hydraulic properties such as water retention and hydraulic conductivity. The estimates obtained by the Darcy-Buckingham equation for percolation using the K(θ) function predicted by the method of Hillel et al. (1972) were compatible with those obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
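The storage-change estimate described in this record can be sketched directly from TDR water-content profiles (a simplified water balance assuming a sealed surface, so drainage equals the decrease in profile storage; names and units are illustrative):

```python
import numpy as np

def percolation_from_storage(theta, depths, times):
    """Water-balance percolation estimate: with the surface sealed (no
    evaporation), drainage below the profile equals the rate of decrease
    of profile water storage S(t), here the trapezoid-rule integral of
    volumetric water content theta over depth.
    theta: array of shape (n_times, n_depths)."""
    theta = np.asarray(theta, dtype=float)
    dz = np.diff(np.asarray(depths, dtype=float))
    storage = ((theta[:, :-1] + theta[:, 1:]) / 2.0 * dz).sum(axis=1)
    return -np.diff(storage) / np.diff(np.asarray(times, dtype=float))
```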

  10. Rapid qualitative research methods during complex health emergencies: A systematic review of the literature.

    Science.gov (United States)

    Johnson, Ginger A; Vindrola-Padros, Cecilia

    2017-09-01

    The 2013-2016 Ebola outbreak in West Africa highlighted both the successes and limitations of social science contributions to emergency response operations. An important limitation was the rapid and effective communication of study findings. A systematic review was carried out to explore how rapid qualitative methods have been used during global health emergencies: which methods are commonly used, how they are applied, and the difficulties faced by social science researchers in the field. We also assess their value and benefit for health emergencies. The review findings are used to propose recommendations for qualitative research in this context. Peer-reviewed articles and grey literature were identified through six online databases. An initial search was carried out in July 2016 and updated in February 2017. The PRISMA checklist was used to guide the reporting of methods and findings. The articles were assessed for quality using the MMAT and AACODS checklists. From an initial search yielding 1444 articles, 22 articles met the criteria for inclusion. Thirteen of the articles were qualitative studies and nine used a mixed-methods design. The purposes of the rapid studies included identifying causes of the outbreak and assessing infrastructure, control strategies, health needs and health facility use. The studies varied in duration (from 4 days to 1 month). The main limitations identified by the authors were the low quality of the collected data, small sample sizes, and little time for cross-checking facts with other data sources to reduce bias. Rapid qualitative methods were seen as beneficial in highlighting context-specific issues that need to be addressed locally, population-level behaviors influencing health service use, and organizational challenges in response planning and implementation. Recommendations for carrying out rapid qualitative research in this context included the early designation of community leaders as a point of

  11. StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.

    Science.gov (United States)

    Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A

    2017-10-15

    Genomics features with similar genome-wide distributions are generally hypothesized to be functionally related; for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms that perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene correlates continuous data directly, avoiding data binarization and the subsequent data loss. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types consistent with known biology, and find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. favorov@sensi.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
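A minimal sketch of genome-wide kernel correlation between two continuous tracks, in the spirit of the StereoGene approach described above (confounder handling, the local-correlation track, and interval data are omitted; names are illustrative):

```python
import numpy as np

def kernel_correlation(x, y, sigma=2.0):
    """Pearson correlation of two continuous position tracks after smoothing
    each with a Gaussian kernel, so nearby positions contribute jointly
    rather than only exact overlaps."""
    half = int(3 * sigma)
    t = np.arange(-half, half + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    xs = np.convolve(np.asarray(x, dtype=float), k, mode="same")
    ys = np.convolve(np.asarray(y, dtype=float), k, mode="same")
    return float(np.corrcoef(xs, ys)[0, 1])
```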

  12. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    Science.gov (United States)

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate.• Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate.• Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods.• CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
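A toy version of the coexistence-likelihood idea behind CRACLE, assuming Gaussian species tolerances over a climate grid (CRACLE itself characterizes tolerances from global specimen collection data rather than assumed Gaussians; names are illustrative):

```python
import numpy as np

def cracle_estimate(means, stds, grid):
    """Coexistence-likelihood climate estimate under a toy Gaussian
    tolerance model: return the grid value maximizing the joint
    log-likelihood (sum of each coexisting species' log tolerance
    density)."""
    grid = np.asarray(grid, dtype=float)
    loglik = np.zeros_like(grid)
    for m, s in zip(means, stds):
        loglik += -0.5 * ((grid - m) / s) ** 2 - np.log(s)
    return float(grid[np.argmax(loglik)])
```

For species with symmetric tolerances the estimate lands where the tolerance curves jointly peak, which is the intuition behind inferring climate from the overlap of coexisting species' ranges.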

  13. A Rapid, Accurate, and Efficient Method to Map Heavy Metal-Contaminated Soils of Abandoned Mine Sites Using Converted Portable XRF Data and GIS

    Directory of Open Access Journals (Sweden)

    Jangwon Suh

    2016-12-01

    Full Text Available The use of portable X-ray fluorescence (PXRF and inductively coupled plasma atomic emission spectrometry (ICP-AES increases the rapidity and accuracy of soil contamination mapping, respectively. In practice, it is often necessary to repeat the soil contamination assessment and mapping procedure several times during soil management within a limited budget. In this study, we have developed a rapid, inexpensive, and accurate soil contamination mapping method using a PXRF data and geostatistical spatial interpolation. To obtain a large quantity of high quality data for interpolation, in situ PXRF data analyzed at 40 points were transformed to converted PXRF data using the correlation between PXRF and ICP-AES data. The method was applied to an abandoned mine site in Korea to generate a soil contamination map for copper and was validated for investigation speed and prediction accuracy. As a result, regions that required soil remediation were identified. Our method significantly shortened the time required for mapping compared to the conventional mapping method and provided copper concentration estimates with high accuracy similar to those measured by ICP-AES. Therefore, our method is an effective way of mapping soil contamination if we consistently construct a database based on the correlation between PXRF and ICP-AES data.
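The record's two-step workflow, converting PXRF readings to ICP-AES-equivalent values and then interpolating them spatially, can be sketched as follows (with a simple inverse-distance weighting standing in for the geostatistical interpolation actually used; names are illustrative):

```python
import numpy as np

def fit_conversion(pxrf, icp):
    """Least-squares line mapping in situ PXRF readings to
    ICP-AES-equivalent concentrations, fit on co-measured points."""
    slope, intercept = np.polyfit(pxrf, icp, 1)
    return slope, intercept

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation at one query point
    (a simple stand-in for kriging)."""
    d = np.linalg.norm(np.asarray(xy_known, float) - np.asarray(xy_query, float), axis=1)
    if np.any(d == 0):
        return float(np.asarray(values)[d == 0][0])  # exact hit on a known point
    w = 1.0 / d**power
    return float(np.sum(w * np.asarray(values)) / np.sum(w))
```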

  14. A New Method for Estimation of Velocity Vectors

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1998-01-01

    The paper describes a new method for determining the velocity vector of a remotely sensed object using either sound or electromagnetic radiation. The movement of the object is determined from a field with spatial oscillations in both the axial direction of the transducer and in one or two directions transverse to the axial direction. By using a number of pulse emissions, the inter-pulse movement can be estimated and the velocity found from the estimated movement and the time between pulses. The method is based on the principle of using transverse spatial modulation for making the received
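The inter-pulse movement estimation mentioned in this record can be illustrated with a cross-correlation lag search between consecutive echoes (a generic sketch; the paper's estimator for the transversely modulated field is more involved, and names are illustrative):

```python
import numpy as np

def displacement_by_xcorr(sig1, sig2):
    """Inter-pulse shift (in samples) as the lag maximizing the
    cross-correlation of two consecutive echoes; velocity then follows as
    shift * sample_spacing / pulse_repetition_time."""
    sig1 = np.asarray(sig1, dtype=float)
    sig2 = np.asarray(sig2, dtype=float)
    corr = np.correlate(sig2, sig1, mode="full")
    return int(np.argmax(corr)) - (len(sig1) - 1)
```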

  15. Comparison of methods used for estimating pharmacist counseling behaviors.

    Science.gov (United States)

    Schommer, J C; Sullivan, D L; Wiederholt, J B

    1994-01-01

    To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.

  16. Rapid methods for jugular bleeding of dogs requiring one technician.

    Science.gov (United States)

    Frisk, C S; Richardson, M R

    1979-06-01

    Two methods were used to collect blood from the jugular vein of dogs. In both techniques, only one technician was required. A rope with a slip knot was placed around the base of the neck to assist in restraint and act as a tourniquet for the vein. The technician used one hand to restrain the dog by the muzzle and position the head. The other hand was used for collecting the sample. One of the methods could be accomplished with the dog in its cage. The bleeding techniques were rapid, requiring approximately 1 minute per dog.

  17. Rapid HPLC-MS method for the simultaneous determination of tea catechins and folates.

    Science.gov (United States)

    Araya-Farias, Monica; Gaudreau, Alain; Rozoy, Elodie; Bazinet, Laurent

    2014-05-14

    An effective and rapid HPLC-MS method for the simultaneous separation of the eight most abundant tea catechins, gallic acid, and caffeine was developed. These compounds were rapidly separated within 9 min by a linear gradient elution using a Zorbax SB-C18 column packed with sub-2 μm particles. This methodology did not require preparative and semipreparative HPLC steps. In fact, diluted tea samples can be easily analyzed using HPLC-MS as described in this study. The use of mass spectrometry detection for quantification of catechins ensured a higher specificity of the method. The percent relative standard deviation was generally lower than 4% and 7% for most of the compounds tested in tea drinks and tea extracts, respectively. Furthermore, the method provided excellent resolution for folate determination alone or in combination with catechins. To date, no HPLC method able to discriminate catechins and folates in a quick analysis has been reported in the literature.

  18. The Brazilian version of the 20-item rapid estimate of adult literacy in medicine and dentistry

    Directory of Open Access Journals (Sweden)

    Agnes Fátima P. Cruvinel

    2017-08-01

    Background The misunderstanding of specific vocabulary may hamper patient-health provider communication. The 20-item Rapid Estimate of Adult Literacy in Medicine and Dentistry (REALMD-20) was constructed to screen patients by their ability to read medical/dental terminology in a simple and rapid way. This study aimed to perform the cross-cultural adaptation and validation of this instrument for its application in Brazilian dental patients. Methods The cross-cultural adaptation was performed through conceptual equivalence, verbatim translation, semantic, item and operational equivalence, and back-translation. After that, 200 participants responded to the adapted version of the REALMD-20, the Brazilian version of the Rapid Estimate of Adult Literacy in Dentistry (BREALD-30), ten questions of the Brazilian National Functional Literacy Index (BNFLI), and a questionnaire with socio-demographic and oral health-related questions. Statistical analysis was conducted to assess the reliability and validity of the REALMD-20 (P < 0.05). Results The sample was composed predominantly of women (55.5%) and white/brown (76%) individuals, with an average age of 39.02 years (±15.28). The average REALMD-20 score was 17.48 (±2.59, range 8–20). It displayed good internal consistency (Cronbach's alpha = 0.789) and test-retest reliability (ICC = 0.73; 95% CI [0.66 − 0.79]). In the exploratory factor analysis, six factors were extracted according to Kaiser's criterion. Factor I (eigenvalue = 4.53), comprising four terms ("Jaundice", "Amalgam", "Periodontitis" and "Abscess"), accounted for 25.18% of total variance, while factor II (eigenvalue = 1.88), comprising four other terms ("Gingivitis", "Instruction", "Osteoporosis" and "Constipation"), accounted for 10.46% of total variance. The first four factors accounted for 52.1% of total variance. The REALMD-20 was positively correlated with the BREALD-30 (Rs = 0

  19. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

    Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
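The abstract does not give the MUR estimator's formula, but the ORR baseline it builds on has the standard closed form β̂(k) = (XᵀX + kI)⁻¹Xᵀy. A minimal sketch of why ridge helps under multicollinearity, with synthetic collinear data (all numbers illustrative):

```python
import numpy as np

# Two nearly identical predictors: a textbook multicollinearity setup.
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=50)])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=50)

def ols(X, y):
    """Ordinary least squares: solve (X'X) beta = X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Ordinary ridge regression: solve (X'X + kI) beta = X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

beta_ols = ols(X, y)
beta_orr = ridge(X, y, k=0.1)
# Ridge shrinks the coefficient vector and stabilizes it near the
# true (1, 1), while OLS coefficients explode in opposite directions.
print(np.round(beta_ols, 2), np.round(beta_orr, 2))
```

URR and MUR modify this scheme to trade away the shrinkage bias; their exact forms are in the paper.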

  20. Simple rapid methods for freezing hybridomas in 96-well microculture plates.

    Science.gov (United States)

    Wells, D E; Price, P J

    1983-04-15

    Macroscopic hybridoma colonies were frozen and recovered in a good state of viability in 96-well microculture plates using 2 freezing procedures. These methods offer convenient and rapid means of preserving hybridomas and will permit laboratories developing monoclonal antibodies to distribute workloads to more manageable levels without discarding possibly valuable hybridomas.

  1. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
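The three semivariogram models the study compares have standard closed forms. A sketch with unit sill, where `a` is the (practical) range; the exact truncated-fractal form used in the paper is not given in the abstract, so the power-law version below is an assumption:

```python
import numpy as np

def spherical(h, a):
    """Spherical model: rises to the sill at h = a, flat beyond."""
    h = np.minimum(h, a)
    return 1.5 * (h / a) - 0.5 * (h / a) ** 3

def exponential(h, a):
    """Exponential model: reaches ~95% of the sill at h = a."""
    return 1.0 - np.exp(-3.0 * h / a)

def truncated_fractal(h, a, hurst=0.25):
    """Assumed form: power law in h, truncated at the sill."""
    return np.minimum((h / a) ** (2 * hurst), 1.0)

h = np.linspace(0.0, 2.0, 5)
print(spherical(h, 1.0))
```

Fitting one of these models separately in the vertical and lateral directions gives the two ranges whose ratio, together with the areal-to-vertical variance ratio, enters the estimation charts described above.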

  2. Radiometric method for the rapid detection of Leptospira organisms

    Energy Technology Data Exchange (ETDEWEB)

    Manca, N.; Verardi, R.; Colombrita, D.; Ravizzola, G.; Savoldi, E.; Turano, A.

    1986-02-01

    A rapid and sensitive radiometric method for the detection of Leptospira interrogans serovar pomona and Leptospira interrogans serovar copenhageni is described. Stuart's medium and Middlebrook TB (12A) medium supplemented with bovine serum albumin, catalase, and casein hydrolysate and labeled with 14C-fatty acids were used. The radioactivity was measured in a BACTEC 460. With this system, Leptospira organisms were detected in human blood in 2 to 5 days, a notably shorter time period than that required for the majority of detection techniques.

  3. Liquid Chromatography with Electrospray Ionization and Tandem Mass Spectrometry Applied in the Quantitative Analysis of Chitin-Derived Glucosamine for a Rapid Estimation of Fungal Biomass in Soil

    Directory of Open Access Journals (Sweden)

    Madelen A. Olofsson

    2016-01-01

    This method employs liquid chromatography-tandem mass spectrometry to rapidly quantify chitin-derived glucosamine for estimating fungal biomass. Analyte retention was achieved using hydrophilic interaction liquid chromatography with a zwitterionic stationary phase (ZIC-HILIC) and isocratic elution using 60% 5 mM ammonium formate buffer (pH 3.0) and 40% ACN. Inclusion of muramic acid and its chromatographic separation from glucosamine enabled calculation of the bacterial contribution to the latter. Galactosamine, an isobaric isomer of glucosamine found in significant amounts in soil samples, was also investigated. The two isomers form the same precursor and product ions and could not be chromatographically separated using this rapid method. Instead, glucosamine and galactosamine were distinguished mathematically, using the linear relationships describing the differences in product ion intensities for the two analytes. The m/z transitions of 180 → 72 and 180 → 84 were applied for the detection of glucosamine and galactosamine, and that of 252 → 126 for muramic acid. Limits of detection were in the nanomolar range for all included analytes. The total analysis time was 6 min, providing a high-throughput method.
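Distinguishing the two isobaric isomers "mathematically" amounts to solving a small linear system: each analyte contributes to both product ions with a characteristic response factor, so two measured intensities determine two concentrations. A sketch with made-up response factors (real values would be calibrated from pure standards):

```python
import numpy as np

# Hypothetical per-unit response factors of glucosamine (GlcN) and
# galactosamine (GalN) in the two product-ion channels.
R = np.array([[0.80, 0.30],   # m/z 180 -> 72: response per unit GlcN, GalN
              [0.20, 0.70]])  # m/z 180 -> 84: response per unit GlcN, GalN

def deconvolve(i72, i84):
    """Solve the 2x2 system R @ [GlcN, GalN] = [i72, i84]."""
    return np.linalg.solve(R, np.array([i72, i84]))

# A mixture of 10 units GlcN and 4 units GalN would produce:
i72 = 0.80 * 10 + 0.30 * 4   # 9.2
i84 = 0.20 * 10 + 0.70 * 4   # 4.8
glcn, galn = deconvolve(i72, i84)
print(round(glcn, 3), round(galn, 3))  # recovers 10.0 and 4.0
```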

  4. THE METHODS FOR ESTIMATING REGIONAL PROFESSIONAL MOBILE RADIO MARKET POTENTIAL

    Directory of Open Access Journals (Sweden)

    Y.À. Korobeynikov

    2008-12-01

    The paper presents the author's methods of estimating the regional professional mobile radio market potential, a market that belongs to high-tech B2B markets. These methods take into consideration such market peculiarities as the great range and complexity of products, technological constraints, and the infrastructure development required for operating the technological systems. The paper gives an estimate of the professional mobile radio market potential in Perm region. This estimate is already used by one of the systems integrators for its strategy development.

  5. Comparative study of the geostatistical ore reserve estimation method over the conventional methods

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1975-01-01

    Part I contains a comprehensive treatment of the comparative study of the geostatistical ore reserve estimation method over the conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result from this comparative study is in favor of the use of geostatistics in most cases because the method has lived up to its theoretical claims. A good exposition on the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and the third study objectives, which are to assess the potential benefits that can be derived by the introduction of the geostatistical method to the current state-of-the-art in uranium reserve estimation method and to be instrumental in generating the acceptance of the new method by practitioners through illustrative examples, assuming its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and the accompanying computer program user's guide

  6. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Background To estimate a classifier's error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier's generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier's generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
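The k-fold CV procedure the authors recommend is straightforward to state in code. A self-contained sketch with a deliberately trivial threshold classifier on synthetic one-dimensional data (all data and the toy classifier are illustrative, not from the paper):

```python
import numpy as np

def kfold_error(X, y, fit, predict, k=5, seed=0):
    """Estimate misclassification error by k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        errs.append(np.mean(predict(model, X[test]) != y[test]))
    return float(np.mean(errs))

# Toy classifier: threshold at the midpoint of the two class means.
def fit(X, y):
    return 0.5 * (X[y == 0].mean() + X[y == 1].mean())

def predict(threshold, X):
    return (X > threshold).astype(int)

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
y = np.array([0] * 100 + [1] * 100)
err = kfold_error(X, y, fit, predict)
print(err)
```

For two unit-variance Gaussians two standard deviations apart, the true error is about 0.16, which the CV estimate should approximate.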

  7. Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar

    Directory of Open Access Journals (Sweden)

    Zhenxin Cao

    2018-02-01

    The estimation problem for target velocity is addressed in this paper for the scenario of a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with the knowledge of the target position. Then, for the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao lower bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and the velocity estimation performance can be further improved by increasing either the number of radar antennas or the information accuracy of the target position. Furthermore, compared with the existing methods, a better estimation performance can be achieved.

  8. A rapid, simple method for obtaining radiochemically pure hepatic heme

    International Nuclear Information System (INIS)

    Bonkowski, H.L.; Bement, W.J.; Erny, R.

    1978-01-01

    Radioactively-labelled heme has usually been isolated from liver to which unlabelled carrier has been added by long, laborious techniques involving organic solvent extraction followed by crystallization. A simpler, rapid method is devised for obtaining radiochemically-pure heme synthesized in vivo in rat liver from delta-amino[4- 14 C]levulinate. This method, in which the heme is extracted into ethyl acetate/glacial acetic acid and in which porphyrins are removed from the heme-containing organic phase with HCl washes, does not require addition of carrier heme. The new method gives better heme recoveries than and heme specific activities identical to, those obtained using the crystallization method. In this new method heme must be synthesized from delta-amino[4- 14 C]levulinate; it is not satisfactory to use [2- 14 C]glycine substrate because non-heme counts are isolated in the heme fraction. (Auth.)

  9. Rapid estimation of high-parameter auditory-filter shapes

    Science.gov (United States)

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

    2014-01-01

    A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086

  10. ORIGINAL ARTICLES Initial burden of disease estimates for South ...

    African Journals Online (AJOL)

    method is used to estimate the YLDs from the YLL estimates. Results. ... HIV/AIDS can be expected to grow very rapidly in the next few years. ... and II diseases, excluding AIDS. Ill-defined causes within a disease .... Protein-energy malnutrition. COPD. Fires ..... provided.16 National government expenditure on HIV/AIDS.

  11. Characterizing the Frequency and Elevation of Rapid Drainage Events in West Greenland

    Science.gov (United States)

    Cooley, S.; Christoffersen, P.

    2016-12-01

    Rapid drainage of supraglacial lakes on the Greenland Ice Sheet is critical for the establishment of surface-to-bed hydrologic connections and the subsequent transfer of water from surface to bed. Yet, estimates of the number and spatial distribution of rapidly draining lakes vary widely due to limitations in the temporal frequency of image collection and obscuring by cloud. So far, no study has assessed the impact of these observation biases. In this study, we examine the frequency and elevation of rapidly draining lakes in central West Greenland, from 68°N to 72.6°N, and we make a robust statistical analysis to estimate more accurately the likelihood of lakes draining rapidly. Using MODIS imagery and a fully automated lake detection method, we map more than 500 supraglacial lakes per year over a 63,000 km2 study area from 2000-2015. Through testing four different definitions of rapidly draining lakes from previously published studies, we find that the number of rapidly draining lakes varies from 3% to 38%. Logistic regression between rapid drainage events and image sampling frequency demonstrates that the number of rapid drainage events is strongly dependent on the cloud-free observation percentage. We then develop three new drainage criteria and apply an observation bias correction that suggests a true rapid drainage probability between 36% and 45%, considerably higher than previous studies without bias assessment have reported. We find rapidly draining lakes are on average larger and disappear earlier than slow-draining lakes, and we observe no elevation difference for the lakes detected as rapidly draining. We conclude (a) that methodological problems in rapid drainage research caused by observation bias and varying detection methods have obscured large-scale rapid drainage characteristics, and (b) that the lack of evidence for an elevation limit on rapid drainage suggests surface-to-bed hydrologic connections may continue to propagate inland as the climate warms.

  12. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.

    2009-01-01

    We show that the wave equation solution using a conventional finite‐difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method we can obtain the second-order time finite‐difference scheme that is frequently used in more conventional finite‐difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite‐difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with post- and pre-stack migration results.
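The relation between the REM and the conventional scheme can be made explicit. A sketch in standard REM notation, where c is the wave speed and L is the operator with L² = −c²∇² (the exact Chebyshev/Bessel machinery is in the paper; this is only the two-term truncation the abstract refers to):

```latex
% Exact two-step time evolution of the wave field:
u(t+\Delta t) + u(t-\Delta t) = 2\cos(\Delta t\, L)\, u(t),
\qquad L^2 = -c^2 \nabla^2 .
% Keeping only the first two terms of the Chebyshev expansion of the cosine:
\cos(\Delta t\, L) \approx 1 - \tfrac{1}{2}\,\Delta t^2 L^2 ,
% which recovers the conventional second-order time finite-difference scheme:
u^{n+1} \approx 2u^n - u^{n-1} + \Delta t^2\, c^2 \nabla^2 u^n .
```

Retaining higher-order Chebyshev terms tightens the approximation of the cosine operator, which is why the REM tolerates larger time steps without dispersion.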

  13. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink

    2017-08-01

    Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
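The double-integration stage such pipelines share can be sketched in a few lines. The signal below is synthetic and the linear de-drift step is only one common variant of the zero-velocity-update idea, not necessarily the schemes benchmarked in the paper:

```python
import numpy as np

# One stride of hypothetical gravity-compensated forward acceleration,
# sampled at 100 Hz; the foot is flat (zero velocity) at both ends.
fs = 100.0
t = np.arange(0, 1.0, 1.0 / fs)
acc = 4.0 * np.sin(2 * np.pi * t)  # m/s^2, integrates to ~zero net velocity

# Trapezoidal integration acceleration -> velocity.
vel = np.concatenate([[0.0], np.cumsum((acc[1:] + acc[:-1]) / 2) / fs])
# Linear de-drift enforcing the zero-velocity assumption at stride end.
vel -= np.linspace(0.0, vel[-1], len(vel))
# Second trapezoidal integration velocity -> position.
pos = np.concatenate([[0.0], np.cumsum((vel[1:] + vel[:-1]) / 2) / fs])

print(round(pos[-1], 3))  # stride length estimate in metres
```

Real pipelines precede this with orientation estimation to rotate accelerations into a global frame and remove gravity, which is where the benchmarked methods differ most.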

  14. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-square method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to deterministically chaotic low-dimensional dynamic system (the logistic map) containing an observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1 considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit but simpler and has smaller bias than the “multiple shooting” previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least, for the investigated combinations of sample size N and noise level). Besides, unlike some suggested techniques, our method does not require the a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade off between the need of using a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the unique method whose consistency for deterministically chaotic time series is proved so far theoretically (not only numerically).
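The paper's segmentation-fitting ML procedure is more elaborate, but the underlying estimation problem, recovering the structural parameter r of the logistic map from a noisy series, can be illustrated with a one-step least-squares fit, which for this model has a closed form:

```python
import numpy as np

# Simulate a logistic map x_{t+1} = r * x_t * (1 - x_t) in the chaotic
# regime, observed with small additive measurement noise.
r_true, x = 3.7, 0.3
rng = np.random.default_rng(2)
obs = []
for _ in range(200):
    obs.append(x + 0.001 * rng.normal())  # observational noise only
    x = r_true * x * (1.0 - x)
obs = np.array(obs)

# Least squares over consecutive pairs: minimize
# sum_t (x_{t+1} - r * x_t * (1 - x_t))^2, giving a closed-form r_hat.
u = obs[:-1] * (1.0 - obs[:-1])
r_hat = float(np.sum(u * obs[1:]) / np.sum(u * u))
print(round(r_hat, 3))
```

With larger observational noise this naive estimator degrades quickly, which is the regime where the segmentation-fitting ML approach of the paper becomes necessary.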

  15. A comparison study of size-specific dose estimate calculation methods

    Energy Technology Data Exchange (ETDEWEB)

    Parikh, Roshni A. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Michigan Health System, Department of Radiology, Ann Arbor, MI (United States); Wien, Michael A.; Jordan, David W.; Ciancibello, Leslie; Berlin, Sheila C. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Novak, Ronald D. [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Rebecca D. Considine Research Institute, Children's Hospital Medical Center of Akron, Center for Mitochondrial Medicine Research, Akron, OH (United States); Klahr, Paul [CT Clinical Science, Philips Healthcare, Highland Heights, OH (United States); Soriano, Stephanie [Rainbow Babies and Children's Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Washington, Department of Radiology, Seattle, WA (United States)

    2018-01-15

    The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. To compare the accuracy of thickness vs. weight measurement of body size to allow for the calculation of the size-specific dose estimate (SSDE) in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically thinnest, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically thickest; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρ_c) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDI_vol, there was poor correlation, ρ_c < 0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide
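The effective-diameter route to SSDE (methods A and D above) is a geometric mean of the lateral and anteroposterior widths, multiplied into CTDIvol via a size-dependent conversion factor. The exponential fit coefficients below are approximately those published for the 32-cm phantom in AAPM Report 204; verify against the report before any real use:

```python
import math

def effective_diameter(lat_cm, ap_cm):
    """Geometric mean of lateral and anteroposterior body widths."""
    return math.sqrt(lat_cm * ap_cm)

def conversion_factor(d_cm):
    """Size-dependent factor f(D); approximate AAPM 204 32-cm phantom fit."""
    return 3.704369 * math.exp(-0.03671937 * d_cm)

def ssde(ctdi_vol_mgy, lat_cm, ap_cm):
    """SSDE = f(effective diameter) * CTDIvol."""
    return ctdi_vol_mgy * conversion_factor(effective_diameter(lat_cm, ap_cm))

# E.g., a 20 cm x 15 cm pediatric torso scanned at CTDIvol = 5 mGy:
print(round(ssde(5.0, 20.0, 15.0), 2))
```

The water-equivalent diameter of method B replaces the geometric measurement with an attenuation-based one computed from the axial images.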

  16. Development of spectrophotometric fingerprinting method for ...

    African Journals Online (AJOL)

    Selective and efficient analytical methods are required not only for quality assurance but also for authentication of herbal formulations. A simple, rapid and validated fingerprint method has been developed for estimation of piperine in 'Talisadi churna', a well-known herbal formulation in India. The estimation was carried out in two ...

  17. Rapid non-destructive quantitative estimation of urania/ thoria in mixed thorium uranium di-oxide pellets by high-resolution gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Shriwastwa, B.B.; Kumar, Anil; Raghunath, B.; Nair, M.R.; Abani, M.C.; Ramachandran, R.; Majumdar, S.; Ghosh, J.K

    2001-06-01

    A non-destructive technique using high-resolution gamma-ray spectrometry has been standardised for quantitative estimation of uranium/thorium in mixed (ThO2-UO2) fuel pellets of varying composition. Four gamma energies were selected, two each from the uranium and thorium series, and the time of counting has been optimised. This technique can be used for rapid estimation of U/Th percentage in a large number of mixed fuel pellets from a production campaign.

  18. Rapid non-destructive quantitative estimation of urania/ thoria in mixed thorium uranium di-oxide pellets by high-resolution gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Shriwastwa, B.B.; Kumar, Anil; Raghunath, B.; Nair, M.R.; Abani, M.C.; Ramachandran, R.; Majumdar, S.; Ghosh, J.K.

    2001-01-01

    A non-destructive technique using high-resolution gamma-ray spectrometry has been standardised for quantitative estimation of uranium/thorium in mixed (ThO2-UO2) fuel pellets of varying composition. Four gamma energies were selected, two each from the uranium and thorium series, and the time of counting has been optimised. This technique can be used for rapid estimation of U/Th percentage in a large number of mixed fuel pellets from a production campaign.

  19. On the estimation method of compressed air consumption during pneumatic caisson sinking

    OpenAIRE

    平川, 修治; ヒラカワ, シュウジ; Shuji, HIRAKAWA

    1990-01-01

    There are several methods for estimating compressed air consumption during pneumatic caisson sinking, and they need to be compared under the same conditions. In this paper, methods are proposed that can accurately estimate the compressed air consumption during pneumatic caisson sinking.

  20. Application of a rapid screening method to detect irradiated meat in Brazil

    International Nuclear Information System (INIS)

    Villavicencio, A.L.C.H.; Mancini-Filho, J.; Delincee, H.

    2000-01-01

    Based on the enormous potential for food irradiation in Brazil, and to ensure free consumer choice, there is a need to find a convenient and rapid method for the detection of irradiated food. Since treatment with ionising radiation causes DNA fragmentation, the analysis of DNA damage might be promising. In this paper, the DNA Comet Assay was used to identify exotic meat (boar, jacare and capybara) irradiated with 60Co gamma rays. The applied radiation doses were 0, 1.5, 3.0 and 4.5 kGy. Analysis of the DNA migration enabled a rapid identification of the radiation treatment.

  1. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    Science.gov (United States)

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

    This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that have used SPAD as a reference device. On the other hand, the accuracy reached is 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yielded both the reflectance and transmittance information, while the required hardware is cheap.
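
The estimation step described in the abstract, a linear regression with reflectance and transmittance as inputs, can be sketched in a few lines. The training triples and fitted coefficients below are synthetic, generated from an assumed linear relation, not values from the paper:

```python
def solve(A, b):
    """Solve a small linear system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_linear(samples):
    """Least-squares fit of chl = b0 + b1*R + b2*T via the normal equations."""
    X = [[1.0, r, t] for r, t, _ in samples]
    y = [c for _, _, c in samples]
    XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)]
           for i in range(3)]
    Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
    return solve(XtX, Xty)

# (R, T, chlorophyll) triples, generated from chl = 80 - 150*R - 100*T
data = [(0.10, 0.05, 60.0), (0.15, 0.10, 47.5), (0.20, 0.18, 32.0),
        (0.12, 0.07, 55.0), (0.18, 0.15, 38.0), (0.25, 0.22, 20.5)]
b0, b1, b2 = fit_linear(data)
estimate = b0 + b1 * 0.14 + b2 * 0.09   # prediction for a new leaf
```

Because the synthetic data are exactly linear, the fit recovers the generating coefficients; with real camera-derived R/T values the regression would be calibrated against a reference device, as the paper does with the spectrophotometer.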

  2. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves

    Directory of Open Access Journals (Sweden)

    Madaín Pérez-Patricio

    2018-02-01

    Full Text Available This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that have used SPAD as a reference device. On the other hand, the accuracy reached is 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in the chlorophyll content estimation by using an optical arrangement that yielded both the reflectance and transmittance information, while the required hardware is cheap.

  3. Joint Spatio-Temporal Filtering Methods for DOA and Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Benesty, Jacob

    2015-01-01

    some attention in the community and is quite promising for several applications. The proposed methods are based on optimal, adaptive filters that leave the desired signal, having a certain DOA and fundamental frequency, undistorted and suppress everything else. The filtering methods simultaneously...... operate in space and time, whereby it is possible to resolve cases that are otherwise problematic for pitch estimators or DOA estimators based on beamforming. Several special cases and improvements are considered, including a method for estimating the covariance matrix based on the recently proposed...

  4. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2009-01-01

    an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method we can obtain the second

  5. A rapid alpha spectrometric method for estimation of 233U in bulk of thorium

    International Nuclear Information System (INIS)

    Rao, K.S.; Sankar, R.; Dhami, P.S.; Tripathi, S.C.; Gandhi, P.M.

    2015-01-01

    Analytical methods play an important role in the entire nuclear fuel cycle, and almost all of them find application in some way or another in the nuclear industry. Methods which cannot be used directly, owing to selectivity, find application after chemical separation of the analyte from interfering components. The analytical techniques used in the PUREX process are well matured, whereas in the THOREX process the analytical techniques are constantly evolving with regard to simplicity, accuracy and time of analysis.

  6. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

    Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates to both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied which shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.
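
As an illustration of the equivalent-circuit model being identified (not the paper's continuous-time least-squares or instrumental-variable estimator itself), the sketch below recovers the parameters of a hypothetical 1st-order RC model from a noise-free current-step response:

```python
import math

def simulate(ocv, r0, r1, c1, i_load, times):
    """Terminal voltage of a 1st-order RC equivalent circuit under a current step."""
    tau = r1 * c1
    return [ocv - i_load * r0 - i_load * r1 * (1.0 - math.exp(-t / tau))
            for t in times]

def identify(times, volts, ocv, i_load):
    """Recover (r0, r1, c1) from a noise-free step response."""
    r0 = (ocv - volts[0]) / i_load            # instantaneous ohmic drop at t = 0
    v_inf = volts[-1]                         # last sample ~ steady state
    r1 = (ocv - v_inf) / i_load - r0
    # log-linearise the transient: ln(v(t) - v_inf) = ln(i*r1) - t/tau
    ts, ys = times[1:6], [math.log(v - v_inf) for v in volts[1:6]]
    t_m, y_m = sum(ts) / len(ts), sum(ys) / len(ys)
    slope = (sum((t - t_m) * (y - y_m) for t, y in zip(ts, ys))
             / sum((t - t_m) ** 2 for t in ts))
    tau = -1.0 / slope
    return r0, r1, tau / r1

times = [10.0 * k for k in range(21)]         # 0 .. 200 s
volts = simulate(3.7, 0.010, 0.020, 1000.0, 10.0, times)
r0_est, r1_est, c1_est = identify(times, volts, 3.7, 10.0)
```

With measurement noise and fast sampling this naive log-linearisation degrades quickly, which is exactly the sensitivity problem the paper's continuous-time methods are designed to address.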

  7. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications, the inverse problem of absolute permeability estimation and the optimal control problem of estimating smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in the reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers that have been written by the candidate in collaboration with others. These constitute the most important part of the thesis, and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  8. Study of a large rapid ashing apparatus and a rapid dry ashing method for biological samples and its application

    International Nuclear Information System (INIS)

    Jin Meisun; Wang Benli; Liu Wencang

    1988-04-01

    A large rapid-dry-ashing apparatus and a rapid ashing method for biological samples are described. The apparatus consists of a specially made ashing furnace, a gas supply system and a temperature-programming control cabinet. The following advantages have been shown in ashing experiments with the above apparatus: (1) high ashing speed and savings in electric energy; (2) the apparatus can ash a large number of samples at a time; (3) the ashed sample is pure white (or spotless), loose and easily soluble, with little residual char; (4) fresh samples can also be ashed directly. The apparatus is suitable for ashing large numbers of environmental samples containing low-level radioactive trace elements, as well as medical, food and agricultural research samples.

  9. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    Science.gov (United States)

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory gated only and cardiac gated only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVF using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets of two more different noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 3 and 4 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging.
    Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be

  10. A meta-model based approach for rapid formability estimation of continuous fibre reinforced components

    Science.gov (United States)

    Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set-up to predict the impact of design variations on manufacturability and part performance. Thus, the method is
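
The meta-model step can be sketched with a one-dimensional Gaussian-process (RBF-kernel) regression over a single geometry parameter. The sampled radii and shear angles below are invented stand-ins for the paper's Finite-Element draping results, and the kernel settings are arbitrary:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_posterior_mean(xs, ys, length=0.5, noise=1e-8):
    """Posterior mean of a GP with an RBF kernel and a tiny noise term."""
    k = lambda a, b: math.exp(-0.5 * ((a - b) / length) ** 2)
    K = [[k(xi, xj) + (noise if i == j else 0.0) for j, xj in enumerate(xs)]
         for i, xi in enumerate(xs)]
    alpha = solve(K, ys)      # weights so that mean(x) = sum_i alpha_i k(x, x_i)
    return lambda x: sum(a * k(x, xi) for a, xi in zip(alpha, xs))

radius = [0.5, 1.0, 1.5, 2.0, 2.5]       # geometry parameter (invented units)
shear = [42.0, 30.0, 24.0, 21.0, 19.5]   # peak shear angle per draping run
predict = gp_posterior_mean(radius, shear)
mid = predict(1.25)                       # estimate for an unsampled radius
```

Once trained, each evaluation is a handful of kernel calls, which is why such a meta-model makes design exploration cheap compared with re-running the draping simulation.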

  11. Geometric estimation method for x-ray digital intraoral tomosynthesis

    Science.gov (United States)

    Li, Liang; Yang, Yao; Chen, Zhiqiang

    2016-06-01

    It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.
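
The estimation idea can be sketched under an assumed geometry (not necessarily the paper's exact setup): each marker ball at a known 3-D position above the detector plane casts a shadow, and the focal spot is recovered by intersecting the ball-to-shadow lines. All coordinates below are invented for illustration:

```python
def project_to_detector(source, marker):
    """Shadow of a marker on the detector plane z = 0 cast by a point source."""
    sx, sy, sz = source
    mx, my, mz = marker
    t = sz / (sz - mz)                   # parameter where the ray meets z = 0
    return (sx + t * (mx - sx), sy + t * (my - sy), 0.0)

def closest_point_between_lines(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p + s*d (a least-squares
    intersection; equals the true intersection when the lines meet)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b                # nonzero unless the lines are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = tuple(p + s * v for p, v in zip(p1, d1))
    q2 = tuple(p + t * v for p, v in zip(p2, d2))
    return tuple((u + v) / 2.0 for u, v in zip(q1, q2))

source = (10.0, 20.0, 300.0)                     # ground-truth focal spot (mm)
m1, m2 = (50.0, 40.0, 5.0), (-30.0, 60.0, 5.0)   # known marker-ball centres
s1, s2 = project_to_detector(source, m1), project_to_detector(source, m2)
d1 = tuple(a - b for a, b in zip(m1, s1))        # shadow -> marker direction
d2 = tuple(a - b for a, b in zip(m2, s2))
estimate = closest_point_between_lines(s1, d1, s2, d2)
```

With noisy shadow centres the two lines no longer meet exactly, and the midpoint of the shortest connecting segment serves as the least-squares focal-spot estimate.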

  12. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease

    International Nuclear Information System (INIS)

    Lopes, Antonio Augusto; Miranda, Rogerio dos Anjos; Goncalves, Rilvani Cavalcante; Thomaz, Ana Maria

    2009-01-01

    In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Using Microsoft Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. The organized matrix allows replicate parameter estimates to be obtained rapidly, without error due to exhaustive calculations. (author)
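
The replicate-estimate idea can be sketched with the indirect Fick principle, Qp = VO2 / (CpvO2 - CpaO2), evaluated once per predicted oxygen consumption. The paper's five prediction equations are not reproduced in the abstract, so the predicted VO2 values and patient numbers below are placeholders:

```python
def o2_content(hb_g_dl, sat):
    """O2 content in mL O2 per litre of blood (1.36 mL O2 per g of haemoglobin)."""
    return 1.36 * hb_g_dl * sat * 10.0

def fick_flow(vo2_ml_min, c_out, c_in):
    """Blood flow in L/min from VO2 and the O2-content difference across the bed."""
    return vo2_ml_min / (c_out - c_in)

hb = 14.0                                  # haemoglobin, g/dL
c_pv = o2_content(hb, 0.98)                # pulmonary venous O2 content, mL/L
c_pa = o2_content(hb, 0.78)                # pulmonary arterial O2 content, mL/L
vo2_predictions = [110.0, 118.0, 125.0, 132.0, 140.0]   # mL/min (placeholders)

flows = [fick_flow(v, c_pv, c_pa) for v in vo2_predictions]
qp_low, qp_high = min(flows), max(flows)   # likely range for pulmonary flow Qp
```

Reporting the [qp_low, qp_high] interval instead of a single flow value is exactly the "likely range" output that the matrix produces for each hemodynamic parameter.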

  13. Vegetation index methods for estimating evapotranspiration by remote sensing

    Science.gov (United States)

    Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.

    2010-01-01

    Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectroradiometer on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.
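
A common form of the VI methods reviewed here scales ground-based reference ET by a crop coefficient derived linearly from a vegetation index. The sketch below uses illustrative coefficients, not values from any specific reviewed algorithm; real studies calibrate the VI-to-coefficient mapping against flux-tower ETa for each biome:

```python
def crop_coefficient(ndvi, a=1.25, b=-0.15):
    """Linear VI-to-Kc mapping, clipped to a plausible range (illustrative)."""
    return max(0.0, min(1.2, a * ndvi + b))

def eta_series(ndvi_series, et0_series):
    """Daily actual ET (mm/day) from NDVI and reference-ET time series."""
    return [crop_coefficient(v) * e for v, e in zip(ndvi_series, et0_series)]

ndvi = [0.25, 0.45, 0.70, 0.80, 0.60]   # satellite NDVI composites
et0 = [5.0, 6.0, 7.0, 6.5, 5.5]         # ground-based reference ET, mm/day
eta = eta_series(ndvi, et0)             # eta[0] is about 0.16 * 5.0
```

The clipping reflects the physical bounds of the coefficient: bare soil contributes little transpiration, while dense canopies rarely exceed reference ET by more than about 20%.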

  14. Simple method for quick estimation of aquifer hydrogeological parameters

    Science.gov (United States)

    Ma, C.; Li, Y. Y.

    2017-08-01

    Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
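
The abstract does not reproduce the paper's fitted approximation to the Theis well function, so the sketch below illustrates the same workflow with the classical Cooper-Jacob straight-line simplification: regress drawdown on ln(t), then read transmissivity T and storativity S off the slope and intercept. The pumping-test numbers are synthetic:

```python
import math

# Cooper-Jacob approximation (valid for small u = r^2*S / (4*T*t)):
#   s(t) = (Q / (4*pi*T)) * ln(2.25*T*t / (r^2 * S))

def drawdown(Q, T, S, r, t):
    return Q / (4.0 * math.pi * T) * math.log(2.25 * T * t / (r * r * S))

def fit_aquifer(times, s_obs, Q, r):
    """Regress drawdown on ln(t); invert slope/intercept for T and S."""
    xs = [math.log(t) for t in times]
    n = len(xs)
    x_m, s_m = sum(xs) / n, sum(s_obs) / n
    slope = (sum((x - x_m) * (s - s_m) for x, s in zip(xs, s_obs))
             / sum((x - x_m) ** 2 for x in xs))
    intercept = s_m - slope * x_m
    T = Q / (4.0 * math.pi * slope)                  # slope = Q / (4*pi*T)
    S = 2.25 * T / (r * r * math.exp(intercept / slope))
    return T, S

Q, r = 0.01, 30.0                        # pumping rate (m^3/s), distance (m)
times = [600.0, 1200.0, 3600.0, 7200.0, 14400.0]               # seconds
s_obs = [drawdown(Q, 0.005, 2.0e-4, r, t) for t in times]      # synthetic data
T_est, S_est = fit_aquifer(times, s_obs, Q, r)
```

Because the synthetic drawdowns follow the approximation exactly, the regression recovers T = 0.005 m²/s and S = 2e-4; with field data the scatter of the residuals indicates how well the small-u assumption holds.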

  15. Analysis of blind identification methods for estimation of kinetic parameters in dynamic medical imaging

    Science.gov (United States)

    Riabkov, Dmitri

    Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use the simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, the measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for two-compartment model tissue response. For two-compartment model tissue responses in dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques where the blood input is known are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model the tissue responses in dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). A method of

  16. Comparing models of rapidly rotating relativistic stars constructed by two numerical methods

    Science.gov (United States)

    Stergioulas, Nikolaos; Friedman, John L.

    1995-05-01

    We present the first direct comparison of codes based on two different numerical methods for constructing rapidly rotating relativistic stars. A code based on the Komatsu-Eriguchi-Hachisu (KEH) method (Komatsu et al. 1989), written by Stergioulas, is compared to the Butterworth-Ipser code (BI), as modified by Friedman, Ipser, & Parker. We compare models obtained by each method and evaluate the accuracy and efficiency of the two codes. The agreement is surprisingly good, and error bars in the published numbers for maximum frequencies based on BI are dominated not by the code inaccuracy but by the number of models used to approximate a continuous sequence of stars. The BI code is faster per iteration, and it converges more rapidly at low density, while KEH converges more rapidly at high density; KEH also converges in regions where BI does not, allowing one to compute some models unstable against collapse that are inaccessible to the BI code. A relatively large discrepancy recently reported (Eriguchi et al. 1994) for models based on the Friedman-Pandharipande equation of state is found to arise from the use of two different versions of the equation of state. For two representative equations of state, the two-dimensional space of equilibrium configurations is displayed as a surface in a three-dimensional space of angular momentum, mass, and central density. We find, for a given equation of state, that equilibrium models with maximum values of mass, baryon mass, and angular momentum are (generically) either all unstable to collapse or are all stable. In the first case, the stable model with maximum angular velocity is also the model with maximum mass, baryon mass, and angular momentum. In the second case, the stable models with maximum values of these quantities are all distinct. Our implementation of the KEH method will be available as a public domain program for interested users.

  17. Advances in Time Estimation Methods for Molecular Data.

    Science.gov (United States)

    Kumar, Sudhir; Hedges, S Blair

    2016-04-01

    Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species.
Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data

  18. Estimating incidence from prevalence in generalised HIV epidemics: methods and validation.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2008-04-01

    Full Text Available HIV surveillance of generalised epidemics in Africa primarily relies on prevalence at antenatal clinics, but estimates of incidence in the general population would be more useful. Repeated cross-sectional measures of HIV prevalence are now becoming available for general populations in many countries, and we aim to develop and validate methods that use these data to estimate HIV incidence. Two methods were developed that decompose observed changes in prevalence between two serosurveys into the contributions of new infections and mortality. Method 1 uses cohort mortality rates, and method 2 uses information on survival after infection. The performance of these two methods was assessed using simulated data from a mathematical model and actual data from three community-based cohort studies in Africa. Comparison with simulated data indicated that these methods can accurately estimate incidence rates and changes in incidence in a variety of epidemic conditions. Method 1 is simple to implement but relies on locally appropriate mortality data, whilst method 2 can make use of the same survival distribution in a wide range of scenarios. The estimates from both methods are within the 95% confidence intervals of almost all actual measurements of HIV incidence in adults and young people, and the patterns of incidence over age are correctly captured. It is possible to estimate incidence from cross-sectional prevalence data with sufficient accuracy to monitor the HIV epidemic. Although these methods will theoretically work in any context, we have been able to test them only in southern and eastern Africa, where HIV epidemics are mature and generalised. The choice of method will depend on the local availability of HIV mortality data.
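
A simplified version of the decomposition, close in spirit to method 2 but not the paper's exact estimator, can be sketched as follows. Background mortality of the uninfected is ignored for brevity, and all the numbers are invented:

```python
import math

def incidence_from_prevalence(p1, p2, dt_years, surv_infected):
    """Constant annual incidence rate among susceptibles (per person-year)
    reconciling prevalence p1 -> p2 over dt_years, given the fraction of
    infected people who survive the interval."""
    total = p1 * surv_infected + (1.0 - p1)         # population left after deaths
    # fraction of the initially susceptible group newly infected by the survey
    new_frac = (p2 * total - p1 * surv_infected) / (1.0 - p1)
    q = 1.0 - new_frac                              # fraction still uninfected
    return -math.log(q) / dt_years                  # exponential-hazard inversion

# Example: prevalence rises from 10% to 12% over 3 years while 85% of the
# initially infected survive the interval.
rate = incidence_from_prevalence(0.10, 0.12, 3.0, 0.85)   # about 1.25 per 100 PY
```

The key point the sketch preserves is that mortality among the infected masks new infections: with no mortality term, the same prevalence change would imply a much lower incidence.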

  19. A direct and rapid method to determine cyanide in urine by capillary electrophoresis.

    Science.gov (United States)

    Zhang, Qiyang; Maddukuri, Naveen; Gong, Maojun

    2015-10-02

    Cyanides are poisonous chemicals that occur widely in nature, in industrial processes, and in accidental fires. Rapid and accurate determination of cyanide exposure would facilitate forensic investigation, medical diagnosis, and chronic cyanide monitoring. Here, a rapid and direct method was developed for the determination of cyanide ions in urinary samples. This technique was based on an integrated capillary electrophoresis system coupled with laser-induced fluorescence (LIF) detection. Cyanide ions were derivatized with naphthalene-2,3-dicarboxaldehyde (NDA) and a primary amine (glycine) for LIF detection. Three separate reagents, NDA, glycine, and cyanide sample, were mixed online, which secured uniform conditions between samples for cyanide derivatization and reduced the risk of precipitate formation in the mixtures. Conditions were optimized; the derivatization was completed in 2-4 min, and the separation was observed in 25 s. The limit of detection (LOD) was 4.0 nM at a signal-to-noise ratio of 3 for standard cyanide in buffer. The cyanide levels in urine samples from smokers and non-smokers were determined by using the method of standard addition, which demonstrated a significant difference in cyanide levels between the two groups. The developed method was rapid and accurate, and is anticipated to be applicable to cyanide detection in waste water with appropriate modification. Published by Elsevier B.V.

  20. An optimized rapid bisulfite conversion method with high recovery of cell-free DNA.

    Science.gov (United States)

    Yi, Shaohua; Long, Fei; Cheng, Juanbo; Huang, Daixin

    2017-12-19

    Methylation analysis of cell-free DNA is an encouraging tool for tumor diagnosis, monitoring and prognosis. The sensitivity of methylation analysis is critically important because of the tiny amounts of cell-free DNA available in plasma. Most current methods of DNA methylation analysis are based on the difference in bisulfite-mediated deamination between cytosine and 5-methylcytosine. However, the recovery of bisulfite-converted DNA with current methods is very poor for the methylation analysis of cell-free DNA. We optimized a rapid method for the crucial steps of bisulfite conversion with high recovery of cell-free DNA. A rapid deamination step and alkaline desulfonation were combined with the purification of DNA on a silica column. The conversion efficiency and recovery of bisulfite-treated DNA were investigated by droplet digital PCR. The optimized reaction results in complete cytosine conversion in 30 min at 70 °C and about 65% recovery of bisulfite-treated cell-free DNA, which is higher than that of current methods. The method allows high recovery from low levels of bisulfite-treated cell-free DNA, enhancing the sensitivity of methylation detection from cell-free DNA.

  1. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac

  2. Estimating building energy consumption using extreme learning machine method

    International Nuclear Information System (INIS)

    Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey

    2016-01-01

    The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand for energy, as well as the construction materials used in buildings, is becoming increasingly problematic for the earth's sustainable future and has led to alarming concern. The energy efficiency of buildings can be improved, and in order to do so their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. The method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose, up to 180 simulations were carried out for different material thicknesses and insulation properties using the EnergyPlus software application. The estimates and predictions obtained by the ELM model are compared with GP (genetic programming) and ANN (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • Extreme learning machine is used to estimate energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
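
    The core of the ELM approach is a fixed random hidden layer whose output weights are obtained in closed form by least squares. A minimal sketch on synthetic stand-in data (the two inputs play the role of material thickness and K-value; the target function is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 180 EnergyPlus runs: inputs mimic material
# thickness and insulation K-value; the target is an arbitrary smooth
# function standing in for simulated energy consumption.
X = rng.uniform(0.0, 1.0, size=(180, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * np.sin(4.0 * X[:, 0])

# ELM: a random, untrained hidden layer; only the output weights are
# learned, via a single least-squares solve.
n_hidden = 50
W = rng.normal(size=(2, n_hidden))            # random input weights (fixed)
b = rng.normal(size=n_hidden)                 # random biases (fixed)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights

rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
print(rmse < 0.1)
```

    Because training is a single linear solve rather than iterative back-propagation, fitting is very fast, which is one reason ELM is attractive against GP and ANN baselines.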

  3. Methods and measurement variance for field estimations of coral colony planar area using underwater photographs and semi-automated image segmentation.

    Science.gov (United States)

    Neal, Benjamin P; Lin, Tsung-Han; Winter, Rivah N; Treibitz, Tali; Beijbom, Oscar; Kriegman, David; Kline, David I; Greg Mitchell, B

    2015-08-01

    Size and growth rates for individual colonies are some of the most essential descriptive parameters for understanding coral communities, which are currently experiencing worldwide declines in health and extent. Accurately measuring coral colony size and changes over multiple years can reveal demographic, growth, or mortality patterns often not apparent from short-term observations and can expose environmental stress responses that may take years to manifest. Describing community size structure can reveal population dynamics patterns, such as periods of failed recruitment or patterns of colony fission, which have implications for the future sustainability of these ecosystems. However, rapidly and non-invasively measuring coral colony sizes in situ remains a difficult task, as three-dimensional underwater digital reconstruction methods are currently not practical for large numbers of colonies. Two-dimensional (2D) planar area measurements from projection of underwater photographs are a practical size proxy, although this method presents operational difficulties in obtaining well-controlled photographs in the highly rugose environment of the coral reef, and requires extensive time for image processing. Here, we present and test the measurement variance for a method of making rapid planar area estimates of small to medium-sized coral colonies using a lightweight monopod image-framing system and a custom semi-automated image segmentation analysis program. This method demonstrated a coefficient of variation of 2.26% for repeated measurements in realistic ocean conditions, a level of error appropriate for rapid, inexpensive field studies of coral size structure, inferring change in colony size over time, or measuring bleaching or disease extent of large numbers of individual colonies.
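
    The reported measurement variance is a coefficient of variation (CV): the sample standard deviation of repeated measurements of one colony expressed as a percentage of their mean. A quick sketch with made-up areas:

```python
import numpy as np

# Repeated planar-area measurements of a single colony (synthetic values, cm^2);
# the paper reports a CV of 2.26% for repeats in realistic ocean conditions.
areas = np.array([412.0, 405.0, 420.0, 409.0, 415.0])

cv_percent = 100.0 * areas.std(ddof=1) / areas.mean()
print(round(cv_percent, 2))  # 1.39 for this synthetic series
```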

  4. Conventional estimating method of earthquake response of mechanical appendage system

    International Nuclear Information System (INIS)

    Aoki, Shigeru; Suzuki, Kohei

    1981-01-01

    Generally, the earthquake response of an appendage structure system installed in a main structure system has been estimated by floor response analysis, using the response spectra at the point where the appendage system is installed. Research has also been reported on estimating the earthquake response of appendage systems by statistical procedures based on probability process theory. The development of a practical method for simply estimating the response is an important subject in aseismatic engineering. In this study, a method of estimating the earthquake response of an appendage system was investigated for the general case in which the natural frequencies of the two structure systems differ. First, it was shown that the floor response amplification factor (FRAF) can be estimated simply from the ratio of the natural frequencies of the two structure systems, and its statistical properties were clarified. Next, it was shown that the procedure of expressing acceleration, velocity and displacement responses simultaneously with tri-axial response spectra can be applied to the expression of the FRAF. The applicability of this procedure to nonlinear systems was examined. (Kako, I.)

  5. Efficient Methods of Estimating Switchgrass Biomass Supplies

    Science.gov (United States)

    Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...

  6. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    Science.gov (United States)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) for a given norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble-transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
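
    The breed-rescale-orthogonalise loop can be sketched with a toy chaotic system standing in for the mesoscale model (the Lorenz-63 equations and the Euler integrator here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x, dt=0.005, n=100):
    """Toy chaotic forecast model (Lorenz-63, Euler) standing in for the mesoscale model."""
    for _ in range(n):
        dx = 10.0 * (x[1] - x[0])
        dy = x[0] * (28.0 - x[2]) - x[1]
        dz = x[0] * x[1] - (8.0 / 3.0) * x[2]
        x = x + dt * np.array([dx, dy, dz])
    return x

x0 = step(np.array([1.0, 1.0, 1.0]), n=2000)  # spin up onto the attractor
eps = 1e-3                                    # rescaling amplitude
P = np.linalg.qr(rng.normal(size=(3, 2)))[0]  # two orthonormal initial perturbations

for _ in range(20):                           # rapid breeding cycle
    x1 = step(x0)                             # unperturbed forecast
    grown = np.column_stack(
        [step(x0 + eps * P[:, j]) - x1 for j in range(2)]
    )
    P, _ = np.linalg.qr(grown)  # ensemble-transform style orthogonalisation + rescale
    x0 = x1

# Columns of P approximate the leading local Lyapunov directions at x0.
print(P.shape)  # (3, 2)
```

    The QR step plays the role of the ensemble transform: without it, both columns would collapse onto the leading bred vector.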

  7. Methods of albumin estimation in clinical biochemistry: Past, present, and future.

    Science.gov (United States)

    Kumar, Deepak; Banerjee, Dibyajyoti

    2017-06-01

    Estimation of serum and urinary albumin is routinely performed in clinical biochemistry laboratories. In the past, precipitation-based methods were popular for the estimation of human serum albumin (HSA). Currently, dye-binding and immunochemical methods are widely practiced. Each of these methods has its limitations, and research endeavors to overcome them are ongoing. The current methodological trends guiding the field have not been reviewed, so a review of several aspects of albumin estimation is timely. The present review focuses on modern research trends from a conceptual point of view and gives an overview of recent developments, offering the reader a comprehensive understanding of the subject. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A rapid method for determining the relative solubility of plutonium aerosols

    International Nuclear Information System (INIS)

    Miglio, J.J.; Muggenburg, B.A.; Brooks, A.L.

    1977-01-01

    An in vitro system for rapidly determining the relative solubilities of plutonium-containing aerosols produced at various temperatures has been developed. Aerosols were prepared by nebulizing a solution of Pu(IV) in 1 M HCl and subsequently heating at 50, 325, 600, 900, 1150 and 1300 degrees C. These aerosols were then evaluated for relative solubility, and the results were compared with in vivo data from beagle dogs and Chinese hamsters. Aerosol samples from animal inhalation exposures were collected on filters, and a section was sandwiched between 100 nm membranes held in a two-piece, cylindrical polyethylene holder. The holder and filter were placed in a container of solvent and stirred gently, after which the filter and solvent were separately analyzed for Pu. The effects of solvent composition, volume and temperature, as well as immersion time, were investigated. The results showed that a solvent of 0.1 N HCl at 23 degrees C and an immersion time of 2 hr dissolved enough plutonium to be easily assayed with a liquid scintillation counter, providing a rapid estimate of the solubility rate of the aerosol. The in vivo and in vitro results were in relative agreement: as the production temperature of the aerosol increased, the solubility decreased. (author)

  9. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Source number estimation methods for single channel signals are investigated and improvements for each method are suggested in this work. Firstly, the single channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR, but it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is unsatisfactory. To resolve these contradictions, this work makes improvements to both methods: a diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is applied to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of both original methods is substantially improved.
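
    The delay-embedding plus MDL idea can be sketched as follows. The signal, embedding dimension and frequencies are invented for illustration; note that each real sinusoid contributes two complex exponentials, so two tones give a signal subspace of dimension four:

```python
import numpy as np

rng = np.random.default_rng(2)

# Single-channel signal: two sinusoids plus white noise.
n = 2000
t = np.arange(n)
x = np.sin(0.2 * np.pi * t) + 0.7 * np.sin(0.34 * np.pi * t) + 0.1 * rng.normal(size=n)

# Delay embedding converts the single channel into pseudo multi-channel snapshots.
m = 10
snapshots = np.column_stack([x[i:n - m + i + 1] for i in range(m)])  # shape (N, m)
N = snapshots.shape[0]
R = snapshots.T @ snapshots / N             # sample covariance matrix
eig = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues, descending

def mdl(k):
    """Minimum description length criterion for k sources."""
    tail = eig[k:]                                        # smallest m-k eigenvalues
    ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geometric/arithmetic mean
    return -N * (m - k) * np.log(ratio) + 0.5 * k * (2 * m - k) * np.log(N)

k_hat = min(range(m), key=mdl)
print(k_hat)  # 4: two real sinusoids span a 4-dimensional signal subspace
```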

  10. A rapid colorimetric screening method for vanillic acid and vanillin-producing bacterial strains.

    Science.gov (United States)

    Zamzuri, N A; Abd-Aziz, S; Rahim, R A; Phang, L Y; Alitheen, N B; Maeda, T

    2014-04-01

    To isolate a bacterial strain capable of biotransforming ferulic acid, a major component of lignin, into vanillin and vanillic acid using a rapid colorimetric screening method. For the production of vanillin, a natural aroma compound, we attempted to isolate a potential strain using a simple screening method based on the pH change resulting from the degradation of ferulic acid. The strain Pseudomonas sp. AZ10 UPM exhibited a significant result because of the colour change observed on the assay plate on day 1, with a high intensity of yellow colour. The biotransformation of ferulic acid into vanillic acid by the AZ10 strain provided a yield (Yp/s) and productivity (Pr) of 1.08 mg mg(-1) and 53.1 mg l(-1) h(-1), respectively. In fact, new investigations regarding lignin degradation revealed that the strain was not able to produce vanillin and vanillic acid directly from lignin; however, lignin partially digested by a mixed enzymatic treatment allowed the strain to produce 30.7 mg l(-1) and 1.94 mg l(-1) of vanillic acid and biovanillin, respectively. (i) The rapid colorimetric screening method allowed the isolation of a biovanillin producer using ferulic acid as the sole carbon source. (ii) Enzymatic treatment partially digested lignin, which could then be utilized by the strain to produce biovanillin and vanillic acid. To the best of our knowledge, this is the first study reporting the use of a rapid colorimetric screening method for bacterial strains producing vanillin and vanillic acid from ferulic acid. © 2013 The Society for Applied Microbiology.

  11. Improved Battery Parameter Estimation Method Considering Operating Scenarios for HEV/EV Applications

    Directory of Open Access Journals (Sweden)

    Jufeng Yang

    2016-12-01

    This paper presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, with the length of the fitted dataset determined by spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure higher model fidelity. Simulation and experimental results validated the feasibility of the developed estimation method.
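
    The rest-period fitting idea can be sketched for a single RC branch: after a pulse, the terminal voltage relaxes exponentially toward the open-circuit voltage, so the time constant follows from a log-linear least-squares fit. All numbers below are illustrative, not from the paper:

```python
import numpy as np

# Synthetic rest-period relaxation after a discharge pulse: the terminal
# voltage recovers toward open-circuit voltage as the RC network discharges.
v_oc, a_true, tau_true = 3.70, 0.05, 40.0  # volts, volts, seconds (made up)
t = np.arange(0.0, 200.0, 1.0)
v = v_oc - a_true * np.exp(-t / tau_true)

# Linearise: log(v_oc - v) = log(a) - t/tau, then ordinary least squares.
y = np.log(v_oc - v)
slope, intercept = np.polyfit(t, y, 1)
tau_est, a_est = -1.0 / slope, np.exp(intercept)
print(round(tau_est, 1), round(a_est, 3))  # 40.0 0.05
```

    With two RC branches (as in the paper's model), a nonlinear fit of a double-exponential would replace the log-linear step; the principle of fitting the rest-period relaxation is the same.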

  12. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    Science.gov (United States)

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
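
    The annual water-balance residual described above reduces to a one-line computation (illustrative numbers, with storage change assumed negligible on the annual time scale):

```python
# Annual basin water balance: ET is the residual ET = P - Q - dS.
# Depths are illustrative values in mm/year, not from the review.
precip_mm = 850.0        # basin-average annual precipitation
discharge_mm = 270.0     # annual streamflow depth (discharge volume / basin area)
storage_change_mm = 0.0  # assumed negligible over a full year

et_mm = precip_mm - discharge_mm - storage_change_mm
print(et_mm)  # 580.0
```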

  13. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Fumi; Kamishima, Tamotsu; Morita, Ken; Muto, Natalia S.; Okamoto, Syozou; Omatsu, Tokuhiko; Oyama, Noriko; Terae, Satoshi; Kanegae, Kakuko; Nonomura, Katsuya; Shirato, Hiroki (Hokkaido University Graduate School of Medicine, Sapporo, Japan)

    2011-07-15

    Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using newly developed software for computed tomographic (CT) volumetry, and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.
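
    SRV itself is just a volume ratio, so once the volumetry is done the split is a one-line computation (illustrative volumes, not from the study):

```python
# Split renal volume (SRV): one kidney's volume as a percentage of the total,
# used as a proxy for split renal function. Volumes are illustrative (mL).
left_ml, right_ml = 152.0, 148.0

srv_left = 100.0 * left_ml / (left_ml + right_ml)
print(round(srv_left, 1))  # 50.7 -> the left kidney would be the dominant side
```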

  14. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  15. The estimation of the measurement results with using statistical methods

    International Nuclear Information System (INIS)

    Velychko, O (State Enterprise Ukrmetrteststandard, 4, Metrologichna Str., 03680, Kyiv, Ukraine); Gordiyenko, T (State Scientific Institution UkrNDIspirtbioprod, 3, Babushkina Lane, 03190, Kyiv, Ukraine)

    2015-01-01

    A number of international standards and guides describe various statistical methods that can be applied to the management, control and improvement of processes for the purpose of analysing technical measurement results. International standards and guides on statistical methods for the estimation of measurement results are analysed, together with recommendations for their application in laboratories. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed

  16. The estimation of the measurement results with using statistical methods

    Science.gov (United States)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that can be applied to the management, control and improvement of processes for the purpose of analysing technical measurement results. International standards and guides on statistical methods for the estimation of measurement results are analysed, together with recommendations for their application in laboratories. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.

  17. Internal Dosimetry Intake Estimation using Bayesian Methods

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.; Martz, H.F.

    1999-01-01

    New methods for the inverse problem of internal dosimetry are proposed, based on evaluating expectations of the Bayesian posterior probability distribution of intake amounts given bioassay measurements. These expectation integrals are normally of very high dimension and hence impractical to use. However, the expectations can be algebraically transformed into a sum of terms representing different numbers of intakes, with a Poisson distribution of the number of intakes. This sum often converges rapidly when the average number of intakes for a population is small. A simplified algorithm using data unfolding is described (UF code). (author)

  18. Phylogenetic uncertainty can bias the number of evolutionary transitions estimated from ancestral state reconstruction methods.

    Science.gov (United States)

    Duchêne, Sebastian; Lanfear, Robert

    2015-09-01

    Ancestral state reconstruction (ASR) is a popular method for exploring the evolutionary history of traits that leave little or no trace in the fossil record. For example, it has been used to test hypotheses about the number of evolutionary origins of key life-history traits such as oviparity, or key morphological structures such as wings. Many studies that use ASR have suggested that the number of evolutionary origins of such traits is higher than was previously thought. The scope of such inferences is increasing rapidly, facilitated by the construction of very large phylogenies and life-history databases. In this paper, we use simulations to show that the number of evolutionary origins of a trait tends to be overestimated when the phylogeny is not perfect. In some cases, the estimated number of transitions can be several fold higher than the true value. Furthermore, we show that the bias is not always corrected by standard approaches to account for phylogenetic uncertainty, such as repeating the analysis on a large collection of possible trees. These findings have important implications for studies that seek to estimate the number of origins of a trait, particularly those that use large phylogenies that are associated with considerable uncertainty. We discuss the implications of this bias, and methods to ameliorate it. © 2015 Wiley Periodicals, Inc.

  19. Application of a rapid screening method to detect irradiated meat in Brazil

    International Nuclear Information System (INIS)

    Villavicencio, A.L.C.H.; Delincee, H.

    1998-01-01

    Complete text of publication follows. Based on the enormous potential for food irradiation in Brazil, and to ensure free consumer choice, there is a need for a convenient and rapid method for the detection of irradiated food. Since treatment with ionizing radiation causes DNA fragmentation, the analysis of DNA damage might be promising. In fact, DNA fragmentation measured in single cells by agarose gel electrophoresis - the DNA Comet Assay - has been shown to offer great potential as a rapid tool to detect whether a wide variety of foodstuffs has been radiation processed. However, more work is needed to exploit the full potential of this promising technique. In this paper, the DNA Comet Assay was used to identify exotic meat (boar, jacare and capybara) irradiated with 60Co gamma-rays. The applied radiation doses were 0, 1.5, 3.0 and 4.5 kGy. Analysis of the DNA migration enabled rapid identification of the radiation treatment

  20. Power system frequency estimation based on an orthogonal decomposition method

    Science.gov (United States)

    Lee, Chih-Hung; Tsai, Men-Shen

    2018-06-01

    In recent years, several techniques have been proposed to estimate frequency variations in power systems. In order to properly identify power quality issues in asynchronously-sampled signals contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator is needed that can precisely estimate both the frequency and the rate of frequency change. However, accurately estimating the fundamental frequency becomes very difficult without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which maintains the required frequency characteristics of the orthogonal filters and improves the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
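
    The sliding-DFT idea behind such estimators can be sketched in a simplified single-stage form: the phase advance of one DFT bin between windows one sample apart encodes the true frequency, even when it falls between bins. This illustrates the principle only, not the paper's two-stage orthogonal-filter scheme:

```python
import numpy as np

fs = 1000.0   # sampling rate, Hz (illustrative)
f_true = 50.2 # off-nominal power system frequency
n = np.arange(256)
x = np.sin(2 * np.pi * f_true * n / fs)

N = 200                      # window length
k = round(50.0 * N / fs)     # DFT bin nearest the nominal 50 Hz
w = np.exp(-2j * np.pi * k * np.arange(N) / N)
X0 = np.sum(x[0:N] * w)      # bin value, window starting at sample 0
X1 = np.sum(x[1:N + 1] * w)  # same bin, window shifted by one sample

dphi = np.angle(X1 * np.conj(X0))  # phase advance per sample
f_est = dphi * fs / (2 * np.pi)
print(round(f_est, 2))
```

    Leakage from the negative-frequency component of the real sine introduces a small bias, which is one motivation for the orthogonal-filter refinements in the paper.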

  1. Rapid estimation of organic nitrogen in oil shale waste waters

    Energy Technology Data Exchange (ETDEWEB)

    Jones, B.M.; Daughton, C.G.; Harris, G.J.

    1984-04-01

    Many of the characteristics of oil shale process waste waters (e.g., malodors, color, and resistance to biotreatment) are imparted by numerous nitrogenous heterocycles and aromatic amines. For the frequent performance assessment of waste treatment processes designed to remove these nitrogenous organic compounds, a rapid and colligative measurement of organic nitrogen is essential. Quantification of organic nitrogen in biological and agricultural samples is usually accomplished using the time-consuming, wet-chemical Kjeldahl method. For oil shale waste waters, whose primary inorganic nitrogen constituent is ammonia, organic Kjeldahl nitrogen (OKN) is determined by first eliminating the endogenous ammonia by distillation and then digesting the sample in boiling H2SO4. The organic material is oxidized, and most forms of organically bound nitrogen are released as ammonium ion. After the addition of base, the ammonia is separated from the digestate by distillation and quantified by acidimetric titrimetry or colorimetry. The major failings of this method are the loss of volatile species such as aliphatic amines (during predistillation) and the inability to completely recover nitrogen from many nitrogenous heterocycles (during digestion). Within the last decade, a new approach has been developed for the quantification of total nitrogen (TN). The sample is first combusted…

  2. Comparison of methods for estimating herbage intake in grazing dairy cows

    DEFF Research Database (Denmark)

    Hellwing, Anne Louise Frydendahl; Lund, Peter; Weisbjerg, Martin Riis

    2015-01-01

    Estimation of herbage intake is a challenge both under practical and experimental conditions. The aim of this study was to estimate herbage intake with different methods for cows grazing 7 h daily on either spring or autumn pastures. In order to generate variation between cows, the 20 cows per...... season, and the herbage intake was estimated twice during each season. Cows were on pasture from 8:00 until 15:00, and were subsequently housed inside and fed a mixed ration (MR) based on maize silage ad libitum. Herbage intake was estimated with nine different methods: (1) animal performance (2) intake...

  3. A Rapid Method for the Determination of Fucoxanthin in Diatom

    Directory of Open Access Journals (Sweden)

    Li-Juan Wang

    2018-01-01

    Fucoxanthin is a natural pigment found in microalgae, especially diatoms and Chrysophyta. Recently, it has been shown to have anti-inflammatory, anti-tumor, and anti-obesity activity in humans. Phaeodactylum tricornutum is a diatom with high economic potential due to its high content of fucoxanthin and eicosapentaenoic acid. In order to improve fucoxanthin production, physical and chemical mutagenesis could be applied to generate mutants, and an accurate and rapid method to assess fucoxanthin content is a prerequisite for a high-throughput screen of such mutants. In this work, the fucoxanthin content of P. tricornutum was determined using spectrophotometry instead of high performance liquid chromatography (HPLC). The spectrophotometric method is easier and faster than liquid chromatography, with a standard error of less than 5% relative to the HPLC results. The method can also be applied to other diatoms, with standard errors of 3-14.6%. It provides a high-throughput screening method for fucoxanthin-producing microalgae strains.
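
    Spectrophotometric pigment quantification rests on the Beer-Lambert law, c = A / (ε·l). A one-step sketch with a placeholder absorption coefficient (the paper's calibration against HPLC would supply the real value):

```python
# Beer-Lambert estimate of pigment concentration, c = A / (eps * l).
# The absorption coefficient below is a placeholder, not the paper's value.
absorbance = 0.42  # measured at the pigment's absorption peak
epsilon = 0.15     # assumed specific absorption coefficient, L mg^-1 cm^-1
path_cm = 1.0      # cuvette path length

conc_mg_per_L = absorbance / (epsilon * path_cm)
print(round(conc_mg_per_L, 2))  # 2.8
```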

  4. Magnetic susceptibility: a proxy method of estimating increased pollution

    International Nuclear Information System (INIS)

    Kluciarova, D.; Gregorova, D.; Tunyi, I.

    2004-01-01

    A need for rapid and inexpensive (proxy) methods of outlining areas exposed to increased pollution by atmospheric particulates of industrial origin has led scientists in various fields to use and validate non-traditional (non-chemical) techniques. Among them, soil magnetometry seems to be a suitable tool. The method is based on the knowledge that ferrimagnetic particles, namely magnetite, are produced from pyrite during the combustion of fossil fuel. Besides combustion processes, magnetic particles can also originate from road traffic, for example, or can be carried in various waste-water outlets. In our study we examine magnetic susceptibility as a convenient, rapid and non-destructive means of determining the concentration of (ferri)magnetic minerals. Measurements were made with a KLY-2 Kappabridge. The concentration of ferrimagnetic minerals in different soils is linked to pollution sources. Higher χ values were observed in soils on the territory of Istebne (47383 × 10⁻⁶ SI). The susceptibility anomaly may be caused by particular geological circumstances and can be related to a high content of ferromagnetic minerals in the host rocks. Elevated magnetic susceptibility values are conditioned by industrial contamination, mainly from metal-working factories and from traffic. The proposed method can be successfully applied in determining heavy metal pollution of soils in city territories. (authors)

  5. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...

  6. Focused ultrasound transducer spatial peak intensity estimation: a comparison of methods

    Science.gov (United States)

    Civale, John; Rivens, Ian; Shaw, Adam; ter Haar, Gail

    2018-03-01

    Characterisation of the spatial peak intensity at the focus of high intensity focused ultrasound transducers is difficult because of the risk of damage to hydrophone sensors at the high focal pressures generated. Hill et al (1994 Ultrasound Med. Biol. 20 259-69) provided a simple equation for estimating spatial-peak intensity for solid spherical bowl transducers using measured acoustic power and focal beamwidth. This paper demonstrates theoretically and experimentally that this expression is only strictly valid for spherical bowl transducers without a central (imaging) aperture. A hole in the centre of the transducer results in over-estimation of the peak intensity. Improved strategies for determining focal peak intensity from a measurement of total acoustic power are proposed. Four methods are compared: (i) a solid spherical bowl approximation (after Hill et al 1994 Ultrasound Med. Biol. 20 259-69), (ii) a numerical method derived from theory, (iii) a method using the measured sidelobe to focal peak pressure ratio, and (iv) a method for measuring the focal power fraction (FPF) experimentally. Spatial-peak intensities were estimated for 8 transducers at three drive power levels: low (approximately 1 W), moderate (~10 W) and high (20-70 W). The calculated intensities were compared with those derived from focal peak pressure measurements made using a calibrated hydrophone. The FPF measurement method was found to provide focal peak intensity estimates that agreed most closely (within 15%) with the hydrophone measurements, followed by the pressure ratio method (within 20%). The numerical method was found to consistently over-estimate focal peak intensity (+40% on average); however, for transducers with a central hole it was more accurate than using the solid bowl assumption (+70% over-estimation).
    In conclusion, the ability to make use of an automated beam plotting system, and a hydrophone with good spatial resolution, greatly facilitates characterisation of the FPF, and
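The relationship between total acoustic power, focal power fraction and spatial-peak intensity can be sketched as follows, assuming a Gaussian focal intensity profile; the FPF value, power and beamwidth are hypothetical, and this is not the paper's calibration procedure:

```python
import math

def focal_peak_intensity(total_power_w, fpf, beamwidth_6db_m):
    """Estimate spatial-peak intensity (W/m^2) from total acoustic power.

    Assumes a Gaussian focal intensity profile I(r) = I0 * exp(-2 r^2 / w^2).
    fpf: focal power fraction, the fraction of total power passing through
         the focal main lobe (measured experimentally in the paper).
    beamwidth_6db_m: full width at which intensity falls to 1/4 of the peak.
    """
    # Relate the -6 dB radius r6 to the Gaussian width w:
    # exp(-2 r6^2 / w^2) = 1/4  =>  w^2 = 2 r6^2 / ln(4)
    r6 = beamwidth_6db_m / 2.0
    w2 = 2.0 * r6**2 / math.log(4.0)
    # Power carried by the Gaussian profile: P_focal = I0 * pi * w^2 / 2
    p_focal = total_power_w * fpf
    return 2.0 * p_focal / (math.pi * w2)

# Illustrative numbers (hypothetical): 10 W of power, FPF of 0.8, 1.5 mm beamwidth
i_sp = focal_peak_intensity(10.0, 0.8, 1.5e-3)
print(f"estimated spatial-peak intensity: {i_sp / 1e4:.1f} W/cm^2")
```

The sketch makes the paper's point visible: for the same total power, narrowing the beam or raising the FPF raises the inferred peak intensity, so a central hole that lowers the FPF leads to over-estimation if a solid-bowl FPF is assumed.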

  7. [Research on rapid and quantitative detection method for organophosphorus pesticide residue].

    Science.gov (United States)

    Sun, Yuan-Xin; Chen, Bing-Tai; Yi, Sen; Sun, Ming

    2014-05-01

    Traditional pesticide residue detection adopts physical-chemical inspection methods, which require many pretreatment processes and are time-consuming and complicated. In the present study, the authors take chlorpyrifos, widely applied in agriculture, as the research object and propose a rapid and quantitative detection method for organophosphorus pesticide residues. First, according to the chemical characteristics of chlorpyrifos, the chromogenic effects of several colorimetric reagents, and secondary pollution considerations, a pretreatment scheme based on the chromogenic reaction of chlorpyrifos with resorcinol in a weakly alkaline environment was determined. Second, by analyzing UV-Vis spectra of chlorpyrifos samples with contents between 0.5 and 400 mg kg-1, it was confirmed that the characteristic information after the color reaction was concentrated mainly between 360 and 400 nm. Third, a full-spectrum forecasting model was established based on partial least squares, with a correlation coefficient of calibration of 0.9996, a correlation coefficient of prediction of 0.9956, a standard deviation of calibration (RMSEC) of 2.8147 mg kg-1, and a standard deviation of verification (RMSEP) of 8.0124 mg kg-1. Fourth, a characteristic region centered at 400 nm was extracted to build a forecasting model, whose correlation coefficient of calibration was 0.9996, correlation coefficient of prediction reached 0.9993, RMSEC was 2.5667 mg kg-1, and RMSEP was 4.8866 mg kg-1. Finally, by analyzing the near-infrared spectra of chlorpyrifos samples with contents between 0.5 and 16 mg kg-1, the authors found that although the characteristics of the chromogenic functional group are not obvious, the change of absorption peaks of resorcinol itself in the neighborhood of 5 200 cm
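The paper's PLS model is not reproduced here, but the general idea of a least-squares spectral calibration, with the same R and RMSEC figures of merit, can be sketched on synthetic data; every number below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": absorbance across a characteristic band (e.g. the
# 360-400 nm region), generated so that each channel responds linearly to
# concentration. All numbers are illustrative, not from the paper.
concentrations = np.linspace(0.5, 400.0, 40)          # mg/kg
response = rng.uniform(0.001, 0.004, size=10)         # per-channel sensitivity
spectra = np.outer(concentrations, response)
spectra += rng.normal(0.0, 0.01, spectra.shape)       # measurement noise

# Least-squares calibration: concentration ~ spectra @ coef + intercept
X = np.hstack([spectra, np.ones((len(concentrations), 1))])
coef, *_ = np.linalg.lstsq(X, concentrations, rcond=None)

predicted = X @ coef
rmsec = np.sqrt(np.mean((predicted - concentrations) ** 2))
r = np.corrcoef(predicted, concentrations)[0, 1]
print(f"calibration R = {r:.4f}, RMSEC = {rmsec:.2f} mg/kg")
```

PLS would replace the plain least-squares step when channels are many and collinear, but the calibration-then-predict workflow and the R/RMSEC diagnostics are the same.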

  8. Novel Method for 5G Systems NLOS Channels Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Vladeta Milenkovic

    2017-01-01

    Full Text Available For the development of new 5G systems to operate in mm bands, there is a need for accurate radio propagation modelling at these bands. In this paper a novel approach for NLOS channel parameter estimation is presented. Estimation is performed based on the level crossing rate (LCR) performance measure, which enables propagation parameters to be estimated in real time and avoids the weaknesses of ML and moment-method estimation approaches.

  9. A Method for Estimation of Death Tolls in Disastrous Earthquake

    Science.gov (United States)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    Fatality tolls are among the most important items of the damage and losses caused by a disastrous earthquake. If we can precisely estimate the potential toll and the distribution of fatalities in individual districts as soon as an earthquake occurs, we not only make emergency programs and disaster management more effective but also supply critical information for planning and managing the disaster and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motion, geological conditions, the types and usage of buildings, the distribution of population, and socio-economic conditions, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is at present the greatest in the world. Meanwhile, complete seismic data are easily obtained from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake happens. Therefore, it becomes possible to estimate death tolls caused by an earthquake in Taiwan based on this preliminary information. Firstly, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to give the PGA Index for each individual seismic station, according to the mainshock data of the Chi-Chi earthquake.
    To obtain the distribution of iso-seismic intensity contours in all districts, and to resolve the problem of districts containing no seismic station, the PGA Index and the geographical coordinates of the individual stations are combined using the Kriging interpolation method and GIS software. The population density depends on
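The two steps above (a per-station PGA Index, then spatial interpolation into station-free districts) can be sketched as follows; the station coordinates and accelerations are hypothetical, and plain inverse-distance weighting stands in for the Kriging interpolation named in the abstract:

```python
import numpy as np

# Hypothetical station records: (x_km, y_km, PGA_NS, PGA_EW, PGA_Z) in gal.
stations = np.array([
    [0.0,  0.0, 120.0, 150.0,  90.0],
    [10.0, 0.0, 300.0, 280.0, 200.0],
    [0.0, 10.0,  80.0,  95.0,  60.0],
    [10.0, 10.0, 220.0, 240.0, 160.0],
])

coords = stations[:, :2]
# PGA Index: arithmetic mean of the three components at each station.
pga_index = stations[:, 2:].mean(axis=1)

def idw(point, coords, values, power=2.0):
    """Inverse-distance weighting -- a simple stand-in for kriging."""
    d = np.linalg.norm(coords - point, axis=1)
    if np.any(d < 1e-9):               # query point sits exactly on a station
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Estimate the PGA Index at a district centre with no station of its own.
est = idw(np.array([5.0, 5.0]), coords, pga_index)
print(f"interpolated PGA index: {est:.1f} gal")
```

Kriging additionally models spatial correlation through a variogram, which is why it is preferred in the study; the interpolation-to-districts workflow is the same.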

  10. A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Melboe, Hallgeir

    2001-10-01

    This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal-oriented error estimators have attracted a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal-oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which due to a finite number of iterations introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal-oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested.
(author)
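As a sketch of the kind of estimator discussed (not taken from the thesis), the element-residual part of the classical a posteriori indicator for a 1D Poisson problem with linear elements can be computed as follows; edge-jump terms are omitted for brevity:

```python
import numpy as np

def fem_poisson_1d(n, f):
    """Linear FEM for -u'' = f on (0,1), u(0)=u(1)=0, uniform mesh, n elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Tridiagonal stiffness matrix over the interior nodes.
    A = (np.diag(np.full(n - 1, 2.0 / h))
         + np.diag(np.full(n - 2, -1.0 / h), 1)
         + np.diag(np.full(n - 2, -1.0 / h), -1))
    b = h * f(x[1:-1])                  # load vector; exact for constant f
    u_int = np.linalg.solve(A, b)
    return x, np.concatenate([[0.0], u_int, [0.0]])

def residual_indicator(x, f):
    """Element-residual part of the classical estimator, eta_K = h_K * ||f||_{L2(K)}.
    For P1 elements u_h'' = 0 inside each element, so the interior residual is
    just f; the jump terms of u_h' across nodes are omitted in this sketch."""
    h = np.diff(x)
    mid = 0.5 * (x[:-1] + x[1:])
    eta_k = h * np.abs(f(mid)) * np.sqrt(h)   # midpoint rule for ||f||_{L2(K)}
    return np.sqrt(np.sum(eta_k ** 2))

f = lambda t: np.ones_like(t)
etas = []
for n in (8, 16, 32):
    x, u = fem_poisson_1d(n, f)
    etas.append(residual_indicator(x, f))
print(["%.4f" % e for e in etas])   # indicator decreases under refinement
```

The per-element values eta_K are exactly the local quantities an adaptive method would use to decide where to refine.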

  11. New methods for estimating follow-up rates in cohort studies

    Directory of Open Access Journals (Sweden)

    Xiaonan Xue

    2017-12-01

    Full Text Available Abstract Background The follow-up rate, a standard index of the completeness of follow-up, is important for assessing the validity of a cohort study. A common method for estimating the follow-up rate, the “Percentage Method”, defined as the fraction of all enrollees who developed the event of interest or had complete follow-up, can severely underestimate the degree of follow-up. Alternatively, the median follow-up time does not indicate the completeness of follow-up, and the reverse Kaplan-Meier based method and Clark’s Completeness Index (CCI) also have limitations. Methods We propose a new definition for the follow-up rate, the Person-Time Follow-up Rate (PTFR), which is the observed person-time divided by total person-time assuming no dropouts. The PTFR cannot be calculated directly since the event times for dropouts are not observed. Therefore, two estimation methods are proposed: a formal person-time method (FPT), in which the expected total follow-up time is calculated using the event rate estimated from the observed data, and a simplified person-time method (SPT), which avoids estimation of the event rate by assigning full follow-up time to all events. Simulations were conducted to measure the accuracy of each method, and each method was applied to a prostate cancer recurrence study dataset. Results Simulation results showed that the FPT has the highest accuracy overall. In most situations, the computationally simpler SPT and CCI methods are only slightly biased. When applied to a retrospective cohort study of cancer recurrence, the FPT, CCI and SPT showed substantially greater 5-year follow-up than the Percentage Method (92%, 92% and 93% vs 68%). Conclusions The Person-time methods correct a systematic error in the standard Percentage Method for calculating follow-up rates. The easy-to-use SPT and CCI methods can be used in tandem to obtain an accurate and tight interval for PTFR. However, the FPT is recommended when event rates and
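The contrast between the Percentage Method and the simplified person-time (SPT) method can be sketched on a toy cohort; the records below are illustrative:

```python
# Synthetic cohort with a 5-year planned follow-up. Each record is
# (observed_years, had_event, completed_followup). Values are illustrative.
PLANNED = 5.0
cohort = [
    (5.0, False, True),    # followed to the end of the study
    (2.0, True,  False),   # event at year 2
    (1.0, False, False),   # dropped out at year 1
    (5.0, False, True),
    (3.5, True,  False),   # event at year 3.5
    (4.0, False, False),   # dropped out at year 4
]

# Percentage Method: fraction who had the event or complete follow-up.
pct = sum(1 for _, event, complete in cohort if event or complete) / len(cohort)

# Simplified person-time (SPT) method: events are credited with full planned
# follow-up time (so no event-rate estimate is needed); dropouts contribute
# only their observed time.
observed = sum(PLANNED if event else t for t, event, _ in cohort)
total = PLANNED * len(cohort)
spt = observed / total

print(f"Percentage Method: {pct:.0%}, SPT follow-up rate: {spt:.0%}")
```

Even in this tiny example the Percentage Method reports lower follow-up than the person-time view, mirroring the 68% vs ~92% gap reported in the abstract.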

  12. Methods for Measuring and Estimating Methane Emission from Ruminants

    Directory of Open Access Journals (Sweden)

    Jørgen Madsen

    2012-04-01

    Full Text Available This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of different methods used to quantify the enteric methane emission from ruminants. The best-known methods (chambers/respiration chambers, the SF6 technique and the in vitro gas production technique) and the newer CO2 methods are described. Model estimations, which are used to calculate national budgets and single-cow enteric emission from intake and diet composition, are also discussed. Other methods under development, such as the micrometeorological technique, combined feeder and CH4 analyzer, and proxy methods, are briefly mentioned. The method of choice for estimating enteric methane emission depends on the aim, equipment, knowledge, time and money available, but interpretation of results obtained with a given method can be improved if knowledge of the advantages and disadvantages is used in the planning of experiments.

  13. A rapid chemical method for lysing Arabidopsis cells for protein analysis

    Directory of Open Access Journals (Sweden)

    Takano Tetsuo

    2011-07-01

    Full Text Available Abstract Background Protein extraction is a frequent procedure in biological research. For preparation of plant cell extracts, plant materials usually have to be ground and homogenized to physically break the robust cell wall, but this step is laborious and time-consuming when a large number of samples are handled at once. Results We developed a chemical method for lysing Arabidopsis cells without grinding. In this method, plants are boiled for just 10 minutes in a solution containing a Ca2+ chelator and detergent. Cell extracts prepared by this method were suitable for SDS-PAGE and immunoblot analysis. This method was also applicable to genomic DNA extraction for PCR analysis. Our method was applied to many other plant species, and worked well for some of them. Conclusions Our method is rapid and economical, and allows many samples to be prepared simultaneously for protein analysis. Our method is useful not only for Arabidopsis research but also research on certain other species.

  14. A novel sample preparation method using rapid nonheated saponification method for the determination of cholesterol in emulsified foods.

    Science.gov (United States)

    Jeong, In-Seek; Kwak, Byung-Man; Ahn, Jang-Hyuk; Leem, Donggil; Yoon, Taehyung; Yoon, Changyong; Jeong, Jayoung; Park, Jung-Min; Kim, Jin-Man

    2012-10-01

    In this study, nonheated saponification was employed as a novel, rapid, and easy sample preparation method for the determination of cholesterol in emulsified foods. Cholesterol content was analyzed using gas chromatography with a flame ionization detector (GC-FID). The cholesterol extraction method was optimized for maximum recovery from baby food and infant formula. Under these conditions, the optimum extraction solvent was 10 mL ethyl ether per 1 to 2 g sample, and the saponification solution was 0.2 mL KOH in methanol. The cholesterol content in the products was determined to be within the certified range of certified reference materials (CRMs), NIST SRM 1544 and SRM 1849. The results of the recovery test performed using spiked materials were in the range of 98.24% to 99.45% with a relative standard deviation (RSD) between 0.83% and 1.61%. This method could be used to reduce sample pretreatment time and is expected to provide an accurate determination of cholesterol in emulsified food matrices such as infant formula and baby food. A novel, rapid, and easy sample preparation method using nonheated saponification was developed for cholesterol detection in emulsified foods. Recovery tests of CRMs were satisfactory, and the recoveries of spiked materials were accurate and precise. This method was effective and decreased the time required for analysis by 5-fold compared to the official method. © 2012 Institute of Food Technologists®

  15. A method for rapid similarity analysis of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Liu Na

    2006-11-01

    Full Text Available Abstract Background Owing to the rapid expansion of RNA structure databases in recent years, efficient methods for structure comparison are in demand for function prediction and evolutionary analysis. Usually, the similarity of RNA secondary structures is evaluated based on tree models and dynamic programming algorithms. We present here a new method for the similarity analysis of RNA secondary structures. Results Three sets of real data have been used as input for the example applications. Set I includes the structures from 5S rRNAs. Set II includes the secondary structures from RNase P and RNase MRP. Set III includes the structures from 16S rRNAs. Reasonable phylogenetic trees are derived for these three sets of data by using our method. Moreover, our program runs faster as compared to some existing ones. Conclusion The famous Lempel-Ziv algorithm can efficiently extract the information on repeated patterns encoded in RNA secondary structures and makes our method an alternative to analyze the similarity of RNA secondary structures. This method will also be useful to researchers who are interested in evolutionary analysis.
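A minimal sketch of the Lempel-Ziv idea applied to dot-bracket structures; the parsing rule and normalization below are one common variant, and the structures are hypothetical examples, not the paper's data sets:

```python
def lz_complexity(s):
    """Number of phrases in an LZ78-style incremental parse of s."""
    phrases = set()
    phrase = ""
    count = 0
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    if phrase:            # trailing partial phrase
        count += 1
    return count

def lz_distance(s, q):
    """Normalized LZ distance between two sequences (one common variant):
    how many new phrases each sequence adds on top of the other."""
    cs, cq = lz_complexity(s), lz_complexity(q)
    csq, cqs = lz_complexity(s + q), lz_complexity(q + s)
    return max(csq - cs, cqs - cq) / max(cs, cq)

# RNA secondary structures in dot-bracket notation (hypothetical examples).
hairpin_a = "((((....))))"
hairpin_b = "(((......)))"
multiloop = "((..))..((..))"

d_similar = lz_distance(hairpin_a, hairpin_b)
d_different = lz_distance(hairpin_a, multiloop)
print(d_similar, d_different)
```

Because similar structures share repeated patterns, concatenating them adds few new phrases, so the two hairpins come out closer to each other than either does to the multiloop; a matrix of such distances is what a tree-building method would consume.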

  16. Hexographic Method of Complex Town-Planning Terrain Estimate

    Science.gov (United States)

    Khudyakov, A. Ju

    2017-11-01

    The article deals with the vital problem of complex town-planning analysis based on the “hexographic” graphic-analytic method, makes a comparison with conventional terrain estimate methods, and contains examples of the method's application. It discloses the author's procedure for estimating restrictions and building a mathematical model which reflects not only conventional town-planning restrictions, but also social and aesthetic aspects of the analyzed territory. The method allows one to quickly get an idea of the territory's potential. It is possible to use an unlimited number of estimated factors. The method can be used for the integrated assessment of urban areas. In addition, the method can be used for preliminary evaluation of a territory's commercial attractiveness in the preparation of investment projects. The technique produces simple, informative graphics. Graphical interpretation is straightforward for experts, and a definite advantage is that the results can be readily perceived by non-professionals as well. Thus, it is possible to build a dialogue between professionals and the public on a new level, allowing the interests of various parties to be taken into account. At the moment, the method is used as a tool for the preparation of integrated urban development projects at the Department of Architecture in Federal State Autonomous Educational Institution of Higher Education “South Ural State University (National Research University)”, FSAEIHE SUSU (NRU). The methodology is included in a course of lectures as material on architectural and urban design for architecture students. The same methodology was successfully tested in the preparation of business strategies for the development of some territories in the Chelyabinsk region. This publication is the first in a series of planned activities developing and describing the methodology of hexographical analysis in urban and architectural practice. It is also

  17. A simple and rapid method of purification of impure plutonium oxide

    International Nuclear Information System (INIS)

    Michael, K.M.; Rakshe, P.R.; Dharmpurikar, G.R.; Thite, B.S.; Lokhande, Manisha; Sinalkar, Nitin; Dakshinamoorthy, A.; Munshi, S.K.; Dey, P.K.

    2007-01-01

    Impure plutonium oxides are conventionally purified by dissolution in HNO3 in the presence of HF, followed by ion exchange separation and oxalate precipitation. The method is tedious, and the use of HF enhances corrosion of the plant equipment. A simple and rapid method has been developed for the purification of the oxide by leaching with various reagents like DM water, NaOH and oxalic acid. A combination of DM water followed by hot leaching with 0.4 M oxalic acid could bring down the impurity levels in the oxide to the level required for fuel fabrication. (author)

  18. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol.2

    International Nuclear Information System (INIS)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM) is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input) in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The user's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios

  19. A rapid method for titration of ascovirus infectivity.

    Science.gov (United States)

    Han, Ningning; Chen, Zishu; Wan, Hu; Huang, Guohua; Li, Jianhong; Jin, Byung Rae

    2018-05-01

    Ascoviruses are a recently described family, and the traditional plaque assay and end-point PCR assay have been used for their titration. However, these two methods are time-consuming and inaccurate for titrating ascoviruses. In the present study, a quick method for determining the titer of ascovirus stocks was developed based on ascovirus-induced apoptosis in infected insect cells. Briefly, cells infected with serial dilutions of virus (10 -2 -10 -10 ) for 24 h were stained with trypan blue. The stained cells were counted, and the percentage of nonviable cells was calculated. The stained-cell rate was compared between virus-infected and control cells. The minimum-dilution group that had a significant difference compared with the control and the maximum-dilution group that had no significant difference were selected, and each well of the two groups was then compared with the average stained-cell rate of the control. A well was marked as positive if its stained-cell rate was higher than the average stained-cell rate of the control wells; otherwise, it was marked as negative. The percentage of positive wells was then calculated. Subsequently, the virus titer was calculated through the method of Reed and Muench. This novel method is rapid, simple, reproducible, accurate, and less material-consuming, and eliminates the subjectivity of the other procedures for titrating ascoviruses. Copyright © 2018 Elsevier B.V. All rights reserved.
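The final step, the 50% endpoint calculation of Reed and Muench, can be sketched as follows; the well counts and dilution series are illustrative, not data from this study:

```python
def reed_muench_log10_titer(exponents, positive, total):
    """log10 of the 50% endpoint titer by the Reed-Muench method.

    exponents: dilution exponents ordered most to least concentrated,
               e.g. [2, 3, 4, 5] for 10^-2 .. 10^-5.
    positive:  number of positive wells at each dilution.
    total:     wells tested per dilution.
    """
    negative = [t - p for p, t in zip(positive, total)]
    n = len(exponents)
    # Cumulative positives accumulate from the most dilute end upward;
    # cumulative negatives from the most concentrated end downward.
    cum_pos = [sum(positive[i:]) for i in range(n)]
    cum_neg = [sum(negative[:i + 1]) for i in range(n)]
    pct = [100.0 * p / (p + q) for p, q in zip(cum_pos, cum_neg)]
    # Find the pair of dilutions bracketing 50% infection.
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            # Proportionate distance between the bracketing dilutions.
            pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            step = exponents[i + 1] - exponents[i]   # log10 dilution factor
            return exponents[i] + pd * step
    raise ValueError("50% endpoint not bracketed by the dilution series")

# Illustrative counts: 8 wells per dilution, scored positive/negative.
exps = [2, 3, 4, 5]                  # 10^-2 .. 10^-5
pos = [8, 6, 2, 0]
log_titer = reed_muench_log10_titer(exps, pos, [8, 8, 8, 8])
print(f"titer ~ 10^{log_titer:.2f} per inoculum volume")
```

With the counts above, 50% infection falls midway between the 10^-3 and 10^-4 dilutions, giving an endpoint of 10^-3.5.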

  20. Training Methods for Image Noise Level Estimation on Wavelet Components

    Directory of Open Access Journals (Sweden)

    A. De Stefano

    2004-12-01

    Full Text Available The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the median absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed the prevalence of the training-based methods for the images and the range of noise levels considered.
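The MAD baseline that the proposed methods are compared against can be sketched as follows; the image, noise level, and single-level Haar transform are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Noise-free "image" (smooth gradient) plus Gaussian noise of known sigma.
true_sigma = 12.0
x, y = np.meshgrid(np.arange(256), np.arange(256))
image = 0.3 * x + 0.2 * y
noisy = image + rng.normal(0.0, true_sigma, image.shape)

# One level of a 2D Haar transform: the diagonal (HH) subband responds
# almost only to noise, since smooth image content cancels out.
a = noisy[0::2, 0::2]
b = noisy[0::2, 1::2]
c = noisy[1::2, 0::2]
d = noisy[1::2, 1::2]
hh = (a - b - c + d) / 2.0

# MAD estimate: sigma ~ median(|HH|) / 0.6745 (Gaussian consistency factor).
sigma_est = np.median(np.abs(hh)) / 0.6745
print(f"true sigma = {true_sigma}, MAD estimate = {sigma_est:.2f}")
```

The model assumption the abstract refers to is visible here: the estimator works because the HH subband of a smooth image is nearly pure noise; images with strong fine texture violate that assumption, which motivates the trained alternatives.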

  1. Uncertainty estimation with a small number of measurements, part II: a redefinition of uncertainty and an estimator method

    Science.gov (United States)

    Huang, Hening

    2018-01-01

    This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I has quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question to uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or if the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
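The paper's own estimator is not reproduced here, but the flavor of a mean-unbiased uncertainty estimator can be illustrated with the classical c4 bias correction for the sample standard deviation; the Monte Carlo setup is illustrative:

```python
import math
import random

def c4(n):
    """Bias-correction factor: E[s] = c4(n) * sigma for Gaussian samples."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2.0) / math.gamma((n - 1) / 2.0)

def sample_std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Monte Carlo check with very small samples (n = 3), where the bias of s is
# largest: s underestimates sigma on average, while s / c4(n) does not.
random.seed(0)
n, sigma, trials = 3, 1.0, 200000
mean_s = sum(sample_std([random.gauss(0.0, sigma) for _ in range(n)])
             for _ in range(trials)) / trials
print(f"E[s] ~ {mean_s:.3f}, c4(3) = {c4(3):.3f}, E[s/c4] ~ {mean_s / c4(3):.3f}")
```

This is exactly the small-sample regime the paper targets: with n = 3 the raw sample standard deviation is biased low by more than 10%, and a mean-unbiased estimator removes that bias without invoking a t-interval.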

  2. Search for rapid spectral variability in Psi(9) Aurigae

    International Nuclear Information System (INIS)

    Ghosh, K.K.

    1989-01-01

    Observations of Psi(9) Aur on five nights between January 29 and February 3, 1988 were conducted as part of a search for rapid spectral variability in Be stars. In addition, a series of H-alpha profiles with a time resolution of about 45 s was obtained for the star. A method for obtaining the standard deviation of continuum counts measurements is proposed. The estimated value of the standard deviation of the measured equivalent widths of the H-alpha profiles was obtained using the method of Chalabaev and Maillard (1983). Rapid variations of the standard deviations of continuum counts and H-alpha equivalent widths were not observed. For the standard deviations of the continuum counts, a few hourly variations and two night-to-night variations were found. 16 refs

  3. Lake and Reservoir Evaporation Estimation: Sensitivity Analysis and Ranking Existing Methods

    Directory of Open Access Journals (Sweden)

    maysam majidi

    2016-02-01

    Full Text Available Introduction: Harvested water is commonly stored in dams, but up to approximately half of it may be lost to evaporation, a huge waste of resources. Estimating evaporation from lakes and reservoirs is not a simple task, as a number of factors can affect the evaporation rate, notably the climate and physiography of the water body and its surroundings. Several methods are currently used to predict evaporation from meteorological data in open water reservoirs. Each of these methods has advantages and disadvantages in terms of accuracy and simplicity of application. Although the evaporation pan method is well known to have significant uncertainties both in magnitude and timing, it is extensively used in Iran because of its simplicity. An evaporation pan provides a measurement of the combined effect of temperature, humidity, wind speed and solar radiation on evaporation. However, it may not be adequate for reservoir operation and development or for water accounting strategies for managing drinking water in arid and semi-arid conditions, which require accurate evaporation estimates. Moreover, there has been no consensus on which methods are better to employ, due to the lack of important long-term measured data such as temperature profiles, radiation and heat fluxes in most lakes and reservoirs in Iran. Consequently, we initiated this research to find the best cost-effective evaporation method with possibly fewer data requirements in our study area, i.e. the Doosti dam reservoir, which is located in a semi-arid region of Iran. Materials and Methods: Our study site was the Doosti dam reservoir, located between the Iran and Turkmenistan borders, which was constructed by the Ministry of Water and Land Reclamation of the Republic of Turkmenistan and the Khorasan Razavi Regional Water Board of the Islamic Republic of Iran. Meteorological data including maximum and minimum air temperature and evaporation from class A pan
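A minimal sketch of the pan-coefficient approach mentioned above; the coefficient and monthly totals are illustrative, and site-specific values would be needed in practice:

```python
# Pan-to-lake conversion: E_lake ~ K_p * E_pan, where K_p is a pan
# coefficient (commonly around 0.6-0.8 for a class A pan; the value here
# is illustrative, not calibrated for the Doosti reservoir).
PAN_COEFFICIENT = 0.7

def lake_evaporation_mm(pan_evaporation_mm, kp=PAN_COEFFICIENT):
    """Estimate open-water evaporation from a class A pan reading (mm)."""
    return kp * pan_evaporation_mm

# Monthly pan totals in mm (hypothetical semi-arid summer values).
pan_monthly = [220.0, 260.0, 240.0]
lake_monthly = [lake_evaporation_mm(e) for e in pan_monthly]
total = sum(lake_monthly)
print(f"estimated lake evaporation over 3 months: {total:.0f} mm")
```

The simplicity is the appeal and the weakness at once: a single coefficient absorbs all the temperature, humidity, wind and radiation effects that the more data-hungry methods model explicitly.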

  4. An improved method to estimate reflectance parameters for high dynamic range imaging

    Science.gov (United States)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for explicit separation of diffuse and specular reflection components. In the latter method, diffuse and specular reflectance parameters are estimated separately using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from the reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out with both methods using simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and with the second method using spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
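The log-transformed Torrance-Sparrow fit for the specular parameters can be sketched as follows; the simplified reflection model, angles and noise level are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic specular observations from a simplified Torrance-Sparrow model:
#   I_s = (k_s / cos(theta_r)) * exp(-alpha^2 / (2 sigma^2))
# alpha: angle between the half-vector and the surface normal (radians).
k_s_true, sigma_true = 2.0, 0.15
alpha = np.linspace(0.0, 0.35, 30)
theta_r = np.full_like(alpha, 0.3)
intensity = (k_s_true / np.cos(theta_r)) * np.exp(-alpha**2 / (2 * sigma_true**2))
intensity *= 1.0 + rng.normal(0.0, 0.01, alpha.shape)   # small measurement noise

# The log transform makes the model linear in alpha^2, so ordinary least
# squares recovers gloss intensity (k_s) and roughness (sigma) directly.
y = np.log(intensity * np.cos(theta_r))
slope, intercept = np.polyfit(alpha**2, y, 1)
k_s_est = np.exp(intercept)
sigma_est = np.sqrt(-1.0 / (2.0 * slope))
print(f"k_s ~ {k_s_est:.3f}, sigma ~ {sigma_est:.3f}")
```

This is the key trick the abstract describes: a nonlinear Gaussian-lobe fit becomes a single linear regression after the logarithm, which is why the estimation is fast.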

  5. Public-Private Investment Partnerships: Efficiency Estimation Methods

    Directory of Open Access Journals (Sweden)

    Aleksandr Valeryevich Trynov

    2016-06-01

    Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP. This article puts forward the hypothesis that the inclusion of multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to the more efficient use of budgetary resources. The author proposed a methodological approach and methods of evaluating the economic efficiency of PPP projects. The author’s technique is based upon the synthesis of approaches to evaluation of the project implemented in the private and public sector and in contrast to the existing methods allows taking into account the indirect (multiplicative effect arising during the implementation of project. In the article, to estimate the multiplier effect, the model of regional economy — social accounting matrix (SAM was developed. The matrix is based on the data of the Sverdlovsk region for 2013. In the article, the genesis of the balance models of economic systems is presented. The evolution of balance models in the Russian (Soviet and foreign sources from their emergence up to now are observed. It is shown that SAM is widely used in the world for a wide range of applications, primarily to assess the impact on the regional economy of various exogenous factors. In order to clarify the estimates of multiplicative effects, the disaggregation of the account of the “industry” of the matrix of social accounts was carried out in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED. This step allows to consider the particular characteristics of the industry of the estimated investment project. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in

  6. Estimation of the specific activity of radioiodinated gonadotrophins: comparison of three methods

    Energy Technology Data Exchange (ETDEWEB)

    Englebienne, P [Centre for Research and Diagnosis in Endocrinology, Kain (Belgium); Slegers, G [Akademisch Ziekenhuis, Ghent (Belgium). Lab. voor Analytische Chemie

    1983-01-14

    The authors compared 3 methods for estimating the specific activity of radioiodinated gonadotrophins. Two of the methods (column recovery and isotopic dilution) gave similar results, while the third (autodisplacement) gave significantly higher estimations. In the autodisplacement method, B/T ratios, obtained when either labelled hormone alone, or labelled and unlabelled hormone, are added to the antibody, were compared as estimates of the mass of hormone iodinated. It is likely that immunologically unreactive impurities present in the labelled hormone solution invalidate such comparison.

  7. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    Science.gov (United States)

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we first review the classification problem and then dichotomous probability estimation. Next, we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches to the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables, we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
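As a minimal illustration of the nonparametric idea, the k-NN case reduces to a local class-frequency count. The sketch below is ours, not the paper's code, and omits the tuning of k that the authors discuss:

```python
# Illustrative k-NN class-probability estimate (not the paper's code):
# P(Y = 1 | x0) is estimated as the fraction of 1-labels among the
# k training points nearest to x0 (squared Euclidean distance).
def knn_prob(X, y, x0, k=3):
    order = sorted(range(len(X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X[i], x0)))
    return sum(y[i] for i in order[:k]) / k
```

Consistency of this estimator as k grows with the sample size is exactly the kind of result the theoretical part of the paper reviews.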

  8. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
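The DerSimonian-Laird and Paule-Mandel estimators mentioned above are short enough to sketch directly from their standard published formulas (this is an illustrative implementation, not code from the review). For equal within-study variances the two estimators coincide:

```python
def dl_tau2(y, v):
    # DerSimonian-Laird moment estimator of the between-study variance
    # from study effects y and within-study variances v.
    w = [1.0 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)

def pm_tau2(y, v, tol=1e-10):
    # Paule-Mandel estimator: choose tau^2 so that the generalised
    # Cochran statistic Q(tau^2) equals its expectation k - 1.
    def gen_q(t2):
        w = [1.0 / (vi + t2) for vi in v]
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        return sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    k = len(y)
    if gen_q(0.0) <= k - 1:
        return 0.0                      # no detectable heterogeneity
    lo, hi = 0.0, 100.0                 # upper bracket assumed for illustration
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if gen_q(mid) > k - 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```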

  9. Statistical Methods for Estimating the Uncertainty in the Best Basis Inventories

    International Nuclear Information System (INIS)

    WILMARTH, S.R.

    2000-01-01

This document describes the statistical methods used to determine sample-based uncertainty estimates for the Best Basis Inventory (BBI). For each waste phase, the equation for the inventory of an analyte in a tank is Inventory (kg or Ci) = Concentration x Density x Waste Volume. The total inventory is the sum of the inventories in the different waste phases. Using tank sample data, statistical methods are used to obtain estimates of the mean concentration of an analyte, the density of the waste, and their standard deviations. The volumes of waste in the different phases, and their standard deviations, are estimated from other types of data. The three estimates are multiplied to obtain the inventory estimate, and the standard deviations are combined to obtain a standard deviation of the inventory. The uncertainty estimate for the BBI is the approximate 95% confidence interval on the inventory.
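The multiply-then-combine procedure can be illustrated with first-order (delta-method) error propagation for the product Inventory = Concentration x Density x Volume. The independence assumption and the use of ±1.96 standard deviations for the 95% interval are simplifications for illustration, not necessarily the exact treatment used for the BBI:

```python
import math

def inventory_with_sd(conc, sd_c, dens, sd_d, vol, sd_v):
    # First-order (delta-method) propagation for Inventory = C * D * V,
    # assuming the three estimates are independent (a simplification):
    # the squared relative standard deviations add.
    inv = conc * dens * vol
    rel = math.sqrt((sd_c / conc) ** 2 + (sd_d / dens) ** 2 + (sd_v / vol) ** 2)
    sd_inv = inv * rel
    ci95 = (inv - 1.96 * sd_inv, inv + 1.96 * sd_inv)  # approximate 95% CI
    return inv, sd_inv, ci95
```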

  10. A pose estimation method for unmanned ground vehicles in GPS denied environments

    Science.gov (United States)

    Tamjidi, Amirhossein; Ye, Cang

    2012-06-01

This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses depth data from a LIDAR and visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS denied environment. Its estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features' 3D positions. In contrast to conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards feature estimates from the extended state vector once they have not been observed for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for online calculation. The fusion of laser and visual data is performed both in the feature initialization part of the EKF-SLAM process and in the motion prediction stage. A RANSAC pose calculation procedure is devised to produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford Campus LIDAR-vision dataset. The results are compared with the ground truth data of the dataset; the estimation error is ~1.9% of the path length.

  11. A Novel Intelligent Method for the State of Charge Estimation of Lithium-Ion Batteries Using a Discrete Wavelet Transform-Based Wavelet Neural Network

    Directory of Open Access Journals (Sweden)

    Deyu Cui

    2018-04-01

Full Text Available State of charge (SOC) estimation is becoming increasingly important along with the rapid development of electric vehicles (EVs), as SOC is one of the most significant parameters for the battery management system, indicating remaining energy and ensuring the safety and reliability of the EV. In this paper, a hybrid wavelet neural network (WNN) model combining the discrete wavelet transform (DWT) method and an adaptive WNN is proposed to estimate the SOC of lithium-ion batteries. The WNN model is trained by the Levenberg-Marquardt (L-M) algorithm, and its inputs are processed by discrete wavelet decomposition and reconstitution. Compared with a back-propagation neural network (BPNN), an L-M based BPNN (LMBPNN), an L-M based WNN (LMWNN), DWT with L-M based BPNN (DWTLMBPNN) and an extended Kalman filter (EKF), the proposed intelligent SOC estimation method is validated and proved to be effective. Under the New European Driving Cycle (NEDC), the mean absolute error and maximum error can be reduced to 0.59% and 3.13%, respectively. The high accuracy and strong robustness of the proposed method are verified by a comparison study and robustness evaluation results (e.g., a measurement noise test and an untrained driving cycle test).

  12. Rapid methods for the extraction and archiving of molecular grade fungal genomic DNA.

    Science.gov (United States)

    Borman, Andrew M; Palmer, Michael; Johnson, Elizabeth M

    2013-01-01

    The rapid and inexpensive extraction of fungal genomic DNA that is of sufficient quality for molecular approaches is central to the molecular identification, epidemiological analysis, taxonomy, and strain typing of pathogenic fungi. Although many commercially available and in-house extraction procedures do eliminate the majority of contaminants that commonly inhibit molecular approaches, the inherent difficulties in breaking fungal cell walls lead to protocols that are labor intensive and that routinely take several hours to complete. Here we describe several methods that we have developed in our laboratory that allow the extremely rapid and inexpensive preparation of fungal genomic DNA.

  13. A method of rapidly evaluating image quality of NED optical system

    Science.gov (United States)

    Sun, Qi; Qiu, Chuankai; Yang, Huan

    2014-11-01

In recent years, with developments in micro-display technology, advanced optics, and software and hardware, near-to-eye display (NED) optical systems have acquired a wide range of potential applications in the fields of entertainment and virtual reality. However, research on evaluating the image quality of this kind of optical system is comparatively lagging behind. Although some methods and equipment for evaluation exist, they cannot be applied in commercial production because of their complex operation and inaccuracy. In this paper, an analytical method is proposed and a Rapid Evaluation System (RES) is designed to evaluate the image quality of an optical system rapidly and exactly. First, a set of parameters that the eye is sensitive to, and that also express the quality of the system, is extracted and quantized as criteria, so that evaluation standards can be established. Then, these parameters can be detected by the RES, which consists of a micro-display, a CCD camera, a computer, and so on. Through a calibration process, the measuring results of the RES are made exact and credible, and the relationship between objective measurement, subjective evaluation and the RES is established. After that, the image quality of an optical system can be evaluated just by detecting its parameters. The RES is simple, and the results of evaluation are exact and in keeping with human vision. The method can therefore be used not only for optimizing the design of optical systems, but also for evaluation in commercial production.

  14. Methods for Estimation of Market Power in Electric Power Industry

    Science.gov (United States)

    Turcik, M.; Oleinikova, I.; Junghans, G.; Kolcun, M.

    2012-01-01

The article addresses the topical issue of the newly arisen market power phenomenon in the electric power industry. The authors point out the importance of effective instruments and methods for credible estimation of market power on a liberalized electricity market, as well as the forms and consequences of market power abuse. The fundamental principles and methods of market power estimation are given along with the most common relevant indicators. Furthermore, the work proposes a way to determine the relevant market place that takes into account the specific features of the power system, and gives a theoretical example of estimating the residual supply index (RSI) in the electricity market.
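The residual supply index itself is a one-line computation. The sketch below is illustrative; the 1.1 screening threshold in the comment is a commonly cited rule of thumb, not a value taken from this article:

```python
def residual_supply_index(total_capacity, largest_supplier_capacity, demand):
    # RSI: the share of demand that can be met without the largest supplier.
    # Values below ~1.1 are often read as a sign of potential market power
    # (the threshold is a common rule of thumb, not from this article).
    return (total_capacity - largest_supplier_capacity) / demand
```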

  15. M-Arctan estimator based on the trust-region method

    Energy Technology Data Exchange (ETDEWEB)

    Hassaine, Yacine; Delourme, Benoit; Panciatici, Patrick [Gestionnaire du Reseau de Transport d Electricite Departement Methodes et appui Immeuble Le Colbert 9, Versailles Cedex (France); Walter, Eric [Laboratoire des signaux et systemes (L2S) Supelec, Gif-sur-Yvette (France)

    2006-11-15

In this paper a new approach is proposed to increase the robustness of classical L2-norm state estimation. To achieve this, a new formulation of the Levenberg-Marquardt algorithm based on the trust-region method is applied to a new M-estimator, which we call M-Arctan. Results obtained on IEEE networks of up to 300 buses are presented. (author)

  16. Methods for design flood estimation in South Africa | Smithers ...

    African Journals Online (AJOL)

    The estimation of design floods is necessary for the design of hydraulic structures and to quantify the risk of failure of the structures. Most of the methods used for design flood estimation in South Africa were developed in the late 1960s and early 1970s and are in need of updating with more than 40 years of additional data ...

  17. Assessment of Methods for Estimating Risk to Birds from ...

    Science.gov (United States)

    The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probability of ingestion by birds of contaminated particles such as pesticide granules or lead particles (i.e. shot or bullet fragments). In addition, it presents an approach for using this information to estimate the risk of mortality to birds from ingestion of lead particles. Response to ERASC Request #16

  18. Monte Carlo Method to Study Properties of Acceleration Factor Estimation Based on the Test Results with Varying Load

    Directory of Open Access Journals (Sweden)

    N. D. Tiannikova

    2014-01-01

Full Text Available G.D. Kartashov developed a technique for determining the functions that scale rapid-test results to the normal mode. Its feature is preliminary testing of products from one lot, including tests in alternating modes. The standard procedure of preliminary tests (researches) is as follows: n groups of products, with m elements in each, start being tested in normal mode and, after a failure of one of the products in a group, the remaining products are switched to the accelerated mode. In addition to tests in the alternating mode, tests in the constantly normal mode are conducted as well. The acceleration factor of rapid tests for this type of products, identical for all lots, is determined from the test results of products from the same lot. A drawback of this technique is that tests in the alternating mode have to be conducted until all products fail, which is not always possible. To avoid this shortcoming, the Renyi criterion is offered: it allows the scaling functions to be determined from right-censored data, thus making it possible to stop testing before all products have failed. In this work, statistical modeling of the acceleration factor estimate obtained through Renyi statistics minimization is implemented by the Monte Carlo method. The results of the modeling show that this estimate is acceptable for rather large n. For small sample volumes, however, a systematic bias of the acceleration factor estimate, which decreases as n grows, is observed for both distributions considered (exponential and Weibull). Therefore, the paper also presents calculated correction factors for the cases of the exponential and Weibull distributions.

  19. Development Of A Data Assimilation Capability For RAPID

    Science.gov (United States)

    Emery, C. M.; David, C. H.; Turmon, M.; Hobbs, J.; Allen, G. H.; Famiglietti, J. S.

    2017-12-01

    The global decline of in situ observations associated with the increasing ability to monitor surface water from space motivates the creation of data assimilation algorithms that merge computer models and space-based observations to produce consistent estimates of terrestrial hydrology that fill the spatiotemporal gaps in observations. RAPID is a routing model based on the Muskingum method that is capable of estimating river streamflow over large scales with a relatively short computing time. This model only requires limited inputs: a reach-based river network, and lateral surface and subsurface flow into the rivers. The relatively simple model physics imply that RAPID simulations could be significantly improved by including a data assimilation capability. Here we present the early developments of such data assimilation approach into RAPID. Given the linear and matrix-based structure of the model, we chose to apply a direct Kalman filter, hence allowing for the preservation of high computational speed. We correct the simulated streamflows by assimilating streamflow observations and our early results demonstrate the feasibility of the approach. Additionally, the use of in situ gauges at continental scales motivates the application of our new data assimilation scheme to altimetry measurements from existing (e.g. EnviSat, Jason 2) and upcoming satellite missions (e.g. SWOT), and ultimately apply the scheme globally.
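A direct Kalman filter update of a routed streamflow against a gauge observation can be sketched in scalar form as follows. This is a generic textbook update, not the RAPID implementation, which operates on the full river-network state vector:

```python
def kalman_update(x_prior, P_prior, z, R, H=1.0):
    # One scalar Kalman filter update: assimilate a gauge observation z
    # (error variance R) into the modelled streamflow x_prior
    # (error variance P_prior), with observation operator H.
    K = P_prior * H / (H * P_prior * H + R)      # Kalman gain
    x_post = x_prior + K * (z - H * x_prior)     # corrected streamflow
    P_post = (1.0 - K * H) * P_prior             # reduced uncertainty
    return x_post, P_post
```

With equal prior and observation variances the update splits the difference, which is the expected behaviour of the gain.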

  20. Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods

    Directory of Open Access Journals (Sweden)

    aboalhasan fathabadi

    2017-02-01

Full Text Available Introduction: In order to implement watershed practices to decrease soil erosion effects, it is necessary to estimate the sediment output of a watershed. The sediment rating curve is the most conventional tool used to estimate sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using sediment rating curves. In this research, the bootstrap and the Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, uncertainty in suspended sediment rating curves was estimated at four stations, Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training data set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of determination (R2). In the GLUE methodology, different parameter sets were sampled randomly from an a priori probability distribution. For each station, using the sampled parameter sets and the selected suspended sediment rating curve equation, suspended sediment concentration values were estimated several times (100000 to 400000 times). With respect to the likelihood function and a certain subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, the observed suspended sediment and discharge vectors were resampled with replacement B (set to
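The bootstrap half of such an uncertainty analysis can be sketched as a percentile bootstrap around a log-log rating curve. This is an illustrative reconstruction with assumed function names, not the study's code:

```python
import math
import random

def fit_rating(Q, C):
    # Least-squares fit of the rating curve  log10(C) = a + b * log10(Q).
    x = [math.log10(q) for q in Q]
    y = [math.log10(c) for c in C]
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    b = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
         / sum((xi - xb) ** 2 for xi in x))
    return yb - b * xb, b

def bootstrap_ci(Q, C, q_new, B=2000, seed=1):
    # Percentile-bootstrap 95% interval for the concentration predicted
    # at discharge q_new: resample (Q, C) pairs, refit, re-predict.
    rng = random.Random(seed)
    n = len(Q)
    preds = []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        if len({Q[i] for i in idx}) < 2:
            continue                     # degenerate resample, cannot refit
        a, b = fit_rating([Q[i] for i in idx], [C[i] for i in idx])
        preds.append(10 ** (a + b * math.log10(q_new)))
    preds.sort()
    return preds[int(0.025 * len(preds))], preds[int(0.975 * len(preds))]
```

With noise-free synthetic data the interval collapses onto the true curve, a quick check that the resampling machinery is wired correctly.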

  1. Resampling methods in Microsoft Excel® for estimating reference intervals.

    Science.gov (United States)

    Theodorsson, Elvar

    2015-01-01

    Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles. 
The purpose of this paper is to introduce the reader to resampling estimation techniques in general and in using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular.
 Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to a Gaussian distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be drawn from the results of measurement of the reference samples.
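A resampling estimate of a reference interval, with the kind of percentile interpolation Excel's PERCENTILE.INC performs, might look as follows in Python (an illustrative translation, not the article's spreadsheet formulas):

```python
import random

def percentile(sorted_x, p):
    # Linear interpolation between order statistics at rank h = (n - 1) * p,
    # matching Excel's PERCENTILE.INC convention.
    n = len(sorted_x)
    h = (n - 1) * p
    lo = int(h)
    hi = min(lo + 1, n - 1)
    return sorted_x[lo] + (h - lo) * (sorted_x[hi] - sorted_x[lo])

def bootstrap_reference_interval(x, B=1000, seed=42):
    # Average the 2.5th and 97.5th percentile estimates over B bootstrap
    # resamples drawn with replacement from the reference sample.
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(B):
        s = sorted(rng.choice(x) for _ in range(len(x)))
        lows.append(percentile(s, 0.025))
        highs.append(percentile(s, 0.975))
    return sum(lows) / B, sum(highs) / B
```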

  2. Own-wage labor supply elasticities: variation across time and estimation methods

    Directory of Open Access Journals (Sweden)

    Olivier Bargain

    2016-10-01

Full Text Available There is a huge variation in the size of labor supply elasticities in the literature, which hampers policy analysis. While recent studies show that preference heterogeneity across countries explains little of this variation, we focus on two other important features: observation period and estimation method. We start with a thorough survey of existing evidence for both Western Europe and the USA, over a long period and from different empirical approaches. Then, our meta-analysis attempts to disentangle the roles of time changes and estimation methods. We highlight the key role of time changes, documenting the considerable fall in labor supply elasticities since the 1980s, not only in the USA but also in the EU. In contrast, we find no compelling evidence that the choice of estimation method explains variation in elasticity estimates. From our analysis, we derive important guidelines for policy simulations.

  3. New method and installation for rapid determination of radon diffusion coefficient in various materials

    International Nuclear Information System (INIS)

    Tsapalov, Andrey; Gulabyants, Loren; Livshits, Mihail; Kovler, Konstantin

    2014-01-01

The mathematical apparatus and the experimental installation for the rapid determination of the radon diffusion coefficient in various materials are developed. A single test lasts no longer than 18 h and allows testing of numerous materials, such as gaseous and liquid media, as well as soil, concrete and radon-proof membranes, in which the diffusion coefficient of radon may vary in an extremely wide range, from 1·10⁻¹² to 5·10⁻⁵ m²/s. The uncertainty of the radon diffusion coefficient estimate depends on the permeability of the sample and varies from about 5% (for the most permeable materials) to 40% (for less permeable materials, such as radon-proof membranes). - Highlights: • A new method and installation for determination of the radon diffusion coefficient D are developed. • The measured D-values vary in an extremely wide range, from 5×10⁻⁵ to 1×10⁻¹² m²/s. • The materials include water, air, soil, building materials and radon-proof membranes. • The duration of a single test does not exceed 18 hours. • The measurement uncertainty varies from 5% (in permeable materials) to 40% (in radon gas barriers)

  4. Rapid method to determine actinides and 89/90Sr in limestone and marble samples

    International Nuclear Information System (INIS)

    Maxwell, S.L.; Culligan, Brian; Hutchison, J.B.; Utsey, R.C.; Sudowe, Ralf; McAlister, D.R.

    2016-01-01

A new method for the determination of actinides and radiostrontium in limestone and marble samples has been developed that utilizes a rapid sodium hydroxide fusion to digest the sample. Following rapid pre-concentration steps to remove sample matrix interferences, the actinides and 89/90Sr are separated using extraction chromatographic resins and measured radiometrically. The advantages of sodium hydroxide fusion versus other fusion techniques are discussed. This approach has a sample preparation time of <4 h for limestone and marble samples. (author)

  5. Validation of Persian rapid estimate of adult literacy in dentistry.

    Science.gov (United States)

    Pakpour, Amir H; Lawson, Douglas M; Tadakamadla, Santosh K; Fridlund, Bengt

    2016-05-01

    The aim of the present study was to establish the psychometric properties of the Rapid Estimate of adult Literacy in Dentistry-99 (REALD-99) in the Persian language for use in an Iranian population (IREALD-99). A total of 421 participants with a mean age of 28 years (59% male) were included in the study. Participants included those who were 18 years or older and those residing in Quazvin (a city close to Tehran), Iran. A forward-backward translation process was used for the IREALD-99. The Test of Functional Health Literacy in Dentistry (TOFHLiD) was also administrated. The validity of the IREALD-99 was investigated by comparing the IREALD-99 across the categories of education and income levels. To further investigate, the correlation of IREALD-99 with TOFHLiD was computed. A principal component analysis (PCA) was performed on the data to assess unidimensionality and strong first factor. The Rasch mathematical model was used to evaluate the contribution of each item to the overall measure, and whether the data were invariant to differences in sex. Reliability was estimated with Cronbach's α and test-retest correlation. Cronbach's alpha for the IREALD-99 was 0.98, indicating strong internal consistency. The test-retest correlation was 0.97. IREALD-99 scores differed by education levels. IREALD-99 scores were positively related to TOFHLiD scores (rh = 0.72, P < 0.01). In addition, IREALD-99 showed positive correlation with self-rated oral health status (rh = 0.31, P < 0.01) as evidence of convergent validity. The PCA indicated a strong first component, five times the strength of the second component and nine times the third. The empirical data were a close fit with the Rasch mathematical model. There was not a significant difference in scores with respect to income level (P = 0.09), and only the very lowest income level was significantly different (P < 0.01). The IREALD-99 exhibited excellent reliability on repeated administrations, as well as internal

  6. A rapid method for the preparation of 99Tcm hexametazime-labelled leucocytes

    International Nuclear Information System (INIS)

    Solanki, K.K.; Mather, S.J.; Janabi, M.A.; Britton, K.E.

    1988-01-01

99Tcm (±)-hexamethylpropyleneamineoxime (HMPAO) (99Tcm hexametazime) has recently been reported as an alternative agent for labelling leucocytes. This technique has been modified to give a simpler routine in-house labelling technique. It has three advantages: only about 20 ml of blood is required, the labelling time is just under 1 h, and high yields of labelled leucocytes are obtained (mean of 500 MBq per injection dose). The properties of leucocytes labelled using this modified method, namely 80% granulocyte-bound radioactivity, a rapid lung transit and a blood granulocyte recovery of 40% at 30 min, are similar to those described previously. The viability of the labelled leucocytes was tested and confirmed in vitro using a migration technique and in vivo by showing no lung retention on early imaging and high splenic uptake. A rapid in-process chromatography assessment procedure for regulating the protocol has been developed. Successful abscess imaging by 4 h has been achieved in 21 patients, with normal results in another 22 patients without abscesses. This simpler method should encourage a more widespread application of scintigraphy using radiolabelled granulocytes. (author)

  7. EVALUATION OF METHODS FOR ESTIMATING FATIGUE PROPERTIES APPLIED TO STAINLESS STEELS AND ALUMINUM ALLOYS

    Directory of Open Access Journals (Sweden)

    Taylor Mac Intyer Fonseca Junior

    2013-12-01

Full Text Available This work evaluates seven methods for estimating fatigue properties as applied to stainless steels and aluminum alloys. Experimental strain-life curves are compared to the estimations obtained by each method. After applying the seven estimation methods to 14 material conditions, it was found that fatigue life can be estimated with good accuracy only by the Bäumel-Seeger method, and only for the martensitic stainless steel tempered between 300°C and 500°C. The differences between mechanical behavior under monotonic and cyclic loading are probably the reason for the absence of a reliable method for estimating fatigue behavior from monotonic properties for a group of materials.

  8. Methods to estimate historical daily streamflow for ungaged stream locations in Minnesota

    Science.gov (United States)

    Lorenz, David L.; Ziegeweid, Jeffrey R.

    2016-03-14

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water; however, streamgages cannot be installed at every location where streamflow information is needed. Therefore, methods for estimating streamflow at ungaged stream locations need to be developed. This report presents a statewide study to develop methods to estimate the structure of historical daily streamflow at ungaged stream locations in Minnesota. Historical daily mean streamflow at ungaged locations in Minnesota can be estimated by transferring streamflow data at streamgages to the ungaged location using the QPPQ method. The QPPQ method uses flow-duration curves at an index streamgage, relying on the assumption that exceedance probabilities are equivalent between the index streamgage and the ungaged location, and estimates the flow at the ungaged location using the estimated flow-duration curve. Flow-duration curves at ungaged locations can be estimated using recently developed regression equations that have been incorporated into StreamStats (http://streamstats.usgs.gov/), which is a U.S. Geological Survey Web-based interactive mapping tool that can be used to obtain streamflow statistics, drainage-basin characteristics, and other information for user-selected locations on streams.
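The QPPQ transfer described above can be sketched compactly: map each daily flow at the index streamgage to an exceedance probability via its flow-duration curve, then read the ungaged site's flow-duration curve at that probability. The plotting-position choice and the callable FDC are our assumptions for illustration; in practice the ungaged FDC would come from the StreamStats regression equations:

```python
def qppq_estimate(index_flows, ungaged_fdc):
    # Map each daily flow at the index gage to an exceedance probability
    # (Weibull plotting position; ties share the highest rank), then read
    # the ungaged site's flow-duration curve at the same probability.
    n = len(index_flows)
    ranked = sorted(index_flows, reverse=True)
    est = []
    for q in index_flows:
        rank = ranked.index(q) + 1       # 1 = highest flow on record
        p = rank / (n + 1)               # exceedance probability
        est.append(ungaged_fdc(p))
    return est
```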

  9. Method for developing cost estimates for generic regulatory requirements

    International Nuclear Information System (INIS)

    1985-01-01

    The NRC has established a practice of performing regulatory analyses, reflecting costs as well as benefits, of proposed new or revised generic requirements. A method has been developed to assist the NRC in preparing the types of cost estimates required for this purpose and for assigning priorities in the resolution of generic safety issues. The cost of a generic requirement is defined as the net present value of the total lifetime cost incurred by the public, industry, and government in implementing the requirement for all affected plants. The method described here is for commercial light-water-reactor power plants. Estimating the cost for a generic requirement involves several steps: (1) identifying the activities that must be carried out to fully implement the requirement, (2) defining the work packages associated with the major activities, (3) identifying the individual elements of cost for each work package, (4) estimating the magnitude of each cost element, (5) aggregating individual plant costs over the plant lifetime, and (6) aggregating all plant costs and generic costs to produce a total, national, present value of lifetime cost for the requirement. The method developed addresses all six steps. In this paper, we discuss the first three.
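Steps (5) and (6) amount to discounting and summing lifetime costs. A minimal sketch of that aggregation, with a hypothetical discount rate and cost stream (the abstract does not specify either):

```python
def present_value(costs_by_year, rate=0.05):
    """Discount a stream of yearly costs to a net present value."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(costs_by_year))

# Hypothetical per-plant costs in year 0, 1, and 2 (arbitrary units)
plant_costs = [100.0, 50.0, 50.0]
npv = present_value(plant_costs, rate=0.10)
```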

  10. A Comparative Study of Potential Evapotranspiration Estimation by Eight Methods with FAO Penman–Monteith Method in Southwestern China

    Directory of Open Access Journals (Sweden)

    Dengxiao Lang

    2017-09-01

    Potential evapotranspiration (PET) is crucial for water resources assessment. In this regard, the FAO (Food and Agriculture Organization) Penman–Monteith method (PM) is commonly recognized as a standard method for PET estimation. However, due to its requirement for detailed meteorological data, the application of PM is often constrained in many regions. Under such circumstances, an alternative method with similar efficiency to that of PM needs to be identified. In this study, three radiation-based methods, Makkink (Mak), Abtew (Abt), and Priestley–Taylor (PT), and five temperature-based methods, Hargreaves–Samani (HS), Thornthwaite (Tho), Hamon (Ham), Linacre (Lin), and Blaney–Criddle (BC), were compared with PM at yearly and seasonal scale, using long-term (50 years) data from 90 meteorology stations in southwest China. Indicators, viz. Nash–Sutcliffe efficiency (NSE), relative error (Re), normalized root mean squared error (NRMSE), and coefficient of determination (R2), were used to evaluate the performance of PET estimations by the above-mentioned eight methods. The results showed that the performance of the methods in PET estimation varied among regions; HS, PT, and Abt overestimated PET, while the others underestimated it. In the Sichuan basin, Mak, Abt, and HS yielded estimates similar to those of PM, while in the Yun-Gui plateau, Abt, Mak, HS, and PT showed better performances. Mak performed the best in the east Tibetan Plateau at yearly and seasonal scale, while HS showed a good performance in summer and autumn. In the arid river valley, HS, Mak, and Abt performed better than the others. On the other hand, Tho, Ham, Lin, and BC could not be used to estimate PET in some regions. In general, the radiation-based methods performed better than the temperature-based methods in the study area. Among the radiation-based methods, Mak performed the best, while HS showed the best performance among the temperature-based methods.
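The comparison indicators can be computed directly from paired series. A sketch with made-up PM and alternative-method values, assuming the usual textbook forms of NSE, mean relative error, and mean-normalized RMSE (the paper's exact formulations may differ):

```python
import numpy as np

def nse(obs, sim):
    """Nash–Sutcliffe efficiency: 1 means a perfect match."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def relative_error(obs, sim):
    """Mean relative error of simulated vs. observed values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean((sim - obs) / obs)

def nrmse(obs, sim):
    """Root mean squared error normalized by the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((sim - obs) ** 2)) / obs.mean()

pm = [3.1, 4.0, 5.2, 2.8]   # hypothetical PM "reference" PET (mm/day)
alt = [3.0, 4.4, 5.0, 3.1]  # hypothetical alternative-method PET
score = nse(pm, alt)
```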

  11. Rapid column extraction method for actinides and strontium in fish and other animal tissue samples

    International Nuclear Information System (INIS)

    Maxwell III, S.L.; Faison, D.M.

    2008-01-01

    The analysis of actinides and radiostrontium in animal tissue samples is very important for environmental monitoring. There is a need to measure actinide isotopes and strontium with very low detection limits in animal tissue samples, including fish, deer, hogs, beef and shellfish. A new, rapid separation method has been developed that allows the measurement of plutonium, neptunium, uranium, americium, curium and strontium isotopes in large animal tissue samples (100-200 g) with high chemical recoveries and effective removal of matrix interferences. This method uses stacked TEVA Resin®, TRU Resin® and DGA Resin® cartridges from Eichrom Technologies (Darien, IL, USA) that allow the rapid separation of plutonium (Pu), neptunium (Np), uranium (U), americium (Am), and curium (Cm) using a single multi-stage column combined with alpha spectrometry. Strontium is collected on Sr Resin® from Eichrom Technologies (Darien, IL, USA). After acid digestion and furnace heating of the animal tissue samples, the actinides and 89/90Sr are separated using column extraction chromatography. This method has been shown to be effective over a wide range of animal tissue matrices. Vacuum box cartridge technology with rapid flow rates is used to minimize sample preparation time. (author)

  12. Comparing different methods for estimating radiation dose to the conceptus

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Rendon, X.; Dedulle, A. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); Walgraeve, M.S.; Woussen, S.; Zhang, G. [University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Bosmans, H. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Zanca, F. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); GE Healthcare, Buc (France)

    2017-02-15

    To compare different methods available in the literature for estimating radiation dose to the conceptus (D_conceptus) against a patient-specific Monte Carlo (MC) simulation and a commercial software package (CSP). Eight voxel models from abdominopelvic CT exams of pregnant patients were generated. D_conceptus was calculated with an MC framework including patient-specific longitudinal tube current modulation (TCM). For the same patients, dose to the uterus, D_uterus, was calculated as an alternative for D_conceptus, with a CSP that uses a standard-size, non-pregnant phantom and a generic TCM curve. The percentage error between D_uterus and D_conceptus was studied. Dose to the conceptus and the percentage error with respect to D_conceptus was also estimated for three methods in the literature. The percentage error ranged from -15.9% to 40.0% when comparing MC to CSP. When comparing the TCM profiles with the generic TCM profile from the CSP, differences were observed due to patient habitus and conceptus position. For the other methods, the percentage error ranged from -30.1% to 13.5%, but applicability was limited. Estimating an accurate D_conceptus requires a patient-specific approach that the CSP investigated cannot provide. Available methods in the literature can provide a better estimation if applicable to patient-specific cases. (orig.)

  13. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    Science.gov (United States)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, we must have some understanding of the lithological composition of the subsurface. Because of practical constraints, only a limited amount of data can be acquired. To find out the lithological distribution in a study area, many spatial statistical methods have been used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model, applying the limited hard data from cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  14. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-08

    Partial Differential Equations (PDEs) are commonly used to model complex systems that arise, for example, in biology, engineering, and chemistry. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs, which can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and stopping condition, and they suffer from a lack of robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with less computational burden has been proposed in engineering fields based on the so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating-function-based approaches. We also propose to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM), which includes its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters.
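As a toy illustration of the modulating-function idea (an ODE stand-in, not the dissertation's algorithm): to estimate the coefficient a in y' + a·y = 0 from sampled data, multiply by a function φ that vanishes at both endpoints and integrate by parts, which eliminates y' and yields a = ∫φ'y dt / ∫φy dt. The grid size and choice of φ below are arbitrary.

```python
import numpy as np

def trap(f, x):
    """Trapezoidal quadrature of sampled values f over grid x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

T = 1.0
t = np.linspace(0.0, T, 2001)
a_true = 2.0
y = np.exp(-a_true * t)                  # synthetic noise-free "measurement"

phi = t**2 * (T - t)**2                  # modulating function, phi(0)=phi(T)=0
dphi = 2*t*(T - t)**2 - 2*t**2*(T - t)   # its exact derivative

# Integration by parts: int(phi*y') = -int(phi'*y), so
# the ODE phi*(y' + a*y) integrated over [0,T] gives:
a_est = trap(dphi * y, t) / trap(phi * y, t)
```

Note that no numerical differentiation of the measured signal is needed, which is the non-asymptotic, noise-robust appeal the abstract describes.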

  15. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    International Nuclear Information System (INIS)

    Kato, Fumi; Kamishima, Tamotsu; Morita, Ken; Muto, Natalia S.; Okamoto, Syozou; Omatsu, Tokuhiko; Oyama, Noriko; Terae, Satoshi; Kanegae, Kakuko; Nonomura, Katsuya; Shirato, Hiroki

    2011-01-01

    Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using newly developed software for computed tomographic (CT) volumetry and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.
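The SRV itself is just a volume ratio. A minimal sketch with hypothetical left/right volumes (the study's volumes are not given in the abstract):

```python
def split_renal_volume(left_ml, right_ml):
    """Split renal volume (%): each unilateral volume over the bilateral total."""
    total = left_ml + right_ml
    return 100.0 * left_ml / total, 100.0 * right_ml / total

left_pct, right_pct = split_renal_volume(152.0, 148.0)
```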

  16. Use of refractometry and colorimetry as field methods to rapidly assess antimalarial drug quality.

    Science.gov (United States)

    Green, Michael D; Nettey, Henry; Villalva Rojas, Ofelia; Pamanivong, Chansapha; Khounsaknalath, Lamphet; Grande Ortiz, Miguel; Newton, Paul N; Fernández, Facundo M; Vongsack, Latsamy; Manolin, Ot

    2007-01-04

    The proliferation of counterfeit and poor-quality drugs is a major public health problem; especially in developing countries lacking adequate resources to effectively monitor their prevalence. Simple and affordable field methods provide a practical means of rapidly monitoring drug quality in circumstances where more advanced techniques are not available. Therefore, we have evaluated refractometry, colorimetry and a technique combining both processes as simple and accurate field assays to rapidly test the quality of the commonly available antimalarial drugs; artesunate, chloroquine, quinine, and sulfadoxine. Method bias, sensitivity, specificity and accuracy relative to high-performance liquid chromatographic (HPLC) analysis of drugs collected in the Lao PDR were assessed for each technique. The HPLC method for each drug was evaluated in terms of assay variability and accuracy. The accuracy of the combined method ranged from 0.96 to 1.00 for artesunate tablets, chloroquine injectables, quinine capsules, and sulfadoxine tablets while the accuracy was 0.78 for enterically coated chloroquine tablets. These techniques provide a generally accurate, yet simple and affordable means to assess drug quality in resource-poor settings.
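The reported sensitivity, specificity, and accuracy follow from a confusion matrix against the HPLC reference. The counts below are invented for illustration, not taken from the study:

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy of a field test
    relative to a reference assay (here, HPLC)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 48 true positives, 1 false positive, etc.
sens, spec, acc = screening_metrics(tp=48, fp=1, tn=49, fn=2)
```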

  17. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma priors are used for failure rate estimation, and Binomial data with beta priors are used for estimation of the failure probability per demand. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well as, if not better than, the alternative, more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
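The gamma-Poisson conjugate update at the core of such Bayesian failure-rate estimation can be sketched in its textbook single-stage form (not the full PREB machinery; the prior parameters below are hypothetical):

```python
def posterior_failure_rate(alpha, beta, failures, exposure_time):
    """Conjugate gamma(alpha, beta) prior with a Poisson likelihood:
    posterior mean and variance of a failure rate after observing
    `failures` events in `exposure_time` (e.g., component-years)."""
    a_post = alpha + failures
    b_post = beta + exposure_time
    return a_post / b_post, a_post / b_post ** 2

# Zero observed failures still yields a finite, prior-informed estimate,
# one of the demanding cases the abstract highlights.
mean, var = posterior_failure_rate(alpha=0.5, beta=100.0,
                                   failures=0, exposure_time=50.0)
```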

  18. Comparison of estimation methods for fitting weibull distribution to ...

    African Journals Online (AJOL)

    Comparison of estimation methods for fitting weibull distribution to the natural stand of Oluwa Forest Reserve, Ondo State, Nigeria. ... Journal of Research in Forestry, Wildlife and Environment ... The result revealed that maximum likelihood method was more accurate in fitting the Weibull distribution to the natural stand.

  19. Fill rate estimation in periodic review policies with lost sales using simple methods

    Energy Technology Data Exchange (ETDEWEB)

    Cardós, M.; Guijarro Tarradellas, E.; Babiloni Griñón, E.

    2016-07-01

    Purpose: The exact estimation of the fill rate in the lost sales case is complex and time consuming. However, simple and suitable methods are needed for its estimation so that inventory managers can use them. Design/methodology/approach: Instead of trying to compute the fill rate in one step, this paper focuses first on estimating the probabilities of different on-hand stock levels, from which the fill rate is computed afterwards. Findings: As a result, a novel proposed method outperforms the other methods and is relatively simple to compute. Originality/value: Existing methods for estimating stock levels are examined, new procedures are proposed and their performance is assessed.

  20. Dental age estimation using Willems method: A digital orthopantomographic study

    Directory of Open Access Journals (Sweden)

    Rezwana Begum Mohammed

    2014-01-01

    In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who fit the study criteria were obtained. Development of the mandibular teeth (from central incisor to second molar) in the left quadrant was assessed and DA was estimated using the Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P 0.05). The Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years, and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years, which is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular teeth development can be used to generate a mean DA using the Willems method and also an estimated age range for an individual of unknown CA.

  1. Method for rapidly determining a pulp kappa number using spectrophotometry

    Science.gov (United States)

    Chai, Xin-Sheng; Zhu, Jun Yong

    2002-01-01

    A system and method for rapidly determining the pulp kappa number through direct measurement of the potassium permanganate concentration in a pulp-permanganate solution using spectrophotometry. Specifically, the present invention uses strong acidification to carry out the pulp-permanganate oxidation reaction in the pulp-permanganate solution to prevent the precipitation of manganese dioxide (MnO2). Consequently, spectral interference from the precipitated MnO2 is eliminated and the oxidation reaction becomes dominant. The spectral intensity of the oxidation reaction is then analyzed to determine the pulp kappa number.

  2. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    Science.gov (United States)

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
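Once PSD-based attenuation values are in hand, the power-law parameters α0 and β are typically recovered by a linear fit in log-log space, since log α = log α0 + β·log f. A sketch on noise-free synthetic data (the frequencies and true parameters are invented for illustration):

```python
import numpy as np

def fit_attenuation_power_law(freqs_mhz, alpha_db_cm):
    """Fit alpha(f) = alpha0 * f**beta by linear regression in log space."""
    log_f = np.log(freqs_mhz)
    log_a = np.log(alpha_db_cm)
    beta, log_alpha0 = np.polyfit(log_f, log_a, 1)  # slope, intercept
    return np.exp(log_alpha0), beta

f = np.array([2.0, 4.0, 6.0, 8.0])   # MHz
alpha = 0.5 * f ** 1.1               # synthetic attenuation, dB/cm
alpha0, beta = fit_attenuation_power_law(f, alpha)
```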

  3. Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method

    OpenAIRE

    Hadi Farhadian; Homayoon Katibeh

    2015-01-01

    In this paper, groundwater seepage into the Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method was also applied for qualitative and quantitative classification of the tunnel sections. The results of the above-mentioned methods were compared. The study shows reasonable agreement among the results of all methods except for two sections of the tunnel. In these t...

  4. Methods and Magnitudes of Rapid Weight Loss in Judo Athletes Over Pre-Competition Periods

    Directory of Open Access Journals (Sweden)

    Kons Rafael Lima

    2017-06-01

    Purpose. The study aimed to analyse the methods and magnitudes of rapid weight loss (RWL) in judo team members in distinct periods before the biggest state competition in Southern Brazil.

  5. Numerical method for estimating the size of chaotic regions of phase space

    International Nuclear Information System (INIS)

    Henyey, F.S.; Pomphrey, N.

    1987-10-01

    A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs

  6. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2013-01-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates.

  7. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.

  8. Stress estimation in reservoirs using an integrated inverse method

    Science.gov (United States)

    Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre

    2018-05-01

    Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. The disregard of the geological history and the simplified rheological assumptions mean that only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.

  9. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    Science.gov (United States)

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Publications have reported actual lifespans of products; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
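The simplification described, fixing the shape parameter so that only the average lifespan is needed, can be sketched assuming a Weibull lifespan distribution (a common choice for product lifespans; the abstract does not state the paper's exact form, and the shape value 2.5 is a hypothetical stand-in, not the paper's fitted constant):

```python
import math

def weibull_scale_from_mean(mean_lifespan, shape=2.5):
    """With the Weibull shape fixed, the scale parameter follows from the
    average lifespan alone: mean = scale * Gamma(1 + 1/shape)."""
    return mean_lifespan / math.gamma(1.0 + 1.0 / shape)

# Hypothetical average lifespan of 14 years for one country-year
scale = weibull_scale_from_mean(14.0, shape=2.5)
```

This is what makes the simplified method attractive: the full age profile is no longer required once the shape is held constant.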

  10. Bed Evolution under Rapidly Varying Flows by a New Method for Wave Speed Estimation

    Directory of Open Access Journals (Sweden)

    Khawar Rehman

    2016-05-01

    Full Text Available This paper proposes a sediment-transport model based on coupled Saint-Venant and Exner equations. A finite volume method of Godunov type with predictor-corrector steps is used to solve a set of coupled equations. An efficient combination of approximate Riemann solvers is proposed to compute fluxes associated with sediment-laden flow. In addition, a new method is proposed for computing the water depth and velocity values along the shear wave. This method ensures smooth solutions, even for flows with high discontinuities, and on domains with highly distorted grids. The numerical model is tested for channel aggradation on a sloping bottom, dam-break cases at flume-scale and reach-scale with flat bottom configurations and varying downstream water depths. The proposed model is tested for predicting the position of hydraulic jump, wave front propagation, and for predicting magnitude of bed erosion. The comparison between results based on the proposed scheme and analytical, experimental, and published numerical results shows good agreement. Sensitivity analysis shows that the model is computationally efficient and virtually independent of mesh refinement.

  11. Methods for design flood estimation in South Africa

    African Journals Online (AJOL)

    2012-07-04

    Jul 4, 2012 ... 1970s and are in need of updating with more than 40 years of additional data ... This paper reviews methods used for design flood estimation in South Africa and ... transposition of past experience, or a deterministic approach ...

  12. Reliability of Estimation Pile Load Capacity Methods

    Directory of Open Access Journals (Sweden)

    Yudhi Lastiasih

    2014-04-01

    It is not known how accurate any of the numerous previous methods for predicting pile capacity are when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Out of these 130 data sets, only 44 could be analysed, of which 15 were conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's extrapolation method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the quadratic hyperbolic method proposed by Lastiasih et al. (2012). It was found that all the above methods were sufficiently reliable when applied to data from pile loading tests loaded to failure. However, when applied to data from pile loading tests that did not reach failure, the methods that yielded lower values for the correction factor N are more recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to estimate the Qult of a pile foundation based on soil data only.
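Chin's (1970) method, one of those compared, assumes a hyperbolic load-settlement curve: plotting settlement/load against settlement gives a straight line whose inverse slope estimates the ultimate capacity. A sketch on synthetic data (values invented for illustration):

```python
import numpy as np

def chin_ultimate_capacity(settlement_mm, load_kn):
    """Chin's hyperbolic extrapolation: regress s/Q against s;
    the inverse of the slope estimates the ultimate pile capacity."""
    s = np.asarray(settlement_mm, dtype=float)
    q = np.asarray(load_kn, dtype=float)
    slope, intercept = np.polyfit(s, s / q, 1)
    return 1.0 / slope

# Synthetic hyperbolic load-settlement data with Qult = 2000 kN
s = np.array([2.0, 5.0, 10.0, 20.0, 35.0])
q = s / (0.01 + s / 2000.0)
q_ult = chin_ultimate_capacity(s, q)
```

Because the extrapolation does not require the test to reach failure, such methods are exactly the ones whose reliability the study set out to check.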

  13. Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data

    Directory of Open Access Journals (Sweden)

    Wei-Kuang Lai

    2016-02-01

    Full Text Available Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze the signals (e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs) and periodic location updates (PLUs)) from cellular networks. For traffic information estimation, analytic models are proposed to estimate the traffic flow in accordance with the amounts of HOs and NLUs and to estimate the traffic density in accordance with the amounts of CAs and PLUs. Then, the vehicle speeds can be estimated in accordance with the estimated traffic flows and estimated traffic densities. For vehicle speed forecasting, a back-propagation neural network algorithm is considered to predict the future vehicle speed in accordance with the current traffic information (i.e., the estimated vehicle speeds from CFVD). In the experimental environment, this study adopted the practical traffic information (i.e., traffic flow and vehicle speed) from Taiwan Area National Freeway Bureau as the input characteristics of the traffic simulation program and referred to the mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results illustrated that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for an intelligent transportation system.
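Once flow and density have been estimated from the signalling counts, the speed estimate follows from the fundamental traffic relation speed = flow / density. A minimal sketch, with placeholder conversion factors rather than the paper's calibrated analytic models:

```python
def estimate_speed(flow_veh_per_h: float, density_veh_per_km: float) -> float:
    """Fundamental traffic relation: space-mean speed (km/h) = flow / density."""
    return flow_veh_per_h / density_veh_per_km

# Hypothetical values: converting HO/NLU counts to flow and CA/PLU counts to
# density uses placeholder factors here, not the paper's fitted models.
handovers_per_hour = 1800.0
vehicles_per_handover = 1.0            # assumed calibration factor
flow = handovers_per_hour * vehicles_per_handover   # veh/h
density = 30.0                          # veh/km, e.g. derived from CAs and PLUs
print(estimate_speed(flow, density))    # 60.0 km/h
```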

  14. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    Science.gov (United States)

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS), values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating the anticipated better performance of the pp-LFER model over COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.

  15. A review of models and micrometeorological methods used to estimate wetland evapotranspiration

    Science.gov (United States)

    Drexler, J.Z.; Snyder, R.L.; Spano, D.; Paw, U.K.T.

    2004-01-01

    Within the past decade or so, the accuracy of evapotranspiration (ET) estimates has improved due to new and increasingly sophisticated methods. Yet despite a plethora of choices concerning methods, estimation of wetland ET remains insufficiently characterized due to the complexity of surface characteristics and the diversity of wetland types. In this review, we present models and micrometeorological methods that have been used to estimate wetland ET and discuss their suitability for particular wetland types. Hydrological, soil monitoring and lysimetric methods to determine ET are not discussed. Our review shows that, due to the variability and complexity of wetlands, there is no single approach that is the best for estimating wetland ET. Furthermore, there is no single foolproof method to obtain an accurate, independent measure of wetland ET. Because all of the methods reviewed, with the exception of eddy covariance and LIDAR, require measurements of net radiation (Rn) and soil heat flux (G), highly accurate measurements of these energy components are key to improving measurements of wetland ET. Many of the major methods used to determine ET can be applied successfully to wetlands of uniform vegetation and adequate fetch, however, certain caveats apply. For example, with accurate Rn and G data and small Bowen ratio (β) values, the Bowen ratio energy balance method can give accurate estimates of wetland ET. However, large errors in latent heat flux density can occur near sunrise and sunset when the Bowen ratio β → -1.0. The eddy covariance method provides a direct measurement of latent heat flux density (λE) and sensible heat flux density (H), yet this method requires considerable expertise and expensive instrumentation to implement. A clear advantage of using the eddy covariance method is that λE can be compared with Rn - G - H, thereby allowing for an independent test of accuracy. The surface renewal method is inexpensive to replicate and, therefore, shows
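The Bowen ratio energy balance partition mentioned in the review can be sketched in a few lines; the midday values below are illustrative, not measurements from any of the reviewed studies, and the sketch also makes clear why the estimate degenerates near sunrise and sunset:

```python
# Bowen ratio energy balance: the available energy Rn - G is partitioned into
# latent heat flux LE = (Rn - G) / (1 + beta) and sensible heat flux H = beta * LE.
# All numbers below are illustrative (hypothetical midday values over a wetland).
Rn = 500.0    # net radiation, W/m^2
G = 50.0      # soil heat flux, W/m^2
beta = 0.3    # Bowen ratio H/LE, from temperature and vapour-pressure gradients

LE = (Rn - G) / (1.0 + beta)   # blows up as beta -> -1 (sunrise/sunset problem)
H = beta * LE
print(LE, H)                   # ≈ 346.2 and 103.8 W/m^2; LE + H = Rn - G
```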

  16. Guideline for Bayesian Net based Software Fault Estimation Method for Reactor Protection System

    International Nuclear Information System (INIS)

    Eom, Heung Seop; Park, Gee Yong; Jang, Seung Cheol

    2011-01-01

    The purpose of this paper is to provide a preliminary guideline for the estimation of software faults in safety-critical software, for example, the software of a reactor protection system. As the fault estimation method is based on a Bayesian net, which makes intensive use of subjective probability and informal data, it is necessary to define a formal procedure for the method in order to minimize the variability of the results. The guideline describes the assumptions, limitations, uncertainties, and products of the fault estimation method. The procedure for conducting a software fault estimation is then outlined, highlighting the major tasks involved. The contents of the guideline are based on our own experience and a review of research guidelines developed for a PSA.

  17. GSMA: Gene Set Matrix Analysis, An Automated Method for Rapid Hypothesis Testing of Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Chris Cheadle

    2007-01-01

    Full Text Available Background: Microarray technology has become highly valuable for identifying complex global changes in gene expression patterns. The assignment of functional information to these complex patterns remains a challenging task in effectively interpreting data and correlating results across experiments, projects and laboratories. Methods which allow the rapid and robust evaluation of multiple functional hypotheses increase the power of individual researchers to data mine gene expression data more efficiently. Results: We have developed gene set matrix analysis (GSMA) as a useful method for the rapid testing of group-wise up- or downregulation of gene expression simultaneously for multiple lists of genes (gene sets) against entire distributions of gene expression changes (datasets) for single or multiple experiments. The utility of GSMA lies in its flexibility to rapidly poll gene sets related by known biological function, or as designated solely by the end-user, against large numbers of datasets simultaneously. Conclusions: GSMA provides a simple and straightforward method for hypothesis testing in which genes are tested by groups across multiple datasets for patterns of expression enrichment.
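The core operation, scoring each gene set against a whole distribution of expression changes, can be sketched as a permutation test. This is an illustrative implementation of the idea on synthetic data, not the GSMA software itself; all gene names, set names and effect sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dataset: log2 fold-changes for 5000 genes in one experiment.
genes = np.array([f"g{i}" for i in range(5000)])
log2fc = rng.normal(0.0, 1.0, size=5000)
log2fc[:50] += 1.5  # genes g0..g49 are truly up-regulated

# Hypothetical gene sets (the rows of a "gene set matrix").
gene_sets = {
    "up_set": list(genes[:50]),
    "random_set": list(rng.choice(genes, 50, replace=False)),
}

index = {g: i for i, g in enumerate(genes)}
results = {}
for name, members in gene_sets.items():
    scores = log2fc[[index[g] for g in members]]
    # Permutation z-score: set mean vs. means of random sets of equal size.
    null = np.array([rng.choice(log2fc, len(members), replace=False).mean()
                     for _ in range(2000)])
    results[name] = (scores.mean() - null.mean()) / null.std()
    print(f"{name}: mean log2FC={scores.mean():+.2f}, z={results[name]:+.1f}")
```

The truly enriched set scores a large positive z, while the random set stays near zero; GSMA applies this kind of group-wise test across many gene sets and many datasets at once.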

  18. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to carry out structural reliability analysis, as the failure probability can then be easily obtained by integration over the failure domain. However, efficiently estimating the PDF is still an urgent problem to be solved. The existing fractional-moment-based maximum entropy approach provides a very advanced method for PDF estimation, but its main shortcoming is that it limits the reliability analysis to structures with independent inputs. In fact, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method by applying the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; UT is a very efficient moment estimation method for models with any inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Moreover, the number of function evaluations required by the proposed method in reliability analysis, which is determined by UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
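To make the moment-estimation step concrete, here is a minimal sketch of computing a fractional moment E[|g(X)|^α] with the standard 2n+1 sigma points of the unscented transformation, for correlated Gaussian inputs. The performance function, the input moments and the scaling parameter are illustrative assumptions, not the paper's examples:

```python
import numpy as np

def unscented_fractional_moment(mean, cov, g, alpha, kappa=1.0):
    """Estimate E[|g(X)|^alpha] for X ~ N(mean, cov) using the 2n+1 sigma
    points of the basic unscented transformation (simple kappa scaling)."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)          # sigma-point spread
    sigma_pts = ([mean]
                 + [mean + L[:, i] for i in range(n)]
                 + [mean - L[:, i] for i in range(n)])
    weights = [kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n)
    return sum(w * abs(g(x)) ** alpha for w, x in zip(weights, sigma_pts))

# Hypothetical performance function with correlated inputs.
mean = np.array([3.0, 2.0])
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])
g = lambda x: x[0] + x[1]   # linear and positive at all sigma points here
m1 = unscented_fractional_moment(mean, cov, g, alpha=1.0)
print(m1)  # ≈ 5.0, the exact first moment E[x1 + x2]
```

Only 2n+1 evaluations of g are needed per moment, which is the source of the efficiency claim in the abstract; fractional values of α are handled the same way.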

  19. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  20. Estimation of deuterium content in organic compounds by mass spectrometric methods

    International Nuclear Information System (INIS)

    Dave, S.M.; Goomer, N.C.

    1979-01-01

    Many organic compounds are finding increasing importance in the heavy water enrichment programme. New methods based on quantitative chemical conversion have been developed and standardized for estimating the deuterium content of the exchanging organic molecules by mass spectrometry. The methods have been selected in such a way that the deuterium content of both the exchangeable and the total hydrogens in the molecule can be conveniently estimated. (auth.)

  1. NEW COMPLETENESS METHODS FOR ESTIMATING EXOPLANET DISCOVERIES BY DIRECT DETECTION

    International Nuclear Information System (INIS)

    Brown, Robert A.; Soummer, Remi

    2010-01-01

    We report on new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization) and an estimate of the occurrence rate of planets (η). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide tools to define a mission for a particular science goal; for example, a mission can be defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approach also provides other critical mission attributes, such as the number of stars expected to be searched and the probability of zero discoveries. Because these attributes depend strongly on the mission scale (telescope diameter, observing capabilities and constraints, mission lifetime, etc.), our methods are directly applicable to the design of such future missions and provide guidance to the mission and instrument design based on scientific performance. We illustrate our new methods with practical calculations and exploratory design reference missions for the James Webb Space Telescope (JWST) operating with a distant starshade to reduce scattered and diffracted starlight on the focal plane. We estimate that five habitable Earth-mass planets would be discovered and characterized with spectroscopy, with a probability of zero discoveries of 0.004, assuming a small fraction of JWST observing time (7%), η = 0.3, and 70 observing visits, limited by starshade fuel.

  2. ESTIMATING RISK ON THE CAPITAL MARKET WITH VaR METHOD

    Directory of Open Access Journals (Sweden)

    Sinisa Bogdan

    2015-06-01

    Full Text Available The two basic questions that every investor tries to answer before investing concern expected return and risk. Risk and return are generally considered to be positively correlated: as risk grows, a higher return is expected to compensate for it. The quantification of risk in the capital market has been a central topic ever since securities first appeared; together with estimated future returns, it is the starting point of any investment. This study describes the history of the emergence of VaR methods and their usefulness in assessing the risks of financial assets. Three main Value at Risk (VaR) methodologies are described and explained in detail: the historical method, the parametric method and the Monte Carlo method. After the theoretical review of VaR methods, the risk of liquid stocks and of a portfolio from the Croatian capital market is estimated with the historical and parametric VaR methods, after which the results are compared and explained.
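Of the three methodologies, the historical and parametric approaches are simple enough to sketch in a few lines. The daily return series below is synthetic; the study itself used Croatian capital-market data:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
# Hypothetical daily returns of a single liquid stock (synthetic data).
returns = rng.normal(loc=0.0004, scale=0.012, size=1000)

alpha = 0.01  # tail probability for a 99% confidence level

# Historical VaR: the empirical quantile of the observed return distribution.
var_hist = -np.quantile(returns, alpha)

# Parametric (variance-covariance) VaR: assumes normally distributed returns
# and uses only the sample mean and standard deviation.
z = NormalDist().inv_cdf(alpha)  # ≈ -2.33
var_param = -(returns.mean() + z * returns.std(ddof=1))

print(f"99% one-day historical VaR: {var_hist:.4f}")
print(f"99% one-day parametric VaR: {var_param:.4f}")
```

Both numbers are losses expressed as positive fractions of portfolio value; the Monte Carlo method replaces the empirical or normal return distribution with simulated scenarios.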

  3. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean evaluation suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling in which four alternative models are postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used in a wide range of environmental problems for model uncertainty quantification.
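The heating-coefficient idea is the one used in thermodynamic integration (path sampling): the log marginal likelihood equals the integral over β of the expected log-likelihood under the power posterior p_β ∝ L^β · prior. A minimal sketch on a toy conjugate model where the exact answer is known; the model and all numbers are illustrative, not the groundwater case, and the power posteriors are sampled directly instead of by MCMC:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: y ~ N(theta, 1) with prior theta ~ N(0, 1).
y = 1.0

def log_likelihood(theta):
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2

# Power posterior p_beta(theta) ∝ L(theta)^beta * prior(theta) is Gaussian
# here, so it can be sampled directly; a real problem runs MCMC at each beta.
betas = np.linspace(0.0, 1.0, 21)       # beta = 0: prior, beta = 1: posterior
expectations = []
for beta in betas:
    var = 1.0 / (beta + 1.0)
    theta = rng.normal(beta * y * var, np.sqrt(var), size=20000)
    expectations.append(log_likelihood(theta).mean())

# Thermodynamic integration: log Z = integral over [0,1] of E_beta[log L].
expectations = np.array(expectations)
log_z = np.sum((expectations[:-1] + expectations[1:]) / 2 * np.diff(betas))

# Exact marginal likelihood: integrating theta out gives y ~ N(0, 2).
log_z_exact = -0.5 * np.log(4 * np.pi) - y ** 2 / 4.0
print(log_z, log_z_exact)  # the two values agree closely
```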

  4. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    Science.gov (United States)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.

  5. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Science.gov (United States)

    Junkes, Monica C; Fraiz, Fabian C; Sardenberg, Fernanda; Lee, Jessica Y; Paiva, Saul M; Ferreira, Fernanda M

    2015-01-01

    The aim of the present study was to translate the Rapid Estimate of Adult Literacy in Dentistry into Brazilian Portuguese, perform its cross-cultural adaptation, and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of the Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.

  6. VHTRC experiment for verification test of H∞ reactivity estimation method

    International Nuclear Information System (INIS)

    Fujii, Yoshio; Suzuki, Katsuo; Akino, Fujiyoshi; Yamane, Tsuyoshi; Fujisaki, Shingo; Takeuchi, Motoyoshi; Ono, Toshihiko

    1996-02-01

    This experiment was performed at VHTRC to acquire data for verifying the H∞ reactivity estimation method. In this report, the experimental method, the measuring circuits and the data processing software are described in detail. (author)

  7. A method to estimate stellar ages from kinematical data

    Science.gov (United States)

    Almeida-Fernandes, F.; Rocha-Pinto, H. J.

    2018-05-01

    We present a method to build a probability density function (PDF) for the age of a star based on its peculiar velocities U, V, and W and its orbital eccentricity. The sample used in this work comes from the Geneva-Copenhagen Survey (GCS), which contains the spatial velocities, orbital eccentricities, and isochronal ages for about 14 000 stars. Using the GCS stars, we fitted the parameters that describe the relations between the distributions of kinematical properties and age. This parametrization allows us to obtain an age probability from the kinematical data. From this age PDF, we estimate an individual average age for the star using the most likely age and the expected age. We have obtained the stellar age PDF for 9102 stars from the GCS and have shown that the distribution of individual ages derived from our method is in good agreement with the distribution of isochronal ages. We also observe a decline in the mean metallicity with our ages for stars younger than 7 Gyr, similar to the one observed for isochronal ages. This method can be useful for estimating rough stellar ages for stars that fall in areas of the Hertzsprung-Russell diagram where isochrones are tightly crowded. As an example of this method, we estimate the age of Trappist-1, an M8V star, obtaining an age of t(UVW) = 12.50 (+0.29, -6.23) Gyr.

  8. Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Zhongyue Zou

    2014-08-01

    Full Text Available Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike the existing literature, this work evaluates different aspects of the SOC estimation, such as the estimation error distribution, the estimation rise time, the estimation time consumption, etc. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then established to evaluate the four methods. Urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the drive situations of an electrified vehicle, and a genetic algorithm is utilized to identify the optimal parameters of the Li-ion battery model. Simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed according to the aspects above to evaluate the four model-based SOC estimation methods.

  9. Methods for estimation of internal dose of the public from dietary

    International Nuclear Information System (INIS)

    Zhu Hongda

    1987-01-01

    Following the issue of its Publication 26, ICRP successively published its Publication 30 to reflect the great changes and improvements made in the Basic Recommendations since July 1979. In Part 1 of Publication 30, ICRP recommended a new method for internal dose estimation and presented some important data. In this report, a comparison is made among methods for estimating the internal dose to the public from dietary intake. They include: (1) the new method suggested by ICRP; (2) the simple and convenient method using transfer factors under equilibrium conditions; and (3) methods based on the similarities of several radionuclides to their chemical analogs. It is concluded that the first method is better than the others and should be used from now on.

  10. Estimation of groundwater recharge using the chloride mass-balance method, Pingtung Plain, Taiwan

    Science.gov (United States)

    Ting, Cheh-Shyh; Kerh, Tienfuan; Liao, Chiu-Jung

    Due to rapid economic growth in the Pingtung Plain of Taiwan, the use of groundwater resources has changed dramatically. Over-pumping of the groundwater reservoir, which lowers hydraulic heads in the aquifers, is not only affecting the coastal area negatively but has serious consequences for agriculture throughout the plain. In order to determine the safe yield of the aquifer underlying the plain, a reliable estimate of groundwater recharge is desirable. In the present study, for the first time, the chloride mass-balance method is adopted to estimate groundwater recharge in the plain. Four sites in the central part were chosen to facilitate the estimations using the ion-chromatograph and Thiessen polygon-weighting methods. Based on the measured and calculated results, in all sites, including the mountain and river boundaries, recharge to the groundwater is probably 15% of the annual rainfall, excluding recharge from additional irrigation water. This information can improve the accuracy of future groundwater-simulation and management models in the plain. Résumé: Due to the rapid economic growth of the Pingtung Plain in Taiwan, the use of groundwater resources has changed considerably. Over-exploitation of the aquifers, which has lowered groundwater levels, not only adversely affects the coastal region but also has serious repercussions for agriculture throughout the plain. In order to determine the renewable resources of the aquifer beneath the plain, a precise estimate of groundwater recharge is necessary. In this study, the recharge rate was first estimated by means of a chloride mass balance. Four sites in the central part were selected to carry out these estimations, using an ion chromatograph and the Thiessen polygon method. Based on the measured and calculated results at each site, and taking the mountains and the rivers as boundaries
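The chloride mass balance itself is one line of arithmetic: recharge equals precipitation scaled by the ratio of the chloride concentration in rainfall to that in groundwater. A sketch with illustrative numbers, not the paper's measurements, chosen to reproduce the ~15% recharge fraction quoted above:

```python
# Chloride mass balance: recharge R = P * Cl_p / Cl_gw, assuming chloride
# enters only with rainfall and is concentrated by evapotranspiration.
# All input values below are hypothetical illustrations, not the paper's data.
annual_rainfall_mm = 2500.0    # P, mean annual precipitation
cl_rain_mg_per_l = 0.6         # Cl_p, chloride concentration in rainfall
cl_groundwater_mg_per_l = 4.0  # Cl_gw, chloride concentration in groundwater

recharge_mm = annual_rainfall_mm * cl_rain_mg_per_l / cl_groundwater_mg_per_l
recharge_fraction = recharge_mm / annual_rainfall_mm
print(f"Estimated recharge: {recharge_mm:.0f} mm/yr "
      f"({recharge_fraction:.0%} of rainfall)")  # 375 mm/yr (15% of rainfall)
```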

  11. Seasonal adjustment methods and real time trend-cycle estimation

    CERN Document Server

    Bee Dagum, Estela

    2016-01-01

    This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematic treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...

  12. New Vehicle Detection Method with Aspect Ratio Estimation for Hypothesized Windows

    Directory of Open Access Journals (Sweden)

    Jisu Kim

    2015-12-01

    Full Text Available All kinds of vehicles have different ratios of width to height, which are called aspect ratios. Most previous works, however, use a fixed aspect ratio for vehicle detection (VD). The use of a fixed vehicle aspect ratio for VD degrades the performance. Thus, the estimation of a vehicle aspect ratio is an important part of robust VD. Taking this idea into account, a new on-road vehicle detection system is proposed in this paper. The proposed method estimates the aspect ratio of the hypothesized windows to improve the VD performance. Our proposed method uses an Aggregate Channel Feature (ACF) and a support vector machine (SVM) to verify the hypothesized windows with the estimated aspect ratio. The contribution of this paper is threefold. First, the estimation of the vehicle aspect ratio is inserted between the HG (hypothesis generation) and the HV (hypothesis verification) steps. Second, a simple HG method named the signed horizontal edge map is proposed to speed up VD. Third, a new measure is proposed to represent the overlapping ratio between the ground truth and the detection results. This new measure is used to show that the proposed method is better than previous works in terms of robust VD. Finally, the Pittsburgh dataset is used to verify the performance of the proposed method.

  13. Rapid Active Power Control of Photovoltaic Systems for Grid Frequency Support

    Energy Technology Data Exchange (ETDEWEB)

    Hoke, Anderson; Shirazi, Mariko; Chakraborty, Sudipta; Muljadi, Eduard; Maksimovic, Dragan

    2017-01-01

    As deployment of power electronic coupled generation such as photovoltaic (PV) systems increases, grid operators have shown increasing interest in calling on inverter-coupled generation to help mitigate frequency contingency events by rapidly surging active power into the grid. When responding to contingency events, the faster the active power is provided, the more effective it may be for arresting the frequency event. This paper proposes a predictive PV inverter control method for very fast and accurate control of active power. This rapid active power control method will increase the effectiveness of various higher-level controls designed to mitigate grid frequency contingency events, including fast power-frequency droop, inertia emulation, and fast frequency response, without the need for energy storage. The rapid active power control method, coupled with a maximum power point estimation method, is implemented in a prototype PV inverter connected to a PV array. The prototype inverter's response to various frequency events is experimentally confirmed to be fast (beginning within 2 line cycles and completing within 4.5 line cycles of a severe test event) and accurate (below 2% steady-state error).

  14. Fundamental Frequency Estimation using Polynomial Rooting of a Subspace-Based Method

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2010-01-01

    improvements compared to HMUSIC. First, by using the proposed method we can obtain an estimate of the fundamental frequency without doing a grid search as in HMUSIC. This is because the fundamental frequency is estimated as the argument of the root lying closest to the unit circle. Second, we obtain a higher spectral resolution compared to HMUSIC, which is a property of polynomial rooting methods. Our simulation results show that the proposed method is applicable to real-life signals, and that in most cases we obtain a higher spectral resolution than HMUSIC.
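The rooting idea, taking the frequency as the angle of the polynomial root nearest the unit circle, can be illustrated on a toy problem. The sketch below fits a second-order linear prediction model to a noiseless sinusoid and roots its characteristic polynomial; it mimics only the rooting step, not the subspace-based method of the paper:

```python
import numpy as np

# Noiseless test signal: one sinusoid with normalized angular frequency w0.
w0 = 0.6
n = np.arange(200)
x = np.cos(w0 * n)

# A cosine obeys x[n] = 2*cos(w0)*x[n-1] - x[n-2]; fit this linear prediction
# model by least squares, then read the frequency off a polynomial root
# instead of searching a frequency grid.
A = np.column_stack([x[1:-1], x[:-2]])
a = np.linalg.lstsq(A, x[2:], rcond=None)[0]
roots = np.roots([1.0, -a[0], -a[1]])   # roots lie at exp(+/- 1j*w0)
w_est = abs(np.angle(roots[0]))
print(w_est)  # ≈ 0.6, with no grid search
```

For noisy multi-root cases one keeps the root whose modulus is closest to one, which is exactly the selection rule described in the abstract.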

  15. Comparison of Experimental Methods for Estimating Matrix Diffusion Coefficients for Contaminant Transport Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Telfeyan, Katherine Christina [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ware, Stuart Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Birdsell, Kay Hanson [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-06

    Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.
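For the through-diffusion (diffusion cell) method, the steady-state analysis reduces to Fick's first law: the effective diffusion coefficient follows from the measured steady flux, the sample thickness, and the concentration difference across it. A hedged sketch of that relationship (illustrative numbers, not LANL's full fitting procedure):

```python
def effective_diffusion_coeff(steady_flux, thickness, delta_c):
    """Fick's first law at steady state: J = De * dC / L  =>  De = J * L / dC.

    steady_flux in mol m^-2 s^-1, thickness L in m, delta_c in mol m^-3.
    """
    return steady_flux * thickness / delta_c

# Illustrative numbers, not measured values from the study
de = effective_diffusion_coeff(steady_flux=2e-9, thickness=0.01, delta_c=1.0)
# de is on the order of 1e-11 m^2/s, typical of tight rock matrices
```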

  16. Comparison of experimental methods for estimating matrix diffusion coefficients for contaminant transport modeling

    Science.gov (United States)

    Telfeyan, Katherine; Ware, S. Doug; Reimus, Paul W.; Birdsell, Kay H.

    2018-02-01

    Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

  17. Estimation of arsenic in nail using silver diethyldithiocarbamate method

    Directory of Open Access Journals (Sweden)

    Habiba Akhter Bhuiyan

    2015-08-01

    Full Text Available The spectrophotometric method of arsenic estimation in nails has four steps: (a) washing of nails, (b) digestion of nails, (c) arsenic generation, and finally (d) reading absorbance using a spectrophotometer. Although the method is one of the cheapest, widely used and effective, it is time consuming, laborious, and requires caution while using four acids.

  18. Performance of sampling methods to estimate log characteristics for wildlife.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2004-01-01

    Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...

  19. Investigation on method of estimating the excitation spectrum of vibration source

    International Nuclear Information System (INIS)

    Zhang Kun; Sun Lei; Lin Song

    2010-01-01

    In the practical engineering area, it is hard to obtain the excitation spectrum of the auxiliary machines of a nuclear reactor through direct measurement. To solve this problem, a general method of estimating the excitation spectrum of a vibration source through indirect measurement is proposed. First, the dynamic transfer matrix between the virtual excitation points and the measure points is obtained through experiment. This matrix, combined with the response spectrum at the measure points under practical working conditions, can be used to calculate the excitation spectrum acting on the virtual excitation points. Then a simplified method is proposed, based on the assumption that the vibrating machine can be regarded as a rigid body. The method treats the centroid as the excitation point, and the dynamic transfer matrix is derived using the substructure mobility synthesis method. Thus, the excitation spectrum can be obtained from the inverse of the transfer matrix combined with the response spectrum at the measure points. Based on the above method, a computational example is carried out to estimate the excitation spectrum acting on the centroid of an electrical pump. By comparing the input excitation and the estimated excitation, the reliability of this method is verified. (authors)
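The indirect procedure described above is, at each frequency line, a matrix inversion: the responses X at the measure points equal the transfer matrix H times the unknown excitation spectrum F, so F is recovered with a (pseudo-)inverse. A toy sketch with made-up numbers:

```python
import numpy as np

# Hypothetical single-frequency-line data: 3 measure points, 2 virtual
# excitation points (all values invented for illustration).
H = np.array([[0.8 + 0.1j, 0.2 + 0.0j],
              [0.3 + 0.0j, 0.9 + 0.2j],
              [0.1 + 0.0j, 0.4 + 0.0j]])    # experimental transfer matrix
F_true = np.array([2.0 + 0.0j, 1.0 + 0.5j]) # excitation acting at the source
X = H @ F_true                              # response at the measure points

# Recover the excitation spectrum; the pseudo-inverse handles the
# non-square, overdetermined transfer matrix in the least-squares sense.
F_est = np.linalg.pinv(H) @ X
```

With more measure points than excitation points, the extra rows average out measurement noise, which is why such indirect force identification normally uses an overdetermined transfer matrix.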

  20. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    DEFF Research Database (Denmark)

    Sabourin, David; Snakenborg, Detlef; Dufva, Hans Martin

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observ...

  1. Rapid method for Detection of Irradiation Mango Fruits

    International Nuclear Information System (INIS)

    El Salhy, F.T.

    2011-01-01

    To detect mango fruits which have been exposed to low doses of gamma rays (0.5-3.0 kGy), three methods recommended by the European Committee for Standardization (EN 1784:1996, EN 1785:1996 and EN 1787:2000) were used to study the possibility of identifying irradiated mango fruits (Ewais variety). Fresh mangoes were irradiated to different doses (0.5, 0.75, 1.0 and 3.0 kGy). The first method, for determining the volatile hydrocarbons (VHC), was carried out using a florisil column, with identification by gas chromatography and mass spectrometry (GC-MS). The major VHCs were C14:1, C15:0 and C17:1, which increased linearly with increasing dose at both low and high doses. The second method, for determining 2-alkyl cyclobutanone (2-DCB), was carried out using a florisil chromatography method activated with 20% for separation, with identification by GC-MS. The 2-DCB biomarker, specific for irradiated food, was detected at the applied doses of 0.75-3.0 kGy but not at 0.5 kGy. None of the mentioned compounds could be detected in non-irradiated samples, which means that these radiolytic products (VHC and 2-DCB) can be used as detection markers for irradiated mangoes even at low doses. The third method (EN 1787:2000) was conducted by electron spin resonance (ESR) on dried petioles of mangoes. The results proved that ESR was more sensitive at all applied doses. It could be concluded that all three methods can succeed in detecting irradiated mangoes, but the most rapid one, with high accuracy even at low doses, was ESR.

  2. A RAPID Method for Blood Processing to Increase the Yield of Plasma Peptide Levels in Human Blood.

    Science.gov (United States)

    Teuffel, Pauline; Goebel-Stengel, Miriam; Hofmann, Tobias; Prinz, Philip; Scharner, Sophie; Körner, Jan L; Grötzinger, Carsten; Rose, Matthias; Klapp, Burghard F; Stengel, Andreas

    2016-04-28

    Research in the field of food intake regulation is gaining importance. This often includes the measurement of peptides regulating food intake. For the correct determination of a peptide's concentration, it should be stable during blood processing. However, this is not the case for several peptides, which are quickly degraded by endogenous peptidases. Recently, we developed a blood processing method employing Reduced temperatures, Acidification, Protease inhibition, Isotopic exogenous controls and Dilution (RAPID) for use in rats. Here, we have established this technique for use in humans and investigated recovery, molecular form and circulating concentration of food intake regulatory hormones. The RAPID method significantly improved the recovery for (125)I-labeled somatostatin-28 (+39%), glucagon-like peptide-1 (+35%), acyl ghrelin and glucagon (+32%), insulin and kisspeptin (+29%), nesfatin-1 (+28%), leptin (+21%) and peptide YY3-36 (+19%) compared to standard processing (EDTA blood on ice). Acyl ghrelin remained stable after RAPID processing, while after standard processing 62% of acyl ghrelin was degraded, resulting in an earlier peak likely representing desacyl ghrelin. After RAPID processing the acyl/desacyl ghrelin ratio in blood of normal weight subjects was 1:3, compared to 1:23 following standard processing (p = 0.03). Also, endogenous kisspeptin levels were higher after RAPID compared to standard processing (+99%, p = 0.02). The RAPID blood processing method can be used in humans, yields higher peptide levels and allows for assessment of the correct molecular form.

  3. A probabilistic method for testing and estimating selection differences between populations.

    Science.gov (United States)

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
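The core statistic described here, the log odds ratio of allele frequencies as an estimate of the difference in selection coefficients, can be sketched directly. The variance below is the standard Woolf count-based estimate, a simplification of the paper's genome-wide calibration; the counts are hypothetical:

```python
import math

def selection_difference_stat(k1, n1, k2, n2):
    """Log odds ratio of allele counts between two populations with the
    Woolf variance estimate; under drift alone the standardized statistic
    is approximately standard normal. A sketch of the core statistic only,
    not the paper's genome-wide variance calibration."""
    a, b = k1, n1 - k1   # derived / ancestral allele counts, population 1
    c, d = k2, n2 - k2   # same for population 2
    lor = math.log((a * d) / (b * c))
    var = 1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d
    return lor, lor / math.sqrt(var)

# Hypothetical counts: derived allele at 80% vs 40% in two samples
lor, z = selection_difference_stat(800, 1000, 400, 1000)
```

A large |z| flags a statistically significant selection difference, which is the hypothesis test the abstract applies to EPAS1 and EGLN1 in the Han and Tibetan data.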

  4. Rapid and efficient method to extract metagenomic DNA from estuarine sediments.

    Science.gov (United States)

    Shamim, Kashif; Sharma, Jaya; Dubey, Santosh Kumar

    2017-07-01

    Metagenomic DNA from sediments of selected estuaries of Goa, India was extracted using a simple, fast, efficient and environmentally friendly method. The recovery of pure metagenomic DNA by our method was significantly higher than with other well-known methods, since the concentration of recovered metagenomic DNA ranged from 1185.1 to 4579.7 µg/g of sediment. The purity of the metagenomic DNA was also considerably high, as the ratio of absorbance at 260 and 280 nm ranged from 1.88 to 1.94. Therefore, the recovered metagenomic DNA was directly used to perform various molecular biology experiments, viz. restriction digestion, PCR amplification, cloning and metagenomic library construction. This clearly proved that our protocol for metagenomic DNA extraction using silica gel efficiently removed the contaminants and prevented shearing of the metagenomic DNA. Thus, this modified method can be used to recover pure metagenomic DNA from various estuarine sediments in a rapid, efficient and eco-friendly manner.

  5. Methods of multicriterion estimations in system total quality management

    Directory of Open Access Journals (Sweden)

    Nikolay V. Diligenskiy

    2011-05-01

    Full Text Available In this article the method of multicriterion comparative estimation of efficiency (Data Envelopment Analysis and possibility of its application in system of total quality management is considered.

  6. Research on the Method of Noise Error Estimation of Atomic Clocks

    Science.gov (United States)

    Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.

    2017-05-01

    The simulation methods of different noises of atomic clocks are given. The frequency flicker noise of atomic clock is studied by using the Markov process theory. The method for estimating the maximum interval error of the frequency white noise is studied by using the Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time frequency reference laboratory of NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and the simulations are carried out according to the noise models. Finally, the maximum interval error estimates of the frequency white noises generated by the 9 cesium atomic clocks have been acquired.
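For the frequency white noise case mentioned above, the accumulated phase (time) error behaves as a Wiener-like random walk, so a maximum interval error can be read off a simulated path. A sketch under simple assumptions (1 s steps, an illustrative noise level), not the NTSC estimation procedure:

```python
import random

def max_interval_error(n_steps, sigma_y, tau0=1.0, seed=1):
    """Clock time error when fractional frequency is white noise: the phase
    accumulates as a random walk, so the maximum interval error is the
    largest excursion of that walk. Parameters are illustrative only."""
    rng = random.Random(seed)
    x, worst = 0.0, 0.0
    for _ in range(n_steps):
        x += rng.gauss(0.0, sigma_y) * tau0   # one tau0-second step
        worst = max(worst, abs(x))
    return worst

# One simulated day at 1 s steps for a clock with sigma_y = 1e-13
mie = max_interval_error(86400, 1e-13)
```

Repeating the simulation over many seeds gives the distribution from which a maximum interval error bound at a chosen confidence level can be estimated.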

  7. Methods to estimate breeding values in honey bees

    NARCIS (Netherlands)

    Brascamp, E.W.; Bijma, P.

    2014-01-01

    Background Efficient methodologies based on animal models are widely used to estimate breeding values in farm animals. These methods are not applicable in honey bees because of their mode of reproduction. Observations are recorded on colonies, which consist of a single queen and thousands of workers

  8. Seismogeodesy for rapid earthquake and tsunami characterization

    Science.gov (United States)

    Bock, Y.

    2016-12-01

    Rapid estimation of earthquake magnitude and fault mechanism is critical for earthquake and tsunami warning systems. Traditionally, the monitoring of earthquakes and tsunamis has been based on seismic networks for estimating earthquake magnitude and slip, and on tide gauges and deep-ocean buoys for direct measurement of tsunami waves. These methods are well developed for ocean basin-wide warnings but are not timely enough to protect vulnerable populations and infrastructure from the effects of local tsunamis, where waves may arrive within 15-30 minutes of earthquake onset time. Direct measurements of displacements by GPS networks at subduction zones allow for rapid magnitude and slip estimation in the near-source region that is not affected by the instrumental limitations and magnitude saturation experienced by local seismic networks. However, GPS displacements by themselves are too noisy for strict earthquake early warning (P-wave detection). Optimally combining high-rate GPS and seismic data (in particular, accelerometers that do not clip), referred to as seismogeodesy, provides a broadband instrument that does not clip in the near field, is impervious to magnitude saturation, and provides accurate static and dynamic displacements and velocities in real time. Here we describe a NASA-funded effort to integrate GPS and seismogeodetic observations as part of NOAA's Tsunami Warning Centers in Alaska and Hawaii. It consists of a series of plug-in modules that allow for a hierarchy of rapid seismogeodetic products, including automatic P-wave picking, hypocenter estimation, S-wave prediction, magnitude scaling relationships based on P-wave amplitude (Pd) and peak ground displacement (PGD), and finite-source CMT solutions and fault slip models as input for tsunami warnings and models. For the NOAA/NASA project, the modules are being integrated into an existing USGS Earthworm environment, currently limited to traditional seismic data. We are focused on a network of
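The PGD-based magnitude scaling mentioned above is commonly written in the form log10(PGD) = A + B·Mw + C·Mw·log10(R), which can be inverted for Mw once the coefficients have been regressed from GPS data. The coefficients below are hypothetical placeholders, not operational values:

```python
import math

# A PGD scaling law of the commonly used form
#   log10(PGD_cm) = A + B*Mw + C*Mw*log10(R_km)
# A, B, C are hypothetical placeholders; real values must be regressed
# from observed GPS peak ground displacements.
A, B, C = -5.0, 1.2, -0.17

def log10_pgd(mw, r_km):
    return A + B * mw + C * mw * math.log10(r_km)

def mw_from_pgd(pgd_cm, r_km):
    """Invert the scaling law for moment magnitude."""
    return (math.log10(pgd_cm) - A) / (B + C * math.log10(r_km))

# Round trip: a Mw 7.0 event observed at 100 km hypocentral distance
pgd = 10.0 ** log10_pgd(7.0, 100.0)
mw_hat = mw_from_pgd(pgd, 100.0)  # ≈ 7.0
```

Because PGD does not saturate in the near field, this inversion keeps working for great earthquakes where local seismic magnitudes level off.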

  9. Human body mass estimation: a comparison of "morphometric" and "mechanical" methods.

    Science.gov (United States)

    Auerbach, Benjamin M; Ruff, Christopher B

    2004-12-01

    In the past, body mass was reconstructed from hominin skeletal remains using both "mechanical" methods, which rely on the support of body mass by weight-bearing skeletal elements, and "morphometric" methods, which reconstruct body mass through direct assessment of body size and shape. A previous comparison of two such techniques, using femoral head breadth (mechanical) and stature and bi-iliac breadth (morphometric), indicated a good general correspondence between them (Ruff et al. [1997] Nature 387:173-176). However, the two techniques were never systematically compared across a large group of modern humans of diverse body form. This study incorporates skeletal measures taken from 1,173 Holocene adult individuals, representing diverse geographic origins, body sizes, and body shapes. Femoral head breadth, bi-iliac breadth (after pelvic rearticulation), and long bone lengths were measured on each individual. Statures were estimated from long bone lengths using appropriate reference samples. Body masses were calculated using three available femoral head breadth (FH) formulae and the stature/bi-iliac breadth (STBIB) formula, and compared. All methods yielded similar results. Correlations between FH estimates and STBIB estimates are 0.74-0.81. Slight differences in results between the three FH estimates can be attributed to sampling differences in the original reference samples, and in particular, the body-size ranges included in those samples. There is no evidence for systematic differences in results due to differences in body proportions. Since the STBIB method was validated on other samples, and the FH methods produced similar estimates, this argues that either may be applied to skeletal remains with some confidence. 2004 Wiley-Liss, Inc.

  10. Improvement of economic potential estimation methods for enterprise with potential branch clusters use

    Directory of Open Access Journals (Sweden)

    V.Ya. Nusinov

    2017-08-01

    Full Text Available The research determines that the currently existing methods of estimating an enterprise's economic potential are based on the use of additive, multiplicative and rating models. It is determined that the existing methods have a number of defects. For example, not all the methods take into account the branch features of the analysis, or the level of development of the enterprise in comparison with other enterprises. It is suggested to remedy such defects by taking into account, when estimating the integral level of potential, not only the branch features of enterprises' activity but also the intra-branch economic clusterization of such enterprises. Scientific works connected with the use of clusters for the estimation of economic potential are generalized. According to the results of this generalization, nine scientific approaches can be distinguished in this direction: the use of natural clusterization of enterprises for estimating and increasing regional potential; the use of natural clusterization of enterprises for estimating and increasing industry potential; the use of artificial clusterization of enterprises for estimating and increasing regional potential; the use of artificial clusterization of enterprises for estimating and increasing industry potential; the use of artificial clusterization of enterprises for estimating clustering potential; the use of artificial clusterization of enterprises for estimating the competitiveness potential of clustering; the use of natural (or artificial) clusterization for estimating clustering efficiency; the use of natural (or artificial) clusterization for raising the level of regional (industry) development; and the use of methods of estimating the economic potential of a region (industry), or its constituents, for the construction of clusters. It is determined that the use of clusterization method in

  11. A Qualitative Method to Estimate HSI Display Complexity

    International Nuclear Information System (INIS)

    Hugo, Jacques; Gertman, David

    2013-01-01

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity, and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  12. A Survey of Methods for Computing Best Estimates of Endoatmospheric and Exoatmospheric Trajectories

    Science.gov (United States)

    Bernard, William P.

    2018-01-01

    Beginning with the mathematical prediction of planetary orbits in the early seventeenth century up through the most recent developments in sensor fusion methods, many techniques have emerged that can be employed on the problem of endo- and exoatmospheric trajectory estimation. Although early methods were ad hoc, the twentieth century saw the emergence of many systematic approaches to estimation theory that produced a wealth of useful techniques. The broad genesis of estimation theory has resulted in an equally broad array of mathematical principles, methods and vocabulary. Among the fundamental ideas and methods that are briefly touched on are batch and sequential processing; smoothing, estimation and prediction; sensor fusion and sensor fusion architectures; data association; Bayesian and non-Bayesian filtering; the family of Kalman filters; models of the dynamics of the phases of a rocket's flight; and asynchronous, delayed, and asequent data. Along the way, a few trajectory estimation issues are addressed and much of the vocabulary is defined.

  13. An Economical Approach to Estimate a Benchmark Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2003-01-01

    There are alternative methods of estimating capital stock for a benchmark year. However, these methods are costly and time-consuming, requiring the gathering of much basic information as well as the use of some convenient assumptions and guesses. In addition, a way is needed of checking whether the estimated benchmark is at the correct level. This paper proposes an optimal consistency method (OCM), which enables a capital stock to be estimated for a benchmark year, and which can also be used ...

  14. An anti-disturbing real time pose estimation method and system

    Science.gov (United States)

    Zhou, Jian; Zhang, Xiao-hu

    2011-08-01

    Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, there are many algorithms which perform this task with high accuracy, but all of them suffer when features are lost. This paper investigates pose estimation when a number of the known features, or even all of them, are invisible. First, known features were tracked to calculate the pose in the current and the next image. Second, some unknown but good features to track were automatically detected in the current and the next image. Third, those unknown features which lay on the rigid object and could be matched between the two images were retained. Because of the motion characteristics of the rigid object, the 3D information of those unknown features on the rigid object could be solved from the object's pose at the two moments and their 2D information in the two images, except in only two cases: the first was that the camera and object had no relative motion and camera parameters such as focal length and principal point did not change between the two moments; the second was that there was no shared scene or no matched feature in the two images. Finally, because those initially unknown features were now known, pose estimation could continue in the following images, despite the loss of the original known features, by repeating the process mentioned above. The robustness of pose estimation with different feature detection algorithms, such as Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), was compared, and the impact of different relative motions between the camera and the rigid object was discussed. Graphics Processing Unit (GPU) parallel computing was also used to extract and match hundreds of features for real-time pose estimation, which was infeasible on a Central Processing Unit (CPU).
Compared with other pose estimation methods, this new
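The key step in this abstract, solving the 3D position of an unknown feature from its 2D locations in two images with known poses, is classical two-view triangulation. A minimal linear (DLT) sketch with invented poses and a test point, not the authors' exact implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its 2D image
    coordinates in two views with known 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

# Two hypothetical camera poses (identity intrinsics), the second camera
# translated one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                             # projection, view 1
x2 = (X_true[:2] + np.array([-1.0, 0.0])) / X_true[2]   # projection, view 2
X_rec = triangulate(P1, P2, x1, x2)
```

This is also where the abstract's two failure cases show up: with no relative motion the two projection rows are linearly dependent, and with no matched feature there is nothing to triangulate.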

  15. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-19

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through some numerical simulations. The robustness of the method in the presence of noise is also studied.
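The modulating functions idea is easiest to see on a scalar toy problem: to estimate a in y' + a·y = u, multiply by a function φ that vanishes (together with its derivative) at both ends of the window and integrate by parts, so no derivative of the noisy data is ever needed. A sketch of that mechanism, not the damped-wave-equation formulation of the paper:

```python
import math

def estimate_a(ts, ys, us):
    """Estimate a in y' + a*y = u using the modulating function
    phi(t) = t^2 (T - t)^2, which vanishes with its derivative at both
    endpoints, so integration by parts gives int(phi*y') = -int(phi'*y)."""
    T = ts[-1]
    phi  = [t**2 * (T - t)**2 for t in ts]
    dphi = [2*t*(T - t)**2 - 2*t**2*(T - t) for t in ts]

    def trapz(vals):  # simple trapezoidal quadrature on the sample grid
        return sum((vals[i] + vals[i + 1]) * (ts[i + 1] - ts[i]) / 2.0
                   for i in range(len(ts) - 1))

    num = trapz([p * u for p, u in zip(phi, us)]) \
        + trapz([dp * y for dp, y in zip(dphi, ys)])
    return num / trapz([p * y for p, y in zip(phi, ys)])

# Synthetic data: y' + 2y = 0 on [0, 1], i.e. y = exp(-2t), u = 0
ts = [i / 1000.0 for i in range(1001)]
ys = [math.exp(-2.0 * t) for t in ts]
a_hat = estimate_a(ts, ys, [0.0] * len(ts))  # ≈ 2.0
```

The same by-parts trick is what makes the approach robust to noise: differentiation is shifted from the data onto the smooth, known modulating function.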

  16. Development and validation of RP-HPLC method for estimation of eplerenone in spiked human plasma

    Directory of Open Access Journals (Sweden)

    Paraag Gide

    2012-10-01

    Full Text Available A rapid and simple high performance liquid chromatography (HPLC) method with UV detection (241 nm) was developed and validated for estimation of eplerenone from spiked human plasma. The analyte and the internal standard (valdecoxib) were extracted with a mixture of dichloromethane and diethyl ether. The chromatographic separation was performed on a HiQSil C-18HS column (250 mm×4.6 mm, 5 μm) with a mobile phase consisting of acetonitrile:water (50:50, v/v) at a flow rate of 1 mL/min. The calibration curve was linear in the range 100–3200 ng/mL, and heteroscedasticity was minimized by using weighted least squares regression with weighting factor 1/X. Keywords: Eplerenone, Liquid–liquid extraction, Weighted regression, HPLC–UV
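The 1/X weighting mentioned above is ordinary weighted least squares with weight w_i = 1/x_i, which keeps the low-concentration end of the calibration curve from being swamped by the larger variance at the high end. A closed-form sketch with made-up detector responses:

```python
def wls_line(xs, ys, ws):
    """Closed-form weighted least squares fit of y = b0 + b1*x,
    minimizing sum(w_i * (y_i - b0 - b1*x_i)^2)."""
    sw   = sum(ws)
    swx  = sum(w * x for w, x in zip(ws, xs))
    swy  = sum(w * y for w, y in zip(ws, ys))
    swxx = sum(w * x * x for w, x in zip(ws, xs))
    swxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    b1 = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    b0 = (swy - b1 * swx) / sw
    return b0, b1

# Calibration levels in ng/mL with hypothetical detector responses
xs = [100, 200, 400, 800, 1600, 3200]
ys = [0.021, 0.040, 0.082, 0.159, 0.321, 0.639]
b0, b1 = wls_line(xs, ys, [1.0 / x for x in xs])  # 1/X weighting
```

With unit weights the same function reduces to ordinary least squares, which makes the effect of the 1/X factor easy to compare on real calibration data.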

  17. Electrochemical method for rapid synthesis of Zinc Pentacyanonitrosylferrate Nanotubes

    Directory of Open Access Journals (Sweden)

    Rogaieh Bargeshadi

    2014-10-01

    Full Text Available In this paper, a rapid and simple approach was developed for the preparation of zinc pentacyanonitrosylferrate nanotubes (ZnPCNF NTs) within the cylindrical pores of an anodic aluminum oxide (AAO) template by an electrochemical method. The AAO was fabricated from aluminum foil in two anodization steps. The first anodization of the aluminum foil was performed in 0.2 mol L-1 H2C2O4, followed by removal of the formed porous oxide film with a solution of 6 wt% phosphoric acid. The second anodization step was then performed using the same conditions as the previous step. Scanning electron microscopy (SEM) and the X-ray diffraction (XRD) method were employed to characterize the resulting highly oriented, uniform hollow tube array, whose diameter was in the range of 25-75 nm depending on the applied voltage; the length of the nanotubes was equal to the thickness of the AAO, which was about 2 μm. The growth properties of the ZnPCNF NT array film can be controlled through the structure of the template and the potential applied across the cell.

  18. Comparison of different methods for estimation of potential evapotranspiration

    International Nuclear Information System (INIS)

    Nazeer, M.

    2010-01-01

    Evapotranspiration can be estimated with different available methods. The aim of this research study is to compare and evaluate the originally measured potential evapotranspiration from a Class A pan against the Hargreaves equation, the Penman equation, the Penman-Monteith equation, and the FAO56 Penman-Monteith equation. The evaporation rate recorded from the pan was greater than that estimated by the stated methods. For each evapotranspiration method, results were compared against mean monthly potential evapotranspiration (PET) from pan data according to FAO (ET/sub o/ = K/sub pan/ X E/sub pan/), using daily measured data of twenty-five years (1984-2008). On the basis of statistical analysis, the differences between the pan data and the FAO56 Penman-Monteith method are not considered to be very significant (R/sup 2/ = 0.98) at 95% confidence and prediction intervals. All methods require accurate weather data for precise results; for the purpose of this study the past twenty-five years of data were analyzed and used, including maximum and minimum air temperature, relative humidity, wind speed, sunshine duration and rainfall. Based on linear regression analysis results, the FAO56 PMM ranked first (R/sup 2/ = 0.98), followed by the Hargreaves method (R/sup 2/ = 0.96), the Penman-Monteith method (R/sup 2/ = 0.94) and the Penman method (R/sup 2/ = 0.93). Obviously, using the FAO56 Penman-Monteith method with precise climatic variables for ET/sub o/ estimation is more reliable than the other alternative methods; Hargreaves is simpler, relies only on air temperature data, and can be used as an alternative to the FAO56 Penman-Monteith method if other climatic data are missing or unreliable. (author)
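Two of the compared formulas are compact enough to state directly: the FAO pan method, ETo = Kpan × Epan, and the Hargreaves equation, which needs only air temperatures plus extraterrestrial radiation Ra. A sketch with illustrative inputs (the Kpan value and the weather numbers are assumptions, not the study's data):

```python
def et0_pan(e_pan, k_pan=0.7):
    """FAO pan method: ETo = Kpan * Epan (mm/day). Kpan = 0.7 is an
    illustrative value; in practice it depends on pan siting and climate."""
    return k_pan * e_pan

def et0_hargreaves(t_min, t_max, ra):
    """Hargreaves: ETo = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
    with Ra (extraterrestrial radiation) in evaporation-equivalent mm/day."""
    t_mean = (t_min + t_max) / 2.0
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

# Illustrative day: Tmin 15 C, Tmax 30 C, Ra = 15 mm/day, pan reading 8 mm
et_pan = et0_pan(8.0)
et_hg = et0_hargreaves(15.0, 30.0, 15.0)
```

The Hargreaves form shows why it survives with sparse data: the diurnal temperature range stands in for radiation and humidity terms that the Penman-Monteith family measures explicitly.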

  19. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    Science.gov (United States)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications worldwide. In the case of single-channel reception, few methods process multiple frequency hopping signals both effectively and simultaneously. A method for sorting hybrid FH signals and blindly estimating their parameters is proposed. The method uses spectral transformation, spectral entropy calculation and the basic theory of the PRI transform to sort the components of a hybrid frequency hopping signal and estimate their parameters. The simulation results show that the method can correctly sort the frequency hopping component signals; at an SNR of 10 dB, the estimation error of the hopping period is about 5% and the estimation error of the hopping frequency is less than 1%. However, the performance of this method deteriorates seriously at low SNR.
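    Spectral entropy, one of the building blocks named above, measures how concentrated a segment's spectrum is: a segment occupied by a single hop tone has low entropy, while a noise-only segment is spread out and has high entropy. A minimal sketch (the window length and test signals are illustrative, not from the paper):

```python
import numpy as np

def spectral_entropy(segment):
    """Shannon entropy (bits) of the normalized power spectrum.

    Low entropy -> energy concentrated in a few bins (e.g. one hop
    tone); high entropy -> noise-like, spread-out spectrum.
    """
    psd = np.abs(np.fft.rfft(segment)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
tone = np.sin(2 * np.pi * 0.12 * t)   # hop-like single tone
noise = rng.standard_normal(n)        # broadband noise
print(spectral_entropy(tone) < spectral_entropy(noise))  # True
```

Thresholding this quantity per time window is one way to separate hop-occupied segments from noise before the PRI-transform stage.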

  20. Research Note A novel method for estimating tree dimensions and ...

    African Journals Online (AJOL)

    The two objects must be adjacent to one another in the photograph. For rapid analysis, multiple photographs of different objects can be taken over a short period of time using the measuring staff. The method is not limited to plants and can be used to determine, for example, browser height, height at which browsers feed, ...

  1. Non-Destructive Lichen Biomass Estimation in Northwestern Alaska: A Comparison of Methods

    Science.gov (United States)

    Rosso, Abbey; Neitlich, Peter; Smith, Robert J.

    2014-01-01

    Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa “community” samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m−2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska. PMID:25079228
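    The zero-intercept (through-the-origin) regression used in the study can be sketched in a few lines: the slope is a bulk-density-like coefficient relating lichen volume (cover x height) to biomass. The data below are synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.uniform(0.5, 5.0, 40)       # cover * height per plot (synthetic units)
true_coef = 300.0                        # biomass per unit volume (synthetic)
biomass = true_coef * volume + rng.normal(0.0, 20.0, 40)

# Zero-intercept least squares: b = sum(x*y) / sum(x*x)
b = float(volume @ biomass / (volume @ volume))
print(round(b, 1))  # close to 300
```

Forcing the intercept through zero encodes the physical constraint that zero lichen volume implies zero biomass, which is why the authors report biomass-to-volume relationships rather than ordinary regressions.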

  2. Non-destructive lichen biomass estimation in northwestern Alaska: a comparison of methods.

    Directory of Open Access Journals (Sweden)

    Abbey Rosso

    Full Text Available Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa "community" samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m-2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska.

  3. Non-destructive lichen biomass estimation in northwestern Alaska: a comparison of methods.

    Science.gov (United States)

    Rosso, Abbey; Neitlich, Peter; Smith, Robert J

    2014-01-01

    Terrestrial lichen biomass is an important indicator of forage availability for caribou in northern regions, and can indicate vegetation shifts due to climate change, air pollution or changes in vascular plant community structure. Techniques for estimating lichen biomass have traditionally required destructive harvesting that is painstaking and impractical, so we developed models to estimate biomass from relatively simple cover and height measurements. We measured cover and height of forage lichens (including single-taxon and multi-taxa "community" samples, n = 144) at 73 sites on the Seward Peninsula of northwestern Alaska, and harvested lichen biomass from the same plots. We assessed biomass-to-volume relationships using zero-intercept regressions, and compared differences among two non-destructive cover estimation methods (ocular vs. point count), among four landcover types in two ecoregions, and among single-taxon vs. multi-taxa samples. Additionally, we explored the feasibility of using lichen height (instead of volume) as a predictor of stand-level biomass. Although lichen taxa exhibited unique biomass and bulk density responses that varied significantly by growth form, we found that single-taxon sampling consistently under-estimated true biomass and was constrained by the need for taxonomic experts. We also found that the point count method provided little to no improvement over ocular methods, despite increased effort. Estimated biomass of lichen-dominated communities (mean lichen cover: 84.9±1.4%) using multi-taxa, ocular methods differed only nominally among landcover types within ecoregions (range: 822 to 1418 g m-2). Height alone was a poor predictor of lichen biomass and should always be weighted by cover abundance. We conclude that the multi-taxa (whole-community) approach, when paired with ocular estimates, is the most reasonable and practical method for estimating lichen biomass at landscape scales in northwest Alaska.

  4. Improved vertical streambed flux estimation using multiple diurnal temperature methods in series

    Science.gov (United States)

    Irvine, Dylan J.; Briggs, Martin A.; Cartwright, Ian; Scruggs, Courtney; Lautz, Laura K.

    2017-01-01

    Analytical solutions that use diurnal temperature signals to estimate vertical fluxes between groundwater and surface water based on either amplitude ratios (Ar) or phase shifts (Δϕ) produce results that rarely agree. Analytical solutions that simultaneously utilize Ar and Δϕ within a single solution have more recently been derived, decreasing uncertainty in flux estimates in some applications. Benefits of combined (ArΔϕ) methods also include that thermal diffusivity and sensor spacing can be calculated. However, poor identification of either Ar or Δϕ from raw temperature signals can lead to erratic parameter estimates from ArΔϕ methods. An add-on program for VFLUX 2 is presented to address this issue. Using thermal diffusivity selected from an ArΔϕ method during a reliable time period, fluxes are recalculated using an Ar method. This approach maximizes the benefits of the Ar and ArΔϕ methods. Additionally, sensor spacing calculations can be used to identify periods with unreliable flux estimates, or to assess streambed scour. Using synthetic and field examples, the use of these solutions in series was particularly useful for gaining conditions where fluxes exceeded 1 m/d.
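    The amplitude ratio Ar and phase shift Δϕ that these analytical solutions consume can be extracted from a pair of sensor records by evaluating the DFT at the diurnal frequency. A minimal sketch on synthetic shallow/deep records (all values illustrative; real records would first be filtered to the diurnal band):

```python
import numpy as np

def diurnal_component(temps, dt_hours):
    """Complex amplitude of the 1-cycle-per-day component of a record
    (single-frequency DFT; the 2/N scaling recovers the sinusoid amplitude)."""
    t = np.arange(len(temps)) * dt_hours
    return 2.0 / len(temps) * np.sum(temps * np.exp(-2j * np.pi * t / 24.0))

dt = 0.25                                  # 15-min sampling, in hours
t = np.arange(0.0, 96.0, dt)               # 4 full days (leakage-free)
shallow = 15.0 + 4.0 * np.cos(2 * np.pi * t / 24.0)
deep = 15.0 + 1.0 * np.cos(2 * np.pi * (t - 3.0) / 24.0)  # damped, lagged

cs = diurnal_component(shallow, dt)
cd = diurnal_component(deep, dt)
Ar = abs(cd) / abs(cs)                     # amplitude ratio
lag_h = np.angle(cs / cd) * 24.0 / (2 * np.pi)   # phase shift as hours of lag
print(round(Ar, 3), round(lag_h, 2))       # 0.25 3.0
```

With Ar and Δϕ in hand, either quantity (or both together, as in the ArΔϕ solutions above) is substituted into the chosen analytical solution to obtain the vertical flux.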

  5. Digital baseline estimation method for multi-channel pulse height analyzing

    International Nuclear Information System (INIS)

    Xiao Wuyun; Wei Yixiang; Ai Xianyun

    2005-01-01

    The basic features of digital baseline estimation for multi-channel pulse height analysis are introduced. The weight function of the minimum-noise baseline filter is derived using the calculus of variations. The frequency response of this filter is also derived via Fourier transformation, and the influence of its parameters on the amplitude-frequency response characteristics is discussed. With MATLAB software, the noise voltage signal from a charge-sensitive preamplifier is simulated, and the processing effect of minimum-noise digital baseline estimation is verified. According to the results of this research, the digital baseline estimation method can estimate the baseline optimally, and it is well suited to digital multi-channel pulse height analysis. (authors)

  6. Comparing four methods to estimate usual intake distributions

    NARCIS (Netherlands)

    Souverein, O.W.; Dekkers, A.L.; Geelen, A.; Haubrock, J.; Vries, de J.H.M.; Ocke, M.C.; Harttig, U.; Boeing, H.; Veer, van 't P.

    2011-01-01

    Background/Objectives: The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As ‘true’ usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data

  7. Review of best estimate plus uncertainty methods of thermal-hydraulic safety analysis

    International Nuclear Information System (INIS)

    Prosek, A.; Mavko, B.

    2003-01-01

    In 1988 the United States Nuclear Regulatory Commission approved the revised rule on the acceptance of emergency core cooling system (ECCS) performance. Since then there has been significant interest in the development of codes and methodologies for best-estimate loss-of-coolant accident (LOCA) analyses. Several new best estimate plus uncertainty (BEPU) methods were developed around the world. The purpose of the paper is to review developments in the direction of best-estimate approaches with uncertainty quantification and to discuss the problems in practical applications of BEPU methods. In general, the licensee methods follow the original methods. The study indicated that uncertainty analysis with random sampling of input parameters, combined with the use of order statistics for the desired tolerance limits of output parameters, is today a commonly accepted and mature approach. (author)
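    The order-statistics approach mentioned above rests on Wilks' formula: the number of code runs needed so that the sample maximum bounds a given quantile of the output with a given confidence is independent of the number of uncertain inputs. A minimal sketch of the one-sided, first-order sample-size computation:

```python
import math

def wilks_n(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum of n runs bounds the
    `coverage` quantile with probability `confidence` (one-sided,
    first-order Wilks formula: 1 - coverage**n >= confidence)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_n())  # 59 runs for the classic 95%/95% criterion
```

This is why BEPU analyses so often quote 59 (or, for higher-order statistics, somewhat larger) code runs regardless of how many input parameters are sampled.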

  8. Rapid improvement teams.

    Science.gov (United States)

    Alemi, F; Moore, S; Headrick, L; Neuhauser, D; Hekelman, F; Kizys, N

    1998-03-01

    Suggestions, most of which are supported by empirical studies, are provided on how total quality management (TQM) teams can be used to bring about faster organizationwide improvements. Ideas are offered on how to identify the right problem, have rapid meetings, plan rapidly, collect data rapidly, and make rapid whole-system changes. Suggestions for identifying the right problem include (1) postpone benchmarking when problems are obvious, (2) define the problem in terms of customer experience so as not to blame employees nor embed a solution in the problem statement, (3) communicate with the rest of the organization from the start, (4) state the problem from different perspectives, and (5) break large problems into smaller units. Suggestions for having rapid meetings include (1) choose a nonparticipating facilitator to expedite meetings, (2) meet with each team member before the team meeting, (3) postpone evaluation of ideas, and (4) rethink conclusions of a meeting before acting on them. Suggestions for rapid planning include reducing time spent on flowcharting by focusing on the future, not the present. Suggestions for rapid data collection include (1) sample patients for surveys, (2) rely on numerical estimates by process owners, and (3) plan for rapid data collection. Suggestions for rapid organizationwide implementation include (1) change membership on cross-functional teams, (2) get outside perspectives, (3) use unfolding storyboards, and (4) go beyond self-interest to motivate lasting change in the organization. Additional empirical investigations of time saved as a consequence of the strategies provided are needed. If organizations solve their problems rapidly, fewer unresolved problems may remain.

  9. SCoPE: an efficient method of Cosmological Parameter Estimation

    International Nuclear Information System (INIS)

    Das, Santanu; Souradeep, Tarun

    2014-01-01

    The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation named Slick Cosmological Parameter Estimator (SCoPE), which employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching, which helps an individual chain run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy; we analyze the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results of our MCMC analysis on the one hand help us understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
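    Delayed rejection, the first ingredient named above, retries a rejected Metropolis step with a second (here, smaller) proposal, using a corrected acceptance ratio so the target distribution is preserved. A minimal one-dimensional sketch (this is the generic Tierney-Mira scheme, not the SCoPE implementation; all scales and the target are illustrative):

```python
import numpy as np

def dr_metropolis(logpi, x0, n_steps, scale1=2.5, scale2=0.5, seed=0):
    """Random-walk Metropolis with one stage of delayed rejection.
    Stage 1 proposes from N(x, scale1^2); on rejection, stage 2 retries
    from N(x, scale2^2) with the corrected acceptance ratio."""
    rng = np.random.default_rng(seed)
    x, lx = float(x0), logpi(x0)
    out = np.empty(n_steps)
    for i in range(n_steps):
        y1 = x + scale1 * rng.standard_normal()
        ly1 = logpi(y1)
        a1 = np.exp(min(0.0, ly1 - lx))
        if rng.random() < a1:
            x, lx = y1, ly1
        else:
            y2 = x + scale2 * rng.standard_normal()
            ly2 = logpi(y2)
            a1_rev = np.exp(min(0.0, ly1 - ly2))  # P(accept y1 from y2)
            if a1_rev < 1.0:
                # log of pi(y2) q1(y2->y1) (1-a1_rev) / [pi(x) q1(x->y1) (1-a1)]
                log_r = (ly2 - lx
                         - 0.5 * ((y1 - y2) ** 2 - (y1 - x) ** 2) / scale1 ** 2
                         + np.log(1.0 - a1_rev) - np.log(1.0 - a1))
                if rng.random() < np.exp(min(0.0, log_r)):
                    x, lx = y2, ly2
        out[i] = x
    return out

# Sample a standard normal target and check the first two moments
samples = dr_metropolis(lambda v: -0.5 * v * v, 0.0, 20000)
print(abs(samples.mean()) < 0.1, abs(samples.std() - 1.0) < 0.1)
```

The second-stage fallback is what pushes the per-step acceptance probability up, which is the effect the abstract reports (above 95% in SCoPE).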

  10. Rapid quantitative estimation of chlorinated methane utilizing bacteria in drinking water and the effect of nanosilver on biodegradation of the trichloromethane in the environment.

    Science.gov (United States)

    Zamani, Isaac; Bouzari, Majid; Emtiazi, Giti; Fanaei, Maryam

    2015-03-01

    Halomethanes are toxic and carcinogenic chemicals which are widely used in industry; they can also be formed during water disinfection with chlorine. Biodegradation by methylotrophs is the most important way to remove these pollutants from the environment. This study aimed to present a simple and rapid method for the quantitative study of halomethane-utilizing bacteria in drinking water, and a method to facilitate the biodegradation of these compounds in the environment compared with cometabolism. Enumeration of chlorinated-methane-utilizing bacteria in drinking water was carried out by the most probable number (MPN) method in two steps. First, the presence and number of methylotroph bacteria were confirmed on methanol-containing medium. Then, utilization of dichloromethane was determined by measuring the released chloride after the addition of 0.04 mol/L of it to the growth medium. The effect of nanosilver particles on the biodegradation of multiply chlorinated methanes was also studied by growing bacteria on Bushnell-Haas Broth containing chloroform (trichloromethane) treated with 0.2 ppm nanosilver. The most probable numbers of methylotrophs and chlorinated-methane-utilizing bacteria in the tested drinking water were 10 and 4 MPN Index/L, respectively. Chloroform treatment with nanosilver leads to dechlorination and the production of formaldehyde. The highest bacterial growth and formic acid production were observed in the tubes containing 1% chloroform treated with nanosilver. By combining the two tests, a rapid approach to estimating the most probable number of chlorinated-methane-utilizing bacteria is introduced. Treatment with nanosilver particles resulted in easier and faster biodegradation of chloroform by bacteria; thus, degradation of these chlorinated compounds is more efficient than with cometabolism alone.
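    The MPN estimate behind such counts follows from Poisson statistics: a tube is negative only if it received zero cells, so for one dilution level the density solving the likelihood is λ = -ln(negative fraction)/V. A minimal single-dilution sketch (tube counts and volumes are illustrative; published MPN tables combine several dilutions):

```python
import math

def mpn_single_dilution(n_tubes, n_positive, volume_per_tube):
    """Most-probable-number density from one dilution level.

    P(tube negative) = exp(-lambda * V)  =>  lambda = -ln(neg/total) / V
    """
    if n_positive == n_tubes:
        raise ValueError("all tubes positive: density not estimable")
    neg_frac = (n_tubes - n_positive) / n_tubes
    return -math.log(neg_frac) / volume_per_tube

# 10 tubes of 100 mL (0.1 L) each, 3 positive -> organisms per litre
dens = mpn_single_dilution(10, 3, 0.1)
print(round(dens, 2))  # 3.57
```

Densities on the order of a few MPN per litre, as reported above for the tested drinking water, correspond to only a handful of positive tubes in such a series.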

  11. Comparative study of various methods of primary energy estimation in nucleon-nucleon interactions

    International Nuclear Information System (INIS)

    Goyal, D.P.; Yugindro Singh, K.; Singh, S.

    1986-01-01

    The various available methods for the estimation of primary energy in nucleon-nucleon interactions have been examined by using the experimental data on angular distributions of shower particles from p-N interactions at two accelerator energies, 67 and 400 GeV. Three different groups of shower particle multiplicities have been considered for interactions at both energies. It is found that the different methods give quite different estimates of the primary energy, and that each method gives different values of energy depending on the choice of multiplicity group. It is concluded that the E_ch method is relatively the best among the available methods, and that within this method, considering the group of small multiplicities gives a much better result. The method also yields plausible estimates of inelasticity in high energy nucleon-nucleon interactions. (orig.)

  12. A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation

    Directory of Open Access Journals (Sweden)

    Tianshuang Qiu

    2007-12-01

    Full Text Available This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints derived from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation relating the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. Whether the resulting estimator is biased or unbiased is discussed. The scale factors, which may be used to evaluate the NLOS propagation level, can be estimated by the proposed method, and the AOA seen at base stations may be corrected to some degree. The performance of the proposed method is compared with that of other hybrid location methods on different NLOS error models and for two scenarios of cell layout. It is found that the proposed method deals with NLOS error effectively, which makes it attractive for location estimation in cellular networks.
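    With a single base station, a TOA range and an AOA bearing already give a point fix; an NLOS excess delay lengthens the measured range and pushes that fix away from the true position, which is exactly the error the geometric constraints above are meant to bound. A minimal sketch (coordinates and error values are illustrative, not from the paper):

```python
import math

def toa_aoa_fix(bs_xy, toa_s, aoa_rad, c=3.0e8):
    """Locate an MS from one base station's TOA range and AOA bearing."""
    r = c * toa_s
    return (bs_xy[0] + r * math.cos(aoa_rad),
            bs_xy[1] + r * math.sin(aoa_rad))

true_ms = (600.0, 800.0)              # metres: range 1000 m from the BS
bearing = math.atan2(800.0, 600.0)
x, y = toa_aoa_fix((0.0, 0.0), 1000.0 / 3.0e8, bearing)
print(round(x, 1), round(y, 1))       # 600.0 800.0

# An NLOS excess delay of 0.5 us lengthens the range by 150 m
x_n, y_n = toa_aoa_fix((0.0, 0.0), 1000.0 / 3.0e8 + 0.5e-6, bearing)
print(round(math.hypot(x_n - true_ms[0], y_n - true_ms[1]), 1))  # 150.0
```

The proposed method's contribution is to constrain and correct such biased fixes using the angle-spread statistics, rather than taking the raw TOA/AOA intersection at face value.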

  13. Rapid estimation of organic nitrogen in oil shale wastewaters

    Energy Technology Data Exchange (ETDEWEB)

    Jones, B.M.; Harris, G.J.; Daughton, C.G.

    1984-03-01

    Many of the characteristics of oil shale process wastewaters (e.g., malodors, color, and resistance to biotreatment) are imparted by numerous nitrogen heterocycles and aromatic amines. For the frequent performance assessment of waste treatment processes designed to remove these nitrogenous organic compounds, a rapid and colligative measurement of organic nitrogen is essential.

  14. New modelling method for fast reactor neutronic behaviours analysis; Nouvelles methodes de modelisation neutronique des reacteurs rapides de quatrieme Generation

    Energy Technology Data Exchange (ETDEWEB)

    Jacquet, P.

    2011-05-23

    Due to the safety rules governing the development of fourth-generation reactor cores, neutronics simulation tools have to be more accurate than ever before. The first part of this report covers every step of fast-reactor neutronics simulation as implemented in the current reference code, ECCO. For fast reactors meeting the fourth-generation criteria, the ability of the models to describe the self-shielding phenomenon, to simulate neutron leakage in a lattice of fuel assemblies, and to produce representative macroscopic cross sections is evaluated. The second part of this thesis is dedicated to the simulation of fast-reactor cores with steel reflectors. These require the development of advanced condensation and homogenization methods. Several methods are proposed and compared on a typical case: the ZONA2B core of the MASURCA mock-up reactor. (author)

  15. Performance evaluation of the spectral centroid downshift method for attenuation estimation.

    Science.gov (United States)

    Samimi, Kayvan; Varghese, Tomy

    2015-05-01

    Estimation of frequency-dependent ultrasonic attenuation is an important aspect of tissue characterization. Along with other acoustic parameters studied in quantitative ultrasound, the attenuation coefficient can be used to differentiate normal and pathological tissue. The spectral centroid downshift (CDS) method is one of the most common frequency-domain approaches applied to this problem. In this study, a statistical analysis of the method's performance was carried out based on a parametric model of the signal power spectrum in the presence of electronic noise. The parametric model assumes a Gaussian spectral profile for the transmit pulse and incorporates the effects of attenuation, windowing, and electronic noise. Spectral moments were calculated and used to estimate second-order centroid statistics. A theoretical expression for the variance of a maximum likelihood estimator of the attenuation coefficient was derived in terms of the centroid statistics and other model parameters, such as transmit pulse center frequency and bandwidth, RF data window length, SNR, and number of regression points. Theoretically predicted estimation variances were compared with experimentally estimated variances on RF data sets from both computer-simulated and physical tissue-mimicking phantoms. Scan parameter ranges for this study were electronic SNR from 10 to 70 dB, transmit pulse standard deviation from 0.5 to 4.1 MHz, transmit pulse center frequency from 2 to 8 MHz, and data window length from 3 to 17 mm. Acceptable agreement was observed between theoretical predictions and experimentally estimated values, with differences smaller than 0.05 dB/cm/MHz across the parameter ranges investigated. This model helps predict the best attenuation estimation variance achievable with the CDS method in terms of the scan parameters.
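    For a Gaussian transmit spectrum of variance σ² and linear attenuation α(f) = βf, the pulse-echo spectral centroid falls linearly with depth, f_c(z) = f0 - 4σ²βz (with β in Np cm⁻¹ MHz⁻¹), so the CDS estimate of β follows from the regression slope of centroid versus depth. A minimal sketch on synthetic centroids, under those Gaussian/pulse-echo assumptions (all numbers illustrative):

```python
import numpy as np

NP_TO_DB = 8.686  # 20 / ln(10)

def cds_attenuation(depth_cm, centroid_mhz, sigma_mhz):
    """Attenuation slope (dB/cm/MHz) from the centroid-vs-depth slope,
    assuming pulse-echo data and a Gaussian transmit spectrum:
    f_c(z) = f0 - 4 * sigma^2 * beta * z, with beta in Np/cm/MHz."""
    slope = np.polyfit(depth_cm, centroid_mhz, 1)[0]   # MHz per cm
    beta_np = -slope / (4.0 * sigma_mhz ** 2)
    return beta_np * NP_TO_DB

# Synthetic centroids for beta = 0.5 dB/cm/MHz, f0 = 5 MHz, sigma = 1 MHz
z = np.linspace(0.5, 4.0, 15)
beta_np = 0.5 / NP_TO_DB
fc = 5.0 - 4.0 * 1.0 ** 2 * beta_np * z
print(round(cds_attenuation(z, fc, 1.0), 3))  # 0.5
```

The study above quantifies how noise perturbs the measured centroids, and hence the variance of the regression slope that this kind of estimator produces.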

  16. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    Science.gov (United States)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are easy both to interpret and to estimate. One way to estimate a PHM relies on the fact that the estimation problem is linear in the parameters, so classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method, but it is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
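    The linearity-in-the-parameters property that makes plain LS estimation of a PHM possible can be shown directly: each branch is a static power nonlinearity followed by an FIR filter, so the output is linear in the FIR coefficients once delayed powers of the input are stacked into one regressor matrix. A minimal unregularized sketch (branch orders, tap counts and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, taps, order = 2000, 5, 3
x = rng.standard_normal(n)

# True parallel Hammerstein system: y = sum_k h_k * (x**k), k = 1..order
h_true = [0.5 * rng.standard_normal(taps) for _ in range(order)]

def phm(x, branches):
    return sum(np.convolve(x ** (k + 1), h, mode="full")[: len(x)]
               for k, h in enumerate(branches))

y = phm(x, h_true) + 0.01 * rng.standard_normal(n)

# LS estimation: one regressor column per (branch power, delay) pair
cols = [np.concatenate([np.zeros(d), (x ** (k + 1))[: n - d]])
        for k in range(order) for d in range(taps)]
Phi = np.stack(cols, axis=1)
theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
h_est = theta.reshape(order, taps)
print(np.allclose(h_est, np.stack(h_true), atol=0.01))  # True
```

The regularized variant proposed in the article adds a penalty encoding impulse-response smoothness/decay to this same linear problem; the sketch above shows only the unregularized core.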

  17. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computation. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie

  18. RPM-WEBBSYS: A web-based computer system to apply the rational polynomial method for estimating static formation temperatures of petroleum and geothermal wells

    Science.gov (United States)

    Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.

    2015-12-01

    A Web-Based Computer System (RPM-WEBBSYS) has been developed to apply the Rational Polynomial Method (RPM) to the estimation of static formation temperatures (SFT) of geothermal and petroleum wells. The system is also able to reproduce the full thermal recovery process that occurs during well completion. RPM-WEBBSYS was programmed using advances in information technology to compute SFT more efficiently. RPM-WEBBSYS can be easily and rapidly used from any computing device (e.g., personal computers and portable devices such as tablets or smartphones) with Internet access and a web browser. The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where a good match between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected during well drilling and shut-in operations, where the typical under- and over-estimation problems of SFT (exhibited by most existing analytical methods) were effectively corrected.
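    The idea behind a rational-polynomial extrapolation of BHT build-up can be sketched with a simple stand-in form whose long-shut-in-time asymptote is the SFT; the specific form T(t) = (a + b/t)/(1 + c/t), with SFT = a as t → ∞, is illustrative and not the exact polynomial used by the authors:

```python
import numpy as np

def rpm_sft(shutin_hours, bht_c):
    """Fit T(t) = (a + b/t) / (1 + c/t); as t -> inf, T -> a (the SFT).
    Linearized form: T = a + b*(1/t) - c*(1/t)*T, linear in (a, b, c)."""
    x = 1.0 / np.asarray(shutin_hours, dtype=float)
    T = np.asarray(bht_c, dtype=float)
    Phi = np.stack([np.ones_like(x), x, -x * T], axis=1)
    a, b, c = np.linalg.lstsq(Phi, T, rcond=None)[0]
    return a

# Synthetic thermal recovery curve converging to 120 C
t = np.array([6.0, 12.0, 18.0, 24.0, 36.0, 48.0])
T = (120.0 + 80.0 / t) / (1.0 + 2.0 / t)
print(round(rpm_sft(t, T), 1))  # 120.0
```

Extrapolating the asymptote rather than any single late-time reading is what avoids the under-estimation typical of short shut-in records.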

  19. A comparison of two methods for estimating conifer live foliar moisture content

    Science.gov (United States)

    W. Matt Jolly; Ann M. Hadlow

    2012-01-01

    Foliar moisture content is an important factor regulating how wildland fires ignite in and spread through live fuels, but moisture content determination methods are rarely standardised between studies. One such difference is the choice between rapid moisture analysers and drying ovens. Both methods are commonly used in live fuel research but they have never...
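    Both instruments ultimately report the same dry-basis quantity; a minimal sketch of the standard computation (the sample masses are illustrative):

```python
def moisture_content_dry_basis(fresh_g, oven_dry_g):
    """Live fuel moisture content (%) on a dry-weight basis:
    100 * (fresh mass - dry mass) / dry mass."""
    return 100.0 * (fresh_g - oven_dry_g) / oven_dry_g

# A 12.40 g fresh sample drying to 6.20 g has 100% moisture content
print(round(moisture_content_dry_basis(12.40, 6.20), 1))  # 100.0
```

Method comparisons like the one above come down to whether the two instruments reach the same effective dry mass, since everything else in the formula is a direct weighing.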

  20. Multifrequency Excitation Method for Rapid and Accurate Dynamic Test of Micromachined Gyroscope Chips

    Directory of Open Access Journals (Sweden)

    Yan Deng

    2014-10-01

    Full Text Available A novel multifrequency excitation (MFE method is proposed to realize rapid and accurate dynamic testing of micromachined gyroscope chips. Compared with the traditional sweep-frequency excitation (SFE method, the computational time for testing one chip under four modes at a 1-Hz frequency resolution and 600-Hz bandwidth was dramatically reduced from 10 min to 6 s. A multifrequency signal with an equal amplitude and initial linear-phase-difference distribution was generated to ensure test repeatability and accuracy. The current test system based on LabVIEW using the SFE method was modified to use the MFE method without any hardware changes. The experimental results verified that the MFE method can be an ideal solution for large-scale dynamic testing of gyroscope chips and gyroscopes.
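    The excitation described above, one equal-amplitude tone per test frequency with a linear initial-phase-difference distribution, summed and applied in a single record, can be sketched directly; the sampling rate, band, and phase step below are illustrative, not the paper's values:

```python
import numpy as np

def multisine(freqs_hz, fs_hz, n_samples, phase_step_rad):
    """Equal-amplitude multifrequency excitation: one sinusoid per test
    frequency, with phase of tone k equal to k * phase_step_rad,
    summed into one record and normalized."""
    t = np.arange(n_samples) / fs_hz
    phases = phase_step_rad * np.arange(len(freqs_hz))
    sig = sum(np.cos(2 * np.pi * f * t + p) for f, p in zip(freqs_hz, phases))
    return sig / len(freqs_hz)

fs = 4096.0
freqs = np.arange(1.0, 601.0)          # 1-Hz resolution over a 600-Hz band
sig = multisine(freqs, fs, 4096, 0.1)  # one 1-s record covers the whole band

# Every excited bin carries the same magnitude in a single FFT frame,
# which is why one record replaces a whole sweep
spec = np.abs(np.fft.rfft(sig)) / len(sig) * 2
excited = spec[1:601]
print(excited.std() < 1e-9, round(excited.mean() * 600, 1))  # True 1.0
```

Exciting all bins at once is the source of the speed-up reported above: one FFT of one response record yields the response at every test frequency, instead of stepping through them as in a sweep.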

  1. Factors and methods of analysis and estimation of furniture making enterprises competitiveness

    Directory of Open Access Journals (Sweden)

    Vitaliy Aleksandrovich Zhigarev

    2015-06-01

    Full Text Available Objective: to describe the author's methodology for estimating the competitiveness of furniture-making enterprises, with a view to carrying out an economic evaluation of the efficiency of furniture production; evaluating the internal component of furniture production efficiency; and identifying the factors influencing the efficiency of furniture-making companies and the areas for improving it through improvements in the product range and the production and sales policy of the enterprise. The research subject is modern methods and principles of competitiveness management applicable in a rapidly changing market environment. Methods: in general, the research methodology consists of six stages differentiated by methods, objectives and required outcomes. The first stage was to study the nature of demand within the target market of a furniture-making enterprise. The second stage was to study the expenditures of a furniture-making enterprise on implementing individual production and sales strategies. The third stage was to study competition in the market. The fourth stage was the analysis of the possibilities of a furniture-making enterprise in producing and selling furniture under different combinations of factor values. The fifth stage was the re-examination of demand with a view to its distribution over the factor space. The final, sixth stage was processing the data obtained at the previous stages and carrying out the necessary calculations.
Results: in general, the above methodology for the economic evaluation of furniture production efficiency, based on the previously developed model, gives enterprise managers an algorithm for assessing both the market and the firm-level components of furniture production efficiency, allowing the subsequent identification and evaluation of the efficiency factors and the development of measures to improve the efficiency of furniture production and sales, as well as the rationalization of the assortment and of production and sales policy.

  2. Estimation method for volumes of hot spots created by heavy ions

    International Nuclear Information System (INIS)

    Kanno, Ikuo; Kanazawa, Satoshi; Kajii, Yuji

    1999-01-01

    As a ratio of the volumes of hot spots to cones having the same lengths and bottom radii as the hot spots, a simple and convenient method for estimating the volumes of hot spots is described. This calculation method is useful for studying the damage-production mechanism in hot spots, and is also convenient for estimating the electron-hole densities in plasma columns created by heavy ions in semiconductor detectors. (author)

  3. Rapid determination method of radiocesium in sea water by cesium-selective resin

    International Nuclear Information System (INIS)

    Nakaoka, A.; Yokoyama, H.; Fukushima, M.; Takagi, S.

    1980-01-01

    A rapid and precise method of determining radiocesium corresponding to 5 mrem/y (the Japan AEC's guideline) was proposed. The development and practical performance of a cesium-selective resin and the determination method are described in this paper. The resin was prepared by forming ammonium molybdophosphate within the structure of Amberlite XAD-7 resin. It took only 3 hours to carry out all the procedures the authors proposed; this represents 1/10 to 1/2 of the time of the conventional method. The concentrations of ¹³⁷Cs and ¹³⁴Cs in sea water were determined to be 0.13 to 0.16 pCi/l and less than 7.1×10⁻² pCi/l, respectively. (author)

  4. A Method to Represent Heterogeneous Materials for Rapid Prototyping: The Matryoshka Approach.

    Science.gov (United States)

    Lei, Shuangyan; Frank, Matthew C; Anderson, Donald D; Brown, Thomas D

    The purpose of this paper is to present a new method for representing heterogeneous materials using nested STL shells, based, in particular, on the density distributions of human bones. Nested STL shells, called Matryoshka models, are described, based on their namesake Russian nesting dolls. In this approach, polygonal models, such as STL shells, are "stacked" inside one another to represent different material regions. The Matryoshka model addresses the challenge of representing different densities and different types of bone when reverse engineering from medical images. The Matryoshka model is generated via an iterative process of thresholding the Hounsfield Unit (HU) data using computed tomography (CT), thereby delineating regions of progressively increasing bone density. These nested shells can represent regions starting with the medullary (bone marrow) canal, up through and including the outer surface of the bone. The Matryoshka approach introduced can be used to generate accurate models of heterogeneous materials in an automated fashion, avoiding the challenge of hand-creating an assembly model for input to multi-material additive or subtractive manufacturing. This paper presents a new method for describing heterogeneous materials: in this case, the density distribution in a human bone. The authors show how the Matryoshka model can be used to plan harvesting locations for creating custom rapid allograft bone implants from donor bone. An implementation of a proposed harvesting method is demonstrated, followed by a case study using subtractive rapid prototyping to harvest a bone implant from a human tibia surrogate.

  5. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing various disciplines frequently produces something profound and far-reaching; cybernetics is an often-quoted example. The mix of information theory, statistics and computing technology has proved very useful, leading to the recent development of information-theory-based methods for estimating complicated probability distributions. Estimating the probability distribution of a random variable is a fundamental task for quite a few fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognition, image processing, neur

  6. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of other methods, and the uncertainties are reasonably estimated; the others have no uncertainty analysis option. To validate further, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR regardless of Gaussian or non-Gaussian skewed data. However, it is expected that GPR has a limitation in applications to severely corrupted data by outliers owing to its non-robustness. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool of anomaly detection or outlier identification in groundwater state data such as the groundwater level and contaminant concentration.
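A minimal sketch of the subagging idea for trend estimation, on a synthetic groundwater-level record (linear trend, noise, a few outliers): fit the trend on many half-size subsamples drawn without replacement and aggregate, with the spread across subsamples serving as the uncertainty estimate the abstract highlights. The data and all parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record: trend 0.5 m/yr + noise, corrupted by a few outliers.
t = np.linspace(0, 10, 200)
y = 2.0 + 0.5 * t + rng.normal(0, 0.3, t.size)
y[rng.choice(t.size, 5, replace=False)] += 5.0

def subagging_trend(t, y, n_boot=500, frac=0.5):
    """Fit a line on many half-size subsamples (without replacement) and
    aggregate: the mean is the trend, the spread is its uncertainty."""
    m = int(frac * t.size)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.choice(t.size, m, replace=False)
        slopes[b] = np.polyfit(t[idx], y[idx], 1)[0]
    return slopes.mean(), slopes.std()

slope, slope_sd = subagging_trend(t, y)
print(f"trend = {slope:.2f} +/- {slope_sd:.2f} m/yr")
```

Swapping the `np.polyfit` base learner for a robust fit (e.g. an L1 line fit) would bring the sketch closer to the robust flavour compared against in the paper.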

  7. The efficiency of different estimation methods of hydro-physical limits

    Directory of Open Access Journals (Sweden)

    Emma María Martínez

    2012-12-01

    Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in time and cost required and quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices and more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in efficiency estimation.

  8. Estimating surface acoustic impedance with the inverse method.

    Science.gov (United States)

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in acoustic designs of building and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques were developed; one of them uses 2 microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.

  9. A generic method for estimating system reliability using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples
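A toy version of the counting-and-inference part of this pipeline can illustrate the idea. The K2 structure search itself is omitted here; the two-component DAG, the failure rates and the "historical records" are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical historical records: states of two components (A, B) and
# the system (S); True = working.  Underlying truth: S works iff A and B do.
n = 10_000
A = rng.random(n) < 0.95
B = rng.random(n) < 0.90
S = A & B

# "Learn" the CPTs by counting, assuming the DAG  A -> S <- B  is given
p_A, p_B = A.mean(), B.mean()
p_S = {(a, b): S[(A == a) & (B == b)].mean() for a in (0, 1) for b in (0, 1)}

# System reliability P(S = 1): sum out the parent states
rel = sum(p_S[a, b] * (p_A if a else 1 - p_A) * (p_B if b else 1 - p_B)
          for a in (0, 1) for b in (0, 1))
print(f"estimated reliability = {rel:.3f}")   # close to 0.95 * 0.90 = 0.855
```

In the paper's method the structure (which components are parents of which) is itself learned from the same historical data by K2's heuristic search; the counting and marginalization step shown here is what follows once the DAG is fixed.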

  10. A generic method for estimating system reliability using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Doguc, Ozge [Stevens Institute of Technology, Hoboken, NJ 07030 (United States); Ramirez-Marquez, Jose Emmanuel [Stevens Institute of Technology, Hoboken, NJ 07030 (United States)], E-mail: jmarquez@stevens.edu

    2009-02-15

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples.

  11. An Overview and Comparison of Online Implementable SOC Estimation Methods for Lithium-ion Battery

    DEFF Research Database (Denmark)

    Meng, Jinhao; Ricco, Mattia; Luo, Guangzhao

    2018-01-01

    . Many SOC estimation methods have been proposed in the literature. However, only a few of them consider the real-time applicability. This paper reviews recently proposed online SOC estimation methods and classifies them into five categories. Their principal features are illustrated, and the main pros...... and cons are provided. The SOC estimation methods are compared and discussed in terms of accuracy, robustness, and computation burden. Afterward, as the most popular type of model based SOC estimation algorithms, seven nonlinear filters existing in literature are compared in terms of their accuracy...

  12. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles.

    Science.gov (United States)

    Wu, Zhihong; Lu, Ke; Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy highly depends on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment.
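A hedged sketch of such a flux-estimator-based torque estimate: the stator flux is obtained from v − Rs·i, but the pure integrator is replaced by a first-order low-pass filter to stop drift, in the spirit of the paper's modified filter. The machine parameters and the test signal are illustrative only, not the paper's machine or its inverter compensation:

```python
import numpy as np

Rs, p = 0.05, 4                    # stator resistance (ohm), pole pairs (assumed)
fs, wc = 10_000, 2 * np.pi * 5     # sample rate (Hz), LPF cutoff (rad/s)
dt = 1.0 / fs

def estimate_torque(v_ab, i_ab):
    """v_ab, i_ab: (n, 2) arrays in the stationary alpha-beta frame."""
    psi = np.zeros(2)
    tau = np.empty(len(v_ab))
    for k, (v, i) in enumerate(zip(v_ab, i_ab)):
        emf = v - Rs * i
        psi = psi + dt * (emf - wc * psi)   # LPF in place of open integration
        # tau = 1.5 * p * (psi_alpha * i_beta - psi_beta * i_alpha)
        tau[k] = 1.5 * p * (psi[0] * i[1] - psi[1] * i[0])
    return tau

# Quick check on a synthetic steady state: a 0.1 Wb flux rotating at 50 Hz
# with the current 90 deg ahead should give about 1.5*p*|psi|*|i| = 60 N.m
W, Psi, I = 2 * np.pi * 50, 0.1, 100.0
t = np.arange(fs) * dt
emf = np.stack([-W * Psi * np.sin(W * t), W * Psi * np.cos(W * t)], axis=1)
i_ab = np.stack([-I * np.sin(W * t), I * np.cos(W * t)], axis=1)
tau = estimate_torque(emf + Rs * i_ab, i_ab)
print(f"estimated torque ~ {tau[-1000:].mean():.1f} N.m")
```

The small residual error comes from the LPF's gain and phase at 50 Hz; the paper additionally compensates the inverter's non-ideal voltage, which is not modelled here.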

  13. A simple method for estimating the convection- dispersion equation ...

    African Journals Online (AJOL)

    Jane

    2011-08-31

    Aug 31, 2011 ... approach of modeling solute transport in porous media uses the deterministic ... Methods of estimating CDE transport parameters can be divided into statistical ..... diffusion-type model for longitudinal mixing of fluids in flow.

  14. Estimating Mean and Variance Through Quantiles : An Experimental Comparison of Different Methods

    NARCIS (Netherlands)

    Moors, J.J.A.; Strijbosch, L.W.G.; van Groenendaal, W.J.H.

    2002-01-01

    If estimates of mean and variance are needed and only experts' opinions are available, the literature agrees that it is wise behaviour to ask only for their (subjective) estimates of quantiles: from these, estimates of the desired parameters are calculated.Quite a number of methods have been

  15. An Empirical Method to Fuse Partially Overlapping State Vectors for Distributed State Estimation

    NARCIS (Netherlands)

    Sijs, J.; Hanebeck, U.; Noack, B.

    2013-01-01

    State fusion is a method for merging multiple estimates of the same state into a single fused estimate. Dealing with multiple estimates is one of the main concerns in distributed state estimation, where an estimated value of the desired state vector is computed in each node of a networked system.

  16. Rapid-viability PCR method for detection of live, virulent Bacillus anthracis in environmental samples.

    Science.gov (United States)

    Létant, Sonia E; Murphy, Gloria A; Alfaro, Teneile M; Avila, Julie R; Kane, Staci R; Raber, Ellen; Bunt, Thomas M; Shah, Sanjiv R

    2011-09-01

    In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples.

  17. Development and Validation of RP-HPLC Method for Simultaneous Estimation of Ramipril, Aspirin and Atorvastatin in Pharmaceutical Preparations

    Directory of Open Access Journals (Sweden)

    Rajesh Sharma

    2012-01-01

    Full Text Available A simple, sensitive, accurate and rapid reverse-phase high-performance liquid chromatographic method was developed for the simultaneous estimation of ramipril, aspirin and atorvastatin in pharmaceutical preparations. Chromatography was performed on a 25 cm × 4.6 mm i.d., 5 µm particle C18 column, with a mobile phase of (A) acetonitrile:methanol (65:35) and (B) 10 mM sodium dihydrogen phosphate monohydrate (NaH2PO4.H2O) buffer, mixed A:B (60:40, v/v) and adjusted to pH 3.0 with o-phosphoric acid (5% v/v), at a flow rate of 1.5 ml min⁻¹. UV detection was performed at 230 nm. The total run time was less than 12 min; retention times for ramipril, aspirin and atorvastatin were 3.620, 4.920 and 11.710 min, respectively. The method was validated for accuracy, precision, linearity, specificity and sensitivity in accordance with ICH guidelines. Validation revealed that the method is specific, rapid, accurate, precise, reliable and reproducible. Calibration plots were linear over the concentration ranges 5-50 µg mL⁻¹ for ramipril, 5-100 µg mL⁻¹ for aspirin and 2-20 µg mL⁻¹ for atorvastatin. Limits of detection were 0.014, 0.10 and 0.0095 ng mL⁻¹ and limits of quantification were 0.043, 0.329 and 0.029 ng mL⁻¹ for ramipril, aspirin and atorvastatin, respectively. The high recovery and low coefficients of variation confirm the suitability of the method for simultaneous analysis of all three drugs in the dosage forms. The validated method was successfully used for quantitative analysis of marketed pharmaceutical preparations.

  18. The Software Cost Estimation Method Based on Fuzzy Ontology

    Directory of Open Access Journals (Sweden)

    Plecka Przemysław

    2014-12-01

    Full Text Available In the course of the sales process for Enterprise Resource Planning (ERP) systems, it often turns out that the standard system must be extended or modified according to specific customer requirements. Suppliers therefore face the problem of determining the cost of the additional work. Most cost-estimation methods bring satisfactory results only at the stage of pre-implementation analysis, but suppliers need the estimated cost as early as the trade-talks stage. During contract negotiations, they expect not only information about the cost of the work, but also about the risk of exceeding that cost, or the margin of safety. One method that gives more accurate results at the trade-talks stage is based on an ontology of implementation costs. This paper proposes a modification of that method involving the use of fuzzy attributes, classes, instances and relations in the ontology. The result provides not only the value of the work, but also the minimum and maximum expected cost and the most likely cost range. This allows suppliers to negotiate the contract effectively and increases the chances of successful completion of the project.
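The fuzzy idea of reporting a minimum, a most likely value and a maximum, rather than a single figure, can be illustrated with triangular fuzzy costs per work item; the items and figures below are invented:

```python
# Each work item's cost as a triangular fuzzy number (min, most likely, max),
# in person-hours; items and figures are invented for illustration.
items = {
    "extra report module":   (20, 30, 50),
    "custom tax interface":  (40, 60, 110),
    "data migration script": (10, 15, 25),
}

lo = sum(v[0] for v in items.values())   # optimistic total
ml = sum(v[1] for v in items.values())   # most likely total
hi = sum(v[2] for v in items.values())   # pessimistic total
estimate = (lo + ml + hi) / 3            # simple centroid defuzzification
print(f"cost in [{lo}, {hi}] h, most likely {ml} h, centroid {estimate:.0f} h")
```

The [lo, hi] interval answers the negotiator's "risk of exceeding the cost / margin of safety" question directly, which a single crisp estimate cannot.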

  19. Validated modified Lycopodium spore method development for ...

    African Journals Online (AJOL)

    Validated modified lycopodium spore method has been developed for simple and rapid quantification of herbal powdered drugs. Lycopodium spore method was performed on ingredients of Shatavaryadi churna, an ayurvedic formulation used as immunomodulator, galactagogue, aphrodisiac and rejuvenator. Estimation of ...

  20. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    Directory of Open Access Journals (Sweden)

    Darren Kidney

    Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data.
We anticipate that the low-tech field requirements will

  1. Methods for estimating heterocyclic amine concentrations in cooked meats in the US diet.

    Science.gov (United States)

    Keating, G A; Bogen, K T

    2001-01-01

    Heterocyclic amines (HAs) are formed in numerous cooked foods commonly consumed in the diet. A method was developed to estimate dietary HA levels using HA concentrations in experimentally cooked meats reported in the literature and meat consumption data obtained from a national dietary survey. Cooking variables (meat internal temperature and weight loss, surface temperature and time) were used to develop relationships for estimating total HA concentrations in six meat types. Concentrations of five individual HAs were estimated for specific meat type/cooking method combinations based on linear regression of total and individual HA values obtained from the literature. Using these relationships, total and individual HA concentrations were estimated for 21 meat type/cooking method combinations at four meat doneness levels. Reported consumption of the 21 meat type/cooking method combinations was obtained from a national dietary survey and the age-specific daily HA intake calculated using the estimated HA concentrations (ng/g) and reported meat intakes. Estimated mean daily total HA intakes for children (to age 15 years) and adults (30+ years) were 11 and 7.0 ng/kg/day, respectively, with 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) estimated to comprise approximately 65% of each intake. Pan-fried meats were the largest source of HA in the diet and chicken the largest source of HAs among the different meat types.
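The intake arithmetic in the abstract reduces to concentration times consumption per unit body weight, summed over meat type/cooking method combinations; a toy recreation with invented numbers (not the survey data):

```python
# Daily HA intake = sum over foods of (HA concentration, ng/g) times
# (grams eaten per day), divided by body weight.  Figures are invented,
# not the paper's survey data or cooked-meat concentrations.
foods = [
    (1.5, 40.0),   # e.g. a pan-fried meat: ng/g, g/day
    (0.8, 25.0),   # e.g. a grilled meat
]
body_weight_kg = 70.0
intake = sum(conc * grams for conc, grams in foods) / body_weight_kg
print(f"estimated intake = {intake:.2f} ng/kg/day")
```

In the paper the concentrations themselves are first predicted from cooking variables (internal temperature, weight loss, surface temperature, time) via the fitted regressions; the step above is the final aggregation over the 21 combinations.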

  2. Method to Estimate the Dissolved Air Content in Hydraulic Fluid

    Science.gov (United States)

    Hauser, Daniel M.

    2011-01-01

    In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen that is in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could be theoretically carried out at higher pressures and elevated
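The calculation chain can be sketched as follows: measured O2 partial pressure, dissolved O2 via a Henry's-law-type solubility, dissolved N2 from the assumed atmospheric-equilibrium ratio, then total dissolved air. All coefficients here are assumed placeholder values, not measured properties of any real hydraulic fluid:

```python
# Measured O2 partial pressure -> dissolved O2 via Henry's law ->
# dissolved N2 from the assumed atmospheric N2:O2 ratio -> total air.
p_O2_meas = 0.15                  # measured O2 partial pressure (atm), assumed
k_O2, k_N2 = 0.30, 0.15           # assumed solubilities (vol gas / vol fluid / atm)
p_O2_atm, p_N2_atm = 0.21, 0.79   # atmospheric composition

vol_O2 = k_O2 * p_O2_meas                            # dissolved O2 (vol/vol)
ratio_N2_O2 = (k_N2 * p_N2_atm) / (k_O2 * p_O2_atm)  # N2:O2 at atmospheric equilibrium
vol_N2 = vol_O2 * ratio_N2_O2                        # N2 assumed to follow that ratio
total_air = vol_O2 + vol_N2
print(f"dissolved air ~ {100 * total_air:.1f} % v/v")
```

The key modelling assumption, as in the abstract, is that nitrogen cannot be measured directly and is inferred from oxygen through the solubility ratio.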

  3. Testing an Alternative Method for Estimating the Length of Fungal Hyphae Using Photomicrography and Image Processing.

    Science.gov (United States)

    Shen, Qinhua; Kirschbaum, Miko U F; Hedley, Mike J; Camps Arbestain, Marta

    2016-01-01

    This study aimed to develop and test an unbiased and rapid methodology to estimate the length of external arbuscular mycorrhizal fungal (AMF) hyphae in soil. The traditional visual gridline intersection (VGI) method, which consists in a direct visual examination of the intersections of hyphae with gridlines on a microscope eyepiece after aqueous extraction, membrane-filtration, and staining (e.g., with trypan blue), was refined. For this, (i) images of the stained hyphae were taken by using a digital photomicrography technique to avoid the use of the microscope and the method was referred to as the "digital gridline intersection" (DGI) method; and (ii), the images taken in (i) were processed and the hyphal length was measured by using ImageJ software, referred to as the "photomicrography-ImageJ processing" (PIP) method. The DGI and PIP methods were tested using known grade lengths of possum fur. Then they were applied to measure the hyphal lengths in soils with contrasting phosphorus (P) fertility status. Linear regressions were obtained between the known lengths (Lknown) of possum fur and the values determined by using either the DGI (LDGI) (LDGI = 0.37 + 0.97 × Lknown, r² = 0.86) or PIP (LPIP) methods (LPIP = 0.33 + 1.01 × Lknown, r² = 0.98). There were no significant (P > 0.05) differences between the LDGI and LPIP values. While both methods provided accurate estimation (slope of regression being 1.0), the PIP method was more precise, as reflected by a higher value of r² and lower coefficients of variation. The average hyphal lengths (6.5-19.4 m g⁻¹) obtained by the use of these methods were in the range of those typically reported in the literature (3-30 m g⁻¹). Roots growing in P-deficient soil developed 2.5 times as many hyphae as roots growing in P-rich soil (17.4 vs 7.2 m g⁻¹). These tests confirmed that the use of digital photomicrography in conjunction with either the grid-line intersection principle or image processing is a suitable method for the

  4. A rapid, ensemble and free energy based method for engineering protein stabilities.

    Science.gov (United States)

    Naganathan, Athi N

    2013-05-02

    Engineering the conformational stabilities of proteins through mutations has immense potential in biotechnological applications. It is, however, an inherently challenging problem given the weak noncovalent nature of the stabilizing interactions. In this regard, we present here a robust and fast strategy to engineer protein stabilities through mutations involving charged residues using a structure-based statistical mechanical model that accounts for the ensemble nature of folding. We validate the method by predicting the absolute changes in stability for 138 experimental mutations from 16 different proteins and enzymes with a correlation of 0.65 and importantly with a success rate of 81%. Multiple point mutants are predicted with a higher success rate (90%) that is validated further by comparing mesophile-thermophile protein pairs. In parallel, we devise a methodology to rapidly engineer mutations in silico which we benchmark against experimental mutations of ubiquitin (correlation of 0.95) and check for its feasibility on a larger therapeutic protein DNase I. We expect the method to be of importance as a first and rapid step to screen for protein mutants with specific stability in the biotechnology industry, in the construction of stability maps at the residue level (i.e., hot spots), and as a robust tool to probe for mutations that enhance the stability of protein-based drugs.

  5. Method for estimating capacity and predicting remaining useful life of lithium-ion battery

    International Nuclear Information System (INIS)

    Hu, Chao; Jain, Gaurav; Tamirisa, Prabhakar; Gorka, Tom

    2014-01-01

    Highlights: • We develop an integrated method for the capacity estimation and RUL prediction. • A state projection scheme is derived for capacity estimation. • The Gauss–Hermite particle filter technique is used for the RUL prediction. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure Li-ion batteries in these devices operate reliably, it is important to be able to assess the capacity of Li-ion battery and predict the remaining useful life (RUL) throughout the whole life-time. This paper presents an integrated method for the capacity estimation and RUL prediction of Li-ion battery used in implantable medical devices. A state projection scheme from the author’s previous study is used for the capacity estimation. Then, based on the capacity estimates, the Gauss–Hermite particle filter technique is used to project the capacity fade to the end-of-service (EOS) value (or the failure limit) for the RUL prediction. Results of 10 years’ continuous cycling test on Li-ion prismatic cells in the lab suggest that the proposed method achieves good accuracy in the capacity estimation and captures the uncertainty in the RUL prediction. Post-explant weekly cycling data obtained from field cells with 4–7 implant years further verify the effectiveness of the proposed method in the capacity estimation
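A simplified stand-in for the projection step: a plain Monte Carlo particle cloud replaces the Gauss-Hermite particle filter, and the linear fade model, noise level and failure limit are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# (1) Noisy capacity estimates over cycle life (invented linear fade model).
cycles = np.arange(0, 1500, 50)
true_cap = 1.0 - 1.2e-4 * cycles                       # Ah
meas = true_cap + rng.normal(0.0, 0.005, cycles.size)  # noisy estimates

# (2) Fit the fade with its covariance, then propagate the uncertainty to
# the end-of-service limit with a cloud of parameter particles.
coef, cov = np.polyfit(cycles, meas, 1, cov=True)
particles = rng.multivariate_normal(coef, cov, size=5000)

eos = 0.80                                             # failure limit (Ah)
crossing = (eos - particles[:, 1]) / particles[:, 0]   # cycle where cap hits EOS
rul = crossing - cycles[-1]                            # remaining useful life
print(f"RUL ~ {np.median(rul):.0f} cycles "
      f"(90% band {np.percentile(rul, 5):.0f} to {np.percentile(rul, 95):.0f})")
```

The percentile band is the analogue of the prediction uncertainty the paper captures; the actual method tracks the fade state recursively with a Gauss-Hermite particle filter rather than a one-shot least-squares fit.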

  6. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
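    The steps described above — Fourier-transform the image, plot the log squared norm against the squared distance from the origin, fit a line, and read a Gaussian PSF off the slope — can be sketched with NumPy. The white-noise test image and PSF width below are illustrative assumptions (the paper uses micrographs and satellite images):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
scene = rng.standard_normal((n, n))    # white-noise scene: flat expected spectrum

# Blur with a Gaussian PSF of known width, applied in the Fourier domain
sigma_true = 2.0                       # PSF standard deviation in pixels
f = np.fft.fftfreq(n)
f2 = f[None, :] ** 2 + f[:, None] ** 2               # squared frequency radius
mtf_true = np.exp(-2 * np.pi ** 2 * sigma_true ** 2 * f2)
image = np.fft.ifft2(np.fft.fft2(scene) * mtf_true).real

# For a Gaussian PSF, log |F|^2 is linear in f2 with slope -4*pi^2*sigma^2
power = np.abs(np.fft.fft2(image)) ** 2
mask = (f2 > 0) & (f2 < 0.02)          # skip DC, stay where signal dominates
slope, _ = np.polyfit(f2[mask], np.log(power[mask]), 1)
sigma_est = np.sqrt(-slope / (4 * np.pi ** 2))

mtf_est = np.exp(-2 * np.pi ** 2 * sigma_est ** 2 * f2)  # recovered MTF
```

    Because the synthetic scene has a flat expected spectrum, the fitted slope recovers the PSF width directly; on real images the linear fit in the log-power plot plays the same role.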

  7. A projection and density estimation method for knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Adam Stanski

    Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
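    The 1d-decomposition framework itself is not spelled out in the abstract, but its core premise — that estimation carried out in 1d-space sidesteps the curse of dimensionality — can be illustrated with a plain one-dimensional Gaussian kernel density estimate. The sample and the Silverman bandwidth rule here are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 500)          # one 1d sample; a 1d-decomposition
                                       # would apply this per projection

# Gaussian KDE with Silverman's rule-of-thumb bandwidth
h = 1.06 * x.std() * x.size ** (-1 / 5)
grid = np.linspace(-4.0, 4.0, 801)
kernels = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
dens = kernels.sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))

area = dens.sum() * (grid[1] - grid[0])   # Riemann sum; should be close to 1
```

    With 500 points a 1d estimate like this is already well conditioned, whereas the same sample size would be hopelessly sparse in, say, ten dimensions — which is the gap the paper's 1d-decompositions exploit.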

  8. Estimation of oil reservoir thermal properties through temperature log data using inversion method

    International Nuclear Information System (INIS)

    Cheng, Wen-Long; Nian, Yong-Le; Li, Tong-Tong; Wang, Chang-Long

    2013-01-01

    Oil reservoir thermal properties not only play an important role in steam injection well heat transfer, but also are the basic parameters for evaluating the oil saturation in the reservoir. In this study, to estimate reservoir thermal properties, a novel heat and mass transfer model of a steam injection well was first established; this model fully analyzes wellbore-reservoir as well as wellbore-formation heat and mass transfer, and the results simulated by the model were quite consistent with the log data. This study then presents an effective inversion method for estimating the reservoir thermal properties from temperature log data. The method is based on the heat transfer model in steam injection wells and can be used to predict the thermal properties as a stochastic approximation method. The inversion method was applied to estimate the reservoir thermal properties of two steam injection wells; the relative error of thermal conductivity for the two wells was 2.9% and 6.5%, and the relative error of volumetric specific heat capacity was 6.7% and 7.0%, which demonstrates the feasibility of the proposed method for estimating the reservoir thermal properties. - Highlights: • An effective inversion method for predicting the oil reservoir thermal properties was presented. • A novel model for steam injection wells made a full study of wellbore-reservoir heat and mass transfer. • The wellbore temperature field and steam parameters can be simulated by the model efficiently. • Both reservoir and formation thermal properties could be estimated simultaneously by the proposed method. • The estimated steam temperature was quite consistent with the field data
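    The paper's wellbore model and stochastic-approximation scheme are not given in the abstract. Purely to illustrate the inversion idea — fitting a thermal parameter by matching a forward model to temperature log data — here is a toy least-squares sketch with a hypothetical steady radial-conduction model; all symbols and numbers are assumptions:

```python
import numpy as np

# Hypothetical forward model: T(r) = T_w - q/(2*pi*k) * ln(r/r_w)
k_true = 2.5       # thermal conductivity, W/(m*K)
q = 40.0           # heat flow per unit length, W/m
T_w, r_w = 320.0, 0.1                    # wellbore temperature (K), radius (m)

rng = np.random.default_rng(1)
r = np.linspace(0.2, 5.0, 30)            # measurement radii (m)
T_log = (T_w - q / (2 * np.pi * k_true) * np.log(r / r_w)
         + rng.normal(0.0, 0.05, r.size))  # noisy synthetic temperature log

# The toy model is linear in 1/k, so least squares inverts it in closed form
x = -q / (2 * np.pi) * np.log(r / r_w)   # T - T_w = x * (1/k)
inv_k = np.sum(x * (T_log - T_w)) / np.sum(x ** 2)
k_est = 1.0 / inv_k
```

    The actual method iterates a nonlinear wellbore-reservoir model under a stochastic approximation; the closed-form fit above stands in only because the toy model happens to be linear in 1/k.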

  9. COMPARATIVE EVALUATION OF CONVENTIONAL VERSUS RAPID METHODS FOR AMPLIFIABLE GENOMIC DNA ISOLATION OF CULTURED Azospirillum sp. JG3

    Directory of Open Access Journals (Sweden)

    Stalis Norma Ethica

    2013-12-01

    Full Text Available As an initial attempt to reveal genetic information of the Azospirillum sp. JG3 strain, which is still lacking despite the strain's ability to produce valued enzymes, two conventional methods (lysis-enzyme and column-kit) and two rapid methods (thermal disruption and intact colony) were evaluated. The aim is to determine the most practical method for obtaining a high-grade PCR product using degenerate primers as part of routine protocols for studying the molecular genetics of Azospirillal bacteria. The evaluation includes assessment of the electrophoresis gel visualization, pellet appearance, preparation time, and PCR result of the genomic DNA extracted by each method. Our results confirmed that the conventional methods were superior to the rapid methods in generating genomic DNA isolates visible on an electrophoresis gel. However, a modification made to the previously developed DNA isolation protocol gave the simplest and most rapid method of all those used in this study for extracting PCR-amplifiable DNA of Azospirillum sp. JG3. Intact bacterial cells (intact colony) loaded on an electrophoresis gel could present a genomic DNA band, but could not be amplified by PCR without thermal treatment. It can also be inferred from our results that the 3 to 5-min heating step in dH2O is critical for the pre-treatment of colony PCR of Azospirillal cells.

  10. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field, and the estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  11. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    Science.gov (United States)

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  12. Developing rapid methods for analyzing upland riparian functions and values.

    Science.gov (United States)

    Hruby, Thomas

    2009-06-01

    Regulators protecting riparian areas need to understand the integrity, health, beneficial uses, functions, and values of this resource. Up to now most methods providing information about riparian areas are based on analyzing condition or integrity. These methods, however, provide little information about functions and values. Different methods are needed that specifically address this aspect of riparian areas. In addition to information on functions and values, regulators have very specific needs that include: an analysis at the site scale, low cost, usability, and inclusion of policy interpretations. To meet these needs a rapid method has been developed that uses a multi-criteria decision matrix to categorize riparian areas in Washington State, USA. Indicators are used to identify the potential of the site to provide a function, the potential of the landscape to support the function, and the value the function provides to society. To meet legal needs fixed boundaries for assessment units are established based on geomorphology, the distance from "Ordinary High Water Mark" and different categories of land uses. Assessment units are first classified based on ecoregions, geomorphic characteristics, and land uses. This simplifies the data that need to be collected at a site, but it requires developing and calibrating a separate model for each "class." The approach to developing methods is adaptable to other locations as its basic structure is not dependent on local conditions.

  13. Estimation of Cross-Lingual News Similarities Using Text-Mining Methods

    Directory of Open Access Journals (Sweden)

    Zhouhao Wang

    2018-01-01

    Full Text Available In this research, two estimation algorithms for extracting cross-lingual news pairs from financial news articles based on machine learning have been proposed. Every second, innumerable text data, including all kinds of news, reports, messages, reviews, comments, and tweets, are generated on the Internet, and these are written not only in English but also in other languages such as Chinese, Japanese, French, etc. By taking advantage of the multi-lingual text resources provided by Thomson Reuters News, we developed two estimation algorithms for extracting cross-lingual news pairs from multilingual text resources. In our first method, we propose a novel structure that uses word information and machine learning effectively in this task. Simultaneously, we developed a bidirectional Long Short-Term Memory (LSTM) based method to calculate cross-lingual semantic text similarity for long text and short text, respectively. Thus, when an important news article is published, users can read similar news articles written in their native language using our method.

  14. Cumulant-Based Coherent Signal Subspace Method for Bearing and Range Estimation

    Directory of Open Access Journals (Sweden)

    Bourennane Salah

    2007-01-01

    Full Text Available A new method for simultaneous range and bearing estimation for buried objects in the presence of an unknown Gaussian noise is proposed. This method uses the MUSIC algorithm with noise subspace estimated by using the slice fourth-order cumulant matrix of the received data. The higher-order statistics aim at the removal of the additive unknown Gaussian noise. The bilinear focusing operator is used to decorrelate the received signals and to estimate the coherent signal subspace. A new source steering vector is proposed including the acoustic scattering model at each sensor. Range and bearing of the objects at each sensor are expressed as a function of those at the first sensor. This leads to the improvement of object localization anywhere, in the near-field or in the far-field zone of the sensor array. Finally, the performances of the proposed method are validated on data recorded during experiments in a water tank.

  15. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has been revisited. These signals are damped sinusoidal signals and have a deleterious effect on the system. One software solution to reject the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), where the procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and updating of the reference signal to reject/minimize the vibration. In the first step a very important problem is the estimation method. A very accurate and fast (below 10 ms) estimation method for these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. There are several parameters that affect the accuracy of the obtained results, e.g. CiR - number of signal periods in a measurement window, N - number of samples in the FFT procedure, H - time window order, SNR, b - number of ADC bits, γ - damping ratio of the tested signal. Systematic errors increase when N, CiR, H decrease and when γ increases. The value of the systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
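    The MSD-window interpolation formulas live in the cited publications, not in this abstract; a generic spectrum-interpolation sketch (Hann window, zero-padded FFT, quadratic interpolation of the log-magnitude peak) shows the basic sub-bin frequency estimation step for a damped sinusoid. The sampling rate, frequency, and damping below are assumed:

```python
import numpy as np

fs, n = 1000.0, 2048                   # sample rate (Hz), record length
f0, damp = 123.4, 0.5                  # true frequency (Hz), damping rate (1/s)
t = np.arange(n) / fs
x = np.exp(-damp * t) * np.sin(2 * np.pi * f0 * t)   # damped vibration signal

# Windowed, zero-padded spectrum; interpolate the peak for sub-bin accuracy
pad = 8 * n
X = np.abs(np.fft.rfft(x * np.hanning(n), pad))
k = int(np.argmax(X))
a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)   # quadratic-peak offset in bins
f_est = (k + delta) * fs / pad
```

    The same interpolated peak also carries amplitude (height) and damping (width) information, which is what a full three-parameter estimator such as the one used in AVC extracts.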

  16. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

    Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) directions-of-arrival (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shape array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which describe these relations using the propagator technique. The central DOA estimates are then obtained by utilizing the primary diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to the multisource scenario where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has significantly reduced computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, our approach is also a robust estimator which does not depend on the angular distribution shape of the CD sources.

  17. A Qualitative Method to Estimate HSI Display Complexity

    Energy Technology Data Exchange (ETDEWEB)

    Hugo, Jacques; Gertman, David [Idaho National Laboratory, Idaho (United States)

    2013-04-15

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches that address display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  18. A rapid method of reprocessing for electronic microscopy of cut histological in paraffin

    International Nuclear Information System (INIS)

    Hernandez Chavarri, F.; Vargas Montero, M.; Rivera, P.; Carranza, A.

    2000-01-01

    A simple and rapid method is described for re-processing light-microscopy paraffin sections so that they can be observed under transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The paraffin-embedded tissue is sectioned and deparaffinized in toluene, then exposed to osmium vapor under microwave irradiation using a domestic microwave oven. The tissues are embedded in epoxy resin, polymerized and ultrathin-sectioned. The method requires a relatively short time (about 30 minutes for TEM and 15 for SEM), and produces a reasonable quality of ultrastructure for diagnostic purposes. (Author) [es

  19. Identification of new biomarker of radiation exposure for establishing rapid, simplified biodosimetric method

    International Nuclear Information System (INIS)

    Iizuka, Daisuke; Kawai, Hidehiko; Kamiya, Kenji; Suzuki, Fumio; Izumi, Shunsuke

    2014-01-01

    To date, counting chromosome aberrations is the most accurate method for evaluating radiation doses. However, this method is time consuming and requires skill in evaluating chromosome aberrations. It would be difficult to apply this method to the majority of people who are expected to be exposed to ionizing radiation. From this viewpoint, the establishment of rapid, simplified biodosimetric methods for triage is anticipated. Owing to the development of mass spectrometry methods and the identification of new molecules such as microRNA (miRNA), it is conceivable that new molecular biomarkers of radiation exposure can be identified using newly developed mass spectrometry. In this review article, part of our results, including changes in proteins (including changes in glycosylation), peptides, metabolites and miRNA after radiation exposure, will be shown. (author)

  20. Rapid analysis of fertilizers by the direct-reading thermometric method.

    Science.gov (United States)

    Sajó, I; Sipos, B

    1972-05-01

    The authors have developed rapid methods for the determination of the main components of fertilizers, namely phosphate, potassium and nitrogen fixed in various forms. In the absence of magnesium ions phosphate is precipitated with magnesia mixture; in the presence of magnesium ions ammonium phosphomolybdate is precipitated and the excess of molybdate is reacted with hydrogen peroxide. Potassium is determined by precipitation with silico-fluoride. For nitrogen fixed as ammonium salts the ammonium ions are condensed in a basic solution with formalin to hexamethylenetetramine; for nitrogen fixed as carbamide the latter is decomposed with sodium nitrite; for nitrogen fixed as nitrate the latter is reduced with titanium(III). In each case the temperature change of the test solution is measured. Practically all essential components of fertilizers may be determined by direct-reading thermometry; with this method and special apparatus the time of analysis is reduced to at most about 15 min for any determination.